Channeling My Inner Groundhog: Predictions for 2014

More [Data, Cloud, Cyber], Faster and Secured

2013 was a big year for the cloud, mobile technology, big data and social media. What’s in store for the new year? I’ve identified six information technology trends that are poised to make a big difference in your IT operations and strategies.

Here’s what I expect to see in 2014:



Special thanks to the LEF for their insights in developing these predictions.

1. The “Outside-In” Enterprise Rises

Increasingly, innovation, information and value will all come from outside your organization’s own four walls. Until recently, these were created internally by organizations’ design, production, sales and marketing, and support teams. But the days of self-containment are over. Many of today’s top IT technologies and techniques — including cloud, everything as a service, post-PC mobility, the consumerization of IT, social media, crowdsourcing and community content — are happening outside the organization. In 2014, this trend will continue and accelerate.

For CIOs, this “outside-in” shift will require a new perspective. CIOs will need to re-architect their organizations’ internal networks, making them more like the Internet, able to fully exploit the online communities that have emerged in the outside world. And these changes will need to be done quickly. I expect this “outside-in” transformation to gain speed rapidly.

2. BYOD Shifts to BYOT

The well-documented BYOD (short for “bring your own device”) trend is just the tip of a much larger iceberg. To be sure, many employees want to use their personal mobile devices at work. But they also want to use their own apps, networks and all manner of Web applications and tools. To reflect this change, we need to shift the conversation to BYOT, short for “bring your own technology.”

Today, employees expect to have massive amounts of information at their fingertips, empowering them to make better, quicker decisions. And demand shows no signs of saturation: as The New York Times recently reported, U.S. mobile Internet traffic nearly doubled over the past year, to roughly 1.2 GB per user per month. Similarly, customers now fully expect your company’s website to offer Google-quality search, the ability to mash up information, and high-quality video.

Are you ready? It’s my observation that most organizations are not. For them, this level of application capability is still novel, even hard to imagine. The result is a large and growing gap between the expectations of customers and employees on one hand, and the capabilities of enterprise IT on the other. It’s a gap that CIOs will be expected to close.

3. The Multi-Cloud Leads

2014 will be a big year for multiple clouds. You’ll have public clouds, private clouds, and hybrid clouds. And you’ll have more than one of each. Some of these clouds will be built by your internal IT department, while others will be sourced from external third parties.

This new, multi-cloud landscape will deliver new efficiencies for organizations. But it will also create new challenges for CIOs: How will they effectively manage information across multiple clouds? Who will help them? And how will they avoid losing control?

Cloud brokering will be the key: a cloud management platform that lets CIOs take control of their multiple cloud environments. I also expect to see the emergence of enterprise app marketplaces and stores, which will help IT managers deploy workloads into various clouds with great agility and little friction.

4. Big Data Gets Fast

You’ve already heard plenty about big data. But 2014 will be the year of big data that’s also fast.

While big data offers the promise of better business decisions, I believe that promise remains largely unfulfilled so far. Mainly, that’s because the process of transforming big data into actionable information is still too slow. Today, business happens in real time. Think how quickly you respond to tweets and text messages. Yet most big data analytics are still running on batch time.

Deriving insights from our huge data pools is a good start. Next, CIOs will need to deliver big data’s insights in real time (or near real time). They will also need to provide these insights to business users in the form of new and diverse applications. By enabling employees and customers to make more informed decisions faster, big fast data will finally lead to big business successes.

5. The “Internet of Things” Arrives, Finally

Computers process information; that’s a given. But in 2014, thanks to the rapid emergence of Internet-connected sensors and things, computers will also monitor and control physical systems and devices. What’s been called the Internet of Things (IoT) is already transforming automobiles, personal healthcare devices, TVs, thermostats and appliances. Now, I believe, countless other goods will be intelligently connected, too.

As a result of the exploding Internet of Things, traditional industries like manufacturing are beginning to adopt technology like never before. They are Internet-connecting all of their assets in ways that let them streamline their own operations. They are even bringing commodity devices into their landscapes to provide more value at a lower cost point and more agility in their own development processes.

6. Governments Emerge as IT Leaders

Government agencies often receive a bad rap as IT backwaters. In 2014, that’s going to change. For the first time in decades, governments will emerge as IT leaders.

Openness is the key. Government agencies are quickly moving to open systems, open innovation, open software, even entirely open IT ecosystems. All this openness is creating new, massive efficiencies. Public agencies can now interact with their citizens as never before. Citizens, in turn, can now contribute ideas, insights, even code to their governments. The level of new value is potentially huge.

Behind this change is a shift from excellent or perfect technology to IT that is “good enough.” For public sector CIOs, this shift will be massive. And for those in the private sector, this could well be the new model — one they’ll have no choice but to follow.

ServiceMesh + CSC = [Agility, Governance, Cloud, Enterprise, Transformation]


Today CSC announced the acquisition of ServiceMesh™.

ServiceMesh is an enterprise Cloud Broker / Cloud Management Platform (CMP) recognized by many analysts as the leading independent provider.  CSC is a Gartner Magic Quadrant Enterprise IaaS provider and technology services firm, helping enterprises transform their delivery of critical business applications.  Together, CSC and ServiceMesh will offer enterprises increased agility, flexibility and the opportunity to fully leverage their multi-vendor cloud landscape.

What we continue to hear from our customers is that having a multitude of cloud providers is a fact in today’s landscape.  Whether it’s a cloud that IT has built, an on-premises BizCloud™, or services from Amazon, Microsoft, VMware and others, IT continues to struggle to find a consistent model to provision, operate, monitor, manage and optimize its catalog of workloads.  Enter ServiceMesh.


The ServiceMesh Agility Platform™ provides the ability to catalog, policy-decorate, provision and manage complex applications between and across clouds: true hybrid execution.  The solution gives enterprises new flexibility to support full Cloud Application Lifecycle Management (CALM): develop on cheap-and-deep capacity, test on something closer to production, then promote into an enterprise-approved production target.  At each step, a simple policy change drives the appropriate provisioning steps and ensures that business controls are rigorously enforced, in effect bringing together IT operations and application development in the DevOps style.

Both companies believe the synergy is high: CSC provides the modernization, transformation and migration services and the IT operational management, whilst ServiceMesh allows applications to be cataloged into a storefront and deployed, under policy direction, into an open choice of cloud targets.

Both companies are on a common mission, as an advisor to IT and a facilitator to developers, to support the natural transformation that DevOps, Cloud and policy-driven cloud automation provide to the enterprise.

CSC looks forward to helping our enterprise customers on their transformational journey to the Multi-Vendor, Multi-Visor, Hybrid Cloud landscape.




The Apps rEVOLUTION… drastic times call for drastic measures

A really cogent position piece has been developed by a number of technical leaders within CSC.  Led by Dr. Erika Olimpiew, with Nabil Fanaian, Henry “Hank” Liang and a cast of over 20 others, the piece positions the new requirements around mobility, architecture, delivery, economics and the other key forces leading developers to “storm the gates.”

Below, the prologue…

Applications are the way we put information into action, and shifts in the what, where, how and who of doing this have manifestly shifted the “apps” landscape.

We have seen the movement from general to specific application scopes. In effect, the context of the individual is being brought into the presentation, transaction and securing of information, which is both coming from and flowing through an ever-extending set of channels. From appliances and vehicles to smartphones and TVs, the acceleration of network-attached devices is forcing easy-to-use, easy-to-program, easy-to-integrate strategies, with the Internet taking the dominant position as the network of choice.

Consumerization across this expanding set of endpoint technologies, combined with cloud computing on the Internet as the default service provider, has made technologies like HTML5, JavaScript and HTTP the starting point for consumers and enterprises alike. Employees become “customers” of IT, and IT becomes an information service provider in a landscape where every consumer has the control to tune his or her experience to maximize productivity.

As everyone jumps to “mobile first” as a mantra, and HTTP as the connection, enterprises must shift their service delivery strategies to wrap and extend traditional applications and, moreover, integrate with external services that have become ubiquitous in the landscape. From news feeds to social media, from Google to enterprise knowledge repositories, and from Salesforce to productivity applications, there are Application Programming Interfaces (APIs) for everything. The new disciplines of API management and promotion are becoming critical strategic discussions for businesses.

Another critical discussion is the experience. In the past, the user experience (UX) came very late in the design process, often being “designed” by programmers or engineers. Today, we are seeing a new emphasis on the experience; in fact, the UX is designed, mocked up and built first. The UX designer is a key member of the agile development team. Likewise, the API developer, a somewhat new role, is working to provide a set of programmer interfaces that maximize a correct and complete interface to underlying services and data. These APIs have moved from chatty sequences of messages to a single document that is transacted. These APIs must support constant change, with backward compatibility, to maximize value for the broad new ecosystem of developers.

But who is the developer? With new frameworks and tools, not to mention web technologies, it’s everyone. Do-it-yourself (DIY) apps are starting to flourish. Instead of one fat-client application with hundreds of tabs serving a broad set of jobs and roles, we are seeing function-specific apps created by business people and consumers (in addition to IT). These apps use data not just from one system, but mashed up from news feeds, maps and other services to enable one to make more informed decisions. There are over 1 million apps in the Apple App Store, including over 1,500 calendar apps alone. This creates choice for consumers, and since employees are also consumers, there is a new expectation for IT to become more like the Internet: offering choice, openness, flexibility and speed.

This revolution in technologies and the open passing of information through communities is accelerating a new revolution in applications, explored in the Apps rEvolution report. We are all consumers. We value choice, and we expect enterprise technologies to be as good as their consumer counterparts. With the advance of Internet-connected devices, we expect to constantly create, interact with, and integrate information from everywhere. We are just beginning to see the value that the apps revolution is ushering in as our enterprises continue to embrace the consumerization of IT.

Apps rEVOLUTION report


CSC Distinguished Engineering Program: Darwinism & Talent Evolution

As the Systems Integration marketplace continues to evolve, so must CSC.  The CSC family tree includes a large number of scientific and technical breakthroughs, and we CSCers need to continue to innovate in both business and technical domains.

In talking with our customers, we hear broad recognition of the value of technology in their continued efforts to develop and retain customers, grow revenue, drive agility, improve safety and more.  CSC must continue to adapt as this change in our environment continues to force us to evolve.

The technical community at CSC is broad and diverse, and today we begin to harvest its depth.  A Distinguished Engineer is an industry-recognized technical expert who has demonstrated significant impact.  He or she is a paragon of technical excellence and ethics, and a role model for other technical professionals across CSC and our broad set of industry cohorts.

At CSC, Distinguished Engineers are proven thought leaders and influencers in their areas of expertise, based on their distinguishing engineered solutions, inventions, patents, publications, presentations, awards and the like. These professionals are leaders, trusted advisors and mentors. They make a direct impact on the business, both individually and in teams, and can effectively communicate complex technical concepts to client and CSC executives, and in industry forums and technical communities both internal and external.

We are launching the program today and will select our first class in early 2014.  We will use our Distinguished Engineers to raise the technical career ladder for Individual Contributors (ICs), and will enable these DEs to collaborate broadly across the company’s diverse portfolio and R&D activities through “non-billable” funding.


So let this announcement serve as a tribute to the technical past of CSC, but also as a refocusing of our technical future, as we begin to network, collaborate and innovate to build new value for our customers, our employees and our shareholders.

For CSCers, more information is available here.

As an epilogue: I was thinking about both the transformation of our technical ecosystem and CSC when my biology background led me back to Darwinian evolution.  Looking at Ernst Mayr’s summarization of Darwin’s theory [taken from Wikipedia], I believe that we as technologists, engineers, scientists and architects can see the natural parallels:

  • Every species is fertile enough that if all offspring survived to reproduce the population would grow (fact).
  • Despite periodic fluctuations, populations remain roughly the same size (fact) [need to graduate more technologists and engineers though].
  • Resources such as food [money] are limited and are relatively stable over time (fact).
  • A struggle for survival ensues (inference).
  • Individuals in a population vary significantly from one another (fact).
  • Much of this variation is inheritable (fact). [diversity is an advantage to a global SI]
  • Individuals less suited to the environment are less likely to survive and less likely to reproduce; individuals more suited to the environment are more likely to survive and more likely to reproduce and leave their inheritable traits to future generations, which produces the process of natural selection (inference). [CSC’s cloud now #2 to AWS, and our Big Data and Cyber businesses are strong!]
  • This slowly effected process results in populations changing to adapt to their environments, and ultimately, these variations accumulate over time to form new species (inference). [We must learn from others, inside and outside, and adapt to meet environment/market needs.]

Please join me in the celebration and collaboration of deep technical talent!

Infochimps joins the CSC troop


Today, I am finally able to talk about a really cool company that we’ve been working with to bolster CSC “Big Data” solutions in the marketplace.  One of the first valuable use cases is to enable big-fast-data clouds to be automagically built; in effect, to build a data lake haystack “as a Service”:

“without a haystack, there is no needle to find”…

The goal: enable our customers to quickly self-provision, attach, stream, evaluate, store and analyze their emerging big data workloads.

The pattern: store the data in the cloud [via Apache Hadoop], develop insights from the stored data, and make those insights actionable by promoting them into the streaming ingest services [via Storm / Kafka].
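
A minimal command-line sketch of that loop, purely illustrative (the paths, topic name and broker address are my own assumptions, not Infochimps tooling):

# Batch path: land raw events in the Hadoop data lake
hadoop fs -mkdir /data/raw/events
hadoop fs -put events-2013-07-01.log /data/raw/events/
# Streaming path: replay the same events into a Kafka topic,
# where a Storm topology can consume them in near real time
kafka-console-producer.sh --broker-list kafka01:9092 --topic raw-events < events-2013-07-01.log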


InfoChimps + CSC = Big [Fast] Data Made Better

Welcome, Infochimps: Jim, Joe, Flip, Dhruv, Adam and the rest of the Austin team.  I have to say that this is an amazingly cool company, one that I believe generates massive synergies with CSC’s vertical industry knowledge, existing Big Data offers, and other key R&D initiatives we have going on, including our Cloud 2.0 services and Internet of Things work.  They bring some Chimp magic, some “open” magic sauce, including Wukong (an open Ruby framework for Hadoop streaming), Ironfan (an open orchestration project for provisioning big data services on clouds) and Configliere (an open configuration project that makes project/module configuration more secure and more functional).  Their proven ability to stand up big data clusters on clouds and manage them with high availability establishes a key link in the overall cloud and big data lineup for CSC.

I do love this from their site: “Big Data is more than Hadoop.”  There’s also a note on the transition from CEO Jim Kaskade.

This is going to be fun!

AT&T and CSC join forces in “Journey to the Networked-Enterprise Cloud”

CSC & AT&T Announce Global Alliance

CSC has been a recognized leader in cloud computing for the past two years, with a key focus on enterprise clouds [Gartner 2011, Gartner 2012, Gartner 2013???].  CSC’s focus on the enterprise has enabled us to make great strides in helping our customers along the “journey to the cloud.”  As testimony to the market’s growing maturity, over the past eight months we’ve seen a change in the F100’s approach to cloud computing.  With ever-larger cloud deals and a focus on agility, cost and an “outside-in” approach, I think the enterprise [hybrid] cloud has become a major force in global business.  There is, in fact, a massive and growing enterprise cloud market, and CSC seems to have the application modernization processes, migration templates and the right high-availability platform for our customers.

Some would bask in the limelight, but not CSC.  We are looking to take our partners and customers to Cloud 2.0: where the enterprise SLA meets software-defined scale-out infrastructure across a global IP fabric.

This is an even bigger journey: it includes not just “in place” modernization but new architectures designed for “scale out,” mobility, social and big data, the new technologies driving the enterprise.  To win in this space, we thought it vital to have not just a great WAN but a great inter-network, and to have enough infrastructure that elastic volume requirements, whether for peak loads or site failures, are normalized across a broad and expanding computing landscape.  CSC believes that having a great intranet, but also an amazing Internet and mobile network, will enable the broadest styles of cloud applications to take hold.  With CSC’s focus on building modern applications that are policy-controlled, continuously monitored and managed with an agile lifecycle to match the changing pace of business, we believe that CSC and AT&T can together define a perfect platform for big and small enterprises alike.

As part of this deal, we are excited that AT&T values our application modernization services as much as our customers do.  We will be helping both AT&T and its customers bring their enterprise applications to the cloud.  And on the value of the network: CSC will work closely with AT&T to help design the best converged, software-controlled network.  Spanning the globe, AT&T already earns top honors in WAN, Internet and mobile, but as converged networking takes hold, with MPLS/IP everywhere, new opportunities for customer privacy/security, workload placement, proximity and even availability become possible.

To sum it up, this deal is about CSC’s enterprise capabilities in applications, cloud platforms and infrastructure services complementing AT&T’s cloud infrastructure, secure global network and advanced network services, creating a unique opportunity for both leaders to play to their respective strengths in better harmony with each other.  This harmonization will mean less friction for enterprise applications (old and new); a cloud platform that gets better and faster and scales with the market leaders; and a network that, by combining scale, creates new opportunities for increased throughput, higher availability and improved service at the most competitive prices.

Razor, Puppet, Hadoop and the CSC Cloud (pt2)

Using Razor to Boot the CSC Cloud (Part 2):

Step 3: Configuring Razor

The Image Catalog

Running razor post-installation should show a single image, the Razor microkernel:

dhushon@razor:/opt/razor/conf$ sudo razor image get all
UUID =>  390R2duPsjqKzwmjCCP0LJ
Type =>  MicroKernel Image
ISO Filename =>  razor-microkernel-latest.iso
Path =>  /opt/razor/image/mk/390R2duPsjqKzwmjCCP0LJ
Status =>  Valid
Version =>  0.12.0
Built Time =>  Wed Feb 20 17:10:45 -0500 2013

We now download the CentOS 6.4 x86_64 minimal image and get it onto our Razor host.
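
For example (vault.centos.org hosts retired CentOS releases; treat the exact URL as an assumption and substitute a mirror near you):

# Fetch the minimal install ISO directly on the Razor host
dhushon@razor:~$ wget http://vault.centos.org/6.4/isos/x86_64/CentOS-6.4-x86_64-minimal.iso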

Then we add it to the Razor catalog:

dhushon@razor:~$ sudo razor image add -t os -p CentOS-6.4-x86_64-minimal.iso -n CentOS6.4x64-minimal -v 6.4x64

Now the new image appears in the catalog:

dhushon@razor:/opt/razor/conf$ sudo razor image get all
UUID =>  2101TWCg8hOgDJvqoKl6W1
Type =>  OS Install
ISO Filename =>  CentOS-6.4-x86_64-minimal.iso
Path =>  /opt/razor/image/os/2101TWCg8hOgDJvqoKl6W1
Status =>  Valid
OS Name =>  CentOS6.4x64-minimal
OS Version =>  6.4x64
UUID =>  390R2duPsjqKzwmjCCP0LJ
Type =>  MicroKernel Image
ISO Filename =>  razor-microkernel-latest.iso
Path =>  /opt/razor/image/mk/390R2duPsjqKzwmjCCP0LJ
Status =>  Valid
Version =>  0.12.0
Built Time =>  Wed Feb 20 17:10:45 -0500 2013

Building a [Deployment] Model

Now, we must build a model (an association of a behavior with an image).  First, let’s see what templates we have:

dhushon@razor:/opt/razor/conf$ sudo razor model get template
Model Templates:
Template Name           Description
ubuntu_precise          Ubuntu Precise Model
redhat_6                RedHat 6 Model
xenserver_boston        Citrix XenServer 6.0 (boston) Deployment
ubuntu_oneiric          Ubuntu Oneiric Model
oraclelinux_6           Oracle Linux 6 Model
opensuse_12             OpenSuSE Suse 12 Model
debian_wheezy           Debian Wheezy Model
ubuntu_precise_ip_pool  Ubuntu Precise Model (IP Pool)
vmware_esxi_5           VMware ESXi 5 Deployment
sles_11                 SLES 11 Model
xenserver_tampa         Citrix XenServer 6.1 (tampa) Deployment
centos_6                CentOS 6 Model

For this run, we want to go ahead and associate the centos_6 template with our ISO. You will see, in the example below, that the model requires some configuration information in order to complete: things like hostname prefix, domain name and root password.  Nothing big, but it supports kickstart/anaconda-style installation customization without having to hand-edit a kickstart file.

dhushon@razor:/opt/razor/conf$ sudo razor model add --template centos_6 --label install_centos6.4x64 --image-uuid 2101TWCg8hOgDJvqoKl6W1
--- Building Model (centos_6):
Please enter node hostname prefix (will append node number) (example: node)
default: node
(QUIT to cancel)
Please enter root password (> 8 characters) (example: P@ssword!)
default: test1234
(QUIT to cancel)
Please enter local domain name (will be used in /etc/hosts file) (example:
default: localdomain
(QUIT to cancel)
Model created
Label =>  install_centos6.4x64
Template =>  linux_deploy
Description =>  CentOS 6 Model
UUID =>  2SLXLNkCw444imPJLwrdNR
Image UUID =>  2101TWCg8hOgDJvqoKl6W1

Step 4: Booting Nodes

Build a vApp for the new project

To make this set of steps a little easier, I will build just two nodes: a small-memory JobTracker and a large-memory NameNode.  Quite a few additional capabilities are required to build a successful and available cluster; we’ll target those in the next set of instructions.

Firstly, this lab is running vCD, so the screens you’ll see in my initial screenshots may seem familiar.


vCD Login Screen via a Secure Network Tunnel

There are two initial vApps: the first is Razor, which contains the Razor, Puppet, dnsmasq and other shared services; the second is the new vApp defined to represent the emergent Hadoop cluster.


The 2 vApps are defined


The Hadoop NameNode and JobTracker are added as VMs in the vDC

Customize Concrete Instances

We must now create two different VM templates: a big-memory one for the NameNode and a small-memory VM for the JobTracker and other shared services. Caveat: to save some resources these nodes are under-provisioned, but the different memory configurations allow us to treat these nodes differently from a bootstrap and Puppet perspective downstream.


Note the 8GB memory footprint for the namenode template

Power On

If everything works as planned, you should see a node show up, something like this:

dhushon@razor:/opt/razor/conf$ sudo razor node get
Discovered Nodes
UUID                 Last Checkin  Status                           Tags
5pWaIemUq6n4T3lnXl18Ph  0 sec         A [memsize_8GiB,nics_3,vmware_vm,IntelCorporation,cpus_2]

Here we see that the first node is now registered in Razor and is being held in a wait state (Status “A”).  Once we apply a policy and update the Razor state machine, we’ll see this node get to work.

Step 5: Razor Administration

Now that the image and model are built, the nodes are enabled (my nodes are virtual and part of a CSC vDC), and the node is profiled via the iPXE/microkernel procedure, we’re ready to assign useful tags to the node using a tag matcher, and a policy to associate the model with a tagged node.

Setting up a Tag

Starting out, we see that we have no tags:

dhushon@razor:/opt/razor/conf$ sudo razor tag
Tag Rules
< none >

First we add a new tag. In our case we want to create our Hadoop NameNodes, so we need a “big memory” variety:

dhushon@razor:/opt/razor/conf$ sudo razor tag add --name BigMem --tag bigmem
Name =>  BigMem
Tags =>  bigmem
UUID =>  VwPf7QuDMCow4nfWJpt6v
Matcher =>  <none>

And to provide selection we need a tag matcher: basically a predicate that differentiates classes of nodes.  Being in a vDC, this allows us to match capability and cost to reduce waste.

dhushon@razor:/opt/razor/conf$ sudo razor tag VwPf7QuDMCow4nfWJpt6v matcher add --key mk_hw_mem_size --compare 'equal' --value 8GiB
Tag Rule:
Key =>  mk_hw_mem_size
Compare =>  equal
Value =>  8GiB
Inverse =>  false
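
The JobTracker policy that appears later in this walkthrough needs a parallel small-memory tag. The same two steps, sketched (the SmallMem name, the 4GiB value and the placeholder UUID are assumptions; use the UUID your own tag add returns):

dhushon@razor:/opt/razor/conf$ sudo razor tag add --name SmallMem --tag smallmem
dhushon@razor:/opt/razor/conf$ sudo razor tag <tag uuid> matcher add --key mk_hw_mem_size --compare 'equal' --value 4GiB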

If you were wondering, the “--key” gives you a ton of selection capability, and the keys can simply be found in the node’s attributes (really amazing in-memory microkernel work):

dhushon@razor:/opt/razor/conf$ sudo razor node 5pWaIemUq6n4T3lnXl18Ph get --field attributes
Node Attributes:
Name                      Value
architecture              i386
boardmanufacturer         Intel Corporation
boardserialnumber         None
hardwareisa               unknown
hardwaremodel             i686
hostname                  mk0050561C01F6
interfaces                dummy0,eth0,eth1,eth2,lo
is_virtual                true
macaddress                3E:01:BB:7E:8E:07
macaddress_dummy0         3E:01:BB:7E:8E:07
macaddress_eth0           00:50:56:1C:01:F6
macaddress_eth1           00:50:56:1C:01:F7
macaddress_eth2           00:50:56:1C:01:FA
manufacturer              VMware, Inc.
memorysize                2.96 GB
memorytotal               2.96 GB
mk_hw_bus_description     Motherboard
mk_hw_bus_physical_id     0
mk_hw_bus_serial          None
mk_hw_bus_vendor          Intel Corporation
mk_hw_bus_version         None
mk_hw_cpu0_bus_info       cpu@0
mk_hw_cpu0_capacity       4230MHz
mk_hw_cpu0_description    CPU
mk_hw_cpu0_physical_id    4
mk_hw_cpu0_serial         0002-06C2-0000-0000-0000-0000
mk_hw_cpu0_size           3333MHz
mk_hw_cpu0_slot           CPU socket #0
mk_hw_cpu0_vendor         Intel Corp.
mk_hw_cpu0_version        6.12.2
mk_hw_cpu0_width          64 bits
mk_hw_cpu1_bus_info       cpu@1
mk_hw_cpu1_capacity       4230MHz
mk_hw_cpu1_description    CPU
mk_hw_cpu1_physical_id    5
mk_hw_cpu1_product        Pentium Pro
mk_hw_cpu1_serial         0002-06C2-0000-0000-0000-0000
mk_hw_cpu1_size           3333MHz
mk_hw_cpu1_slot           CPU socket #1
mk_hw_cpu1_vendor         GenuineIntel
mk_hw_cpu1_version        6.12.2
mk_hw_cpu_count           2
mk_hw_disk0_bus_info      scsi@2:0.0.0
mk_hw_disk0_description   SCSI Disk
mk_hw_disk0_logical_name  /dev/sda
mk_hw_disk0_physical_id   0.0.0
mk_hw_disk0_size          24GiB (25GB)
mk_hw_disk_count          1
mk_hw_fw_date             07/12/2011
mk_hw_fw_description      BIOS
mk_hw_fw_physical_id      0
mk_hw_fw_size             99KiB
mk_hw_fw_vendor           Phoenix Technologies LTD
mk_hw_fw_version          6.00
mk_hw_lscpu_Architecture  i686
mk_hw_lscpu_BogoMIPS      6652.08
mk_hw_lscpu_Byte_Order    Little Endian
mk_hw_lscpu_CPU_MHz       3324.999
mk_hw_lscpu_CPU_family    6
mk_hw_lscpu_CPU_op-modes  32-bit, 64-bit
mk_hw_lscpu_CPU_sockets   1
mk_hw_lscpu_L1d_cache     32K
mk_hw_lscpu_L1i_cache     32K
mk_hw_lscpu_L2_cache      256K
mk_hw_lscpu_L3_cache      12288K
mk_hw_lscpu_Model         44
mk_hw_lscpu_Stepping      2
mk_hw_lscpu_Vendor_ID     GenuineIntel
mk_hw_mem_description     System Memory
mk_hw_mem_physical_id     82
mk_hw_mem_size            8GiB
mk_hw_mem_slot            System board or motherboard
mk_hw_nic0_bus_info       pci@0000:02:00.0
mk_hw_nic0_capacity       1Gbit/s
mk_hw_nic0_clock          66MHz
mk_hw_nic0_description    Ethernet interface
mk_hw_nic0_logical_name   eth0
mk_hw_nic0_physical_id    0
mk_hw_nic0_serial         00:50:56:1c:01:f6
mk_hw_nic0_size           1Gbit/s
mk_hw_nic0_version        01
mk_hw_nic0_width          64 bits
mk_hw_nic1_bus_info       pci@0000:02:01.0
mk_hw_nic1_capacity       1Gbit/s
mk_hw_nic1_clock          66MHz
mk_hw_nic1_description    Ethernet interface
mk_hw_nic1_logical_name   eth1
mk_hw_nic1_physical_id    1
mk_hw_nic1_serial         00:50:56:1c:01:f7
mk_hw_nic1_size           1Gbit/s
mk_hw_nic1_version        01
mk_hw_nic1_width          64 bits
mk_hw_nic2_bus_info       pci@0000:02:02.0
mk_hw_nic2_capacity       1Gbit/s
mk_hw_nic2_clock          66MHz
mk_hw_nic2_description    Ethernet interface
mk_hw_nic2_logical_name   eth2
mk_hw_nic2_physical_id    2
mk_hw_nic2_serial         00:50:56:1c:01:fa
mk_hw_nic2_size           1Gbit/s
mk_hw_nic2_version        01
mk_hw_nic2_width          64 bits
mk_hw_nic_count           3
mtu_dummy0                1500
mtu_eth0                  1500
mtu_eth1                  1500
mtu_eth2                  1500
mtu_lo                    16436
physicalprocessorcount    1
processorcount            1
productname               VMware Virtual Platform
type                      Other
virtual                   vmware

To check that Razor has in fact matched the node properly, notice that our node has picked up the new tag:

dhushon@razor:/opt/razor/conf$ sudo razor node get
Discovered Nodes
UUID           Last Checkin  Status                               Tags
5pWaIemUq6n4T3lnXl18Ph  32.7 min      B       [bigmem,memsize_8GiB,nics_3,vmware_vm,IntelCorporation,cpus_2]

The next step is to create a policy that will associate a model with a tag (in our case, our CentOS model with the “bigmem” tag).  The policy is basically another matcher, this time binding a node to a model via a tag.  We want to use the linux_deploy template and apply our install_centos6.4x64 model to nodes that match the tag “bigmem”:

dhushon@razor:/opt/razor/conf$ sudo razor policy add --template linux_deploy --label NameNode --tags bigmem --model 2SLXLNkCw444imPJLwrdNR
Policy created
UUID =>  5HkjZBIfOpNct7IIWPmZmb
Line Number =>  0
Label =>  NameNode
Enabled =>  false
Template =>  linux_deploy
Description =>  Policy for deploying a Linux-based operating system.
Tags =>  [bigmem]
Model Label =>  install_centos6.4x64
Broker Target =>  none
Currently Bound =>  0
Maximum Bound =>  0
Bound Counter =>  0

One last step, now that we’re comfortable, is to enable the policy (policies are disabled by default).  We should then see some magic happen: as soon as the next node check-in occurs, the new behavior is pulled down and the node is rebooted to the new PXE image target.

dhushon@razor:/opt/razor/conf$ sudo razor policy update 5HkjZBIfOpNct7IIWPmZmb --enabled true
UUID =>  5HkjZBIfOpNct7IIWPmZmb
Line Number =>  0
Label =>  NameNode
Enabled =>  true
Template =>  linux_deploy
Description =>  Policy for deploying a Linux-based operating system.
Tags =>  [bigmem]
Model Label =>  install_centos6.4x64
Broker Target =>  none
Currently Bound =>  0
Maximum Bound =>  0
Bound Counter =>  0
dhushon@razor:/opt/razor/conf$ sudo razor policy
#  Enabled    Label        Tags         Model Label       #/Max  Counter           UUID
0  true     NameNode    [bigmem]    install_centos6.4x64  0/-    0        5HkjZBIfOpNct7IIWPmZmb
1  false    JobTracker  [smallmem]  install_centos6.4x64  0/-    0        1NMAIlZAtU63GvhiBZadw9

And voilà… CentOS 6 running on my VM. Now let’s iterate so I can get Hadoop fully up!


CentOS 6.4 x86_64 node provisioned; note that the hostname prefix matches the model we defined.

Next up, we’ll map a few Puppet manifests to do the full Hadoop installation.
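
As a preview of the hand-off, Razor’s broker mechanism is what passes a freshly provisioned node over to Puppet. A minimal sketch, assuming the Puppet broker plugin that shipped with Razor 0.x (the flag names here are from memory and may differ; verify with sudo razor broker --help before relying on them):

# Register a Puppet broker so installed nodes check in with the Puppet master
dhushon@razor:/opt/razor/conf$ sudo razor broker add --plugin puppet --name puppet_broker --description "Hand-off to Puppet"

Attaching a broker like this to the NameNode and JobTracker policies (the “Broker Target” field that showed as none above) is what will let the Puppet manifests take over once the OS install completes.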


Thanks to Peter, Eli, Tom, Nan, Nick, and the following sites (incomplete):

An Awesome New Role

I’ve been invited to join an amazing executive team at CSC.  Mike Lawrie, the CEO, and his team have asked me to lead the technologists at CSC toward three declared technical foci: Big Data, Cloud and Cyber-Security.

Let me first say that I am incredibly thankful to EMC, an amazing leadership team and some massive talent.  During my six years I saw massive innovation (from within), transformation (through execution) and opportunity (through the Pivotal Initiative).  I’ve worked with people I’ll always call friends, and Mike has told me that he hopes EMC and CSC will only grow closer through this transition.

I’m both excited and sad. I even found this the other day in my wallet:

I look back and recognize that I’ve been innovating and executing around service providers, cloud and big data for six years at EMC.  Now I’m being given the opportunity to lead in this same technology landscape and effect more substantial value creation.  For perspective, the Gartner and Forrester reports on big data and cloud have always said that 60-70% of the revenue is in services.

Gartner: $29B will be spent on big data throughout 2012 by IT departments.  Of this figure, $5.5B will be for software sales and the balance (roughly $23.5B, or about 80%) for IT services.

But what they didn’t say is how rewarding solving real customer problems is.  It’s truly awesome, something I’m looking forward to!  And it is my intention to participate with the CSC community to deliver innovation and value to our team, our shareholders and our customers.

Thanks to everyone! I cherish the memories, and look forward to the challenges ahead!


Value Proposition for Virtual Networks: The Dilemma

As we think about corporate data centers and the [inter]networks that join them, we begin to recognize a number of critical points at which information is accessed, produced, cached, transited, processed and stored.  One critical feature of the emerging landscape is that mobility and the mega-bandwidth edge have changed the underlying network basis for the design of Web-scale systems, and for how those systems overlay traditional ISP and carrier network topologies:

On the location taxonomy:

  1. PREMISE: Person, Device, Service = Ethernet/IP
  2. EDGE: Mobile Base Station, Distribution Point, DSLAM = Network Edge IP->Metro Transport Network [IP/MPLS, xWDM]
  3. EDGE/METRO/BACKBONE: Regional Data Center = Metro ->Core Carrier Transport Network [IP/MPLS, xWDM, PON]
  4. BACKBONE/CORE: Carrier Data Center  = Core Carrier Transport Network -> ISP Network
  5. INTERNET: ISP = ISP Network -> Internet
  6. repeat 5,4,3,2,1 to target.

You can see how the latency and costs can stack up.  This model is hugely “north/south” targeted, with packets making huge horseshoes through the network in order to transit content.  And for most carriers, physical and protocol barriers exist throughout the location tiers, preventing bypass or intermediating strategies that might support everything from local caching/routing to increased traffic and congestion management.

This is the content carriage cost dilemma: PREMISE->EDGE and EDGE->BACKBONE bandwidth.
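
To make the stacking concrete, an illustrative budget (the per-tier figures are assumptions for the sake of arithmetic, not measurements): if each tier crossing costs 5-10 ms, a request that traverses PREMISE -> EDGE -> METRO/BACKBONE -> CORE -> ISP -> INTERNET and returns along the same horseshoe makes roughly ten tier crossings, accumulating 50-100 ms of transport latency before any application work is done.  Every crossing that edge caching or local routing removes takes a proportional bite out of that budget, which is exactly the bypass opportunity the current tiers prevent.
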
  • Reduce cost and complexity and increase service flexibility
    • The ability to introduce new services quickly and adapt to changing traffic patterns is crucial.
    • Move network functions away from expensive hardware infrastructure to software running on lower-cost COTS hardware.
    • Use Software Defined Networks (SDN) and cloud computing to provide the required flexibility and the ability to adapt to changing traffic patterns.
  • N2 GN principles
    • Separate control and data.
      • Open, standards-based interfaces (OpenFlow)
    • Introduce application/policy-based traffic steering.
    • Introduce virtualization.
    • Standards and automation for higher-level functions will be critical to the transformation.

The Benefits:

  • Virtualized environment
    • You can load servers up to an appropriate production traffic level
    • Hardware resilience can be disconnected from the service applications
      • This leads directly to reduced power, air conditioning and other facility requirements
    • Standard hardware
      • Large scale drives cost down (economies of scale)
      • One spare-holding SKU, drastically reducing the number of units needed
      • Reduces field-force training requirements
      • Gets away from “new software release means a field firmware upgrade,” which takes many truck rolls and potentially many combinations to handle
    • Disaster recovery/business continuity can be simplified
    • Software implementation of network functions
      • Introduce new services without deploying an entire network of new hardware; if a service fails commercially, you can reuse the hardware
      • Move workloads around the network to tactically handle capacity issues in specific geographic areas

The Future

Software Defined Content Networks (SDCN) via SDN:

  • Functional agility
    • Mutable functions
    • Content/application-aware routing and separation of packet forwarding from control
      • to rapidly introduce new services and adapt to changing traffic patterns
  • The network processing slice
    • The network infrastructure consists of low-cost network elements: servers, packet-optical transport and data center switches
    • Elements are edge-located where needed and consolidated where needed
    • Network functions, caching, applications and enablers all run in virtualized, distributed equipment
    • Transport is an integrated optical IP network

And just to wrap up on the importance, something I thought was relevant to the re-invention of the network: Akamai’s [now retired] CEO Paul Sagan, speaking with Om Malik at Structure 2012, advised carriers to “extend IP throughout [and increase transport network bandwidth] and limit protocol differences to enable the Instant Internet”:

  • must deliver to user expectations with a TV-like experience
  • need a 100-fold increase in bandwidth and transactions by the end of the decade
  • 2 billion transactions/day today
  • content delivery services are now less than 50% of the business; the rest is traffic control, namely performance and security guarantees

To play off of Akamai’s requirements, service providers are building out well-positioned [read: distributed] data centers with a focus on capital and operational [power/people] efficiencies.  As these distributed centers operate, a key opportunity is to avoid IP transit costs (e.g., long-haul interconnect tariffs) and to support workload prioritization that reduces over-provisioning and supports premium offers.

These new data centers, built for changing workloads like analytic warehouses and Hadoop, require substantial east-west (bisection) bandwidth across a “stretched” core network.  This is markedly different from the north/south bandwidth delivered through content delivery and transactional Web applications.

Perhaps the most forgotten value proposition is the mobile/access edge: pushing SDN capabilities all the way to the endpoint.  Carriers typically dominate the last mile and view content loads like Netflix, iCloud and other services as parasitic.  With the ability to create differentiated service offers all the way to the eyeballs, there’s a chance that, regulation permitting, QoS, lower latency or location-aware services can again be recognized as drivers of ARPU/value premiums.

We are just starting to see broad thinking about NV/SDN among the providers, but I truly expect these technologies to unlock new services and new revenue streams.

Puppet, DevOps and Big Data

I was really excited to be asked to give one of the keynotes at Puppet Labs’ PuppetConf this year.  My initial excitement came from seeing that the DevOps movement, capitalized on by system administrators to make their lives easier, has parallels in the lives of software developers and information architects as they try to create automation and reproducibility within their own domains.

Reflecting on how to simplify, automate and increase the pace of information integration drove me to talk about the transitive uses of DevOps models in new domains.

Thanks to the amazing Puppet community for offering me the soapbox to discuss some of the parallels I am seeing as we [EMC] try to make information integration more agile, predictable and reproducible in the face of increasing growth, complexity and urgency.

Also, I want to thank a few of the critical people who have contributed massive value to my thinking, including the Greenplum team working on the K2 cluster, especially Milind and Apurva, as well as Nick Weaver and Tom McSweeney, who have worked tirelessly to make Razor not just real but a leading provisioning platform for operating systems in large-scale environments, including provisioning Windows like never before!