Tuesday, March 17, 2009

Good Morning – Central Ohio VMware Users Group (COVMUG)

Today I have the privilege of presenting to the COVMUG. This is my first trip to COVMUG, but ever since I was invited to speak to the group, I've been pretty excited. I always love talking with fellow VMware aficionados about what's happening in virtualization, how virtualization works for them, the challenges they face, and of course how DataCore Storage Virtualization Solutions can help them achieve their virtualization goals.

I've created a new presentation for the group today, and while I'm going to cover some of the standard VMware Virtual Infrastructure themes, I'm really focusing on the future: vSphere (a.k.a. ESX 4/4i), VMware View, the Virtual Datacenter OS, and of course my favorite, High Availability.

 
What: Central Ohio VMware Users Group (COVMUG)
http://campaign.vmware.com/usergroup/invites/CentralOhio_3-17-09.html

Central Ohio Area VMware User Group Meeting Invitation

Please join us for the upcoming Central Ohio Area VMware User Group meeting on Tuesday, March 17th. This is a great opportunity to meet with your local peers to discuss virtualization trends, best practices, and the latest technology!

Agenda:
  • Morning Refreshments, Registration & Sign-In
  • DataCore Presentation: SANmelody and SANsymphony, storage virtualization within the context of VMware environments, using customer case studies showing how challenges such as Storage Utilization, Business Continuance and Disaster Recovery have been met
  • DataCore SANmelody Product Demonstration
  • Q&A and Open Discussion

Register today to join us for this free, informative event. Space is limited, so respond as soon as possible to reserve your seat.
The VMUG Team

When: Tuesday, March 17, 2009, 9:00 AM to 12:00 PM
Where: Platform Lab
1275 Kinnear Road
Columbus, Ohio 43212, United States

Monday, March 16, 2009

Elvis Visits DataCore At ISS 2009

Today was a great day at the Intel Solutions Summit 2009.  One of the highlights was Elvis stopping by for a rendition of Blue Suede Shoes.


Saturday, March 14, 2009

Truth in Marketing: Active-Active vs. Active-Passive

Nothing sends me around the bend faster than when a vendor lies to me. Now, I realize that "lie" is a strong word and I shouldn't use it in accusatory speech or in the written word. I also understand that in areas of fast-paced technological development there can be genuine cases of misunderstanding. However, none of those circumstances applies to vendors who knowingly and blatantly misrepresent their products and features. I think somewhere around the time I attended kindergarten I learned that this type of behavior was not only wrong but completely socially unacceptable.

So why, then, do companies like EMC, HP, IBM and others market their Active-Passive (A/P) arrays as Active-Active (A/A)? I actually have no idea, but I will say that although I have the utmost respect for many of the folks on their engineering, R&D, and field teams, I think the folks in their marketing organizations are nothing but a pack of liars out to take advantage of the unaware.

Once upon a time I had occasion to be at a tech show and was casually chatting with some of the staff at a vendor's booth (name withheld to protect the innocent). During the conversation the booth staffer began to elaborate on how their latest-generation storage array had all these cool new features and enhancements and, amongst other things, was A/A. Really? Now, I admit I'm not an expert in practically anything, but I do know a little something about storage and what constitutes A/A versus A/P. So I asked the intrepid marketeer how it was that this Nth generation, which came from a long line of arrays in this array family, was suddenly A/A, given that all previous generations were A/P.

Oh no, he tells me, they've always been A/A; I must have been misinformed all these years. OK, it happens to the best of us, please tell me more. Well, you see, the A/A nature of our platform inherently gives it the ability to serve up LUNs from both Storage Processors (SPs) at the same time. Wow, that is simply amazing, possibly even revolutionary now that I think about it… are you kidding me?

I managed to get control of myself just before I asked whether he was being deliberately obtuse or was just storage illiterate, but not before I asked a couple of what I admit, in retrospect, were unfairly complex storage architecture questions. I began by asking, if their array was A/A, then what does an A/P array do and how was theirs different, and I followed up by asking whether all the paths to the LUNs were accessible simultaneously. Well, no, not all the paths were active, because after all, who would want that?

That was it, I couldn't take any more, and I explained that A/A actually means that all paths through the SPs are active, all the time, hence the origin of the phrase "Active-Active controllers." I further explained that when you have an Nth-generation system where only some of the paths are active, because you cannot access a LUN through the other controller unless you trespass the LUN, it is in fact an A/P array. He was absolutely incredulous and proceeded to call over an engineer who would attempt to brainwash me, I mean educate me, further in the Jedi ways of storage.

The engineer walks over, eager to assist, and says that I was in fact correct, but that they choose to define A/A as meaning you can "actively use both controllers at the same time, just not with the same LUN, of course." Well, that certainly clears things up; I mean, after all, let's not let little things like commonly accepted definitions and facts get in the way of good marketing. In fact, I think it's a great idea that vendors just make up whatever they want and tell prospective customers whatever they think they want to hear.
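For anyone who hasn't lived through a LUN trespass, here is a minimal, vendor-neutral sketch of the distinction. The class names and behavior are my own illustration, not any vendor's actual firmware: in a true A/A array every path through every SP services I/O all the time, while in an A/P array I/O through the non-owning SP forces the LUN to be trespassed first.

```python
# Minimal, vendor-neutral sketch of A/A vs. A/P behavior.
# Names and logic are illustrative assumptions, not any vendor's firmware.

class Lun:
    def __init__(self, lun_id, owner):
        self.lun_id = lun_id
        self.owner = owner          # SP that currently "owns" the LUN (matters for A/P)
        self.trespass_count = 0

class Array:
    def __init__(self, mode):
        self.mode = mode            # "A/A" or "A/P"

    def io(self, lun, via_sp):
        """Issue I/O to a LUN through a given storage processor."""
        if self.mode == "A/A":
            # True active-active: every path through every SP services I/O, all the time.
            return f"I/O to LUN {lun.lun_id} served via {via_sp}"
        # Active-passive: the non-owning SP cannot service I/O until the LUN is trespassed.
        if via_sp != lun.owner:
            lun.owner = via_sp
            lun.trespass_count += 1
            return f"LUN {lun.lun_id} trespassed to {via_sp}, then served"
        return f"I/O to LUN {lun.lun_id} served via {via_sp}"

lun = Lun(0, owner="SPA")
ap = Array("A/P")
print(ap.io(lun, "SPB"))                            # forces a trespass; only one SP owns the LUN
aa = Array("A/A")
print(aa.io(lun, "SPA"), "/", aa.io(lun, "SPB"))    # both paths active simultaneously
```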

I'm officially done ranting about this issue (probably not), but I do want to point out that there are a number of storage vendors grossly misrepresenting their products' features and capabilities. If you are not well versed in the technology you are looking at, listen to what they have to say and then ask someone you trust. If all else fails, ask me. It's not that I'm any more likely than anyone else you may have asked to know the answer, but at least you'll know I won't tell you your new Nth-generation array is A/A if it is not.

                                    ###

 

New Thinking in Disaster Recovery Strategies

Over the last few years there has been a lot of discussion in the industry about the various aspects of Business Continuity but the primary focus has centered on two areas:

  • High Availability
  • Disaster Recovery

In regard to Disaster Recovery, the majority of the discussion has focused on how you get from your primary business operations center to an alternate location. But what if you couldn't go to a single alternate location and needed to do what I describe as "Distributed-DR"? The difference in a Distributed-DR strategy is that instead of cutting over to a single DR datacenter, if you have multiple small Remote Office/Branch Offices (ROBOs), you distribute your primary datacenter in small pieces across the ROBOs, making it more practical to have a real-world DR plan.
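To make the idea concrete, here is a minimal sketch of the placement problem Distributed-DR implies. The site names, workloads, and capacity units are invented for illustration, and the greedy placement is just one naive way to spread the primary datacenter's pieces across ROBO sites.

```python
# Minimal sketch of the "Distributed-DR" idea: instead of one failover site,
# spread the primary datacenter's workloads across several ROBO sites.
# Site names, workloads, and capacity units are invented for illustration.

workloads = {"mail": 4, "erp": 8, "file": 2, "web": 2, "db": 6}   # relative "size" units
robos = {"Columbus": 8, "Dayton": 8, "Toledo": 8}                  # spare capacity per ROBO

placement = {site: [] for site in robos}
for name, size in sorted(workloads.items(), key=lambda kv: -kv[1]):
    # Greedy: place each workload on the ROBO with the most remaining capacity.
    site = max(robos, key=robos.get)
    if robos[site] < size:
        raise RuntimeError(f"No ROBO can absorb {name}")
    robos[site] -= size
    placement[site].append(name)

for site, apps in placement.items():
    print(f"{site}: {apps} (capacity left: {robos[site]})")
```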

One of the more interesting things I ran into while modeling this in our Advanced Technologies lab was the impact this has on one of the other quintessential problems in DR planning and execution: bandwidth. When we think about protecting a company's data there are two elements: Recovery Point Objective (RPO) and Recovery Time Objective (RTO). In the simplest terms, RPO defines the amount of data you are willing to lose, while RTO defines how long you are willing to be out of business.

I have consulted for many companies over the years, and when we discussed contingency plans for disaster recovery I would often ask: how much are you willing to lose, and for how long? The answer was always as little as possible and as near zero downtime as possible, or what I call a "0/0" DR Plan. I started calling them "No-No" DR Plans because as soon as the client got the estimate for what it would cost to meet their objectives, the immediate response was "No way, no how" can we pay that…. I have long asserted that given enough money anything is possible, and in the DR business I generally find this to be true. The challenge is finding the breakpoint between what it costs to achieve a 0/0 plan versus the business value of data loss or inaccessibility.

One of the first reasons to back away from a 0/0 DR Plan is the relative cost of the bandwidth necessary to replicate the data between the primary and alternate datacenters. Another complicating factor is the availability of high-speed circuits; I've been in a number of locations where it can be difficult to get circuits larger than a DS-3 due to carrier or infrastructure limitations.
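A bit of back-of-the-envelope math shows why. The sketch below is purely illustrative: the 50 GB burst, the 30-minute window, and the workload are hypothetical numbers I picked, not measurements from our lab, but they show how quickly a near-zero RPO outruns an affordable circuit.

```python
# Back-of-the-envelope replication sizing; every input here is a hypothetical
# example, not a measurement from the DataCore AT Lab.

DS3_MBPS = 44.736  # approximate DS-3 line rate in megabits per second

def link_mbps_for_rpo(burst_gb: float, burst_minutes: float, rpo_minutes: float) -> float:
    """Rough average link speed needed so a write burst is fully replicated
    no later than `rpo_minutes` after it finishes."""
    megabits = burst_gb * 8 * 1024
    seconds_available = (burst_minutes + rpo_minutes) * 60
    return megabits / seconds_available

# Example: a nightly batch job rewrites 50 GB in about 30 minutes.
for rpo in (0, 15, 60, 240):
    need = link_mbps_for_rpo(burst_gb=50, burst_minutes=30, rpo_minutes=rpo)
    verdict = "fits in" if need <= DS3_MBPS else "exceeds"
    print(f"RPO {rpo:>3} min -> ~{need:6.1f} Mbps sustained, {verdict} a DS-3")
```

The same burst that needs a couple of hundred megabits per second to stay near-synchronous fits comfortably within a DS-3 once you relax the RPO to a few hours, which is exactly the cost-versus-objective breakpoint conversation described above.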

Business Continuity, inclusive of both High Availability and Disaster Recovery, is as much about physics as it is about methodical planning. Theories in technology are immensely entertaining to discuss but yield remarkably little in the way of profits. Any really good theory, and a lot of crazy theories, needs to be modeled and tested against a real-world set of data.

The concept of Distributed-DR addresses one of the key challenges of DR by allowing the distribution of data in the direction it makes sense and the re-aggregation of data where it makes sense, or so the theory goes. All of this sounds good on the whiteboard, but the proof is in the lab and in the real world.

Meanwhile, back in the DataCore AT Lab, we needed to model a company that would be a fair representation of a real-world organization and the virtual infrastructure needed to support it. What does that mean? One of my favorite quips is that it's better to under-promise and over-deliver than the other way around. That said, to say we may have over-built Demo Company, Inc., with 16 servers and 25 desktops for a company of 25 employees, is probably true, but it provides a representative sample of what is common practice in the industry today and allows us to measure the scalability of this solution.

American Airlines Virtually Eliminates Carry-On Baggage

I just learned from American Airlines that they have reduced the size of "allowed" carry-on bags by 30%. According to a company spokesperson, the change was a business decision made by American's management team. The reason for the change was unclear, and when I asked for additional clarification the spokesperson indicated the change was made to reduce the airline's FAA fines for "ground delays."

I asked if the aircraft in American Airlines' fleet had recently been reconfigured, or were planned to be, based on the change, and was told that the change would not affect the current or planned aircraft configurations.

So basically, American Airlines reduced the size of carry-on bags they will allow you to bring onboard and instead allow you to pay to check the same size bag you carried on last week.  Meanwhile the amount of available overhead space in their aircraft remains the same.

I've seen companies do a lot of interesting things, but this is just ridiculous. I realize I'm probably a little more sensitive to the issue because I am a frequent flyer (> 100 segments/year). I have a standard-size Tumi "pulley-style" suitcase that I have carried onto many different airlines, on over 200 flights in the last 3 years.

In the interest of fairness, I wrote a letter of concern to American Airlines regarding this matter and I eagerly await their response.  In the meantime, I have no choice but to board American Airlines flight 697 but I will be cancelling the remainder of my reservations with American Airlines and rebooking with another airline.  I called Continental, Southwest, Jet Blue and US Airways to verify their carry-on baggage policy and they all confirmed my suitcase is within the limits to be carried onboard their flights and welcomed my business.


Tuesday, March 10, 2009

Virtualization & Digital Healthcare

There is a lot of talk about the US Government's digital healthcare initiative. With billions of dollars slated for these initiatives, one of the big questions is what role virtualization technologies will play in the future of healthcare systems. I think the obvious answer is a large one, but the devil is in the details.

Vendors in both the healthcare vertical and virtualization need to come together and begin to develop joint solutions that meet the needs of the industry and fall within the scope of the mass of funding that will become available.

The good news is that it goes without saying that high availability is going to be a big part of anything that is developed for the healthcare market (or I’m not going to be treated there) so it will be an exciting opportunity for some cool architectures.

XChange Solution Provider 2009 Conference

I'm here at the XChange conference talking with solution providers and fellow vendors about technology, the market, new opportunities, and how to survive and grow in tough times. It has been great to hear a variety of business owners talk about what is working for them and what they need to continue to be successful.

I find myself talking about virtualization with a lot of people. There is strong interest in how virtualization can be leveraged in the small and mid-size business markets. All of the major virtualization players (Citrix, Microsoft, Parallels and VMware) have packages and programs designed for the SMB. The key to the success of virtualization in the SMB (IMHO) is the business math. If virtualization solutions continue to cost more than traditional solutions for unrecognized value*, then virtualization will continue to exist primarily in the upper mid-market and enterprise markets.

*I consider anything that a business owner does not recognize as a tangible benefit to be unrecognized value.  In my experience, businesses are reluctant (at best) to pay for things that do not give them something they can use to generate profits.
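To illustrate what I mean by the business math, here is a toy comparison. Every price and consolidation ratio in it is a made-up placeholder, not a quote from any vendor; the only point is that below a certain workload count the extra licensing and shared storage can outweigh the hardware savings unless the owner actually values the softer benefits.

```python
# Toy "business math" comparison for an SMB; every number below is a made-up
# placeholder, not a quote from any vendor.

def traditional_cost(workloads: int, server_price: float = 3000.0) -> float:
    """One physical server per workload, no shared storage required."""
    return workloads * server_price

def virtualized_cost(workloads: int, per_host: int = 6,
                     host_price: float = 7000.0,
                     hypervisor_license: float = 3000.0,
                     shared_storage: float = 15000.0) -> float:
    """Consolidated hosts plus the licensing and shared storage virtualization adds."""
    hosts = -(-workloads // per_host)          # ceiling division
    return hosts * (host_price + hypervisor_license) + shared_storage

for n in (5, 10, 20):
    t, v = traditional_cost(n), virtualized_cost(n)
    print(f"{n:>2} workloads: traditional ${t:>8,.0f}  vs  virtualized ${v:>8,.0f}")
```

With these invented numbers the crossover only happens around 20 workloads; everything the smaller shop gets below that point is "unrecognized value" unless someone translates it into dollars the owner believes.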