Saturday, June 6, 2009

DataCore Announces Advanced Site Recovery (ASR) for Microsoft Hyper-V and VMware vSphere

At Microsoft TechEd North America, DataCore announced and demonstrated at the Partner Expo a pre-release version of our ASR technology. ASR gives customers site-level Distributed Disaster Recovery (D-DR). In simple terms, with ASR you can take a primary datacenter (site 1), divide it into one or more pieces, and perform one-to-many D-DR across multiple sites. ASR is bi-directional out of the box, meaning users can fail over to their DR site or sites and later return, or remerge, back to the primary, what we call Reverse Disaster Recovery (R-DR). With ASR, users can execute D-DR and R-DR with 2 (yes, I really mean two) clicks. Did I mention that ASR supports both physical and virtual servers?
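If the one-to-many idea is hard to picture, here is a quick sketch in Python. To be clear, this is my own illustration of the site-mapping concept, not ASR's actual interface; the piece and site names are made up.

    PRIMARY = "Site1"

    # Divide the primary datacenter into pieces and map each piece to a DR site.
    site_map = {
        "finance-vms": "BranchOffice-A",
        "email-vms": "BranchOffice-B",
        "web-servers": "BranchOffice-C",
    }

    def failover(piece):
        """D-DR: activate a piece of the primary at its designated DR site."""
        target = site_map[piece]
        print(f"{piece}: {PRIMARY} -> {target} (now running at {target})")

    def failback(piece):
        """R-DR: return, or remerge, the piece back into the primary datacenter."""
        source = site_map[piece]
        print(f"{piece}: {source} -> {PRIMARY} (remerged into {PRIMARY})")

    for piece in site_map:   # one-to-many D-DR...
        failover(piece)
    for piece in site_map:   # ...and the reverse trip home (R-DR)
        failback(piece)

The point of the toy: one mapping drives both directions, which is why the failback is just as simple as the failover.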

 

Tuesday, April 28, 2009

DataCore Demos Integrated Storage Solutions for Microsoft Infrastructure

At MMS 2009 DataCore Software is demonstrating their new Windows Server 2008 R2 (Server 7) based iSCSI & Fibre Channel storage solution, SANsymphony 7.0. One platform (Server 7), many datacenters, and the cloud. It is nothing short of incredible to see provisioning and management of storage, systems and applications on a unified platform.

The live demos show how simple it can be to deploy fabric-based storage in real time with virtualization hosts and virtual machines. As if making deployment easy weren't enough, the demos then show the impact of losing 50% of the storage infrastructure that supports the datacenter.

Did I say "see" the impact? Well, seeing is believing, and when you watch half of the storage in the datacenter drop offline and nothing happens, you begin to understand how powerful DataCore's solution is with Microsoft infrastructure.

For those of you familiar with System Center Operations Manager (SCOM), you will appreciate the unprecedented ability to visualize the storage infrastructure as a distributed application and manage and monitor it in one place.

Demos take place at the DataCore booth 617 every hour on the hour during Expo hours throughout MMS.

Microsoft Management Summit 2009

I’m here at the 10th Annual MMS at the Venetian Hotel in Las Vegas.  Microsoft has done a nice job packaging and delivering information on their latest server and management technologies to the user community. 

Here is the philosophy:

  • Unified & Virtualized
  • Process-Led/Model-Driven
  • Service-Enabled
  • User-Focused

So what does this mean to me as an IT user? In the ideal world it means I have universal access to the information I need, whenever and wherever I need it, and the computing power to use that information in real time. In the real world it means we don't know yet. The future is certainly going to be interesting as well as exciting.

Wednesday, April 15, 2009

VMware Technology Exchange Spring 2009

I’m listening to Carl Eschenbach deliver the keynote address to the partner community.  Carl is both enthusiastic and cheerful in his articulation of VMware’s outlook for the partner community going forward this year. 

After a brief introductory presentation on vSphere and the introduction of another new geo-based manager for the Americas market, he unveiled the new "VMware Partner Network."

I can distill the hour-long presentation down to this: "Don't worry, the future is full of virtual opportunity; we've got a new partner program, so let's go sell some software." The best line of the morning came in describing VMware's award from Redmond Magazine. VMware and Microsoft: are they partners or competitors? As Carl said, "…I can't think of a better place to run Windows (than on VMware)…"

Tuesday, March 17, 2009

Good Morning – Central Ohio VMware Users Group (COVMUG)

Today I have the privilege of presenting to the COVMUG. This is my first trip to COVMUG, but since I was invited to speak to the group, I've been pretty excited. I always love talking with fellow VMware aficionados about what's happening in virtualization, how virtualization works for them, the challenges they face, and of course how DataCore Storage Virtualization Solutions can help them achieve their virtualization goals.

I've created a new presentation for the group today, and while I'm going to cover some of the standard VMware Virtual Infrastructure themes, I'm really focusing on the future: vSphere (a.k.a. ESX 4/4i), VMware View, Virtual Datacenter OS, and of course my favorite, High Availability.

 
What: Central Ohio VMware Users Group (COVMUG)
http://campaign.vmware.com/usergroup/invites/CentralOhio_3-17-09.html

Please join us for the upcoming Central Ohio Area VMware User Group meeting on Tuesday, March 17th. This is a great opportunity to meet with your local peers to discuss virtualization trends, best practices, and the latest technology!

Agenda:
  • Morning Refreshments, Registration & Sign-In
  • DataCore Presentation: SANmelody and SANsymphony, storage virtualization within the context of VMware environments, using customer case studies showing how challenges such as Storage Utilization, Business Continuance and Disaster Recovery have been met
  • DataCore SANmelody Product Demonstration
  • Q&A and Open Discussion

Register today to join us for this free informative event. Space is limited, so respond as soon as possible to reserve your seat. The VMUG Team

When: Tuesday, March 17, 2009, 9:00 AM to 12:00 PM
Where: Platform Lab, 1275 Kinnear Road, Columbus, Ohio 43212, United States

Monday, March 16, 2009

Elvis Visits DataCore At ISS 2009

Today was a great day at the Intel Solutions Summit 2009.  One of the highlights was Elvis stopping by for a rendition of Blue Suede Shoes.


Saturday, March 14, 2009

Truth in Marketing: Active-Active vs. Active-Passive

Nothing sends me around the bend faster than when a vendor lies to me. Now I realize that "lie" is a strong word and I shouldn't use it in accusatory speech or in the written word. I also understand that in areas of fast-paced technological development there can be genuine cases of misunderstanding. However, none of the aforementioned circumstances apply to vendors who knowingly and blatantly misrepresent their products and features. I think somewhere around the time I attended kindergarten I learned this type of behavior was not only wrong but completely socially unacceptable.

So why then do companies like EMC, HP, IBM and others market their Active-Passive (A/P) arrays as Active-Active (A/A)? I actually have no idea, but I will say that although I have the utmost respect for many of the folks on their engineering, R&D, and field teams, I think the folks in their marketing organizations are nothing but a pack of liars out to take advantage of the unaware.

Once upon a time I had occasion to be at a tech show and was casually talking with some of the staff at a vendor booth (name withheld to protect the innocent). During the conversation the booth staffer began to elaborate on how their latest-generation storage array had all these cool new features and enhancements and, amongst other things, was A/A. Really? Now I admit I'm not an expert in practically anything, but I do know a little something about storage and what constitutes A/A versus A/P. So I asked the intrepid marketeer how it was that this Nth-generation array, which came from a long line of arrays in this family, was suddenly A/A given that all previous generations were A/P.

Oh no, he tells me, they've always been A/A; I must have been misinformed all these years. OK, it happens to the best of us, please tell me more. Well, you see, the A/A nature of our platform inherently gives it the ability to serve up LUNs from both Storage Processors (SPs) at the same time. Wow, that is simply amazing, possibly even revolutionary now that I think about it… are you kidding me?

I was able to get control of myself just before saying something I would have regretted, but not before asking a couple of what I admit in retrospect were unfairly complex storage architecture questions. I began by asking: if their array was A/A, then what does an A/P array do, and how was theirs different? I followed up by asking whether all the paths to the LUNs were accessible simultaneously. Well, no, all the paths weren't active, because after all, who would want that?

That was it, I couldn't take any more, and I explained that A/A actually means that all paths through the SPs are active, all the time, hence the origin of the phrase "Active-Active controllers." I further explained that when you have an Nth-generation system where only some of the paths are active, because you can't access a LUN through the other controller unless you trespass it, what you have is in fact an A/P array. He was absolutely incredulous and proceeded to call over an engineer who would attempt to brainwash me, I mean educate me further, in the Jedi ways of storage.

The engineer walks over, eager to assist, and says that I was in fact correct, but that they choose to define A/A as meaning you can "actively use both controllers at the same time, just not with the same LUN, of course." Well, that certainly clears things up; I mean, after all, let's not let little things like commonly accepted definitions and facts get in the way of good marketing. In fact, I think it's a great idea that vendors just make up whatever they want and tell prospective customers whatever they think they want to hear.
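For anyone keeping score at home, here is a toy model of the actual distinction in Python. This is my own illustration, not any vendor's implementation; the acid test is whether I/O to a given LUN can flow through both SPs without a trespass.

    class Array:
        """Toy model of the A/A vs. A/P distinction (my illustration,
        not any vendor's implementation)."""

        def __init__(self, active_active):
            self.active_active = active_active
            self.owner = {}  # LUN -> owning SP (only meaningful for A/P)

        def io(self, lun, sp):
            if self.active_active:
                # True A/A: every path through every SP is active, all the time.
                return f"I/O to {lun} via {sp}: OK"
            owner = self.owner.setdefault(lun, "SP-A")
            if sp == owner:
                return f"I/O to {lun} via {sp}: OK"
            # A/P: the other SP can't serve this LUN until it is trespassed.
            self.owner[lun] = sp
            return f"I/O to {lun} via {sp}: TRESPASS (ownership moved to {sp})"

    aa = Array(active_active=True)
    ap = Array(active_active=False)
    for label, array in (("A/A", aa), ('"A/A" per the brochure', ap)):
        print(label, "|", array.io("LUN0", "SP-A"), "|", array.io("LUN0", "SP-B"))

Run it and the A/P box will happily "use both controllers at the same time," just never for the same LUN, which is precisely the brochure trick.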

I'm officially done ranting about this issue (probably not); however, I do want to point out that there are a number of storage vendors grossly misrepresenting their products' features and capabilities. If you are not well versed in the technology you are looking at, listen to what they have to say and then ask someone you trust. If all else fails, ask me. It's not that I'm any more likely than anyone else you may have asked to know the answer, but at least you'll know I won't tell you your new Nth-generation array is A/A if it is not.

                                    ###

 

New Thinking in Disaster Recovery Strategies

Over the last few years there has been a lot of discussion in the industry about the various aspects of Business Continuity, but the primary focus has centered on two areas:

  • High Availability
  • Disaster Recovery

In regard to Disaster Recovery, the majority of the discussion has focused on how you get from your primary business operations center to an alternate location. But what if you couldn't go to a single alternate location and needed to do what I describe as "Distributed-DR"? The difference in a Distributed-DR strategy is the idea that, instead of cutting over to a single DR datacenter, if you have multiple small Remote Office/Branch Offices (ROBOs) you distribute your primary datacenter in small pieces across the ROBOs, making it more practical to have a real-world DR plan.

One of the more interesting things I ran into while modeling this in our Advanced Technologies lab was the impact this has on one of the other quintessential problems in DR planning and execution: bandwidth. When we think about protecting a company's data there are two elements: Recovery Point Objective (RPO) and Recovery Time Objective (RTO). In the simplest terms, RPO defines the amount of data you are willing to lose, versus RTO, which defines how long you are willing to be out of business.

I have consulted for many companies over the years, and often when we discussed contingency plans for disaster recovery I would ask: how much are you willing to lose, and for how long? The answer was always as little as possible and as near zero downtime as possible, what I call a "0/0" DR plan. I started calling them "No-No" DR plans because as soon as the client got the estimate for what it would cost to meet their objectives, the immediate response was "No way, no how" can we pay that. I have long asserted that given enough money anything is possible, and in the DR business I generally find this to be true. The challenge is finding the breakpoint between what it costs to achieve a 0/0 plan versus the business value of data loss or inaccessibility.

One of the first reasons to back away from a 0/0 DR plan is the relative cost of the bandwidth necessary to replicate the data between the primary and alternate datacenters. Another complicating factor is the availability of high-speed circuits; I've been in a number of locations where it can be difficult to get circuits larger than a DS-3 due to carrier or infrastructure limitations.
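To put rough numbers on it, here is some back-of-the-napkin Python. The change rates are illustrative, I'm using decimal units (1 GB = 8,000 megabits), and I'm ignoring compression and protocol overhead; the point is only that the link must sustain at least the data change rate, or the replica falls ever further behind and your real RPO grows without bound.

    DS3_MBPS = 45.0  # ~45 Mbps, often the largest circuit a site can get

    def change_rate_mbps(gb_per_hour):
        """Average link speed (Mbps) needed just to keep pace with changes."""
        return gb_per_hour * 8 * 1000 / 3600  # GB/hr -> megabits/sec

    for gb_hr in (5, 10, 30):
        need = change_rate_mbps(gb_hr)
        verdict = "fits" if need <= DS3_MBPS else "does NOT fit"
        print(f"{gb_hr:>2} GB/hr of change -> {need:5.1f} Mbps ({verdict} on a DS-3)")

    # The initial synchronization is where 0/0 plans really go to die:
    seed_tb = 1.0
    hours = seed_tb * 8_000_000 / DS3_MBPS / 3600  # 1 TB = 8,000,000 megabits
    print(f"Seeding {seed_tb} TB over a saturated DS-3: ~{hours:.0f} hours")

Even modest change rates saturate a DS-3 long before you get anywhere near 0/0, and that is before you pay to seed the first full copy.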

Business Continuity, inclusive of both High Availability and Disaster Recovery, is as much about physics as it is about methodical planning. Theories in technology are immensely entertaining to discuss but yield remarkably little in the way of profits. Any really good theory, and a lot of crazy theories, need to be modeled and tested against a real-world set of data.

The concept of Distributed-DR addresses one of the key challenges of DR by allowing the distribution of data in the directions it makes sense and the re-aggregation of data where it makes sense, or so the theory goes. All of this sounds good on the whiteboard, but the proof is in the lab and in the real world.

Meanwhile, back in the DataCore AT Lab, we needed to model a company that would be a fair representation of a real-world organization and the virtual infrastructure needed to support it. What does that mean? One of my favorite quips is that it's better to under-promise and over-deliver than the other way around. That said, to say that we may have overbuilt Demo Company, Inc., with 16 servers and 25 desktops for a company of 25 employees, is probably true, but it provides a representative sample of what is common practice in the industry today and allows us to measure the scalability of this solution.

American Airlines Virtually Eliminates Carry-On Baggage

I just learned from American Airlines that they have reduced the size of "allowed" carry-on bags by 30%. According to a company spokesperson, the change was a business decision made by American's management team. The reason for the change was unclear, and when I asked for additional clarification the spokesperson indicated the change was made to reduce the airline's FAA fines for "ground delays."

I asked if the aircraft in American Airlines' fleet had recently been or were planned to be reconfigured based on the change and was told that the change would not affect current or planned aircraft configurations.

So basically, American Airlines reduced the size of the carry-on bag they will allow you to bring onboard and will instead let you pay to check the same size bag you carried on last week. Meanwhile, the amount of available overhead space in their aircraft remains the same.

I've seen companies do a lot of interesting things, but this is just ridiculous. I realize I'm probably a little more sensitive to the issue because I am a frequent flyer (> 100 segments/year). I have a standard-size Tumi "pulley style" suitcase that I have carried on many different airlines, on over 200 flights in the last 3 years.

In the interest of fairness, I wrote a letter of concern to American Airlines regarding this matter, and I eagerly await their response. In the meantime, I have no choice but to board American Airlines flight 697, but I will be cancelling the remainder of my reservations with American Airlines and rebooking with another airline. I called Continental, Southwest, JetBlue and US Airways to verify their carry-on baggage policies, and they all confirmed my suitcase is within the limits to be carried onboard their flights and welcomed my business.


Tuesday, March 10, 2009

Virtualization & Digital Healthcare

There is a lot of talk about the US Government's digital healthcare initiative. With billions of dollars slated for these initiatives, one of the big questions is what role virtualization technologies will play in the future of healthcare systems. I think the obvious answer is a large one, but the devil is in the details.

Vendors in both the healthcare vertical and virtualization need to come together and begin to develop joint solutions that meet the needs of the industry and fall within the scope of the mass of funding that will become available.

The good news is that it goes without saying that high availability is going to be a big part of anything developed for the healthcare market (or I'm not going to be treated there), so it will be an exciting opportunity for some cool architectures.

XChange Solution Provider 2009 Conference

I'm here at the XChange conference talking with solution providers and fellow vendors about technology, the market, new opportunities, and how to survive and grow in tough times. It has been great to hear a variety of business owners talk about what is working for them and what they need to continue to be successful.

I find myself talking about virtualization with a lot of people. There is strong interest in how virtualization can be leveraged in the small and mid-size business markets. All of the major virtualization players (Citrix, Microsoft, Parallels and VMware) have packages and programs designed for the SMB. The key to the success of virtualization in the SMB (IMHO) is the business math. If virtualization solutions continue to cost more than traditional solutions for unrecognized value*, then virtualization will continue to exist primarily in the upper-mid and enterprise markets.

*I consider anything that a business owner does not recognize as a tangible benefit to be unrecognized value.  In my experience, businesses are reluctant (at best) to pay for things that do not give them something they can use to generate profits.

Tuesday, February 3, 2009

DataCore Software Sponsors Parallels Summit 2009

DataCore Software Corporation joined Parallels in sponsoring their 4th annual Summit at the Mandalay Bay Hotel in Las Vegas. 

We're here talking about virtualization centered on Parallels Virtuozzo Containers and their beta bare-metal hypervisor. Parallels has come a long way in the last couple of years in challenging the status quo put forth in the virtualization industry by vendors like Citrix, Microsoft and VMware, and Parallels Virtuozzo Containers buck some of the traditional thinking about how server and desktop (VDI) virtualization should work.

If you are not familiar with how PVC (Parallels Virtuozzo Containers) works, in essence it is a hybrid form of hosted virtualization versus bare-metal virtualization. It is a hybrid inasmuch as in traditional hosted virtualization you install a base OS and the hypervisor runs on top of the base OS, whereas in PVC the containers share the kernel, and optionally application binaries, with the host OS.

While this is significantly different philosophically from how CMV (Citrix, Microsoft and VMware) think virtualization should be approached, I'm not going to discuss the merits of either school of thought in this post; suffice it to say Parallels' strategy is worth considering for several reasons.

All this said, what is DataCore Software doing at Parallels Summit? DataCore Software and Parallels have been business and technology partners for some time, but beyond supporting an Alliance Partner, storage is one of, if not the, key to virtualization. What is the big deal about storage? The bottom line is that everything, virtual or not, has to be stored on disk. When it comes to storage, DataCore Software wrote the book on three key technologies: Thin-Provisioning, Caching and High-Availability. How these technologies affect virtualization is often not understood until after problems or constraints have occurred.
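Since thin-provisioning in particular is frequently misunderstood, here is a minimal sketch of the idea in Python (my own illustration, not DataCore's implementation): present a large virtual disk up front, but consume physical capacity only when blocks are first written.

    class ThinDisk:
        """Minimal sketch of thin-provisioning (my illustration,
        not DataCore's implementation)."""

        BLOCK = 1024 * 1024  # 1 MB blocks, an arbitrary choice for the sketch

        def __init__(self, virtual_size_gb):
            self.virtual_size_gb = virtual_size_gb  # what the host is shown
            self.allocated = {}  # virtual block number -> physical block

        def write(self, virtual_block, data):
            # Physical capacity is consumed on first write, not at creation.
            block = self.allocated.setdefault(virtual_block, bytearray(self.BLOCK))
            block[:len(data)] = data

        @property
        def physical_mb(self):
            return len(self.allocated)  # 1 MB per allocated block

    disk = ThinDisk(virtual_size_gb=2000)  # host sees 2 TB...
    disk.write(0, b"boot sector")
    disk.write(7, b"some file data")
    print(f"Presented: {disk.virtual_size_gb} GB, consumed: {disk.physical_mb} MB")

The host that thinks it owns 2 TB has consumed exactly 2 MB of physical disk; multiply that across a datacenter full of virtual machines and you see why it matters.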

Tuesday, January 6, 2009

VMware Partner Technical Training Event

 
What: Partner Technical Training: VMware Virtual Infrastructure

This 2-day workshop provides the opportunity for a student to integrate the PS Series Storage Array within a VMware Infrastructure 3 environment. This is a hands-on intensive workshop which includes mounting iSCSI volumes from a PS Series Storage Array to house Virtual Machines and Virtual Machine datastores, using snapshots to prepare for backups, and performing a live VMotion of running Virtual Machines. The workshop also provides students with realistic and practical experience as well as an understanding of the requirements for successfully integrating VMware Virtual Infrastructure features using Dell EqualLogic PS Series Storage Arrays.

Format: Instructor-led training. Lecture 20%, Lab 80%.

Prerequisites: PS Series administration experience and/or training; VMware Virtual Infrastructure installation and configuration experience and/or training.

When: Tuesday, January 6, 2009, 8:30 AM to Wednesday, January 7, 2009, 5:00 PM

Where Virtualization and the SMB Meet

Last week DataCore Software Corporation announced the release of several new product offerings targeted at the SMB market. First, let's talk about what's not new: they've kept all the features DataCore's customers love (Caching, HA, Thin-Provisioning, iSCSI, FC, Snapshot, VSS Integration, Remote Replication). Second, what is really new... the size, or "Managed Capacity." The real net-new here is the ability to get their award-winning products in capacities starting at 500 GB. So net-net, all the storage virtualization goodness in smaller chunks! One of my favorite things here is their "Carry Forward Value Protection Plan": I start with what I need now and pay to upgrade as I go, no more throwaway systems or data migration projects.

From my perspective, the biggest news here is that I can go back to all the people I told virtualization was out of reach and say I was wrong, here is the solution to the problem. I think I'm similar to most engineers in that I don't stray too far from previous designs that have worked well in the past. However, a 500 GB HA SAN changes the way I think about things. It's probably appropriate for me to admit a prejudice here: up to this point, I have to say I wouldn't have considered the possibility or need for a SAN that wasn't measured in TB.

I was having a design session with a colleague of mine (happy hour at the pub with napkins and a marker) and the topic of virtualization in the SMB came up. I was quick to lament my frustrations in designing Enterprise-class systems scaled down for the SMB market. The problem isn't designing or implementing them; it's getting them into the acceptable price range of SMB customers. I have been a virtualization aficionado from the earliest days but was spoiled by having worked in the Enterprise market for many years. There is an abundance of cool solutions for the enterprise crowd and, conversely, a dearth of SMB-scaled solutions with Enterprise-class architecture.

Much to the credit of the team at DataCore, they really changed my thinking on how to architect these solutions and keep them in the realm of price reality.