Looks like another serious bug has been found in vSphere 6. At the time of this post, no fix or patch has been released. It doesn’t affect the actual operation of VMs, but it can cause Changed Block Tracking (CBT) to return incorrect changed-sector data. Which means any backup software that relies on CBT (which covers nearly all virtualized backup jobs these days) can end up with backed-up data that isn’t restorable. I’m quite happy with my decision to stay with 5.5. Get it together VMware!
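To make the failure mode concrete, here’s a minimal Python sketch (my own illustration, not VMware’s actual CBT API) of why a backup that trusts an incorrect changed-block map silently produces an unrestorable image:

```python
# Illustrative model only -- not VMware's CBT API. It shows why an
# incremental backup that trusts a wrong changed-block map is corrupt.

def incremental_backup(previous_backup, current_disk, changed_blocks):
    """Copy only the blocks the (possibly wrong) CBT map says changed."""
    backup = dict(previous_backup)
    for block in changed_blocks:
        backup[block] = current_disk[block]
    return backup

# A tiny 4-block "disk": a full backup was taken, then blocks 1 and 3 changed.
full_backup = {0: "A", 1: "B", 2: "C", 3: "D"}
disk_now    = {0: "A", 1: "X", 2: "C", 3: "Y"}

# Correct CBT map -> the restored image matches the live disk.
good = incremental_backup(full_backup, disk_now, changed_blocks={1, 3})
assert good == disk_now

# Buggy CBT map that misses block 3 (the vSphere 6 failure mode) -> the
# backup job reports success, but a restore hands back stale data.
bad = incremental_backup(full_backup, disk_now, changed_blocks={1})
assert bad != disk_now
print("restored block 3:", bad[3])  # stale "D", not the real "Y"
```

The nasty part is the last case: nothing errors out at backup time, so you only discover the corruption when you attempt a restore.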
I ran into this just the other day with a new VCSA 5.5 Update 3A appliance. No idea why it happened; I’ve installed the appliance a few times now and didn’t do anything differently this time around.
Either way, regenerating the certs under the “Admin” tab and rebooting the appliance fixed the issue. Just be sure to uncheck that box once the VM boots back up.
Dell Lifecycle Manager… it does a few different things, but the most important thing I use it for? A nice one-stop shop for updating all of the various Dell firmware in one go. It’s a lot faster than the silly ISOs that take an hour or two to run and contain every firmware for every model from the past billion years or so. And Lifecycle Manager should be a lot better than manually tracking down the firmware for your OS of choice and installing it by hand. Unfortunately for me (and any other fellow admins), it’s a massive pain in the butt to deal with. Sometimes it just doesn’t work for mysterious reasons. Sometimes it throws scary error messages in the middle of updating firmware on a spinning SAS disk. And sometimes it downloads and installs firmware just fine, only for you to find out the new firmware is buggy and downgrading is not an option (luckily, in that case Dell was already working on the next version… and it was less buggy the second time around).
It’s been a long time since I last used Lifecycle Manager. Virtualization has done away with a lot of the fiddly stuff I used to deal with back when everything ran on bare metal. But here I was, with a fairly fast R910 that was getting flattened and repurposed. Seemed like a great time to update all the firmware bits. So I loaded up Lifecycle Manager, assigned an IP address, and clicked the “Test” button. It failed to ping itself, it failed to ping the DNS server, and it failed to ping the gateway. Yet it was able to resolve ftp.dell.com via that same DNS setting, and it downloaded new firmware just fine. I have no idea what the error messages were for. There’s zero reason it wouldn’t have been able to ping its own gateway (or any of the other items). But hey, all this from a hardware company widely known for their less than stellar track record with firmware and drivers. I guess I was expecting too much, right?
OK, so that’s awesome. It fails to connect to much of anything but can still access the ‘Net and download updates. Whatever.
Partway through installing the firmware updates, I got this jewel:
I believe that error indicates I had a corrupt download (or someone at Dell fat fingered something in the system that says update X is for component X and it’s really for component 42, but whatever). But great, there’s nothing like failed firmware updates to give you warm and fuzzy feelings. At the end of the day, the system rebooted back into Lifecycle Manager, re-downloaded the failed update, and successfully installed it. But dammit, it shouldn’t be such a bag of crap to do something “as simple” as apply a few firmware updates. Get it together Dell, you need to be setting a better example now that you’re a parent company.
Note: No hardware was harmed in the writing of this article 🙂
Looks like Dell is buying EMC for $67 Billion, and that doesn’t include VMware. At least not outright: Dell will become the majority stakeholder of VMware but not an outright owner, and VMware will remain an independent company after the Dell/EMC merger.
EMC did a pretty good job of being hands off with VMware. In a way, that was the only option for them. If EMC had created new features for their storage platform that only worked with vSphere, or added features to the vSphere hypervisor that only worked with their hardware, the tech industry would have rightfully screamed foul play. Early reports indicate that Dell will keep their majority stake at arm’s length. That really is the best thing they could do in this particular situation.
$67 Billion is a LOT of money. I can’t speak to EMC’s storage hardware other than to say that several years ago it was a major pain in the butt to get a quote for a new SAN. After we finally got it, the gear was very expensive and very complex to set up and maintain. However, by and large EMC’s gear is well regarded, at least in the “traditional” enterprise storage market. I also can’t speak to Dell’s Compellent hardware. But as a former EqualLogic customer and storage administrator, Holy Crap is that one platform that Dell needed to replace. I expect the EqualLogic hardware to be phased out and replaced with EMC gear. We may even see the EqualLogic name fade away; in some circles that name equals sub-par hardware and really buggy firmware. It won’t be missed by this admin.
If the price for Dell to get back in the storage game (and get a nice shot in the arm with EMC’s other properties) is $67 Billion, so be it. I hope it works out for them. Knowing a good match for your company when you see it AND being able to purchase it outright is a rare thing at this level of the game. That said, if I had $67 Billion to spend to revitalize my enterprise computing company, I’d have bought Nutanix or Pure. Or both. $67 Billion is a LOT of cash.
Looks like there is a patch out for the nasty snapshot bug introduced in 5.5 Update 3. VMware’s site seems to be having issues; apparently there is quite a bit of demand for the patch.
There is a patch for anyone already on Update 3, and the Update 3 bundle has been pulled and replaced with Update 3a, which includes the fix.
More details can be found here, since VMware’s site is currently borked.
There is a time and a place to run the newest version of something, but rarely is that place on mission critical production hardware. I am very glad I held off on Update 3, though not upgrading posed its own issues (mostly in the form of known security issues in 5.5 pre-update 3). Being forced to choose between a known buggy crash-y mess or a known insecure platform was not much fun.
Watch out for this one, it looks nasty!
A week or two since VMworld, 6.0 Update 1 and 5.5 Update 3 have been released. I tend to err on the side of caution when it comes to our production cluster, so I’ll be upgrading to 5.5 Update 3 vs the 6.x track… at least for now. As a matter of fact, I’m not pushing out Update 3 until it’s had at least a week or so in the wild with no reported issues. Quite some time ago I held off on one of the 5.5 updates (5.5 Update 1, maybe?) that had a nasty NFS datastore bug. We use Nutanix, so our datastores are NFS. I saved myself a lot of trouble taking the cautious route. We aren’t affected by any bugs that are resolved in Update 3, so better to wait and let someone else be the guinea pig.
Release notes can be found here.
Sitting in the airport waiting to be called for the first flight of the day on my way to SFO. This is my first VMworld and first time in San Francisco. Looking forward to both! Should be a great conference and some much needed content for the blog!