Meetup.com abandons API

Abandoned may be a strong word, and I’ll explain why, but the changes announced on 14th June 2019, combined with their lack of interest in discussing those changes, virtually make it the case:

The first problem with these changes is that although you can create a new OAuth key for free before August 15th 2019, actually using it is not that simple:

  1. The server flow and implicit methods require you to be logged into the meetup.com site before you can complete the OAuth exchange; this means a form submission, tracking a cookie and sometimes also a Google reCAPTCHA response. For any automated scripting (remember, this is what APIs are for) it is practically unworkable. As an aside, it’s quite puzzling to hide the OAuth authentication flow behind a login; letting you delegate access without handling site credentials is rather the point of OAuth.
  2. The server flow with user credentials method doesn’t require a form login but is only permitted if you are paying $30/month for their Pro service.

The second problem is that all future access requires OAuth 2.0 and API v3, with all previous versions being discontinued. Meetup.com don’t appear to provide any client libraries of their own (at least for PHP and Python), instead pointing to a list of third-party ones which, as of 2019, is out of date and lists libraries ranging from old to very old.

The only two Python libraries are marked “discontinued” and “maintenance”. Most of the PHP libraries don’t support API v3, and the only one that does supports only OAuth 1.0.
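
Since there is no maintained client library that speaks API v3 with OAuth 2.0, the practical fallback is to call the API directly over HTTPS with a bearer token. Below is a minimal Python sketch using requests; the endpoint path, field names and group urlname are illustrative assumptions rather than anything lifted from Meetup’s documentation, and the token is whatever you manage to obtain from the OAuth flow above.

```python
import requests

API_BASE = "https://api.meetup.com"          # API v3 base URL
ACCESS_TOKEN = "your-oauth2-access-token"    # whatever token the OAuth flow gave you

def get_upcoming_events(group_urlname):
    """Fetch upcoming events for a group, authenticating with an OAuth 2.0 bearer token."""
    resp = requests.get(
        f"{API_BASE}/{group_urlname}/events",       # endpoint path is an assumption
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        params={"status": "upcoming"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    # "example-group" is a hypothetical group urlname
    for event in get_upcoming_events("example-group"):
        print(event.get("name"), event.get("local_date"))
```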

This all seems like a bit of an oversight, so I contacted support with a rather long email pointing out some of these issues, only to be met with the following response:

Thanks for asking about Meetup’s API. If you have questions about using the API, please use our API documentation. Beyond that documentation, we don’t provide support for using the API.

Warmly,

XXXXX
Senior Community Support Specialist

Somewhat dumbfounded by the complete lack of interest in response to my long query, I wrote back, this time getting a response from a manager:

I’m YYYYY, a support manager at Meetup HQ. While we don’t offer support for use of the API, anyone who registers for OAuth before August 15th will not be charged.  If you have technical questions, please use the our API documentation.

Warmly,
YYYYY
Support Manager

An alternative attempt at engaging them through twitter resulted in a slightly different but equally ineffective response:

So despite writing a long query regarding a paid-for service (albeit not their new Pro account), it appears as though Meetup.com are completely uninterested in either providing support for their API or acknowledging that many people will no longer have any way of accessing it.

It’s hard to understand what is going through their minds, although one can only presume that since the acquisition of Meetup.com by WeWork for $200m they have decided the API is no longer an important part of their platform. Yet, bizarrely, they also seem to think they can shift people to paying $30/month for it whilst providing no support and no client libraries.

It’s a strange change in policy and approach, and it will only lead to a decline in use of their API at a time when you would think they would want to grow their platform.

Twitter security is very lax

Twitter don’t seem to take account security very seriously.

For over six months I’ve been trying to enable two-factor authentication on my account, and I want to use the Google Authenticator app rather than SMS codes, which are insecure.

However, for no logical reason, twitter require you to add and verify a mobile number on your account before you can enable an authenticator app.

And here’s the stumbling block: in my case twitter refuses to send an SMS code to my mobile number. My number is in one of the early mobile number blocks, which makes this rather puzzling given that the range is over 20 years old.

Despite three or four attempts to engage with @TwitterSupport, I’ve been met with nothing but a deafening silence.

So you have to wonder, do Twitter actually take account security seriously at all?

Addendum: November 2019

I discovered by chance, through periodically retrying to enable 2FA, that Twitter have since changed their systems: you no longer have to supply a working mobile number before you can enable Google Authenticator.

A second reason to be happy about this is the subsequent discovery of a Twitter hack that leaked some users’ phone numbers; I am glad mine was never saved to my account.

https://thehackernews.com/2020/02/find-twitter-phone-number.html

letsencrypt hell

I simultaneously both love letsencrypt and also hate it.

Maybe hate is the wrong word; the project’s goals are fantastic.

What I hate is the official software – certbot.

Firstly, the way it installs is horrible: under some weird directory structure in your home directory. Who in the world manages their software like this?  What happened to /usr/local or /opt?

Then there’s actually using the software itself: the command-line syntax is clunky and hard to work out, and the documentation is confusing.

Finally, we come to wildcard certificates.

certbot, by its very descriptive name, is a certificate robot, an automated tool for generating certificates.  So what did they do for wildcard certificates?

You have to use some clunky command-line option to specify an alternative server, and then you have to use a manual authentication technique using DNS records.  It gets better: if you want to secure both ‘*.xciv.org’ and ‘xciv.org’ in one certificate, you get prompted for two different keys that both sit on the same DNS record name.  Cue…

1. Run script
2. Edit first DNS record key
3. Reload DNS server
4. Edit second DNS record key
5. Reload DNS server

Every time you renew, you have to do all this again – yes, despite certbot being the automated certificate tool, there is no way to automate renewals when using its manual DNS authentication – which is mandatory for wildcard certificates.
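
One small mitigation: before telling certbot to continue, you can at least confirm that both challenge values have actually been published before it tries to validate. A quick sketch using dnspython; this is just my own helper for checking propagation, not anything certbot provides, and the domain is only an example.

```python
import dns.resolver  # pip install dnspython (2.x; older versions use dns.resolver.query)

def challenge_values(domain):
    """Return the TXT values currently published on the ACME challenge name."""
    name = f"_acme-challenge.{domain}"
    answers = dns.resolver.resolve(name, "TXT")
    # A TXT record may be split into several character-strings; join each one.
    return {b"".join(rdata.strings).decode() for rdata in answers}

if __name__ == "__main__":
    values = challenge_values("xciv.org")
    print(f"{len(values)} challenge value(s) visible:")
    for value in sorted(values):
        print("  ", value)
```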

Now, I can forgive the requirement for DNS authentication for wildcard certificates if that has been deemed the safest method; SSL is all about improving security, after all.

What I can’t forgive is this god damn awful piece of software design that passes for the official letsencrypt tool.

Addendum

Since late 2018, after some recommendations, I have switched to the dehydrated client, and this is by far a better solution.
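
What makes dehydrated workable for DNS-01 is its hook mechanism: it calls a script you supply with the challenge details, so the TXT record juggling can finally be scripted. A rough Python sketch of such a hook follows; dehydrated passes the operation name and arguments on the command line, while the push/remove helpers here stand in for whatever DNS API or nsupdate mechanism your provider offers, so treat them as placeholders rather than real calls.

```python
#!/usr/bin/env python3
"""Hook for dehydrated's dns-01 challenge.

dehydrated invokes the hook roughly as:
    hook.py deploy_challenge <domain> <token_filename> <token_value>
    hook.py clean_challenge  <domain> <token_filename> <token_value>
(plus other operations we can ignore here).
"""
import sys

def push_txt_record(name, value):
    # Hypothetical: publish a TXT record via your DNS provider's API or nsupdate.
    print(f"would publish TXT {name} -> {value}")

def remove_txt_record(name, value):
    # Hypothetical: remove the TXT record again after validation.
    print(f"would remove TXT {name} -> {value}")

def main():
    operation, args = sys.argv[1], sys.argv[2:]
    if operation == "deploy_challenge":
        domain, _token_file, token_value = args[:3]
        push_txt_record(f"_acme-challenge.{domain}", token_value)
    elif operation == "clean_challenge":
        domain, _token_file, token_value = args[:3]
        remove_txt_record(f"_acme-challenge.{domain}", token_value)
    # Other operations (deploy_cert, unchanged_cert, ...) need no action here.

if __name__ == "__main__":
    main()
```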

Moleskine online order #fail

I placed an order on November 30th.

And waited.

Nothing arrived.

I knew it was an international site and it was approaching Christmas so a busy time.

So I waited some more.

Eventually I logged in to the Moleskine site to find order status ‘Being processed’.

On the 11th of December, with the status not having changed, nor having received any communication, I contacted customer support.

They tell me that one of the items is not in stock and can’t be fulfilled but could be replaced with an alternative, and that either way they can no longer guarantee Christmas delivery.

Umm?

A few thoughts spring to mind.

  1. Why did they let me submit an order for an item they couldn’t supply? It is their own product, so it’s not unfeasible for them to know what they can supply, even if there is a delay before stock is available.
  2. Why did they not contact me to let me know, rather than me having to contact them to find out what was happening?
  3. When would my order have been fulfilled, if at all (given the date, it’s reasonable to guess this was a Christmas order)?
  4. Why, when you are effectively placing business in the palm of their hand, are they so inept as to just throw it away?

I cancelled the whole order and will revert to a shop – I initially avoided this because it will be a central London trip during the Christmas period and will incur ~£10 in travel.

It’s 2017 and people still don’t get the basics of online commerce.


The problem with Facebook

Facebook is a love/hate dilemma

On one hand, Facebook is very useful for keeping in touch with friends and for finding news (via them) that I probably wouldn’t otherwise find.

But there are some things about Facebook, or that Facebook do, that I really really struggle with.

  1. It’s a time vampire – this is 100% my own fault; it’s easy to get sucked in, which I suppose proves how good the platform is at being ‘sticky’.
  2. Two-faced morals – on one hand Mark Zuckerberg is always posting about the good that FB does, “connecting the world”, his free internet initiatives and how he and his wife donate to their local hospital.
    But this moral approach doesn’t stack up when you see FB continually exploiting UK taxes for their own benefit.  They have now racked up a £10m tax credit, having previously paid just £4,000 in corporation tax.
    https://www.theguardian.com/business/2016/oct/09/facebook-uk-ends-up-11m-in-tax-credit-despite-global-profits-of-5bn
    You can’t have it both ways, either you have a moralistic ‘doing good’ stance or you don’t.
  3. They pay minimum wage to content moderators – moderators who have to watch some pretty horrific stuff on behalf of the rest of us – the burnout from doing this ‘work’ deserves a lot more than minimum wage.
    https://www.wired.com/2014/10/content-moderation/
  4. a) FB board member Peter Thiel donated $1.25m to Donald Trump.
    https://www.theguardian.com/technology/2016/oct/18/donald-trump-peter-thiel-support-facebook
    b) Oculus [owned by FB] founder Palmer Luckey was involved in helping create lots of anti-Hillary media content during the election.
    https://www.theguardian.com/technology/2016/sep/23/oculus-palmer-luckey-funding-trump-reddit-trolls
  5. FB allows advertising targeted by ethnicity, and the Trump campaign used this during the election.
    http://www.theverge.com/2016/10/27/13434246/donald-trump-targeted-dark-facebook-ads-black-voters
  6. Fake news – despite FB dismissing it as just 1%, there is growing evidence that the amount of fake news (pro-Trump, anti-Hillary) during the election may have been substantial – I’m sure this will be fixed in time, but it’s nonetheless a pretty dire situation.

I’m seriously considering leaving the platform. Add in that FB pages are now pretty useless (with the reach throttling) and the negatives are beginning to heavily outweigh the positives.


Apple have lost their way

New Macbook Pro

I’ve been patiently awaiting the Macbook Pro refresh to replace a dying 2013 MBP (laptops don’t like coffee, who knew?).

This wait was somewhat of a let down when I saw the rumours and leaks.  I hoped they were wrong.  They weren’t and it was worse than I expected.

Apple have decided, in the 2016 MBP refresh, to do away with all ports except for four USB-C ports and a headphone socket.

Gone are thunderbolt2, traditional USB2/3, HDMI output, SD card slot and the Magsafe power connector.

Like many people, I find change hard but I also accept that things do need to change or you can never make progress and move forward.  USB-C does indeed have some nice benefits in terms of throughput and functionality.

For my usage I mostly need the ethernet port (occasionally), the SD card slot for photography and USB2/3 ports for thumb drives, time machine backups and Lightroom camera tethering.

So what are my options to continue with this functionality?

I already have an Apple thunderbolt2 ethernet adapter but this is now physically incompatible.  There is an Apple TB2 to USB-C adapter if I want to stump up a hefty £50 but it’s actually cheaper to discard(!) my existing adapter and buy a new USB-C adapter for £30 or so.

A pair of Apple USB2/3 to USB-C adapters are £40 or a USB-C hub is around £30.

A USB-C SD card reader is another £20 or so.

So, to get me back to where I was, I’m looking at somewhere between £50 and £100 in adapters, dongles or hubs.

For me this is not too bad I guess, but what if I was one of those people who wanted HDMI output, the Apple adapter to provide HDMI output is £70.

What about if I had an expensive TB2 storage device, or I had some legacy Firewire device, I’d still be looking at £50 for that TB2 adapter.

Ideally what I want is an all in one hub and SD card reader, perhaps with ethernet too, but that just doesn’t seem to exist as USB-C is still so new.

The new style of all-in-one docks is quite a good solution if you take your laptop to an office every day and want to plug in one cable to connect all of this, but they are again a bit pricey, bulky for mobile use, often want their own PSU, and I’m not sure there is one yet that offers all of this including an SD card reader.

Now, a web search will turn up plenty of USB-C kit, but as I have increasingly found, a lot of the non-brand stuff is just too often a load of cheap crap and seemingly even has the potential to kill your gear.

http://gizmodo.com/cheap-usb-c-cables-could-kill-your-phone-or-laptop-1757115350

As personal examples: the last time I bought a cheap USB2 hub off eBay it lasted a few months before it broke, randomly disconnecting devices whenever some I/O happened on another port.  Some time back I bought a no-brand USB2 to IDE+SATA adapter for quickly connecting up hard drives to extract data, and it worked well.  Until, that is, the time I needed to extract a large amount of data; it was slow, very very s-l-o-w, so much so that I eventually had to give up and plug the drive into an actual IDE interface.

So the branded stuff really is essential if you want it to actually work and work well, never mind emitting magic smoke from your new laptop.

Update: Apple say that this is the best selling MBP ever.  At the same time there has been so much negative reception to the whole dongle issue that they have temporarily reduced the price on all their adapters!


What is going on at Apple?

Apple has always been about aesthetics, usability and user experience.  Or to put it another way, Apple products have been nice to look at, nice to touch, intuitive to use and functional.  These were all a big part of the influence of Steve Jobs.

Since Steve Jobs passed away Apple has gradually lost sight of these core values, sometimes in small ways and sometimes in bigger ways.

You could argue that the sticking-out camera bulge on the iPhone 6/6+ and 7/7+ is merely cosmetic, but Steve Jobs would have been a stickler for that.  As for the Apple add-on battery pack case, I think he would have picked it up and thrown it at whoever came up with the idea.  Not only does it look like the most horribly undesigned product of the new Apple era, it’s also a messaging problem: it admits that mobile battery stamina is really not good enough yet.  That would never have seen the light of day under Jobs.

The new iOS7 redesign is nice and up to date, but there are some aspects of it that are really unintuitive, and you can see hacks that have been added in, such as the small ‘< Back to [app]’ that appears in the menu bar.  Again, some of this wouldn’t have happened under the attention to detail of Jobs.

Back to the MBP: it’s an advance, with a new CPU, RAM and SSD, and the Touch Bar is actually a really nice innovation.  But this is where Apple have actually gone wrong: they are now so focused on innovation (after too much harsh critique about a lack of innovation) that they have completely forgotten about user experience and functionality.  They’ve advanced so far that they’ve pushed USB-C through before the market and user base are ready.

The new MBP requires me to buy a whole load of add-on adapters to do the basic work I want to do.  That’s right: the first thing I need to do to use my new Apple product is buy more product, some of it even third party.  This is totally, utterly brain-dead and not the Apple way.

This is bad not just from a cost perspective but, more importantly, from a user experience point of view: it means that when you sit down with your laptop the first thing you need to do is reach into your bag and pull out some more gear, and that’s providing you’ve remembered to put it in your bag.

Not only that, but the aesthetics are ugly, with lots of dongles and hubs sprawling all over your desk (remember why the iMac is an all-in-one desktop? To eliminate the rat’s nest of cables and mess…).

As if to compound this lack of consideration for functionality, Apple don’t even have new adapters yet for some things.  It’s been 18 months since the 12″ Macbook was released with a sole USB-C port, yet there still isn’t a new Apple ethernet adapter with a USB-C connector, only third-party offerings; it’s almost as if they don’t care.

At the keynote Apple said they had made the new MBP thinner and with a smaller footprint, totally missing the point that this is why the Macbook Air was created.

The Pro is meant to provide a range of functionality for professionals (hence the name), such as being able to pop an SD card straight in or plug an HDMI screen right in.  Apple have gone for thinner and smaller instead of retaining the SD card slot and the couple of USB2/3 ports that professionals need.

We’ve also gone backwards with the loss of the Magsafe connector, a really great innovation that has saved many, many laptops from being destroyed by a tripped power cable.

Lastly, and this is the most visible representation of the loss of vision at Apple: if you buy the latest Apple iPhone handset, out of the box you can’t plug it into the latest Apple Macbook Pro, because there are no USB2/3 ports.

The Brexit effect

One last niggle, the ‘worse than I expected’ bit, though I can’t blame Apple for this one: the new MBPs have all suffered a hefty price rise.  Brexit has knocked the exchange rate quite hard and Apple have had to adjust all their pricing.  The issue is then compounded further when you have to buy a load of expensive adapters on top.

Conclusion

I’ve had discussions about this in my friend circles on Facebook, and many photographers were very dismayed with the new model, for all the reasons I have cited.  I think the new MBP will lose sales from anyone who wants a mobile laptop with built-in connectivity.

Apple really need to hire a new Jobs, not a new CEO but a new stickler, a new pain in the arse, a new user experience and usability authoritarian who goes around making sure that their products don’t ship like this.  Sadly, I can’t see this happening and I see Apple slipping further and further away from where they once were.

Updating a Dell server BIOS – or – a (very) long tale of woe

Introduction

You can read what follows as a rant, or you can read it as helpful information to save other people from wasted time and effort.

My goal here was to do a quick update of the BIOS on a Dell R210 II server.  Quick, however, it was anything but.

I wanted to do this update remotely as it would be quicker than a round trip to the data centre to do probably 20mins work, or so I thought.

Modern Dell servers incorporate iDRAC, a web-based console with power control and virtual media – these features are only in the enterprise iDRAC but well worth the extra investment IMO.  They have some clever stuff in the way of virtual media: you can attach a local floppy or ISO image on your laptop and present it as if it were a local drive on the server, or you can attach an ISO file over CIFS/SMB/NFS.

A word of note: if you are going to try and attach a large ISO file over broadband with a slow upload, it will likely not work; it’s just too much data, and in my experience the server can’t usually cope with the lag of waiting for the data to come across.  Small files work fine, and larger files work fine if you’re on ethernet local to the server or, I presume, have a good broadband upload speed.

With the 11th generation (11G) servers you have roughly three methods of performing BIOS updates.  I ended up working through all of them in the following order: update via the OS, update via a DOS image, update through the Lifecycle controller, and finally I succeeded with an update via a DOS image.

OS Update – Linux

The first method of BIOS update is to use an operating system utility on either Windows or Linux.  In this case the server is running Debian 6.0/squeeze, so I downloaded the updater binary.  The only Linux updater Dell offer says it is for Redhat; with some reservation I executed the binary and was presented with a screen full of ‘set’ errors.  Thanks Dell, really helpful.

I know why it IS this way, Redhat is the commercially supported operating system for Dell.

What I don’t understand is why it HAS to be this way.

Most modern Linux distributions have, IMO, standardised on libc, bash etc. to a degree, so I don’t think it would be too hard to build an updater that would work on, say, Redhat X or newer, Debian X or newer, Ubuntu X or newer.  Having a binary that depends on Redhat when it doesn’t need to, from a technical point of view, is quite restrictive.  Thanks Dell, really helpful.

So we have to move on to another method…

Repository Manager

I promised myself I would never use Repository Manager again after the last time I had to do battle with it, yet here we are.  RM is a utility to build SUU packages that can be updated through the Lifecycle controller, more on this later.

The first time I used this was a while ago and the particular install was on a Vista VM, so being slightly au fait with it I thought I would just install the latest version, and as laborious as the process is, build the SUU.

Firstly you need to install .NET 4, and by install I mean the whole package; you can’t just use the client package from Windows Update, so you’ll need to google and download the full installer.  Once this was done I installed RM 1.5 and then the fun began.

To understand how RM works, a few things to note: Dell maintain a master catalog on their FTP server, which is like an index of all their hardware products and the corresponding drivers and firmware updates.  You can build your own local copy (repository) from this by selecting the subset of hardware you have (e.g. R200 and R210 servers), and it will download the applicable updates into your local repository.  I would suggest selecting as few hardware profiles as possible unless you have lots of time and lots of bandwidth.

You then select what components you want to update and build an SUU from it.

Now here’s the thing: to build an SUU you need to install the SUU plug-in.  And I say plug-in like it’s some snappy little thing that plugs right in; in reality I’m talking about approximately 300 MB of cab file.  Yes, really.

My problem was that however I installed 1.5 and the latest 720 cab file, the cab file would NEVER actually install, and RM would not cache the download by itself.  So yes, on each attempt it would re-download that 300 MB file again, and again, until I used the option to download and store it and then load it from the local directory.

This installation was on XP, because that’s all I had in the way of a VM.  The problem, as far as I can see, seemed to be something to do with certificate validation; Dell sign part of these updates/catalog, presumably to stop any nefarious updates getting in.  Only this appeared to be broken in some way, because it kept popping up saying that action was required and I needed to approve the certificate, except the certificate window was entirely empty.  I think the problem might have been XP, but I don’t know, as google didn’t find anyone else experiencing this problem.

More google research suggested that in the past they have had problems with a revoked certificate in the chain somewhere and the suggestion was to disable revocation checks in IE (yes because IE is so tied into the core OS).  This however did not seem to solve the problem for me.

In the end I solved this by installing RM 1.4 and a 651 cab file.  It still seemed very finicky, occasionally saying the plugin was not installed and needing it reinstalled.  I’m not sure whether it was disabling the certificate revocation check, using an older RM or using an older SUU cab that fixed the problem, but by now I was tearing my hair out so I didn’t much care.

Great!  So, to build an SUU for the Lifecycle controller you select the latest bundle of updates for Windows.  Yes, this is slightly odd: the Lifecycle controller is not Windows and there is nothing to indicate what you actually have to do, but take it from me, a Windows bundle is what you need; in fact a Lifecycle controller screen says it will only handle Windows bundles in the latest releases.  Some time passes and we are presented with a 1.2 GB data set to update our BIOS.

Wuh huh.

Okay okay I don’t really need to update every bit of firmware, nor all the Windows drivers since we’re on a Debian system.

So I build a custom bundle containing JUST the BIOS update.  We’re now presented with a 400 MB data set.

What. The. Fuck.  400 MB of data to load an 11 MB BIOS data file?  Seriously.  Are you kidding?

This wouldn’t represent a problem, I suppose, except that I am trying to do this update remotely – you know, that thing modern IT systems are designed for, so that you can do stuff without having to go on site to the data centre.

Aside, I had a poke around in this data and found the culprit.. there is a java directory using most of that space.  Uh.

Amusingly, during the course of trying to work through all this I downloaded a PDF called “Dell Repository Manager Tutorial” and it was subtitled with the following:

“A simple and Efficient Way to Manage the Dell Update files”

Clearly written by someone who has never actually used this product.  Or they were making a sneaky joke.

To be honest this product would be best renamed Suppository Manager because by the time you’ve finished using it you’ll feel like someone has bent you over and…

Anyhow, the good news is that now we have our SUU we can crack on and get the updates installed..

Lifecycle controller / Dell Unified Server Configurator (USC)

Like that klunky name do you?  This is a hint of things to come.

Now, I don’t know why, but this part of the system is generally referred to as the Lifecycle controller; however it also pops up with the title Unified Server Configurator (is configurator actually even a word?) and seems to be referenced as such in various places.  I’m going to take a guess that marketing got hold of this product late in the game, decided that Unified Server Configurator wasn’t snappy enough and didn’t really sell its worth, and so it got renamed.  I’ll refer to it as USC for the rest of this post.

During boot you hit F10 for system services and up pops (eventually, it’s s-l-o-w) USC.

From here you can do lots of things but we want platform update, and then select our source media.

I should point out that I had to make the trip to the data centre, because remember how big that update is?  400 MB for the BIOS only SUU and 1.2 GB for the full SUU.

So we kick off the process and we are presented with “Catalog file not authenticated correctly.  Do you want to proceed?” Oh well that sounds kind of worrying but we’ll select Yes and continue..

More time passes..

“The updates you are trying to apply are not Dell-authorized updates.”

Greeeeeeat.  Is this something to do with those certificate issues?  Who knows really, it’s all a bit vague isn’t it?  Thanks Dell, really helpful.

Google provides the answer: older versions of USC will not apply updates.  Um, why?  This is what USC is for, surely?  You know, the new modern system for applying all your updates, except when it won’t.

Thankfully Dell provide a way of easily updating the USC firmware: you can do it through iDRAC as if you were updating the iDRAC firmware.

You simply need to download the Lifecycle Controller Repair Package.

Hmm, repair package you say, for a firmware?  Curious.  This is actually another hint of things to come.

Once this is done we can repeat the procedure.  One oddity I note is that the USC now presents the list of components to update and, while BIOS is ticked by default, it is also greyed out, with no indication of what on earth this might mean.  With some trepidation I attempt to proceed and it lets me… hurrah, the BIOS starts updating… we’re almost there now!

And up pops a message, “bioswrapper.efl – return code mismatch” or something.  I say “or something” because you’d want a pen and paper handy, or your fingers near the screen-capture option, because it is accompanied by a 10-second countdown before the system reboots.

Yeah because no-one would need to make a note of this pretty critical sounding error message would they?  Thanks Dell, really helpful.

Gah.  That’s it, I’m done with USC for this now.

Before I move on, I’ll throw in some other USC observations.

Firstly, USC seems to maintain some “state” about what you’ve been doing; at least, you can have rebooted and the server will pop straight back into USC without you having hit F10.  Why, I don’t know, but I don’t like it: *I* should be deciding when that happens or not.

I also got the following reassuring screen upon attempting to enter USC: “MAS001 partition not found! Unable to launch System Services image. System halted!”  I power cycled and have not seen this screen since.  Good to know that this flakey thing is not doing anything mission critical like, I don’t know, fiddling with my device firmware?

Remember when I was talking about the USC repair package?  I quote:

“The Unified Server Configurator (USC) repair package restores embedded tools and utilities in the event of a hardware failure or flash memory corruption. Such failures can occur for many reasons, including power loss or interruption of an update process.”

How many other firmware-based tools have you come across that specifically need a repair package?  The above makes no sense: firmware doesn’t get corrupted unless you are updating it and interrupt the process.  My assertion is that USC is flakey and gets its knickers in a knot often enough that it needs a dedicated, generally available repair option.  Inspiring, not.

I’m taking a guess that USC is written in Java, considering how slow it is, the general visual look of it, and the humungous number of Java files in the SUU package.  Perhaps, maybe…

One final note: since actually completing the BIOS update (more on that later) the USC will no longer read my existing SUU sources.  I tried a CD-ROM copy and also the ISO mounted via RFS – the image that had previously been working fine.  I have no idea why; I might try to re-upload the USC repair image, I guess?  Sigh.

“There was an internal error while trying to retrieve information on the updates in your system. Perform an A/C power cycle and retry the operation.”

The USC troubleshooting resource doesn’t even list this error:
http://support.euro.dell.com/support/edocs/software/smusc/smlc/lc_1_5/usclce/en/ug/html/faq.htm

All in all I’m left feeling that USC or LCC or whatever it’s called is.. klunky, slow, unpredictable and scary.

Let’s not forget this is a production system, and I’m having to take it offline for every attempt to complete this update; it’s just not up to par.

I’m sure it must work for some people, and I know that if you have a large MS deployment it ties into Openmanage and vCenter or something and is possibly quite useful.

From my experiences I hope that the life of Lifecycle controller is quite short.  Alas I fear that Dell have committed to this horrible thing for the medium term.

DOS

Performing a BIOS update from a DOS image was actually my second choice, before doing battle with RM and USC, but I just could not get a DOS image to load remotely.  Dell actually supply a utility for creating images; unfortunately I can’t remember what it is called.

It will let you create a floppy image or a hard disk image.  I looked at the 11 MB standalone DOS BIOS updater, knew that a floppy image would be no good, and so elected to create a hard disk image.

I was presented with a 4 MB image file.  Thanks Dell, really helpful.

In the end I found a Dell DOS image somewhere and managed to get the BIOS update file into it and onto a memory stick, but it did not work; in fact the updater just hung.  Not knowing what “DOS” it was based on, I reverted to a standard (Windows ME-based) DOS image on a USB stick and finally(!!!) the BIOS updated.

In the process of doing this I had a quick look at the readme from the DOS updater package to see how you invoke the updater, if it needs any command line flags, what you might expect to see on screen or for it to ask you, etc.

Needless to say it does not really give you any information in this regard.  Thanks Dell, really helpful.

It was at this point that something raised an eyebrow:

“Note 2: In order to flash to BIOS version 2.2.3 or newer using USC, you must first flash to BIOS version 2.1.2,  if you are on an older version of BIOS. Other BIOS update methods do not have this prerequisite.”

In case the penny hasn’t dropped, I’ll spell it out.

All my efforts in using RM and USC to update the BIOS have been:  A. Complete. Fucking. Waste. Of. Time.

Given that I was on BIOS 1.3.1 it was technically IMPOSSIBLE to ever have updated to BIOS 2.2.3 through USC.  EVER.

You know USC, that new modern system that is the way forward for managing your updates?  Yeah.

Thanks Dell, really helpful.

I made a fundamental schoolboy error here: I just trusted that I could install the latest BIOS update via USC and hadn’t read the BIOS release notes.  Well, actually I had, but I must’ve read them earlier and somehow not picked up on this critical nugget of information.

You remember that BIOS component that was greyed out in USC before it failed to actually update the BIOS?  I guess now we know why it was greyed out: USC obviously knew there was no way you could apply it.  So why not warn you of this, or prevent you?  Thanks Dell, really helpful.

Oh, and by the by, the Dell site (at least the route where you enter your asset tag and have it offer you all the relevant BIOS and drivers for that hardware) never did offer me BIOS 2.1.2 at all anyway, so I don’t even know where you would get that from.  Thanks Dell, really helpful.
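
The one check I would script next time, before choosing any update path, is comparing the running BIOS version against a prerequisite like that 2.1.2 note.  A throwaway sketch of my own (nothing Dell supplies), reading the version with dmidecode on Linux:

```python
import subprocess

REQUIRED_INTERMEDIATE = (2, 1, 2)  # per the release note quoted above, for 2.2.3+ via USC

def current_bios_version():
    """Read the running BIOS version via dmidecode (needs root); assumes a dotted numeric version."""
    out = subprocess.run(
        ["dmidecode", "-s", "bios-version"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    return tuple(int(part) for part in out.split("."))

if __name__ == "__main__":
    version = current_bios_version()
    pretty = ".".join(map(str, version))
    if version < REQUIRED_INTERMEDIATE:
        print(f"BIOS {pretty}: flash 2.1.2 first, or skip USC and use the DOS updater.")
    else:
        print(f"BIOS {pretty}: the USC path should at least be possible.")
```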

Conclusion

The Linux updater method is no use to anyone unless you are running Redhat.

The Lifecycle controller is no use to anyone.

The DOS updater is ESSENTIAL if you can find an appropriately sized and compatible DOS image.

You may be wondering why on earth I didn’t just pursue the DOS updater method from the off; you know, I am wondering that myself after all this nonsense.

A couple of things steered me away from this.  Firstly I associate DOS images with floppy drives and no-one uses them any more.  Secondly DOS itself is antiquated and no-one uses it any more.

Lifecycle controller is supposed to be the modern way forward for managing your updates, so I leant away from the DOS approach.  Clearly if you want relatively quick and hassle free BIOS updates then a DOS USB drive is the way to go.

A few thoughts…
Why can’t there be a multi-distribution-capable Linux updater?  It’s not *that* hard, surely.
Why can’t RM have more intelligence and warn you about updates it can’t apply?
Why can’t RM warn you about Lifecycle firmware versions?
Why can’t Dell offer sensibly sized, useful DOS images?
Why can’t I just bloody hit an option in the BIOS and get it to read a firmware file off a USB drive?
Lifecycle controller was developed to reduce the lifecycle of the person using it.

One amusing thought: I did once ask a Dell engineer about BIOS updates and mentioned the Lifecycle controller; the reply was “oh, I never use that, just a quick DOS update”…

I think they may have changed things a little in the 12G servers, but anyhow my final words on this debacle are, as ever..

Thanks Dell, really helpful.