Processing cluster upgrade

Monday, January 7, 2019

Over the next week, we plan to upgrade the Hadoop processing cluster. The upgrade procedure is designed so that you can continue processing while the upgrade is in progress. It is nevertheless still possible that a job is affected in some way, so be aware that you may see behaviour that does not occur during 'normal' operations.

Upgrade impact

This upgrade consists mostly of bug fixes and minor new features to the platform, so all your existing software will continue to work after the upgrade. For more information, have a look at the release notes:

https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.0/bk_release-not…

The biggest new feature after the upgrade will be the availability of Spark 2.3.0. Users who want to use it will have to indicate this explicitly when submitting their job; otherwise, the job will continue to run with Spark 1.6.3.
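How the version is selected depends on the cluster setup; on HDP, Spark 2 is commonly chosen via the SPARK_MAJOR_VERSION environment variable. As a minimal sketch, assuming that variable is honoured on this cluster (the job class and jar name below are hypothetical):

    # Default: the job runs with Spark 1.6.3
    spark-submit --master yarn --class com.example.MyJob my-job.jar

    # Explicitly opt in to Spark 2.3.0 before submitting
    export SPARK_MAJOR_VERSION=2
    spark-submit --master yarn --class com.example.MyJob my-job.jar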

Upgrade steps

For those interested in the details, the following steps will be performed:

  1. First we switch the software on your virtual machine to the new version. This is done by running 'hdp-select', and is fully reversible, as the new software is installed side by side with the existing software. You can run 'hdp-select' yourself in a terminal to see the current version of your VM (see the example after this list). If your VM has been upgraded, it will show 2.6.5.0-292.
  2. Then we start a 'rolling upgrade' procedure. This procedure upgrades and restarts the components on the cluster machines one by one, performing health checks in between to ensure that all services are healthy after each step. The most critical components that your jobs rely on are configured in 'high availability' mode, so they can be restarted without downtime. Note that this step can take more than a day, due to the size of the cluster and the safety precautions taken by the 'rolling upgrade' procedure.
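As mentioned in step 1, you can verify the version of your own VM from a terminal. A sketch of such a check, using the standard 'hdp-select' tool (the exact output format can differ between releases):

    # Show which HDP version each client component currently points to
    hdp-select status

    # After the upgrade, the entries should report 2.6.5.0-292, for example:
    #   hadoop-client - 2.6.5.0-292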