Server patching is important but can be very challenging. One of our customers, Idorsia Pharmaceuticals Ltd, successfully implemented AWS Systems Manager (SSM) to automate patching not only of instances running in the AWS Cloud but also of on-premises servers, whether bare-metal or virtual machines.
What is AWS Systems Manager?
AWS Systems Manager offers many capabilities, grouped into different categories, and not all of them were used in the scope of this project. The components involved in our project are:
- Maintenance Windows
- Patch Manager and patch baselines
- AWS SSM agent
- Fleet Manager
Why use AWS Systems Manager?
Idorsia was looking for a solution covering different OS types: it was necessary to patch Windows Server as well as several Linux distributions on a regular basis. The second requirement was a central solution managing both cloud instances and on-premises servers.
SSM supports a wide range of operating systems, which covers the first requirement (see the list of supported OS).
On-premises instances can be managed very easily once the AWS SSM agent is installed. To work in a hybrid environment, you need to create an activation code and use it to register the server as a managed instance in Fleet Manager.
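As a sketch, creating an activation and then registering an on-premises server (once the agent is installed) could look like this; the role name, region, and instance name below are placeholders:

```shell
# Create a hybrid activation; the IAM service role is assumed to exist
# and to include the AmazonSSMManagedInstanceCore policy.
aws ssm create-activation \
  --default-instance-name "onprem-server" \
  --iam-role "SSMServiceRole" \
  --registration-limit 10 \
  --region eu-central-1
# The call returns an ActivationId and an ActivationCode.

# On the on-premises server, register the SSM agent with those values:
sudo amazon-ssm-agent -register \
  -code "<ActivationCode>" \
  -id "<ActivationId>" \
  -region "eu-central-1"
sudo systemctl restart amazon-ssm-agent
```

After registration, the server appears in Fleet Manager with an `mi-` prefixed instance ID instead of the `i-` prefix used by EC2 instances.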
Some Amazon Machine Images (AMIs) come with the agent pre-installed. So, the first step was to deploy the AWS SSM agent on all the remaining servers in scope.
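On a RHEL- or CentOS-based server, installing the agent could look like the following sketch (the download URL follows the publicly documented S3 layout for the agent package):

```shell
# Install the SSM agent from the public package repository,
# then enable and start the service.
sudo yum install -y \
  https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/linux_amd64/amazon-ssm-agent.rpm
sudo systemctl enable --now amazon-ssm-agent
sudo systemctl status amazon-ssm-agent
```

Other distributions and Windows Server have their own package formats; the per-OS installation steps are listed in the SSM agent documentation.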
In a hybrid environment, the server is associated with an IAM service role during registration, whereas an IAM instance profile can be attached directly to EC2 instances. In both cases, the role should at minimum include the “AmazonSSMManagedInstanceCore” policy and can be customized based on customer needs.
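A minimal sketch of creating such a service role for hybrid activations could look like this (the role name is a placeholder; the managed policy ARN is the standard one):

```shell
# Create the service role; the trust policy allows the SSM service
# (ssm.amazonaws.com) to assume it.
aws iam create-role \
  --role-name "SSMServiceRole" \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": {"Service": "ssm.amazonaws.com"},
      "Action": "sts:AssumeRole"
    }]
  }'

# Attach the minimal managed policy required by the SSM agent:
aws iam attach-role-policy \
  --role-name "SSMServiceRole" \
  --policy-arn "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
```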
Once all servers are available as managed instances in Fleet Manager, we can sort them into different patch groups. We don’t want to apply patches to all servers at exactly the same date and time: it’s better to apply even security patches to test environments before moving to Production.
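Patch group membership is driven by the reserved “Patch Group” tag. As a sketch (the instance IDs below are placeholders):

```shell
# EC2 instances join a patch group through the "Patch Group" tag:
aws ec2 create-tags \
  --resources i-0123456789abcdef0 \
  --tags Key="Patch Group",Value="Test"

# Managed on-premises instances (mi-...) are tagged through SSM instead:
aws ssm add-tags-to-resource \
  --resource-type "ManagedInstance" \
  --resource-id "mi-0123456789abcdef0" \
  --tags Key="Patch Group",Value="Test"
```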
Some patches may be released in the middle of a patch campaign. To avoid installing patches in Production that have not yet been installed in Test, we created different patch baselines for Test and Prod. All the magic happens within several maintenance windows, which define the date/time and the different steps for patching.
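To illustrate, a patch baseline and its maintenance window for the Test group could be sketched like this (IDs, names, schedule, and the Amazon Linux 2 example OS are assumptions, not the customer’s actual values):

```shell
# Baseline that auto-approves security patches 3 days after release:
aws ssm create-patch-baseline \
  --name "test-baseline" \
  --operating-system "AMAZON_LINUX_2" \
  --approval-rules 'PatchRules=[{PatchFilterGroup={PatchFilters=[{Key=CLASSIFICATION,Values=[Security]}]},ApproveAfterDays=3}]'

# Map the baseline to the "Test" patch group:
aws ssm register-patch-baseline-for-patch-group \
  --baseline-id "pb-0123456789abcdef0" \
  --patch-group "Test"

# Weekly maintenance window (Saturday 02:00 UTC, 4 hours, 1h cutoff):
aws ssm create-maintenance-window \
  --name "test-patching" \
  --schedule "cron(0 2 ? * SAT *)" \
  --duration 4 --cutoff 1 \
  --allow-unassociated-targets

# Target the patch group and run AWS-RunPatchBaseline as a task:
aws ssm register-target-with-maintenance-window \
  --window-id "mw-0123456789abcdef0" \
  --resource-type "INSTANCE" \
  --targets "Key=tag:Patch Group,Values=Test"

aws ssm register-task-with-maintenance-window \
  --window-id "mw-0123456789abcdef0" \
  --targets "Key=WindowTargetIds,Values=<window-target-id>" \
  --task-arn "AWS-RunPatchBaseline" \
  --task-type "RUN_COMMAND" \
  --task-invocation-parameters 'RunCommand={Parameters={Operation=[Install]}}' \
  --max-concurrency "10%" --max-errors "5%"
```

A Prod baseline would typically use a longer approval delay, giving the Test environment time to validate the same set of patches first.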
After patching, the SSM agent reports the status to the AWS Console. We can check the compliance status for each server in the Patch Manager.
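The same compliance data is available from the CLI; for example (instance ID is a placeholder):

```shell
# Patch state per instance (installed/missing/failed counts):
aws ssm describe-instance-patch-states \
  --instance-ids i-0123456789abcdef0 \
  --query 'InstancePatchStates[].{Id:InstanceId,Installed:InstalledCount,Missing:MissingCount,Failed:FailedCount}'

# Aggregated patch compliance summary across managed instances:
aws ssm list-resource-compliance-summaries \
  --filters "Key=ComplianceType,Values=Patch,Type=EQUAL"
```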
We faced different challenges during the implementation of the whole process. In some cases, instances were stopped at patching time, so we added a task at the beginning of the maintenance window to start them. However, this was sometimes not sufficient: the customer was getting timeouts that prevented patching from completing on time for some instances. We created a new document based on “AWS-RunPatchBaseline” to customize the timeouts.
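One way to build such a custom document, sketched here with an assumed document name, is to export the content of AWS-RunPatchBaseline, raise the step timeouts in the JSON, and publish the result as your own Command document:

```shell
# Export the content of the AWS-managed document:
aws ssm get-document \
  --name "AWS-RunPatchBaseline" \
  --query 'Content' --output text > Custom-RunPatchBaseline.json

# (edit Custom-RunPatchBaseline.json: raise the "timeoutSeconds" values
#  on the aws:runShellScript / aws:runPowerShellScript steps)

# Publish the customized document:
aws ssm create-document \
  --name "Custom-RunPatchBaseline" \
  --document-type "Command" \
  --content file://Custom-RunPatchBaseline.json
```

The maintenance window task then references the custom document instead of AWS-RunPatchBaseline.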
We also had an issue with the CentOS default repositories, where some patches were installed in Production even though they had not yet been installed in Test. This is covered in a separate blog post from Daniel: Attaching your own CentOS 7 yum repository to AWS SSM (https://www.dbi-services.com/blog/attaching-your-own-centos-7-yum-repository-to-aws-ssm)
The customer is now considering using AWS Systems Manager to report patch compliance on other machines, such as Amazon WorkSpaces.