#aws #cloudwatch
Welcome to day 4 of the unofficial AWS Cost Optimisation Advent Calendar 2024, where every day we share a new tip or trick to help you optimise your cloud costs before Christmas 2024.
Today we are talking about purging CloudWatch logs to save on storage costs.
Even in businesses that are not particularly log-heavy, CloudWatch logs can quickly add up and become a significant cost. This is because CloudWatch logs are stored indefinitely by default, and the cost of storing them is far from trivial once extrapolated over time.
In addition, even companies that keep logs indefinitely for compliance or debugging purposes can still benefit from pruning logs to avoid unnecessary costs.
The first step, of course, is to understand what logs you have and how much they are costing you. You can do this by visiting the CloudWatch console and looking at the log groups and their retention policies.
To see a breakdown of CloudWatch costs, go to Cost Explorer, select CloudWatch as the Service and filter by Usage Type.
This will give you an indication as to whether your main cost is raw storage, metrics, transfer, canaries or anything else.
If you want to see what individual log groups are costing, the easiest option is to filter by Tag (please see our tip from Day 1).
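This discovery step can also be scripted. Below is a minimal boto3 sketch (assuming default AWS credentials are configured) that lists your largest log groups by stored bytes along with their retention settings; the call against your account is left commented out so the pure helper stays runnable offline:

```python
def top_log_groups(groups, n=10):
    """Sort log group dicts by storedBytes, largest first (pure helper)."""
    return sorted(groups, key=lambda g: g.get("storedBytes", 0), reverse=True)[:n]

def print_largest_log_groups():
    import boto3  # imported here so the helper above works without boto3 installed
    logs = boto3.client("logs")
    groups = []
    for page in logs.get_paginator("describe_log_groups").paginate():
        groups.extend(page["logGroups"])
    for g in top_log_groups(groups):
        retention = g.get("retentionInDays", "never expires")
        size_gb = g.get("storedBytes", 0) / 1e9
        print(f"{g['logGroupName']}: {size_gb:.2f} GB, retention: {retention}")

# print_largest_log_groups()  # uncomment to run against your AWS account
```

Log groups with large `storedBytes` and no retention policy are usually the best place to start.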
For storage costs there are two main areas to focus on:
For each log group you should set a sensible retention policy. For example, you may want to keep logs for 30 days in production, but only 7 days in development and staging.
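As a sketch of how this could be automated, the snippet below applies per-environment retention with boto3, assuming your log group names embed the environment (e.g. `/app/production/api` — a hypothetical naming scheme). The 30/7-day values simply mirror the example above:

```python
RETENTION_DAYS = {"production": 30, "staging": 7, "development": 7}

def retention_for(log_group_name):
    """Pick retention days from the environment embedded in the name (pure helper)."""
    for env, days in RETENTION_DAYS.items():
        if env in log_group_name:
            return days
    return None  # unknown environment: leave the group untouched

def apply_retention(log_group_name):
    import boto3
    days = retention_for(log_group_name)
    if days is not None:
        boto3.client("logs").put_retention_policy(
            logGroupName=log_group_name, retentionInDays=days
        )

# apply_retention("/app/production/api")  # uncomment with AWS credentials configured
```

Note that `put_retention_policy` only accepts specific values (1, 3, 5, 7, 14, 30, and so on), so stick to those when adjusting the mapping.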
If you have a compliance requirement to keep logs for longer, then depending on the current cost of the log group, it may be worth periodically moving these logs to a cheaper storage solution such as S3 Glacier, rather than leaving them in CloudWatch indefinitely. Ensure the S3 bucket is in the same region as your CloudWatch logs to avoid data transfer costs. Ideally, automate this process with a Lambda function so that it reliably runs before your CloudWatch retention period expires.
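The core of such a Lambda might look like the sketch below, which exports one day of a log group to S3 using `create_export_task`. The log group and bucket names are placeholders; also note that export tasks write to S3 Standard, so transitioning the objects to Glacier is typically handled by a lifecycle rule on the bucket:

```python
from datetime import datetime, timedelta, timezone

def day_window_ms(days_ago):
    """Return (start, end) of a whole UTC day as epoch milliseconds (pure helper)."""
    end = datetime.now(timezone.utc).replace(hour=0, minute=0, second=0, microsecond=0)
    end -= timedelta(days=days_ago - 1)
    start = end - timedelta(days=1)
    return int(start.timestamp() * 1000), int(end.timestamp() * 1000)

def export_day(log_group, bucket, days_ago=1):
    import boto3
    start, end = day_window_ms(days_ago)
    boto3.client("logs").create_export_task(
        taskName=f"{log_group.strip('/').replace('/', '-')}-{start}",
        logGroupName=log_group,
        fromTime=start,
        to=end,
        destination=bucket,  # must be in the same region as the log group
        destinationPrefix=log_group.strip("/"),
    )

# export_day("/app/production/api", "my-log-archive-bucket")  # hypothetical names
```

One caveat: only one export task can run per account at a time, so a scheduled Lambda should check for in-progress tasks before starting another.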
The second area to focus on is the volume of logged data. Often you can spot problems at the application level, where excessive logs are being pushed to CloudWatch, simply by having an old-fashioned browse of any of the expensive log groups.
Problems of excessive logging are best caught as they happen, which is where anomaly detection can be useful. You can set up anomaly detection for free on all standard log groups to alert you when there is an unusually high volume of logs. It can be found in the configuration for a log group and then associated with a CloudWatch alarm.
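If you prefer to set this up programmatically, a sketch using the `create_log_anomaly_detector` API is below. The region, account ID and log group name are placeholders for your own values:

```python
def log_group_arn(region, account_id, log_group):
    """Build a CloudWatch Logs log group ARN (pure helper)."""
    return f"arn:aws:logs:{region}:{account_id}:log-group:{log_group}"

def enable_anomaly_detection(region, account_id, log_group):
    import boto3
    boto3.client("logs", region_name=region).create_log_anomaly_detector(
        logGroupArnList=[log_group_arn(region, account_id, log_group)],
        detectorName=f"{log_group.strip('/').replace('/', '-')}-anomalies",
        evaluationFrequency="ONE_HOUR",
    )

# enable_anomaly_detection("eu-west-1", "123456789012", "/app/production/api")
```

The detector surfaces anomalies in the console; to be paged about them, pair it with a CloudWatch alarm as described above.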
Once any problems are fixed, there is unfortunately no way to delete individual problematic log lines in CloudWatch. You will either need to wait out the retention period, or delete the log group and recreate it if that makes sense for your organisation.
That's it for today. There are many other topics to cover on CloudWatch, but other than unexpected canary run costs (these are more expensive than many people realise), the most common CloudWatch costs in most firms are storage and data transfer.
To be one of the first to know when the next edition is published please follow us on LinkedIn, X or subscribe to the RSS feed.