Every day I check the status of our servers on the CloudWatch dashboard. Coming in to work and clicking through the dashboards one by one... it's tedious! I want to pull all of that dashboard information at once, so let's write it in Python!
Boto3's get_metric_statistics() looks like just the thing.
As described in the official documentation, first set up a CloudWatch client with Boto3.
import boto3
client = boto3.client('cloudwatch')
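By default this picks up credentials and region from the usual Boto3 configuration (environment variables, `~/.aws/credentials`, `~/.aws/config`, or an instance role). If no default region is configured where the script runs, the region can also be passed explicitly; a minimal sketch, with the region name only as an example:
client = boto3.client('cloudwatch', region_name='ap-northeast-1')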
get_metric_statistics() is called like this:
response = client.get_metric_statistics(
    Namespace='string',
    MetricName='string',
    Dimensions=[
        {
            'Name': 'string',
            'Value': 'string'
        },
    ],
    StartTime=datetime(2020, 2, 11),
    EndTime=datetime(2020, 2, 11),
    Period=123,
    Statistics=[
        'SampleCount'|'Average'|'Sum'|'Minimum'|'Maximum',
    ])
Namespace
`AWS/EC2`, `AWS/ElastiCache`, `AWS/RDS`, and so on. It appears at the top of the tooltip shown when you hover the mouse cursor over a metric's details on the CloudWatch `Graphed Metrics` tab.
MetricName
`CPUUtilization`, `MemoryUtilization`, `DiskSpaceAvailable`, and so on. It appears just above the separator line, second from the top of the same tooltip on the CloudWatch `Graphed Metrics` tab.
Dimensions
`InstanceId`, `CacheClusterId`, `DBInstanceIdentifier`, and so on. It appears below the separator line of the same tooltip on the CloudWatch `Graphed Metrics` tab (everything under the dividing line is dimension information). As in the syntax above, Dimensions is written in the form
Dimensions=[{'Name': 'string', 'Value': 'string'}]
so concretely it looks like
Dimensions=[{'Name': 'InstanceId', 'Value': 'i-xxxxxxxxxxx'}]
or
Dimensions=[{'Name': 'Role', 'Value': 'WRITER'}, {'Name': 'DBClusterIdentifier', 'Value': 'xxxxxxxxxxx'}]
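If you would rather look the dimensions up from code than from the console tooltip, list_metrics() can do that too. A minimal sketch, assuming the same client as above and filtering on the EC2 CPU metric:
metrics = client.list_metrics(Namespace='AWS/EC2', MetricName='CPUUtilization')
for metric in metrics['Metrics']:
    # Each entry lists the dimensions of one metric,
    # e.g. [{'Name': 'InstanceId', 'Value': 'i-xxxxxxxxxxx'}]
    print(metric['Dimensions'])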
Period
Specify the period in seconds: 1 minute → 60, 5 minutes → 300, 24 hours → 86400.
Statistics
Specify the statistic you want, such as 'Average' or 'Maximum'.
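Putting the parameters together, a concrete call looks roughly like the following sketch (the instance ID is a placeholder; the same values are used in the full script later in this post):
from datetime import datetime, timedelta

response = client.get_metric_statistics(
    Namespace='AWS/EC2',
    MetricName='CPUUtilization',
    Dimensions=[{'Name': 'InstanceId', 'Value': 'i-xxxxxxxxxxx'}],
    StartTime=datetime.now() + timedelta(days=-1),  # the last 24 hours
    EndTime=datetime.now(),
    Period=86400,             # one 24-hour bucket
    Statistics=['Maximum']
)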
The response from get_metric_statistics() looks like this:
{'Label': 'CPUUtilization', 'Datapoints': [{'Timestamp': datetime.datetime(2020, 2, 10, 19, 8, tzinfo=tzutc()), 'Maximum': 6.66666666666667, 'Unit': 'Percent'}], 'ResponseMetadata': {'RequestId': 'xxxxxxxxxxx', 'HTTPStatusCode': 200, ...(truncated)
So:
the value is response['Datapoints'][0][the statistic specified in Statistics] ('Maximum' in the response above), and
the unit is response['Datapoints'][0]['Unit'],
which is all we need to work with.
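In code, that extraction might look like the following sketch; Datapoints can be empty if nothing was recorded in the requested window, so a guard is worth adding:
datapoints = response['Datapoints']
if datapoints:
    # The key matches the statistic requested in Statistics ('Maximum' here)
    value = datapoints[0]['Maximum']
    unit = datapoints[0]['Unit']
    print(value, unit)
else:
    print('No datapoints returned for this period')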
Since I had been checking CloudWatch visually every day, this script fetches the 24 hours leading up to the moment it is run. Once you have the script, you could also run it regularly with cron.
import boto3
from datetime import datetime, timedelta

client = boto3.client('cloudwatch')

def get_metric_statistics(name_space, metric_name, dimensions_values, statistic):
    # Get the CloudWatch information
    response = client.get_metric_statistics(
        # For CPU utilization, enter `AWS/EC2`
        Namespace=name_space,
        # For CPU utilization, enter `CPUUtilization`
        MetricName=metric_name,
        # Enter something like `[{'Name': 'InstanceId', 'Value': instance_id}]`
        Dimensions=dimensions_values,
        # Start time: script execution time minus one day
        StartTime=datetime.now() + timedelta(days=-1),
        # End time: script execution time
        EndTime=datetime.now(),
        # 24 hours, in seconds
        Period=86400,
        # Enter something like `Maximum`
        Statistics=[statistic]
    )
    # Build the output line
    response_text = name_space + ' ' + metric_name + statistic + ': ' + str(response['Datapoints'][0][statistic]) + ' ' + response['Datapoints'][0]['Unit']
    print(response_text)

# Metrics to output
instance_id = 'i-xxxxxxxxxxx'
# CPU utilization
get_metric_statistics('AWS/EC2', 'CPUUtilization', [{'Name': 'InstanceId', 'Value': instance_id}], 'Maximum')
# Memory utilization
get_metric_statistics('System/Linux', 'MemoryUtilization', [{'Name': 'InstanceId', 'Value': instance_id}], 'Maximum')
Output result
AWS/EC2 CPUUtilizationMaximum: 6.66666666666667 Percent
System/Linux MemoryUtilizationMaximum: 18.1909615159559 Percent
I tried getting CloudWatch information from a Python script.
This time Datapoints contained only a single entry, but depending on the Period you specify there can be several. In that case, loop over the datapoints and pick out the ones you need, for example as in the sketch below.
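For instance, with Period=300 a 24-hour window returns up to 288 datapoints. A minimal sketch of such a loop, assuming the same response shape (sorting is needed because CloudWatch does not guarantee the datapoints are in chronological order):
for datapoint in sorted(response['Datapoints'], key=lambda d: d['Timestamp']):
    print(datapoint['Timestamp'], datapoint['Maximum'], datapoint['Unit'])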
We're hiring! We are developing an AI chatbot. If you are interested, please feel free to contact us from the Wantedly page!