Python StatsD client configuration

Building and deploying highly scalable distributed applications in an ever-changing software development environment is only half the job. The other half is monitoring the health of your application and its instances while recording accurate metrics.

You may want to see how many resources are being consumed, how many files are being accessed by a specific process, and so on. These metrics provide valuable insights into how your technology stack runs and is managed. They give you the leverage to understand the real-world performance of your design and, ultimately, help you optimize it.

Most of the tools to get the job done already exist, but today we'll focus on StatsD. We will explain how to deploy your own Python StatsD client, how to use it to monitor your Python application, and how to save the recorded metrics to a database. After that, if you want to display StatsD metrics on a Grafana dashboard with Prometheus or Graphite, get a free trial by [booking a MetricFire demo](https://www.metricfire.com/japan/?utm_source=blog&utm_medium=Qiita&utm_campaign=Japan&utm_content=Configuring%20Python%20StatsD%20Client). But before that, let's start with StatsD!

What is StatsD?

StatsD is a Node.js project that listens for and collects statistics and system performance metrics. These statistics are sent over the network, and StatsD can collect several different types of metric data. The main advantage of using StatsD is that it integrates easily with other tools such as Grafana, Graphite, and InfluxDB.

Advantages of StatsD

  1. Excellent startup time
  2. Percentile processing is done by the server, providing an aggregated view of multiple instances at once (for the same service).
  3. The client side overhead is relatively low.
  4. UDP is used to send all the data, so connectivity problems never block or break the application.
  5. Simple, effective and easy to implement when building short-lived apps.
  6. Metrics are pushed to the server only as they arrive, resulting in low memory usage.

But ... what does it mean for a metric to be "pushed"?

There are two main execution models for reporting metrics. In the pull model, the monitoring system "scrapes" the app on a particular HTTP endpoint. In the push model, which StatsD uses, the application sends its metrics to the monitoring system.
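To make the push model concrete, here is a minimal sketch of what a StatsD push looks like on the wire: the client fires a plain-text datagram at the server over UDP and never waits for a reply. The metric name below is illustrative; 8125 is the default StatsD port.

import socket

# StatsD line format: <name>:<value>|<type>  ("c" means counter)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(b"myapp.requests:1|c", ("localhost", 8125))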

Prerequisites and installation

  1. First, you need to have Python 3.6 or later and pip installed on your system.

You can verify your Python installation by running the following command on your Linux, Windows, or macOS system.

$ python --version

If it is not installed, follow [these installation instructions](https://www.makeuseof.com/tag/install-pip-for-python/) for your system.

  2. You will need StatsD, Flask, and Flask-StatsD (with Flask-StatsD, Flask-related metrics are collected automatically). In addition to that, you will need virtualenv, a tool for creating isolated Python environments, and Flask-SQLAlchemy for the sample database.

pip install statsd flask Flask-StatsD virtualenv Flask-SQLAlchemy

Pip will automatically install the latest versions of these packages.

Let's play with the StatsD package

Start by implementing a basic timer. (The snippets below use the StatsClient interface provided by the statsd package installed earlier.)

import statsd

# Connect to the StatsD server (8125 is the default port); the prefix is
# prepended to every metric name sent through this client
c = statsd.StatsClient('localhost', 8125, prefix='Statsd_Tutorial_Application')

timer = c.timer('TutorialTimer')
timer.start()
# we can do just about anything over here
timer.stop()  # stops the timer and reports the elapsed time
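The same client also exposes the timer as a context manager and as a decorator, which avoids the manual start/stop bookkeeping; a minimal sketch using the client defined above:

# Time a block of code with a context manager
with c.timer('TutorialTimer'):
    pass  # timed work goes here

# Or time every call to a function with a decorator
@c.timer('TutorialTimer')
def do_work():
    pass  # each call's duration is reported automatically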

Similarly, for basic counters:

import statsd

c = statsd.StatsClient('localhost', 8125, prefix='Statsd_Tutorial_Application')
# again do anything random over here
c.incr('TutorialCounter')  # increment the counter by 1
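incr also accepts a count and an optional sample rate, which helps when a code path is too hot to report every single event; a small sketch (metric names are illustrative):

c.incr('TutorialCounter', 5)         # add 5 in a single call
c.incr('TutorialCounter', rate=0.1)  # sampled: sends roughly 10% of calls
c.decr('TutorialCounter')            # counters can be decremented too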

Monitoring with Flask

In this tutorial, you will design a basic to-do list application in Flask and record operational metrics.

The complete tutorial repository can be forked on GitHub.

Step 1: Import the dependencies (lines 5-12):

from flask import Flask, request
from flask import render_template
from flask import redirect
from flask_sqlalchemy import SQLAlchemy
import time
import statsd

Step 2: Initialize the Flask app, the StatsD client, and the DB (lines 14-23):

c = statsd.StatsClient('localhost', 8125)  # 8125 is the default StatsD port
app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///test.db'  # SQLite file used as the sample DB
app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = True
db = SQLAlchemy(app)

Step 3: Create a Task class and define the DB model (lines 26-35):

class Task(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    content = db.Column(db.Text)
    done = db.Column(db.Boolean, default=False)

    def __init__(self, content):
        self.content = content
        self.done = False

    def __repr__(self):
        return '<Task %r>' % self.content

db.create_all()

  1. Create an id column that holds an integer as the primary key.
  2. Create a text content column.
  3. Create a boolean done column, defaulting to false, that indicates whether the task has been completed/resolved.
  4. Initialize the content and done columns.
  5. Return the task in a printable format (the __repr__ method).
  6. Create and initialize a new DB.
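To sanity-check the model, you can insert and query a row directly from a Python shell; a minimal sketch using the names defined above:

# Add one task and read it back (run inside the app's context)
t = Task('write the StatsD tutorial')
db.session.add(t)
db.session.commit()
print(Task.query.all())  # e.g. [<Task 'write the StatsD tutorial'>]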

Step 4: Add a task (lines 42-57):

@app.route('/task', methods=['POST'])
def add_task():
    start = time.time()
    content = request.form['content']
    if not content:
        dur = (time.time() - start) * 1000
        c.timing("errortime", dur)  # how long the failing request took
        c.incr("errorcount")        # count every failed add
        return 'Error'

    task = Task(content)
    db.session.add(task)
    db.session.commit()
    dur = (time.time() - start) * 1000
    c.timing("tasktime", dur)  # how long the successful add took
    c.incr("taskcount")        # count every task added
    return redirect('/')       # assumed follow-up; redirect is imported above

This code adds a task whose content is received from the form via a POST request. More important to discuss here, however, are the metric reports that were added.

  1. The basic timer starts as soon as the function starts.
  2. If the content is missing, the error counter is incremented and the time taken up to the error is recorded. Finally, an error is returned.
  3. When the DB adds a task, the full time taken by the function is computed and reported, and the task counter is incremented.
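To exercise the endpoint by hand, a quick request from Python works; a sketch assuming the app is running on Flask's default localhost:5000 (the task content is illustrative):

import urllib.parse
import urllib.request

# POST a new task to the running app
data = urllib.parse.urlencode({'content': 'buy milk'}).encode()
with urllib.request.urlopen('http://localhost:5000/task', data=data) as resp:
    print(resp.status)  # 200 on success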

Step 5: Delete a task (lines 60-65):

@app.route('/delete/<int:task_id>')
def delete_task(task_id):
    task = Task.query.get(task_id)
    db.session.delete(task)
    db.session.commit()
    c.incr("deletecount")  # count every deleted task
    return redirect('/')   # assumed follow-up; redirect is imported above

The above code deletes a task from the DB and increments the deletecount counter.
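If you also want the delete path to report its latency, the same start/stop timing pattern from add_task applies; a sketch of that variant (the "deletetime" metric name is illustrative):

@app.route('/delete/<int:task_id>')
def delete_task(task_id):
    start = time.time()
    task = Task.query.get(task_id)
    db.session.delete(task)
    db.session.commit()
    c.timing("deletetime", (time.time() - start) * 1000)  # latency in ms
    c.incr("deletecount")
    return redirect('/')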

Sending metrics from pyStatsD to MetricFire

Recording these metrics with StatsD is a helpful start. In a more industry-grade production environment, however, these metrics need to be handled by a service that facilitates their storage and graphing. This is where Graphite comes in.

Introducing Graphite:

Graphite is a monitoring tool designed to track the performance of websites, applications, other services, and network servers. It created a sensation in the world of technology, in essence igniting a new generation of monitoring tools by making time series data much easier to store, retrieve, share, and visualize.

Graphite basically performs two operations.

  - Store numeric time series data
  - Render graphs of this data on demand

Graphite is not a collection agent and should not be treated like one. Rather, it provides a simpler path for getting your measurements into a time series DB. To test sending metrics from a server or local machine to an already-running Graphite instance, run the following one-line command:

$ echo "foo.bar 1 `date +%s`" | nc localhost 2003
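For completeness, the same datapoint can be sent from Python. Graphite's plaintext protocol is simply "metric-path value unix-timestamp" over TCP port 2003; the metric name below is illustrative:

import socket
import time

# One datapoint in Graphite's plaintext format, newline-terminated
msg = "foo.bar 1 %d\n" % int(time.time())
with socket.create_connection(("localhost", 2003)) as sock:
    sock.sendall(msg.encode())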

Once it is installed, simply log your metrics to StatsD and Graphite will pick up all the logged data. Graphite may seem like a big deal, but it still has certain shortcomings that developers want resolved. This is where MetricFire comes into play.

Why MetricFire:

  1. Provides Graphite-as-a-Service
  2. Added a built-in agent to fill the gap in the current Graphite version
  3. Team accounts that resolve previous collaboration issues
  4. Custom detailed dashboard permissions
  5. Great integration with other services such as AWS, Heroku
  6. [Operation via the API](https://www.hostedgraphite.com/docs/api/index.html?_ga=2.10920884.1662333892.1603353088-2038175526.1567400116) to enhance development
  7. Hourly backup of dashboard and user data
  8. Easily and quickly scale as needed
  9. Solid track record in Graphite monitoring
  10. 24/7 support from experienced engineers
  11. Easy to use with Plug and Play model

However, if you still want a self-hosted, self-managed service with full control over everything, running Graphite with Docker alongside StatsD is an easy way to get started.

Deployment

StatsD can be deployed in your preferred environment, using your preferred architecture and other services/microservices. Just make sure the StatsD server is reachable from every client-side app that sends metrics to it. That way, StatsD won't complain.

Also worth noting: if you use the AWS cloud to host your infrastructure, [AWS CloudWatch now supports StatsD metrics](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Agent-custom-metrics-statsd.html).

As for visualizing the accumulated metrics, Grafana is the de facto visualization tool.

If you want to try it yourself, [sign up for a demo](https://www.metricfire.com/demo-japan/?utm_source=blog&utm_medium=Qiita&utm_campaign=Japan&utm_content=Configuring%20Python%20StatsD%20Client) and we can talk about the best Graphite and Prometheus monitoring solutions for you. Feel free to book a time.

StatsD API

The Python StatsD client also comes with an API that you can trigger via HTTP requests, if that is something you would like to integrate with.
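For reference, the client's programmatic API also covers more metric types than the counters and timers used above; a quick sketch using the same StatsClient as before (names are illustrative):

c.gauge('queue_depth', 42)              # set an absolute value
c.gauge('queue_depth', -5, delta=True)  # apply a relative change
c.set('unique_visitors', 'user_42')     # count distinct values seen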
