Until a Docker beginner deploys a delusional Python microservice to Fargate

Overview

This article walks through building a microservice-style application with Python, MySQL, and Docker and deploying it to Fargate.

I wanted to try Fargate, which is said to be one of the hotter services on AWS (in some circles), and I was also interested in microservices architecture. However, since I have never worked with microservices properly, this is a "delusional" architecture in the sense of "I imagine it looks something like this." If you spot any mistakes, I would appreciate your feedback.

In addition, although AWS offers RDS, I am a container beginner in the first place, so I use a MySQL container as well, for study purposes.

What are microservices and Fargate in the first place?

There are many explanations on the net, but I will briefly describe my own understanding.

Microservices

- The application consists of small services that are loosely coupled with each other.
- Because of the loose coupling, a fix stays contained within each service.
- Each service can be developed by a separate team, so each team can use the language and framework it is good at. (Conversely, these may be unified across the whole application.)
- Containers are used as the technology for realizing small services.
- Kubernetes and AWS ECS are tools for managing containers.
- For data communication between services, REST and gRPC are mainly used. Messaging technology (such as Kafka) may also be used.

Fargate

- To be exact, it is used together with ECS or EKS.
- Container orchestration tools such as ECS let you manage containers, but you still have to manage the hosts (servers) the containers are deployed to separately.
- For example, if the containers scale and the load on a host increases, the host also needs to be scaled; this kind of management is a burden on the user.
- Fargate is a managed service in which AWS manages these hosts. Since scaling like the example above is performed automatically, users do not need to be aware of host management.

That is my understanding. In other words, this time I will try deploying microservices built with Docker to Fargate, an AWS-managed service.

Image of the app

arch03.png

- There are two microservices, Apl and BackEnd.
- Communication between services is HTTP (REST).
- The client accesses the API of the Apl service, the Apl service accesses the API of the BackEnd service, and the BackEnd service accesses the DB: client → Apl → BackEnd → DB.
- The only reason Python is split between 3.6 and 3.8 is that I wanted to experience that "Docker can do this kind of thing"; there is no other reason. Using 3.6 for both would be fine.

Article structure and goals

First deploy the microservices locally, with my own machine as the host, and then bring them to Fargate. After a successful deployment to Fargate, the goal is to be able to reach both microservices from the outside over HTTP.

Environment

1. Start with a Mac as the host

Deploy the app on a local Mac. The sources and so on are as follows.

Folder structure

Create folders for each of the three containers (apl-service, db-service, mysql).


.
├── apl-service
│   ├── Dockerfile
│   └── src
│       ├── results.py
│       └── server.py
├── db-service
│   ├── Dockerfile
│   └── src
│       ├── server.py
│       └── students.py
├── docker-compose.yml
└── mysql
    ├── Dockerfile
    └── db
        ├── mysql_data
        └── mysql_init
            └── setup.sql

MySQL related

mysql/Dockerfile


FROM mysql/mysql-server:5.7

RUN chown -R mysql /var/lib/mysql && \
    chgrp -R mysql /var/lib/mysql

setup.sql and the data actually registered


create table students (id varchar(4), name varchar(20), score int);
insert into students values ('1001', 'Alice', 60);
insert into students values ('1002', 'Bob', 80);
commit;
mysql> select * from DB01.students;
+------+-------+-------+
| id   | name  | score |
+------+-------+-------+
| 1001 | Alice |    60 |
| 1002 | Bob   |    80 |
+------+-------+-------+

db-service related

db-service/Dockerfile


FROM python:3.6

# Working directory inside the container
WORKDIR /usr/src/

# Install libraries
RUN pip install flask mysql-connector-python

CMD python ./server.py

db-service/src/server.py


from flask import Flask, request, abort, render_template, send_from_directory
from students import get_students

app = Flask(__name__)


@app.route('/students', methods=['GET'])
def local_endpoint():
    return get_students()


if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5001)

db-service/src/students.py


import json
import mysql.connector as mydb


def get_students():
    conn = mydb.connect(user="user", passwd="password",
                        host="mysql", port="3306")

    cur = conn.cursor()
    sql_qry = 'select id, name, score from DB01.students;'
    cur.execute(sql_qry)

    rows = cur.fetchall()

    results = [{"id": i[0], "name": i[1], "score": i[2]} for i in rows]
    return_json = json.dumps({"students": results})
    cur.close()
    conn.close()

    return return_json

Brief source explanation:

- server.py is executed when the Docker container starts.
- server.py sets up a server with Flask. It is an API that, when the `/students` resource is accessed via GET, returns the result of get_students imported from students.py.
- get_students connects to the MySQL container and stores the rows selected from the DB in rows. A comprehension converts rows into dicts, which are then converted to JSON and returned.
- For the local deployment, the containers can reach each other without using `links`. If anything, `links` seems to be an old technique that is rarely used these days...
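To illustrate the conversion step in get_students, here is a minimal sketch in which hypothetical hard-coded tuples stand in for the cursor's fetchall() result, so it can run without a database:

```python
import json

# Hypothetical rows, in the same shape a MySQL cursor's fetchall() returns
rows = [("1001", "Alice", 60), ("1002", "Bob", 80)]

# The same comprehension as in students.py: tuples -> dicts
results = [{"id": i[0], "name": i[1], "score": i[2]} for i in rows]
return_json = json.dumps({"students": results})
print(return_json)
# {"students": [{"id": "1001", "name": "Alice", "score": 60}, {"id": "1002", "name": "Bob", "score": 80}]}
```

This matches the payload the curl check against db-service returns later in the article.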

apl-service related

apl-service/Dockerfile


FROM python:3.8

# Working directory inside the container
WORKDIR /usr/src/

# Install libraries
RUN pip install requests flask

CMD python ./server.py

apl-service/src/server.py


from flask import Flask, request, abort, render_template, send_from_directory
from results import get_results

API_URI = 'http://db-service:5001/students'

app = Flask(__name__)


@app.route('/results', methods=['GET'])
def local_endpoint():
    return get_results(API_URI)


if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5002)

apl-service/src/results.py


import json
import requests


def get_results(uri):
    r = requests.get(uri)
    students = r.json()["students"]
    return_students = []
    for student in students:
        if student["score"] > 70:
            student.setdefault("isPassed", True)
            return_students.append(student)
        else:
            student.setdefault("isPassed", False)
            return_students.append(student)

    return_json = json.dumps({"addedStudents": return_students})

    return return_json

Brief source explanation:

- server.py is almost the same as db-service's. It is an API that, when the `/results` resource is accessed via GET, returns the result of get_results imported from results.py.
- The difference is that a URI is set so that get_results calls the db-service API.
- get_results stores the value obtained from db-service in students. It loops over this and sets `"isPassed"` to True if each score is greater than 70 and False otherwise, adding the element to the JSON.
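The pass/fail tagging in get_results can also be sketched in isolation as a pure function. This is a minimal stand-in (the names tag_students and passing_score are made up here) that applies the same rule to a hypothetical list of students, without the HTTP call:

```python
import json

def tag_students(students, passing_score=70):
    # Same rule as results.py: isPassed is True only when score > passing_score
    for student in students:
        student["isPassed"] = student["score"] > passing_score
    return json.dumps({"addedStudents": students})

# Hypothetical input in the same shape db-service returns
print(tag_students([{"id": "1001", "name": "Alice", "score": 60},
                    {"id": "1002", "name": "Bob", "score": 80}]))
# {"addedStudents": [{"id": "1001", "name": "Alice", "score": 60, "isPassed": false}, {"id": "1002", "name": "Bob", "score": 80, "isPassed": true}]}
```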

docker-compose

docker-compose.yml


version: '3'

services:
  mysql:
    container_name: mysql
    build:
      context: .
      dockerfile: ./mysql/Dockerfile
    hostname: mysql
    ports:
      - "3306:3306"
    volumes:
      - ./mysql/db/mysql_init:/docker-entrypoint-initdb.d
      - ./mysql/db/mysql_data:/var/lib/mysql
    environment:
      MYSQL_USER: user
      MYSQL_PASSWORD: password
      MYSQL_ROOT_PASSWORD: password
      MYSQL_DATABASE: DB01
    command: mysqld --character-set-server=utf8 --collation-server=utf8_unicode_ci --skip-character-set-client-handshake
 
  db-service:
    build:
      context: .
      dockerfile: ./db-service/Dockerfile
    container_name: db-service
    ports:
      - "5001:5001"
    volumes:
      - ./db-service/src/:/usr/src/

  apl-service:
    build:
      context: .
      dockerfile: ./apl-service/Dockerfile
    container_name: apl-service
    ports:
      - "5002:5002"
    volumes:
      - ./apl-service/src/:/usr/src/

Brief source explanation:

- In the mysql service's volumes, the folder containing the setup.sql shown earlier is mounted at `docker-entrypoint-initdb.d`. MySQL executes any SQL under `docker-entrypoint-initdb.d` on first startup. In short, this is the initial data.

Deploy and run

I will omit the output, but deploy with the docker-compose commands.

build&up


$ docker-compose build
$ docker-compose up

After a successful deployment, try accessing each microservice over HTTP as a communication check. As you can see from the source, db-service uses port 5001 and apl-service uses port 5002.

Access db-service


$ curl http://127.0.0.1:5001/students
{"students": [{"id": "1001", "name": "Alice", "score": 60}, {"id": "1002", "name": "Bob", "score": 80}]}

Access apl-service


$ curl http://127.0.0.1:5002/results
{"addedStudents": [{"id": "1001", "name": "Alice", "score": 60, "isPassed": false}, {"id": "1002", "name": "Bob", "score": 80, "isPassed": true}]}

You have now deployed the basic (delusional) microservices locally. Since the apl-service API returns a result, you can also confirm that the apl-service and db-service microservices are communicating with each other.

I'll bring this straight to Fargate! ...or so I thought; it was a long road from here.

2. Image of the final version again

Here is the configuration diagram of the final version, deployed to Fargate.

arch04.png

The points are as follows.

- The two microservices are deployed to a single Fargate cluster.
- There is inter-container communication (db-service ⇔ mysql) within a task.
- There is inter-task communication (apl-service ⇔ db-service).

Based on these, we will deploy.

3. Register with ECR

First, push the three Docker images we built (apl-service, db-service, mysql) to ECR. The ECR UI is straightforward, so I think you can push just by clicking "Create repository" in the AWS console and following the prompts.

4. Before deploying to Fargate

With ECS, you can use docker-compose.yml. After setting up ecs-cli, we will deploy to Fargate following the tutorial below, but docker-compose.yml and other files need to be modified. I will go through the various fixes, but the final version of the sources is posted at the end of the article, so if you are in a hurry, please refer to that.

Tutorial: Create a Cluster of Fargate Tasks Using the Amazon ECS CLI (https://docs.aws.amazon.com/ja_jp/AmazonECS/latest/developerguide/ecs-cli-tutorial-fargate.html#ECS_CLI_tutorial_fargate_configure )

Also, since this app uses ports 5001 and 5002, we need to allow them in the security group settings in step 3.

Example: allowing port 5001


$ aws ec2 authorize-security-group-ingress --group-id <security-group-id> --protocol tcp --port 5001 --cidr 0.0.0.0/0 --region <region>

Split docker-compose.yml

- This time, the two microservices are deployed as two tasks. For this reason, docker-compose.yml also needs to be split in two, and the deploy command must be executed twice.
- Split it into backend (db-service, mysql) and apl (apl-service).
- You will have two docker-compose.yml files with the same name, so keep them in separate folders.

Example) Stored in the aws_work folder and deployed with the task name backend


$ pwd
/Users/<abridgement>/aws_work
$ ls docker-compose.yml   # docker-compose.yml for db-service + mysql
docker-compose.yml
$ ecs-cli compose --project-name backend service up ~

Example) Stored in the aws_work2 folder and deployed with the task name apl


$ pwd
/Users/<abridgement>/aws_work2
$ ls docker-compose.yml   # docker-compose.yml for apl-service
docker-compose.yml
$ ecs-cli compose --project-name apl service up ~

Specifying the image pushed to ECR

- Specify the image you pushed earlier instead of the Dockerfile. build, container_name, and hostname are no longer needed; specify `image` instead.
- In the example below, the ECR repository name for the mysql container is python-mysql/mysql.
- Replace db-service and apl-service with `image` in the same way.

Example) docker-compose.yml fix points (image settings)


  mysql:
    image: <aws_account_id>.dkr.ecr.<region>.amazonaws.com/python-mysql/mysql

Fixed not to use volumes

- Fargate does not support persistent storage volumes. That sounds complicated, but it simply means you cannot use volumes in docker-compose.yml, so sources such as server.py are no longer reflected into the container that way.
- So, comment out (or delete) volumes in docker-compose.yml and instead COPY the sources in each Dockerfile.
- db-service and apl-service receive similar modifications.
- Since the Dockerfiles changed, the images need to be pushed to ECR again.

Example) docker-compose.yml fix points


    # volumes:
    #   - ./db-service/src/:/usr/src/

Example) db-service/Dockerfile modification points


# Copy the sources into the container
COPY ./src/server.py /usr/src/server.py
COPY ./src/students.py /usr/src/students.py

Container log settings

- As mentioned in the tutorial above, set up container logs as a best practice for Fargate tasks. Here the log group name is python-docker.
- Set the same for db-service and apl-service.

Example) docker-compose.yml fix points (container logs)


  mysql:
    logging:
      driver: awslogs
      options: 
        awslogs-group: python-docker
        awslogs-region: <region>
        awslogs-stream-prefix: mysql

Also, although it is not something I had to fix this time, note that in Fargate the host-side and container-side ports must be the same number. For example, you can specify 80:5001 locally, but in Fargate you will get an error unless you specify 5001:5001.

I have fixed about four points so far, but it still does not work.

5. Further modify and deploy to Fargate

5-1. Inter-container communication and inter-task communication

From here on are the modifications needed to realize the intra-task inter-container communication and the inter-task communication described above. First, to get communication between containers working, I will cover db-service and mysql.

Deploy and try HTTP communication

After deploying the fixes so far, checking the status with the `ecs-cli compose ~ service ps ~` command shows RUNNING, at least. Here is the actual ps output after startup.

$ ecs-cli compose --project-name backend service ps --cluster-config sample-config --ecs-profile sample-profile
Name                    State    Ports                           TaskDefinition  Health
<task-id>/db-service    RUNNING  XXX.XXX.XXX.XXX:5001->5001/tcp  backend:12      UNKNOWN
<task-id>/mysql         RUNNING  XXX.XXX.XXX.XXX:3306->3306/tcp  backend:12      UNKNOWN

Since a global IP address is displayed, I try a GET to db-service from the Mac, but as mentioned above, it does not work yet.

$ curl http://XXX.XXX.XXX.XXX:5001/students
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<title>500 Internal Server Error</title>
<h1>Internal Server Error</h1>
<p>The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.</p>

5-2. Modification for inter-container communication

Checking the logs on the ECS console (for example, the Logs tab) shows an error saying a name could not be resolved; here, the mysql host name cannot be found from db-service. In short, there is no communication between the containers. As I wrote for the local deployment, I am not using links this time; besides, the Fargate launch type does not support links in the first place.

The solution: with Fargate, containers in the same task can reach each other simply via localhost and a port number. So, change the source of students.py as follows. Also, since hard-coding the host name is questionable, I modified it so that the values are passed in as environment variables from docker-compose.yml.

db-service/src/students.py fix points


import json
import mysql.connector as mydb
import os  # added

def get_students():
    # Changed to read the connection settings from environment variables
    conn = mydb.connect(user=os.environ['DB_USER'], passwd=os.environ['DB_PASS'],
                        host=os.environ['DB_HOST'], port=os.environ['DB_PORT'])

docker-compose.yml fix points (db-service)


 db-service:
    (omitted)
    environment:
      DB_HOST: 127.0.0.1
      DB_USER: "user"
      DB_PASS: "password"
      DB_PORT: "3306"

Push to ECR again and deploy. If you have already deployed to Fargate once, then to reflect the changes, take the service down with the `ecs-cli compose ~ service down ~` command from the tutorial and bring it back up with `ecs-cli compose ~ service up ~`.

After redeploying, db-service can communicate!

You can confirm that HTTP communication from the Mac to db-service succeeds. Here is a review of the communication path.

  1. HTTP communication from outside (Mac) to a microservice called db-service on Fargate
  2. db-service and MySQL communicate between containers to get data
  3. The value is returned to the request source via db-service.

$ curl http://XXX.XXX.XXX.XXX:5001/students
{"students": [{"id": "1001", "name": "Alice", "score": 60}, {"id": "1002", "name": "Bob", "score": 80}]}

5-3. Modification for inter-task communication

Next, let's get apl-service communicating. As mentioned earlier, apl-service calls the db-service API, so it needs to communicate with another task. To achieve this on Fargate, enable service discovery at launch. I will skip how service discovery works because the Classmethod article below explains it very clearly; in short, Route 53 A records are created automatically for the microservices, making name resolution possible.

ECS Service Discovery has arrived in Tokyo, making it easier to implement inter-container communication!

Since we are using ecs-cli this time, we will deploy it with commands by referring to the following AWS official tutorial.

Tutorial: Create an Amazon ECS Service That Uses Service Discovery Using the Amazon ECS CLI (https://docs.aws.amazon.com/en_jp/AmazonECS/latest/developerguide/ecs-cli-tutorial-servicediscovery.html)

The host name created by service discovery is service_name.namespace. This time the service is created as backend and the namespace as sample, so to access db-service from apl-service, the host name must be backend.sample. As with db-service, modify the source to take the host name from environment variables.

apl-service/src/server.py fix points


from flask import Flask, request, abort, render_template, send_from_directory
from results import get_results
import os  # added

# Changed to build the URI from environment variables
API_URI = 'http://' + os.environ['BACKEND_HOST'] + ':' + os.environ['BACKEND_PORT'] + '/students'

docker-compose.yml fix points (apl-service)


 apl-service:
    (omitted)
    environment:
      BACKEND_HOST: "backend.sample"
      BACKEND_PORT: "5001"

Now everything is ready. Push the apl-service image to ECR again. If db-service and mysql are already running, they must be deployed with the same namespace settings, so take that service down once. Then, in the folder containing each docker-compose.yml, add the namespace options and deploy. The service names are backend and apl, respectively.

Deploy backend service


$ ecs-cli compose --project-name backend service up --private-dns-namespace sample --vpc <vpc-id> --enable-service-discovery --ecs-profile sample-profile

Deploy apl service


$ ecs-cli compose --project-name apl service up --private-dns-namespace sample --vpc <vpc-id> --enable-service-discovery --ecs-profile sample-profile

As a caveat, you should also add --ecs-profile to the tutorial's deploy commands.

Communication with apl-service!

$ curl http://XXX.XXX.XXX.XXX:5002/results
{"addedStudents": [{"id": "1001", "name": "Alice", "score": 60, "isPassed": false}, {"id": "1002", "name": "Bob", "score": 80, "isPassed": true}]}

Finally, communication with apl-service is confirmed. Here is a review of the communication path.

  1. HTTP communication from outside (Mac) to apl-service on Fargate
  2. apl-service executes db-service API
  3. db-service and MySQL communicate between containers to get data
  4. The value is returned to the request source via db-service and apl-service.

6. Summary of source and deploy commands

I have fixed a lot, so here is a summary of the sources and deploy commands that changed. Note that I used the region ap-northeast-1; if you use a different region, you will need to change it.

MySQL related

mysql/Dockerfile


FROM mysql/mysql-server:5.7

COPY ./db/mysql_init/setup.sql /docker-entrypoint-initdb.d/setup.sql

RUN chown -R mysql /var/lib/mysql && \
    chgrp -R mysql /var/lib/mysql

Since setup.sql is unchanged, it is omitted.

db-service related

db-service/Dockerfile


FROM python:3.6

# Working directory inside the container
WORKDIR /usr/src/

# Install libraries
RUN pip install flask mysql-connector-python

# Copy the sources into the container
COPY ./src/server.py /usr/src/server.py
COPY ./src/students.py /usr/src/students.py

CMD python ./server.py

db-service/src/server.py has not changed, so it is omitted.

db-service/src/students.py


import json
import mysql.connector as mydb
import os


def get_students():
    conn = mydb.connect(user=os.environ['DB_USER'], passwd=os.environ['DB_PASS'],
                        host=os.environ['DB_HOST'], port=os.environ['DB_PORT'])

    cur = conn.cursor()
    sql_qry = 'select id, name, score from DB01.students;'
    cur.execute(sql_qry)

    rows = cur.fetchall()

    results = [{"id": i[0], "name": i[1], "score": i[2]} for i in rows]
    return_json = json.dumps({"students": results})
    cur.close()
    conn.close()

    return return_json

apl-service related

apl-service/Dockerfile


FROM python:3.8

# Working directory inside the container
WORKDIR /usr/src/

# Install libraries
RUN pip install requests flask

# Copy the sources into the container
COPY ./src/server.py /usr/src/server.py
COPY ./src/results.py /usr/src/results.py

CMD python ./server.py

apl-service/src/server.py


from flask import Flask, request, abort, render_template, send_from_directory
from results import get_results

API_URI = 'http://db-service:5001/students'

app = Flask(__name__)


@app.route('/results', methods=['GET'])
def local_endpoint():
    return get_results(API_URI)


if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5002)

apl-service/src/results.py has not changed, so it is omitted.

docker-compose (db-service, mysql)

docker-compose.yml


# aws_account_id and region need to be changed to match your environment
version: '3'

services:
  mysql:
    image: <aws_account_id>.dkr.ecr.ap-northeast-1.amazonaws.com/python-mysql/mysql
    ports:
      - "3306:3306"
    environment:
      MYSQL_USER: user
      MYSQL_PASSWORD: password
      MYSQL_ROOT_PASSWORD: password
      MYSQL_DATABASE: DB01
    command: mysqld --character-set-server=utf8 --collation-server=utf8_unicode_ci --skip-character-set-client-handshake
    logging:
      driver: awslogs
      options: 
        awslogs-group: python-docker
        awslogs-region: ap-northeast-1
        awslogs-stream-prefix: mysql

  db-service:
    image: <aws_account_id>.dkr.ecr.ap-northeast-1.amazonaws.com/python-mysql/db-service
    ports:
      - "5001:5001"
    environment:
      DB_HOST: 127.0.0.1
      DB_USER: "user"
      DB_PASS: "password"
      DB_PORT: "3306"
    logging:
      driver: awslogs
      options: 
        awslogs-group: python-docker
        awslogs-region: ap-northeast-1
        awslogs-stream-prefix: db-service

docker-compose (apl-service)

docker-compose.yml


# aws_account_id and region need to be changed to match your environment
version: '3'

services:
  apl-service:
    image: <aws_account_id>.dkr.ecr.ap-northeast-1.amazonaws.com/python-mysql/apl-service
    ports:
      - "5002:5002"
    environment:
      BACKEND_HOST: "backend.sample"
      BACKEND_PORT: "5001"
    logging:
      driver: awslogs
      options: 
        awslogs-group: python-docker
        awslogs-region: ap-northeast-1
        awslogs-stream-prefix: apl-service

Deploy command

We will cover the main parts from step 3 of the tutorial (creating a cluster of Fargate tasks with the Amazon ECS CLI); the commands with comments are the ones that changed. Replace the angle-bracket placeholders and the region with your actual values.

Step 3


$ ecs-cli up --cluster-config sample-config --ecs-profile sample-profile
$ aws ec2 describe-security-groups --filters Name=vpc-id,Values=<vpc-id> --region ap-northeast-1
$ aws ec2 authorize-security-group-ingress --group-id <security-group-id> --protocol tcp --port 80 --cidr 0.0.0.0/0 --region ap-northeast-1
# 5001,5002 also allowed
$ aws ec2 authorize-security-group-ingress --group-id <security-group-id> --protocol tcp --port 5001 --cidr 0.0.0.0/0 --region ap-northeast-1
$ aws ec2 authorize-security-group-ingress --group-id <security-group-id> --protocol tcp --port 5002 --cidr 0.0.0.0/0 --region ap-northeast-1

Step 5


# Deploy backend: namespace creation options added for service discovery
$ ecs-cli compose --project-name backend service up --create-log-groups --cluster-config sample-config --private-dns-namespace sample --vpc <vpc-id> --enable-service-discovery --ecs-profile sample-profile
# Deploy apl: same as backend
$ ecs-cli compose --project-name apl service up --create-log-groups --cluster-config sample-config --private-dns-namespace sample --vpc <vpc-id> --enable-service-discovery --ecs-profile sample-profile

Step 6


$ ecs-cli compose --project-name backend service up --private-dns-namespace sample --vpc <vpc-id> --enable-service-discovery --ecs-profile sample-profile
$ ecs-cli compose --project-name apl service up --private-dns-namespace sample --vpc <vpc-id> --enable-service-discovery --ecs-profile sample-profile

Step 10


$ ecs-cli compose --project-name backend service down --cluster-config sample-config --ecs-profile sample-profile
$ ecs-cli compose --project-name apl service down --cluster-config sample-config --ecs-profile sample-profile
$ ecs-cli down --force --cluster-config sample-config --ecs-profile sample-profile

What I learned (and what tripped me up)

- A concrete picture of running microservice apps on Fargate
- The mechanisms and methods for intra-task and inter-task communication in ECS
- Various restrictions, such as Fargate not supporting persistent storage volumes

Finally

I feel I deepened my understanding not only of Fargate but of microservices in general, since I built the microservices from scratch. I wrote a lot about the things that tripped me up, so it may have been a hard article to follow. It got long, but thank you for reading. I would be very grateful for any corrections or questions.

References

- Build a python (jupyter) + MySQL environment using docker-compose
- When you want to communicate between containers with AWS Fargate
- Using MySQL with Docker
