(◎◎) {Let's let Python do the boring things} ......... {Hey? Let's let Python do the homework} (゜) (゜)

Have you all let Python do the boring stuff for you?

I hadn't.

You know the book: the one with a C-3PO-like robot standing on the grass on the cover.


It's famous as a good Python book ("Automate the Boring Stuff with Python"), and many of you have probably read it.

So, has anyone who read this book actually made Python do something really boring for them? Surprisingly few, I suspect. Many people are satisfied just with reading it, and I was one of them: I had read it once, about a year ago, and left it at that.

AI training started

Recently, AI has become quite the buzzword, and it seems various companies are starting to adopt it. The company I work for is no exception, and it has begun offering AI training.

During the training, I was given an assignment: build a machine learning model myself and use it to predict handwritten characters.

Challenge flow


1. Modeling

Define the deep learning model: how many layers to use, how many nodes in each layer, and so on. This is where the overall architecture is decided and the model is fine-tuned.

2. Learning

Train the model on data (training data) in which images of handwritten characters are paired with their correct labels. Part of the training data is held out as validation data to measure a provisional prediction score (see 3. Prediction). The model with the best provisional score is used for the subsequent steps.

3. Prediction:

Predict the handwritten characters by passing data that contains only the images (the test data). The predictions are output as a CSV file.

4. Submission:

Upload the predicted CSV file to the web server prepared by the training company via a browser.

5. Result receipt:

The accuracy of the prediction result is displayed on the web server.

The challenge was to reach a passing accuracy by repeating steps 1-5 above.

It turns into a grind

After repeating this several times, I started to feel it had become routine work. Once the outline of the model is settled, all that's left is to tune its parameters and repeat learning → prediction → submission. And the waiting time is very long!! I began to wonder whether this could be automated.

Let's automate it with Python

Let's let Python do the boring things.

Organizing the task flow again


Modeling / learning / prediction

These steps are done in Jupyter Notebook, an interactive Python execution environment. I figured automation would be possible by turning the code written in the notebook into a Python class/function and passing the tuning parameters to it as arguments.

How to convert Jupyter Notebook code to a Python file: https://qiita.com/abts/items/25bb611b6d83e646abdd
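
The gist is just to wrap the notebook cells in a function whose tuning knobs become arguments (jupyter nbconvert --to script can dump a notebook to a .py file as a starting point). A minimal sketch of the idea, with illustrative names rather than the real training code:

def run_experiment(config):
    """ Stand-in for the real modeling / learning / prediction code """
    print('training with', config)
    return 0.99  # would return the provisional validation accuracy


if __name__ == '__main__':
    for zoom_range in (0.1, 0.2, 0.3):
        run_experiment({'ZOOM_RANGE': zoom_range})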

Submission / result receipt

The CSV file output by the prediction step is uploaded to the submission site via Google Chrome. The browser can be automated by using a Python library called Selenium.

About Selenium https://qiita.com/Chanmoro/items/9a3c86bb465c1cce738a
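
As a taste of what Selenium does, here is a minimal example (it assumes chromedriver is installed and on your PATH; the URL is just an example):

from selenium import webdriver

browser = webdriver.Chrome()  # launches Chrome through the chromedriver found on PATH
browser.get('https://www.kaggle.com/c/digit-recognizer')
print(browser.title)  # confirm the page actually loaded
browser.quit()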

Constraint: Only 5 files can be uploaded per day

There was a limit of at most five prediction CSV uploads per day. Without it, simply looping through the whole process would have been enough... Some ingenuity is needed here.

Automation flow


What humans do

- Define dozens or hundreds of sets of parameters to pass to the model in a JSON file in advance.
- Watch manga or YouTube until all the given parameters have been processed.
- Once everything has finished, check the results; if the goal has been achieved, stop, otherwise work out the next parameters based on the results.

What Python does

Modeling / learning / prediction

This is a loop process.

- Get the parameters for the model from the JSON file
- Modeling
- Learning (the provisional prediction accuracy calculated here is saved to a CSV)
- Prediction (one result CSV is created per loop iteration)

Submission / receipt of results

Once a day, after midnight, this loops just five times. Five is the maximum number of submissions per day.

- First, check the CSV that stores the provisional prediction scores and extract the top five prediction CSVs.
- Submit the extracted CSVs, one per loop iteration.
- Get the prediction score displayed after upload and write it to a CSV.

Implementation (coding)

Before that

Naturally, I can't show the training company's upload site, so this time I'll use Kaggle's Digit Recognizer instead. The basic flow doesn't change. Kaggle does have an API for submissions, but I won't touch it this time (-_-). Please think of this as a re-enactment.
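
For reference only, since I deliberately don't use it here: the official kaggle package can submit from Python without a browser. A rough sketch, assuming pip install kaggle and an API token in ~/.kaggle/:

from kaggle.api.kaggle_api_extended import KaggleApi

api = KaggleApi()
api.authenticate()  # reads the API token from ~/.kaggle/kaggle.json
api.competition_submit('submission.csv', 'submitted via API', 'digit-recognizer')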

Let me briefly explain the code I actually used.

Code

Machine learning class

ai.py


from itertools import product
import os

import pandas as pd
import numpy as np

np.random.seed(2)

from keras.utils.np_utils import to_categorical  # convert to one-hot-encoding
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPool2D
from keras.optimizers import RMSprop
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import ReduceLROnPlateau
from sklearn.model_selection import train_test_split


class MnistModel(object):
    def __init__(self, train_data_name='train.csv', test_data_csv='test.csv'):
        input_dir = self.get_dir_path('input')
        train_data_path = os.path.join(input_dir, train_data_name)
        test_data_path = os.path.join(input_dir, test_data_csv)
        #Load the data
        self.train = pd.read_csv(train_data_path)
        self.test = pd.read_csv(test_data_path)

    def get_dir_path(self, dir_name):
        """ Function to directory path

        Params:
            dir_name(str): The name of directory
        Return:
            str: The directory path

        """
        base_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
        data_directory = os.path.join(base_dir, dir_name)
        return data_directory


    def learning_and_predict(self, csv_file_path, config):
        label = self.train["label"]
        train = self.train.drop(labels=["label"], axis=1)


        # Normalize the data
        train = train / 255.0
        test = self.test / 255.0

        # Reshape image in 3 dimensions (height = 28px, width = 28px , canal = 1)
        train = train.values.reshape(-1, 28, 28, 1)
        test = test.values.reshape(-1, 28, 28, 1)

        # Encode labels to one hot vectors (ex : 2 -> [0,0,1,0,0,0,0,0,0,0])
        label = to_categorical(label, num_classes=10)

        # Set the random seed
        random_seed = 2

        # Split the train and the validation set for the fitting
        X_train, X_val, Y_train, Y_val = train_test_split(train, label, test_size=0.1, random_state=random_seed)

        model = Sequential()
        model.add(Conv2D(filters=32, kernel_size=(5, 5), padding='Same',
                              activation='relu', input_shape=(28, 28, 1)))
        model.add(Conv2D(filters=32, kernel_size=(5, 5), padding='Same',
                              activation='relu'))
        model.add(MaxPool2D(pool_size=(2, 2)))
        model.add(Dropout(0.25))
        model.add(Conv2D(filters=64, kernel_size=(3, 3), padding='Same',
                              activation='relu'))
        model.add(Conv2D(filters=64, kernel_size=(3, 3), padding='Same',
                              activation='relu'))
        model.add(MaxPool2D(pool_size=(2, 2), strides=(2, 2)))
        model.add(Dropout(0.25))
        model.add(Flatten())
        model.add(Dense(256, activation="relu"))
        model.add(Dropout(0.5))
        model.add(Dense(10, activation="softmax"))

        optimizer = RMSprop(lr=0.001, rho=0.9, epsilon=1e-08, decay=0.0)
        model.compile(optimizer=optimizer, loss="categorical_crossentropy", metrics=["accuracy"])

        datagen = ImageDataGenerator(
            featurewise_center=False,  # set input mean to 0 over the dataset
            samplewise_center=False,  # set each sample mean to 0
            featurewise_std_normalization=False,  # divide inputs by std of the dataset
            samplewise_std_normalization=False,  # divide each input by its std
            zca_whitening=False,  # apply ZCA whitening
            rotation_range=config['ROTATION_RANGE'],  # randomly rotate images in the range (degrees, 0 to 180)
            zoom_range=config['ZOOM_RANGE'],  # Randomly zoom image
            width_shift_range=config['WIDTH_SHIFT_RANGE'],  # randomly shift images horizontally (fraction of total width)
            height_shift_range=config['HEIGHT_SHIFT_RANGE'],  # randomly shift images vertically (fraction of total height)
            horizontal_flip=False,  # randomly flip images
            vertical_flip=False)  # randomly flip images
        datagen.fit(X_train)

        learning_rate_reduction = ReduceLROnPlateau(
                                                    monitor='val_acc',
                                                    patience=3,
                                                    verbose=1,
                                                    factor=0.5,
                                                    min_lr=0.00001
        )

        epochs = 1
        batch_size = 86

        history = model.fit_generator(
            datagen.flow(
                X_train,
                Y_train,
                batch_size=batch_size
            ),
            epochs=epochs,
            validation_data=(
                X_val,
                Y_val
            ),
            verbose=2,
            steps_per_epoch=X_train.shape[0] // batch_size,
            callbacks=[learning_rate_reduction])

        results = model.predict(test)
        results = np.argmax(results, axis=1)
        results = pd.Series(results, name="Label")
        submission = pd.concat([pd.Series(range(1, 28001), name="ImageId"), results], axis=1)
        submission.to_csv(csv_file_path, index=False)

        return history.history['val_acc'][0]


This machine learning model was created with reference to (read: copied almost wholesale from) the following: https://www.kaggle.com/yassineghouzam/introduction-to-cnn-keras-0-997-top-6

Model creation, learning, and prediction are all done by the learning_and_predict method. config is the set of parameters for a single run.

datagen = ImageDataGenerator(
    ....
    rotation_range=config['ROTATION_RANGE'],  # randomly rotate images in the range (degrees, 0 to 180)
    zoom_range=config['ZOOM_RANGE'],  # Randomly zoom image
    width_shift_range=config['WIDTH_SHIFT_RANGE'],  # randomly shift images horizontally (fraction of total width)
    height_shift_range=config['HEIGHT_SHIFT_RANGE'],  # randomly shift images vertically (fraction of total height)
    ....
)
datagen.fit(X_train)

This time, it's the ImageDataGenerator parameters that get tuned.
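
For clarity, a single run looks roughly like this (the csv path is just an example; config is one parameter set like those in the JSON below):

model = MnistModel()
score = model.learning_and_predict(
    csv_file_path='results/202001010101.csv',
    config={'ROTATION_RANGE': 10, 'ZOOM_RANGE': 0.1,
            'WIDTH_SHIFT_RANGE': 0.1, 'HEIGHT_SHIFT_RANGE': 0.1}
)
print(score)  # provisional validation accuracy on the held-out data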

Parameter definition JSON

{
    "CONFIG": [
        {
            "ROTATION_RANGE": 10,
            "ZOOM_RANGE": 0.1,
            "WIDTH_SHIFT_RANGE": 0.1,
            "HEIGHT_SHIFT_RANGE": 0.1
        },
        {
            "ROTATION_RANGE": 10,
            "ZOOM_RANGE": 0.1,
            "WIDTH_SHIFT_RANGE": 0.1,
            "HEIGHT_SHIFT_RANGE": 0.2
        },
        {
            "ROTATION_RANGE": 10,
            "ZOOM_RANGE": 0.1,
            "WIDTH_SHIFT_RANGE": 0.1,
            "HEIGHT_SHIFT_RANGE": 0.3
        },
        {
            "ROTATION_RANGE": 10,
            "ZOOM_RANGE": 0.1,
            "WIDTH_SHIFT_RANGE": 0.1,
            "HEIGHT_SHIFT_RANGE": 0.4
        }
    ]
}
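
Writing these combinations out by hand gets tedious quickly, so here is one way to generate such a grid automatically (a quick sketch, not what I actually used, but it reproduces the JSON above):

import itertools
import json

grid = {
    'ROTATION_RANGE': [10],
    'ZOOM_RANGE': [0.1],
    'WIDTH_SHIFT_RANGE': [0.1],
    'HEIGHT_SHIFT_RANGE': [0.1, 0.2, 0.3, 0.4],
}
# Every combination of the candidate values, keeping the key order of the dict
configs = [dict(zip(grid, values)) for values in itertools.product(*grid.values())]
with open('config.json', 'w') as f:
    json.dump({'CONFIG': configs}, f, indent=4)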

Web upload class

This is a class for uploading the prediction CSVs to the website (operator.py, imported later as mnist_auto.models.operator). It is implemented mainly with Selenium.

import os
import time
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC


DRIVER_FILE_PATH = r'C:\Drivers\chromedriver_win32\chromedriver.exe'  # raw string so the backslashes survive
DRIVER_FILE_NAME = 'chromedriver'
TIMEOUT = 30


class BaseBrowserOperator(object):
    """ Base model of browser operator """
    def __init__(self, headless = False):
        driver_path = os.path.join(os.path.dirname(DRIVER_FILE_PATH), DRIVER_FILE_NAME)
        if headless:
            options = webdriver.ChromeOptions()
            options.add_argument('--headless')
            self.browser = webdriver.Chrome(driver_path, options=options)
        else:
            self.browser = webdriver.Chrome(driver_path)

    def __del__(self):
        self.browser.close()


class BrowserOperator(BaseBrowserOperator):
    """ The browser operator model """
    def go_to_page(self, url):
        """ Function to go to a page

        Params:
            url(str): The url of page
        """
        self.browser.get(url)

    def click(self, element_xpath, wait=True):
        """ Function to click the page's element

        TODO:
            implement finding element method other than xpath

        Params:
            element_xpath(str): The xpath of the element to be clicked
            wait(boolean): To disable waiting, pass False.
        """
        if wait:
            self.wait_element(element_xpath)
        self.browser.find_element_by_xpath(element_xpath).click()

    def input_value(self, element_xpath, value, wait=True):
        """ Function to input value to page's element

        TODO:
            implement finding element method other than xpath

        Params:
            element_xpath(str): The xpath of the element that receives the value
            value(str): The value to be input
            wait(boolean): To disable waiting, pass False.
        """
        if wait:
            self.wait_element(element_xpath)
        self.browser.find_element_by_xpath(element_xpath).send_keys(value)

    def get_value(self, element_xpath, wait=True):
        """ Function to get value from page's element

        Params:
            element_xpath(str): The xpath of the element to read
            wait(boolean): To disable waiting, pass False.
        Returns:
            str: Value from page's element
        """
        if wait:
            self.wait_element(element_xpath)
        return self.browser.find_element_by_xpath(element_xpath).text

    def import_cookies(self):
        """ Function to import cookie informations """
        cookies = self.browser.get_cookies()
        for cookie in cookies:
            self.browser.add_cookie({
                'name': cookie['name'],
                'value': cookie['value'],
                'domain': cookie['domain'],
            })

    def wait_element(self, element_xpath):
        """ Function to wait to appear element on page

        TODO:
            implement finding element method other than xpath

        Params:
            element_xpath(str): The xpath of element be used to wait
        """
        WebDriverWait(self.browser, TIMEOUT).until(EC.element_to_be_clickable((By.XPATH, element_xpath)))

    def wait_value(self, element_xpath, value, timeout=300):
        """ Function to wait until element's value equal the specific value

        Params:
            element_xpath(str): The xpath of element be used for wait
            value(str): The used value for wait
            timeout(int): The waiting timeout(sec)
        """
        state = ''
        sec = 0
        while not state == value:
            state = self.browser.find_element_by_xpath(element_xpath).text
            time.sleep(1)
            if sec > timeout:
                raise TimeoutError("Timeout!! The value wasn't available")
            sec += 1


Each method is used as follows.

Uploading a prediction CSV file to Kaggle can be achieved with the following code.

    def upload_csv_to_kaggle(self, file_path):
        """ Function to upload csv file to kaggle

        Params:
            file_path(str): The path of csv file uploaded
        """
        uploader = BrowserOperator()
        #Transition to kaggle page
        uploader.go_to_page(
            'https://www.kaggle.com/c/digit-recognizer'
        )
        #Click the Sign in button
        uploader.click(
            '/html/body/main/div[1]/div/div[1]/div[2]/div[2]/div[1]/a/div/button'
        )
        #Click Sign in with google
        uploader.click(
            '/html/body/main/div/div[1]/div/form/div[2]/div/div[1]/a/li/span'
        )
        #Enter your email address
        uploader.input_value(
            '/html/body/div[1]/div[1]/div[2]/div/div[2]/div/div/div[2]/div/div[1]/div/form/span/section/div/div/div[1]/div/div[1]/div/div[1]/input',
            GOOGLE_MAILADDRESS
        )
        uploader.click(
            '/html/body/div[1]/div[1]/div[2]/div/div[2]/div/div/div[2]/div/div[2]/div/div[1]/div'
        )
        #Enter password
        uploader.input_value(
            '/html/body/div[1]/div[1]/div[2]/div/div[2]/div/div/div[2]/div/div[1]/div/form/span/section/div/div/div[1]/div[1]/div/div/div/div/div[1]/div/div[1]/input',
            GOOGLE_PASSWORD
        )
        uploader.click(
            '/html/body/div[1]/div[1]/div[2]/div/div[2]/div/div/div[2]/div/div[2]/div/div[1]/div/span/span'
        )
        time.sleep(10) #It didn't work without sleep
        #Import cookies
        uploader.import_cookies()
        #Transition to CSV submission screen of digit recognizer
        uploader.go_to_page('https://www.kaggle.com/c/digit-recognizer/submit')
        time.sleep(30) #It didn't work without sleep
        #File upload
        uploader.input_value(
            '/html/body/main/div[1]/div/div[5]/div[2]/div/div[3]/div[2]/div[2]/div[1]/div[2]/div[1]/div/input',
            file_path,
            wait=False
        )
        #Enter a comment
        uploader.input_value(
            '/html/body/main/div[1]/div/div[5]/div[2]/div/div[3]/div[2]/div[2]/div[2]/div[2]/div/div/div/div[2]/div/div/textarea',
            'test'
        )
        uploader.wait_element(
            '/html/body/main/div[1]/div/div[5]/div[2]/div/div[3]/div[2]/div[2]/div[1]/div[2]/div[1]/ul/li/div/span[1]')
        uploader.click('/html/body/main/div[1]/div/div[5]/div[2]/div/div[3]/div[2]/div[2]/div[3]/div[2]/div/a')
        uploader.wait_value(
            '/html/body/main/div[1]/div/div[5]/div[2]/div/div[2]/div[2]/div/div[3]/div[1]/div[1]/span',
            'Complete'
        )

Task execution class

homeworker.py


import csv
import datetime
import json
import os
import time


import retrying


from mnist_auto.models.operator import BrowserOperator
from mnist_auto.models.ai import MnistModel


DEFAULT_DAILY_SCORES_DIR = 'daily_scores'
DEFAULT_CSVS_DIR = 'results'
DEFAULT_UPLOADED_SCORE_FILE_NAME = 'uploaded_score.csv'
DEFAULT_KAGGLE_DAILY_LIMIT = 5
COLUMN_DATETIME = 'DATETIME'
COLUMN_SCORE = 'SCORE'
GOOGLE_MAILADDRESS = '[email protected]'
GOOGLE_PASSWORD = 'password'



class BaseHomeworker(object):
    """ Base model of homeworker """

    def __init__(self, daily_score_dir_name, csvs_dir_name):
        self.daily_score_dir_path = self.get_dir_path(daily_score_dir_name)
        self.csvs_dir_path = self.get_dir_path(csvs_dir_name)

    def get_dir_path(self, dir_name):
        """ Function to get directory path
            if direcotry doen't exist, The direcotry will be made

        Params:
            dir_name(str): The directory name
        Returns:
            str: The directory path
        """
        base_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
        dir_path = os.path.join(base_dir, dir_name)
        if not os.path.exists(dir_path):
            os.mkdir(dir_path)
        return dir_path


class Homeworker(BaseHomeworker):
    """ The homeworker model """

    def __init__(self):
        super().__init__(daily_score_dir_name=DEFAULT_DAILY_SCORES_DIR, csvs_dir_name=DEFAULT_CSVS_DIR)
        self.uploaded_scores = []
        self.config = self.get_config()
        self.mnist_model = MnistModel()

    def get_config(self, config_file_name='config.json'):
        """ Function to get the configuration in json format

        Params:
            config_file_name(str): The name of the config file
        Return:
            dict: The dict including the config
        """
        base_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
        config_file_path = os.path.join(base_dir, config_file_name)
        with open(config_file_path, 'r') as json_file:
            return json.load(json_file)

    def write_daily_to_file(self, date_ymd, date_ymdhm, score):
        """ Function to write daily data to file

        Params:
            date_ymd(str): The formatted date (YYYYmmdd)
            date_ymdhm(str): The formatted date (YYYYmmddHHMM)
            score(float): The score (validation accuracy)
        """
        date_ymd = date_ymd + '.csv'
        file_path = os.path.join(self.daily_score_dir_path, date_ymd)
        with open(file_path, 'a', newline='') as csv_file:
            fieldnames = [COLUMN_DATETIME, COLUMN_SCORE]
            writer = csv.DictWriter(csv_file, fieldnames=fieldnames)
            writer.writerow({
                COLUMN_DATETIME: date_ymdhm,
                COLUMN_SCORE: score
            })


    def upload_csv_files(self, date_ymd, num=DEFAULT_KAGGLE_DAILY_LIMIT):
        """ Function to upload the designated number of csv files

        Params:
            date_ymd(str): The formatted date (YYYYmmdd)
            num(int): The number of files that will be uploaded
        """
        targets_uploaded = self.get_tops(date_ymd, num)
        for target in targets_uploaded:
            # The prediction csv files live in the results directory, named by datetime
            file_path = os.path.join(self.csvs_dir_path, target[COLUMN_DATETIME]) + '.csv'
            try:
                self.upload_csv_to_kaggle(file_path)
            except retrying.RetryError:
                continue

    def get_tops(self, date_ymd, num):
        """ Function to get the entries with the highest scores from the daily data

        Params:
            date_ymd(str): The formatted date (YYYYmmdd)
            num(int): The number of entries to get

        Return:
            list: The list of the highest-scoring entries
        """
        file_name = date_ymd + '.csv'
        file_path = os.path.join(self.daily_score_dir_path, file_name)
        scores = []
        with open(file_path, 'r') as csv_file:
            reader = csv.reader(csv_file)
            for row in reader:
                scores.append({
                    COLUMN_DATETIME: row[0],
                    COLUMN_SCORE: row[1]
                })
        # Sort numerically; the scores were written to the csv as strings
        sorted_list = sorted(scores, key=lambda x: float(x[COLUMN_SCORE]), reverse=True)
        if len(sorted_list) < num:
            num = len(sorted_list)
        return sorted_list[:num]

    @retrying.retry(stop_max_attempt_number=3)
    def upload_csv_to_kaggle(self, file_path):
        """ Function to upload csv file to kaggle

        Params:
            file_path(str): The path of csv file uploaded
        """
        uploader = BrowserOperator()
        uploader.go_to_page(
            'https://www.kaggle.com/c/digit-recognizer'
        )
        uploader.click(
            '/html/body/main/div[1]/div/div[1]/div[2]/div[2]/div[1]/a/div/button'
        )
        uploader.click(
            '/html/body/main/div/div[1]/div/form/div[2]/div/div[1]/a/li/span'
        )
        uploader.input_value(
            '/html/body/div[1]/div[1]/div[2]/div/div[2]/div/div/div[2]/div/div[1]/div/form/span/section/div/div/div[1]/div/div[1]/div/div[1]/input',
            GOOGLE_MAILADDRESS
        )
        uploader.click(
            '/html/body/div[1]/div[1]/div[2]/div/div[2]/div/div/div[2]/div/div[2]/div/div[1]/div'
        )
        uploader.input_value(
            '/html/body/div[1]/div[1]/div[2]/div/div[2]/div/div/div[2]/div/div[1]/div/form/span/section/div/div/div[1]/div[1]/div/div/div/div/div[1]/div/div[1]/input',
            GOOGLE_PASSWORD
        )
        uploader.click(
            '/html/body/div[1]/div[1]/div[2]/div/div[2]/div/div/div[2]/div/div[2]/div/div[1]/div/span/span'
        )
        time.sleep(10)
        uploader.import_cookies()
        uploader.go_to_page('https://www.kaggle.com/c/digit-recognizer/submit')
        time.sleep(30)
        uploader.input_value(
            '/html/body/main/div[1]/div/div[5]/div[2]/div/div[3]/div[2]/div[2]/div[1]/div[2]/div[1]/div/input',
            file_path,
            wait=False
        )
        uploader.input_value(
            '/html/body/main/div[1]/div/div[5]/div[2]/div/div[3]/div[2]/div[2]/div[2]/div[2]/div/div/div/div[2]/div/div/textarea',
            'test'
        )
        uploader.wait_element(
            '/html/body/main/div[1]/div/div[5]/div[2]/div/div[3]/div[2]/div[2]/div[1]/div[2]/div[1]/ul/li/div/span[1]')
        uploader.click('/html/body/main/div[1]/div/div[5]/div[2]/div/div[3]/div[2]/div[2]/div[3]/div[2]/div/a')
        uploader.wait_value(
            '/html/body/main/div[1]/div/div[5]/div[2]/div/div[2]/div[2]/div/div[3]/div[1]/div[1]/span',
            'Complete'
        )
        
    def work(self):
        """ Function to run a series of tasks
                1.  Learning and prediction with each parameter set written in the json file
                    (one prediction result is output as one csv)
                2.  Writing the learning score (val_acc) to another csv
                3.  Once a day, uploading the result csv files to kaggle in descending score order
        """
        last_upload_time = datetime.datetime.now()
        for config in self.config['CONFIG']:
            now = datetime.datetime.now()
            now_format_ymdhm = '{0:%Y%m%d%H%M}'.format(now)
            now_format_ymd = '{0:%Y%m%d}'.format(now)
            if (now - last_upload_time).days > 0:
                # Upload the top results of the day that has just ended
                self.upload_csv_files('{0:%Y%m%d}'.format(last_upload_time))
                last_upload_time = now
            csv_file_name = now_format_ymdhm + '.csv'
            csv_file_path = os.path.join(self.csvs_dir_path, csv_file_name)
            score = self.mnist_model.learning_and_predict(csv_file_path=csv_file_path, config=config)
            self.write_daily_to_file(date_ymd=now_format_ymd, date_ymdhm=now_format_ymdhm, score=score)

The work method is the main part.
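
For completeness, a minimal entry-point sketch (assuming homeworker.py is run directly):

if __name__ == '__main__':
    Homeworker().work()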

Result announcement

I was able to pass the assignment while doing almost nothing but sleeping! The number of uploads by each trainee was displayed, and mine alone was an order of magnitude higher lol


Let Python do the boring things for you too!! See you!
