[PYTHON] streamlit tutorial Japanese translation

The following is a Japanese translation of the Streamlit tutorial: https://docs.streamlit.io/en/latest/tutorial/create_a_data_explorer_app.html#create-an-app

Create an app

First, create a new Python script called uber_pickups.py:


import streamlit as st
import pandas as pd
import numpy as np

st.title('Uber pickups in NYC')

Save the file and run it:

streamlit run uber_pickups.py

A new tab will open automatically in your browser.

Get the data

DATE_COLUMN = 'date/time'
DATA_URL = ('https://s3-us-west-2.amazonaws.com/'
            'streamlit-demo-data/uber-raw-data-sep14.csv.gz')

def load_data(nrows):
    data = pd.read_csv(DATA_URL, nrows=nrows)
    lowercase = lambda x: str(x).lower()
    data.rename(lowercase, axis='columns', inplace=True)
    data[DATE_COLUMN] = pd.to_datetime(data[DATE_COLUMN])
    return data

# Create a text element and let the reader know the data is loading.
data_load_state = st.text('Loading data...')
# Load 10,000 rows of data into the dataframe.
data = load_data(10000)
# Notify the reader that the data was successfully loaded.
data_load_state.text('Loading data...done!')

This function takes a single parameter, nrows, which specifies the number of rows you want to load into the data frame.
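
As a quick illustration (hypothetical, not part of the tutorial), you can pass a smaller nrows to work with just a sample while experimenting:

# Hypothetical: load only the first 1,000 rows for a fast preview.
sample = load_data(1000)
st.write(sample.head())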

In the upper right corner of the app, you'll see several buttons asking if you want to rerun the app. If you select Always Rerun, your changes will be displayed automatically each time you save.

Effortless caching

Add the @st.cache decorator just above the load_data declaration:

@st.cache
def load_data(nrows):
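
For reference, the complete cached function now looks like this (the body is unchanged from the Get the data section; only the decorator is new):

@st.cache
def load_data(nrows):
    data = pd.read_csv(DATA_URL, nrows=nrows)
    lowercase = lambda x: str(x).lower()
    data.rename(lowercase, axis='columns', inplace=True)
    data[DATE_COLUMN] = pd.to_datetime(data[DATE_COLUMN])
    return data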

Streamlit automatically reruns the app when you save the script. Since this is the first run with @st.cache, nothing appears to have changed yet. Let's play with the file a little more so you can feel the power of the cache.

Change the last line as follows:

# Before
data_load_state.text('Loading data...done!')

# After
data_load_state.text("Done! (using st.cache)")

Did you notice that the line you added appeared immediately? If you take a step back, this is actually quite surprising: something magical is happening behind the scenes, and it takes only one line of code to activate it.
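
A minimal sketch of what the cache does, assuming the decorated load_data above: when the script reruns, calls whose arguments Streamlit has already seen return the stored result instead of executing the function body again.

data_first = load_data(10000)  # first call: downloads and parses the CSV
data_again = load_data(10000)  # same argument: served from the cache, near-instant
data_small = load_data(5000)   # new argument: the function body runs once more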

Investigate raw data

Add the following to the end:

st.subheader('Raw data')
st.write(data)
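
st.write tries to render whatever you hand it, and for a DataFrame it draws an interactive table. If you prefer the explicit call, an equivalent variation (not in the tutorial) is:

st.dataframe(data)  # same interactive table as st.write(data)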

Draw a histogram

st.subheader('Number of pickups by hour')
hist_values = np.histogram(
    data[DATE_COLUMN].dt.hour, bins=24, range=(0,24))[0]
st.bar_chart(hist_values)

After a quick check, it seems that the busiest time is 17:00 (5 pm).
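
If you'd rather confirm that programmatically than eyeball the chart, a small optional check (not in the tutorial) does it:

busiest_hour = int(np.argmax(hist_values))  # index of the tallest of the 24 hourly bins
st.write(f'Busiest hour: {busiest_hour}:00')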

We used Streamlit's native st.bar_chart() method to draw this chart, but it's important to know that Streamlit also supports more complex charting libraries such as Altair, Bokeh, Plotly, and Matplotlib. See the supported chart libraries for the complete list.
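
For example, here is a hypothetical sketch of the same hourly histogram drawn with Altair and handed to st.altair_chart (assumes the altair package is installed; not part of the tutorial):

import altair as alt

hours = pd.DataFrame({'hour': data[DATE_COLUMN].dt.hour})
chart = alt.Chart(hours).mark_bar().encode(
    x=alt.X('hour:O', title='Hour of day'),
    y=alt.Y('count()', title='Number of pickups'),
)
st.altair_chart(chart, use_container_width=True)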

Plot the data on the map

Using a histogram of the Uber dataset, you identified the busiest pickup times, but what if you want to know where pickups are concentrated across the city? You could display this data as a bar chart, but it would not be easy to interpret unless you are familiar with the city's latitude and longitude coordinates. Let's use Streamlit's st.map() function to overlay the data on a map of New York City and show where pickups are concentrated.

st.subheader('Map of all pickups')
st.map(data)
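
Note that st.map looks for latitude/longitude columns by name (lat or latitude, lon or longitude). The Uber CSV's Lat and Lon columns were lowercased to lat and lon by load_data, which is why st.map(data) works without any extra configuration.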

After drawing the histogram, we found that the busiest time for Uber pickups was 17:00. Let's redraw the map to show only the pickups concentrated at 17:00.

Add the following to the end.

hour_to_filter = 17
filtered_data = data[data[DATE_COLUMN].dt.hour == hour_to_filter]
st.subheader(f'Map of all pickups at {hour_to_filter}:00')
st.map(filtered_data)

Use the slider

# Before
hour_to_filter = 17

# After
hour_to_filter = st.slider('hour', 0, 23, 17)  # min: 0h, max: 23h, default: 17h
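
st.slider returns the currently selected value (an integer here), and Streamlit reruns the script each time the user moves the slider, so the filtered map below updates in real time.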

Use a checkbox to toggle the raw data

# Before
st.subheader('Raw data')
st.write(data)

# After
if st.checkbox('Show raw data'):
    st.subheader('Raw data')
    st.write(data)
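
st.checkbox returns True while the box is checked, so the raw-data table is rendered only when the reader asks for it. This is handy here because the full table takes up a lot of space in the app.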
