If you want to create a small web application in Python, you can use Flask or Bottle.
These frameworks use a Python decorator to declare which URL runs which function, so the mapping between routes and responses is very easy to read.
For example, the following Flask application implements a web server that returns "Hello, World!" when / is accessed over HTTP.
Quoted from Flask's official page:

```python
# https://flask.palletsprojects.com/en/1.1.x/quickstart/
from flask import Flask

app = Flask(__name__)


@app.route('/')
def hello_world():
    return 'Hello, World!'
```
Now, these web applications run easily enough when started locally for development.
However, to run them on another server in production you need to procure a publicly accessible server and run middleware such as nginx + `uwsgi`/`gunicorn`, so there is quite a lot to prepare between finishing development and starting operation.
The server in particular brings concerns that have nothing to do with the application itself: the running cost, dealing with the service becoming unavailable when the middleware goes down or the disk fills up, and keeping up with the end of security support for the OS and middleware. Ideally, you would rather not manage a server at all.
Starting from that motivation, I wanted to satisfy these two points:

- Use Python to build a web application with a description as simple as Flask/Bottle.
- Operate that web application without managing a server.

It turned out that both can be achieved by using Chalice, a microframework for AWS Lambda, so this article introduces that approach.
TL;DR

- Return text/html in Chalice's responses.
- Implement routes that return static files for local testing.
- When you need to return many kinds of content, consider running nginx locally and proxying to Chalice.
- Chalice's @app.route needs extra configuration to receive POSTs from an HTML form.
- Use CloudFront to route static files to an S3 origin and everything else to the API Gateway deployed by Chalice.
- API Gateway itself is always accessed as https://<FQDN>/<STAGE>/, but by using CloudFront's Origin Path, even top-level access (https://<FQDN>/) can reach the / of the API Gateway at a specific stage.
This article uses many of the resources available on AWS:

- API Gateway + AWS Lambda (automated by Chalice)
- CloudFront (the endpoint used for site access)
- Amazon S3 (location of static files such as images/CSS/JS)
- Route 53 (assigning a custom domain)
- Certificate Manager (issuing and managing the SSL certificate used by CloudFront)

The basic setup and usage of each service is not covered in this article.
An illustration of how they relate is as follows.
The amount of code is fairly large, so the entire project is published on GitHub.
https://github.com/t-kigi/chalice-lambda-webserver-example
The code is quoted from time to time below, but see this repository for the overall flow.
The development environment is Ubuntu 18.04.
```
$ pipenv --version
pipenv, version 2018.11.26
$ pipenv run chalice --version
chalice 1.20.0, python 3.8.2, linux 5.4.0-48-generic
```
I wrote about setting up Chalice in a previous article, so please refer to that. This time, the project was created with `chalice new-project server`.
Chalice's default behavior targets API use, so if you don't configure anything you get an `application/json` response. However, when returning a page to be displayed by a browser as a web application, it is better to return `text/html`. To do this, set the Content-Type response header to `text/html`. Implementing the Flask example above with Chalice looks like this.
The Flask example rewritten in Chalice (app.py):

```python
from chalice import Chalice, Response

app = Chalice(app_name='server')


@app.route('/')
def index():
    return Response(
        status_code=200,
        headers={'Content-Type': 'text/html'},
        body='Hello, World!')
```
With just this much, the contents of body are returned to the client as text/html.
In other words, you can behave like a template-driven web application by rendering a page with a template engine and putting the final result into body.
I use jinja2 as the template engine here, but any other template engine is fine.
This time, the directory structure is as follows (only the relevant parts are shown):

```
.
├── app.py          # Chalice's entry point
└── chalicelib      # all files to be deployed must be under chalicelib
    ├── __init__.py
    ├── common.py   # shared objects referenced from other modules
    ├── template.py # functions that load templates from chalicelib/templates
    └── templates   # put the template files under here
        ├── index.tpl
        └── search.tpl
```
chalicelib/common.py (excerpt):

```python
import os

from chalice import Chalice

# Chalice object shared by multiple modules
app = Chalice(app_name='server')

# paths to the chalicelib directory and the project root
chalicelib_dir = os.path.dirname(__file__)
project_dir = os.path.dirname(chalicelib_dir)
```
Settings for loading template files under chalicelib/templates (template.py):

```python
import os

from jinja2 import Environment, FileSystemLoader, select_autoescape

from chalicelib.common import project_dir

template_path = os.path.join(project_dir, 'chalicelib/templates')
loader = FileSystemLoader([template_path])
jinja2_env = Environment(
    loader=loader,
    autoescape=select_autoescape(['html', 'xml']))


def get(template_path):
    '''Get a template'''
    return jinja2_env.get_template(template_path)
```
An example of rendering chalicelib/templates/index.tpl from `app.py` is as follows.

app.py:

```python
from chalice import Response

from chalicelib import template
from chalicelib.common import app


def html_render(template_path, **params):
    '''Return an HTML rendering response'''
    tpl = template.get(template_path)
    return Response(
        status_code=200,
        headers={'Content-Type': 'text/html'},
        body=tpl.render(**params))


@app.route('/')
def index():
    '''Return the top page'''
    return html_render('index.tpl')
```
Use the `chalice local` command for verification in the local environment.
However, I often want different settings for the local environment and the deployed environment, so I recommend creating a local stage for local verification. With a local stage you can, for example:

- Access AWS resources with a named profile only in the local stage (an IAM Role is used in production, so no profile is needed there); a sketch of this appears after the config example below.
- Define routes that are valid only in the local stage.
To create the local stage, add a local entry to stages in .chalice/config.json:
```json
{
    "version": "2.0",
    "app_name": "server",
    "stages": {
        "dev": {
            "api_gateway_stage": "v1"
        },
        "local": {
            "environment_variables": {
                "STAGE": "local"
            }
        }
    }
}
```
After adding this, start the server with `chalice local --stage local` (if it is already running, stop the process and restart it). Now the environment variable STAGE is defined with the value local only when the app is run with `--stage local`.
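As an example of what this enables, here is a minimal sketch of switching how AWS resources are accessed based on STAGE. The use of boto3 and the profile name `dev-profile` are assumptions for illustration; they are not part of the sample repository.

```python
# chalicelib/aws.py (illustrative sketch, not part of the sample repository)
import os

import boto3

stage = os.environ.get('STAGE', 'dev')

if stage == 'local':
    # Local verification: use a named profile from ~/.aws/credentials.
    # 'dev-profile' is a hypothetical profile name.
    session = boto3.Session(profile_name='dev-profile')
else:
    # Deployed Lambda: credentials come from the function's IAM Role,
    # so the default session is enough.
    session = boto3.Session()

s3 = session.client('s3')
```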
The static files used during development are mainly images, CSS, and JavaScript. So I decide on paths to place them under, and when those paths are accessed, the files are read and returned.
Since these files are eventually uploaded to S3 and I don't want them included in the Lambda upload, they are placed outside chalicelib. The prepared directory structure is as follows.
Only the parts relevant to the static files:

```
.
├── server
│   ├── app.py
│   ├── chalicelib
│   │   ├── __init__.py
│   │   ├── common.py
│   │   └── staticfiles.py
│   └── static -> ../static
└── static
    ├── css
    │   └── style.css
    ├── images
    │   ├── sample.png
    │   └── sub
    │       └── sample.jpg
    └── js
        └── index.js
```
Running the server with `chalice local --stage local` serves it on localhost:8000. So here I want static/images/sample.png to be returned when http://localhost:8000/images/sample.png is accessed.
To achieve this, I prepared chalicelib/staticfiles.py.
chalicelib/staticfiles.py:

```python
#!/usr/bin/python
# -*- coding: utf-8 -*-
'''
Returns static files when running under chalice local.
In production these paths are handled by CloudFront -> S3,
so this module is intended to be used only during development.
'''

import os

from chalice import Response
from chalice import NotFoundError

from chalicelib.common import app, project_dir


def static_filepath(directory, file, subdirs=[]):
    '''Build the (request path, local path) pair for a static file'''
    pathes = [f for f in ([directory] + subdirs + [file]) if f is not None]
    filepath = os.path.join(*pathes)
    localpath = os.path.join(project_dir, 'static', filepath)
    return (f'/{filepath}', localpath)


def static_content_type(filepath):
    '''Return the Content-Type for a static file'''
    (_, suffix) = os.path.splitext(filepath.lower())
    if suffix in ['.png', '.ico']:
        return 'image/png'
    if suffix in ['.jpg', '.jpeg']:
        return 'image/jpeg'
    if suffix in ['.css']:
        return 'text/css'
    if suffix in ['.js']:
        return 'text/javascript'
    return 'application/json'


def load_static(access, filepath, binary=False):
    '''Read a static file and return it as a response'''
    try:
        with open(filepath, 'rb' if binary else 'r') as fp:
            data = fp.read()
        return Response(
            body=data, status_code=200,
            headers={'Content-Type': static_content_type(filepath)})
    except Exception:
        raise NotFoundError(access)


@app.route('/favicon.ico', content_types=["*/*"])
def favicon():
    (access, filepath) = static_filepath(None, 'favicon.ico')
    return load_static(access, filepath, binary=True)


@app.route('/images/{file}', content_types=["*/*"])
@app.route('/images/{dir1}/{file}', content_types=["*/*"])
def images(dir1=None, file=None):
    '''
    Image file responses for the local environment
    (not used when deployed to Lambda; CloudFront routes these paths to S3)
    '''
    (access, filepath) = static_filepath('images', file, [dir1])
    return load_static(access, filepath, binary=True)


@app.route('/css/{file}', content_types=["*/*"])
@app.route('/css/{dir1}/{file}', content_types=["*/*"])
def css(dir1=None, file=None):
    '''
    CSS file responses for the local environment
    (not used when deployed to Lambda; CloudFront routes these paths to S3)
    '''
    (access, filepath) = static_filepath('css', file, [dir1])
    return load_static(access, filepath)


@app.route('/js/{file}', content_types=["*/*"])
@app.route('/js/{dir1}/{file}', content_types=["*/*"])
def js(dir1=None, file=None):
    '''
    JS file responses for the local environment
    (not used when deployed to Lambda; CloudFront routes these paths to S3)
    '''
    (access, filepath) = static_filepath('js', file, [dir1])
    return load_static(access, filepath)
```
This module has functions that read files under static and return them as responses, but only for specific paths. To enable it only in the local stage, import it conditionally:
app.py (excerpt):

```python
import os

stage = os.environ.get('STAGE', 'dev')

if stage == 'local':
    # used only for local verification
    from chalicelib import staticfiles  # noqa
```
Importing staticfiles registers its @app.route handlers on the shared app object; the `# noqa` comment suppresses the linter warning that the module is not referenced directly in app.py.
If you access http://localhost:8000 in this state and the images/CSS/JS are applied properly, static resource loading is working.
```
$ pipenv run chalice local --stage local
Serving on http://127.0.0.1:8000
127.0.0.1 - - [23/Sep/2020 17:56:13] "GET / HTTP/1.1" 200 -
127.0.0.1 - - [23/Sep/2020 17:56:14] "GET /css/style.css HTTP/1.1" 200 -
127.0.0.1 - - [23/Sep/2020 17:56:14] "GET /images/sample.png HTTP/1.1" 200 -
127.0.0.1 - - [23/Sep/2020 17:56:14] "GET /js/index.js HTTP/1.1" 200 -
```
The screen displayed by accessing http://localhost:8000 with Chrome is as follows.
With just this much, simple verification can be done locally.
Adding paths one by one like this does not scale as the number of extensions or target paths grows.
In that case, you can run nginx locally with a server configuration whose location blocks serve static as the document root only for specific paths, and keep accessing http://localhost/ for verification.
This article does not explain how to use nginx, but if things get complicated it is worth considering, so here is a reference configuration.
Example nginx.conf excerpt:

```nginx
location / {
    # normal access goes to chalice local
    proxy_pass http://localhost:8000;
}

location ~ ^/(images|css|js)/ {
    # serve static resources from a fixed path
    root (project path)/static;
}
```
When POSTing from a browser form, the Content-Type sent is `application/x-www-form-urlencoded` or `multipart/form-data`.
Chalice does not accept these by default, so you need to list them in the content_types of the corresponding route.
chalicelib/common.py (excerpt):

```python
from urllib import parse

post_content_types = [
    'application/x-www-form-urlencoded',
    'multipart/form-data'
]


def post_params():
    '''Return the parameters sent to a POST method as a dict'''
    def to_s(s):
        try:
            return s.decode()
        except Exception:
            return s

    # convert keys and values to str and return them
    body = app.current_request.raw_body
    parsed = dict(parse.parse_qsl(body))
    return {to_s(k): to_s(v) for (k, v) in parsed.items()}
```
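For reference, parse.parse_qsl simply splits a urlencoded body into key/value pairs, which is why the helper above handles the keyword field of the sample page's search form. A quick interactive check (the page=2 parameter is only an example):

```python
from urllib import parse

# raw_body arrives as bytes when posted from an HTML form
parse.parse_qsl(b'keyword=chalice&page=2')
# -> [(b'keyword', b'chalice'), (b'page', b'2')]
```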
app.py (excerpt):

```python
from chalicelib import common


@app.route('/search', methods=['POST'],
           content_types=common.post_content_types)
def search():
    '''Search the internal database mock for matches'''
    params = common.post_params()  # get the form parameters as a dict
    # ...look up params['keyword'] in the database mock and render search.tpl
    # with the results (see the repository for the full implementation)
```
If you have an IAM user with broad privileges and programmatic access, you can deploy the application with a single `chalice deploy` command.
If that is not the case, or if you want to use CI/CD tools, you can use the `chalice package` command to create artifacts that can be deployed with CloudFormation.
A concrete example follows. Since `--profile` and `--region` are omitted, add them as needed.
Example of using chalice package:

```
# S3 bucket to upload the CloudFormation resources to
$ BUCKET=<S3 bucket for uploading CloudFormation resources>
# convert the app into a CloudFormation-deployable package
$ pipenv run chalice package build
$ cd build
# package and upload to S3
$ aws cloudformation package --template-file sam.json \
    --s3-bucket ${BUCKET} --output-template-file chalice-webapp.yml
# deploy with CloudFormation
$ aws cloudformation deploy --template-file chalice-webapp.yml --stack-name <STACK name> --capabilities CAPABILITY_IAM
```
This time, we use the result of deploying with the `chalice deploy` command. Since `--stage` is not specified, the `dev` stage settings are used here (and again `--region` and `--profile` are omitted).
```
$ pipenv run chalice deploy
Creating deployment package.
Reusing existing deployment package.
Creating IAM role: server-dev
Creating lambda function: server-dev
Creating Rest API
Resources deployed:
  - Lambda ARN: arn:aws:lambda:ap-northeast-1:***********:function:server-dev
  - Rest API URL: https://**********.execute-api.ap-northeast-1.amazonaws.com/v1/
```
Accessing the deployed URL returns text/html as expected.
```
$ curl https://**********.execute-api.ap-northeast-1.amazonaws.com/v1/
<!DOCTYPE html>
<html lang="ja">
  <head>
    <title>HELLO</title>
    <meta charset="UTF-8">
    <link rel="stylesheet" href="/css/style.css">
  </head>
  <body>
    <h1>Lambda Web Hosting</h1>
    <p>A sample that works like Flask/Bottle, with AWS Lambda + Chalice (Python) as the backend.</p>
    <h2>Load Static Files</h2>
    <p>The h1 tag above is styled by /css/style.css, which is loaded as a separate file.</p>
    <p>The image is as follows.</p>
    <img src="/images/sample.png"/><br>
    <p>JavaScript is also loaded.</p>
    <span id="counter">0</span><br>
    <button id="button" type="button">Counter (press the button to increment the number)</button>
    <h2>Form Post</h2>
    <p>Search the internal database mock for matches.</p>
    <form method="POST" action="/search">
      <label>Search keyword: </label>
      <input type="text" name="keyword" value="" />
      <br>
      <button type="submit">Search</button>
    </form>
    <script src="/js/index.js"></script>
  </body>
</html>
```
Create an S3 bucket somewhere so that CloudFront can serve its contents.
This time I prepared a bucket named sample-bucket.t-kigi.net.
For example, the static files can be uploaded/updated in one batch with `awscli` as follows.
```
# move to the root of the static files
$ cd static
# copy all files
$ aws s3 sync . s3://sample-bucket.t-kigi.net/
upload: css/style.css to s3://sample-bucket.t-kigi.net/css/style.css
upload: js/index.js to s3://sample-bucket.t-kigi.net/js/index.js
upload: ./favicon.ico to s3://sample-bucket.t-kigi.net/favicon.ico
upload: images/sub/sample.jpg to s3://sample-bucket.t-kigi.net/images/sub/sample.jpg
upload: images/sample.png to s3://sample-bucket.t-kigi.net/images/sample.png
```
Configure a CloudFront distribution as the entry point of the website as follows. All steps here use the Management Console.

- Create an SSL certificate for the target domain in **us-east-1** with Certificate Manager.
    - CloudFront is not tied to a region, so all certificates it uses must be created in us-east-1.
    - This time the certificate is validated (and automatically renewed) by adding a record proving domain ownership to the hosted zone managed by Route 53.
- Select Create Distribution > Web to create a distribution and set the following:
    - Enter the API Gateway **FQDN** as the Origin Domain Name (e.g. **********.execute-api.ap-northeast-1.amazonaws.com).
        - Note: if you paste the whole URL, this and the Origin Path below are filled in with appropriate values.
    - Put the API Gateway stage in Origin Path so that top-page access works (e.g. /v1).
    - Set Viewer Protocol Policy to Redirect HTTP to HTTPS (API Gateway only accepts HTTPS).
    - Set Allowed HTTP Methods to "GET, HEAD, OPTIONS, PUT, POST, PATCH, DELETE" so all methods reach the application server.
    - In Cache and origin request settings, specify "Managed-CachingDisabled" as the Cache Policy (the CDN is not used as a cache server at this stage).
    - Specify the FQDN of the prepared domain (here lambdasite.t-kigi.net) in Alternate Domain Names under Distribution Settings.
    - Specify the SSL certificate created above for SSL Certificate.
        - Immediately after creating the certificate it may not appear in the options; in that case, wait a little.
        - If it still does not appear, double-check that it was created in us-east-1.
    - **Do not enter anything in Default Root Object.**
        - If you set it, requests unrelated to it are still sent to API Gateway and access to the top page fails.
    - Other items can stay at the defaults or be set as you prefer.
- After the distribution is created, select its ID and open the Origins and Origin Groups tab.
    - Add the S3 bucket for static files prepared earlier with Create Origin.
    - Leave Origin Path blank (the top of the bucket maps to the top of the URL).
    - Set Restrict Bucket Access to Yes.
    - Select Create a New Identity to create a new identity for accessing S3 from the CDN.
    - Set Grant Read Permissions on Bucket to "Yes, Update Bucket Policy" to update the bucket policy.
- Open the Behaviors tab and route specific paths to a specific origin with Create Behavior.
    - Route /images/*, /css/*, /js/* and /favicon.ico directly to the S3 origin (see the image below).
    - Allowed HTTP Methods can be GET/HEAD (or whatever is appropriate).

The key origin settings are restated as a code sketch below.
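If you later want to script these settings instead of clicking through the console, the key origin settings map roughly onto the DistributionConfig structure used by boto3's CloudFront API. This is only a sketch under assumptions: the domain names are the placeholders used in this article, the origin access identity ID is invented, and a real DistributionConfig requires many more fields than shown here.

```python
# Illustrative fragment only: NOT a complete DistributionConfig.
# (boto3's cloudfront client would also need CallerReference, Comment, Enabled,
#  DefaultCacheBehavior details, Quantity fields, etc. on top of this.)

api_origin = {
    'Id': 'chalice-api',
    # FQDN of the API Gateway created by `chalice deploy`
    'DomainName': '**********.execute-api.ap-northeast-1.amazonaws.com',
    # prepending the stage lets "/" on CloudFront reach "/v1/" on API Gateway
    'OriginPath': '/v1',
    'CustomOriginConfig': {'OriginProtocolPolicy': 'https-only'},
}

static_origin = {
    'Id': 'static-s3',
    'DomainName': 'sample-bucket.t-kigi.net.s3.amazonaws.com',
    'OriginPath': '',  # top of the bucket = top of the URL
    # hypothetical origin access identity restricting bucket access to CloudFront
    'S3OriginConfig': {'OriginAccessIdentity': 'origin-access-identity/cloudfront/XXXXXXXX'},
}

# the default behavior targets the API origin; these path patterns go to S3 instead
static_path_patterns = ['/images/*', '/css/*', '/js/*', '/favicon.ico']
```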
Follow the above steps and wait until the CloudFront status becomes Deployed. It takes about 10 to 20 minutes.
The sample site deployed with the above steps is as follows.
https://lambdasite.t-kigi.net
This procedure forwards CloudFront's / to API Gateway's /v1/, so top-level access to the site can be handled; for example, a request for https://lambdasite.t-kigi.net/search reaches /v1/search on the API Gateway.
If you want /v1 to be forwarded to /v1 as-is, simply leave Origin Path blank in the origin settings.
The default Lambda concurrency limit is 1,000, so combined with the CDN cache you can expect to handle a fair amount of concurrent traffic.
NOTE: prices are those of the ap-northeast-1 (Tokyo) region at the time of writing, September 23, 2020.

The cost of leaving a t2.micro (the instance covered by the AWS 1-year free tier) running for 30 days (one month):

- Instance running cost: 10.944 USD (0.0152 USD/hour * 24 hours * 30 days)
- Minimum EBS volume (8 GB) for the Amazon Linux 2 AMI: 0.96 USD (0.12 USD/GB-month * 8)
- Plus data transfer out of AWS (0.114 USD/GB)

For a small system this comes to roughly 1,200 to 1,300 yen (at the time of writing). It is free for 12 months after creating an AWS account, but once you exceed that, or if you create multiple accounts within one company and consolidate them, the free tier disappears (from experience). Moreover, this setup has no redundancy; it assumes that if the instance goes down you deal with it manually. To make it redundant you need two (or more) servers or an ALB in front (roughly +20 USD per month), so cost quickly becomes an issue.
On the other hand, with AWS Lambda:

- 1 million requests and 400,000 GB-seconds of compute time are free every month (regardless of how old your account is).
- If the application runs with 128 MB of memory, roughly 37 days' worth of cumulative Lambda run time is free per month (400,000 GB-seconds ÷ 0.125 GB = 3.2 million seconds).
- Beyond 1 million requests, each additional million costs about 0.2 USD.
- S3 charges are also low: about 0.37 USD per million GET requests and 0.025 USD/GB-month of storage.
- Data transfer out of AWS costs the same through CloudFront (0.114 USD/GB).
- On top of that, small charges for API Gateway, Route 53 and so on apply.

A quick recomputation of these figures is sketched below.
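To double-check the arithmetic, here is a small script that recomputes the figures using the prices quoted in this article (September 2020, ap-northeast-1); treat it as an illustration rather than a pricing calculator.

```python
# Rough cost comparison using the prices quoted in this article.

# EC2 baseline: a t2.micro left running for 30 days plus its 8 GB EBS volume
ec2_instance = 0.0152 * 24 * 30  # USD/hour * hours/day * days = 10.944 USD
ebs_volume = 0.12 * 8            # USD/GB-month * GB           = 0.96 USD
print(f'EC2 t2.micro + EBS: {ec2_instance + ebs_volume:.2f} USD / month')

# Lambda free tier: 400,000 GB-seconds per month with a 128 MB function
free_seconds = 400_000 / 0.125   # 3,200,000 seconds of run time
print(f'Free Lambda run time: {free_seconds / 86400:.1f} days / month')
```

This prints about 11.90 USD per month for the always-on server versus roughly 37 days of free 128 MB Lambda run time per month.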
With this architecture, a server that receives infrequent access can be operated for roughly 1 to 2 USD per month. The simple comparison already shows a big advantage, and it is also worth noting that every AWS service used here is itself redundant. Handling tens of thousands of requests per second entirely with Lambda may be difficult, but with CloudFront's cache and Lambda's parallel execution you can plausibly serve hundreds of requests per second without any special effort. The downside of the Lambda model is that a function may take a while to start for the first request after a period of no access, so response time is not always stable.
If your website consists only of static content, CloudFront + S3 alone is enough. Going further, a more modern way to use this stack is to put the various static contents in S3, provide an API via API Gateway, and connect only some paths through CloudFront directly to API Gateway, giving you an SPA (Single Page Application) that does not have to worry about CORS.
This time, however, there was a requirement to run a template engine over data handled on the back end, so I designed AWS Lambda as the default origin and routed the traffic there.
I have introduced how to implement a Flask/Bottle-style web application serverlessly with Chalice + AWS Lambda and other AWS resources, and how to put it into an actual production environment. Because a single service ends up being built from several AWS services, this article is probably hard to follow without some prior knowledge of them.
Still, in the sense of building a web application without managing servers, the result is an environment where you can concentrate on writing code. It may look like a detour to have to understand servers and each piece of infrastructure precisely because you are not managing a server, but personally I think that knowledge is worth learning in this era.
Based on the project introduced here, some possible extensions are:

- Use DynamoDB as the data store on the application side.
    - Since there are some restrictions on using an RDB here, a Lambda + DynamoDB implementation may be good teaching material.
- Operate the cache efficiently with CloudFront.
    - This time the API cache is off and the static resource cache is on.
    - Depending on how the site is built and configured, a CDN cache can leak other users' information, so when using it in production, consider the relationship between paths and caching carefully and test before use.
- Introduce an authentication mechanism.
    - Right now, a response is returned even if you access the API Gateway URI directly.
    - You can restrict connections by adding an API Key to API Gateway, or by checking the source IP address from the X-Forwarded-For header in Lambda (a sketch follows this list).
    - You can build more restricted sites by using CloudFront Signed Cookies.
    - If you have a separate IDaaS, you can use Chalice's Authorizer mechanism.
- Implement the web application in a language other than Python.
    - If you rewrite the part that Chalice handles in your favorite language/framework, you can do the same thing.
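As a concrete example of the source-IP idea above, here is a minimal sketch of a route guard that checks the X-Forwarded-For header in Chalice. The /admin route, the require_allowed_ip helper, and the allowed prefix 203.0.113. are all made up for illustration; note also that X-Forwarded-For can be spoofed before the first trusted proxy, so treat this as a light restriction rather than real authentication.

```python
from chalice import ForbiddenError

from chalicelib.common import app

# hypothetical allow list for illustration
ALLOWED_PREFIXES = ('203.0.113.',)


def require_allowed_ip():
    '''Raise a 403 error unless the first X-Forwarded-For entry is allowed.'''
    xff = app.current_request.headers.get('x-forwarded-for', '')
    client_ip = xff.split(',')[0].strip()
    if not client_ip.startswith(ALLOWED_PREFIXES):
        raise ForbiddenError('access denied')


@app.route('/admin')
def admin():
    require_allowed_ip()
    return html_render('index.tpl')  # html_render as defined earlier in app.py
```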
By adding points like these, you can build an environment that stands up to more serious production use.