This one is for anyone who wants to ship an API at explosive speed.
https://github.com/awslabs/chalice
Chalice, which bears the name of the Holy Grail, is a tool rather than a framework: given your application code, it configures AWS Lambda, AWS API Gateway, an IAM role, and so on, and deploys the whole thing to AWS. It lets you build an API at explosive speed and take it all the way to a production release.
Since we will, in any case, be building the API entirely on AWS services, the first step is to set up credentials.
$ mkdir ~/.aws
$ cat >> ~/.aws/config
[default]
aws_access_key_id=YOUR_ACCESS_KEY_HERE
aws_secret_access_key=YOUR_SECRET_ACCESS_KEY
region=ap-northeast-1
If you switch between multiple AWS accounts, we recommend setting the access keys in environment variables with direnv. Behavior differs by SDK, but boto3, which we use here, appears to read credentials and environment variables with the same precedence as the AWS CLI, so an article on how to configure the AWS CLI may be helpful here.
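As a minimal sketch, a per-project `.envrc` for direnv might look like the following (the key values are placeholders, exactly as in the config file above):

```shell
# .envrc (hypothetical): direnv exports these whenever you cd into the project
export AWS_ACCESS_KEY_ID=YOUR_ACCESS_KEY_HERE
export AWS_SECRET_ACCESS_KEY=YOUR_SECRET_ACCESS_KEY
export AWS_DEFAULT_REGION=ap-northeast-1
```

After creating the file, run `direnv allow` once so direnv will load it.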
Now let's dive into the command line:
$ pip install chalice
$ chalice new-project helloworld && cd helloworld
$ chalice deploy
Initial creation of lambda function.
Creating role
Creating deployment package.
Lambda deploy done.
Initiating first time deployment...
Deploying to: dev
https://endpoint/dev/
$ curl https://endpoint/dev
{"hello": "world"}
With the series of steps above, the project is deployed immediately after being newly created. The magic words `chalice deploy` deploy the project to `AWS Lambda` and, by wiring `AWS API Gateway` to `AWS Lambda`, publish it in a publicly accessible state.
If you want to mess with the contents of the API, feel free to play with `app.py`, which was generated when the project was created.
Around me there are many requirements along the lines of "for now, I just want an API that stores logs." In many of those cases it is enough to save the logs as JSON in S3 for the time being and salvage them later. Especially now that AWS Athena can aggregate JSON interactively, I hear this kind of requirement often.
In that spirit, let's create an API that, when you send JSON to it, writes that JSON to S3.
First, create a bucket on S3 (this part is manual work), and install the AWS SDK:
$ pip install boto3
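If you have the AWS CLI set up, the manual bucket-creation step can also be done from the terminal (the bucket name is a placeholder and must be globally unique):

```shell
# create the destination bucket in the same region as the Lambda
aws s3 mb s3://your-bucket-name --region ap-northeast-1
```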
Let's rewrite `app.py` as follows:
app.py
import json

import boto3
from botocore.exceptions import ClientError
from chalice import Chalice, NotFoundError

app = Chalice(app_name='helloworld')

S3 = boto3.client('s3', region_name='ap-northeast-1')
BUCKET = 'your-bucket-name'  # the bucket you created above


@app.route('/objects/{key}', methods=['GET', 'PUT'])
def s3objects(key):
    request = app.current_request
    if request.method == 'PUT':
        # store the request body as JSON under the given key
        S3.put_object(Bucket=BUCKET, Key=key,
                      Body=json.dumps(request.json_body))
    elif request.method == 'GET':
        try:
            # read the object back and return it as parsed JSON
            response = S3.get_object(Bucket=BUCKET, Key=key)
            return json.loads(response['Body'].read())
        except ClientError:
            raise NotFoundError(key)
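Once this is deployed, the API can be exercised with curl like so (`https://endpoint/dev` stands in for the URL that `chalice deploy` prints, and `mykey` is an arbitrary object key):

```shell
# store a JSON document under the key "mykey"
curl -X PUT -H 'Content-Type: application/json' \
     -d '{"hello": "s3"}' https://endpoint/dev/objects/mykey

# read it back
curl https://endpoint/dev/objects/mykey
```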
After rewriting, let's deploy again:
$ chalice deploy
Updating IAM policy.
The following actions will be added to the execution policy:
s3:GetObject
s3:PutObject
Would you like to continue? [Y/n]: Y
Updating lambda function...
Regen deployment package...
Sending changes to lambda.
Lambda deploy done.
API Gateway rest API already found.
Deleting root resource id
Done deleting existing resources.
Deploying to: dev
https://endpoint/dev/
I answered Y without a second thought and deployed, but in fact `chalice` is quietly setting up the IAM Role for us here.
To explain in detail: at deploy time, it parses the source code being deployed, determines the permissions it needs (in this case the two S3 permissions listed above), and asks whether it is okay to grant them.
It then attaches those permissions to the IAM role and deploys, so that Lambda can access S3. In other words, the person building the application ends up deployed with the IAM settings in place, without ever being aware of IAM.
What a terrifying trick!! IAM configuration is terribly annoying, and it can end in stories like "just attach Admin", so having it configured properly and automatically is a blessing.
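As a rough illustration, the statement added to the execution policy for this app would look something like the following (a hand-written sketch based on the two actions shown in the deploy prompt, not Chalice's literal output):

```json
{
  "Effect": "Allow",
  "Action": ["s3:GetObject", "s3:PutObject"],
  "Resource": "*"
}
```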
Python has a rift between the 2.x series and the 3.x series, and there is deep sadness in the world. Python on AWS Lambda is the 2.7 series, which differs from 3.x, so in many cases it is better to match your local environment to it.
To break away from that deep sadness and switch your local development environment to any version you like, let's install pyenv.
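As a sketch of that step (assuming a Unix-like shell; install paths and the exact 2.7 patch version may differ on your machine):

```shell
# fetch pyenv and put it on PATH
git clone https://github.com/pyenv/pyenv.git ~/.pyenv
export PYENV_ROOT="$HOME/.pyenv"
export PATH="$PYENV_ROOT/bin:$PATH"
eval "$(pyenv init -)"

# build a 2.7 interpreter and pin it for this project
pyenv install 2.7.13
pyenv local 2.7.13
```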