__AWS (Amazon Web Services)__ is expanding so rapidly that hardly a day goes by without seeing it in IT news, but even with a free tier available, I think it is still hard to casually try it out.
In this article, I will describe how to develop locally, using Docker, against __S3 (Amazon Simple Storage Service)__ and __DynamoDB (Amazon DynamoDB)__, two of the AWS services that are likely to see frequent use.
- AWS SDK for JavaScript: JavaScript library for using AWS services
- S3: Storage service provided by AWS
- DynamoDB: A fully managed NoSQL database provided by AWS. Key-value structure.
Installation of Docker and Node.js will be omitted.
S3
This time I will use MinIO, which is S3-compatible. Normally you would take care with the access key and secret key, for example by using IAM, but since this is local, I will use the values from the official documentation as-is.
docker run -p 9000:9000 \
-e "MINIO_ACCESS_KEY=AKIAIOSFODNN7EXAMPLE" \
-e "MINIO_SECRET_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" \
minio/minio server /data
If the above command runs normally, access port 9000 in a browser and you should see a screen like the one in the image below.
If you enter the access key and secret key specified when starting the container, you can already create buckets and upload/download files at this point. The UI is also polished and feature-rich, so it seems useful outside of AWS development as well.
DynamoDB
For DynamoDB, AWS officially publishes a Docker image, so I will gratefully use that.
docker run -d --name dynamodb -p 8000:8000 amazon/dynamodb-local
Once S3 and DynamoDB have started successfully, check the running containers.
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5e467706a247 amazon/dynamodb-local "java -jar DynamoDBL…" About an hour ago Up About an hour 0.0.0.0:8000->8000/tcp dynamodb
113388461d4a minio/minio "/usr/bin/docker-ent…" 11 hours ago Up 11 hours 0.0.0.0:9000->9000/tcp fervent_chaplygin
As long as there is nothing abnormal in the STATUS column and so on, you should be fine.
AWS SDK for JavaScript
AWS SDKs exist for server-side languages such as Python and Go, but I think the front end is easier to check visually, so this time I will use the JavaScript SDK.
CDN:
<script src="https://sdk.amazonaws.com/js/aws-sdk-2.819.0.min.js"></script>
Node.js:
npm install aws-sdk
For other environments, see here.
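As a quick sanity check that the installation worked, you can print the SDK version from Node (a minimal sketch):

```javascript
// Verify the aws-sdk package loads and print its version
const AWS = require('aws-sdk');
console.log('AWS SDK version:', AWS.VERSION);
```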
This time, we will implement S3 upload/download and DynamoDB registration/retrieval. The screen and implementation created this time are as follows. sample: https://github.com/kanaria42/aws-local-test
Sample implementation:
- Add an image file (the fruit images at the bottom of the screen) with the file-add button in the center of the screen.
- Upload the added file to S3 with the upload button at the top of the screen. At the same time, save the registration key etc. in DynamoDB.
- On initial display and after each upload, get the list of images registered in DynamoDB and show it in the list at the top of the screen (empty if there are none).
- Select any row in the list and press the download button at the top right of the screen to download that image from S3.
S3
The bucket could be created from the web application, but since it only needs to be created once, I run this separately with Node. All it does is create an S3 instance and create a bucket with createBucket. Again, in actual operation you would not hard-code the access key like this.
Reference: src/app/createBucket.js
const AWS = require("aws-sdk");
const s3 = new AWS.S3({
accessKeyId: 'AKIAIOSFODNN7EXAMPLE',
secretAccessKey: 'wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY',
endpoint: 'http://127.0.0.1:9000',
s3ForcePathStyle: true,
signatureVersion: 'v4'
});
s3.createBucket({Bucket: [Bucket name]}, function(err, data) {
if (err) {
console.error("Unable to create bucket. Error JSON:", JSON.stringify(err, null, 2));
} else {
console.log("Created bucket. Bucket description JSON:", JSON.stringify(data, null, 2));
}
});
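To confirm the bucket was actually created, one option is to list the buckets on the local MinIO, reusing the `s3` client above (a minimal sketch):

```javascript
// List all buckets on the local MinIO to confirm creation
s3.listBuckets().promise()
  .then(data => console.log('Buckets:', data.Buckets.map(b => b.Name)))
  .catch(err => console.error('Unable to list buckets:', err));
```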
Use "putObject" etc. for uploading. The official reference is in the form of callback, but promises can also be used. In the sample implementation, the file name is set to key as it is, but please note that it will be overwritten if there is another file with the same file name.
s3.putObject({Bucket: [Bucket name], Key: [S3 registered name], Body: [File data]}).promise();
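For reference, here is a minimal sketch of what an upload could look like in the browser, assuming a File object from an `<input type="file">` element and the local client configuration shown above; the bucket name `test-bucket` is hypothetical:

```javascript
// Upload a browser File object to the local MinIO (bucket name is hypothetical)
async function uploadFile(file) {
  await s3.putObject({
    Bucket: 'test-bucket',
    Key: file.name,          // file name used as the key, so same names overwrite
    Body: file,
    ContentType: file.type
  }).promise();
  console.log(`Uploaded ${file.name}`);
}
```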
Use "getObject" etc. to download. You can get it with the key specified at the time of upload.
s3.getObject({Bucket: [Bucket name], Key: [S3 registered name]}).promise();
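In the browser, the returned `Body` is a byte array, so turning it into a file download takes a little glue. A sketch under the same assumptions as above (`test-bucket` is again hypothetical):

```javascript
// Fetch an object and trigger a browser download via a temporary object URL
async function downloadFile(key) {
  const data = await s3.getObject({ Bucket: 'test-bucket', Key: key }).promise();
  const blob = new Blob([data.Body], { type: data.ContentType });
  const url = URL.createObjectURL(blob);
  const a = document.createElement('a');
  a.href = url;
  a.download = key;          // save under the same name as the key
  a.click();
  URL.revokeObjectURL(url);
}
```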
DynamoDB
As with S3, the table only needs to be created once, so I create it separately in advance. The way the table definition is given is not much different from creating the resource with CloudFormation. In this example, "Name" is set as the primary key.
Reference: src/app/createTable.js
const AWS = require("aws-sdk");
const dynamodb = new AWS.DynamoDB({
endpoint: 'http://127.0.0.1:8000',
region: 'ap-northeast-1',
accessKeyId: 'fakeMyKeyId',
secretAccessKey: 'fakeSecretAccessKey'
});
const params = {
AttributeDefinitions: [
{
AttributeName: "Name",
AttributeType: "S"
}
],
KeySchema: [
{
AttributeName: "Name",
KeyType: "HASH"
}
],
ProvisionedThroughput: {
ReadCapacityUnits: 5,
WriteCapacityUnits: 5
},
TableName: [table name]
};
dynamodb.createTable(params, function(err, data) {
if (err) {
console.error("Unable to create table. Error JSON:", JSON.stringify(err, null, 2));
} else {
console.log("Created table. Table description JSON:", JSON.stringify(data, null, 2));
}
});
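Note that createTable returns before the table is actually usable, so a script that immediately writes items can fail against real AWS. If needed, the SDK's `waitFor` can block until the table is ACTIVE (a sketch; the table name `Images` is hypothetical, and with DynamoDB Local this completes almost instantly anyway):

```javascript
// Wait until the table reaches ACTIVE status before writing to it
dynamodb.waitFor('tableExists', { TableName: 'Images' }).promise()
  .then(() => console.log('Table is ACTIVE'))
  .catch(err => console.error('Wait failed:', err));
```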
Use "putItem" etc. for registration. The following is an example of registering some other items in addition to the "Name" set when creating the table. The "S" under "Name" etc. represents the type of the string table definition.
const item = {Name: {S: [file name]}, Type: {S: [file category]}, Size: {S: [file size]}};
dynamodb.putItem({Item: item, TableName: [table name]}).promise();
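Filled in with concrete values, a registration could look like the following sketch (the table name `Images` and the attribute values are hypothetical):

```javascript
// Register one file's metadata; each attribute value is wrapped in its type descriptor
const item = {
  Name: { S: 'apple.png' },   // primary key: the file name
  Type: { S: 'image/png' },   // file category (MIME type)
  Size: { S: '10240' }        // stored as a string, matching the sample's definition
};
dynamodb.putItem({ Item: item, TableName: 'Images' }).promise()
  .then(() => console.log('Registered apple.png'))
  .catch(err => console.error('Unable to register item:', err));
```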
Use "scan" etc. to get all the registered data. In the case of getting one item, it is a form to specify the key with getItem. In actual use, I think that it may be restricted by adding a Limit option or the like.
dynamodb.scan({TableName: [table name]}).promise();
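The scan result comes back in the same type-annotated form, so the values need unwrapping before display. A minimal sketch, assuming the hypothetical `Images` table from above:

```javascript
// Scan the whole table and strip the { S: ... } type descriptors for display
dynamodb.scan({ TableName: 'Images' }).promise()
  .then(data => {
    const rows = data.Items.map(item => ({
      name: item.Name.S,
      type: item.Type.S,
      size: item.Size.S
    }));
    console.log(rows);
  })
  .catch(err => console.error('Unable to scan table:', err));
```

The SDK also provides AWS.DynamoDB.Converter.unmarshall for this kind of unwrapping, and the DocumentClient interface avoids type descriptors entirely.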
This is the screen with four image data registered. I was able to confirm DynamoDB registration/retrieval as well as S3 upload/download. If anything, I ended up spending more time on the UI, which was fun but not the main point.
Web screen:
Minio:
The following is a brief summary and my impressions from implementing this.
- Docker images corresponding to S3, DynamoDB, etc. are publicly available, and local development is possible.
- The AWS SDK etc. can also be used against this environment.
- Real AWS can incur charges, so it seems better to use this where possible.
- However, I feel it may be difficult to keep this environment cleanly separated from production.
This is my first Qiita post, and I have not touched much AWS-related work yet, so I would appreciate it if you could point out any errors in the description or gaps in my understanding.
References:
- Introduction of DynamoDB Local
- Try S3-compatible object storage MinIO