Easily create a virtual S3 and test the integration between AWS Lambda and S3 in your local environment

Advance preparation

Please install the AWS Toolkit for Eclipse before starting. Refer to the link below for the installation procedure.

-> AWS Toolkit installation procedure

When the installation is complete, the AWS project types should appear on the New Project screen.

Maven is assumed to be already installed.


Write a Lambda function

  1. First, create a project for AWS Lambda Java functions.

In the project creation wizard, enter the essential information as follows:

- Project name: S3EventTutorial
- Package name: com.amazonaws.lambda.s3tutorial

When you press "Finish", the project is created.
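The generated project folder looks roughly like the following. This tree is an approximation pieced together from the class and resource names used later in this article (TestContext, TestUtils, s3-event.put.json); the exact layout may vary by toolkit version.

S3EventTutorial/
├── pom.xml
├── src/main/java/com/amazonaws/lambda/s3tutorial/LambdaFunctionHandler.java
├── src/test/java/com/amazonaws/lambda/s3tutorial/LambdaFunctionHandlerTest.java
├── src/test/java/com/amazonaws/lambda/s3tutorial/TestContext.java
├── src/test/java/com/amazonaws/lambda/s3tutorial/TestUtils.java
└── src/test/resources/s3-event.put.json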

  2. Install the library "s3mock_2.11" with Maven to mock S3. All you have to do is declare the dependencies in the pom file, so refer to the pom below and create one for your own project.
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
	<modelVersion>4.0.0</modelVersion>
	<groupId>com.amazonaws.lambda</groupId>
	<artifactId>s3tutorial</artifactId>
	<version>4.0.0</version>
	<dependencies>
		<dependency>
			<groupId>com.amazonaws</groupId>
			<artifactId>aws-lambda-java-core</artifactId>
			<version>1.1.0</version>
			<scope>compile</scope>
		</dependency>
		<dependency>
			<groupId>com.amazonaws</groupId>
			<artifactId>aws-lambda-java-events</artifactId>
			<version>1.3.0</version>
			<scope>compile</scope>
		</dependency>
		<dependency>
			<groupId>junit</groupId>
			<artifactId>junit</artifactId>
			<version>4.11</version>
			<scope>test</scope>
		</dependency>
		<dependency>
			<groupId>com.amazonaws</groupId>
			<artifactId>aws-java-sdk</artifactId>
			<version>1.11.119</version>
			<scope>compile</scope>
		</dependency>

		<!-- https://mvnrepository.com/artifact/com.typesafe.akka/akka-http-experimental_2.11 -->
		<dependency>
			<groupId>com.typesafe.akka</groupId>
			<artifactId>akka-http-experimental_2.11</artifactId>
			<version>2.4.11.1</version>
		</dependency>

		<!-- https://mvnrepository.com/artifact/com.typesafe.scala-logging/scala-logging_2.11 -->
		<dependency>
			<groupId>com.typesafe.scala-logging</groupId>
			<artifactId>scala-logging_2.11</artifactId>
			<version>3.5.0</version>
		</dependency>

		<!-- https://mvnrepository.com/artifact/io.findify/s3mock_2.11 -->
		<dependency>
			<groupId>io.findify</groupId>
			<artifactId>s3mock_2.11</artifactId>
			<version>0.1.10</version>
			<scope>test</scope>
		</dependency>
		<!-- https://mvnrepository.com/artifact/org.mockito/mockito-core -->
		<dependency>
			<groupId>org.mockito</groupId>
			<artifactId>mockito-core</artifactId>
			<version>2.7.22</version>
		</dependency>
		<!-- https://mvnrepository.com/artifact/com.github.tomakehurst/wiremock -->
		<dependency>
			<groupId>com.github.tomakehurst</groupId>
			<artifactId>wiremock</artifactId>
			<version>2.6.0</version>
		</dependency>


	</dependencies>
</project>

Some of these dependencies may not yet be in your local Maven repository, so run "mvn package" from the command line in the project's root folder; Maven will then download the dependencies defined in the pom.

  3. Lambda function logic. Open the generated LambdaFunctionHandler.java and write the logic. The idea is very simple.

When the function receives an event saying that a file has been uploaded to S3, it inspects the contents of the event, fetches the uploaded file, and writes the file's contents to the console. The code is straightforward enough to understand at a glance, so it needs little explanation. One point worth noting: the constructor that takes an AmazonS3 client exists so that the test can inject a mock-backed client later, while the no-argument constructor is the one Lambda itself uses.


package com.amazonaws.lambda.s3tutorial;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;

import com.amazonaws.AmazonClientException;
import com.amazonaws.AmazonServiceException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.LambdaLogger;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.S3Event;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.event.S3EventNotification.S3EventNotificationRecord;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.S3Object;

public class LambdaFunctionHandler implements RequestHandler<S3Event, Object> {
	
	private AmazonS3 s3Client;
	
	public LambdaFunctionHandler(AmazonS3 s3Client){
		this.s3Client = s3Client;
	}
	public LambdaFunctionHandler(){
		this.s3Client =  new AmazonS3Client(new ProfileCredentialsProvider());
	}
	
	private static void storeObject(InputStream input) throws IOException {
		// Read one text line at a time and display.
		BufferedReader reader = new BufferedReader(new InputStreamReader(input));
		while (true) {
			String line = reader.readLine();
			if (line == null)
				break;
			System.out.println("    " + line);
		}
		System.out.println();
	}

	@Override
	public Object handleRequest(S3Event input, Context context) {
		context.getLogger().log("Input: " + input);

		// Simply return the name of the bucket in request
		LambdaLogger lambdaLogger = context.getLogger();
		S3EventNotificationRecord record = input.getRecords().get(0);
		lambdaLogger.log(record.getEventName()); //event name

		String bucketName = record.getS3().getBucket().getName();
		String key = record.getS3().getObject().getKey();
		/*
		 * Get file to do further operation
		 */
		try {
			lambdaLogger.log("Downloading an object");

			S3Object s3object = s3Client.getObject(new GetObjectRequest(bucketName, key));

			lambdaLogger.log("Content-Type: " + s3object.getObjectMetadata().getContentType());

			storeObject(s3object.getObjectContent());

			// Get a range of bytes from an object.

			GetObjectRequest rangeObjectRequest = new GetObjectRequest(bucketName, key);
			rangeObjectRequest.setRange(0, 10);
			S3Object objectPortion = s3Client.getObject(rangeObjectRequest);

			System.out.println("Printing bytes retrieved.");
			storeObject(objectPortion.getObjectContent());

		} catch (AmazonServiceException ase) {
			System.out.println("Caught an AmazonServiceException, which" + " means your request made it "
					+ "to Amazon S3, but was rejected with an error response" + " for some reason.");
			System.out.println("Error Message:    " + ase.getMessage());
			System.out.println("HTTP Status Code: " + ase.getStatusCode());
			System.out.println("AWS Error Code:   " + ase.getErrorCode());
			System.out.println("Error Type:       " + ase.getErrorType());
			System.out.println("Request ID:       " + ase.getRequestId());
		} catch (AmazonClientException ace) {
			System.out.println("Caught an AmazonClientException, which means" + " the client encountered "
					+ "an internal error while trying to " + "communicate with S3, "
					+ "such as not being able to access the network.");
			System.out.println("Error Message: " + ace.getMessage());
		}catch (IOException ioe){
			System.out.println("Caught an IOException, which means" + " the client encountered "
					+ "an internal error while trying to " + "save S3 object, "
					+ "such as not being able to access the network.");
			System.out.println("Error Message: " + ioe.getMessage());
		}
		return record.getS3().getObject().getKey();
	}

}
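
A small caveat about the code above: the stream returned by getObjectContent() is never explicitly closed. Below is a minimal alternative sketch of the same read loop using try-with-resources (Java 7+); printObject is a hypothetical rename, not part of the generated handler.

	private static void printObject(InputStream input) throws IOException {
		// Read one text line at a time and display it; try-with-resources
		// closes the reader, and with it the underlying S3 object stream,
		// even if an exception is thrown mid-read.
		try (BufferedReader reader = new BufferedReader(new InputStreamReader(input))) {
			String line;
			while ((line = reader.readLine()) != null) {
				System.out.println("    " + line);
			}
		}
	}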


Let's create a test case for the code we wrote

This time we will focus on the Lambda code we just implemented, so open LambdaFunctionHandlerTest.java and create a test case. First, read through the test code.


package com.amazonaws.lambda.s3tutorial;

import static org.junit.Assert.assertEquals;

import java.io.File;
import java.io.IOException;

import org.junit.BeforeClass;
import org.junit.Test;

import com.amazonaws.auth.AnonymousAWSCredentials;
import com.amazonaws.regions.Region;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.events.S3Event;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.CreateBucketRequest;
import com.amazonaws.services.s3.model.PutObjectRequest;

import io.findify.s3mock.S3Mock;

public class LambdaFunctionHandlerTest {

    private static S3Event input;
    private static AmazonS3Client client;

    @BeforeClass
    public static void createInput() throws IOException {
        input = TestUtils.parse("s3-event.put.json", S3Event.class);
        
        S3Mock api = S3Mock.create(8999, "/tmp/s3");
        api.start();
                
        client = new AmazonS3Client(new AnonymousAWSCredentials());
        client.setRegion(Region.getRegion(Regions.AP_NORTHEAST_1));

        // use IP endpoint to override DNS-based bucket addressing
        client.setEndpoint("http://127.0.0.1:8999");

    }

    private Context createContext() {
        TestContext ctx = new TestContext();

        // TODO: customize your context here if needed.
        ctx.setFunctionName("Your Function Name");

        return ctx;
    }

    @Test
    public void testLambdaFunctionHandlerShouldReturnObjectKey() {
    	
        client.createBucket(new CreateBucketRequest("testbucket", "ap-northeast-1"));
    	ClassLoader classLoader = this.getClass().getClassLoader();
    	File file = new File(classLoader.getResource("file/test.xml").getFile());
        client.putObject(new PutObjectRequest(
        		                 "testbucket", "file/name", file));
    	
        LambdaFunctionHandler handler = new LambdaFunctionHandler(client);
        Context ctx = createContext();

        Object output = handler.handleRequest(input, ctx);

        if (output != null) {
        	assertEquals("file/name", output.toString());
            System.out.println(output.toString());
        }
    }
}

For testing, we create and launch an S3Mock instance in the createInput method. This instance binds to port 8999 on your local machine and waits for requests, using a folder called "/tmp/s3" to imitate the storage of the S3 service.
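Incidentally, the test above never stops the mock; that is harmless for a single run, but if you want a clean teardown, one sketch is to keep the S3Mock reference in a field and stop it once the class finishes. This assumes your s3mock version exposes a stop() method; check the API of 0.1.10 before relying on it.

    private static S3Mock api;   // field instead of the local variable in createInput

    @AfterClass   // requires: import org.junit.AfterClass;
    public static void stopMock() {
        // Shut down the in-process S3 mock so port 8999 is freed
        api.stop();
    }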

The most important part is the body of testLambdaFunctionHandlerShouldReturnObjectKey. As you can see, it performs the following tasks:

- Create the bucket "testbucket". Note: specifying the Region is mandatory; if you leave it out, you get java.lang.NoSuchMethodError: com.amazonaws.regions.RegionUtils.getRegionByEndpoint(Ljava/lang/String;)Lcom/amazonaws/regions/Region;, which is a bug in the AWS SDK.
- Upload file/test.xml, placed in the resource folder under the project, to the mock storage.
- Download the uploaded file from the mock S3 and check its contents.

The trigger is the event defined in "s3-event.put.json", so the information about the uploaded file must be reflected in its content.


{
  "Records": [
    {
      "eventVersion": "2.0",
      "eventSource": "aws:s3",
      "awsRegion": "us-east-1",
      "eventTime": "1970-01-01T00:00:00.000Z",
      "eventName": "ObjectCreated:Put",
      "userIdentity": {
        "principalId": "EXAMPLE"
      },
      "requestParameters": {
        "sourceIPAddress": "127.0.0.1"
      },
      "responseElements": {
        "x-amz-request-id": "",
        "x-amz-id-2": "FMyUVURIY8//JRWeUWerMUE5JgHvANOjpD"
      },
      "s3": {
        "s3SchemaVersion": "1.0",
        "configurationId": "testConfigRule",
        "bucket": {
          "name": "testbucket",
          "ownerIdentity": {
            "principalId": "EXAMPLE"
          },
          "arn": "arn:aws:s3:::mybucket"
        },
        "object": {
          "key": "file/name",
          "size": 1024,
          "eTag": "d41d8cd98f00b204e9800998ecf8427e"
        }
      }
    }
  ]
}

Note: The bucket name and object key are the most important parts. As you can see, the file was uploaded to testbucket with the key file/name, so the content of the JSON is written accordingly.
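One more caveat before pointing this handler at real S3: in actual S3 event notifications the object key arrives URL-encoded (a space becomes "+", for example), so production handlers usually decode the key before calling getObject. A minimal sketch; decodeS3Key is a hypothetical helper, not part of this tutorial's code.

	// Real S3 event notifications URL-encode the object key; decode it
	// before using it in a GetObjectRequest ("+" stands for a space).
	private static String decodeS3Key(String rawKey) throws java.io.UnsupportedEncodingException {
		return java.net.URLDecoder.decode(rawKey.replace("+", " "), "UTF-8");
	}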

The end

I have explained this only in rough outline, so if you have any questions, please feel free to contact me.
