I had been thinking about collecting and analyzing the logs of a web application and visualizing the results, and about what the configuration would look like if I built it myself. This is the story of that design process and of the AWS services we actually adopted.
At first I wondered whether Elasticsearch → Kibana would be enough... Certainly, Elasticsearch can search and aggregate the logs, and Kibana can visualize them. However, it is hard to re-slice the aggregated results along a different axis, and with Kibana other people can see the raw logs, so with these problems the Kibana plan was dropped.
Aggregation itself seemed fine with Elasticsearch, so next I wondered: if I store the aggregated results in an RDB, can I just pull the data I need out of the RDB and visualize it?
But... even with the aggregated results in the RDB, performance when pulling them back out for analysis was not very good, so I decided to use a time-series DB instead.
That was the system configuration I initially settled on.
However, after thinking it over, I concluded that this would be expensive on the infrastructure side, so I decided to consider another plan...
That led to the proposal to build it in the AWS environment.
What I want to do is simple: "aggregate, analyze, and visualize the logs output by the application!!"
Checking the available AWS services, it looked like the following flow would work: the application outputs its logs to S3, a Lambda function uses Athena to aggregate them, and the aggregated results are visualized. The flow is as simple as that.
To operate Athena from Java, use the JDBC driver for Athena provided by AWS. As of this writing (November 2018) the latest version is AthenaJDBC42_2.0.5.jar, which you can [download here](https://docs.aws.amazon.com/athena/latest/ug/connect-with-jdbc.html).
This is the Java code that calls Athena, to be registered as the Lambda function in step 2 of the flow.
```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Properties;

public class AthenaService {

    // Athena connection settings (Ohio region: us-east-2)
    private static final String CONNECTION_URL = "jdbc:awsathena://athena.us-east-2.amazonaws.com:443";
    private static final String S3_BUCKET = "test-bucket";

    public void execute(String dateTime) throws Exception {
        Properties info = new Properties();
        info.put("UID", "XXXXXXXX");
        info.put("PWD", "XXXXXXXX");
        // S3 location where Athena writes its query results
        info.put("S3OutputLocation", "s3://" + S3_BUCKET + "/test-dir/");

        Class.forName("com.simba.athena.jdbc.Driver");
        String query = "SELECT xxxxxxxxxxxxxxxxxxxx";
        try (Connection connection = DriverManager.getConnection(CONNECTION_URL, info);
             Statement statement = connection.createStatement();
             ResultSet result = statement.executeQuery(query)) {
            while (result.next()) {
                System.out.println(result.getString("Key name"));
            }
        }
    }
}
```
As you can see, you can connect simply by setting the required information in Properties.
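For reference, here is a minimal sketch of a Lambda entry point that could invoke this service; the class name AggregateLogsHandler, the input type, and the date handling are assumptions for illustration, not part of the original setup.

```java
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import java.time.LocalDate;

// Hypothetical Lambda handler that triggers the Athena aggregation
public class AggregateLogsHandler implements RequestHandler<Object, String> {

    @Override
    public String handleRequest(Object input, Context context) {
        try {
            // Aggregate the previous day's logs (assumed convention)
            String dateTime = LocalDate.now().minusDays(1).toString();
            new AthenaService().execute(dateTime);
            return "OK";
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```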
As a bonus: Athena creates a file with the .metadata extension every time a query runs, and I don't need it, so I delete it. Here is the sample:
```java
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ObjectListing;

public class S3Handler {

    private static final String S3_BUCKET = "test-bucket";

    private AmazonS3 s3;

    public void deleteAthenaMetadata(String dateTime) {
        BasicAWSCredentials awsCreds = new BasicAWSCredentials("UID", "PWD");
        s3 = AmazonS3ClientBuilder.standard()
                .withCredentials(new AWSStaticCredentialsProvider(awsCreds))
                .withRegion(Regions.US_EAST_2)
                .build();
        // List the objects under the specified directory of the bucket
        ObjectListing objectList = s3.listObjects(S3_BUCKET, "test-dir/");
        deleteObject(objectList);
        s3.shutdown();
    }

    private void deleteObject(ObjectListing objectList) {
        objectList.getObjectSummaries().forEach(i -> {
            // Delete objects whose extension is .metadata or .txt
            if (i.getKey().endsWith(".metadata") || i.getKey().endsWith(".txt"))
                this.s3.deleteObject(S3_BUCKET, i.getKey());
        });
        // If the listing was truncated, fetch the next batch and recurse
        if (objectList.isTruncated()) {
            ObjectListing remainsObject = this.s3.listNextBatchOfObjects(objectList);
            this.deleteObject(remainsObject);
        }
    }
}
```
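Putting it together, the aggregation and the cleanup could be chained like this; the LogAggregationJob wrapper and the hard-coded date are assumptions for illustration:

```java
// Hypothetical wrapper chaining the Athena query and the S3 cleanup
public class LogAggregationJob {

    public static void main(String[] args) throws Exception {
        String dateTime = "2018-11-01"; // hypothetical target date

        // Run the Athena aggregation query first...
        new AthenaService().execute(dateTime);

        // ...then remove the .metadata/.txt files Athena left behind
        new S3Handler().deleteAthenaMetadata(dateTime);
    }
}
```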