Take a look at profiling and dumping with Dataflow

This is a memo from when I was looking into the performance of Dataflow. Cloud Monitoring, Cloud Profiler, and memory dumps can all be used with Dataflow.

Cloud Monitoring

You can monitor the memory usage of the JVM with Cloud Monitoring.

To understand

Try out

It is disabled by default, so enable it by adding --experiments=enable_stackdriver_agent_metrics when you start the pipeline (or build the template).

Try it with PubSubToText from the templates provided by Google (cloud/teleport/templates/PubsubToText.java). The second line from the bottom of the following command is the added part.

 mvn compile exec:java \
 -Dexec.mainClass=com.google.cloud.teleport.templates.PubSubToText \
 -Dexec.cleanupDaemonThreads=false \
 -Dexec.args=" \
 --project=${PROJECT_ID} \
 --stagingLocation=gs://${PROJECT_ID}/dataflow/pipelines/${PIPELINE_FOLDER}/staging \
 --tempLocation=gs://${PROJECT_ID}/dataflow/pipelines/${PIPELINE_FOLDER}/temp \
 --runner=DataflowRunner \
 --windowDuration=2m \
 --numShards=1 \
 --inputTopic=projects/${PROJECT_ID}/topics/windowed-files \
 --outputDirectory=gs://${PROJECT_ID}/temp/ \
 --outputFilenamePrefix=windowed-file \
 --experiments=enable_stackdriver_agent_metrics  \
 --outputFilenameSuffix=.txt"

After starting the job, you can check the metrics with Cloud Monitoring. By default they are shown per instance, but you can also group by job name.

Cloud Monitoring example

I also tried it with WordCount, but nothing was displayed in Monitoring... (I have not investigated the cause; maybe the execution time was too short?)

Stackdriver Profiler

This is the method introduced by a Googler on Medium.

To understand

Stackdriver Profiler seems to be able to capture the heap in Java (https://cloud.google.com/profiler/docs/concepts-profiling?hl=ja#types_of_profiling_available), but I could not tell whether that works on Dataflow...

Try out

The --profilingAgentConfiguration='{ "APICurated": true }' option must be enabled. It can also be used at the same time as Cloud Monitoring.

 mvn compile exec:java \
 -Dexec.mainClass=com.google.cloud.teleport.templates.PubSubToText \
 -Dexec.cleanupDaemonThreads=false \
 -Dexec.args=" \
 --project=${PROJECT_ID} \
 --stagingLocation=gs://${PROJECT_ID}/dataflow/pipelines/${PIPELINE_FOLDER}/staging \
 --tempLocation=gs://${PROJECT_ID}/dataflow/pipelines/${PIPELINE_FOLDER}/temp \
 --runner=DataflowRunner \
 --windowDuration=2m \
 --numShards=1 \
 --inputTopic=projects/${PROJECT_ID}/topics/windowed-files \
 --outputDirectory=gs://${PROJECT_ID}/temp/ \
 --outputFilenamePrefix=windowed-file \
 --experiments=enable_stackdriver_agent_metrics \
 --profilingAgentConfiguration='{ \"APICurated\": true }' \
 --outputFilenameSuffix=.txt"

Elapsed time

CPU time

To read the graphs, see Stackdriver Profiler's quickstart and the flame graph description (https://cloud.google.com/profiler/docs/concepts-flame?hl=ja).

Memory dump

It is [introduced](https://github.com/GoogleCloudPlatform/community/blob/master/tutorials/dataflow-debug-oom-conditions/index.md) in the Google Cloud Platform Community, but you can also use a standard memory dump. Unlike the two methods above, you will be looking at each instance rather than the entire job.

How to take a dump

Three approaches are introduced there. This time I will try downloading a dump from a worker.

To understand

Try out

Follow the steps below:

  1. Start the pipeline
  2. Create a heap dump
  3. Download locally
  4. Take a look

There are no special options or precautions for starting the pipeline, so I will omit that step.

Creating a heap dump

Enter the Dataflow worker and make a heap dump.

Note that the GCE instances are labeled with the Dataflow job ID and job name, so you can identify a worker with them (example below).

gcloud compute instances list --filter="labels.dataflow_job_name=${job_name}"
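If you want to feed the result straight into the SSH step, the instance name can be captured into a variable. This is a sketch: the JOB_NAME variable is assumed to hold your Dataflow job name, and --limit=1 simply takes an arbitrary worker (any worker of the job will do for a dump).

```shell
# Pick one worker VM of the job via its Dataflow label.
# --format=value(name) prints only the instance name; --limit=1 takes one worker.
WORKER_NAME=$(gcloud compute instances list \
  --filter="labels.dataflow_job_name=${JOB_NAME}" \
  --format="value(name)" \
  --limit=1)
echo "${WORKER_NAME}"
```

The captured ${WORKER_NAME} can then be passed to the gcloud compute ssh command below.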

Create an SSH tunnel:

gcloud compute ssh --project=$PROJECT --zone=$ZONE \
  $WORKER_NAME --ssh-flag "-L 8081:127.0.0.1:8081"

Download locally

Open http://127.0.0.1:8081/heapz in your browser. Generating the dump takes quite a while (about 10 minutes on n1-standard-4).
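If you prefer the command line to a browser, the same endpoint can be fetched through the tunnel with curl. This assumes the SSH tunnel from the previous step is still open; the output filename is arbitrary.

```shell
# Fetch the heap dump through the forwarded port (8081 from the SSH tunnel).
# A generous --max-time, since generating the dump can take ~10 minutes.
curl --max-time 1200 -o worker.hprof http://127.0.0.1:8081/heapz
```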

Take a look

View the dump with your favorite tool, such as VisualVM. You can see the status of threads and objects.
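As a minimal sketch, assuming VisualVM is installed and on your PATH and that the dump was saved as worker.hprof (a hypothetical filename), it can be opened directly from the command line:

```shell
# Open the downloaded heap dump in VisualVM as a snapshot.
visualvm --openfile worker.hprof
```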

VisualVMサマリー

VisualVMスレッド

VisualVMオブジェクト

Many predecessors have written about the various ways to read a heap dump, so please refer to them and do your best (e.g., Java Performance).

Important points

There are some caveats regarding the download-from-worker approach I tried this time:
