[Python] An easy way to analyze AGN time variability in Google Colab with the RXTE satellite's PCA detector

Background

We introduce a method for plotting light curves using 2-10 keV flux data acquired by the PCA detector on board the RXTE satellite.

RXTE also carries an all-sky monitor called the ASM, but for faint objects such as AGN (at the mCrab level), proper analysis requires the data obtained from pointed observations with the PCA.

Such long-term data have non-uniform, sparse sampling times, so we also show how to interpolate them and then generate a power spectrum. This is not a rigorous way to handle sparse data, but it is a handy technique for quickly getting a feel for the trends.

If you can read the code, please refer to my Google Colab notebook.

What is 3C273?

3C273 is the 273rd object in the third Cambridge catalogue of radio sources, published in 1959, and is a quasar with redshift z = 0.158. Observations have confirmed that it hosts a central black hole of about 800 million solar masses, from which a powerful jet is ejected.

Here we introduce how to perform a long-term variability analysis of this famous blazar, 3C273, in Google Colab, using the light curve data published at https://cass.ucsd.edu/~rxteagn/.

Sample code

Data acquisition and light curve plotting

plot_3C273

```python
import urllib.request

# Download the 2-10 keV PCA light curve of 3C273
url = "https://cass.ucsd.edu/~rxteagn/3C273/F0210_3C273"
urllib.request.urlretrieve(url, '3C273.txt')

import numpy as np
# Columns: MJD, flux, flux error, exposure
mjd, flux, err, exp = np.loadtxt("3C273.txt", comments='#', unpack=True)

import matplotlib.pyplot as plt
plt.errorbar(mjd, y=flux, yerr=err)
```

(Figure: light curve of 3C273)

Interpolation of data

```python
# Divide the range from the first to the last observation into 6000 equal steps.
# The number 6000 is somewhat arbitrary, but it makes one bin roughly one day.
mjd_uniform = np.linspace(np.amin(mjd), np.amax(mjd), 6000)
# Interpolate onto the evenly spaced grid
interpolated_y = np.interp(mjd_uniform, mjd, flux)
plt.errorbar(mjd, y=flux, yerr=err, label="raw data")
plt.errorbar(mjd_uniform, y=interpolated_y, label="uniformly interpolated data")
plt.legend()
```

Interpolation is performed with numpy's interp function to generate data at equal time intervals. (Note that np.interp performs piecewise-linear interpolation, not spline interpolation.)
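If a smoother curve is desired, a cubic spline via scipy is one alternative to np.interp. Below is a minimal sketch on synthetic data (the synthetic arrays merely stand in for the real (mjd, flux) pair; scipy is preinstalled on Colab):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Synthetic, unevenly sampled "light curve" standing in for (mjd, flux)
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 100, 50))     # irregular sample times (days)
y = np.sin(2 * np.pi * t / 25)           # slow sinusoidal variability

t_uniform = np.linspace(t.min(), t.max(), 500)
y_linear = np.interp(t_uniform, t, y)    # piecewise-linear, as in the main text
y_spline = CubicSpline(t, y)(t_uniform)  # smooth cubic spline

# Both methods pass through the original samples exactly
print(np.allclose(np.interp(t, t, y), y), np.allclose(CubicSpline(t, y)(t), y))  # → True True
```

Either way, the interpolated values in the gaps are only a guess, so the choice mainly affects how smooth the low-frequency part of the power spectrum looks.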

(Figure: raw data and uniformly interpolated light curve)

Power spectrum calculation

```python
import matplotlib.mlab as mlab
fNum = len(mjd_uniform)
timebin = (mjd_uniform[1] - mjd_uniform[0]) * 24 * 60 * 60  # days to seconds
print("timebin = ", timebin, "  fNum = ", fNum)
# One-sided power spectral density of the interpolated light curve
psd2_nofil, freqlist_nofil = mlab.psd(interpolated_y, fNum, 1./timebin, sides='onesided', scale_by_freq=True)

plt.xscale("log")
plt.yscale("log")
plt.xlabel(r'Frequency (Hz)')
plt.errorbar(freqlist_nofil, psd2_nofil, fmt='c-', label="FFT for uniformly sampled data")
```

(Figure: power spectrum of the interpolated light curve)

This is a fairly crude approach, but it lets us extend the power spectrum down to low frequencies.
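A more principled way to handle unevenly sampled light curves, without interpolating at all, is the Lomb-Scargle periodogram. The sketch below uses scipy.signal.lombscargle on synthetic data with a known frequency (the data is made up for illustration; with the real light curve you would pass the MJD times converted to seconds and the fluxes):

```python
import numpy as np
from scipy.signal import lombscargle

# Synthetic, unevenly sampled signal with a known frequency
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 200, 300))  # irregular sample times (s)
true_freq = 0.05                       # Hz
y = np.sin(2 * np.pi * true_freq * t)

# lombscargle expects angular frequencies omega = 2*pi*f
freqs = np.linspace(0.001, 0.2, 2000)
pgram = lombscargle(t, y, 2 * np.pi * freqs, normalize=True)

print(freqs[np.argmax(pgram)])         # the peak lies near true_freq = 0.05
```

Because it fits sinusoids at each trial frequency rather than requiring an evenly spaced grid, it avoids the artifacts that interpolation can introduce.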

How to take a moving average

```python
# 30-day moving average
# https://qiita.com/yamadasuzaku/items/fb744a207e7694d1598d
ave = np.convolve(interpolated_y, np.ones(30) / 30.0, 'same')
plt.errorbar(mjd_uniform, y=interpolated_y, label="uniformly interpolated data")
plt.errorbar(mjd_uniform, y=ave, label="30-day average")
plt.legend()
```

The easiest way to remove high-frequency components with a moving average is to use numpy's convolve.

(Figure: interpolated light curve and its 30-day moving average)

Comparison of power spectra before and after taking the moving average

```python
ave_psd, freq = mlab.psd(ave, fNum, 1./timebin, sides='onesided', scale_by_freq=True)
plt.xscale("log")
plt.yscale("log")
plt.xlabel(r'Frequency (Hz)')
plt.errorbar(freqlist_nofil, psd2_nofil, fmt='c-', label="FFT for uniformly sampled data")
plt.errorbar(freqlist_nofil, ave_psd, fmt='r-', label="FFT for 30-day averaged data")
```

(Figure: power spectra before and after the 30-day moving average)

By taking the moving average, we can confirm that the high-frequency components are suppressed.
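This suppression can also be checked numerically: an N-point moving average is a boxcar low-pass filter whose response first reaches zero at frequency 1/(N·Δt). Here is a self-contained check with a synthetic two-tone signal (the signal is made up; 30 points on a 1-day grid mimics the 30-day average above):

```python
import numpy as np

dt = 1.0                             # 1-day bins, as in the interpolated light curve
t = np.arange(3000) * dt
slow = np.sin(2 * np.pi * t / 300)   # 300-day period: should survive a 30-day average
fast = np.sin(2 * np.pi * t / 6)     # 6-day period: should be strongly suppressed
y = slow + fast

ave = np.convolve(y, np.ones(30) / 30, 'same')

# Ratio of FFT amplitudes after/before at each tone
f = np.fft.rfftfreq(len(t), dt)
Y, A = np.abs(np.fft.rfft(y)), np.abs(np.fft.rfft(ave))
i_slow = np.argmin(np.abs(f - 1 / 300))
i_fast = np.argmin(np.abs(f - 1 / 6))
print(A[i_slow] / Y[i_slow], A[i_fast] / Y[i_fast])  # close to 1 vs close to 0
```

The slow component passes nearly unchanged while the fast one is almost completely removed, as expected for a 30-day boxcar.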

Bonus

Using this approach, it is easy to look at correlations among multi-wavelength time series data of 3C 273, which are also publicly available.

However, since the method introduced here generates evenly spaced data by linear interpolation, you need to consider carefully whether that approximation is valid. (It is not a problem for a rough look at correlations, so I think it is worth trying this quickly once you have data in hand, such as experimental data.)
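As a quick illustration of such a correlation analysis, the lag between two bands interpolated onto the same uniform grid can be estimated with np.correlate. The two "bands" below are synthetic (one is just a shifted, noisy copy of the other); with real data you would use the interpolated fluxes of two energy bands or wavelengths:

```python
import numpy as np

dt = 1.0                     # 1-day grid, as produced by the interpolation above
t = np.arange(1000) * dt
rng = np.random.default_rng(2)
band_a = np.sin(2 * np.pi * t / 100) + 0.1 * rng.normal(size=t.size)
band_b = np.roll(band_a, 7)  # band B lags band A by 7 days (synthetic)

# Mean-subtract, then cross-correlate over all lags
a = band_a - band_a.mean()
b = band_b - band_b.mean()
cc = np.correlate(b, a, mode='full')
lags = np.arange(-len(a) + 1, len(a))
print(lags[np.argmax(cc)])   # recovered lag in days
```

The lag at which the cross-correlation peaks recovers the 7-day shift built into the synthetic data.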
