For a few months (about three), I had been using "Run SQL Scripts" in IBM i Access Client Solutions to issue SQL and retrieve data, but once a result set grew past 5,000 rows I wanted to fetch all of it, and scrolling endlessly to load the rest became a chore. So I tried a batch-retrieval experiment using Python and ODBC.
Since this is experimental code, error checking and output formatting are deliberately ignored.
PC environment
Connection destination environment
Get the latest stable release from python.org and install it. After installation, add the folder containing "python.exe" to your PATH.
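To check that the PATH is set correctly, open a new command prompt and ask Python for its version; it should print the release you just installed.

>python --version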
The reason I chose pypyodbc over the more common pyodbc is that installing the latter with pip failed. I also tried IBM's recommended ibm_db, but both required Microsoft Visual C++ 14, so I passed on them this time.
Install it using the "pip" tool bundled with Python.
pip install pypyodbc
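A quick way to confirm the install succeeded is to import the module from the command line; if nothing is printed, the import worked.

>python -c "import pypyodbc"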
I set this up a few months ago, so this is from memory. Go to https://www.ibm.com/support/pages/ibm-i-access-client-solutions and follow the link that says "Downloads for IBM i Access Client Solutions". You will be asked to authenticate with an IBM ID along the way; enter yours if you have one, otherwise register a new one (free of charge). Click "ACS Windows App Pkg English (64bit)" to download.
It comes in installer format, so just follow the installer's steps.
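If you want to confirm from Python that the ODBC driver was registered, one option is to list the driver names recorded in the Windows registry. This is a minimal sketch of my own, assuming 64-bit Windows and the standard ODBCINST.INI key; look for "iSeries Access ODBC Driver" in the output (recent ACS releases may also register it under the name "IBM i Access ODBC Driver").

import winreg

# Enumerate the value names under the "ODBC Drivers" key;
# each value name is the name of an installed ODBC driver.
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                    r"SOFTWARE\ODBC\ODBCINST.INI\ODBC Drivers") as key:
    i = 0
    while True:
        try:
            name, _, _ = winreg.EnumValue(key, i)
            print(name)
            i += 1
        except OSError:  # raised when there are no more values
            break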
odbctest.py
import pypyodbc

# Get connection information
config = {}
with open("connection_config.txt", 'r', encoding="utf-8") as conf:
    for line in conf.read().splitlines():
        key_, val_ = line.replace(" ", "").split("=")
        config[key_] = val_

# DB connection
connection = pypyodbc.connect(
    driver='{iSeries Access ODBC Driver}',
    system=config["system"],
    uid=config["uid"],
    pwd=config["pwd"])
cur = connection.cursor()

# SQL execution
statement = open("statement.sql", 'r', encoding="utf-8").read()
cur.execute(statement)
for row in cur:
    print(row)
The source is roughly divided into three parts.
"Get connection information" reads the database connection details from an external file and stores them in a dictionary. The external file "connection_config.txt" looks like this:
connection_config.txt
system = xxx.xxx.xxx.xxx
uid = USER
pwd = PASSWORD
In "DB connection", the read connection information is used to try to connect to the database and acquire the cursor.
In "SQL execution", the SQL written in the external file is read and passed to execute () of the cursor to issue the SQL. The for loop below it throws each line into the print () function and writes it to standard output.
statement.sql
select * from QSYS2.LIBLIST
Let's write the above SQL in statement.sql and execute it.
>python odbctest.py
(1, 'QSYS', 'QSYS', 'SYSTEM', 0, '\x0eäýäþämäwäáäÝäbäJäÝäÞäì\x0f')
(2, 'QSYS2', 'QSYS2', 'SYSTEM', 0, 'CPI\x0eá¶àªäýäþämäwäáäÝäbäJäÝäÞäì\x0f')
(3, 'QHLPSYS', 'QHLPSYS', 'SYSTEM', 0, None)
(4, 'QUSRSYS', 'QUSRSYS', 'SYSTEM', 0, 'S Mohe Hoote L Kanon Kanon U Heone He')
(5, 'QIWS', 'QIWS', 'PRODUCT', 0, None)
(6, 'QGPL', 'QGPL', 'USER', 0, 'GENERAL PURPOSE LIBRARY')
(7, 'QTEMP', 'QTEMP', 'USER', 0, None)
The garbled characters are expected... The labels of some IBM-supplied objects don't seem to convert well, but for a normally created table, whether you use SQL or AS/400-native features, Japanese text is converted into characters you can read without problems.
If you redirect the output to a file, tens of thousands of rows should take only seconds to tens of seconds. It depends on the server's performance, but when I ran SQL returning 15 columns by a little over 10,000 rows, the write finished in about 5 seconds (output file size: 1 MB).
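If you would rather write a file directly than redirect standard output, the standard csv module can do it. A minimal sketch, assuming the cursor cur from the script above and an output file name, result.csv, of my own choosing:

import csv

# Write all fetched rows to result.csv; header names come from cursor.description.
with open("result.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow([col[0] for col in cur.description])  # column name header
    for row in cur:
        writer.writerow(row)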
This frees you from the penance of holding down the Page Down key endlessly to get the entire result set.