We use JIRA for project management with clients, but Backlog internally. There are a few reasons for this.
- I want to track things the client should not see, such as sales figures and internal comments
- Not everyone involved in the project gets a JIRA account issued
- Everything is in English with our global clients, and not everyone in the company works smoothly in English-only JIRA
- The JIRA instance is hosted in the head office's region, so it is painfully slow to use
So in the end I wondered whether I could just mirror the contents of JIRA into a table, with no omissions and no copy mistakes.
- Fetch the open issues with JIRA's REST API
- Shape the data
- Post it to a Google Spreadsheet
The mind-numbing work of searching and copying inside a sluggish JIRA is no longer necessary. Minor operational improvements are still to come.
jira.py
```python
import base64

import requests
import pandas as pd
import gspread
import gspread_dataframe as gs_df
from google.colab import auth, files
from oauth2client.client import GoogleCredentials

# Authenticate to Google and open the target sheet by file name
auth.authenticate_user()
gc = gspread.authorize(GoogleCredentials.get_application_default())
worksheet = gc.open('file name').worksheet('Sheet name')

base_url = 'https://jira.yourjira.com'
authstr = 'yourID:yourPW'
# b64encode (not encodebytes) so no newline ends up in the header value
authkey = base64.b64encode(authstr.encode('utf-8')).decode('ascii')
auth_header = {'Content-Type': 'application/json',
               'Authorization': 'Basic ' + authkey}

# First request: just look up the total number of unresolved issues
searchUrl = base_url + '/rest/api/2/search?jql=project+%3D+yourproject+AND+resolution+%3D+Unresolved'
result = requests.get(searchUrl, headers=auth_header).json()
total = str(result['total'])
print('Total unresolved issues: ' + total)

# Second request: fetch every record, specifying the fields we need
searchUrl = (base_url + '/rest/api/2/search?maxResults=' + total
             + '&fields=issuetype,summary,assignee,reporter,duedate,updated'
             + '&jql=project+%3D+yourproject+AND+resolution+%3D+Unresolved+ORDER+BY+updated+DESC')
result = requests.get(searchUrl, headers=auth_header).json()
issues = result['issues']

# Write the issues out as CSV
with open('example.csv', 'w') as f:
    f.write("Type,URL,Summary,Assignee,Reporter,Duedate,Updated\n")
    for issue in issues:
        fields = issue['fields']
        assignee = '' if fields['assignee'] is None else fields['assignee']['displayName']
        duedate = '' if fields['duedate'] is None else fields['duedate']
        f.write(fields['issuetype']['name']
                + ',' + base_url + '/browse/' + issue['key']
                + ',"' + fields['summary'] + '"'
                + ',' + assignee
                + ',' + fields['reporter']['displayName']
                + ',' + duedate
                + ',' + fields['updated'] + '\n')

# To download the CSV instead:
# files.download('example.csv')

# Export to the spreadsheet
df = pd.read_csv('example.csv')
gs_df.set_with_dataframe(worksheet, df)
```
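A note on the Basic auth header, which is easy to get wrong: `base64.encodebytes` inserts a newline every 76 characters (and appends one at the end), and wrapping the token in `repr()` adds literal quote characters, either of which yields an invalid `Authorization` value. A small stdlib-only sketch of a clean header builder (the ID and password are placeholders):

```python
import base64


def basic_auth_header(user: str, password: str) -> dict:
    """Build a Basic auth header; b64encode never inserts newlines."""
    token = base64.b64encode(f"{user}:{password}".encode("utf-8")).decode("ascii")
    return {"Content-Type": "application/json",
            "Authorization": "Basic " + token}


header = basic_auth_header("yourID", "yourPW")
print(header["Authorization"])  # → Basic eW91cklEOnlvdXJQVw==
```

Alternatively, `requests` can build this header itself if you pass `auth=('yourID', 'yourPW')` to `requests.get`, skipping the manual encoding entirely.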
It's done.
JIRA's API responds much faster than opening pages and searching in the UI. That alone may be worth it, just for suppressing the frustration of waiting seconds for every page to load.
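One of the minor improvements still to come: the hand-rolled `f.write` breaks as soon as an assignee name or summary contains a comma or a double quote. Python's `csv` module handles the quoting for you; here is a sketch against a made-up issue record shaped like the JIRA response (the data is invented for illustration):

```python
import csv
import io

# Hypothetical row shaped like one entry of the JIRA search response
issues = [
    {"key": "PROJ-1",
     "fields": {"issuetype": {"name": "Bug"},
                "summary": 'Crash when saving, then "reload"',
                "assignee": {"displayName": "Doe, Jane"},
                "reporter": {"displayName": "Smith, John"},
                "duedate": None,
                "updated": "2020-01-01T00:00:00.000+0000"}},
]

buf = io.StringIO()
writer = csv.writer(buf)  # quotes fields containing commas/quotes automatically
writer.writerow(["Type", "URL", "Summary", "Assignee", "Reporter", "Duedate", "Updated"])
for issue in issues:
    f = issue["fields"]
    writer.writerow([
        f["issuetype"]["name"],
        "https://jira.yourjira.com/browse/" + issue["key"],
        f["summary"],
        f["assignee"]["displayName"] if f["assignee"] else "",
        f["reporter"]["displayName"] if f["reporter"] else "",
        f["duedate"] or "",
        f["updated"],
    ])
print(buf.getvalue())
```

The same rows could be fed straight into `pd.DataFrame` and `set_with_dataframe`, skipping the intermediate CSV file altogether.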