This article is a work in progress, but I'm publishing it as-is; I'll clean it up when I have time.
I love pixivpy, but since it was developed in China there isn't much explanation of it on Japanese sites, so I'd like to share what I learned while using it. This article shows how to download all the works of a specific user from pixiv at once (including ugoira) using Python's pixivpy. Please refer to it when you want to adapt the script for your own use.
It used to be possible to inspect the return values with Spyder, the IDE that ships with Anaconda, but that no longer works with the current version (it may still work with Spyder 3.3.6). So I decided to either use PyScripter or save the response as a JSON file and analyze it from there.
qiita.py
from pixivpy3 import *
import json
import os
from PIL import Image
import glob
from time import sleep
# Settings to adjust for your own use
# ID of the user whose works you want to download (the number at the end of the URL of the user's page on the web)
id_search = 11
# Maximum number of works to download per user; to download everything, make it larger than the author's total work count
# Works are downloaded newest first
works = 100000
# Filter by bookmark count: set the minimum; 0 downloads everything
score = 0
# Filter by view count: set the minimum; 0 downloads everything
view = 0
# Filter by tag, e.g. target_tag = ["Fate/GrandOrder","FGO","FateGO","Fate/staynight"]
target_tag = []   # If you list several tags in target_tag, a work is downloaded if it carries at least one of them
target_tag2 = []  # If you also fill in target_tag2, only works matching both target_tag and target_tag2 are downloaded
extag = ["R-18"]  # Skip a work if it carries even one of the tags in extag
# Directory to save images into
main_saving_direcory_path = "./img/"

# Pre-processing
# Read the file with the account information created in advance
with open("client.json", "r") as f:
    client_info = json.load(f)

# pixivpy login process
api = PixivAPI()
api.login(client_info["pixiv_id"], client_info["password"])
aapi = AppPixivAPI()
aapi.login(client_info["pixiv_id"], client_info["password"])

# Fetch the user's work list and dump it to a JSON file so it can be inspected
illustrator_id = api.users_works(id_search, per_page=works)
with open("users_works.json", "w") as n:
    json.dump(illustrator_id, n, indent=2)
with open("users_works.json", "r") as sd:
    users_works = json.load(sd)
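For reference, client.json is just a small file you create beforehand holding your login credentials. Given how it is read above, it looks like this (fill in your own values):

client.json
{
    "pixiv_id": "your pixiv ID (mail address)",
    "password": "your password"
}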
api.users_works is defined as follows:
papi.py
# User work list
def users_works(self, author_id, page=1, per_page=30,
                image_sizes=['px_128x128', 'px_480mw', 'large'],
                include_stats=True, include_sanity_level=True):
That is the definition. Now let's drill into what a call actually returns:
api.users_works(11, per_page=100000)
api.users_works(11, per_page=100000).pagination
api.users_works(11, per_page=100000).response
api.users_works(11, per_page=100000).response[0]
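One note on pagination: if per_page does not cover all of the author's works, the result is split into pages, and the pagination field tells you where you are. Below is a minimal sketch of walking every page; the loop, the per_page value, and the assumption that pagination exposes a next field (the next page number, or None on the last page) are my own illustration, so check against your own dump:

pagination_sketch.py
# Illustration only: collect every page of a user's work list
all_works = []
page = 1
while True:
    result = api.users_works(id_search, page=page, per_page=50)
    all_works.extend(result.response)
    # Assumption: pagination.next is the next page number, or None on the last page
    if result.pagination.next is None:
        break
    page = result.pagination.next
    sleep(1)  # wait between requests to be polite to the server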
age_limit: whether the work has an age restriction
bookstyle:
caption: description of the work
content_type:
created_time: the time the work was published
favorite_id:
height: image height (resolution)
id: the ID assigned to each work
is_liked:
is_manga:
metadata:
page_count: number of pages
publicity:
reuploaded_time:
sanity_level:
title: title of the work
tools: the tools used to draw the work? This cannot be confirmed from the work's page
type: type of work; "illustration" for a single image, "ugoira" for an ugoira animation, "manga" for two or more images
width: image width (resolution)
image_urls: there are three URLs, one per image size; large has the best image quality
api.users_works(11, per_page=100000).response[0].image_urls
tags: Tags attached to the work
api.users_works(11, per_page=100000).response[0].tags
user: information about the user who posted the work
stats: statistics about the work (views, score, bookmarks and so on)
api.users_works(11, per_page=100000).response[0].user
api.users_works(11, per_page=100000).response[0].stats
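With these fields, the filters set at the top of qiita.py (score, view, target_tag, target_tag2, extag) can be checked before downloading anything. Below is a minimal sketch of such a check; the helper name is mine, and I'm assuming stats.views_count is a plain number and stats.favorited_count splits into public and private counts, so verify the exact shape against your own users_works.json:

filter_sketch.py
def should_download(work):
    # Illustrative filter using the settings defined at the top of qiita.py
    work_tags = work.tags
    # Skip works carrying any excluded tag
    if any(t in work_tags for t in extag):
        return False
    # If target_tag is set, require at least one of its tags
    if target_tag and not any(t in work_tags for t in target_tag):
        return False
    # If target_tag2 is also set, require a match there as well
    if target_tag2 and not any(t in work_tags for t in target_tag2):
        return False
    # Minimum view count (view = 0 disables this filter)
    if work.stats.views_count < view:
        return False
    # Minimum bookmark count (score = 0 disables this filter); field shape is an assumption
    if work.stats.favorited_count.public < score:
        return False
    return True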
For example, this downloads the author's 50x50 profile icon into the save directory:
aapi.download(illustrator_id.response[0].user.profile_image_urls.px_50x50, main_saving_direcory_path)
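Putting it together, aapi.download fetches one URL into a directory, so a download loop over the filtered works looks roughly like this. This is a sketch, not the finished script: it reuses the should_download helper sketched above and only handles single-page works, while manga (page_count of 2 or more) and ugoira need extra handling based on type and metadata:

download_sketch.py
# Illustration only: assumes main_saving_direcory_path already exists
for work in illustrator_id.response:
    if not should_download(work):  # the filter sketched above
        continue
    if work.page_count == 1:
        # Single illustration: "large" is the best quality of the three sizes
        aapi.download(work.image_urls.large, main_saving_direcory_path)
    sleep(1)  # wait between downloads to avoid hammering the server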