[PYTHON] Try using Pelican's draft feature

How to use the built-in Drafts feature of Pelican, a static blog generator written in Python.

See the previous articles below; they cover everything from setup to common settings.

-How to publish a blog on Amazon S3 with the static blog engine "Pelican" for Pythonista
-Various settings of the Python static blog generation tool "Pelican"
-Try to introduce a theme to Pelican

Pelican, a blog generation engine written in Python, ships with a "draft" feature by default.

Pelican articles are typically kept under the local content/ directory in reST or Markdown format. In that state, running the make html command generates every article, and all of them are indexed and published to the outside world when you deploy.

Sometimes you want to keep an unfinished article out of that process, and the Drafts feature exists to meet exactly that need.

How to use the Drafts feature

In the metadata at the top of the article, simply add:

Status: draft

That is all. The above applies to Markdown; for reST, write:

:Status: draft

instead.
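Put together, a complete Markdown article header might look like this (the Title, Date, and body values are placeholders, not from the original article):

```
Title: My Article
Date: 2024-06-01
Status: draft

Article body here.
```

The reST equivalent uses field-list syntax at the top of the file, e.g. :Title:, :Date:, and :Status: lines.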

What happens when you write this is that, on the next make html, only that article is excluded from the index, and its generated HTML file does not go into the output folder directly; instead, a separate drafts folder is created inside output and the file is stored there.
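The routing described above can be sketched roughly like this. This is an illustration, not Pelican's actual implementation; the default status name "published" and the exact path layout are assumptions:

```python
# Rough illustration of how the Status metadata decides where a
# generated article lands: output/ (indexed) or output/drafts/ (hidden).
def output_path(metadata: dict, slug: str) -> str:
    """Return the relative path the generated HTML would be written to."""
    if metadata.get("Status", "published").lower() == "draft":
        return f"output/drafts/{slug}.html"  # generated, but not indexed
    return f"output/{slug}.html"             # indexed on the site

print(output_path({"Title": "My Article", "Status": "draft"}, "my-article"))
# → output/drafts/my-article.html
```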

If you deploy in that state, it will not be visible to the general public.

http://[domain name]/drafts/[URL]

It can still be reached at a URL of this form. (The specific URL depends on your environment.)

Keep the article in drafts while you work on it. Once you have checked it and are ready to publish, delete that one metadata line, generate the HTML again, and deploy; this time the article will be published properly. It's easy!
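The publish step, stripping the one Status line before regenerating, can be sketched as below. The file name and contents are hypothetical; in a real project you would edit the article under content/ and then run make html again:

```python
# Sketch: remove the "Status: draft" metadata line when the article is
# ready to publish. The article path and text here are made up.
from pathlib import Path

article = Path("my-article.md")
article.write_text(
    "Title: My Article\n"
    "Date: 2024-06-01\n"
    "Status: draft\n"
    "\n"
    "Article body.\n"
)

# Drop the draft flag, keeping every other line intact:
kept = [l for l in article.read_text().splitlines() if l.strip() != "Status: draft"]
article.write_text("\n".join(kept) + "\n")

print("Status: draft" in article.read_text())  # → False
```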

Important point

Strictly speaking, these are different from true private "drafts", because anyone who knows the URL can still access them. If you do not want a draft exposed at all, you have to keep it somewhere outside the content folder. Still, to avoid accidentally publishing a half-written article along with the others, it is safer to add the Status: draft attribute from the start.
