One year after the ban on online election campaigning was lifted, the Internet is full of would-be Nostradamuses proclaiming that Japan will end if this or that party wins; there are so many that even Kibayashi-san couldn't keep up. How is everyone doing?
This time, I will try calculating seat allocation in a proportional-representation election with the D'Hondt method.
http://needtec.sakura.ne.jp/analyze_election/page/dondt/shuin_47
Enter the number of votes for each party and press the "Calculate" button to get the seat allocation for those vote counts. If you plug in the current party approval ratings as they are, the Liberal Democratic Party wins 140 seats, and even Ishin no Kai is wiped out.
As a check, entering each party's vote share from the 2013 House of Councillors election gives an estimate that differs from the actual result by only one seat.
https://github.com/mima3/analyze_election
The actual D'Hondt calculation code:
dondt_util.py
# coding=utf-8
import math
import copy


class political_party_info:
    """
    A class that stores political party information.
    name : party name
    votes: number of votes received
    max  : number of candidates (no matter how many votes the party gets,
           it cannot win more seats than this)
    seats: number of seats won
    """
    def __init__(self, name, votes, max):
        self.name = name
        self.votes = votes
        self.max = max
        self.seats = 0


def select_political_party(votes):
    """
    Given a dictionary whose keys are party names and whose values are
    political_party_info objects, return the key of the party with the
    most votes.
    """
    max = -1
    ret = None
    for k, v in votes.items():
        # Ties should be decided by lottery; here the first party
        # encountered simply wins.
        if max < v.votes:
            ret = k
            max = v.votes
    return ret


def dondt(votes_data, max):
    """
    Allocate seats by the D'Hondt method.
    votes_data: dictionary whose keys are party names and whose values
                are political_party_info objects
    max: total number of seats
    The number of seats won is stored in votes_data[x].seats.
    """
    tmp_votes = copy.deepcopy(votes_data)
    for i in range(1, max + 1):
        s = select_political_party(tmp_votes)
        if s is None:
            return None
        votes_data[s].seats += 1
        tmp_votes[s].votes = math.floor(votes_data[s].votes / (votes_data[s].seats + 1))
        if tmp_votes[s].max == votes_data[s].seats:
            # This party has won a seat for every candidate it fielded,
            # so its remaining votes no longer count.
            tmp_votes[s].votes = 0
    return votes_data
Fractions in the quotients are rounded down. When two parties have the same quotient, the seat is supposed to be decided by lottery, but here it is decided by character-code order (e.g., the Greens take priority over the LDP).
See below for how to calculate the D'Hondt formula. http://www.pref.tochigi.lg.jp/senkyo/sangisenkyo/qanda/qanda-9.html
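The divisor procedure itself can be sketched in a few lines. The following is a minimal standalone illustration (Python 3, with made-up vote counts, not real election data): each party's current quotient is votes divided by (seats already won + 1), rounded down, and each seat goes to the party with the largest quotient.

```python
# Minimal D'Hondt sketch with made-up vote counts (not real election data).
votes = {'A': 100000, 'B': 60000, 'C': 30000}
seats_total = 6
seats = {p: 0 for p in votes}
for _ in range(seats_total):
    # Each party's current quotient is votes // (seats won + 1);
    # ties go to the first party in dictionary order.
    winner = max(votes, key=lambda p: votes[p] // (seats[p] + 1))
    seats[winner] += 1
print(seats)  # → {'A': 3, 'B': 2, 'C': 1}
```

With these numbers, party A wins half the seats despite having just over half the votes, which is the slight large-party bias the D'Hondt method is known for.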
Normally, the number of seats per proportional block and each party's candidate list should be built from the Ministry of Internal Affairs and Communications pages. However, the ministry publishes them only as PDFs, which cannot be processed as text.
The number of seats per block aside, entering the candidate data by hand is painful, so I scraped the Asahi Shimbun website to create a CSV file. The following URLs are targeted: http://www.asahi.com/senkyo/sousenkyo47/kouho/B01.html ~ http://www.asahi.com/senkyo/sousenkyo47/kouho/B11.html
The HTML structure is the same as in 2012, so it will probably still work at the next dissolution.
script/analyze_asahi_hirei.py
#!/usr/bin/python
# -*- coding: utf-8 -*-
import sys
import urllib2
import lxml.html


def print_area(url):
    r = urllib2.urlopen(url, timeout=30)
    html = r.read()
    dom = lxml.html.fromstring(html)
    parties = dom.xpath('//div[@class="snkH2Box"]/h2')
    tables = dom.xpath('//table[@class="snkTbl01"]')
    block = dom.xpath('//div[@class="BreadCrumb"]/h1')[0].text_content().encode('utf-8')
    block = block[block.find(':') + len(':'):]
    for i in range(0, len(parties)):
        h2 = parties[i].text_content().encode('utf-8')
        partyName = h2.split('\n')[0]
        members = tables[i].xpath('tbody/tr')
        for m in members:
            name = m.xpath('td[@class="namae"]')[0].text_content().encode('utf-8')
            lstNum = m.xpath('td[@class="lstNum"]')[0].text_content().encode('utf-8')
            age = m.xpath('td[@class="age"]')[0].text_content().encode('utf-8')
            status = m.xpath('td[@class="status"]')[0].text_content().encode('utf-8')
            net = m.xpath('td[@class="net"]/ul')[0]
            twitterEl = net.xpath('li[@id="twitter"]/a')
            facebookEl = net.xpath('li[@id="facebook"]/a')
            hpEl = net.xpath('li[@id="HomePage1"]/a')
            areaEl = m.xpath('td[@class="w"]/a')
            area = ''
            twitter = ''
            facebook = ''
            hp = ''
            if twitterEl:
                twitter = twitterEl[0].attrib['href'].encode('utf-8')
            if facebookEl:
                facebook = facebookEl[0].attrib['href'].encode('utf-8')
            if hpEl:
                hp = hpEl[0].attrib['href'].encode('utf-8')
            if areaEl:
                area = areaEl[0].text_content().encode('utf-8')
            print('%s,%s,%s,%s,%s,%s,%s,%s,%s,%s' % (block, partyName, lstNum, name, age, status, area, twitter, facebook, hp))


def main(argvs, argc):
    """
    Gets proportional-representation candidates from the Asahi Shimbun pages.
    """
    for i in range(1, 12):
        url = ('http://www.asahi.com/senkyo/sousenkyo47/kouho/B%s.html' % str(i).zfill(2))
        print_area(url)


if __name__ == '__main__':
    argvs = sys.argv
    argc = len(argvs)
    sys.exit(main(argvs, argc))
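The extraction pattern above (XPath queries keyed on the page's class attributes) can be illustrated on a toy fragment. The sketch below uses the standard library's xml.etree instead of lxml so it runs anywhere; the class names (snkTbl01, namae, lstNum) come from the script above, but the HTML fragment itself is made up.

```python
# Toy sketch of the same extraction pattern, using the standard library
# (xml.etree supports a limited XPath subset) on a made-up fragment.
import xml.etree.ElementTree as ET

html = """<div>
  <table class="snkTbl01">
    <tr><td class="namae">Candidate A</td><td class="lstNum">1</td></tr>
    <tr><td class="namae">Candidate B</td><td class="lstNum">2</td></tr>
  </table>
</div>"""

root = ET.fromstring(html)
rows = root.findall('.//table[@class="snkTbl01"]/tr')
candidates = [(r.find('td[@class="namae"]').text,
               r.find('td[@class="lstNum"]').text)
              for r in rows]
print(candidates)  # → [('Candidate A', '1'), ('Candidate B', '2')]
```

Real pages are rarely well-formed XML, which is why the script uses lxml.html (a forgiving HTML parser) rather than a strict XML parser.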
**Proportional-block candidate data created from the Asahi Shimbun data, with full-width characters and "prefecture" suffixes cleaned up** https://github.com/mima3/analyze_election/blob/master/script/candidate_shuin_47_hirei.csv
**Proportional-block seat-count CSV (e.g., the Kinki block) created by hand while looking at the Asahi Shimbun data** https://github.com/mima3/analyze_election/blob/master/script/block_shuin_47_hirei.csv