[Python] Language Processing 100 Knock 2020 [Answers 00-39]

The 2020 edition of the Language Processing 100 Knock exercises has been released.

https://nlp100.github.io/ja/

In this article, I simply list the answers for easy reference. The explanations are written in separate articles.

Continued

Language Processing 100 Knock 2020 [Chapter 5: Dependency parsing answers]

Chapter 1: Warm-up

https://kakedashi-engineer.appspot.com/2020/04/15/nlp100-00-09/

00. Reversed string

s = 'stressed'
print (s[::-1])
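
This prints desserts, i.e. the input string reversed.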

01. "パタトクカシーー"

s = 'パタトクカシーー'
print (s[::2])

02. "Patcar" + "Tax" = "Patatokukasie"

s1 = 'パトカー'
s2 = 'タクシー'
print (''.join([a+b for a,b in zip(s1,s2)]))
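
Note that zip() stops at the end of the shorter input, so this only works cleanly because both strings have four characters; the result is 'パタトクカシーー'.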

03. Pi

s = "Now I need a drink, alcoholic of course, after the heavy lectures involving quantum mechanics."
s = s.replace(',','').replace('.','')
print ([len(w) for w in s.split()])
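
The word lengths spell out the first digits of pi: [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8, 9, 7, 9].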

04. Element symbols

s = "Hi He Lied Because Boron Could Not Oxidize Fluorine. New Nations Might Also Sign Peace Security Clause. Arthur King Can."
s = s.replace('.','')
idx = [1, 5, 6, 7, 8, 9, 15, 16, 19]
mp = {}
for i,w in enumerate(s.split()):
    if (i+1) in idx:
        v = w[:1]
    else:
        v = w[:2]
    mp[v] = i+1
print (mp)

05. n-gram

def ngram(S, n):
    r = []
    for i in range(len(S) - n + 1):
        r.append(S[i:i+n])
    return r
s = 'I am an NLPer'
print (ngram(s.split(),2))
print (ngram(s,2))
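
For reference, the two calls produce word bigrams and character bigrams respectively:

[['I', 'am'], ['am', 'an'], ['an', 'NLPer']]
['I ', ' a', 'am', 'm ', ' a', 'an', 'n ', ' N', 'NL', 'LP', 'Pe', 'er']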

06. Sets

def ngram(S, n):
    r = []
    for i in range(len(S) - n + 1):
        r.append(S[i:i+n])
    return r
s1 = 'paraparaparadise'
s2 = 'paragraph'
st1 = set(ngram(s1, 2))
st2 = set(ngram(s2, 2))
print(st1 | st2)
print(st1 & st2)
print(st1 - st2)
print('se' in st1)
print('se' in st2)
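
Since 'paraparaparadise' ends in 'se' while 'paragraph' contains no such bigram, the last two lines print True and False.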

07. Template-based sentence generation

def temperature(x,y,z):
    return str(x)+'時の'+str(y)+'は'+str(z)
x = 12
y = '気温'
z = 22.4
print (temperature(x,y,z))
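
This prints 12時の気温は22.4 ("the temperature at 12 o'clock is 22.4").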

08. Cipher

def cipher(S):
    new = []
    for s in S:
        if 97 <= ord(s) <= 122:
            s = chr(219 - ord(s))
        new.append(s)
    return ''.join(new)
        
s = 'I am an NLPer'
new = cipher(s)
print (new)

print (cipher(new))
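
Because chr(219 - (219 - ord(s))) == s for every lowercase letter, the cipher is its own inverse: the second print recovers the original 'I am an NLPer'.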

09. Typoglycemia

import random
s = 'I couldn’t believe that I could actually understand what I was reading : the phenomenal power of the human mind .'
ans = []
text = s.split()
for word in text:
    if (len(word)>4):
        mid = list(word[1:-1])
        random.shuffle(mid)
        word = word[0] + ''.join(mid) + word[-1]
        ans.append(word)
    else:
        ans.append(word)
print (' '.join(ans))

Chapter 2: UNIX Commands

https://kakedashi-engineer.appspot.com/2020/04/16/nlp100-10-14/
https://kakedashi-engineer.appspot.com/2020/04/17/nlp100-15-19/

10. Counting the number of lines

import pandas as pd
df = pd.read_csv('popular-names.txt', delimiter='\t', header=None)
print (len(df))
wc -l popular-names.txt
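
As a cross-check without pandas, a minimal sketch (assuming the file ends with a newline, this matches wc -l):

with open('popular-names.txt') as f:
    print (sum(1 for _ in f))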

11. Replace tabs with spaces

import pandas as pd
df = pd.read_csv('popular-names.txt', delimiter='\t', header=None)
df.to_csv('space.txt', sep=' ',header=False, index=False)
sed 's/\t/ /g' popular-names.txt > replaced.txt
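
The same replacement without pandas, as a minimal sketch writing to the same replaced.txt:

with open('popular-names.txt') as f, open('replaced.txt', 'w') as out:
    out.write(f.read().replace('\t', ' '))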

12. Save the first column to col1.txt and the second column to col2.txt

import pandas as pd
df = pd.read_csv('popular-names.txt', delimiter='\t', header=None)
df.iloc[:,0].to_csv('col1.txt', sep=' ',header=False, index=False)
df.iloc[:,1].to_csv('col2.txt', sep=' ',header=False, index=False)
cut -f 1  popular-names.txt > col1.txt
cut -f 2  popular-names.txt > col2.txt

13. Merge col1.txt and col2.txt

import pandas as pd
df1 = pd.read_csv('col1.txt', delimiter='\t', header=None)
df2 = pd.read_csv('col2.txt', delimiter='\t', header=None)
df = pd.concat([df1, df2], axis=1)
df.to_csv('col1_2.txt', sep='\t',header=False, index=False)
paste col1.txt col2.txt > col1_2.txt

14. Output the first N lines

import pandas as pd
df = pd.read_csv('popular-names.txt', delimiter='\t', header=None)
print (df.head(5))
head -n 5 popular-names.txt

15. Output the last N lines

import pandas as pd
df = pd.read_csv('popular-names.txt', delimiter='\t', header=None)
print (df.tail(5))
tail -n 5 popular-names.txt

16. Split the file into N parts

N = 3
import pandas as pd
df = pd.read_csv('popular-names.txt', delimiter='\t', header=None)
step = - (-len(df) // N) # ceiling division, so every row is covered
for n in range(N):
    df_split = df.iloc[n*step:(n+1)*step]
    df_split.to_csv('popular-names'+str(n)+'.txt', sep='\t',header=False, index=False)
split -n 3 popular-names.txt

17. Unique strings in the first column

import pandas as pd
df = pd.read_csv('popular-names.txt', delimiter='\t', header=None)
new = df[0].unique()
new.sort()
print (new)
cut -f 1  popular-names.txt | sort | uniq

18. Sort lines in descending order of the number in the third column

import pandas as pd
df = pd.read_csv('popular-names.txt', delimiter='\t', header=None)
new = df[2].sort_values(ascending=False)
print (new)
cut -f 3  popular-names.txt | sort -n -r

19. Find the frequency of each string in the first column and sort them in descending order of frequency

import pandas as pd
df = pd.read_csv('popular-names.txt', delimiter='\t', header=None)
vc = df[0].value_counts()
vc = pd.DataFrame(vc)
vc = vc.reset_index()
vc.columns = ['name','count']
vc = vc.sort_values(['count','name'],ascending=[False,False])
print (vc)
cut -f 1  popular-names.txt | sort | uniq -c | sort -n -r

Chapter 3: Regular expressions

https://kakedashi-engineer.appspot.com/2020/04/18/nlp100-20-24/
https://kakedashi-engineer.appspot.com/2020/04/19/nlp100-25-26/
https://kakedashi-engineer.appspot.com/2020/04/20/nlp100-27-28/
https://kakedashi-engineer.appspot.com/2020/04/21/nlp100-29-30/

20. Reading JSON data

import pandas as pd
wiki = pd.read_json('jawiki-country.json.gz', lines = True)
uk = wiki[wiki['title']=='イギリス'].text.values
print (uk)

21. Extract lines containing category names

import pandas as pd
import re
pattern = re.compile('Category')
wiki = pd.read_json('jawiki-country.json.gz', lines = True)
uk = wiki[wiki['title']=='イギリス'].text.values
ls = uk[0].split('\n')
for line in ls:
    if re.search(pattern, line):
        print (line)

22. Extract category names

import pandas as pd
import re
pattern = re.compile('Category')
wiki = pd.read_json('jawiki-country.json.gz', lines = True)
uk = wiki[wiki['title']=='イギリス'].text.values
ls = uk[0].split('\n')
for line in ls:
    if re.search(pattern, line):
        line = line.replace('[[','').replace('Category:','').replace(']]','').replace('|*','').replace('|元','')
        print (line)

23. Section structure

import pandas as pd
import re
pattern = re.compile('^=+.*=+$') # lines that start and end with one or more '='
wiki = pd.read_json('jawiki-country.json.gz', lines = True)
uk = wiki[wiki['title']=='イギリス'].text.values
ls = uk[0].split('\n')
for line in ls:
    if re.search(pattern, line):
        level = line.count('=') // 2 - 1
        print(line.replace('=',''), level )
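
For example, a heading line like ==歴史== contains four '=' characters, so level = 4 // 2 - 1 = 1, i.e. a top-level section.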

24. Extract file references

import pandas as pd
import re
pattern = re.compile(r'ファイル|File:(.+?)\|')
wiki = pd.read_json('jawiki-country.json.gz', lines = True)
uk = wiki[wiki['title']=='イギリス'].text.values
ls = uk[0].split('\n')
for line in ls:
    r = re.findall(pattern, line)
    if r:
        print (r[0])

25. Template extraction

import pandas as pd
import re
pattern = re.compile(r'\|(.+?)\s=\s*(.+)')
wiki = pd.read_json('jawiki-country.json.gz', lines = True)
uk = wiki[wiki['title']=='イギリス'].text.values
ls = uk[0].split('\n')
d = {}
for line in ls:
    r = re.search(pattern, line)
    if r:
        d[r[1]]=r[2]
print (d)

26. Remove emphasis markup

import pandas as pd
import re
pattern = re.compile(r'\|(.+?)\s=\s*(.+)')
p_emp = re.compile("'{2,}(.+?)'{2,}")
wiki = pd.read_json('jawiki-country.json.gz', lines = True)
uk = wiki[wiki['title']=='イギリス'].text.values
ls = uk[0].split('\n')
d = {}
for line in ls:
    r = re.search(pattern, line)
    if r:
        d[r[1]]=r[2]
    r = re.sub(p_emp,'\\1', line)
    print (r)
print (d)
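
Here re.sub keeps only the captured group, so '''強調''' (bold) and ''強調'' (italic) both collapse to the inner text 強調.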

27. Remove internal links

import pandas as pd
import re
pattern = re.compile(r'\|(.+?)\s=\s*(.+)')
p_emp = re.compile("'{2,}(.+?)'{2,}")
p_link = re.compile(r'\[\[(.+?)\]\]')
wiki = pd.read_json('jawiki-country.json.gz', lines = True)
uk = wiki[wiki['title']=='イギリス'].text.values
lines = uk[0]
lines = re.sub(p_emp,'\\1', lines)
lines = re.sub(p_link,'\\1', lines)
ls = lines.split('\n')
d = {}
for line in ls:
    r = re.search(pattern, line)
    if r:
        d[r[1]]=r[2]
print (d)

28. Remove MediaWiki markup

import pandas as pd
import re
pattern = re.compile(r'\|(.+?)\s=\s*(.+)')
p_emp = re.compile("'{2,}(.+?)'{2,}")
p_link = re.compile(r'\[\[(.+?)\]\]')
p_refbr = re.compile(r'<(?:br|ref)[^>]*?>.+?</(?:br|ref)[^>]*?>')
wiki = pd.read_json('jawiki-country.json.gz', lines = True)
uk = wiki[wiki['title']=='イギリス'].text.values
lines = uk[0]
lines = re.sub(p_emp,'\\1', lines)
lines = re.sub(p_link,'\\1', lines)
lines = re.sub(p_refbr,'', lines)
ls = lines.split('\n')
d = {}
for line in ls:
    r = re.search(pattern, line)
    if r:
        d[r[1]]=r[2]
print (d)

29. Get the URL of the flag image

import pandas as pd
import re
import requests
pattern = re.compile(r'\|(.+?)\s=\s*(.+)')
wiki = pd.read_json('jawiki-country.json.gz', lines = True)
uk = wiki[wiki['title']=='イギリス'].text.values
ls = uk[0].split('\n')
d = {}
for line in ls:
    r = re.search(pattern, line)
    if r:
        d[r[1]]=r[2]
        
S = requests.Session()
URL = "https://commons.wikimedia.org/w/api.php"
PARAMS = {
    "action": "query",
    "format": "json",
    "titles": "File:" + d['Image du drapeau'],
    "prop": "imageinfo",
    "iiprop":"url"
}
R = S.get(url=URL, params=PARAMS)
DATA = R.json()
PAGES = DATA['query']['pages']
for k, v in PAGES.items():
    print (v['imageinfo'][0]['url'])

Chapter 4: Morphological analysis

https://kakedashi-engineer.appspot.com/2020/04/22/nlp100-31-34/
https://kakedashi-engineer.appspot.com/2020/04/22/nlp100-35-39/

30. Reading the results of morphological analysis

import MeCab
path = 'neko.txt.mecab'
with open(path) as f:
    text = f.read().split('\n')
result = []
for line in text[:-1]: # drop the empty string left after the file's final newline
    if line == 'EOS':
        continue
    ls = line.split('\t')
    d = {}
    tmp = ls[1].split(',')
    d = {'surface':ls[0], 'base':tmp[6], 'pos':tmp[0], 'pos1':tmp[1]}
    result.append(d)
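
Each non-EOS line of neko.txt.mecab is expected to follow the IPAdic output format: the surface form, a tab, then nine comma-separated features, with the base form at index 6. The sample line below is illustrative:

line = '猫\t名詞,一般,*,*,*,*,猫,ネコ,ネコ'
surface, feats = line.split('\t')
print (surface, feats.split(',')[6], feats.split(',')[0])  # 猫 猫 名詞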

31. Verbs

import MeCab
path = 'neko.txt.mecab'
with open(path) as f:
    text = f.read().split('\n')
result = []
for line in text[:-1]:
    if line == 'EOS':
        continue
    ls = line.split('\t')
    d = {}
    tmp = ls[1].split(',')
    d = {'surface':ls[0], 'base':tmp[6], 'pos':tmp[0], 'pos1':tmp[1]}
    result.append(d)
[d['surface'] for d in result if d['pos'] == '動詞' ]

32. Base forms of verbs

import MeCab
path = 'neko.txt.mecab'
with open(path) as f:
    text = f.read().split('\n')
result = []
for line in text[:-1]:
    if line == 'EOS':
        continue
    ls = line.split('\t')
    d = {}
    tmp = ls[1].split(',')
    d = {'surface':ls[0], 'base':tmp[6], 'pos':tmp[0], 'pos1':tmp[1]}
    result.append(d)
[d['base'] for d in result if d['pos'] == '動詞' ]

33. "A の B"

import MeCab
path = 'neko.txt.mecab'
with open(path) as f:
    text = f.read().split('\n')
result = []
for line in text[:-1]:
    if line == 'EOS':
        continue
    ls = line.split('\t')
    d = {}
    tmp = ls[1].split(',')
    d = {'surface':ls[0], 'base':tmp[6], 'pos':tmp[0], 'pos1':tmp[1]}
    result.append(d)
noun_phrase = []
for i in range(len(result)-2):
    if (result[i]['pos'] == '名詞' and result[i+1]['surface'] == 'の' and result[i+2]['pos'] == '名詞'):
        noun_phrase.append(result[i]['surface']+result[i+1]['surface']+result[i+2]['surface'])
print (noun_phrase)

34. Noun concatenation

import MeCab
path = 'neko.txt.mecab'
with open(path) as f:
    text = f.read().split('\n')
result = []
for line in text[:-1]:
    if line == 'EOS':
        continue
    ls = line.split('\t')
    d = {}
    tmp = ls[1].split(',')
    d = {'surface':ls[0], 'base':tmp[6], 'pos':tmp[0], 'pos1':tmp[1]}
    result.append(d)

ls_noun = []
noun = ''
for d in result:
    if d['pos']=='名詞':
        noun += d['surface']
    else:
        if noun != '':
            ls_noun.append(noun)
            noun = ''
else:
    # for-else: the else clause runs once the loop finishes, flushing a trailing noun run
    if noun != '':
        ls_noun.append(noun)
        noun = ''
print (ls_noun)

35. Word frequency

import MeCab
from collections import Counter
path = 'neko.txt.mecab'
with open(path) as f:
    text = f.read().split('\n')
result = []
for line in text[:-1]:
    if line == 'EOS':
        continue
    ls = line.split('\t')
    d = {}
    tmp = ls[1].split(',')
    d = {'surface':ls[0], 'base':tmp[6], 'pos':tmp[0], 'pos1':tmp[1]}
    result.append(d)


surface =  [d['surface'] for d in result]
c = Counter(surface)
print (c.most_common())

36. Top 10 most frequent words

import MeCab
from collections import Counter
import matplotlib.pyplot as plt
plt.rcParams['font.family'] = 'AppleGothic'
path = 'neko.txt.mecab'
with open(path) as f:
    text = f.read().split('\n')
result = []
for line in text[:-1]:
    if line == 'EOS':
        continue
    ls = line.split('\t')
    d = {}
    tmp = ls[1].split(',')
    d = {'surface':ls[0], 'base':tmp[6], 'pos':tmp[0], 'pos1':tmp[1]}
    result.append(d)


surface =  [d['surface'] for d in result]
c = Counter(surface)
target = list(zip(*c.most_common(10)))
plt.bar(*target)
plt.show()
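
zip(*c.most_common(10)) transposes the list of (word, count) pairs into a tuple of words and a tuple of counts, which plt.bar takes as the x labels and bar heights. A tiny illustration with made-up counts:

pairs = [('の', 9194), ('。', 7486)]  # hypothetical values
words, counts = zip(*pairs)
# words == ('の', '。'), counts == (9194, 7486)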

37. Top 10 words frequently co-occurring with "猫" (cat)

import MeCab
from collections import Counter
import matplotlib.pyplot as plt
plt.rcParams['font.family'] = 'AppleGothic'
path = 'neko.txt.mecab'
with open(path) as f:
    text = f.read().split('\n')
result = []
tmp_cooccurrence = []
cooccurrence = []
inCat = False

for line in text[:-1]:
    if line == 'EOS':
        if inCat:
            cooccurrence.extend(tmp_cooccurrence)
        else:
            pass
        tmp_cooccurrence = []
        inCat = False
        continue
            
    ls = line.split('\t')
    d = {}
    tmp = ls[1].split(',')
    d = {'surface':ls[0], 'base':tmp[6], 'pos':tmp[0], 'pos1':tmp[1]}
    result.append(d)
    if ls[0]!='猫':
        tmp_cooccurrence.append(ls[0])
    else:
        inCat = True


c = Counter(cooccurrence)
target = list(zip(*c.most_common(10)))
plt.bar(*target)
plt.show()

38. Histogram

import MeCab
from collections import Counter
import matplotlib.pyplot as plt
plt.rcParams['font.family'] = 'AppleGothic'
path = 'neko.txt.mecab'
with open(path) as f:
    text = f.read().split('\n')
result = []
for line in text[:-1]:
    if line == 'EOS':
        continue
    ls = line.split('\t')
    d = {}
    tmp = ls[1].split(',')
    d = {'surface':ls[0], 'base':tmp[6], 'pos':tmp[0], 'pos1':tmp[1]}
    result.append(d)


surface =  [d['surface'] for d in result]
c = Counter(surface)
plt.hist(list(c.values()), range=(1,10))
plt.show()

39. Zipf's law

import MeCab
from collections import Counter
import matplotlib.pyplot as plt
import numpy as np
plt.rcParams['font.family'] = 'AppleGothic'
path = 'neko.txt.mecab'
with open(path) as f:
    text = f.read().split('\n')
result = []
for line in text[:-1]:
    if line == 'EOS':
        continue
    ls = line.split('\t')
    d = {}
    tmp = ls[1].split(',')
    d = {'surface':ls[0], 'base':tmp[6], 'pos':tmp[0], 'pos1':tmp[1]}
    result.append(d)
surface =  [d['surface'] for d in result]
c = Counter(surface)
v = [kv[1] for kv in c.most_common()]
plt.scatter(np.log(range(1, len(v)+1)), np.log(v)) # ranks start at 1 to avoid log(0)
plt.show()
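
If Zipf's law holds, this log-log scatter of rank against frequency should be roughly a straight line with slope near -1.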

Continued

Language Processing 100 Knock 2020 [Answers 00-49]
