Erotic images are the best.
As long as you have an erotic image, anyone can enjoy themselves freely. You can get aroused all on your own, even without a girlfriend. You can feel a certain satisfaction and immerse yourself in happiness. Every taste is catered for.
Therefore, it is no exaggeration to call collecting erotic images a habit of our species, much like dung beetles rolling dung.
However, we are the most advanced of the primates. We shouldn't be like the dung beetle, which has rolled dung the same way for tens of thousands of years. Mankind must collect erotic images more efficiently and more passionately.
Even so, collecting erotic images is hard work. You have to visit various sites, examine them carefully, then collect the items that suit you and organize them according to some scheme. Many people also need different ones depending on the day.
By the way, deep learning has become all the rage lately.
Everyone and their cat is doing deep learning.
When a project runs into trouble, the consultations come: "Couldn't we do something about this with deep learning?" And more and more salespeople pitch along the lines of: "For a project with low estimation accuracy, can't we grab the customer's attention with deep learning for now? It's trendy, right?" It's the same when I meet engineers outside the company; conversations like "That'd be easy to win with deep learning" or "I think that could be solved with deep learning" keep coming up. Everyone, everywhere, it's deep learning.
So, if it's that popular, I have no choice but to take up the challenge.
I tried collecting erotic images with deep learning.
I used Chainer for the implementation.
It's a safe, secure, domestically made product.
The model looks like this:
import math

import chainer
import chainer.functions as F
import chainer.links as L


class NIN(chainer.Chain):

    def __init__(self, output_dim, gpu=-1):
        w = math.sqrt(2)  # He-style weight initialization scale
        super(NIN, self).__init__(
            mlpconv1=L.MLPConvolution2D(3, (16, 16, 16), 11, stride=4, wscale=w),
            mlpconv2=L.MLPConvolution2D(16, (32, 32, 32), 5, pad=2, wscale=w),
            mlpconv3=L.MLPConvolution2D(32, (64, 64, 64), 3, pad=1, wscale=w),
            mlpconv4=L.MLPConvolution2D(64, (32, 32, output_dim), 3, pad=1, wscale=w),
        )
        self.output_dim = output_dim
        self.train = True
        self.gpu = gpu

    def __call__(self, x, t):
        h = F.max_pooling_2d(F.relu(self.mlpconv1(x)), 3, stride=2)
        h = F.max_pooling_2d(F.relu(self.mlpconv2(h)), 3, stride=2)
        h = F.max_pooling_2d(F.relu(self.mlpconv3(h)), 3, stride=2)
        h = self.mlpconv4(F.dropout(h, train=self.train))
        h = F.average_pooling_2d(h, 10)
        h = F.reshape(h, (x.data.shape[0], self.output_dim))
        loss = F.softmax_cross_entropy(h, t)
        print(loss.data)
        return loss
As you can see, it's just a tweak of the NIN sample that ships with Chainer. And the only reason it's a mere tweak is that my development VPS had little memory, so I couldn't train anything more complicated. Even this model takes about 2 GB while training.
For training data, I used illustrations published on the Internet. Out of 2,000 images in total, the 1,000 my son reacted to were labeled "rare" and the 1,000 that left him unmoved were labeled "common": a two-class classification problem.
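In other words, the dataset is just two piles of images mapped to integer labels. A minimal sketch of how such a labeled set might be assembled, assuming hypothetical rare/ and common/ directories and an 80/20 train/validation split (neither detail is from the original setup):

```python
import os
import random

def build_dataset(root, split=0.8, seed=0):
    """Pair image paths with labels (1 = rare, 0 = common) and split them."""
    pairs = []
    for label_name, label in (("rare", 1), ("common", 0)):
        folder = os.path.join(root, label_name)
        for name in sorted(os.listdir(folder)):
            pairs.append((os.path.join(folder, name), label))
    random.Random(seed).shuffle(pairs)  # deterministic shuffle
    cut = int(len(pairs) * split)
    return pairs[:cut], pairs[cut:]     # (train, validation)
```

With the 2,000 images above this would yield 1,600 training and 400 validation pairs.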
The following three images were used to check the results.
The first is the cover of my own book, "Tanaka: The Wizard Whose Age Equals His Years Without a Girlfriend." The second is my wife, rendered in 3D Custom Girl. And the third is a shot from when I enjoyed groper play with her on the train.
I scored these three images with the model above and obtained the following results.
{
  "result": [
    [0.9290218353271484, "1,rare"],
    [0.07097823172807693, "0,common"]
  ]
}
{
  "result": [
    [0.6085503101348877, "0,common"],
    [0.3914496898651123, "1,rare"]
  ]
}
{
  "result": [
    [0.5935600399971008, "1,rare"],
    [0.40644001960754395, "0,common"]
  ]
}
As with the imagenet sample that comes with Chainer, the higher the value for a label, the closer the image is to that label. In other words, the higher the "rare" value, the happier my son will be.
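The per-label values are softmax probabilities over the model's two output logits, which is why each pair above sums to 1. A minimal sketch of that final step (the logit values below are made up for illustration):

```python
import math

def softmax(logits):
    """Convert raw scores into probabilities that sum to 1."""
    m = max(logits)                       # subtract max for numerical stability
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for (rare, common)
scores = softmax([2.1, -0.47])
```

The API then just pairs each probability with its label and sorts them in descending order.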
At this point, I felt I was onto something.
So, to collect erotic images in a more practical way, I turned the model above into an API, fed it images fetched by real-time crawling of Twitter, stored the results in a database, and built a web application that ranks them.
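The ranking step itself is simple: sort the stored images by their "rare" probability. A hypothetical sketch of that query-side logic (the (url, score) record shape is my assumption, not the actual schema):

```python
def rank_images(records, top_n=10):
    """Return the top-N records ordered by descending rare score.

    Each record is assumed to be a (image_url, rare_score) tuple.
    """
    return sorted(records, key=lambda r: r[1], reverse=True)[:top_n]

# Hypothetical scored crawl results
records = [("a.png", 0.93), ("b.png", 0.39), ("c.png", 0.59)]
top = rank_images(records, top_n=2)
```

In the actual app this sorting would happen in the database query rather than in Python, but the logic is the same.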
The configuration is as follows.
version: '2'

volumes:
  db-data:
    driver: local
  object-data:
    driver: local

services:
  db:
    container_name: db
    image: mysql
    volumes:
      - db-data:/var/lib/mysql
      # - ./dockerfiles/staging/mysql:/docker-entrypoint-initdb.d
    environment:
      MYSQL_ROOT_PASSWORD: $APP_DATABASE_PASSWORD
      MYSQL_DATABASE: app_staging
      MYSQL_USER: xxxx
      MYSQL_PASSWORD: $APP_DATABASE_PASSWORD
    expose:
      - "3306"
    restart: always
  app:
    build:
      context: ../../
      dockerfile: dockerfiles/staging/app/Dockerfile
    environment:
      RAILS_LOG_TO_STDOUT: 'true'
      RAILS_SERVE_STATIC_FILES: 'true'
      RAILS_ENV: 'staging'
      DISABLE_DATABASE_ENVIRONMENT_CHECK: $DISABLE_DATABASE_ENVIRONMENT_CHECK
      APP_SECRET_KEY_BASE: $APP_SECRET_KEY_BASE
      APP_DATABASE_USERNAME: xxxx
      APP_DATABASE_PASSWORD: $APP_DATABASE_PASSWORD
      APP_DATABASE_HOST: db
      TW_CONSUMER_KEY: $TW_CONSUMER_KEY
      TW_CONSUMER_SECRET: $TW_CONSUMER_SECRET
      TW_ACCESS_TOKEN: $TW_ACCESS_TOKEN
      TW_ACCESS_TOKEN_SECRET: $TW_ACCESS_TOKEN_SECRET
  minio:
    build:
      context: ../../
      dockerfile: dockerfiles/staging/minio/Dockerfile
    volumes:
      - object-data:/var/lib/minio
    environment:
      MINIO_ACCESS_KEY: 'xxx'
      MINIO_SECRET_KEY: 'xxx'
    ports:
      - '0.0.0.0:9000:9000'
    command: [server, /export]
  calc:
    container_name: calc
    build:
      context: ../../
      dockerfile: dockerfiles/staging/calc/Dockerfile
    command: python web.py
    expose:
      - "5000"
  web:
    container_name: web
    extends:
      service: app
    command: bin/wait_for_it db:3306 -- pumactl -F config/puma.rb start
    expose:
      - "3000"
    depends_on:
      - db
  crawler:
    container_name: crawler
    extends:
      service: app
    command: bin/wait_for_it db:3306 -- rails runner bin/crawler.rb
    depends_on:
      - calc
      - db
  nginx:
    container_name: nginx
    build:
      context: ../../
      dockerfile: dockerfiles/staging/nginx/Dockerfile
    ports:
      - '80:80'
      - '443:443'
    depends_on:
      - web
I let this run against Twitter for a while to collect side dishes.
The results are as follows.
These are excerpts from the ranking after about 50 to 60 images had been collected. The value shown to the right of each image is its "rare" score.
I'm surprised we got this far just by handing the model illustrations split into two groups, with no other work. I strongly suspect that labeling them a bit more strictly would make it possible to collect images my son truly likes. Combining multiple models would likely improve accuracy further.
Perhaps the day when everyone has an erotic-image sommelier optimized just for them is right around the corner.