Paper: A machine learning paper that reconstructs images seen in the brain (Deep image reconstruction from human brain activity)

Original paper

https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1006633

What kind of paper is it? How would you summarize it in your own words?

→ A paper that visualizes the image you are looking at from your brain activity. When I saw the results, I found them unexpectedly blurry. (Of course, the accuracy still seems to be low.)

In detail

First, fMRI responses are measured while the subject views an image. From those measurements, a decoder is trained that maps fMRI activity to image features. In other words, where a CNN normally computes image → features, here the mapping is fMRI → features, so it is unavoidable that the accuracy is lower than the direct image → features path. In fact, I think the ambiguity introduced at this step could even be useful for creativity research.

The decoding yields the feature values of the image. These appear to be CNN-style features obtained at multiple layers: the shallow layers capture concrete shapes, while the deep layers retain more abstract structure. These multi-layer image features are then passed through a second network that combines them, a deep generator network (DGN), which runs in the opposite direction of a CNN and generates an image from features.

As a result, geometric structures such as alphabet letters were reconstructed well. However, going from a complex imagined (rather than viewed) scene to an image turned out to be extremely difficult. The cause may lie in the network, but more likely human imagination simply cannot draw in that much detail.

Thoughts / interests

I want to draw what is inside a professional shogi player's head. I wonder how detailed an image could be produced if we tried it; it may well differ from person to person. I would also like to see inside Hayao Miyazaki's head. The fMRI → image-features conversion network involves quite a logical leap, and I suspected the accuracy would be low at that point. Of course, the accuracy of the DGN, which goes from image features to a generated image, is probably also low. It seems to be common understanding that accuracy drops considerably after passing through two low-accuracy filters. Even so, I thought it was amazing that images of alphabet letters could be generated at all.
