I happened to get hold of a Fathom Neural Compute Stick [^4], so I tried running through its tutorial.
- It works just by plugging it into a PC or other host over USB [^3].
- It seems to be a device that runs inference at high speed [^1] (it cannot be used for training).
- Models are written in TensorFlow (a minimal sketch of the inference-only workflow follows this list).
- Lower power consumption than a GPU.
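Since the stick only accelerates inference for models written in TensorFlow, here is a minimal sketch of what an inference-only TensorFlow graph looks like. This is a generic, hypothetical illustration in the TensorFlow 1.x style of the time, not Fathom-specific code, and it does not talk to the stick itself; the tensor names and shapes are made up.

```python
import numpy as np
import tensorflow as tf

# Define a tiny forward-only graph: input placeholder -> fixed "trained" weights -> softmax.
x = tf.placeholder(tf.float32, shape=[None, 4], name="input")
w = tf.constant(np.random.rand(4, 2).astype(np.float32))  # stand-in for trained weights
probs = tf.nn.softmax(tf.matmul(x, w), name="output")

with tf.Session() as sess:
    # Forward pass only: no optimizer and no gradients, matching "inference only".
    print(sess.run(probs, feed_dict={x: np.random.rand(3, 4).astype(np.float32)}))
```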
The tutorial may not work if you follow it as written, so a few fixes are needed.
The Fathom tutorial includes code for trying seq2seq [^2], but because that code has to place its data in a location that requires root privileges, you need to log in and work as the root user instead of the fathom user described in the tutorial steps.
docker pull rdadolf/fathom
docker run -it --user root rdadolf/fathom
Since the file name of the downloaded data has changed, modify fathom/seq2seq/data_utils.py as follows.
- train_path = os.path.join(directory, "giga-fren.release2")
+ train_path = os.path.join(directory, "giga-fren.release2.fixed")
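For context, the renamed WMT'15 corpus extracts to files with the giga-fren.release2.fixed prefix, so the old prefix never matches anything on disk. The sketch below is a hypothetical illustration of what the patched path is used for; the function name and error handling are mine, not the actual contents of data_utils.py.

```python
import os

def wmt_train_prefix(directory):
    """Return the path prefix of the extracted WMT'15 en-fr training files."""
    # The extracted files are named giga-fren.release2.fixed.en / .fr, hence ".fixed".
    train_path = os.path.join(directory, "giga-fren.release2.fixed")
    for suffix in (".en", ".fr"):
        if not os.path.exists(train_path + suffix):
            raise IOError("missing %s -- run the download step first" % (train_path + suffix))
    return train_path

# Example: wmt_train_prefix("/data/WMT15")
```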
Run the Seq2Seq sample with the following commands. The first run takes about an hour to download the WMT'15 data that Seq2Seq needs, so be patient. (There is no progress bar, but it is working; a small helper for checking progress follows the commands below.)
cd fathom/
mkdir -p /data/WMT15
python fathom/seq2seq/seq2seq.py
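Because the download shows no progress bar, the optional helper below can be run in a second shell inside the container to watch the data under /data/WMT15 grow. It is not part of the Fathom tutorial; the directory and the one-minute interval are just assumptions for illustration.

```python
import os
import time

DATA_DIR = "/data/WMT15"  # same directory as the mkdir -p step above

def total_bytes(directory):
    """Sum the sizes of all files under directory."""
    total = 0
    for root, _, files in os.walk(directory):
        for name in files:
            total += os.path.getsize(os.path.join(root, name))
    return total

# Print the downloaded size once a minute; stop with Ctrl-C.
while True:
    print("%.1f MB downloaded so far" % (total_bytes(DATA_DIR) / 1e6))
    time.sleep(60)
```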
- It is used to speed up image recognition for drones, Raspberry Pi boards, surveillance cameras, and the like.
- Since it has about 50 GB of free space, it can also be used as a USB memory stick.
- If anything is unclear, you can ask in the forum [^5].
References