- I want to transfer the source code of myapp, checked out locally under ~/ansible/work/, to /var/ma2saka/myapp on the server.
- The server is a local Vagrant box running CentOS 6.5.
- The script was originally written for, and working against, AWS servers.
Even if the port number differs, things misbehave when the host name part is the same. Instead of writing the IP directly in the inventory file, specify something like host1 ansible_ssh_host=127.0.0.1 so that the host name part does not collide.
main.yml
- name: sync source code
  synchronize: >
    dest=/var/ma2saka/myapp
    src=/Users/ma2saka/ansible/work/myapp/
    recursive=yes
    links=yes
    rsync_opts="--exclude='.git'"
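For reference, on newer Ansible the same task can also be written in the YAML dict form; this is only a sketch of the equivalent, the key=value style above is what was actually used here.
- name: sync source code
  synchronize:
    src: /Users/ma2saka/ansible/work/myapp/
    dest: /var/ma2saka/myapp
    recursive: yes
    links: yes
    rsync_opts:
      - "--exclude=.git"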
inventory.ini
[dev]
127.0.0.1
[dev:vars]
ansible_ssh_user=vagrant
ansible_ssh_port=2222
A server set up with Vagrant is listening on local port 2222.
The public key has been registered in advance as follows.
ssh-add -D
ssh-add ~/.ssh/id_rsa
ssh-copy-id -p 2222 [email protected]
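Then I run the playbook against this inventory with something like the following (the top-level playbook name site.yml is my assumption; only the role and task name appear in the output below):
ansible-playbook -i inventory.ini site.yml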
TASK: [deploy | sync source code] *********************************************
failed: [127.0.0.1 -> 127.0.0.1] => {"cmd": "rsync --delay-updates -FF --compress --archive --rsh 'ssh -S none -o StrictHostKeyChecking=no -o Port=2222' --exclude='.git' --out-format='<<CHANGED>>%i %n%L' \"/Users/ma2saka/ansible/work/myapp/\" \"/var/ma2saka/myapp\"", "failed": true, "rc": 23}
msg: rsync: change_dir "/Users/ma2saka/ansible/work/myapp/" failed: No such file or directory (2)
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1039) [sender=3.0.6]
FATAL: all hosts have already failed -- aborting
No, even though it says msg: rsync: change_dir "/Users/ma2saka/ansible/work/myapp/" failed: No such file or directory (2), that directory certainly exists. No matter how I look at it.
The behavior is no different with 1.9.1 either.
With the inventory pointing at the AWS servers, on the other hand:
inventory.ini
[dev]
myapp1.amazon.example.com
myapp2.amazon.example.com
myapp3.amazon.example.com
[dev:vars]
ansible_ssh_user=ec2-user
ansible_ssh_port=22
It works as expected without any problems.
So I try giving 127.0.0.1 a name by adding an entry to the local /etc/hosts:
127.0.0.1 localhost
255.255.255.255 broadcasthost
::1 localhost
127.0.0.1 this.is.it
inventory.ini
[dev]
this.is.it
[dev:vars]
ansible_ssh_user=vagrant
ansible_ssh_port=2222
Running it with this:
...(snip)...
TASK: [deploy | sync source code] *********************************************
changed: [this.is.it -> 127.0.0.1]
...(snip)...
It works... It behaves as expected, so this would do, but something still feels off.
If the IP can't be specified directly, then what about localhost?
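The inventory for this attempt is not shown above, but presumably it was something like this reconstruction:
[dev]
localhost

[dev:vars]
ansible_ssh_user=vagrant
ansible_ssh_port=2222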
TASK: [deploy | sync source code] *********************************************
failed: [localhost -> 127.0.0.1] => {"cmd": "rsync --delay-updates -FF --compress --archive --rsh 'ssh -S none -o StrictHostKeyChecking=no -o Port=2222' --exclude='.git' --out-format='<<CHANGED>>%i %n%L' \"/Users/ma2saka/ansible/work/myapp/\" \"/var/ma2saka/myapp\"", "failed": true, "rc": 12}
msg: ssh: connect to host localhost port 2222: Connection refused
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: error in rsync protocol data stream (code 12) at io.c(600) [sender=3.0.6]
Well.
So, belatedly, I realize that the "name" of the host is the problem. This is exactly the phenomenon I was running into.
When connecting with Ansible to a host that is port-forwarded on localhost, it seems best to give it a suitable alias in the inventory file.
[dev]
# no need for names like lvh.me or this.is.it
host1 ansible_ssh_host=127.0.0.1
[dev:vars]
ansible_ssh_user=vagrant
ansible_ssh_port=2222
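Incidentally, on Ansible 2.0 and later the shorter variable names ansible_host, ansible_user and ansible_port are preferred over the ansible_ssh_* forms; the equivalent inventory would look like this:
[dev]
host1 ansible_host=127.0.0.1

[dev:vars]
ansible_user=vagrant
ansible_port=2222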
I also read the code of the synchronize module and the script it transfers at runtime, so I now understand the behavior, but this is a trap. Above all, the error message is far too unfriendly. Complaints aside, it feels good to have this resolved.
If you look closely at the -vvvv output, you can see where the script is actually running. When I wrote 127.0.0.1 and it failed, the script was actually running on the remote server, so the directory could not be found: the source path refers to a directory that only exists on the local machine.
Once I noticed that, I could guess right away that this was not a bug in a specific module but rather Ansible's host name handling, but until then I had been spending time tracing the wrong part. It's quite tricky.