[PYTHON] Synchronizing to a server behind ssh port forwarding on localhost fails [Resolved]

Overview

What I want to do

- I want to transfer the source code of myapp, checked out locally under ~/ansible/work/, to /var/ma2saka/myapp on the server.
- The server is a local CentOS 6.5 box set up with Vagrant.
- The script was originally written for, and working against, AWS servers.

Solution first

Even if the port numbers differ, things go wrong when the host name part is the same.

Instead of writing the IP directly in the inventory file, specify something like host1 ansible_ssh_host=127.0.0.1 so that the host names do not collide.

Pattern that failed

Definition I wrote

main.yml


- name: sync source code
  synchronize: >
    dest=/var/ma2saka/myapp
    src=/Users/ma2saka/ansible/work/myapp/
    recursive=yes
    links=yes
    rsync_opts="--exclude='.git'"
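For reference, the same task in the newer YAML dictionary style would look roughly like this (a sketch on my part, not from the original; recent Ansible versions expect rsync_opts as a list):

- name: sync source code
  synchronize:
    src: /Users/ma2saka/ansible/work/myapp/
    dest: /var/ma2saka/myapp
    recursive: yes
    links: yes
    rsync_opts:
      - "--exclude=.git"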

In inventory

inventory.ini


[dev]
127.0.0.1

[dev:vars]
ansible_ssh_user=vagrant
ansible_ssh_port=2222

The server set up with Vagrant is listening on local port 2222.

The public key is registered in advance as follows.

ssh-add -D
ssh-add ~/.ssh/id_rsa
ssh-copy-id -p 2222 [email protected]
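As a sanity check (my own addition, not in the original post), the connection itself can be confirmed by hand before running the playbook:

# should log in without a password prompt if the key was copied correctly
ssh -p 2222 [email protected] hostname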

I get an error

TASK: [deploy | sync source code] *********************************************
failed: [127.0.0.1 -> 127.0.0.1] => {"cmd": "rsync --delay-updates -FF --compress --archive --rsh 'ssh  -S none -o StrictHostKeyChecking=no -o Port=2222' --exclude='.git' --out-format='<<CHANGED>>%i %n%L' \"/Users/ma2saka/ansible/work/myapp/\" \"/var/ma2saka/myapp\"", "failed": true, "rc": 23}
msg: rsync: change_dir "/Users/ma2saka/ansible/work/myapp/" failed: No such file or directory (2)
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1039) [sender=3.0.6]


FATAL: all hosts have already failed -- aborting

But wait: it says msg: rsync: change_dir "/Users/ma2saka/ansible/work/myapp/" failed: No such file or directory (2), yet that directory clearly exists, no matter how I look at it.

I had installed Ansible 1.8.2 with brew. Suspecting that was the problem, I reinstalled it with pip, but no luck.

Nothing changes with 1.9.1 either.

It works fine against Amazon and other servers

inventory.ini


[dev]
myapp1.amazon.example.com
myapp2.amazon.example.com
myapp3.amazon.example.com

[dev:vars]
ansible_ssh_user=ec2-user
ansible_ssh_port=22

It works as expected without any problems.

It worked when I gave 127.0.0.1 an alias in /etc/hosts

127.0.0.1	localhost
255.255.255.255	broadcasthost
::1             localhost
127.0.0.1 this.is.it
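A quick way to confirm the new alias actually resolves (my own addition, not from the original post):

# should report replies from 127.0.0.1
ping -c 1 this.is.it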

inventory.ini


[dev]
this.is.it

[dev:vars]
ansible_ssh_user=vagrant
ansible_ssh_port=2222

Running it with this gives:

..(output omitted)..

TASK: [deploy | sync source code] *********************************************
changed: [this.is.it -> 127.0.0.1]

..(output omitted)..

It works... This behaves as expected, so I could leave it at that, but something still feels unsettling.

If the host name is localhost, it will not connect in the first place

If the IP itself can't be used, what about localhost instead?

TASK: [deploy | sync source code] *********************************************
failed: [localhost -> 127.0.0.1] => {"cmd": "rsync --delay-updates -FF --compress --archive --rsh 'ssh  -S none -o StrictHostKeyChecking=no -o Port=2222' --exclude='.git' --out-format='<<CHANGED>>%i %n%L' \"/Users/ma2saka/ansible/work/myapp/\" \"/var/ma2saka/myapp\"", "failed": true, "rc": 12}
msg: ssh: connect to host localhost port 2222: Connection refused
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: error in rsync protocol data stream (code 12) at io.c(600) [sender=3.0.6]

Well.

So, belatedly, I realize that the "name" of the host is the actual problem?

Resolved by setting ansible_ssh_host

This was exactly the phenomenon I had been running into.

When connecting with Ansible to a host that is port-forwarded on localhost, it seems best to give it a distinct alias in the inventory file.

[dev]
# no need to resort to lvh.me or this.is.it here
host1 ansible_ssh_host=127.0.0.1

[dev:vars]
ansible_ssh_user=vagrant
ansible_ssh_port=2222
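Side note (my addition): in Ansible 2.0 and later the ssh_-prefixed inventory variables were renamed, so as far as I know the same inventory would be written like this:

[dev]
host1 ansible_host=127.0.0.1

[dev:vars]
ansible_user=vagrant
ansible_port=2222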

I also read the code of the synchronize module and the script that gets transferred at runtime, so I now understand the behavior, but this is a trap. Above all, the error message is far too unfriendly. Complaints aside, it feels good to have it resolved.

Reflection

If you look closely at the -vvvv output, you can see where the script is actually running. In the failing case with 127.0.0.1, the script was being executed on the remote server, so the directory could not be found: the path written in the src specification only exists on the local machine.
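A hedged example of what that run looks like (site.yml is just a placeholder name for the playbook here):

# the "[host -> host]" part of each task line shows where the action actually runs
ansible-playbook -i inventory.ini site.yml -vvvv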

Once I noticed that, I could have guessed right away that this was not a bug in a specific module but a matter of Ansible's host name resolution logic; instead I spent time tracing the wrong parts. This kind of thing is genuinely hard.
