Thursday, 15 March 2012

postgresql - pg_dump through ssh stops after some seconds when used in a script



I back up 3 PostgreSQL servers with pg_dump, launched from a script through ssh. The command line in the script is:

sudo -u barman ssh postgres@$server 'pg_dump -Fc -b $database 2> ~/dump_error.txt' | gzip > $dump_root/$server-$backupdate.gz

But the dump is only about 1 kB in size, on all the servers. When I execute the line in a shell, replacing the variables with their values, it works. Whether executed as root (sudo -u barman ssh postgres@server ...) or directly as the user barman (ssh postgres@server ...), the dump is correct.

When I open the dump, I can see the start of the dump, and then it stops.

The dump_error.txt files on the servers are empty.

There is nothing in the logs (postgres log and syslog), either on the backup server or on the PostgreSQL servers.

The user barman can connect to the servers as the user postgres without a password.

The shell limits are high enough not to block the script (open files 1024, file size unlimited, max user processes 13098).

I tried changing the cron hour of the script, thinking another process might be consuming resources, but I got the same result, and ps -e shows nothing special.

The PostgreSQL version is 9.1.

Why does this line never produce a complete dump when executed from the script, while it works when executed in a shell?

Thanks for your help, Denis
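(Editor's note, not part of the original question: a quick way to compare what the script actually runs with what you type interactively is to let bash trace each command after variable expansion. A minimal sketch, assuming the backup script is a plain bash script; the file name backup_dumps.sh is hypothetical:)

# Run the script in trace mode so every command is printed after expansion:
bash -x backup_dumps.sh
# or add this near the top of the script itself:
set -x   # print each command, with variables expanded, before executing it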

Your problem is related to bad quoting. Single quotes prevent the string from being expanded, while double quotes expand what is inside. For instance:

> myvariable=test
> echo '$myvariable'
$myvariable
> echo "$myvariable"
test

In your case, ssh postgres@$server 'pg_dump -Fc -b $database 2> ~/dump_error.txt' executes the command on the remote computer without expanding the variables. This means ssh passes the expression pg_dump -Fc -b $database as-is, and bash interprets the variable $database on the remote computer. If that variable doesn't exist there, it is treated as an empty string.

You can see the difference by comparing ssh user@server 'echo $PWD' and ssh user@server "echo $PWD".
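Applied to the line from the question, a minimal sketch of a corrected command, assuming $server, $database, $dump_root and $backupdate are defined on the backup machine as in the original script, is simply to switch to double quotes so the variables are expanded locally before ssh sends the command:

# Double quotes: $server and $database are expanded on the backup machine;
# the quoted string (including the 2> redirection) still runs on the remote server.
sudo -u barman ssh postgres@$server "pg_dump -Fc -b $database 2> ~/dump_error.txt" | gzip > $dump_root/$server-$backupdate.gz

Note that everything inside the double quotes, including the 2> ~/dump_error.txt redirection, is still executed by the remote shell, so the error file remains on the PostgreSQL server as before.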

postgresql shell pg-dump
