File transfer over SSH onto linux server from hard disk on remote MAC

OK, you've been able to start the transfer over SSH, but it times out. Try without the "z" as well - I've had issues with timeouts when transferring a large number of files. If that doesn't work, you might need to edit the /etc/ssh/sshd.conf file on the server (I think that's the location/file), change the timeout to something larger, and restart sshd.


I tried without the "z" and it gave the same error. I also changed the timeout from "LoginGraceTime 600" to "LoginGraceTime 12000", but to no avail. Btw, the file location on my server
is /etc/ssh/sshd_config
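For what it's worth, LoginGraceTime only covers the login handshake itself, so it usually won't help with a transfer that dies mid-stream. A common client-side fix (a sketch - "server" is a placeholder for your actual host) is to enable SSH keepalives in ~/.ssh/config on the Mac:

```
Host server
    ServerAliveInterval 60
    ServerAliveCountMax 10
```

This makes the client send a keepalive probe every 60 seconds and give up only after 10 unanswered probes, which keeps idle-timeout firewalls from dropping long transfers.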
 


If rsync is failing, try scp and see if it is working any better:
Code:
scp -r ~/source sonal@server:~/
 
@Sonal try either rsync or scp with a smaller test file..

Code:
touch blah.txt
scp blah.txt sonal@server:~/

Code:
touch blah2.txt
rsync -avz blah2.txt sonal@server:~/

Then, ssh into your server and you should see blah.txt and blah2.txt there in your home directory. Let's establish that this works for both scp and rsync.

(Note: touch creates an empty file)
 
rsync -Pa -e "ssh -p 22" /Users/sonalharsh/Desktop/Test_Folder user@Host:~/

This command saves the folder in the home directory, but I want to store it in a destination folder, which gives the error I showed earlier.
 

Ok, we're getting somewhere..

Make sure you type the destination folder exactly as it is on the server - it's case sensitive. Also, since you're using port 22 (the default), you don't need the "ssh -p 22" part at all. And be sure to end the destination folder with a slash (/) so the files go inside it.

Maybe try with the blah.txt example into your destination folder to make sure it works before going for the large file(s).
 

I tried it and it works - the files are saved in the user's home directory.

But even with the smaller text file, I am not able to save it in my destination folder.
 
@Rob I guess the error was because of the file name, as you suggested earlier. I changed the name format instead of copying the destination path, and it seems to work. Though it shows "file vanished", I suppose that's OK - it will still copy those files (correct me if I am wrong).

Since the folder is huge (1.96 TB), I am waiting for it to finish. Does it show when the transfer is complete? And as I understand it, rsync compares and copies - so there won't be any duplicates or missing files?
 
If you're using rsync, it won't create any duplicates at all - even if you kill it and start over.

If you like, open another terminal and ssh into your server, then check the file size of your destination folder like:

Code:
du -s foldername

then count to 10 and do it again. The size should have increased.
 
With the -P switch, you should see progress and an ETA for individual files. Once complete, rsync exits and returns you to a prompt. In its default mode, rsync works by comparing timestamps and sizes: if the source file is newer, it transfers the modified portions. And because -P implies --partial, if the transfer stops halfway through a file, restarting it will skip ahead and resume where it left off.

If you're having disconnect issues and want to run it until it finishes, you can do
Code:
until rsync -Pa <rest of the command here>; do echo "waiting 1 second..."; sleep 1; done
(NOTE: until is a standard shell loop construct, so this works on the Mac as well. Literally just learned about it 2 minutes ago)
 
@Rob @Steve : Thank you so much. It works.

Also, we use Samba to mount the server and then transfer files using MATLAB + Dropbox or a file relay, which is very slow, as Samba is slow. Is there something like WinSCP for Mac so that we can transfer files continuously when we get data from the field on a regular basis? Or could we use an SQL database for that?
 
I think what you're asking for is a way to transfer files from one system to another automatically. What you could do is create a script something like
/home/sonal/rsync.script.sh:
Code:
#! /bin/bash
if [ -e /home/sonal/rsync.in.progress ]
   then exit
fi
touch /home/sonal/rsync.in.progress
rsync -Pa --delete <rest of command here>
rm /home/sonal/rsync.in.progress
make it executable:
Code:
chmod 744 /home/sonal/rsync.script.sh
then
Code:
crontab -e
and create a cronjob to run every minute:
Code:
* * * * * /home/sonal/rsync.script.sh > /dev/null

This will execute the rsync script every minute of every hour of every day (see https://ole.michelsen.dk/blog/schedule-jobs-with-crontab-on-mac-osx.html for a description of the crontab format), first checking whether a job is already in progress. Note that this will also delete files from your destination that are NOT on your source, so if you don't want that behavior, omit the
Code:
--delete
switch.

This is a relatively simple script and I've done no testing on it, but this should get you a step in the direction you want to go.
 
I am going to try this. Currently, from a Windows/Mac system to the server, we use a MATLAB script over SFTP to transfer files, so I want to implement a better way to get files onto the server.
 
The data upload is still going on - does it take this long?
 
To be fair, as you said starting out:
I have 1.96 TB sized folder on a hard disk and I want to upload it
Running some numbers, 1.96 TB at 10 Mbps (1.25 MB/s), you're looking at about 19 days. 1.96 TB even at 100Mbps (12.5 MB/s) is still 1 day, 21 hours. Even at full gigabit link (1000Mbps/125 MB/s), you're looking at a solid 4 hours. If you have physical access, sneaker-net (ie, dumping to a hard drive and physically carrying it over to the other system) could very well be the fastest way to get a large amount of data from point A to point B.
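The arithmetic can be sketched like this (decimal TB, raw link speed, 8 bits per byte, no protocol overhead - so the results come out slightly below the ballpark figures above):

```shell
# Transfer time for 1.96 TB at 10 / 100 / 1000 Mbps
awk 'BEGIN {
    tb = 1.96e12                      # folder size in bytes
    for (i = 0; i < 3; i++) {
        mbps = 10 * 10^i              # link speed in megabits/s
        secs = tb / (mbps * 1e6 / 8)  # bytes / bytes-per-second
        printf "%d Mbps: %.1f days\n", mbps, secs / 86400
    }
}'
```

This prints roughly 18.1 days at 10 Mbps, 1.8 days at 100 Mbps, and 0.2 days (about 4.4 hours) at gigabit.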
 
Yeah, it took more than 3 days.
 