
Backup VPS Using Rsync and SSH


Using Rsync and SSH
Keys, Validating, and Automation
This document covers using cron, ssh, and rsync to back up files over a local network or the Internet. Part of my goal is to ensure that no user intervention is required when the computer is restarted (no passwords, keys, or key managers to unlock).
I like to back up some logging, mail, and configuration information on hosts across the network and the Internet, and here is a way I have found to do it. You'll need these packages installed (a quick install sketch follows the list):
  • rsync
  • openssh
  • cron (or vixie-cron)
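On the Red Hat-era systems mentioned below, installing them might look something like this (a sketch; the exact package names and install tool vary by release):

# yum on Fedora Core; older Red Hat releases used up2date or rpm instead
$ su -c 'yum install rsync openssh-server openssh-clients vixie-cron'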
Please note these instructions may be specific to Red Hat Linux versions 7.3 and 9, and Fedora Core 3, but I hope they won't be too hard to adapt to almost any *NIX-type OS. The man pages for 'ssh' and 'rsync' should be helpful if you need to change some things (use the "man ssh" and "man rsync" commands).
First, I'll define some variables. In my explanation, I will be synchronizing files (copying only new or changed files) one way, and I will be starting this process from the host I want to copy things to. In other words, I will be syncing files from /remote/dir/ on remotehost, as remoteuser, to /this/dir/ on thishost, as thisuser.
I want to make sure that 'rsync' over 'ssh' works at all before I begin to automate the process, so I test it first as thisuser:
$ rsync -avz -e ssh remoteuser@remotehost:/remote/dir /this/dir/ 
and type in remoteuser@remotehost's password when prompted. I do need to make sure that remoteuser has read permissions to /remote/dir/ on remotehost, and that thisuser has write permissions to /this/dir/ on thishost. Also, 'rsync' and 'ssh' should be in thisuser's path (use "which ssh" and "which rsync"), 'rsync' should be in remoteuser's path, and 'sshd' should be running on remotehost.
Configuring thishost

If that all worked out, or I eventually made it work, I am ready for the next step. I need to generate a private/public pair of keys to allow an 'ssh' connection without asking for a password. This may sound dangerous, and it is, but it is better than storing a user password (or key password) as clear text in the script [0]. I can also put limitations on where connections made with this key can come from, and on what they can do when connected. Anyway, I generate the key I will use on thishost (as thisuser):
$ ssh-keygen -t rsa -b 2048 -f /home/thisuser/cron/thishost-rsync-key
Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase): [press enter here]
Enter same passphrase again: [press enter here]
Your identification has been saved in /home/thisuser/cron/thishost-rsync-key.
Your public key has been saved in /home/thisuser/cron/thishost-rsync-key.pub.
The key fingerprint is:
2e:28:d9:ec:85:21:e7:ff:73:df:2e:07:78:f0:d0:a0 thisuser@thishost 
and now we have a key with no password in the two files mentioned above [1]. Make sure that no unauthorized user can read the private key file (the one without the '.pub' extension).
This key serves no purpose until we put the public portion into the 'authorized_keys' file [2] on remotehost, specifically the one for remoteuser:
/home/remoteuser/.ssh/authorized_keys 
I use scp to get the file over to remotehost:
$ scp /home/thisuser/cron/thishost-rsync-key.pub remoteuser@remotehost:/home/remoteuser/ 
and then I can prepare things on remotehost.
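As an aside, newer OpenSSH installs ship an 'ssh-copy-id' helper that copies and appends the public key in one step (a convenience sketch, not part of the original walkthrough):

$ ssh-copy-id -i /home/thisuser/cron/thishost-rsync-key.pub remoteuser@remotehost

The manual steps below show what it does under the hood.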
Configuring remotehost

I 'ssh' over to remotehost:
$ ssh remoteuser@remotehost
remoteuser@remotehost's password: [type correct password here]
$ echo I am now $USER at $HOSTNAME
I am now remoteuser at remotehost
to do some work.
I need to make sure I have the directory and files I need to authorize connections with this key [3]:
$ if [ ! -d .ssh ]; then mkdir .ssh ; chmod 700 .ssh ; fi
$ mv thishost-rsync-key.pub .ssh/
$ cd .ssh/
$ if [ ! -f authorized_keys ]; then touch authorized_keys ; chmod 600 authorized_keys ; fi
$ cat thishost-rsync-key.pub >> authorized_keys 
Now the key can be used to make connections to this host, but these connections can be from anywhere (that the ssh daemon on remotehost allows connections from) and they can do anything (that remoteuser can do), and I don't want that. I edit the 'authorized_keys' file (with vi) and modify the line with 'thishost-rsync-key.pub' information on it. I will only be adding a few things in front of what is already there, changing the line from this:
ssh-dss AAAAB3NzaC1kc3MAAAEBAKYJenaYvMG3nHwWxKwlWLjHb77CT2hXwmC8Ap+fG8wjlaY/9t4u
A+2qx9JNorgdrWKhHSKHokFFlWRj+qk3q+lGHS+hsXuvta44W0yD0y0sW62wrEVegz+JVmntxeYc0nDz
5tVGfZe6ydlgomzj1bhfdpYe+BAwop8L+EMqKLS4iSacNjoPlHsmqHMnbibn3tBqJEq2QJjEPaiYj1iP
5IaCuYBhuTKQGa+oyH3mXEif5CKdsIKBj46B0tCy0/GC7oWcUN92QdLrUyTeRJZsTWsxKpRbMliD2pBh
4oyX/aXEf8+HZBrO5vQjDBCfTFQA+35Xrd3eTVEjkGkncI0SAeUAAAAVAMZSASmQ9Pi38mdm6oiVXD55
Kk2rAAABAE/bA402VuCsOLg9YS0NKxugT+o4UuIjyl6b2/cMmBVWO39lWAjcsKK/zEdJbrOdt/sKsxIK
1/ZIvtl92DLlMhci5c4tBjCODey4yjLhApjWgvX9D5OPp89qhah4zu509uNX7uH58Zw/+m6ZOLHN28mV
5KLUl7FTL2KZ583KrcWkUA0Id4ptUa9CAkcqn/gWkHMptgVwaZKlqZ+QtEa0V2IwUDWS097p3SlLvozw
46+ucWxwTJttCHLzUmNN7w1cIv0w/OHh5IGh+wWjV9pbO0VT3/r2jxkzqksKOYAb5CYzSNRyEwp+NIKr
Y+aJz7myu4Unn9de4cYsuXoAB6FQ5I8AAAEBAJSmDndXJCm7G66qdu3ElsLT0Jlz/es9F27r+xrg5pZ5
GjfBCRvHNo2DF4YW9MKdUQiv+ILMY8OISduTeu32nyA7dwx7z5M8b+DtasRAa1U03EfpvRQps6ovu79m
bt1OE8LS9ql8trx8qyIpYmJxmzIdBQ+kzkY+9ZlaXsaU0Ssuda7xPrX4405CbnKcpvM6q6okMP86Ejjn
75Cfzhv65hJkCjbiF7FZxosCRIuYbhEEKu2Z9Dgh+ZbsZ+9FETZVzKBs4fySA6dIw6zmGINd+KY6umMW
yJNej2Sia70fu3XLHj2yBgN5cy8arlZ80q1Mcy763RjYGkR/FkLJ611HWIA= thisuser@thishost
to this [4]:
from="10.1.1.1",command="/home/remoteuser/cron/validate-rsync" ssh-dss AAAAB3Nza
C1kc3MAAAEBAKYJenaYvMG3nHwWxKwlWLjHb77CT2hXwmC8Ap+fG8wjlaY/9t4uA+2qx9JNorgdrWKhH
SKHokFFlWRj+qk3q+lGHS+hsXuvta44W0yD0y0sW62wrEVegz+JVmntxeYc0nDz5tVGfZe6ydlgomzj1
bhfdpYe+BAwop8L+EMqKLS4iSacNjoPlHsmqHMnbibn3tBqJEq2QJjEPaiYj1iP5IaCuYBhuTKQGa+oy
H3mXEif5CKdsIKBj46B0tCy0/GC7oWcUN92QdLrUyTeRJZsTWsxKpRbMliD2pBh4oyX/aXEf8+HZBrO5
vQjDBCfTFQA+35Xrd3eTVEjkGkncI0SAeUAAAAVAMZSASmQ9Pi38mdm6oiVXD55Kk2rAAABAE/bA402V
uCsOLg9YS0NKxugT+o4UuIjyl6b2/cMmBVWO39lWAjcsKK/zEdJbrOdt/sKsxIK1/ZIvtl92DLlMhci5
c4tBjCODey4yjLhApjWgvX9D5OPp89qhah4zu509uNX7uH58Zw/+m6ZOLHN28mV5KLUl7FTL2KZ583Kr
cWkUA0Id4ptUa9CAkcqn/gWkHMptgVwaZKlqZ+QtEa0V2IwUDWS097p3SlLvozw46+ucWxwTJttCHLzU
mNN7w1cIv0w/OHh5IGh+wWjV9pbO0VT3/r2jxkzqksKOYAb5CYzSNRyEwp+NIKrY+aJz7myu4Unn9de4
cYsuXoAB6FQ5I8AAAEBAJSmDndXJCm7G66qdu3ElsLT0Jlz/es9F27r+xrg5pZ5GjfBCRvHNo2DF4YW9
MKdUQiv+ILMY8OISduTeu32nyA7dwx7z5M8b+DtasRAa1U03EfpvRQps6ovu79mbt1OE8LS9ql8trx8q
yIpYmJxmzIdBQ+kzkY+9ZlaXsaU0Ssuda7xPrX4405CbnKcpvM6q6okMP86Ejjn75Cfzhv65hJkCjbiF
7FZxosCRIuYbhEEKu2Z9Dgh+ZbsZ+9FETZVzKBs4fySA6dIw6zmGINd+KY6umMWyJNej2Sia70fu3XLH
j2yBgN5cy8arlZ80q1Mcy763RjYGkR/FkLJ611HWIA= thisuser@thishost
where "10.1.1.1" is the IP (version 4 [5]) address of thishost, and "/home/remoteuser/cron/validate-rsync" (which is just one of a few options [6], including customization [7] to enhance security) is a script that looks something like this :
#!/bin/sh
# validate-rsync: forced command for the rsync-only key.
# $SSH_ORIGINAL_COMMAND holds whatever the client asked to run;
# refuse anything containing shell metacharacters, and anything
# that is not an rsync server invocation.

case "$SSH_ORIGINAL_COMMAND" in
*\&*)
echo "Rejected"
;;
*\(*)
echo "Rejected"
;;
*\{*)
echo "Rejected"
;;
*\;*)
echo "Rejected"
;;
*\<*)
echo "Rejected"
;;
*\`*)
echo "Rejected"
;;
*\|*)
echo "Rejected"
;;
rsync\ --server*)
# Looks like an rsync-over-ssh server request, so run it.
$SSH_ORIGINAL_COMMAND
;;
*)
echo "Rejected"
;;
esac 
If thishost has a variable address, or shares its address (via NAT or something similar) with hosts you do not trust, omit the 'from="10.1.1.1",' part of the line (including the comma), but leave the 'command' portion. This way, only 'rsync' will be possible from connections using this key. Make certain that the 'validate-rsync' script is executable by remoteuser on remotehost and test it.
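A quick sanity check of the forced command, run from thishost (any non-rsync request should come back 'Rejected'):

$ ssh -i /home/thisuser/cron/thishost-rsync-key remoteuser@remotehost "ls /etc"
Rejected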
PLEASE NOTE: The private key, though now somewhat limited in what it can do (and hopefully where it can be done from), allows the possessor to copy any file from remotehost that remoteuser has access to. This is dangerous, and I should take whatever precautions I deem necessary to maintain the security and secrecy of this key. Some possibilities: ensure proper file permissions are assigned, consider using a key-caching daemon, and weigh whether I really need this process automated versus the risk.
ALSO NOTE: Another security detail to consider is the SSH daemon configuration on remotehost. This example focuses on a user (remoteuser) who is not root. I recommend not using root as the remote user because root has access to every file on remotehost. That capability alone is very dangerous, and the penalties for a mistake or misconfiguration can be far steeper than those for a 'normal' user. If you do not use root as your remote user (ever), and you make security decisions for remotehost, I recommend either:
PermitRootLogin no 
or:
PermitRootLogin forced-commands-only 
be included in the '/etc/ssh/sshd_config' file on remotehost. These are global settings, not just related to this connection, so be sure you do not need the capability these configuration options prohibit [8].
The 'AllowUsers', 'AllowGroups', 'DenyUsers', and 'DenyGroups' key words can be used to restrict SSH access to particular users and groups. They are documented in the man page for "sshd_config", but I will mention that they all can use '*' and '?' as wildcards to allow and deny access to users and groups that match patterns. 'AllowUsers' and 'DenyUsers' can also restrict by host when the pattern is in USER@HOST form.
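For example, a restrictive snippet in '/etc/ssh/sshd_config' might look like this (the values are illustrative, reusing this walkthrough's names, not required ones):

PermitRootLogin forced-commands-only
AllowUsers remoteuser@10.1.1.1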
Troubleshooting

Now that I have the key with no password in place and configured, I need to test it out before putting it in a cron job (which has its own small set of baggage). I exit from the ssh session to remotehost and try [9]:
$ rsync -avz -e "ssh -i /home/thisuser/cron/thishost-rsync-key" remoteuser@remotehost:/remote/dir /this/dir/
If this doesn't work, I will take off the "command" restriction on the key and try again. If it asks for a password, I will check permissions on the private key file (on thishost, should be 600), on 'authorized_keys' (on remotehost, should be 600), on the '~/.ssh/' directory (on both hosts, should be 700), and on the home directory ('~/') itself (on both hosts, should not be writeable by anyone but the user). If some cryptic 'rsync' protocol error occurs mentioning the 'validate-rsync' script, I will make sure the permissions on 'validate-rsync' (on remotehost, may be 755 if every remotehost user is trusted) allow remoteuser to read and execute it.
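A sketch of those permission fixes, using the paths from this example:

$ chmod 600 /home/thisuser/cron/thishost-rsync-key   # on thishost
$ chmod 700 ~/.ssh                                   # on both hosts
$ chmod 600 ~/.ssh/authorized_keys                   # on remotehost
$ chmod go-w ~                                       # on both hosts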
If things still aren't working out, some useful information may be found in log files. Log files are usually found in the /var/log/ directory on most Linux hosts, and in the /var/log/secure log file on Red Hat-ish Linux hosts. The most useful log files in this instance will be found on remotehost, but thishost may provide some client-side information in its logs [10]. If you can't get to the logs, or are just impatient, you can tell the 'ssh' executable to provide some logging with the 'verbose' switches: '-v', '-vv', '-vvv'. The more v's, the more verbose the output. One is in the command above, but the one below should provide much more output:
$ rsync -avvvz -e "ssh -i /home/thisuser/cron/thishost-rsync-key" remoteuser@remotehost:/remote/dir /this/dir/ 
Hopefully, it will always just work flawlessly so I never have to extend the troubleshooting information listed here [11] .
Cron Job Setup

The last step is the cron script. I use something like this:
#!/bin/sh

RSYNC=/usr/bin/rsync
SSH=/usr/bin/ssh
KEY=/home/thisuser/cron/thishost-rsync-key
RUSER=remoteuser
RHOST=remotehost
RPATH=/remote/dir
LPATH=/this/dir/

$RSYNC -az -e "$SSH -i $KEY" $RUSER@$RHOST:$RPATH $LPATH 
because it is easy to modify the bits and pieces of the command line for different hosts and paths. I will usually call it something like 'rsync-remotehost-backups' if it contains backups. I test the script too, just in case I carefully inserted an error somewhere.
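Making the script executable and giving it one manual run before scheduling it might look like this (a sketch):

$ chmod 700 /home/thisuser/cron/rsync-remotehost-backups
$ /home/thisuser/cron/rsync-remotehost-backups && echo "sync OK"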
When I get the script running successfully, I use 'crontab -e' to insert a line for this new cron job:
0 5 * * * /home/thisuser/cron/rsync-remotehost-backups 
for a daily 5 AM sync, or:
0 5 * * 5 /home/thisuser/cron/rsync-remotehost-backups 
for a weekly sync (5 AM on Fridays). Monthly and yearly ones are rarer for me, so look at "man crontab" for advice on those.
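For reference, a monthly entry (5 AM on the first of each month) would look like this:

0 5 1 * * /home/thisuser/cron/rsync-remotehost-backups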

Alright! Except for the everyday "keeping up with patches" thing, the insidious "hidden configuration flaws" part, and the unforgettable "total failure of human logic" set of problems, my work here is done. Enjoy!
Notes:
[0] The reason behind choosing an SSH key with no password, over options like ssh-agent or keychain, is that the automated process will survive a reboot of the host machine and execute at the next scheduled time without any intervention on my part (not all machines so automated are always accessible). If you do not have those requirements, these other options may lend your implementation more security.
[1] If remotehost only has SSH1 installed, you may need to use another key type: instead of 'rsa' you will need to use 'rsa1'. You can use 'dsa' instead of 'rsa', but it will still only be useful for an SSH2 connection (and key length may be an issue -- thank you @avenjamin). SSH2 connections are more secure than SSH1 connections, but you'll have to look elsewhere for the details on that ("man ssh-keygen" and Google). Also, the key creation can be done with the command ( ssh-keygen -b 2048 -f keyfile -t rsa -N '' ) to automate the "no key password" part, or ( ssh-keygen -b 2048 -f keyfile -q -t rsa -N '' ) to eliminate any output from the command.
[2] Some configurations use the file 'authorized_keys2' instead of 'authorized_keys'. Look for "AuthorizedKeysFile" in '/etc/ssh/sshd_config'.
[3] If you use a shell other than 'bash' (or other bourne compatible shell), like 'csh' or 'tcsh', the commands listed may not work. Before executing them, start up a 'bash' (or 'sh', or 'ksh', or 'zsh') shell using the 'bash' (or 'sh', or 'ksh', or 'zsh') command. After completing the commands, you will have to exit the 'bash' shell, and then exit the shell your host spawns normally.
[4] Remember not to insert any newlines into the "authorized_keys" file. The key information, and the inserted commands associated with that key, should all be on one line. The key you generate (the nonsensical stuff on the key line) will be different from the one here.
[5] I have seen one host ignore a properly presented IPv4 address and instead see the incoming connection as a IPv6-ish sort of address ("::fff:10.1.1.1"). I found the address in '/var/log/messages' on a Fedora Core 3 Linux host, and it does allow connections from that host with the IPv6-ish version in the 'authorized_keys' file.
[6] Another option for validation (and more) is the Perl script located here: http://www.inwap.com/mybin/miscunix/?rrsync, though it is more complicated. A version of this Perl script is now bundled with the rsync source here: http://www.samba.org/ftp/unpacked/rsync/support/rrsync (with improvements). If you are writing a custom script, in whatever language you find comfortable, look inside this one for suggestions.
[7] By the time the 'validate-rsync' script runs, an SSH connection has already been made with the SSH key you associated with this command in the 'authorized_keys' file. This example script basically returns 'Rejected' to anything other than a command that starts with "rsync --server", which is what rsync over ssh runs on the other end of the connection. I found this out by running 'ps auxw | grep rsync' on the remote end of the connection after initiating a long-running rsync job, but an rsync pro said you can add '-v -v -n' to your rsync command line options and it will display the command it will use on the server end, so use that to make your script's pattern more specific if you wish. The first six 'Rejected' patterns try to eliminate shell symbols that would allow a person to execute more than one command within a session (for example, a short rsync and some naughty command you don't want running remotely). You can also force the transfer to be read-only by changing the pattern to "rsync --server --sender*" (thanks, David Fred).
[8] "PermitRootLogin no" does what it says: the root user is not allowed to login via SSH. "PermitRootLogin forced-commands-only" requires that all connections, via SSH as root, need to use public key authentication (with a key like 'thishost-rsync-key.pub') and that a command be associated with that key (like 'validate-rsync'). For more explanation, use the "man sshd_config" command. If you are using Ubuntu, please make sure the package 'openssh-server' is installed (it is not installed by default).
[9] All kinds of SSH command line switches can be included (quoted) within the Rsync '-e' command line switch, like non-standard SSH server port connections (for example: "-p 2222" if SSH listens on port 2222), in addition to the private key ("-i identity_file") switch. (Per Funke suggested this and referenced http://mike-hostetler.com/blog/2007/12/rsync-non-standard-ssh-port).
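Putting that together with the earlier command, a non-standard port plus the identity file might look like this (a sketch, assuming sshd on remotehost listens on port 2222):

$ rsync -avz -e "ssh -p 2222 -i /home/thisuser/cron/thishost-rsync-key" remoteuser@remotehost:/remote/dir /this/dir/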
[10] You can find out what log file SSH will be writing to by looking in two files: '/etc/ssh/sshd_config' and '/etc/syslog.conf'. 'sshd_config' contains the parameter "SyslogFacility", which by default is set to "AUTH", but Red Hat typically sets it to "AUTHPRIV". Whichever it is, remember the setting and look for it in the 'syslog.conf' file. Usually you will find a line with 'authpriv.*' followed by some tabs and then the log file you are searching for. Pay no attention to lines with 'authpriv.none' in them, as they are probably taking in many kinds of messages but excluding those from the 'authpriv' syslog facility.
[11] Not likely.

RDIFF-BACKUP with --force


"Programming is like sex. One mistake and you have to support 
   it for the rest of your life". (Michael Sinz)


Fatal Error: Destination directory

/Volumes/Backup320

exists, but does not look like a rdiff-backup directory.  Running
rdiff-backup like this could mess up what is currently in it.  If you
want to update or overwrite it, run rdiff-backup with the --force
option.


Creating backups is good, but they are of little use if you can't restore files from them. A restore, at its simplest, is just a backup reversed. In other words, the order of directories on the command line is reversed: the mirror first, the directory to restore to second. There is one important caveat: rdiff-backup, by default, will not restore over an existing file/path. Think of it as a sort of foot/gun safety. You have two options: restore to another path, or use the --force switch to override the default behavior.

rdiff-backup gives you two basic methods for restoring a specific version of a file: time-based and number-based.
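For instance, a time-based restore uses the --restore-as-of (-r) switch. A sketch, reusing the mirror path from the error message above as the repository:

$ rdiff-backup --list-increments /Volumes/Backup320
$ rdiff-backup -r 10D /Volumes/Backup320 /tmp/restored
$ rdiff-backup --force -r 10D /Volumes/Backup320 /tmp/restored

The first command lists the versions available, the second restores the state from ten days ago to a clean path, and the third repeats the restore over an existing path, which is exactly where the warnings below apply.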


My reading is that using --force on a restore will overwrite existing
files with the same name, so you may lose previous data at the restore
destination. In general, if you are restoring a directory (or a complete
repository), it is logical to use a clean destination, in which case it
shouldn't be a problem.



When you are restoring a directory, "--force" will not only overwrite
existing files (which is probably what you intended, anyway), but it
will also _delete_ any files or even entire subdirectories that were
not present in the backup.  It will restore your directory to exactly
the state it was in at the time of the backup, nothing more, nothing
less.  That might be a nasty surprise.


--force
              Authorize  a more drastic modification of a directory than usual
              (for instance, when overwriting of a destination path,  or  when
              removing  multiple  sessions  with --remove-older-than).  rdiff-
              backup will generally tell you if it needs this.   WARNING:  You
              can cause data loss if you mis-use this option.  Furthermore, do
              NOT use this option when doing a  restore,  as  it  will  DELETE
              FILES, unless you absolutely know what you are doing.
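The --remove-older-than case the man page mentions looks like this in practice (a sketch; removing more than one increment at a time is what requires --force):

$ rdiff-backup --remove-older-than 4W --force /Volumes/Backup320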



Cheapest VPS Offers RAM >= 512 MB

I am trying to keep a record of affordable VPS providers whose packages include at least 512 MB RAM.


http://flipperhost.com/openvz_vps.php :
(I use them for one of my "cheap but enough" rdiff-backup servers)
Disk Space : 70 GB
RAM : 768 MB / 1 GB
Traffic : 0.65 TB/Month
Price : $7.99/Month

FlipperHost Promo1 VPS :
Disk Space : 45 GB
Bandwidth : 1.5 TB
Guaranteed RAM : 512 MB
Bursted Ram : 768 MB
Price : $4.99/Month


https://www.evolucix.com/clients/cart.php?a=confproduct&i=0 :
Virtual Servers Package Special - Evo1024-LEB
Disk Space: 20 GB
Guaranteed RAM: 1 GB / 1.5 GB
Traffic: 500 GB / month
Price : $6.95/ month




http://123systems.net/vps.html :
Virtual Servers Package Special - Lin-512MB, Dallas, TX
Guaranteed RAM: 512 MB/ 1 GB
Disk Space: 20 GB
Traffic: 1TB / month
Price : $6.00/ month


https://hostigation.com/?page=OpenVZ :
Package OVZ-512
RAM : 512/1024 MB
Disk Space : 50 GB
Traffic : 1000 GB/month
Price : $6/mo
Data Center Location : CLT | LAX
(The 3-letter code denotes the airport code of the city the VPS will be provided in.)
(Paying a full year makes you eligible for a two-month discount.)



http://buyvm.net/ or https://my.frantech.ca/cart.php?a=add&pid=47
Virtual Servers Package Special
Guaranteed RAM: 512 MB/ 1 GB
Disk Space: 50 GB
Traffic: 2 TB / month
Price : $5.95/ month
I will consider their $3.50/month package as my next backup server for my VPS.




https://billing.eoreality.net/cart.php?a=add&pid=216
End Of Reality – $5.75/Month, 1024MB OpenVZ VPS in Chicago, IL
LEB Offer: OpenVZ Professional VPS
RAM : 1 GB RAM
Disk Space : 25GB
Traffic : 1 GB
$5.75/Month
(Promo Code: lowendbox1)
(I took this one after my DirectServer.Net VPS lost data twice and a mass hacking made me mad.)



Disclaimer: No, I am not endorsed by any of them. This list of the cheapest VPS offers with at least 512 MB RAM is strictly for my own notes.

Finding Cheap VPS Backup Server Solution

If your website or Internet service depends on a Virtual Private Server (VPS), you need to know how to back up and restore your data in a simple and reliable way.

Have you lost your VPS data before?
    Never... and I like it that way
    Once... it still hurts
    Several times... how did you know?
Those are questions every Linux server admin faces while managing VPS or dedicated servers.

If you invest a little time into reading and testing this resource you’ll not only do a much better job of backing up your VPS, but save yourself a lot of time and effort in the process — not to mention, you’ll avoid the costly traps that most newbie webmasters-turned-admins fall into.
Could this happen to you? Even if you are a home user, almost one third of you have lost all of your files due to circumstances beyond your control, like a hard disk drive crash. If you then tried to get a quote from a data recovery service, you likely gasped at the price. There are more severe consequences for businesses: according to a PricewaterhouseCoopers survey, a single incident of data loss costs businesses an average of $10,000. You could say I put that to the test myself; the last time I lost my e-mails, it cost more than $10,000 to get everything back on track.
99% of VPS owners don't have a reliable backup strategy for their server. Interestingly, 99% of hosting companies don't mention this; they are too interested in getting you on board while unloading any responsibility for your data.
Server backup is not really that difficult. It just takes a little bit of effort, a little bit of thought, and a fair amount of resources. It is crucial to understand that online backup is a form of long-term insurance to protect your data against loss. It can take weeks or months to design a successful website or Internet service and to get into the search engines, and years to develop a strong customer/visitor base. On the other hand, it takes one poorly managed hardware upgrade, or server migration, or transition to another provider, to waste it all and start painstakingly from bits and pieces scattered on your developers’ hard drives or sent over public e-mail addresses. But don’t be overwhelmed by the potential consequences of such a disaster. Having a lot to lose is a sign of wealth and prosperity and by taking your time to visit this site you are making big steps towards protecting this lot.
    If you are posting it on the Internet, make sure you’re hosting it right. The Internet is great for meeting many people, but this is a two-way road. By all means, test the 3USD/mo shared hosting offers from the likes of Hostgator, Bluehost or Lunar Pages. They have their benefits. But when your needs grow, get a virtual private server or dedicated hardware behind your website. Nothing turns people away more than a slow and panting webserver, with the potential bonus of seeing your account suspended for consuming too many CPU cycles, bandwidth or disk space while on a so-called UNLIMITED hosting plan.
    Understand that hosting companies lose data every day. We all wish that were not true, yet we know these companies rely on hardware, software and people just like everybody else. The hidden ingredient is luck – some got away with it when losing data – backups worked or their customers did not complain.
    Be responsible for your business. You are just a customer to your service provider. Sometimes it is simply not worth it to them to help you recover data or restore your server. Have a backup plan and triggers in place to know when it's time to look for a new host.
    Keep your backup solution in working order. A disaster can happen any time, be it during the night, weekend or the few days far from civilisation you call holiday. Backups that stalled for months are like expired milk. You get sick opening them.
    Automate your website. There is no excuse for doing certain tasks manually. Put a price on your time and start acting as your own employer. Is your manual backup procedure (done timely and thoroughly, remember) worth 30 minutes every day?
    Consider paying for getting good backup. We can all design plans and solutions. Doing it right is a different story.
    Document your configuration and backup solution. Backups tend to be protected with almost unbreakable algorithms. Trying to restore without the password to your data is an expensive way of testing that privacy claim.
    Think about implementing your own remote backup solution. There is a growing market for offsite backup. You can leverage your experience to partner with a service provider like vps-backup.com or start your own venture.
    Don’t be too shy to discuss your concerns. If you can imagine a scary scenario, it might happen. Besides figuring out what to do yourself, try talking to others and pooling resources together.
    Read the rest of this page. Assuming I haven’t completely turned your brain to mush with all of the above verbiage, read on. The rest of this page explains in more detail what the site does, and how it does it.


What is the difference between online and traditional backup?

Traditionally, you back up your server to a disk or tape attached to it, then you ship that copy away (offsite). Since a virtual or dedicated server is hosted in someone's data centre, most providers try to back up locally for their peace of mind and yours. Sometimes you can download those backups to your home computer and have a sort of DIY offsite backup solution. I give you space on my own backup server and the tools to take backups and upload them to it automatically. In the end, you get the best of both worlds.

How VPS backup works

In the event of a disaster (failed updates, hacked server or data lost in a hardware incident), you can safely restore the files back to a previous state. Alternatively, if you took a backup before changing your hosting company, you can restore the files to the “new” server and have your website back online with minimal downtime and effort.

Cases when we need a VPS backup server:

    Our filesharing service relies on a VPS to keep track of users and files distributed over several servers in different locations. As a start-up, we run a tight operation, and backup costs were a constant source of concern before.

    When I moved my online store from eBay to a Linux VPS with Zencart, I never had time to think about backups. After two days of downtime following a hardware crash of our host server, a common friend recommended Garrett for a chat about backups. Next time we will be prepared!

    Our server was deleted in an attack on our hosting provider. It took them days to recover from backups, while our SMS billing service was back online in less than 10 hours on a new server. In a single event we avoided a loss worth hundreds of times the cost of the backup service.

Is offsite backup better than onsite backup?

Some experts will say that onsite solutions are faster, especially on restoring large files or disaster recovery (restoring the server data completely). This is partially true due to the high transfer speeds achievable over the LAN or with direct connected technologies (iSCSI, SAS or FC), on the condition that LARGE amounts of data need to be restored. Since most VPS have rather small disks by modern standards (10GB to 100GB), and given the high compression of data used by vps-backup.com, offsite backups can deliver perfectly comparable results and retain better granularity and control over the restoration process.
My rule of thumb is: if I want to avoid keeping all the eggs in the same basket, or better, in the same building, I choose offsite backups. Running and monitoring your own backups is not fail-proof. My servers have respectable uptime, and I took the common collection of steps to instill some redundancy in the service (RAID1, multiple connections, a replication server, etc.). Yet I am still exposed to black swans, perfect storms, and whatever other term Service Providers use to explain their downtime. This service does not come with an SLA, but with a lot of goodwill and my own business at stake along with yours. Enough said: if anything fails, do your best to let me know and I will do the same for you. This is a lot better than what banks are offering you, by the way; and how many of us manage to avoid banks these days?
Where can you find a good offer on an unmanaged VPS (less than $4, at least 40 GB)? You will need the VPS for a remote backup of your existing clients' websites, so no cPanel or DirectAdmin is needed. Webmin, even without Virtualmin, is enough for this task. It is just a backup server, right?
If you purchase a VPS, does it include the basic Linux installations (like PHP, MySQL, a web server, an FTP server)? Since it's unmanaged, is it vulnerable to attacks even if only the FTP server is started? Yes, even if you're simply running an FTPd, you are susceptible to exploits.
Your budget makes you vulnerable to problems like low-quality support, slow connection speeds due to overselling, occasional data loss, etc. I would suggest that if you really need your data to stay intact, you raise your budget a little and go with a good company.
Well, you can always try http://www.lowendbox.com/ to find some cheap VPS, but a $4 VPS with 40 GB of space is not something I'd rely on as your only backup solution. It's definitely feasible for a VPS used as a backup server; just be sure to do your research. Usually an unmanaged VPS is not safe out of the box; you need to secure it, or ask a company to secure it for you, otherwise sooner or later you will get into trouble. Instead of having the backup server as your fallback, just use the server as a backup solution. Depending on what you are running, use rsync or FTP backup methods to back up your files and data to your spare $4 VPS. Or just spend a little more on your main VPS and go with a host that makes backups daily. I have had good success with Hostigation as a backup VPS. It is < $4/mo, has good uptime, and has been relatively reliable.
They are unmanaged, which really means self-managed. That is, YOU are responsible for everything inside your container.
In my opinion, if you do backups from your live server frequent enough, you can live with occasional data loss. What is the probability of your backup VPS and your live VPS going out at exactly the same time? Also, if you're using it purely as a backup server, what support do you need?
Speed is the only factor you mentioned that could be important. If you do get a live server failure and need to restore from a backup, you want to do it as fast as possible. So I agree, this is an important factor. Having said that, I'd still advocate Hostigation. I've gotten good speeds and reliable uptime from them.
If the backup VPS is going to be used ONLY for your client's website backups, you can save quite a bit by going with a low-memory VPS designed for this purpose. I have a different plan at Hostigation.com, but check out their backup VPS plans: generous disk space with low RAM, and low cost. I agree with the others here who have said this is a good option. If you want to do more with the box, then a $5.95 VPS from Burst.net would work; but you can't come whining back here complaining that they didn't help you set up BeJeweled on it. These inexpensive VPS are absolutely unmanaged (or self-managed). There are $10-a-month backup services like BQBackup that do it for you; that might be a way to get it done without having to worry about securing and maintaining the VPS. I am checking http://virpus.com/?vps-backup-cheap; I am not sure if they are a good company, but they have a daily backup. I rsync my reseller host to a VPS, or VPS to VPS only. My method of doing this requires root access to each server to set up SSH keys, which you couldn't do with your reseller host. But *I think* you could set up cPanel/WHM backups using remote FTP to your VPS. Security is really an issue for now if you are on a low budget, but you will need to have your own backup too; better be safe, since you are the one dealing with your clients.
First off, I highly recommend a managed VPS if you are new to VPSes; unmanaged is more suited to experienced users. Usually unmanaged Linux VPS do not come with the installations you mentioned. I agree that a bit of knowledge is required to properly secure your server. Unbelievably, however, one of the most popular reasons for a VPS to be compromised is still a weak password. Always make sure this is changed (to a good one) as soon as you first log in to any unmanaged VPS.
You usually get your choice of OS (CentOS, Debian, etc). Usually, you don't have PHP, MySQL or a webserver such as Apache installed, and the host won't help you install these. Expect to spend from 10 to 30 hours learning how to set things up and secure them, with several "OS reinstalls" from the VPS control panel (at least, that was my experience with my first self-managed VPS). And there's ongoing maintenance as well. For my own hosting accounts I use a managed VPS from a good provider and pay the extra. I use self-managed VPS for some special projects and my DNS cluster. I could save some money by using a self-managed VPS for my hosting accounts, but the time savings and reduced hassle factor make a managed VPS a much better solution for me.
I don't know if you've looked at backup services like VPS-Backup.com, BQBackup (http://www.bqbackup.com/) or WebbyCart (http://www.webbycart.com/backup.htm). For $5 to $10 a month you can get automated, hassle-free backup services without the time investment of setting up your own VPS. I think WebbyCart is OK; you only need a pure-ftpd for backups from your current host. You can also use WHMEasyBackup to automatically back up all accounts. They also use rsync technology.

Update us later and let us know how it works for you. It's a less complicated solution that still provides the benefits of having a separate backup in a separate geographical location.