Backup programs for Linux. Backup on Linux and other Unix-like OSes

The possibility of accidental damage to a system, even one as reliable as Linux, always exists. As a rule, reinstalling the OS takes a lot of time and effort. To avoid this kind of trouble, you should back up Ubuntu Linux. I will not dwell on the various ways to create a backup copy of Ubuntu, but will describe the method that I use myself and recommend to others; a friend of mine recommended it to me. You can also look at creating a backup copy of all installed programs on the system in this article. But that method is best used not for system backup, but for cases where similar software needs to be installed on many computers with the same OS and configuration.

Creating an Ubuntu backup with rsync

The advantage of creating an Ubuntu backup with rsync is that no additional packages or software need to be installed to copy and restore the system. Everything is done from the console. But don't be intimidated by the console! In our case, everything will be extremely simple and clear.

A few words about rsync:

This command is a very powerful tool for working with files. You can view the full list of its capabilities by running man rsync in the console. The Ubuntu backup method via rsync that I propose is the simplest and easiest to learn.

Ubuntu backup from personal experience

To make everything as simple as possible, I'll tell you how I back up my system. My hard drive is divided into 5 partitions, of which 2 are reserved for Ubuntu: the system partition / and a partition for user data, /home. I copy the entire contents of the system partition / into a special folder /home/.backup on the user partition. In case of problems with the Ubuntu OS, I boot from a LiveCD and simply copy the Ubuntu backup back to the system partition. The procedure for backing up and restoring Ubuntu Linux below is based on this example.

Backup Ubuntu

Execute in the console: sudo rsync -aulv -x / /home/.backup/ Now let's break down the syntax of this simple command:
  • sudo - get superuser (root) rights;
  • rsync - run the backup command with the additional arguments -aulv and -x;
  • / - the partition to be copied (the system partition);
  • /home/.backup/ - the destination for the copied files (the user partition).
I deliberately put a dot at the beginning of the directory name so that it would be hidden. I also made the superuser the owner of the directory, with access for him alone, so that nobody pokes around in it.

Restoring Ubuntu via rsync

Let's say our system has failed and we need to restore Ubuntu. We boot the computer from a Linux LiveCD and open the console. Now we need to mount (connect) the system partition and the user partition in order to perform the recovery, and here we can go two ways: the first method is based on mouse clicks, the second on working in the console.

Method No. 1

Open the file manager and find the list of hard drive partitions on your PC in the left pane. Connect them by clicking the mouse, after which they become browsable, with mount points located under the /media/ directory. Determine which partition is the system one and which is the user one. The disadvantage of this method is that the partitions receive unwieldy mount points like /media/2F45115E1265048F. Note the mount points of the system and user partitions, then move straight on to the recovery, skipping the "Method No. 2" section.

Method No. 2

For more advanced users. The advantage is that we assign names to the mount points ourselves and can do without the cumbersome addresses.
1. Display the list of HDD partitions: sudo fdisk -l - this command shows the complete list of partitions available in the system. For example, I get this picture:

Device     Boot  Start      End        Blocks      Id  System
/dev/sda1        771120     27342629   13285755    83  Linux
/dev/sda2        27342630   822190634  397424002+  83  Linux
/dev/sda3  *     822190635  883639259  30724312+   7   HPFS/NTFS/exFAT
/dev/sda4        883639260  976768064  46564402+   5   Extended
/dev/sda5        883639323  976768064  46564371    7   HPFS/NTFS/exFAT

The System column makes it easy to see that the Linux file systems are located on the partitions:
  1. /dev/sda1
  2. /dev/sda2
2. Mount the Linux partitions with the mount command. To do this, first create a mount point for each partition: sudo mkdir /media/1 and sudo mkdir /media/2. Then use mount to mount the partitions: sudo mount /dev/sda1 /media/1 and sudo mount /dev/sda2 /media/2.
3. Determine which partition is the system one and which holds the users' home folders. We can either simply browse the mounted directories in the file manager and see which of them is the system one, or use the ls command (which lists the files at a given path): ls /media/1 and ls /media/2. If you are not a very experienced user: the Linux system partition will usually contain folders such as bin, boot, dev, etc, mnt. Let's say we have established that the system partition is mounted at /media/1.

Direct recovery

1. Copy the files from the backup, using the same command: sudo rsync -aulv -x /media/2/.backup/ /media/1/
Note: if you used graphical Method No. 1, you will have different mount points instead of /media/1/ and /media/2/!
2. Unmount the partitions after copying is complete: sudo umount /media/1 and sudo umount /media/2. Reboot the computer and enjoy the Ubuntu restored from the backup.

Ubuntu Backup Restore Video Tutorial

Based on the material described above, I plan to record a demonstration video tutorial on restoring Ubuntu in a virtual machine.

I work in an organization with a small staff whose activity is closely tied to IT, and we have system administration tasks. This is interesting to me, and I often take on some of these tasks myself.

Last week we hired a freelancer to set up FreePBX under Debian 7.8. During the setup it turned out that the server (yes, that's what I call a regular PC) refuses to boot from the HDD when the USB 3G modems we use for calls to mobile phones are connected; digging through the BIOS did not help. A mess. I decided the system needed to be moved to another piece of hardware. So two related tasks appeared at once:

  • make a server backup;
  • restore the backup on another hardware.
Googling did not give clear answers on how to do this; I had to gather the information piece by piece and experiment. I discarded Acronis and its kind immediately, because they did not interest me.

I have little experience with Linux systems: setting up an OpenVPN server, FTP servers, and a couple of other little things. I would characterize myself as a person who can read man pages and edit configs :)

Below I describe my particular case and why I did what I did. I hope it will be useful for newcomers, and bearded admins will smile remembering their youth.

Let's start by digging into the theory.
There are a lot of articles on creating backups; I noted two methods for myself:
  • tar packs and compresses all the files; the MBR is not saved, and my backup weighs about 1.5 GB;
  • dd makes a complete copy of the partition, including the MBR and all the space not occupied by files; the archive is the size of the partition, in my case ~490 GB.

The second method requires an external hard drive with a capacity no smaller than the partition being archived. And what do you do with it afterwards - store it on a shelf? I settled on tar: a little harder to implement, since you will need to recreate the MBR, but the time to create/restore the archive is significantly less and storing the backup is easier - a gigabyte and a half can be uploaded to the cloud and downloaded when needed. You can even keep it on the same live flash drive I will boot from.

So, the action plan:
  1. creating a backup;
  2. formatting, disk partitioning, creating a file system;
  3. backup restoration;
  4. creation of MBR;
  5. testing and troubleshooting.

1. Creating a backup

We boot from a live flash drive, mine is debian-live-7.8.0-amd64-standard.

Switch to root:

sudo su
We mount the partition that we will archive; for me it is sda1. To avoid accidentally messing things up, we mount it read-only. You can view all your partitions with the commands ls /dev | grep sd or df -l

mount -o ro /dev/sda1 /mnt
Our flash drive is already mounted, but in read-only mode; we need to remount it read-write in order to write the backup there.

mount -o remount,rw /dev/sdb1 /lib/live/mount/medium
Everything is ready to create the archive:

tar -cvzpf /lib/live/mount/medium/backupYYYYMMDD.tgz --exclude=/mnt/var/spool/asterisk/monitor --exclude=/mnt/var/spool/asterisk/backup /mnt/
Here the parameters are as follows: c - create an archive, v - display information about the process, z - use gzip compression, p - preserve information about owners and access rights, f - write the archive to the file at the given path, --exclude - exclude a directory from the archive (I excluded the directories with call recordings and with FreePBX backups), /mnt/ - the directory that we are archiving.

We are waiting... all the preparation and creation of the archive took me 10 minutes. If the flash drive were faster, it would have been done in 7-8 minutes.

Let's unmount the disk:

umount /mnt
... and reboot.

reboot
We store the archive in a safe place outside the office.

Restoring a backup on another hardware

2. Partition the disk and create a file system
We boot from a live flash drive, I still have the same debian-live-7.8.0.

Switch to root:

sudo su
Let's mark the disk. I liked the pseudo-graphical interface utility cfdisk. Everything is simple and clear there.

cfdisk
We delete all the existing partitions. I created two new ones: 490 GB for / (sda1), and 10 GB for swap (sda2) at the end of the disk, since it will hardly be used. Check the partition types: the system one should be type 83 Linux, the second 82 Linux swap / Solaris. Mark the system partition as bootable, save the changes, and exit.

We create a file system on the first partition.

mkfs.ext4 /dev/sda1

3. Unpack the archive.
Mount the formatted partition

mount /dev/sda1 /mnt
Unpack the archive directly from the flash drive

tar --same-owner -xvpf /lib/live/mount/medium/backupYYYYMMDD.tgz -C /mnt/
The --same-owner parameter preserves the owners of the files being unpacked, x - extract from the archive, v - display information about the process, p - preserve access rights, f - the file to be unpacked, C - the directory to unpack into.
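The create and extract commands can be rehearsed end-to-end on throwaway directories — a sketch with invented paths; on the real system the archive lives on the flash drive and the tree is /mnt:

```shell
#!/bin/sh
# Rehearse the tar round trip: create an archive with an exclusion,
# then unpack it elsewhere and check what arrived.
src=$(mktemp -d)
out=$(mktemp -d)
work=$(mktemp -d)
archive="$work/backup.tgz"
mkdir -p "$src/etc" "$src/var/spool/asterisk/monitor"
echo "keep me" > "$src/etc/keep.conf"
echo "skip me" > "$src/var/spool/asterisk/monitor/call.wav"
# Archive relative paths so extraction lands cleanly under -C.
( cd "$src" && tar -cvzpf "$archive" --exclude=./var/spool/asterisk/monitor . )
tar --same-owner -xvpf "$archive" -C "$out"
ls "$out/etc"
```

The excluded directory never makes it into the archive, so it does not appear after extraction either.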

4. Create an MBR on the new disk.
To create the boot record correctly, we mount the working directories into our future root directory, for me /mnt. The /dev and /proc directories are currently in use by the live system, so we use the bind option to make them available in two places at once:

mount --bind /dev /mnt/dev
mount --bind /proc /mnt/proc
Switch to the new system using chroot:

chroot /mnt
Making a swap partition for the new system:

mkswap /dev/sda2
Let's connect it:

swapon /dev/sda2
For grub to work, it needs the correct UUIDs of the partitions in fstab; right now the partitions of the previous system are listed there:

nano /etc/fstab
Open a second terminal (Alt+F2) as root:

sudo su
We call:

blkid
And we see the current UUID of the partitions.

We rewrite them in fstab by hand, switching between the terminals with Alt+F1 and Alt+F2. Yes, it's tedious, but trying to copy-paste took me longer than retyping. Save fstab.
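The swap could also be scripted with sed instead of retyping; here is a sketch on a scratch copy of fstab, with UUIDs invented for the demo (the real ones come from blkid):

```shell
#!/bin/sh
# Replace old partition UUIDs with new ones in a scratch copy of fstab.
# The UUIDs below are made up for the demo.
fstab=$(mktemp)
cat > "$fstab" <<'EOF'
UUID=11111111-aaaa / ext4 errors=remount-ro 0 1
UUID=22222222-bbbb none swap sw 0 0
EOF
sed -i -e 's/11111111-aaaa/33333333-cccc/' \
       -e 's/22222222-bbbb/44444444-dddd/' "$fstab"
cat "$fstab"
```

On the real system you would run the same substitutions against /etc/fstab inside the chroot, after double-checking the new UUIDs.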

Install grub2. I have one physical disk, so I install it to sda:

grub-install /dev/sda
It should install on the clean disk without errors. Update grub's configuration so it picks up the new fstab:

update-grub
Returning to the Live system:

Exit
Unmount all directories:

umount /mnt/dev
umount /mnt/proc
umount /mnt
If there are processes that use these directories, we kill them using fuser.

That's it, let's go. Booting from the hard drive:

reboot
The article should have ended here, but I had problems connecting to the Internet. The server sees the network, sees the computers in it, but does not go to the Internet... and this is kind of important for telephony.

5. Testing and troubleshooting.
ifconfig -a
It shows the interfaces eth1 and lo. Googling said that the gateway can only be assigned to the eth0 connection; the rest are intended only for work within the local network.

It looks like the missing eth0 is a consequence of how the system was migrated. Find the file responsible for interface numbering and look inside:

nano /etc/udev/rules.d/70-persistent-net.rules
Indeed, two active interfaces are defined there by their MAC addresses. Comment out the first one and assign eth0 to the second.
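The edited file ends up looking roughly like this (a sketch; the MAC addresses and the exact rule fields are invented for illustration):

```
# /etc/udev/rules.d/70-persistent-net.rules
# NIC of the old machine - commented out, it no longer exists here:
# SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:11:22:33:44:55", NAME="eth0"
# NIC of the new machine - renamed from eth1 to eth0:
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="66:77:88:99:aa:bb", NAME="eth0"
```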

Restarting /etc/init.d/networking did not help, so reboot:

reboot
We connect the dongles, check that everything works.
Thank you for your attention.

Greetings to all!

Today I would like to tell you how to make backups in Ubuntu Linux.
The possibility of accidental damage to an operating system, even one as reliable as Linux, always exists, and reinstalling the OS takes a lot of time and effort. To avoid these troubles, it is best to make backups.

There is no need to install additional software for backups; the system already has a built-in tool. To use it, open "System Settings".

Clicking on the "Backups" tool opens a new window.

By default the main "Overview" tab opens, but since we have not created any backups yet, let's first go through the settings in the other items.

In the "Folders to save" item we can add the folders we need to the list for backup. By default, only the home directory is configured there. To add more directories, click on the "+" icon.

After selecting one or more directories we need, click the “Add” button.

After this we will see the selected directories in our list. Since I selected "Computer" under devices, I added the root directory, and a copy of all folders will be made. In this case the home folder can be removed from the list by highlighting it and clicking on the "-" icon.

Next, we configure the folders that will be excluded from the backup. Go to the "Excluded folders" item and, in the same way, add the directories and folders that should not be included in the backup copy.

The next item is "Backup location", where we specify where the backup copies will be stored. It is best to choose a separate external hard drive for this: it is safer, and the data will be kept in a secure place.

In the "Scheduling" item we configure how long old backups are kept and how often new ones are created.

To enable automatic backups, drag the slider to the right with the mouse and select weekly backups; we will keep copies for six months. You can choose the backup frequency and retention period yourself.

After all the settings have been made, you can create the first backup. Go back to the "Overview" item and click on the "Create a backup" button.

The first step of the backup will ask you to set a password for it.
Enter a password and click Continue.

The backup process will begin. It can take quite a long time, depending on the size of the folders you selected.

After the process is completed, close the window.

Let's check the directory where the backup copy was saved. In it we see files in gpg format; I have sixty-one such files.

Now, in case of any problems in the system, we can always restore its operation to its previous stable mode.

We have looked at how to make backups in Ubuntu Linux. If anything on this topic remains unclear, or you have suggestions, please write them in the comments. Bye everyone!

Programs that perform a complete backup by duplicating the original data are called backup programs. Obviously, the main purpose of backup is to restore order out of chaos by recovering important files in the event of a disaster. Some popular backup programs use SQL databases, access systems remotely, and copy files to another machine.

If you use Linux, there are many backup programs to choose from. Below is a list of several of the best free and open-source backup programs to try.

It is the Linux equivalent of Apple's Time Machine, based on GNOME. Like many other backup utilities, this package creates incremental backups of files that can later be used for recovery. Its snapshots are copies of a directory at a specific point in time. Snapshots of files that have not changed since the previous snapshot take up very little space: instead of backing up an unchanged file again, the snapshot uses a hard link to the existing backup of the file in its original state.
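The hard-link trick behind such snapshots is easy to see in the shell — a generic sketch, not this particular tool's on-disk layout:

```shell
#!/bin/sh
# Two snapshots of an unchanged file share a single inode:
# the second "copy" is only a new directory entry, not new data.
data=$(mktemp -d)
snaps=$(mktemp -d)
echo "unchanged content" > "$data/doc.txt"
cp -a  "$data" "$snaps/snap.0"          # first snapshot: a real copy
cp -al "$snaps/snap.0" "$snaps/snap.1"  # next snapshot: hard links only
stat -c %h "$snaps/snap.1/doc.txt"      # link count shows the sharing
```

Snapshot tools of this kind apply the same idea on a larger scale: every file that is unchanged in a new snapshot is just another hard link to the copy already on disk.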

It is an open-source clone of Symantec Ghost Corporate Edition. The package is based on DRBL, partition imaging, ntfsclone, partclone, and udpcast, which allow you to take a snapshot of the data for backup and recovery. There are two versions of the Clonezilla package: Clonezilla live and Clonezilla SE (Server Edition). Clonezilla live is suitable for backing up and restoring a single machine, while Clonezilla SE is designed for mass deployment and can clone many computers at the same time.

Duplicity makes copies of directories by creating encrypted volumes in tar format and uploading them to a remote or local file server. Because it uses librsync, incremental archives are space-efficient and record only the parts of files that have changed since the previous backup. Because it uses GnuPG to encrypt and/or sign these archives, they are protected from being spied on or tampered with on the server.

Bacula is an open-source, enterprise-level backup system designed for heterogeneous networks. The package is designed to automate tasks that often require the intervention of a system administrator or operator. Bacula has backup clients for Linux, UNIX, and Windows, and can also use a range of professional backup devices, including tape libraries. Administrators and operators can configure the system through a command-line console, a GUI, or a web interface; the catalog of stored data can be kept in MySQL, PostgreSQL, or SQLite.

Advanced Maryland Automatic Network Disk Archiver (AMANDA) is a backup system that allows an administrator to configure one master server to back up a large number of networked hosts to tape or optical media. AMANDA uses dump and/or GNU tar and can back up large numbers of workstations running different versions of Unix.

To work on projects I use svn, hosted on a remote virtual dedicated server running Ubuntu 8.04. Over time the volume of data has grown, as has its criticality; losing it would be a nightmare. From time to time I copied the repositories to my local computer. Recently I got tired of this and began to look for ways to automate the job. I won't describe the search and the options; I'll just present the results.

So, we have a remote host running Ubuntu with some quite critical data. It would seem logical to set up a backup directly on the remote host, using tar via cron, rsync, and so on. But since space on virtual dedicated hosting is quite expensive and better used for business purposes, it would be ideal for the data to be copied automatically to some local machine with more than enough space. In my case, this is the office file server, running the same Ubuntu.

Installation

Install rsnapshot:

$ sudo apt-get install rsnapshot

If you are using a non-debian-like distribution, rsnapshot is probably also available in your distribution's repositories. For CentOS, with RPMForge enabled, this is done, for example, like this:

# yum install rsnapshot

Now we need to create a directory where we are going to store our “snapshots”:

$ sudo mkdir /var/snapshots

Settings

Now you can move on to setting up rsnapshot itself:

$ sudo nano /etc/rsnapshot.conf

Instead of nano, you can use any other editor, such as vi, or gedit if you are working in GNOME.
The following parameters need to be configured:

snapshot_root - the directory where you want to save the snapshots.

interval xxx yy - xxx is the name of the interval (for example hourly or daily), yy is the number of snapshots to keep for each. For example:
interval hourly 6
interval daily 7

means that we want to keep 6 hourly copies and 7 daily ones. Once the specified number of copies exists, rsnapshot rotates out the oldest in favor of the newest.

Uncomment cmd_cp. Uncomment cmd_ssh and change it to:

cmd_ssh /usr/bin/ssh
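Pulled together, the edited lines of /etc/rsnapshot.conf look roughly like this — a sketch matching the settings above; remember that the separators must be tabs, not spaces:

```
snapshot_root	/var/snapshots/

cmd_cp		/bin/cp
cmd_ssh		/usr/bin/ssh

interval	hourly	6
interval	daily	7
```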

A backup is configured with the backup command: backup <source> <destination>:

#Add the /etc/ folder from the local machine to the localhost/ folder
backup /etc/ localhost/
#Add the /var/svn folder from the remote machine to the remotehost/ folder
backup [email protected]:/var/svn/ remotehost/

Remember that spaces are not allowed in the configuration file - use only tabs.

Trial run

Let's run rsnapshot:
$ rsnapshot hourly

The second parameter means the interval that we set in the configuration file.
The command can be executed for a long time. After execution, look at what it created:
$ ls -l /var/snapshots

For now the directory should contain just one entry: hourly.0. On subsequent runs rsnapshot will create hourly.1, hourly.2, and so on, until it reaches the maximum we specified in the configuration file.

Setting up cron

On Ubuntu, a file /etc/cron.d/rsnapshot is automatically created with the following content:
0 */4 * * * root /usr/bin/rsnapshot hourly
30 3 * * * root /usr/bin/rsnapshot daily
0 3 * * 1 root /usr/bin/rsnapshot weekly
30 2 1 * * root /usr/bin/rsnapshot monthly

That's all. A snapshot of the data from your remote server should now be created automatically 6 times a day. The data is safe and geographically distributed.

By the way, 6 copies a day does not mean the total size will be 6 times larger than with one copy a day: if the files do not change between copies, the overall size of the copies stays almost unchanged.

Additional Information

Using the backup_script parameter, you can also configure backups of MySQL databases, and indeed anything else. I have not described this process because I don't use it and can't say anything specific.
You can read more on Google; a search for rsnapshot brings up plenty of relevant links, albeit in English.