Cool Solution - Migrating a native UCS installation to UVMM
Revision as of 08:15, 7 October 2016


Note: Cool Solutions are articles documenting additional functionality based on Univention products. Not all steps shown in this article are covered by Univention Support. If you have questions about your support coverage, contact your Univention contact person before implementing any of these steps.

Please also note the legal information in the Terms of Service.
Note: This article is not yet reviewed.


The following article describes how to migrate a standard installation of UCS into a UVMM instance. The values and device names must be adapted to each individual setup.

Scenarios

Live migration

If the server must remain accessible during the migration process, special care must be taken with application and user data. This, however, is beyond the scope of this article. If desired, Univention offers advice for such a scenario.

Migration of a DC Master

If the DC Master does not hold any data or provide any services that need to be migrated, installing a DC Backup and promoting it to DC Master is preferable to migrating the DC Master. The promotion is done with the following command on the DC Backup:

univention-backup2master

If this is not possible and more servers in addition to the DC Master need to be migrated, the DC Master must be saved and restored as the first system. If desired, Univention offers advice for such a scenario.

Preparations

In addition to the UCS system that will be migrated and a working installation of UVMM, a Linux Live-CD with LVM support (e.g. SystemRescueCd) is needed.

You can now continue by either copying the whole harddrive (including empty space) or by copying only the actual files on the harddrive to a new UVMM instance.

Copying the whole harddrive

To transfer a native UCS installation into a UVMM instance, the harddrive has to be saved as an image file, which is then used by UVMM.

Hint: If the native installation used more than one harddrive, the following procedure must be repeated for every harddrive!

To create the image file, the system must be booted with the Live-CD. The installed harddrives can be identified by using the following command:

fdisk -l

It is recommended to use an external harddrive or a USB flash drive to transfer the system:

dd if=/dev/sda of=/mnt/usb/ucs.raw

Depending on the size of the device sda this will take some time.
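To monitor the copy and usually speed it up, GNU dd supports a larger block size and (since coreutils 8.24) a progress indicator. The sketch below demonstrates the flags on a scratch file, since the real device names depend on your setup; the equivalent for the actual copy would be `dd if=/dev/sda of=/mnt/usb/ucs.raw bs=4M status=progress`.

```shell
# Create a small scratch file that stands in for the real disk.
truncate -s 16M /tmp/demo-disk.img
# Copy with a 4 MiB block size and live progress output.
dd if=/tmp/demo-disk.img of=/tmp/demo-copy.raw bs=4M status=progress
# Verify the image is bit-identical to its source.
cmp -s /tmp/demo-disk.img /tmp/demo-copy.raw && echo identical
```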

You can also transfer the harddrive by piping the dd command through ssh. This eliminates the need for external storage, but might be more time-consuming due to possible CPU and network limits. Example command:

dd if=/dev/sda | ssh root@<UVMM SERVER> 'dd of=/var/lib/libvirt/images/ucs.raw'

When the process is finished for all drives you want to copy, the system can be shut down.

The image is saved as a raw file and can now be used as a virtual harddrive with UVMM. For this, the raw image is copied to /var/lib/libvirt/images on the UVMM host.
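For example, the image could be copied from the machine holding the external drive with scp (the <UVMM SERVER> placeholder must be replaced with your host):

```
scp /mnt/usb/ucs.raw root@<UVMM SERVER>:/var/lib/libvirt/images/
```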

If KVM is used for virtualization and advanced features (such as snapshots) are desired, the image must be converted to a qcow2 file:

cd /var/lib/libvirt/images/
qemu-img convert -f raw -O qcow2 ucs.raw ucs.qcow2

Afterwards, a new virtual instance must be created, using the profile that best describes the installation, e.g. UCS 4.1 (64 Bit). The raw or qcow2 file must be assigned as its harddrive. After starting the virtual instance, the system boots up.

Migration by copying the core files

To transfer a native UCS installation into a UVMM instance, its files are saved to an archive and then restored into a newly created virtual machine.

Attention: If the system that is about to be migrated contains databases or provides user data on network shares, ensure that the databases are shut down properly and no users are connected to the server, to prevent data loss!

Using the following script, all files and ACLs are saved on the server:

#!/bin/sh
# Back up all files and ACLs of the system to <my-Backup-Path>.
export BACKUPPATH=<my-Backup-Path>
mkdir -p $BACKUPPATH
# Archive the whole filesystem with low I/O and CPU priority, keeping
# numeric owners and skipping the paths listed in the file <exclude>
# as well as the backup directory itself.
ionice -c 3 nice -n 20 tar cvjf $BACKUPPATH/files.tar.bz2 --numeric-owner --atime-preserve -X <exclude> --exclude=$BACKUPPATH /
# Save all ACLs so they can be restored on the new system.
getfacl --skip-base -RP / > $BACKUPPATH/acl

Please ensure that the directory <my-Backup-Path> has enough free space to hold the backup. Furthermore, the file <exclude> must be created, containing directories that are not to be saved with the backup. The following directories must be excluded via the file:

/dev
/dev/shm
/dev/pts
/lib/init/rw
/media
/mnt
/proc
/sys
/var/lib/nfs/rpc_pipefs
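For illustration, the exclude file could be created with a heredoc (the file name exclude.txt is an assumption; the backup script above refers to it as <exclude>) and then extended with the recommended entries:

```shell
# Write the mandatory exclude list to a file; pass it to tar via -X.
cat > exclude.txt <<'EOF'
/dev
/dev/shm
/dev/pts
/lib/init/rw
/media
/mnt
/proc
/sys
/var/lib/nfs/rpc_pipefs
EOF
```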

It is recommended to exclude the following directories as well:

/tmp
/var/backups
/var/cache/apt/archives
/var/lib/univention-repository
/var/lib/univention-ldap/replog
/var/tmp
/var/univention-backup

Attention: If the backup is saved directly to a mounted drive, the mountpoint must be excluded as well; otherwise, the files on the external drive will also be included in the backup!

The directory /var/log should be saved, because services might refuse to start when no log files are found. It is therefore recommended to exclude all gzipped (old, rotated) files and include only the latest log files.

The script must be made executable prior to executing it as the root user:

chmod +x script

Before shutting down the old system, write down the partitioning using gdisk for a GPT-partitioned system, or fdisk for an MBR-partitioned system. To be safe, you can also write down the information displayed by the following commands:
pvdisplay
vgdisplay
lvdisplay
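Instead of copying the output by hand, it can also be dumped next to the backup (reusing $BACKUPPATH from the backup script; the device name /dev/sda is an assumption):

```
fdisk -l /dev/sda > $BACKUPPATH/disk-layout.txt   # use gdisk -l for GPT disks
pvdisplay >> $BACKUPPATH/disk-layout.txt
vgdisplay >> $BACKUPPATH/disk-layout.txt
lvdisplay >> $BACKUPPATH/disk-layout.txt
```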

Migration into a virtual system

First, create a new VM with the profile suiting your native installation, e.g. UCS 4.1 (64 Bit); if unsure, use Other (64 Bit). The harddrives must at least match the source system's in size and quantity. Furthermore, a second harddrive must be created that is at least the size of the backup file, and the Live-CD must be mounted in the virtual CD-ROM drive.

When booting the Live-CD, the kernel matching the old server's architecture should be loaded.

After the system is booted up, make sure the system has an IP address.
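If no address was assigned automatically, one can be requested via DHCP or set statically; the interface name eth0 and the addresses below are assumptions to be adapted:

```
ip addr show                        # check for an existing address
dhclient eth0                       # request one via DHCP, or:
ip addr add 192.0.2.10/24 dev eth0  # assign a static address
ip route add default via 192.0.2.1
```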

If the old installation used a GPT table, use gdisk, else use fdisk. Run the appropriate tool to partition the harddrives:

fdisk /dev/vda

Recreate the exact partitioning of your old system. The command sequence in gdisk is similar to the one in fdisk:

n -> p -> 1 -> (empty) -> 999423
a -> 1
n -> e -> 2 -> (empty) -> (empty)
n -> (empty) -> (empty)
t -> 5 -> 8e
w

Now the Logical Volume Groups must be created on the harddrive. In a UCS standard installation, one VG exists with one volume. This can be recreated with the following commands:

pvcreate /dev/vda5
vgcreate vg_ucs /dev/vda5
lvcreate -L 2G -n swap_1 vg_ucs
lvcreate -l 100%FREE -n root vg_ucs
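Before formatting, it can be worth double-checking that the physical volume, volume group, and logical volumes match the layout noted on the old system:

```
pvs
vgs
lvs
```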

Now the /boot and LVM partitions can be formatted, and the swap partition can be created:

mkfs.ext2 /dev/vda1
mkswap /dev/mapper/vg_ucs-swap_1
mkfs.ext4 /dev/mapper/vg_ucs-root

Attention: Make sure to format the partitions as they are in the native installation to prevent errors!

If you copy the backup file over the network, a second harddrive must be formatted:

fdisk /dev/vdb
n -> p -> 1 -> (empty) -> (empty)
w
mkfs.ext4 /dev/vdb1

Now the harddisks are ready to be mounted to the following locations:

mount /dev/mapper/vg_ucs-root /mnt/custom -o acl
mkdir /mnt/custom/boot
mount /dev/vda1 /mnt/custom/boot -o acl
mount /dev/vdb1 /mnt/backup

Copying the files to the new system

When the backup files are to be moved over the network, use the following commands:
Note: Make sure that your network interface is configured.

scp root@<my-backup-host>:<my-backup-path>/files.tar.bz2 /mnt/backup
scp root@<my-backup-host>:<my-backup-path>/acl /mnt/backup

When the files are saved on an external drive, mount the drive and copy the files into the folder /mnt/backup.
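For example, assuming the external drive shows up as /dev/vdc1 inside the VM (verify with fdisk -l):

```
mkdir -p /mnt/external
mount /dev/vdc1 /mnt/external
cp /mnt/external/files.tar.bz2 /mnt/external/acl /mnt/backup/
```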

The files can now be extracted from the archive:

cd /mnt/custom
tar xvjp --atime-preserve --numeric-owner -f /mnt/backup/files.tar.bz2 -C ./

All directories excluded via the exclude-file must be recreated. Additionally, the folders /tmp and /var/tmp must be made writable for everyone:

chmod 777 tmp
chmod 777 var/tmp

Finally the ACLs must be copied to the new environment so they can be restored later:

cp /mnt/backup/acl /mnt/custom/tmp/acl

Chroot into the system

After the folder structure is created, the system directories must be mounted into the new environment:

mount -o bind /proc /mnt/custom/proc
mount -o bind /dev /mnt/custom/dev
mount -o bind /sys /mnt/custom/sys

Now the new environment can be accessed by using chroot:

chroot /mnt/custom /bin/bash

The ACLs must now be restored:

setfacl --restore=/tmp/acl

Now the Grub bootloader must be initialized and the config updated:

grub-install /dev/vda
update-grub

Now you should update the /etc/fstab file.
You will have to update the UUID of your /boot partition (/dev/vda1). You can find the new UUID with the following command:

blkid

The root and swap lines should already be correct. The three lines should look like this now:

/dev/mapper/vg_ucs-root / ext4 errors=remount-ro 0 1

UUID=<new uuid> /boot ext2 defaults 0 2

/dev/mapper/vg_ucs-swap_1 none swap sw 0 0

Now exit the chroot environment and reboot the system. The migrated system should now be ready to be used.
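A possible command sequence for this last step, assuming the mount points used throughout this article:

```
exit                                              # leave the chroot
umount /mnt/custom/proc /mnt/custom/dev /mnt/custom/sys
umount /mnt/backup /mnt/custom/boot /mnt/custom
reboot
```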
