This article does not cover recording ZFS or array configurations. If you also need to restore a ZFS or array configuration, do that first.
Record Information#
Datastore Configuration#
Before restoring, you need to record some basic configurations of the original PBS.
root@pbs:/etc/proxmox-backup# cat datastore.cfg
datastore: storage-3TiB
    gc-schedule sat *-1..7 02:00
    path /mnt/datastore/storage-3TiB
Of course, if you did not record this configuration, you can use this format as a template to recreate it.
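If the old system is still bootable, it is easiest to archive the whole /etc/proxmox-backup directory instead of copying entries by hand. A minimal sketch (the temporary stand-in directory and the archive path are illustrative; on a real PBS host you would point tar at /etc/proxmox-backup itself):

```shell
# Stand-in for /etc/proxmox-backup so this sketch is self-contained;
# on a real PBS host, skip this setup and archive /etc/proxmox-backup directly.
CONF_PARENT=$(mktemp -d)
mkdir -p "$CONF_PARENT/proxmox-backup"
printf 'datastore: storage-3TiB\n' > "$CONF_PARENT/proxmox-backup/datastore.cfg"

# Archive the config directory; keep the tarball somewhere off the machine.
tar czf /tmp/pbs-config-backup.tar.gz -C "$CONF_PARENT" proxmox-backup

# List the archive contents to confirm the backup.
tar tzf /tmp/pbs-config-backup.tar.gz
```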
Systemd Dynamic Mounting#
Proxmox mounts its data disks with systemd .mount units. /etc/fstab and systemd .mount files are two different ways to mount disks. Both can mount disks automatically at boot, but there are some key differences:
- Configuration method: /etc/fstab is a simple text file where each line describes one mount point: device path, mount point, filesystem type, and mount options. A systemd .mount file uses the INI format and can contain more detailed configuration, such as dependencies and timeout settings.
- Error handling: if a mount point in /etc/fstab fails (for example, a missing device or a filesystem error), the system may hang during boot until the problem is fixed manually. systemd, by contrast, will try to continue starting other services even if a .mount unit fails.
- Dynamic mounting: systemd supports mounting on demand, meaning the disk is only mounted when the mount point is first accessed. This can speed up system startup in some situations.
- Dependency management: a systemd .mount file can declare dependencies, such as requiring a service to start before the disk is mounted; /etc/fstab does not support this.
In summary, /etc/fstab is a traditional and simple mounting configuration method, while systemd provides more flexibility and control. However, this also means that systemd configurations can be more complex and require a deeper understanding.
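To illustrate the on-demand behavior mentioned above: a .mount unit can be paired with a .automount unit, so the filesystem is mounted only when the path is first accessed. A sketch using the same paths as this article (PBS does not generate this by default; it is shown purely as an example):

```ini
# mnt-datastore-storage\x2d3TiB.automount (illustrative)
[Unit]
Description=Automount datastore 'storage-3TiB'

[Automount]
Where=/mnt/datastore/storage-3TiB

[Install]
WantedBy=multi-user.target
```

With such a pair, you would enable the .automount unit instead of starting the .mount unit directly.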
root@pbs:/etc/systemd/system# ls
chronyd.service multi-user.target.wants sockets.target.wants zed.service
getty.target.wants network-online.target.wants sshd.service zfs-import.target.wants
iscsi.service remote-fs.target.wants sysinit.target.wants zfs.target.wants
'mnt-datastore-storage\x2d3TiB.mount' smartd.service timers.target.wants zfs-volumes.target.wants
Here we can see that Proxmox's dynamically mounted data disk unit is located at /etc/systemd/system/'mnt-datastore-storage\x2d3TiB.mount'
View and record the configuration.
root@pbs:/etc/systemd/system# cat 'mnt-datastore-storage\x2d3TiB.mount'
[Install]
WantedBy=multi-user.target
[Unit]
Description=Mount datatstore 'storage-3TiB' under '/mnt/datastore/storage-3TiB'
[Mount]
Options=defaults
Type=ext4
What=/dev/disk/by-uuid/78064deb-ac70-4a06-bc92-180503ef2d8c
Where=/mnt/datastore/storage-3TiB
Restore Configuration#
You can refer to Restore Data Disk in PVE.
Assuming the PBS data disk itself is not damaged, restore the configuration based on the records above. First, recreate the systemd mount unit, and note that the disk UUID may be different on the new system.
# First, check the location of the disk
fdisk -l
# Then find the corresponding UUID
lsblk -fs
Record the UUID, then modify or create the /etc/systemd/system/'mnt-datastore-storage\x2d3TiB.mount' file with it.
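The UUID swap can also be scripted. A self-contained sketch (the unit file is created in a temporary location here; on the real system you would edit the unit under /etc/systemd/system, and the new UUID shown is a placeholder):

```shell
# Recreate the mount unit in a temp file (stand-in for the real path
# /etc/systemd/system/'mnt-datastore-storage\x2d3TiB.mount').
UNIT=$(mktemp)
cat > "$UNIT" <<'EOF'
[Install]
WantedBy=multi-user.target
[Unit]
Description=Mount datastore 'storage-3TiB' under '/mnt/datastore/storage-3TiB'
[Mount]
Options=defaults
Type=ext4
What=/dev/disk/by-uuid/78064deb-ac70-4a06-bc92-180503ef2d8c
Where=/mnt/datastore/storage-3TiB
EOF

# Placeholder UUID -- replace with the value reported by lsblk -fs.
NEW_UUID="00000000-0000-0000-0000-000000000000"
sed -i "s|^What=.*|What=/dev/disk/by-uuid/$NEW_UUID|" "$UNIT"

grep '^What=' "$UNIT"
```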
Then execute:
# Reload the configuration
systemctl daemon-reload
# Mount immediately
systemctl start 'mnt-datastore-storage\x2d3TiB.mount'
# Enable auto-mounting after successful mounting
systemctl enable 'mnt-datastore-storage\x2d3TiB.mount'
# View all mounts
systemctl list-unit-files -t mount
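Note the unusual unit name: systemd derives it from the mount path by dropping the leading slash, escaping hyphens inside path components as \x2d, and turning the remaining slashes into hyphens (systemd-escape --path --suffix=mount does this for you). A sketch of the same transformation in plain shell (it only handles slashes and hyphens; systemd-escape implements the full escaping rules):

```shell
MOUNTPOINT=/mnt/datastore/storage-3TiB

# Drop the leading '/', escape '-' as \x2d, then map '/' to '-'.
UNIT_NAME=$(printf '%s' "$MOUNTPOINT" \
  | sed 's|^/||; s|-|\\x2d|g; s|/|-|g').mount

echo "$UNIT_NAME"   # mnt-datastore-storage\x2d3TiB.mount
```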
Check /etc/mtab
cat /etc/mtab
...
/dev/sdb1 /mnt/datastore/storage-3TiB ext4 rw,relatime 0 0
...
The mount has taken effect. Finally, recreate the datastore entry so it appears in the web GUI.
nano /etc/proxmox-backup/datastore.cfg
datastore: storage-3TiB
    gc-schedule sat *-1..7 02:00
    path /mnt/datastore/storage-3TiB
Refresh the page. At this point almost everything has been restored to match the original system, but you still need to reconfigure the backup jobs in the web GUI.