
Tuesday, May 16, 2017

Proxmox - Abnormal LV Space

Proxmox Version
Proxmox 4.3 (running kernel: 4.4.19-1-pve)
Background
The Proxmox machine has one 500 GB and two 1 TB HDDs. I assumed the installer would set up LVM across all of them, but luckily a colleague noticed we were running low on disk space: Proxmox had only allocated the 500 GB disk, which is why /dev/pve/data was too small. Thanks to my colleague for catching this.
Solution
Step1: Use pvdisplay to check each physical disk's capacity and its VG Name, and confirm that every disk has been added to that VG.
# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda3
  VG Name               pve
  PV Size               465.64 GiB / not usable 4.01 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              119202
  Free PE               4052
  Allocated PE          115150
  PV UUID               Jnvsaq-Ys7Y-snQZ-1NbF-2xzb-P0jo-wnf0qU

  --- Physical volume ---
  PV Name               /dev/sdb1
  VG Name               pve
  PV Size               931.51 GiB / not usable 4.71 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              238466
  Free PE               238466
  Allocated PE          0
  PV UUID               1C3Bjt-a1Rd-weDb-jL70-5qRy-CGU7-LwpOBJ

  --- Physical volume ---
  PV Name               /dev/sdc1
  VG Name               pve
  PV Size               931.51 GiB / not usable 4.71 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              238466
  Free PE               238466
  Allocated PE          0
  PV UUID               xFO4Be-O0ZT-CHcV-hZXK-mwfj-sNiG-TP5p3c
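In this case all three PVs are already in the pve VG. If a newly installed disk did not show up in the pvdisplay output, it would first have to be initialized as a PV and added to the VG. A minimal sketch, assuming the new disk is /dev/sdd (hypothetical device name, not from this machine):

```shell
# Assumption: /dev/sdd is a new, empty disk not yet used by LVM.
# Initialize it as an LVM physical volume:
pvcreate /dev/sdd
# Add it to the existing "pve" volume group:
vgextend pve /dev/sdd
# Confirm the new PV and the enlarged VG:
pvdisplay
vgdisplay pve
```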
Step2: Use lsblk to check how the disks are actually being used. It is clear that only sda is in use; sdb1 and sdc1 carry nothing, and the pve-data thin pool (pve-data-tpool, backed by pve-data_tdata) is only 325.4G, all of it sitting on sda.
# lsblk
NAME                           MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                              8:0    0 465.8G  0 disk 
├─sda1                           8:1    0  1007K  0 part 
├─sda2                           8:2    0   127M  0 part 
└─sda3                           8:3    0 465.7G  0 part 
  ├─pve-root                   251:0    0 116.3G  0 lvm  /
  ├─pve-swap                   251:1    0     8G  0 lvm  [SWAP]
  ├─pve-data_tmeta             251:2    0    84M  0 lvm  
  │ └─pve-data-tpool           251:4    0 325.4G  0 lvm  
  │   ├─pve-data               251:5    0 325.4G  0 lvm  
  │   ├─pve-vm--10003--disk--1 251:6    0    60G  0 lvm  
  │   ├─pve-vm--10000--disk--1 251:7    0    60G  0 lvm  
  │   ├─pve-vm--10002--disk--1 251:8    0    60G  0 lvm  
  │   ├─pve-vm--10002--disk--2 251:9    0   100G  0 lvm  
  │   ├─pve-vm--10001--disk--1 251:10   0    60G  0 lvm  
  │   ├─pve-vm--10001--disk--2 251:11   0   100G  0 lvm  
  │   ├─pve-vm--10004--disk--1 251:12   0    40G  0 lvm  
  │   ├─pve-vm--10004--disk--2 251:13   0    50G  0 lvm  
  │   └─pve-vm--20002--disk--1 251:14   0    70G  0 lvm  
  └─pve-data_tdata             251:3    0 325.4G  0 lvm  
    └─pve-data-tpool           251:4    0 325.4G  0 lvm  
      ├─pve-data               251:5    0 325.4G  0 lvm  
      ├─pve-vm--10003--disk--1 251:6    0    60G  0 lvm  
      ├─pve-vm--10000--disk--1 251:7    0    60G  0 lvm  
      ├─pve-vm--10002--disk--1 251:8    0    60G  0 lvm  
      ├─pve-vm--10002--disk--2 251:9    0   100G  0 lvm  
      ├─pve-vm--10001--disk--1 251:10   0    60G  0 lvm  
      ├─pve-vm--10001--disk--2 251:11   0   100G  0 lvm  
      ├─pve-vm--10004--disk--1 251:12   0    40G  0 lvm  
      ├─pve-vm--10004--disk--2 251:13   0    50G  0 lvm  
      └─pve-vm--20002--disk--1 251:14   0    70G  0 lvm  
sdb                              8:16   0 931.5G  0 disk 
└─sdb1                           8:17   0 931.5G  0 part 
sdc                              8:32   0 931.5G  0 disk 
└─sdc1                           8:33   0 931.5G  0 part 
sr0                             11:0    1  1024M  0 rom 
Step3: vgdisplay confirms it: VG Size is 2.27 TiB, but only Alloc PE / Size 115150 / 449.80 GiB is allocated, while Free PE / Size 480984 / 1.83 TiB means 1.83 TiB is still unused.
# vgdisplay
  --- Volume group ---
  VG Name               pve
  System ID
  Format                lvm2
  Metadata Areas        3
  Metadata Sequence No  85
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                19
  Open LV               11
  Max PV                0
  Cur PV                3
  Act PV                3
  VG Size               2.27 TiB
  PE Size               4.00 MiB
  Total PE              596134
  Alloc PE / Size       115150 / 449.80 GiB
  Free  PE / Size       480984 / 1.83 TiB
  VG UUID               0AyuWc-Dhjx-5x4H-35O0-kG0r-pZbz-I2c6Q9
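The vgdisplay numbers are internally consistent: free space is simply Free PE multiplied by PE Size. A quick arithmetic check:

```shell
# Free PE = 480984, PE Size = 4 MiB
# 480984 extents * 4 MiB / 1024 / 1024 = TiB
awk 'BEGIN { printf "%.2f TiB\n", 480984 * 4 / 1024 / 1024 }'
# prints "1.83 TiB"
```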
Step4: Extend /dev/pve/data with lvextend.
# lvextend -L+1T /dev/pve/data
  Size of logical volume pve/data_tdata changed from 455.39 GiB (116580 extents) to 1.44 TiB (378724 extents).
  Logical volume data successfully resized
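-L+1T grows the LV by a fixed 1 TiB. As an alternative (a sketch, not what was run here), -l with a percentage can consume all remaining free extents in the VG in one shot:

```shell
# Grow the thin-pool data LV by every free extent left in the VG:
lvextend -l +100%FREE /dev/pve/data
# Note: for a thin pool this is enough; for a filesystem-backed LV
# the filesystem itself would also need growing (e.g. resize2fs for ext4).
```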
Step5: Verify the extension with vgdisplay and lsblk.
# vgdisplay
  --- Volume group ---
  VG Name               pve
  System ID
  Format                lvm2
  Metadata Areas        3
  Metadata Sequence No  90
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                19
  Open LV               11
  Max PV                0
  Cur PV                3
  Act PV                3
  VG Size               2.27 TiB
  PE Size               4.00 MiB
  Total PE              596134
  Alloc PE / Size       410574 / 1.57 TiB
  Free  PE / Size       185560 / 724.84 GiB
  VG UUID               0AyuWc-Dhjx-5x4H-35O0-kG0r-pZbz-I2c6Q9
# lsblk
NAME                           MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                              8:0    0 465.8G  0 disk
├─sda1                           8:1    0  1007K  0 part
├─sda2                           8:2    0   127M  0 part
└─sda3                           8:3    0 465.7G  0 part
  ├─pve-root                   251:0    0 116.3G  0 lvm  /
  ├─pve-swap                   251:1    0     8G  0 lvm  [SWAP]
  ├─pve-data_tmeta             251:2    0    84M  0 lvm
  │ └─pve-data-tpool           251:4    0   1.5T  0 lvm
  │   ├─pve-data               251:5    0 325.4G  0 lvm
  │   ├─pve-vm--10003--disk--1 251:6    0    60G  0 lvm
  │   ├─pve-vm--10000--disk--1 251:7    0    60G  0 lvm
  │   ├─pve-vm--10002--disk--1 251:8    0    60G  0 lvm
  │   ├─pve-vm--10002--disk--2 251:9    0   100G  0 lvm
  │   ├─pve-vm--10001--disk--1 251:10   0    60G  0 lvm
  │   ├─pve-vm--10001--disk--2 251:11   0   100G  0 lvm
  │   ├─pve-vm--10004--disk--1 251:12   0    40G  0 lvm
  │   ├─pve-vm--10004--disk--2 251:13   0    50G  0 lvm
  │   └─pve-vm--20002--disk--1 251:14   0    70G  0 lvm
  └─pve-data_tdata             251:3    0   1.5T  0 lvm
    └─pve-data-tpool           251:4    0   1.5T  0 lvm
      ├─pve-data               251:5    0 325.4G  0 lvm
      ├─pve-vm--10003--disk--1 251:6    0    60G  0 lvm
      ├─pve-vm--10000--disk--1 251:7    0    60G  0 lvm
      ├─pve-vm--10002--disk--1 251:8    0    60G  0 lvm
      ├─pve-vm--10002--disk--2 251:9    0   100G  0 lvm
      ├─pve-vm--10001--disk--1 251:10   0    60G  0 lvm
      ├─pve-vm--10001--disk--2 251:11   0   100G  0 lvm
      ├─pve-vm--10004--disk--1 251:12   0    40G  0 lvm
      ├─pve-vm--10004--disk--2 251:13   0    50G  0 lvm
      └─pve-vm--20002--disk--1 251:14   0    70G  0 lvm
sdb                              8:16   0 931.5G  0 disk
└─sdb1                           8:17   0 931.5G  0 part
  └─pve-data_tdata             251:3    0   1.5T  0 lvm
    └─pve-data-tpool           251:4    0   1.5T  0 lvm
      ├─pve-data               251:5    0 325.4G  0 lvm
      ├─pve-vm--10003--disk--1 251:6    0    60G  0 lvm
      ├─pve-vm--10000--disk--1 251:7    0    60G  0 lvm
      ├─pve-vm--10002--disk--1 251:8    0    60G  0 lvm
      ├─pve-vm--10002--disk--2 251:9    0   100G  0 lvm
      ├─pve-vm--10001--disk--1 251:10   0    60G  0 lvm
      ├─pve-vm--10001--disk--2 251:11   0   100G  0 lvm
      ├─pve-vm--10004--disk--1 251:12   0    40G  0 lvm
      ├─pve-vm--10004--disk--2 251:13   0    50G  0 lvm
      └─pve-vm--20002--disk--1 251:14   0    70G  0 lvm
sdc                              8:32   0 931.5G  0 disk
└─sdc1                           8:33   0 931.5G  0 part
  └─pve-data_tdata             251:3    0   1.5T  0 lvm
    └─pve-data-tpool           251:4    0   1.5T  0 lvm
      ├─pve-data               251:5    0 325.4G  0 lvm
      ├─pve-vm--10003--disk--1 251:6    0    60G  0 lvm
      ├─pve-vm--10000--disk--1 251:7    0    60G  0 lvm
      ├─pve-vm--10002--disk--1 251:8    0    60G  0 lvm
      ├─pve-vm--10002--disk--2 251:9    0   100G  0 lvm
      ├─pve-vm--10001--disk--1 251:10   0    60G  0 lvm
      ├─pve-vm--10001--disk--2 251:11   0   100G  0 lvm
      ├─pve-vm--10004--disk--1 251:12   0    40G  0 lvm
      ├─pve-vm--10004--disk--2 251:13   0    50G  0 lvm
      └─pve-vm--20002--disk--1 251:14   0    70G  0 lvm
sr0                             11:0    1  1024M  0 rom
Alloc PE / Size has grown to 410574 / 1.57 TiB, and both sdb1 and sdc1 now carry data.

Monday, April 24, 2017

Proxmox - Migrating a VM to a New Proxmox Host

Proxmox Version
Proxmox 4.3 
Commands

Use qm list to look up VM IDs.
proxmox-1:/etc/pve# qm list
      VMID NAME                 STATUS     MEM(MB)    BOOTDISK(GB) PID
     10000 machine_name_1       running    1024              60.00 2305
     10001 machine_name_2       running    8192              60.00 2079
     10002 machine_name_3       running    4096              60.00 2114
     10003 machine_name_4       running    4096              60.00 2148
     10004 machine_name_5       running    4096              40.00 2263
     10005 machine_name_6       running    512               40.00 3720
     20002 machine_name_7       running    4096              70.00 2189
Uploaded ISO files are stored in /var/lib/vz/template/iso
proxmox-1:/var/lib/vz/template/iso# ls
CentOS-7-x86_64-Minimal-1511.iso
VM config files are stored in /etc/pve/nodes/tpe-proxmox-1/qemu-server
proxmox-1:/etc/pve/nodes/tpe-proxmox-1/qemu-server# ls
10000.conf  10001.conf  10002.conf  10003.conf  10004.conf  10005.conf  20002.conf
VM disks live at /dev/pve/vm-VMID-disk-1; use lvdisplay to inspect them.
proxmox-2:/etc/pve# lvdisplay
  --- Logical volume ---
  LV Path                /dev/pve/swap
  LV Name                swap
  VG Name                pve
  LV UUID                73Qx1N-Bxw1-vv1c-zf3q-bkWr-HbTM-caJ71E
  LV Write Access        read/write
  LV Creation host, time proxmox, 2017-04-20 19:22:44 +0800
  LV Status              available
  # open                 2
  LV Size                8.00 GiB
  Current LE             2048
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           251:1

  --- Logical volume ---
  LV Path                /dev/pve/root
  LV Name                root
  VG Name                pve
  LV UUID                3ZaB0Y-ffOi-6wxA-thlb-w19u-qX2E-rPWbwv
  LV Write Access        read/write
  LV Creation host, time proxmox, 2017-04-20 19:22:44 +0800
  LV Status              available
  # open                 1
  LV Size                96.00 GiB
  Current LE             24576
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           251:0

  --- Logical volume ---
  LV Name                data
  VG Name                pve
  LV UUID                e0xE6i-7hpe-KlQ3-BCgg-YVyx-mSIX-m9VveW
  LV Write Access        read/write
  LV Creation host, time proxmox, 2017-04-20 19:22:45 +0800
  LV Pool metadata       data_tmeta
  LV Pool data           data_tdata
  LV Status              available
  # open                 2
  LV Size                10.80 TiB
  Allocated pool data    0.02%
  Allocated metadata     0.44%
  Current LE             2830400
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           251:4

  --- Logical volume ---
  LV Path                /dev/pve/vm-10005-disk-1
  LV Name                vm-10005-disk-1
  VG Name                pve
  LV UUID                JekYQS-ZJKd-vPFE-B8TX-jOIN-hkj2-5jQjeG
  LV Write Access        read/write
  LV Creation host, time proxmox-2, 2017-04-24 17:43:36 +0800
  LV Pool name           data
  LV Status              available
  # open                 0
  LV Size                40.00 GiB
  Mapped size            5.78%
  Current LE             10240
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           251:6

Migration Steps
First shut down the VM you want to migrate, create /mnt/backup on proxmox-1, then run
vzdump --mode stop --dumpdir /mnt/backup --compress lzo VMID 
  • --mode stop : makes a more complete backup, but takes longer to run
  • --dumpdir : the directory to write the dump file to
  • --compress : compress the dump; currently gzip or lzo, default is 0 (no compression)
  • --bwlimit : limit the write I/O bandwidth (I didn't use this)
proxmox-1:~# vzdump --mode stop --dumpdir /mnt/backup/ --compress lzo 10005
INFO: starting new backup job: vzdump 10005 --dumpdir /mnt/backup/ --compress lzo --mode stop
INFO: Starting Backup of VM 10005 (qemu)
INFO: status = stopped
INFO: update VM 10005: -lock backup
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: VM Name: kemp-test
INFO: creating archive '/mnt/backup/vzdump-qemu-10005-2017_04_24-17_35_01.vma.lzo'
INFO: starting kvm to execute backup task
INFO: started backup task 'a4236601-35f5-4818-9bb3-f77dab8bbc3c'
INFO: status: 2% (1259012096/42949672960), sparse 2% (1080279040), duration 3, 419/59 MB/s
INFO: status: 4% (1735262208/42949672960), sparse 3% (1421692928), duration 6, 158/44 MB/s
INFO: status: 6% (2583560192/42949672960), sparse 4% (1978941440), duration 11, 169/58 MB/s
INFO: status: 27% (11932925952/42949672960), sparse 26% (11321421824), duration 14, 3116/2 MB/s
INFO: status: 28% (12094865408/42949672960), sparse 26% (11323588608), duration 17, 53/53 MB/s
INFO: status: 33% (14232780800/42949672960), sparse 30% (13294813184), duration 21, 534/41 MB/s
INFO: status: 51% (22285189120/42949672960), sparse 49% (21312892928), duration 24, 2684/11 MB/s
INFO: status: 52% (22508208128/42949672960), sparse 49% (21313822720), duration 27, 74/74 MB/s
INFO: status: 53% (22773825536/42949672960), sparse 49% (21314109440), duration 31, 66/66 MB/s
INFO: status: 58% (24997265408/42949672960), sparse 54% (23424589824), duration 34, 741/37 MB/s
INFO: status: 75% (32576110592/42949672960), sparse 72% (31003435008), duration 37, 2526/0 MB/s
INFO: status: 76% (32646955008/42949672960), sparse 72% (31003824128), duration 43, 11/11 MB/s
INFO: status: 77% (33113833472/42949672960), sparse 72% (31006666752), duration 51, 58/58 MB/s
INFO: status: 82% (35597189120/42949672960), sparse 77% (33232822272), duration 55, 620/64 MB/s
INFO: status: 100% (42949672960/42949672960), sparse 94% (40585306112), duration 58, 2450/0 MB/s
INFO: transferred 42949 MB in 58 seconds (740 MB/s)
INFO: stopping kvm after backup task
INFO: archive file size: 1.19GB
INFO: Finished Backup of VM 10005 (00:01:01)
INFO: Backup job finished successfully
Once that finishes, scp the file under /mnt/backup to proxmox-2, then run qmrestore vzdump-qemu-VMID-date.vma.lzo VMID
proxmox-2:~# qmrestore vzdump-qemu-10005-2017_04_24-17_35_01.vma.lzo 10005
restore vma archive: lzop -d -c /root/vzdump-qemu-10005-2017_04_24-17_35_01.vma.lzo|vma extract -v -r /var/tmp/vzdumptmp135251.fifo - /var/tmp/vzdumptmp135251
CFG: size: 359 name: qemu-server.conf
DEV: dev_id=1 size: 42949672960 devname: drive-virtio0
CTIME: Mon Apr 24 17:35:02 2017
  Logical volume "vm-10005-disk-1" created.
new volume ID is 'local-lvm:vm-10005-disk-1'
map 'drive-virtio0' to '/dev/pve/vm-10005-disk-1' (write zeros = 0)
progress 1% (read 429522944 bytes, duration 1 sec)
progress 2% (read 859045888 bytes, duration 1 sec)
progress 3% (read 1288503296 bytes, duration 1 sec)
progress 4% (read 1718026240 bytes, duration 2 sec)
progress 5% (read 2147483648 bytes, duration 3 sec)
progress 6% (read 2577006592 bytes, duration 3 sec)
progress 7% (read 3006529536 bytes, duration 3 sec)
progress 8% (read 3435986944 bytes, duration 3 sec)
progress 9% (read 3865509888 bytes, duration 3 sec)
progress 10% (read 4294967296 bytes, duration 3 sec)
progress 11% (read 4724490240 bytes, duration 3 sec)
progress 12% (read 5154013184 bytes, duration 3 sec)
progress 13% (read 5583470592 bytes, duration 3 sec)
progress 14% (read 6012993536 bytes, duration 3 sec)
progress 15% (read 6442450944 bytes, duration 3 sec)
progress 16% (read 6871973888 bytes, duration 3 sec)
progress 17% (read 7301496832 bytes, duration 3 sec)
progress 18% (read 7730954240 bytes, duration 3 sec)
progress 19% (read 8160477184 bytes, duration 3 sec)
progress 20% (read 8589934592 bytes, duration 3 sec)
progress 21% (read 9019457536 bytes, duration 3 sec)
progress 22% (read 9448980480 bytes, duration 3 sec)
progress 23% (read 9878437888 bytes, duration 3 sec)
progress 24% (read 10307960832 bytes, duration 3 sec)
progress 25% (read 10737418240 bytes, duration 3 sec)
progress 26% (read 11166941184 bytes, duration 3 sec)
progress 27% (read 11596464128 bytes, duration 3 sec)
progress 28% (read 12025921536 bytes, duration 4 sec)
progress 29% (read 12455444480 bytes, duration 5 sec)
progress 30% (read 12884901888 bytes, duration 5 sec)
progress 31% (read 13314424832 bytes, duration 5 sec)
progress 32% (read 13743947776 bytes, duration 5 sec)
progress 33% (read 14173405184 bytes, duration 5 sec)
progress 34% (read 14602928128 bytes, duration 5 sec)
progress 35% (read 15032385536 bytes, duration 5 sec)
progress 36% (read 15461908480 bytes, duration 5 sec)
progress 37% (read 15891431424 bytes, duration 5 sec)
progress 38% (read 16320888832 bytes, duration 5 sec)
progress 39% (read 16750411776 bytes, duration 5 sec)
progress 40% (read 17179869184 bytes, duration 5 sec)
progress 41% (read 17609392128 bytes, duration 5 sec)
progress 42% (read 18038915072 bytes, duration 6 sec)
progress 43% (read 18468372480 bytes, duration 6 sec)
progress 44% (read 18897895424 bytes, duration 6 sec)
progress 45% (read 19327352832 bytes, duration 6 sec)
progress 46% (read 19756875776 bytes, duration 6 sec)
progress 47% (read 20186398720 bytes, duration 6 sec)
progress 48% (read 20615856128 bytes, duration 6 sec)
progress 49% (read 21045379072 bytes, duration 6 sec)
progress 50% (read 21474836480 bytes, duration 6 sec)
progress 51% (read 21904359424 bytes, duration 6 sec)
progress 52% (read 22333882368 bytes, duration 6 sec)
progress 53% (read 22763339776 bytes, duration 8 sec)
progress 54% (read 23192862720 bytes, duration 8 sec)
progress 55% (read 23622320128 bytes, duration 8 sec)
progress 56% (read 24051843072 bytes, duration 8 sec)
progress 57% (read 24481366016 bytes, duration 8 sec)
progress 58% (read 24910823424 bytes, duration 8 sec)
progress 59% (read 25340346368 bytes, duration 9 sec)
progress 60% (read 25769803776 bytes, duration 9 sec)
progress 61% (read 26199326720 bytes, duration 9 sec)
progress 62% (read 26628849664 bytes, duration 9 sec)
progress 63% (read 27058307072 bytes, duration 9 sec)
progress 64% (read 27487830016 bytes, duration 9 sec)
progress 65% (read 27917287424 bytes, duration 9 sec)
progress 66% (read 28346810368 bytes, duration 9 sec)
progress 67% (read 28776333312 bytes, duration 9 sec)
progress 68% (read 29205790720 bytes, duration 9 sec)
progress 69% (read 29635313664 bytes, duration 9 sec)
progress 70% (read 30064771072 bytes, duration 9 sec)
bootdisk: virtio0
progress 71% (read 30494294016 bytes, duration 9 sec)
progress 72% (read 30923816960 bytes, duration 9 sec)
progress 73% (read 31353274368 bytes, duration 9 sec)
progress 74% (read 31782797312 bytes, duration 9 sec)
progress 75% (read 32212254720 bytes, duration 9 sec)
progress 76% (read 32641777664 bytes, duration 9 sec)
progress 77% (read 33071300608 bytes, duration 11 sec)
progress 78% (read 33500758016 bytes, duration 13 sec)
progress 79% (read 33930280960 bytes, duration 13 sec)
progress 80% (read 34359738368 bytes, duration 13 sec)
progress 81% (read 34789261312 bytes, duration 13 sec)
progress 82% (read 35218784256 bytes, duration 13 sec)
progress 83% (read 35648241664 bytes, duration 13 sec)
progress 84% (read 36077764608 bytes, duration 13 sec)
progress 85% (read 36507222016 bytes, duration 13 sec)
progress 86% (read 36936744960 bytes, duration 13 sec)
progress 87% (read 37366267904 bytes, duration 13 sec)
progress 88% (read 37795725312 bytes, duration 13 sec)
progress 89% (read 38225248256 bytes, duration 13 sec)
progress 90% (read 38654705664 bytes, duration 13 sec)
progress 91% (read 39084228608 bytes, duration 13 sec)
progress 92% (read 39513751552 bytes, duration 13 sec)
progress 93% (read 39943208960 bytes, duration 13 sec)
progress 94% (read 40372731904 bytes, duration 13 sec)
progress 95% (read 40802189312 bytes, duration 13 sec)
progress 96% (read 41231712256 bytes, duration 13 sec)
progress 97% (read 41661235200 bytes, duration 13 sec)
progress 98% (read 42090692608 bytes, duration 13 sec)
progress 99% (read 42520215552 bytes, duration 13 sec)
progress 100% (read 42949672960 bytes, duration 13 sec)
total bytes read 42949672960, sparse bytes 40585306112 (94.5%)
space reduction due to 4K zero blocks 0.581%
And that's it, done.
BTW, this approach suits Proxmox hosts that are not part of a cluster.
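The whole migration above can be sketched as one small script, assuming root SSH access from proxmox-1 to proxmox-2 (the hostnames, VMID, and paths are illustrative):

```shell
#!/bin/sh
# Illustrative sketch of the backup -> copy -> restore flow above.
VMID=10005
DUMPDIR=/mnt/backup

# 1. On proxmox-1: stop-mode backup, lzo-compressed
vzdump --mode stop --dumpdir "$DUMPDIR" --compress lzo "$VMID"

# 2. Copy the newest dump for this VMID to the target host
ARCHIVE=$(ls -t "$DUMPDIR"/vzdump-qemu-"$VMID"-*.vma.lzo | head -n 1)
scp "$ARCHIVE" root@proxmox-2:/root/

# 3. On proxmox-2: restore under the same VMID
ssh root@proxmox-2 "qmrestore /root/$(basename "$ARCHIVE") $VMID"
```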

Wednesday, April 19, 2017

Proxmox - Unable to find LVM volume pve/root

Environment
Proxmox Version: 4.3
Kernel: 4.4.19-1-pve
Error Message
Volume group "pve" not found
Skipping Volume group pve
Unable to find LVM volume pve/root
Solution
On the GRUB boot menu, press "e" and append "rootdelay=10" to the linux line:
-> linux /vmlinuz-2.6.32-17-pve root=/dev/mapper/pve-root ro rootdelay=10 quiet
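Editing the GRUB menu this way only affects a single boot. To make rootdelay permanent on a Debian-based system like Proxmox, the usual approach (a sketch, not from the original post) is to edit /etc/default/grub and regenerate the config:

```shell
# In /etc/default/grub, add rootdelay=10 to the default kernel
# command line, e.g.:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet rootdelay=10"
# Then regenerate /boot/grub/grub.cfg:
update-grub
```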