Hello everyone,
we have a 2-node cluster without HA, named pve1 and pve2. pve1 replicates VMs to pve2. Both systems ran well until last week.
Both systems run Proxmox VE 6.0-2, pve-manager 6.0-4, pve-kernel 6.0-5.
Now we get the following error messages
on pve1:
Nov 29 15:31:06 pve1 corosync[2244]: [TOTEM ] A new membership (1:4014312) was formed. Members
Nov 29 15:31:06 pve1 corosync[2244]: [CPG ] downlist left_list: 0 received
Nov 29 15:31:06 pve1 corosync[2244]: [QUORUM] Members[1]: 1
Nov 29 15:31:06 pve1 corosync[2244]: [MAIN ] Completed service synchronization, ready to provide service.
Nov 29 15:31:07 pve1 pveproxy[2823]: Cluster not quorate - extending auth key lifetime!
Nov 29 15:31:07 pve1 pveproxy[2823]: Cluster not quorate - extending auth key lifetime!
Nov 29 15:31:07 pve1 pveproxy[2823]: Cluster not quorate - extending auth key lifetime!
Nov 29 15:31:07 pve1 pveproxy[2823]: Cluster not quorate - extending auth key lifetime!
Nov 29 15:31:07 pve1 pveproxy[2823]: Cluster not quorate - extending auth key lifetime!
Nov 29 15:31:07 pve1 pveproxy[2823]: Cluster not quorate - extending auth key lifetime!
Nov 29 15:31:07 pve1 pveproxy[2823]: Cluster not quorate - extending auth key lifetime!
Nov 29 15:31:07 pve1 pvesr[5238]: trying to acquire cfs lock 'file-replication_cfg' ...
Nov 29 15:31:07 pve1 pveproxy[2823]: Cluster not quorate - extending auth key lifetime!
Nov 29 15:31:07 pve1 pveproxy[2823]: Cluster not quorate - extending auth key lifetime!
Nov 29 15:31:07 pve1 pveproxy[2823]: Cluster not quorate - extending auth key lifetime!
pve1 is marked green in the tree; pve2 is marked with a red cross.
When running pvecm status, we get this reply on both machines:
Quorum information
------------------
Date: Fri Nov 29 15:35:07 2019
Quorum provider: corosync_votequorum
Nodes: 1
Node ID: 0x00000002
Ring ID: 2/332
Quorate: No
Votequorum information
----------------------
Expected votes: 2
Highest expected: 2
Total votes: 1
Quorum: 2 Activity blocked
Flags:
Membership information
----------------------
Nodeid Votes Name
0x00000002 1 192.168.9.2 (local)
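Side note on the output above: "Activity blocked" means the cluster has lost quorum and pmxcfs has gone read-only. If the peer node is genuinely unreachable and write access to /etc/pve is needed right away, the expected vote count can be lowered as a temporary, runtime-only workaround (a stopgap sketch, not a fix for the underlying corosync problem; the setting does not survive a corosync restart):

pvecm expected 1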
Any idea what happened?
Thank you very much for any help.
Greetings
Nov 29 22:28:09 pve-vm09 systemd[1]: pvesr.service: Failed with result 'exit-code'.
-- Subject: Unit failed
-- Defined-By: systemd
-- Support: https://www.debian.org/support
-- The unit pvesr.service has entered the 'failed' state with result 'exit-code'.
Nov 29 22:28:09 pve-vm09 systemd[1]: Failed to start Proxmox VE replication runner.
-- Subject: A start job for unit pvesr.service has failed
-- Defined-By: systemd
-- Support: https://www.debian.org/support
-- A start job for unit pvesr.service has finished with a failure.
-- The job identifier is 7625264 and the job result is failed.
Nov 29 22:28:14 pve-vm09 pveproxy[24592]: Cluster not quorate - extending auth key lifetime!
Nov 29 22:28:14 pve-vm09 pvedaemon[63575]: Cluster not quorate - extending auth key lifetime!
Nov 29 22:28:14 pve-vm09 pveproxy[24594]: Cluster not quorate - extending auth key lifetime!
Nov 29 22:28:14 pve-vm09 pveproxy[24593]: Cluster not quorate - extending auth key lifetime!
Nov 29 22:28:26 pve-vm09 pveproxy[24593]: Cluster not quorate - extending auth key lifetime!
Nov 29 22:28:26 pve-vm09 pvedaemon[61568]: Cluster not quorate - extending auth key lifetime!
Nov 29 22:28:26 pve-vm09 pveproxy[24594]: Cluster not quorate - extending auth key lifetime!
Nov 29 22:28:26 pve-vm09 pveproxy[24593]: Cluster not quorate - extending auth key lifetime!
Nov 29 22:28:26 pve-vm09 pveproxy[24593]: Cluster not quorate - extending auth key lifetime!
Nov 29 22:28:26 pve-vm09 pvedaemon[62769]: Cluster not quorate - extending auth key lifetime!
Nov 29 22:28:34 pve-vm09 pveproxy[24594]: Cluster not quorate - extending auth key lifetime!
Nov 29 22:28:34 pve-vm09 pvedaemon[63575]: Cluster not quorate - extending auth key lifetime!
Nov 29 22:28:34 pve-vm09 pveproxy[24593]: Cluster not quorate - extending auth key lifetime!
Nov 29 22:28:34 pve-vm09 pveproxy[24592]: Cluster not quorate - extending auth key lifetime!
Nov 29 22:28:41 pve-vm09 pveproxy[24592]: Cluster not quorate - extending auth key lifetime!
Nov 29 22:28:42 pve-vm09 pveproxy[24593]: Cluster not quorate - extending auth key lifetime!
Nov 29 22:28:42 pve-vm09 pvedaemon[62769]: Cluster not quorate - extending auth key lifetime!
Nov 29 22:28:42 pve-vm09 pveproxy[24594]: Cluster not quorate - extending auth key lifetime!
And because of this (I think), backups are failing.
INFO: Starting Backup of VM 100 (qemu)
INFO: Backup started at 2019-11-29 22:00:02
INFO: status = running
INFO: unable to open file '/etc/pve/nodes/pve-vm08/qemu-server/100.conf.tmp.1837' - Permission denied
INFO: update VM 100: -lock backup
ERROR: Backup of VM 100 failed - command 'qm set 100 --lock backup' failed: exit code 2
INFO: Failed at 2019-11-29 22:00:03
INFO: Starting Backup of VM 101 (qemu)
INFO: Backup started at 2019-11-29 22:00:04
INFO: status = running
INFO: unable to open file '/etc/pve/nodes/pve-vm08/qemu-server/101.conf.tmp.1841' - Permission denied
INFO: update VM 101: -lock backup
ERROR: Backup of VM 101 failed - command 'qm set 101 --lock backup' failed: exit code 2
INFO: Failed at 2019-11-29 22:00:05
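The "Permission denied" errors on /etc/pve are consistent with pmxcfs being read-only while the cluster is not quorate. A quick way to confirm this (the test file name below is just an example):

pvecm status | grep -i quorate
touch /etc/pve/.writetest && rm /etc/pve/.writetest    # fails with "Permission denied" while not quorate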
Additional information:
- Both systems have 2 NICs (one NIC as business link (vmbr0), one NIC for the cluster (vmbr1))
- The cluster link is a separate VLAN network connection (port-based VLAN on a managed switch, without VLAN IDs) in a different subnet
- Both connections are up and working fine
- The shell via the web UI will not open; PuTTY works fine. Error in the cluster log:
end task UPID: pve1:00005C80:0ED492B9:5DE4C868:vncshell::root@pam: command '/usr/bin/termproxy 5900 --path /nodes/pve1 --perm Sys.Console -- /bin/login -f root' failed: exit code 4
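The termproxy failure may well be another symptom of the missing quorum rather than a network problem. Independently of that, the dedicated cluster link can be sanity-checked from either node with standard corosync tooling; a minimal sketch (replace the placeholder with the peer's actual cluster IP):

corosync-cfgtool -s                    # link/ring status as corosync sees it
ping -c 3 <peer cluster IP on vmbr1>   # raw reachability over the cluster network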
Hope this information is OK. Feel free to ask for further information.
Thank you very much for your efforts.
Best regards,
@HKRH
Please upgrade to the current version (the version of corosync/knet you are running had quite some bugs!), which will also restart corosync and pve-cluster and hopefully resync the cluster file system. If you still experience issues afterwards, a full log ("journalctl -u corosync -u pve-cluster --since XXX", where XXX is the time of the upgrade) might shed some light.
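Concretely, the upgrade and the follow-up log check would look something like this (the timestamp is a placeholder for the actual upgrade time):

apt update
apt dist-upgrade
journalctl -u corosync -u pve-cluster --since "2019-12-07 10:00"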
Thank you,
I will do the update next weekend and will let you know whether it works.
Best regards,
I have 3 nodes with Ceph storage and an issue with the shell; please help.
The shell shows this error:
Undefined
Code: 1006
pveproxy[24593]: Cluster not quorate - extending auth key lifetime!
pvedaemon[62769]: Cluster not quorate - extending auth key lifetime!
root@pve2:~# pveversion -v
proxmox-ve: 8.3.0 (running kernel: 6.8.12-8-pve)
pve-manager: 8.3.3 (running version: 8.3.3/f157a38b211595d6)
proxmox-kernel-helper: 8.1.0
proxmox-kernel-6.8: 6.8.12-8
proxmox-kernel-6.8.12-8-pve-signed: 6.8.12-8
proxmox-kernel-6.8.12-4-pve-signed: 6.8.12-4
ceph: 18.2.4-pve3
ceph-fuse: 18.2.4-pve3
corosync: 3.1.7-pve3
criu: 3.17.1-2+deb12u1
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx11
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.1
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.4
libpve-access-control: 8.2.0
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.0.10
libpve-cluster-perl: 8.0.10
libpve-common-perl: 8.2.9
libpve-guest-common-perl: 5.1.6
libpve-http-server-perl: 5.1.2
libpve-network-perl: 0.10.0
libpve-rs-perl: 0.9.1
libpve-storage-perl: 8.3.3
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.5.0-1
proxmox-backup-client: 3.3.2-1
proxmox-backup-file-restore: 3.3.2-2
proxmox-firewall: 0.6.0
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.3.1
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.7
proxmox-widget-toolkit: 4.3.4
pve-cluster: 8.0.10
pve-container: 5.2.3
pve-docs: 8.3.1
pve-edk2-firmware: 4.2023.08-4
pve-esxi-import-tools: 0.7.2
pve-firewall: 5.1.0
pve-firmware: 3.14-3
pve-ha-manager: 4.0.6
pve-i18n: 3.3.3
pve-qemu-kvm: 9.0.2-5
pve-xtermjs: 5.3.0-3
qemu-server: 8.3.6
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.7-pve1
root@pve2:~# systemctl status pve-cluster.service corosync.service
● pve-cluster.service - The Proxmox VE cluster filesystem
Loaded: loaded (/lib/systemd/system/pve-cluster.service; enabled; preset: enabled)
Active: active (running) since Fri 2025-02-07 19:24:53 IST; 2 days ago
Main PID: 1946 (pmxcfs)
Tasks: 11 (limit: 308962)
Memory: 53.2M
CPU: 2min 59.680s
CGroup: /system.slice/pve-cluster.service
└─1946 /usr/bin/pmxcfs
Feb 10 04:27:07 pve2 pmxcfs[1946]: [dcdb] notice: data verification successful
Feb 10 05:27:07 pve2 pmxcfs[1946]: [dcdb] notice: data verification successful
Feb 10 06:27:07 pve2 pmxcfs[1946]: [dcdb] notice: data verification successful
Feb 10 07:27:07 pve2 pmxcfs[1946]: [dcdb] notice: data verification successful
Feb 10 08:27:07 pve2 pmxcfs[1946]: [dcdb] notice: data verification successful
Feb 10 09:20:44 pve2 pmxcfs[1946]: [status] notice: received log
Feb 10 09:20:49 pve2 pmxcfs[1946]: [status] notice: received log
Feb 10 09:20:59 pve2 pmxcfs[1946]: [status] notice: received log
Feb 10 09:27:07 pve2 pmxcfs[1946]: [dcdb] notice: data verification successful
Feb 10 09:35:38 pve2 pmxcfs[1946]: [status] notice: received log
● corosync.service - Corosync Cluster Engine
Loaded: loaded (/lib/systemd/system/corosync.service; enabled; preset: enabled)
Active: active (running) since Fri 2025-02-07 19:24:54 IST; 2 days ago
Docs: man:corosync
man:corosync.conf
man:corosync_overview
Main PID: 2071 (corosync)
Tasks: 9 (limit: 308962)
Memory: 140.5M
CPU: 32min 22.989s
CGroup: /system.slice/corosync.service
└─2071 /usr/sbin/corosync -f
Feb 07 19:27:12 pve2 corosync[2071]: [KNET ] pmtud: Global data MTU changed to: 1397
Feb 07 19:28:50 pve2 corosync[2071]: [KNET ] link: Resetting MTU for link 0 because host 3 joined
Feb 07 19:28:50 pve2 corosync[2071]: [KNET ] host: host: 3 (passive) best link: 0 (pri: 1)
Feb 07 19:28:50 pve2 corosync[2071]: [QUORUM] Sync members[3]: 1 2 3
Feb 07 19:28:50 pve2 corosync[2071]: [QUORUM] Sync joined[1]: 3
Feb 07 19:28:50 pve2 corosync[2071]: [TOTEM ] A new membership (1.94) was formed. Members joined: 3
Feb 07 19:28:50 pve2 corosync[2071]: [QUORUM] Members[3]: 1 2 3
Feb 07 19:28:50 pve2 corosync[2071]: [MAIN ] Completed service synchronization, ready to provide service.
Feb 07 19:28:50 pve2 corosync[2071]: [KNET ] pmtud: PMTUD link change for host: 3 link: 0 from 469 to 1397
Feb 07 19:28:50 pve2 corosync[2071]: [KNET ] pmtud: Global data MTU changed to: 1397
Quorum information
------------------
Date: Mon Feb 10 13:51:37 2025
Quorum provider: corosync_votequorum
Nodes: 3
Node ID: 0x00000002
Ring ID: 1.94
Quorate: No
Votequorum information
----------------------
Expected votes: 6
Highest expected: 6
Total votes: 3
Quorum: 4 Activity blocked
Flags:
Membership information
----------------------
Nodeid Votes Name
0x00000001 1 x.x.x.1
0x00000002 1 x.x.x.2 (local)
0x00000003 1 x.x.x.3
root@pve2:~#
Well, if half of your cluster is down, then yes, you will lose quorum (with 6 expected votes, quorum requires 4, and only 3 are present). Either remove the missing nodes properly or bring them back up.
root@pve2:~# pvecm delnode pve7601
cluster not ready - no quorum?
root@pve2:~# pvecm delnode pve7602
cluster not ready - no quorum?
root@pve2:~# pvecm delnode pve7603
cluster not ready - no quorum?
root@pve2:~# pvecm status
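Since pvecm delnode itself requires a quorate cluster, the usual way out, assuming the three missing nodes (pve7601, pve7602, pve7603) are gone for good and will never rejoin, is to first lower the expected votes to the number of surviving nodes and then remove the dead nodes:

pvecm expected 3
pvecm delnode pve7601
pvecm delnode pve7602
pvecm delnode pve7603
pvecm status    # should now report Quorate: Yes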