Why does a Ceph cluster stop working after reboot? [closed]

I had to reboot my cluster (4 logic nodes + 1 control board). Since then, my Ceph installation appears to be dead: even the most basic commands such as ceph -s print a series of errors instead of a result. This fails on every node:

mixtile@blade3n1:~$ sudo ceph -s
2025-12-11T19:41:41.960+0100 ffffa27cf180 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [2,1]
2025-12-11T19:41:44.960+0100 ffffa2fdf180 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [2,1]
[errno 13] RADOS permission denied (error connecting to the cluster)

I found out that the key in /etc/ceph/ceph.client.admin.keyring has changed in the meantime, but only on the management node. I copied the new keyring to the other nodes, but to no avail.
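For reference, this is the keyring format I compared between nodes. The snippet below is only a sketch: the key value is a placeholder (not my real key), and /tmp/cephdemo is a scratch path used purely for illustration.

```shell
# Dummy keyring to illustrate the format compared across nodes;
# the key value is a placeholder, not a real Ceph key.
mkdir -p /tmp/cephdemo
cat > /tmp/cephdemo/ceph.client.admin.keyring <<'EOF'
[client.admin]
    key = AQDummyPlaceholderKey==
    caps mon = "allow *"
    caps osd = "allow *"
EOF
# Extract just the key line, so it can be diffed between nodes:
grep -E '^[[:space:]]*key = ' /tmp/cephdemo/ceph.client.admin.keyring
```

On the real nodes I compared the extracted key line rather than the whole file, since whitespace and caps lines can differ.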

What’s even stranger is that I suddenly have a new subdirectory in /var/lib/ceph/:

mixtile@blade3n1:~$ ls -al /var/lib/ceph/
total 24
drwxr-xr-x  6 root root 4096 Dec  8 17:37 .
drwxr-xr-x  5 root root 4096 Dec  8 16:59 ..
drwx------ 13 root root 4096 Dec 11 18:15 3d540960-ce05-11f0-a5b1-0e281c59af8c
drwx------ 13 root root 4096 Nov 27 15:49 c0e4524b-c561-11f0-908d-0e281c59af8c
drwx------ 11  167  167 4096 Dec  8 18:33 c80b083c-d453-11f0-81b7-0e281c59af8c
drwxr-xr-x  3 root root 4096 Dec  8 17:33 mon

On the other hand, there is no data at all in /var/lib/ceph/mon:

mixtile@blade3n1:~$ sudo ls -al /var/lib/ceph/mon
total 12
drwxr-xr-x 3 root root 4096 Dec  8 17:33 .
drwxr-xr-x 6 root root 4096 Dec  8 17:37 ..
drwxr-xr-x 2 root root 4096 Dec  8 17:33 ceph-blade3n1
mixtile@blade3n1:~$ sudo ls -al /var/lib/ceph/mon/ceph-blade3n1
total 8
drwxr-xr-x 2 root root 4096 Dec  8 17:33 .
drwxr-xr-x 3 root root 4096 Dec  8 17:33 ..
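To double-check that the mon directory really contains nothing, I also used find's -empty test. Here is the idea on a mock tree (the /tmp/cephmock path and layout are just for illustration, mirroring the listing above):

```shell
# Recreate the layout from the listing above as a mock (illustration only)
mkdir -p /tmp/cephmock/mon/ceph-blade3n1
# Print directories that contain no entries at all:
find /tmp/cephmock -type d -empty
```

On the real node the same check against /var/lib/ceph/mon (run with sudo) reported ceph-blade3n1 as completely empty.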

So: is my Ceph cluster dead now? How do I recover it? I haven’t put any production data on it yet, but a failure like this must not happen once the cluster is in use.