After MariaDB became unresponsive to signals, we decided to reboot the whole system. We will check data integrity afterwards.
One of our nodes crashed; we are rebooting these servers:
We have enabled mitigations for the Terrapin SSH vulnerability. If you are experiencing trouble connecting to Uberspace hosts via SSH, please make sure your SSH client is up to date and supports more secure ciphers.
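If your client cannot be updated right away, one possible workaround is to exclude the cipher modes affected by Terrapin (CVE-2023-48795) in your `~/.ssh/config`. This is a sketch, not an official recommendation; the host pattern is an example:

```
# OpenSSH 9.6+ already negotiates "strict KEX", the upstream Terrapin
# countermeasure, so updating is the preferred fix. On older clients,
# you can remove the affected modes from the default algorithm list
# (the leading "-" subtracts from the defaults):
Host *.uberspace.de
    # ChaCha20-Poly1305 and CBC ciphers are the modes affected by Terrapin
    Ciphers -chacha20-poly1305@openssh.com,aes128-cbc,aes256-cbc
```

Once both client and server support strict key exchange, this workaround is no longer needed.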
Performance on the host ferdinand
may currently be degraded. A campaign by postmitherz.org, which we are very happy about, is running there, but it occasionally causes high load spikes. We have therefore already cleared a dedicated server node for this host, to give the campaign as much headroom as possible for its duration (until 18 December).
If you nevertheless notice a problem with your site, please contact our support and we will look for a solution together.
We are observing partial failures on fr3.
Everything is up and running, and the last host is also showing normal performance again.
The root cause was likely a package that was unintentionally installed on the Ceph nodes during an OS update, which scrambled the IP configuration within Ceph.
All hosts are reachable again, and their services are running. However, we still observe high load on some of them. We are addressing this issue, but overall, the situation has already significantly eased.
After a network/configuration problem in our Ceph cluster at FRA3, we have the situation under control again, but still have some follow-up errors to fix. Some hosts and services are still down. We are working on it.
One node with the following hosts crashed; we are in the process of restoring them right now:
We have received reports that our IPv6 network 2a00:d0c0:200::/48 (FRA3) is unreachable from some networks. We are investigating the issue. IPv4 connectivity should be unaffected.
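To check whether your own network is among those affected, you can compare IPv4 and IPv6 reachability from your side. A rough sketch; the hostname is a placeholder, substitute your own Uberspace host:

```shell
# Hypothetical host name; replace with your own host.
HOST=example.uberspace.de

# Compare reachability over both address families (-4 / -6 force the family):
ping -4 -c 3 "$HOST"
ping -6 -c 3 "$HOST"

# A traceroute over IPv6 shows where packets stop if your network
# cannot reach 2a00:d0c0:200::/48:
traceroute -6 "$HOST"
```

If IPv4 succeeds while IPv6 times out, the issue matches the reports above; the traceroute output helps us pinpoint where the route breaks.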
We observe that IPv4 connectivity is currently impacted. IPv6 connectivity is unaffected.
After an unexpected core router reboot, the standby router took over. While in theory both routers have exactly the same routing configuration, we discovered a bug in the IPv4 routing configuration which only affected the standby router. This bug has now been fixed, and IPv4 connectivity has been fully restored since 14:08 CEST.
Today from 15:30 (GMT+2) we will be carrying out maintenance on our routers and uplink connections in our FRA4 datacenter (95.143.172.0/24 network). After some instabilities observed in the past, we now want to put an end to them by making some major changes. The maintenance will be carried out in person at the datacenter and in collaboration with the datacenter provider. While we will do our best to limit the impact on connectivity, there may be short periods of network outages.
The network 95.143.172.0/24 at our datacenter FRA4 is currently experiencing an outage. We're investigating.
IPv6 connectivity at our FRA4 datacenter is currently impacted and showing packet loss. IPv4 is working fine. The root cause has been identified already and our operations team is working on a fix.
Due to maintenance work, it is possible that the network connections to some of our hosts might be temporarily disrupted (a list of hosts will follow).
Maintenance finished without major disruptions.
A list of hosts that might suffer short disruptions of network connectivity:
acamar.uberspace.de
achernar.uberspace.de
aldebaran.uberspace.de
antares.uberspace.de
antila.uberspace.de
apus.uberspace.de
aquila.uberspace.de
ara.uberspace.de
aries.uberspace.de
auriga.uberspace.de
bootes.uberspace.de
caelum.uberspace.de
canis.uberspace.de
canopus.uberspace.de
capella.uberspace.de
carina.uberspace.de
cassiopeia.uberspace.de
centaurus.uberspace.de
cepheus.uberspace.de
cetus.uberspace.de
circinus.uberspace.de
columba.uberspace.de
corvus.uberspace.de
crater.uberspace.de
crux.uberspace.de
cygnus.uberspace.de
delphinus.uberspace.de
dorado.uberspace.de
draco.uberspace.de
fomalhaut.uberspace.de
fulu.uberspace.de
grus.uberspace.de
hamal.uberspace.de
hercules.uberspace.de
horologium.uberspace.de
hydrus.uberspace.de
indus.uberspace.de
jarnsaxa.uberspace.de
juliet.uberspace.de
lacerta.uberspace.de
leo.uberspace.de
lepus.uberspace.de
libra.uberspace.de
lupus.uberspace.de
lynx.uberspace.de
menkar.uberspace.de
mensa.uberspace.de
monoceres.uberspace.de
musca.uberspace.de
norma.uberspace.de
octans.uberspace.de
pavo.uberspace.de
perseus.uberspace.de
phoenix.uberspace.de
pictor.uberspace.de
puppis.uberspace.de
rigel.uberspace.de
sagitta.uberspace.de
sculptor.uberspace.de
serpens.uberspace.de
sirius.uberspace.de
triangulum.uberspace.de
tucana.uberspace.de
ursa.uberspace.de
vega.uberspace.de
vela.uberspace.de
volans.uberspace.de
vulpecula.uberspace.de
Multiple hosts are currently unreachable; we are investigating.
A full list will follow ASAP.
The underlying issue was a switch configuration problem that made the Ceph storage network unresponsive, so no disk I/O was possible for the still-running U7 VMs. The problem has been solved for now, and according to our service monitoring, all VMs and all of their services are running fine again.
All hosts seem to be operational again.
All hosts except crater.uberspace.de are reachable again. Some might still need time to recover before resuming normal operation.