Some systems are experiencing issues

Stickied Incidents

Tuesday 1st July 2025

Managed Object Storage [Object Storage] Degraded performance

Because of an incident at the datacenter provider of our object storage provider, performance may be degraded. The air conditioning units in one of the data halls are broken, which, combined with today's increased temperatures, is causing issues for some of the servers. Because of this, our provider is moving hardware to other data halls.

  • We are still monitoring the cooling situation; at the moment, all of our customers appear to be online.

    Cloudbear translate is offline; we will look at this tomorrow.

  • We have moved our object storage to a different datacenter and have updated the DNS, but due to the TTL it can take up to an hour for traffic to switch over to the temporary IP address (51.158.237.195). A quick way to check whether your resolver has already picked up the new record is sketched below, after these updates.

    We are still working on restoring service for our other customers who are affected in other ways by the same cooling/air conditioning issue at one of the Scaleway datacenters.

  • The underlying cause of the object storage incident is overheating in a datacenter in Amsterdam. This datacenter is the main location for our object storage, but the incident can also affect our customers that are still running at Scaleway, as they also run in this datacenter.

    We are working on moving our object storage and affected customers running at Scaleway to a different datacenter.
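
For reference, one way to check whether your resolver has already picked up the temporary record is to resolve the object storage hostname you use and compare the result against the address above. The hostname in the sketch below is a placeholder, not an actual Cloudbear endpoint; substitute the endpoint from your own configuration.

    import socket

    # Placeholder hostname: replace with the object storage endpoint from your own configuration.
    HOSTNAME = "objectstorage.example.com"
    TEMPORARY_IP = "51.158.237.195"  # temporary address from the update above

    # Resolve the hostname for TCP/443 and collect the returned addresses.
    addresses = {info[4][0] for info in socket.getaddrinfo(HOSTNAME, 443, proto=socket.IPPROTO_TCP)}

    if TEMPORARY_IP in addresses:
        print("DNS has switched over to the temporary datacenter:", sorted(addresses))
    else:
        print("Still resolving to the previous address(es); the TTL may not have expired yet:", sorted(addresses))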

Past Incidents

Tuesday 19th September 2023

Managed DNS Small period of reduced availability on Managed DNS

The Managed DNS cluster failed to respond to a small number of DNS queries for a short period of time. We are currently investigating the exact scale and root cause of this incident.

  • Last night we were able to mitigate an attack with the fine-tuning that happened earlier.

  • Another brief outage happened last night. Further investigation showed a TCP SYN flood coming from various networks, as well as some UDP floods. We are working with the networks from which we received the traffic, and we have put additional protections in place to prevent further incidents.

  • Metrics show signs of a brief DDoS against various nodes within the DNS cluster. We are investigating further.

Monday 18th September 2023

Managed GitLab Runner Less capacity available on Managed GitLab Runners

Jobs may be picked up more slowly within GitLab. This could result in:

  • jobs being queued for a longer period;
  • or longer waits on "Waiting for Runner to become available and request this job".

This is due to an API issue at our cloud provider. As a result, our system is unable to scale up the GitLab runners. We are waiting for a fix from our cloud provider.

  • The provider has fixed their API and scaling is operational again.