Server has gone down (Resolved)
  • Priority - Critical
  • Affecting Server - SSUS
  • 17/01/2022  23:00 Our monitoring system has just detected a fault on the server, we will investigate

  • Date - 17/01/2022 23:59 - 20/01/2022 10:07
  • Last Updated - 18/01/2022 00:00
EU Servers (Resolved)
  • Priority - Critical
  • 10/3/2012
    Downtime for EU Pool

    Hey,

     

    We are experiencing issues with one of the datacenters in our infrastructure: a fire has broken out in one of the buildings. Fire crews have attended the site and are putting out the fire. We will post updates as we get them.

     

    Details
    We are currently facing a major incident in our Strasbourg datacentre, with a fire declared in the SBG2 building.
    Firefighters intervened immediately on the spot but were unable to control the SBG2 fire.
    As a precautionary measure, the electricity was cut off on the whole site, which impacts all our services at SBG1, SBG2, SBG3 and SBG4.
    If your production is in Strasbourg, we recommend that you activate your Business Recovery Plan.
    All our teams are mobilized alongside the firefighters.
    We will keep you posted on this as soon as we have more information.

     

    Fire at our Strasbourg site

    This Wednesday, March 10, 2021, at 12:47 am, a fire broke out in a room of one of our 4 Strasbourg data centers, SBG2. We point out that the site is not subject to a Seveso classification.
    The firefighters immediately intervened on site to protect the teams and limit the progression of the fire. They proceeded with the complete isolation of the site and its perimeter from 2:54 am.
    At 4:09 a.m., the fire destroyed SBG2 and continued to pose risks to nearby data centers until firefighters took full control of the blaze. Since 5:30 am, the site has been inaccessible to our teams for obvious security reasons, under the supervision of the prefecture.
    The fire is now contained.
    We are relieved that there were no injuries, neither among our teams nor among the firefighters and prefecture services, whom we thank for their exemplary mobilization at our side.

    Thanks to our operational park of 15 data centers in Europe, our technical and commercial teams are fully invested in supporting our customers, implementing solutions and alleviating the unavailability of our Strasbourg site.

    Our mission is to offer our customers an optimal quality of service to support their online activities and we know the crucial importance that this has for them. We offer them our sincere apologies for the hardship this fire is causing them. We are therefore committed to communicating with the greatest transparency on its causes and impacts.
    We are currently evaluating the impact of this incident and will communicate as soon as possible, with the greatest transparency, on the progress of our analyses and the implementation of solutions.

    Status of the Strasbourg datacenter
    SBG1: Network room is OK - 4 rooms destroyed - 8 rooms OK
    SBG2: destroyed
    SBG3: PSU off - server monitoring in progress
    SBG4: No physical impact

    No restart today for SBG1, SBG3 and SBG4

    Plan for the next 2 weeks:
    1) Restarting 20KV for SBG3
    2) Restarting 240V in SBG1 / SBG4
    3) Checking of DWDM / routers / switches in Network Room A (SBG1).
    Control of fibers on the Paris / Frankfurt link
    4) Reconstruction of network room B (SBG5)

    We will keep you informed of developments in the situation.
    -

    Datacenter technical teams are preparing and shipping the equipment needed to set up a temporary network room. This equipment will be sent to the Strasbourg datacenter during the night.
    The site's fibers have been checked and were not affected by the fire.
    The restoration of power to SBG1 and SBG4 is estimated for Monday, March 15. Recovery for SBG3 is estimated for Friday, March 19.

     

     

    11/03/2021

    Now the facts

    - We have one pool of servers at this location, called the SSEU1 Pool, and it is housed in SBG2

    - The fire has completely burned down SBG2 and damaged four rooms in SBG1

    - The pool consists of around 30 Servers

    - There is still no power to the datacenter as they are currently cleaning up

    - The DC has told us to activate disaster plans, and these are now in action

    - Data has to be assumed destroyed

    - We started the "disaster recovery plan" last night 

     

    The disaster recovery plan

    We have 3 kinds of redundancy

    - backup on the machine

    - backup using DC backup space

    - backup sent to the off-site bk server

     

    The first two are the most common for sysadmins, but both can be rendered completely useless in a disaster like this unless the backups are kept off site, as with the third option. Most people won't buy another machine just for this, because a fire that burns down a datacenter is almost unheard of; that is what makes this so bad.

     

    We are just lucky we made the investment to have this in place. That is our disaster plan: relocate, reinstall, and pull the latest backup from the bk server, unless the DC vault has been destroyed.
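The three redundancy tiers above boil down to a priority-ordered restore: in a site-wide disaster the on-machine copy and the DC backup space are assumed lost, so the restore falls back to the off-site bk copy. A minimal sketch of that decision (the tier names and the `pick_restore_source` helper are illustrative, not the provider's actual tooling):

```python
# Sketch of a tiered-restore decision: prefer the closest surviving backup.
# Tier names and "off-site" flags are illustrative only.

def pick_restore_source(tiers, site_destroyed):
    """Return the first usable backup tier.

    tiers: list of (name, offsite) pairs, ordered closest-first.
    site_destroyed: if True, only off-site tiers survive.
    """
    for name, offsite in tiers:
        if offsite or not site_destroyed:
            return name
    return None  # no surviving backup: data must be assumed destroyed

tiers = [
    ("on-machine backup", False),    # same server: lost with the machine
    ("DC backup space", False),      # same datacenter: lost with the site
    ("bk server (off-site)", True),  # different site: survives the fire
]

# Normal day: the closest copy is fine.
print(pick_restore_source(tiers, site_destroyed=False))  # on-machine backup

# Datacenter fire: only the off-site copy is usable.
print(pick_restore_source(tiers, site_destroyed=True))   # bk server (off-site)
```

The key design point is the ordering: the cheap, fast tiers come first for everyday restores, while the expensive off-site tier exists purely for the case where the first two vanish together.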

     

    What's happening over the next few days?

     

    At this time our main plan is to relocate the pool to GRA and restore the down pool from our redundancy system. All clients will be compensated for the downtime and your data will be restored. If you require a server as soon as possible, we can relocate you to one of our USA pools.

    Updates

    * Still no control of the DC vault; looking at other avenues to gain access

    * Control gained through our GRA Infrastructure 

    * Gained access to the DC vault through our BK machine in GRA

    * DC Vault is intact in RBX

    * Pulling files to the BK machine from DC Vault ready for new machines

    * Clients affected have been informed of proceedings

    * Files are safe and moved over

    * Located all the clients and setup an action plan for the relocate

    * Awaiting updates from DC

    * We are tasked to start rebuilding machines and relocating in GRA / RBX

    * As soon as these machines are audited and ready, they will be moved into production and housed in GRA / RBX, and then the final stages of the DRP ("disaster recovery plan") can commence

    * We will pause all invoices for any service located in the EU while we build the machines

    * If any client can't wait for the new machines, we completely understand; we are just as frustrated with this situation. We can move your service to the USA pool

    * Still waiting on updates about the replacement infrastructure; we will post as soon as we have any
    * Over 100 personnel on site

    * New machine confirmed in RBX France

    * Awaiting final prep, and then we will start the Pool re creation

    * Machines should be completed this week or, at the latest, early next week. We can only apologise for the delays and the wait; all downtime will be compensated double

    * Removed the old EU pool from the order funnel, which had kept sending new orders to this pool

    * Most machines have been moved onto a new pool; around 14 servers are still down. We are awaiting more resources, which should arrive by Wednesday

    * Affected customers that have been moved and are back up have also been credited the time back

    * 7 Machines left to replace now

    * New stock is arriving this week, and we will have the remaining servers replaced within the next 10-15 days, depending on your machine type

    * We have received the final machines and will be installing them today; we will contact each customer with the new server details

    * The last of the customers are coming over now

    * All customers have been moved now; time has been compensated and the files have been restored on the new pool

    Issue Resolved!

  • Date - 10/03/2021 21:21 - 22/04/2021 18:52
  • Last Updated - 22/04/2021 18:52
test (Resolved)
  • Priority - Critical
  • Affecting Other - test
  • test

     

    test2

    test3

  • Date - 14/04/2021 02:39 - 14/04/2021 02:44
  • Last Updated - 14/04/2021 02:43
Rebooting Infrastructures (Resolved)
  • Priority - Low
  • Affecting System - ALL Infrastructures
  • We will be restarting all of our infrastructures during off-peak hours to apply some updates

    - No issues expected

     

    - Operation completed, infrastructure stabilising 

  • Date - 09/01/2021 00:00 - 11/03/2021 18:36
  • Last Updated - 08/01/2021 23:57
Network issue (Resolved)
  • Priority - Critical
  • We have detected a degradation in service performance. Our teams are fully engaged in fixing the issue and restoring the service as quickly as possible. We apologize for the inconvenience.
    A suspect device has been isolated.
    The issues are now fully mitigated and the situation is back to normal.

  • Date - 19/11/2020 00:47
  • Last Updated - 19/11/2020 01:22
Clearing port errors (Resolved)
  • Priority - Low
  • Affecting Server - SSEU
  • Hey

     

    We are giving this pool a reboot to clear some port issues

  • Date - 17/10/2020 18:26 - 17/10/2020 19:28
  • Last Updated - 17/10/2020 19:28
Order Funnel (Resolved)
  • Priority - High
  • Affecting System - Backend order funnel
  • Hey

     

    We have noticed an issue with new orders being placed, which are showing errors; we are investigating now

     

    • we have pinpointed that it is a PHP issue; working on a fix
    • Applied a patch, still testing but seems to have worked so far

    Marked as resolved

  • Date - 02/10/2020 14:34
  • Last Updated - 05/10/2020 17:21
S2US Network issue (Resolved)
  • Priority - Critical
  • Affecting Server - Testing
  • Experiencing issues with servers located on S2US; investigating now

     

    Pulling the machine down for checks; will update as the investigation takes place

    Checks ongoing

     

    Checks complete, machine resumed

  • Date - 16/08/2020 02:39 - 16/08/2020 05:25
  • Last Updated - 16/08/2020 05:24
Emails (Resolved)
  • Priority - Low
  • Affecting System - MX : USA Mailing Server
  • There is currently an email backlog due to the volume of new orders on the machines at this time; we are looking to clear the backlog over the next few hours

  • Date - 14/05/2020 01:12
  • Last Updated - 14/05/2020 14:30
Component replacement (Resolved)
  • Priority - High
  • Affecting Server - SSUS
  • We will be replacing the motherboard on this machine

  • Date - 28/03/2020 13:01 - 28/03/2020 16:36
  • Last Updated - 28/03/2020 15:11
Component Replacement (Resolved)
  • Priority - Critical
  • Affecting Server - SSUS

  • Hey!

     

    The tests run this morning, to ensure that we provide the best service possible, have uncovered an issue with the RAM on this board

    We will be replacing these components and then resuming the service as normal, while keeping an eye on the board and machine over the next few days

  • Date - 26/03/2020 00:00 - 26/03/2020 01:00
  • Last Updated - 26/03/2020 01:22
Machine maintenance (Resolved)
  • Priority - Low
  • Affecting Server - SSUS
  • Hey

    To ensure that we have the best performance and are delivering the best servers we can, we will have this machine's components replaced

  • Date - 25/03/2020 15:11 - 28/03/2020 15:14
  • Last Updated - 25/03/2020 15:13
OS Upgrade (Resolved)
  • Priority - Medium
  • Affecting Server - SSUS
  • Moving the machine's OS to the upgraded Ubuntu release; this will cause the machine to go down for a moment

     

    * Times are estimates

  • Date - 29/02/2020 10:00 - 28/03/2020 15:14
  • Last Updated - 29/02/2020 00:09
Email Server (Resolved)
  • Priority - Low
  • Affecting Server - SSUS
  • Hello,

    Some emails are currently sticking in the backend and there is a backlog of emails. We have contacted the provider and are waiting for a response.

  • Date - 07/01/2020 04:23
  • Last Updated - 07/01/2020 13:29
Server Reboot (Resolved)
  • Priority - Low
  • Affecting Server - SSUS
  • Hello,

     

    This node is going down for a reboot!

  • Date - 28/12/2019 14:12 - 28/12/2019 14:17
  • Last Updated - 28/12/2019 14:13

Server Status

Below is a real-time overview of our servers where you can check whether there are any known issues.

Server Name | HTTP | FTP | POP3 | PHP Info | Server Load | Uptime
SSEU | (live status values shown on the status page)
SSUS | (live status values shown on the status page)