Under Investigation 07/08/2025 - Network Interruptions NL

KH-DanielP (KH-CEO, Staff member)
We're currently investigating network outages reported in our Netherlands facility. Appropriate engineers are being engaged at this time.

** No customer data is at risk; all servers are powered and operational. This is a network event only **

These updates have been pulled from broader conversations below. Both redundant uplinks provided to KnownHost in the Netherlands facility have gone offline. This has stemmed from a power event in a completely separate facility which powers the core networking equipment for our uplink provider.

1:12 AM - We've received confirmation that all equipment, including upstream equipment within all DRT facilities, is functioning properly; however, we've been told this may be tied to a "Power Event" at Equinix AMS1, which apparently all of the public uplinks feed from.

1:34 AM - It appears that the root issue is being worked on now, or at least has been isolated to the core equipment located in the Equinix AM1 facility.

2:35 AM - Equinix confirmed they are performing power maintenance at the site, but the issue is with one of the PDUs. We are working on restoration options at this time.

2:47 AM - The problem has been isolated to a PDU failure, in combination with Equinix performing power maintenance on one of the feeds.

2:57 AM - PacketFabric (PF) is working on getting temporary power to bring up the critical equipment, which will bring us back online. Very rough ETA of 1-3 hours.

3:10 AM - Quick update: a plan has been formulated and sent to Equinix; they're basically reliant on their remote hands to rectify the situation.

In theory it's only 5-10 minutes of work (swap the power feeds between the bad and good PDUs), but simple things are never quite so simple with remote hands; the request was put in to them about 20 minutes ago according to PF.

4:02 AM - Connectivity restored, system services coming back online.
 
Any news on this? We've been down for almost an hour.
No ETA at this time; the problem has been isolated to the upstream routing equipment outside of our network. Both redundant endpoints that we connect with are offline and unreachable.

Those engineers have been engaged and we're awaiting an update.

We have additional redundant paths coming online as soon as next week, along with a major network overhaul planned to start this month, but obviously not soon enough.
 
Is there no way to restore NL backups to another region? We urgently need our data right now.
Not unless you have specific backups set to be stored in separate regions. With this being a network outage, there is no risk to your data or its integrity, and as soon as network connectivity has been restored, access will resume as normal.
 
All our customers are asking for an ETA. Please give approximate ETA.
At this time, no ETA has been provided by the upstream providers. We've not been given any details on the scope of the issue, so I would be hesitant to even guess.
Which Dutch datacenter is this?
This is our Netherlands facility, housed in the Interxion/DRT facility.

Unacceptable situation
I don't disagree.
 
Hi,

I hope the issue will be fixed soon, as we are all starting the working day here in Europe in less than 15 minutes.


Thank you for your hard work.
 
This has happened at the worst time for us. It has been down since 09:45 AM, which is prime business time.

We need to know the approximate ETA as all our customers are asking the same.
 
Would you happen to know which upstream providers are involved in this issue? If we can identify them, we might be able to gather some credible information or status updates that we can share with our customers through valid links.
 
Hi,

I hope the issue will be fixed soon, as we are all starting the working day here in Europe in less than 15 minutes.


Thank you for your hard work.
We are pulling every lever to ensure this is escalated and is their top priority.

This has happened at the worst time for us. It has been down since 09:45 AM, which is prime business time.

We need to know the approximate ETA as all our customers are asking the same.
I do understand, and we do apologize. If it were something that our team could directly impact or correct, I would be able to give a more specific ETA, but as of yet the information I have is limited.

I can say that all appropriate teams and all escalations at the impacted providers are aware and engaged, but nothing specific has been shared as far as an ETA or the exact nature of the problem. I can say that I'm unable to access their redundant public endpoints that we connect to, but that's about all I can say for certain.
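For anyone who wants to verify reachability from their own side while waiting, here is a minimal sketch of a TCP connectivity check. The host/port pairs below are placeholders, not KnownHost's actual uplink or server addresses; substitute your own server's IP and a port you expect to be open.

```python
import socket

def reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers refused connections, timeouts, and DNS failures.
        return False

# Placeholder endpoints -- replace with the hosts/ports you need to test.
for host, port in [("example.com", 443)]:
    status = "reachable" if reachable(host, port) else "UNREACHABLE"
    print(f"{host}:{port} {status}")
```

This only confirms whether a TCP handshake completes from your vantage point; during an upstream routing outage like this one, the check will fail even though the servers themselves remain powered on.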

As soon as we know more we will share it.
 
Hi,

Any idea who these upstream providers are? If we can gather any information, we can at least give some explanation to our customers.

Thanks
Absolutely. We currently have redundant, diverse-path uplinks to diverse routing equipment with PacketFabric (formerly Unitas, formerly Internap).

The "formerly" part is why we've invested in a major network overhaul (among other things), scheduled to begin in a few weeks' time, dependent upon the last few hardware shipments.
 
This is very reminiscent of the issue at this same data center less than a month ago (11th June), which was also related to 'redundant uplinks' and which turned out to be due to work on power supply systems that no one seemed to have any advance knowledge of.
 
This is very reminiscent of the issue at this same data center less than a month ago (11th June), which was also related to 'redundant uplinks' and which turned out to be due to work on power supply systems that no one seemed to have any advance knowledge of.
That was not due to a power issue; it was caused by the same upstream provider improperly handling maintenance/changes on one of their circuits.
 
A better question is: how can you guarantee that you will give us access to our data today? And if you won't, what action can we take against you?
 