This was a unique situation last night. Drive failures in servers are commonplace, which is why we use hot-swap drives in arrays: you swap the drive, the array rebuilds, and clients rarely notice. Last night we placed drive 0 in the array, and while it was rebuilding, drive 1 reported as failed. We removed the new drive and forced the failed drive back online, which brought the servers back online by 12.23am as mentioned below. Because the server had failed with drive 1 reporting as faulty, we took the decision to move the 11 Virtual Machines off this node, and we worked until 5am migrating the data. A mixture of snapshot moves and live moves were performed, and no user had any significant downtime. All VMs have now been moved off xen39 to new nodes.
The server is back online (minus one drive in the array), and all Virtual Machines are back online too. Our system administrators are continuing to investigate.
We had some planned maintenance on XEN39 this evening to replace a failed drive in the array. Unfortunately, during the routine rebuild onto the new drive, the array went read-only. We are looking into this as a priority and will update this announcement at 1am when we know more.
Friday, June 29, 2018