Space Lynx
Japanese university loses 77TB of research data following a buggy software update
The culprit for this huge data loss was a faulty script originally meant to delete old, unnecessary log files from Kyoto University's Cray/HPE supercomputer as part of...
www.techspot.com
34 million files were deleted from Kyoto University's supercomputer due to a backup error
Just read this article... thought the storage community would be interested. Is there seriously no way to recover something like this? Why wasn't there a redundant backup disconnected from everything? Like, every 5-10 TB, copy the data to an offline backup. Why isn't that standard procedure for every critical storage setup? I was a history major in college and even I know this... so why didn't they? What am I not getting?
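
For what it's worth, here's a minimal sketch of the kind of versioned, off-box backup I mean, assuming a POSIX system with rsync installed. The paths and naming are made up for illustration, not whatever Kyoto actually ran:

```python
#!/usr/bin/env python3
"""Sketch of a periodic, versioned backup: each run writes a new
timestamped snapshot to a separate backup volume, hard-linking
unchanged files against the previous snapshot so old versions
survive even if the source tree is later deleted.

SOURCE and BACKUP_ROOT are hypothetical paths, chosen for the example.
"""

import subprocess
import sys
from datetime import datetime, timezone
from pathlib import Path

SOURCE = Path("/data/research")               # hypothetical source tree
BACKUP_ROOT = Path("/mnt/offline/snapshots")  # hypothetical detachable volume


def latest_snapshot(root: Path) -> Path | None:
    """Return the newest existing snapshot directory, if any."""
    snaps = sorted(p for p in root.iterdir() if p.is_dir())
    return snaps[-1] if snaps else None


def take_snapshot() -> None:
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    dest = BACKUP_ROOT / stamp
    cmd = ["rsync", "-a", "--delete"]
    prev = latest_snapshot(BACKUP_ROOT)
    if prev is not None:
        # Unchanged files become hard links into the previous snapshot,
        # so each snapshot is a full tree but only costs the delta in space.
        cmd.append(f"--link-dest={prev}")
    cmd += [f"{SOURCE}/", str(dest)]
    subprocess.run(cmd, check=True)
    print(f"snapshot written to {dest}")


if __name__ == "__main__":
    BACKUP_ROOT.mkdir(parents=True, exist_ok=True)
    try:
        take_snapshot()
    except subprocess.CalledProcessError as exc:
        sys.exit(f"backup failed: {exc}")
```

The reason for snapshots instead of a plain mirror: a mirror faithfully replicates deletions, so a runaway cleanup script wipes the copy too. Point-in-time snapshots, ideally on media that actually gets disconnected between runs, keep old versions around no matter what happens to the source.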