Space Lynx
Astronaut
Joined: Oct 17, 2014
Messages: 17,549 (4.66/day)
Location: Kepler-186f
| Processor | 7800X3D -25 all core |
| --- | --- |
| Motherboard | B650 Steel Legend |
| Cooling | Frost Commander 140 |
| Memory | 32 GB DDR5 (2×16 GB) CL30 6000 |
| Video Card(s) | Merc 310 7900 XT @3100 core |
| Display(s) | Agon 27" QD-OLED Glossy 240 Hz 1440p |
| Case | NZXT H710 (Red/Black) |
| Power Supply | Corsair RM850x Gold |
Japanese university loses 77TB of research data following a buggy software update
The culprit for this huge data loss was a faulty script originally meant to delete old, unnecessary log files from Kyoto University's Cray/HPE supercomputer as part of...
34 million files got deleted from Kyoto University's supercomputer due to a backup error
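For context on how a "faulty script meant to delete old log files" can wipe far more than logs: a classic shell failure mode is an unset variable silently expanding to an empty string inside an `rm` path. The sketch below is my own sandboxed illustration of that hazard and two standard guards, not the actual HPE script (all paths and names here are hypothetical):

```shell
#!/usr/bin/env bash
# Illustration only: how an unset variable can turn log cleanup into mass
# deletion, and two guards against it. Runs entirely inside a temp sandbox.
set -u  # guard 1: abort if any unset variable is ever expanded

demo=$(mktemp -d)                     # sandbox so nothing real is touched
mkdir -p "$demo/logs" "$demo/data"
touch "$demo/logs/old.log" "$demo/data/results.dat"

LOG_DIR="$demo/logs"
# Without the guards, an unset LOG_DIR would make this expand to "rm -rf /*".
rm -rf "${LOG_DIR:?LOG_DIR is unset}"/*   # guard 2: :? fails loudly if empty

# Check the outcome: old logs gone, research data untouched.
logs_left=$(find "$demo/logs" -type f | wc -l | tr -d ' ')
data_ok=no
[ -f "$demo/data/results.dat" ] && data_ok=yes
rm -rf "$demo"
```

With `set -u` (or the `${VAR:?}` expansion) in place, the script dies with an error instead of deleting from the filesystem root when the variable is missing.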
Just read this article... thought the storage community would be interested. Is there seriously no way to recover something like this? Why wasn't there a redundant backup disconnected from everything? Like, every 5-10 TB the offline backup gets updated. Why isn't that standard procedure for every critical storage situation? I was a history major in college and even I know this... so why didn't they know it? What am I not getting?
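The "periodic offline copy" idea in the question can be sketched in a few lines: take a dated point-in-time snapshot into a location the cleanup jobs never touch, so a buggy deletion on the live tree leaves the snapshot intact. This is a minimal toy sketch of the concept, assuming local directories stand in for offline media; it is not Kyoto's actual backup setup:

```shell
#!/usr/bin/env bash
# Toy sketch: snapshot live data to a "vault" path that automation never
# writes to, then show it survives a wipe of the live copy.
set -u

root=$(mktemp -d)
live="$root/live"                     # the working data area
vault="$root/vault"                   # stand-in for offline/air-gapped media
mkdir -p "$live" "$vault"
echo "experiment results" > "$live/results.dat"

snap="$vault/snapshot-$(date +%Y%m%d%H%M%S)"
cp -a "$live" "$snap"                 # point-in-time copy of the live tree

rm -rf "$live"                        # simulate the buggy cleanup disaster...
restored=$(cat "$snap/results.dat")   # ...the snapshot still has the data
rm -rf "$root"
```

Real HPC sites layer this with incremental tools (rsync, tape libraries) because full copies of 77 TB are impractical, but the principle is the same: the backup must be unreachable by the same scripts that can delete the primary.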