
  • I kept all my CDs from the ’90s that have sentimental value because those were my high school and college years. I used to look at the band photos and lyrics in the liner notes all the time.

    Since 2000 I’ve been ripping CDs the moment I buy them, looking at the liner notes once, and then they go in the closet. I sold or donated almost all of those unless they were from a band I liked from the ’90s or some kind of collector’s edition.

    Same for DVDs. I ripped them all and kept about 10%.

    Same for my National Geographic magazines. I kept about 10 issues, and I have the entire collection on my computer going back to 1888.




  • Aren’t you scared about losing your data?

    No. I still have files from 1991. I’ve got files that have migrated from floppy disk to hard drive to QIC-80 tape to PD (Phase Change) optical disk to CD-RW to DVD+RW and now back to hard drives.

    What if I get ransomware I don’t notice and all my backups get encrypted too?

    Then you need to detect the ransomware before you back up. I use rsync --dry-run and look at what WOULD change before I run it for real. If I saw thousands of files changing that I did not expect, I would not run the backup; I would investigate what changed before running the rsync command for real.
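
    A rough sketch of what that preview step looks like (the paths and hostname are placeholders, not my real layout):

        # preview: list what WOULD be transferred, without writing anything
        rsync -av --dry-run /data/ backup-host:/data/ | less

        # if the change list looks like what I expect, run the same command for real
        rsync -av /data/ backup-host:/data/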

    Or what if the backups are corrupted?

    I have 3 copies of my data: local file server, local backup, remote file server.

    I also run rsnapshot on /home every hour to another drive in the machine, and I run snapraid sync to dual parity drives in the system once a day.
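
    Roughly the kind of cron entries that drive that schedule (times and paths here are illustrative, not my exact config):

        # /etc/cron.d/storage (illustrative)
        0 * * * *    root    /usr/bin/rsnapshot hourly
        30 3 * * *   root    /usr/bin/snapraid sync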

    I generate and compare stored file checksums twice a year across all 3 copies to detect any corruption. Over 300TB I have about 1 failed checksum every 2 years.
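
    One generic way to do that kind of cross-copy comparison, as a simplified sketch (not necessarily the exact tooling, and the paths are placeholders):

        # build a checksum manifest on each copy, then diff the manifests
        find /data -type f -print0 | xargs -0 sha256sum | sort -k2 > server.sha256
        # repeat on each backup copy, then compare:
        diff server.sha256 backup.sha256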

    And if my disks break?

    If one of my disks breaks I buy a new one and restore from backups.

    But I’m also afraid about the cloud.

    I don’t use any cloud services because I don’t trust them.


  • Almost all of them don’t have built-in hardware RAID. I don’t really trust software RAID, mostly because of rebuilding if the software crashes or my hardware crashes. Even if I were OK going with soft RAID…

    Most people here are the exact opposite of you and don’t trust hardware RAID, especially the cheap implementations in a USB-based DAS box. Software RAID is far more flexible and makes your setup independent of a hardware RAID card dying (see the mdadm sketch at the end of this comment).

    A NAS is great when you have multiple simultaneous users. What kind of computer do you have? Do you have a desktop computer in an ordinary case? How many drives can it hold internally? If you’ve run out of space, just buy a bigger case, move the motherboard etc. to the new case, and put the drives in the same case as the rest of your computer.
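
    To illustrate the flexibility point, a minimal software RAID sketch with mdadm (device names are placeholders):

        # create a 2-disk mirror; the array metadata lives on the disks themselves,
        # so it can be reassembled on any Linux machine with no RAID card involved
        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
        mkfs.ext4 /dev/md0

        # after moving the disks to new hardware, rescan and reassemble
        mdadm --assemble --scan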








  • copy / paste of my previous post

    Silent bit rot, where a bit flips but there is no hardware error, is extremely rare. My stats say about once a year on 300TB of data. Some statistics major can correct me, but if someone has 1TB of data then they should see a single bit flip in roughly 300 years, so maybe their great-great-great-grandchildren will see it and report back to them in a time machine.

    All of my data is on ordinary ext4 hard drives. I buy all my drives in groups of 3. I have my local file server, local backup, and remote backup. I have 2 drives in the local file server dedicated for snapraid parity and run “snapraid sync” every night.

    https://www.snapraid.it
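
    A minimal snapraid.conf for that kind of dual-parity layout might look roughly like this (the drive paths are illustrative):

        # two dedicated parity drives, data drives mounted individually
        parity   /mnt/parity1/snapraid.parity
        2-parity /mnt/parity2/snapraid.parity
        content  /var/snapraid.content
        content  /mnt/disk1/snapraid.content
        data d1  /mnt/disk1
        data d2  /mnt/disk2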

    Snapraid has a data scrub feature. I run that once every 6 months to verify that my primary copy of my data in my file server is still correct.
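
    The scrub itself is one command; something along these lines (the -p flag is optional):

        # re-read the data and verify it against the stored hashes and parity;
        # -p 100 checks the whole array instead of the default portion
        snapraid scrub -p 100

        # summarizes scrub age and reports any errors found
        snapraid status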

    Then I run cshatag on every file, which generates SHA256 checksums and stores them as ext4 extended attribute metadata. It compares the stored checksum and stored timestamp, and if a file’s contents have changed but the timestamp wasn’t updated it reports the file as corrupt.

    https://github.com/rfjakob/cshatag
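
    Sweeping a whole data drive with it can be done through find; a minimal sketch (paths are placeholders):

        # cshatag stores a SHA256 checksum and timestamp in extended attributes on
        # the first run; on later runs it flags files whose content changed without
        # a matching timestamp change
        find /mnt/disk1 -xdev -type f -exec cshatag {} + > cshatag.log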

    Then I use rsync -RHXva when I back up all my media drives. This data is almost never modified; mostly new files are just added. The -X option is to also copy over the extended attribute metadata. Then I run the same cshatag scan on the local backup and remote backup server. This takes about 1 day to run. On literally 90 million files across 300TB it finds a single file about once a year that has been silently corrupted. I have 2 other copies that match each other, so I overwrite the bad file with one of the good copies.
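
    The backup command itself is roughly this (hostname and paths are placeholders):

        # -a archive, -v verbose, -H preserve hard links, -X copy extended attributes
        # (so the cshatag checksums travel with the files), -R recreate the full
        # source path under the destination
        rsync -RHXva /mnt/disk1/media backup-server:/backups/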

    I only run rsnapshot on /home because that is where my frequently changing files are. The other 99% of my data is essentially "write only", so I just use rsync from the main file server to the two backups. Before I run rsync for real I use rsync --dry-run, which shows what WOULD change without doing anything. If I see the files I expect to be written, then I run it for real. If I were to see thousands of files that would be changed, I would stop and investigate. Was this a cryptolocker virus that updated thousands of files?

    As for backing up the operating system, I have /etc and the /root account backed up every hour through rsnapshot, along with /home.
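
    The relevant rsnapshot.conf lines look roughly like this (illustrative; rsnapshot wants literal tabs between fields):

        # keep 24 hourly snapshots of the frequently changing trees
        retain  hourly  24
        backup  /home/  localhost/
        backup  /etc/   localhost/
        backup  /root/  localhost/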

    I’m not running a business. I can reinstall Linux in 15 minutes on a new SSD and copy over the handful of files I need from the /etc backup.