• 0 Posts
  • 62 Comments
Joined 8 months ago
Cake day: June 4th, 2025



  • Do not split a RAID array across drives in separate USB enclosures.

    Doing RAID on USB drives is alright, as long as they’re all in the same enclosure and share a single USB interface. If you split an array between drives with separate USB interfaces, you will face corruption and rebuild issues when one of the controllers has a hiccup or comes up slower/faster than the other, which WILL happen. If you need to run a RAID array on USB-connected drives, use a 2-bay USB-connected DAS. I’ve used the QNAP TR-002 in the past; it works fine, just set it to individual mode.

    The better option, since we’re just talking about a mirror, is to run on one drive primarily and occasionally sync your data to the other as a backup.




  • Yes, because the argument was never “we’ll have fusion in 20 years”, it’s always been “we COULD have fusion in 20 years IF research was properly funded”. It’s never been properly funded, hence it’s always 20 years away.

    It’s a bit like my boss coming to ask me how long it would take to do project X. I tell him 6 months after we get funding. We don’t get funding. 6 months later he comes and asks me how long it would take to do project X. I tell him 6 months after we get funding. Cue shocked Pikachu face that the estimate is still 6 months, 6 months later.


  • Notifications will go a long way toward helping with that. Check all assumptions, check all exit codes, notify and stop if anything is amiss. I also have my backup script notify on success, with the time it took to back up and the size and delta size (versus the previous backup) of the resulting backup. 99% of errors get caught by the checks and I get a failure notification. But just in case something silently goes wrong, the size of the backup (too big or too small) is another obvious indicator that something went wrong.
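    A minimal sketch of that check-then-notify pattern; notify() is a stand-in (a real setup might POST to ntfy or Gotify instead), and the tar call is a placeholder for an actual backup job:

```shell
#!/bin/sh
# notify() is a stand-in: swap in a curl POST to ntfy/Gotify, an email, etc.
notify() { echo "NOTIFY: $*"; }

backup_dir=$(mktemp -d)
start=$(date +%s)

# Placeholder backup job; stop and notify immediately if it fails.
tar -cf "$backup_dir/backup.tar" -C /etc hostname \
    || { notify "backup FAILED with exit code $?"; exit 1; }

# On success, report duration and size, so a backup that silently went
# wrong still stands out by being much bigger or smaller than usual.
elapsed=$(( $(date +%s) - start ))
size=$(wc -c < "$backup_dir/backup.tar")
notify "backup OK: ${size} bytes in ${elapsed}s"
```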


  • Thanks! BentoPDF is fantastic, I never knew something like this existed.

    I have a todo list where I keep track of services I might be interested in one day. I read your post a few hours ago and added Bento to my list, thinking I might get around to it in a few days/weeks/months. Then, out of nowhere, 15 minutes ago I needed to crop and split a PDF and realized I didn’t have anything to do it. I fired Bento up and was done in under a minute.



  • Disagree. Their priorities are backwards.

    Company A releases a product, it runs closed-source proprietary firmware on-board, and it can’t be updated by the user even if bugs or compatibility issues are found later on in the product’s life cycle.

    Company B releases a product, it runs closed-source proprietary firmware on-board, but it can be updated by the user if bugs or compatibility issues are found later on in the product’s life cycle.

    According to the FSF, product A gets the stamp of approval while product B doesn’t. That makes no sense.


  • I use node_exporter + VictoriaMetrics + Grafana for network-wide system monitoring. node_exporter’s textfile collector can also include text files placed in a directory you specify, as long as they’re written out in the right format. I use that capability on my systems to include some custom metrics, including CPU and memory usage of the top 5 processes on the system, for exactly this reason.

    The resulting file looks like:

    # HELP cpu_usage CPU usage for top processes in %
    # TYPE cpu_usage gauge
    cpu_usage{process="/usr/bin/dockerd",pid="187613"} 1.8
    cpu_usage{process="/usr/local/bin/python3",pid="190047"} 1.4
    cpu_usage{process="/usr/bin/cadvisor",pid="188999"} 1.0
    cpu_usage{process="/opt/mealie/bin/python3",pid="190114"} 0.9
    cpu_usage{process="/opt/java/openjdk/bin/java",pid="190080"} 0.9
    
    # HELP mem_usage Memory usage for top processes in %
    # TYPE mem_usage gauge
    mem_usage{process="/usr/local/bin/python3",pid="190047"} 3.0
    mem_usage{process="/usr/bin/Xvfb",pid="196573"} 2.4
    mem_usage{process="/usr/bin/Xvfb",pid="193606"} 2.4
    mem_usage{process="next-server",pid="194634"} 1.2
    mem_usage{process="/opt/mealie/bin/python3",pid="190114"} 1.2
    

    And it gets scraped every 15 seconds for all of my systems. The result looks like this for CPU and memory. Pretty boring most of the time, but it can be very valuable to see what was going on with the active processes in the moments leading up to a problem.
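    A sketch of a script that produces a file in that format, assuming a Linux system with procps ps. It uses the short process name (comm) rather than the full path shown in the sample, and writes to a temp file so the sketch is self-contained; a real version would write into node_exporter’s --collector.textfile.directory, atomically via a temp file and mv:

```shell
#!/bin/sh
set -eu
# Stand-in for a file under node_exporter's textfile collector directory.
OUT=$(mktemp)

{
    echo '# HELP cpu_usage CPU usage for top processes in %'
    echo '# TYPE cpu_usage gauge'
    # Top 5 processes by CPU: skip the ps header, emit one gauge per process.
    ps -eo comm,pid,pcpu --sort=-pcpu | head -n 6 | tail -n +2 | \
    while read -r comm pid pcpu; do
        printf 'cpu_usage{process="%s",pid="%s"} %s\n' "$comm" "$pid" "$pcpu"
    done

    echo '# HELP mem_usage Memory usage for top processes in %'
    echo '# TYPE mem_usage gauge'
    # Same again for the top 5 by resident memory.
    ps -eo comm,pid,pmem --sort=-pmem | head -n 6 | tail -n +2 | \
    while read -r comm pid pmem; do
        printf 'mem_usage{process="%s",pid="%s"} %s\n' "$comm" "$pid" "$pmem"
    done
} > "$OUT"
```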