Does it apply to all feeds? Or can it detect which feeds are actually YouTube ones?
Why do you need the files locally?
Is your network that slow?
I’ve heard of multiple content creators who keep their video files on their NAS to share between their editors, and they work directly from the NAS.
Could you do the same? You’ll be working with music, so the network traffic will be lower than with video.
If you do this you just need a way to mount the external directory, either with rclone or with sshfs.
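As a rough illustration of what that mount could look like (hostnames, paths, and the rclone remote name here are made up, so adapt them to your setup):

```shell
# Mount a remote directory over SSH (hypothetical host/paths):
sshfs user@nas.local:/volume1/music ~/nas-music

# Or the same with rclone, assuming a remote named "nas" was set up via `rclone config`:
rclone mount nas:/volume1/music ~/nas-music --vfs-cache-mode writes

# Unmount when done:
fusermount -u ~/nas-music
```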
The disks on my NAS go to sleep after 10 minutes idle time and if possible I would prefer not waking them up all the time
I’m no NAS expert, but I’ve read that most of the actual wear and tear on drives happens during spin-up and spin-down, which is why NAS drives are usually kept spinning all the time. So letting them sleep may not reduce stress the way you’d expect.
And drives specifically built for NAS setups are designed with this in mind.
I use rclone and duplicati depending on the needs of the backup.
For long-term backups I use Duplicati; it has a GUI and you can upload to several places (mine are spread between e2 and Drive).
You configure the backend, password for encryption, schedule, and version retention.
rclone, with the crypt remote, lets you mount your backups as an external drive, so you have to handle the actual copying of data into it yourself, plus versioning and retention.
I can’t give you the technical explanation, but it works.
My Caddyfile only has something like this:
@forgejo host forgejo.pe1uca
handle @forgejo {
    reverse_proxy :8000
}
and everything else has worked properly, cloning via SSH with git@forgejo.pe1uca:pe1uca/my_repo.git
My guess is git only needs the host to resolve the IP and then connects to the port directly.
One of my best friends introduced me to this series back in MH4U for the 3DS.
As someone mentioned in another comment, these games are definitely not newbie friendly haha. I started it and dropped it after a few missions; I don’t remember what rank I was, but definitely still in the starting village.
Afterwards we finally got time to play and he mocked me since my character had less armor than his palico :D
We played more often and he helped me reach higher ranks until G-rank.
Each game has had a different kind of end game.
For MH4U it was the guild quests, which were randomly generated. I loved this since it made the game not feel like a total grind, but it only felt that way, because it really was a grind both to get the correct quest and to level it up to get the relics you wanted.
The one I enjoyed the least was MHGen/MHGU because there’s no end game loop: once you reach G-rank the game doesn’t have anything else to offer, so you can only grind the same missions you already have. Of course this can be considered an end game loop in itself, since maxing your armor and weapons takes a long time (and IIRC some older fans mentioned this fit the theme of remembering the old games, since they were like that).
For MHW it was the investigations, which felt a bit like MH4U’s guild quests but without the random maps.
The only downside of this game and the Iceborne expansion was the games-as-a-service aspect: you could only access some quests on certain days of the week, you had to connect to the internet to get them, and one of the last bosses is tied to multiplayer, which makes it nearly impossible to finish properly if you have bad internet or only time for a single quest.
I’ve bought each game. Around 200 hours minimum in each one. IIRC 450+ in MH4U and around 500 in MHW (mostly because it’s harder to pause on PS4). MHRise/Sunbreak
MHRise with the Sunbreak expansion is one of the most relaxing ones, since you can take NPCs on all missions; they help a lot to de-aggro the monsters and let you enjoy the hunt.
I was with some friends from work when the trailer for MHW released and we literally screamed when we realized it was an MH game haha.
The only change they’ve made between games that I found really annoying was to the hunting horn. It was really fun to have to adapt your hunt to each horn’s songs and keep track of what buffs were active and which ones you needed to re-apply (in reality you always rotated your songs over and over so you never ran out of your buffs).
But in Rise each song is now just X -> X, A -> A, and X+A -> X+A; there are no combinations.
Every hunting horn only has 3 songs, previously some horns could have up to 5.
When you play a song twice the buff goes up a level; well, in Rise they made it a single attack that plays all your songs twice.
It feels like they tried to simplify the weapon, but two teams were put in charge of providing ideas and both solutions got implemented, which left the weapon with no depth at all.
Also, previously you felt like the super support playing hunting horn: each time you applied a buff a message appeared showing which buff it was. Yeah, it was kind of spammy, but it felt nice having a hunting horn on the hunt.
In Rise they decided to only display a message the first time you apply the buff and that’s it, so if you re-apply it there’s nothing, even when you keep buffing your team. Ah, but if you use bow, the arc shot does spam the buff message, so you feel like less of a support than a bow user :/
Due to work I haven’t followed all the news of MHWilds, but I’ll definitely buy it.
For the next posts my recommendations would be the Sniper Elite, Mario & Luigi, Pokémon Mystery Dungeon, and Disgaea series.
(Maybe also another theme of posts could be genre/mechanic, like tactics games or colony management in general)
I’m not saying to delete anything; I’m saying the file system could save space with something similar to deduplication.
If I understand correctly, deduping works by using the same data blocks for identical content across files, so there’s no actual data loss.
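That idea can be sketched in a few lines. This is a toy model (the function and block size are mine, not from any real filesystem): each file is split into fixed-size blocks, and a block that hashes to something already seen is only “stored” once.

```python
import hashlib

BLOCK_SIZE = 4096  # real filesystems use configurable record/block sizes

def dedup_stats(files: dict[str, bytes]) -> tuple[int, int]:
    """Return (raw_bytes, deduped_bytes) if identical blocks were stored once."""
    seen = set()
    raw = deduped = 0
    for data in files.values():
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            raw += len(block)
            digest = hashlib.sha256(block).digest()
            if digest not in seen:   # only new blocks cost space
                seen.add(digest)
                deduped += len(block)
    return raw, deduped

# Two "files" sharing their first block: the shared block is stored once.
a = b"x" * BLOCK_SIZE + b"unique-to-a"
b_ = b"x" * BLOCK_SIZE + b"unique-to-b"
raw, deduped = dedup_stats({"a.bin": a, "b.bin": b_})
print(raw, deduped)  # 8214 4118
```

Both files still read back in full; only the storage accounting changes, which is why there’s no data loss.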
I’ve only played P5 and currently P5R.
The RPG part is amazing, the story, combat, dungeon crawling, interactions, etc, and all the other comments people had already made.
My only con would be the strictness of the schedule for doing the story. Yes, it’s an interesting part of the game that differs from other RPGs, but it’s frustrating that you might permanently miss something because you planned slightly wrong, or picked the wrong dialog option with a confidant and didn’t earn enough points, forcing you to spend an extra day with them to increase their rank.
Either you follow a guide or you accept the idea of missing some parts of the story.
And even with a guide I don’t think I’ll experience everything, since I won’t visit some of the hangout spots with confidants, only the main ranks and that’s it.
Also, you can’t just focus on finishing one confidant, because I think all of them have some requirement, or they’re not available every day, so you need to do other stuff in between.
For example, Yoshida is only available Sundays, Kawakami IIRC is also only the last days of the week, but not weekends and only during the evening.
But I plan to also play P3 and P4 since the stories are so good.
My recommendation for the next post would be series of Monster Hunter, Paper Mario, or Kingdom Rush.
I had a similar case.
My mini PC has a microSD card slot and I figured if it could be done for an RPi, why not for a mini PC? :P
After a few months I bought a new M.2 NVMe drive, but I didn’t want to start from scratch (maybe I should’ve looked into Nix?)
So what I did was `sudo dd if=/dev/sda of=/dev/sdc bs=1024k status=progress`
And that worked perfectly!
Things to note:
P5R, almost done with the 2nd palace.
I’m following a guide to experience everything in one playthrough, since I’ve already played the original and this one on PS4; now this one is on PC.
I don’t think RSS is suited for getting more than just the latest entries in a feed.
What you’re looking for is handled by the API which includes pagination.
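As a sketch of what paginated retrieval looks like (the `fetch_page` function here is a stand-in for a real HTTP call like `GET /entries?page=N`, since every API names this differently):

```python
# Pretend server-side data; a real implementation would hit the API over HTTP.
FAKE_DB = [f"entry-{i}" for i in range(95)]
PAGE_SIZE = 20

def fetch_page(page: int) -> list[str]:
    """Stand-in for one paginated API request."""
    start = page * PAGE_SIZE
    return FAKE_DB[start:start + PAGE_SIZE]

def fetch_all() -> list[str]:
    """Keep requesting pages until the server returns an empty one."""
    entries, page = [], 0
    while True:
        batch = fetch_page(page)
        if not batch:          # empty page means we've paged past the end
            break
        entries.extend(batch)
        page += 1
    return entries

print(len(fetch_all()))  # 95 -- the full history, not just the latest items
```

An RSS feed, by contrast, is a single fixed-size document, which is why it only ever shows the most recent entries.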
Text to speech is what piper is doing.
What I’m looking for is called voice changer since I want to change a voice which already read something.
That’s exactly what I want: “the thing in the Darth Vader halloween masks” but for linux, preferably via CLI to ingest audio files and be able to configure it to change the voice as I want, not only Darth Vader.
I don’t want to manage piper voices, I can handle that directly in my file system as I only have a few.
The issue is none of the ones I’ve found are good for me, so what I need is something to change the voice once it has been generated by piper.
I haven’t completely looked into creating a model for piper, but just having to deal with a dataset is not something I look forward to, like gathering the data and all of what this implies.
So, I’m thinking it’s easier to take an existing model and make adjustments to fit a bit better on what I would like to hear constantly.
Check the most upvoted answer and then look into TubeArchivist, which can take your yt-dlp parameters and URLs to download the videos, plus process them to build a better index of them.
Borderlands 2, specifically the mechromancer class.
It has a perk where you get more damage each time you unload your full clip, and it resets when you manually reload.
On PC the reload action has its own key.
But I had a potato PC and could only play at low settings. When I got a PS4 I bought the game again to play it with nicer graphics. It quickly got very frustrating, since the reload action is bound to the same button as interact! Each time you tried to talk to someone, get into a vehicle, or even pick something up from the ground, you risked not aiming well enough and reloading by accident, which resets your buff!
I only had to run this on my home server, behind my router which already has a firewall to block outside traffic, so at least I’m a bit at ease there.
In the VPS everything worked without having to manually modify iptables.
For some reason I wasn’t able to make a curl call to the internet from inside Docker.
I thought it could be DNS, but that was working properly when I tried `nslookup tailscale.com`.
The call to the same url wasn’t working at all. I don’t remember the exact details of the errors since the iptables modification fixed it.
AFAIK the only difference between the two setups was ufw enabled in the VPS, but not at home.
So I installed UFW at home and removed the rule from iptables and everything keeps working right now.
I didn’t save the output of iptables before ufw, but right now there are almost 100 rules for it.
For example, since this is curl you’re probably going to connect to ports 80 and 443, so you can add --dport to restrict the OUTPUT rule to those ports. And in almost all cases you should specify the interface (in this case docker0).
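Spelled out as concrete rules, that suggestion might look like the following (illustrative only; chain and interface names depend on your setup):

```shell
# Allow outbound HTTP/HTTPS toward the docker0 bridge only:
iptables -A OUTPUT -o docker0 -p tcp -m multiport --dports 80,443 -j ACCEPT
# Matching INPUT rule for return traffic on established connections:
iptables -A INPUT -i docker0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# Note: traffic originating inside containers usually traverses the FORWARD
# chain (docker0 -> eth0), so that may be the chain that actually matters here.
```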
Oh, that’s a good point!
I’ll try to replicate the issue later and test this, since I don’t understand why an OUTPUT problem should be solved by an INPUT rule.
Well, it’s a bit of a pipeline. I use a custom project that exposes an API so I can send files or URLs and get video summaries back.
With yt-dlp I get the video and transcribe it with faster-whisper (https://github.com/SYSTRAN/faster-whisper), then the transcription is sent to the LLM to actually make the summary.
I’ve been meaning to publish the code, but it’s embedded in a personal project, so I need to take the time to isolate it '^_^
I’ve used it to summarize long articles, news posts, or videos when the title/thumbnail looks interesting but I’m not sure if it’s worth the 10+ minutes to read/watch.
There are other solutions, like dedicated summarizers, but I’ve looked into them and they only extract exact quotes from the original text; an LLM can also paraphrase, making the summary a bit more informative IMO.
(For example, one article mentioned a quote from an expert talking about a company, the summarizer only extracted the quote and the flow of the summary made me believe the company said it, but the LLM properly stated the quote came from the expert)
This project https://github.com/goniszewski/grimoire has on its roadmap a way to connect to an AI to summarize the bookmarks you save and generate 3 tags.
I’ve seen the code, but I don’t remember the exact status of the integration.
Also I have a few models dedicated to coding, so I’ve also asked for a few pieces of code and configuration just to get started on a project, nothing too complicated.
Ah, that makes sense!
Yes, a DB would let you build this. But the point is in the word “build”: you need to think about what’s needed, in which format, and how to model all the relationships to get data consistency and flexibility, etc.
For example, you might implement the tags as a text field, but then you still have the same issues with adding, removing, and reordering them. One fix could be a separate table mapping many tags to one task. Then you have the problem of mistyping a tag: you might add TODO forgetting you already have todo, which might not be a problem if the field is case insensitive, but what about to-do?
So there are still a lot of things you might overlook that will come up and sidetrack you from actually creating and doing your tasks, even if you abstract all of this into a script.
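To make the tag problem concrete, here’s a small sketch with sqlite (the schema and the `normalize` rule are my own illustration, not from any particular app): a many-to-many tag table plus a normalization step so TODO, todo, and to-do all end up as one tag.

```python
import sqlite3

def normalize(tag: str) -> str:
    """Collapse case and separators so TODO, todo, and to-do all match."""
    return tag.lower().replace("-", "").replace("_", "").strip()

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tasks (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE tags  (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
    CREATE TABLE task_tags (
        task_id INTEGER REFERENCES tasks(id),
        tag_id  INTEGER REFERENCES tags(id),
        UNIQUE (task_id, tag_id)
    );
""")

def tag_task(task_id: int, raw_tag: str) -> None:
    name = normalize(raw_tag)
    conn.execute("INSERT OR IGNORE INTO tags (name) VALUES (?)", (name,))
    (tag_id,) = conn.execute("SELECT id FROM tags WHERE name = ?", (name,)).fetchone()
    conn.execute("INSERT OR IGNORE INTO task_tags VALUES (?, ?)", (task_id, tag_id))

conn.execute("INSERT INTO tasks VALUES (1, 'write report')")
for raw in ("TODO", "todo", "to-do"):   # three spellings of the same tag
    tag_task(1, raw)

count = conn.execute("SELECT COUNT(*) FROM tags").fetchone()[0]
print(count)  # 1
```

Even this toy version needed three tables and a normalization rule, which is exactly the kind of design work an existing tool has already done for you.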
Specifically for todo list I selfhost https://vikunja.io/
It has an OpenAPI spec (OAS), so you can easily generate a client library in any language to build a CLI.
Each task has a lot of attributes, including the ones you want: relation between tasks, labels, due date, assignee.
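If you go that route, generating the client is roughly a one-liner (the spec filename and output directory here are just an example):

```shell
# Generate a Python client from Vikunja's OpenAPI spec:
openapi-generator-cli generate -i vikunja-openapi.json -g python -o vikunja-client/
```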
Maybe you can have a project for your book list, but it might be overkill.
For links and articles to read I’d say a simple bookmark software could be enough, even the ones in your browser.
If you want to go a bit beyond that I’m using https://github.com/goniszewski/grimoire
I like it because it has nested categories plus tags; most other bookmark projects only have flat categories or only tags.
It also has a basic API, but it’s enough for most use cases.
Another option could be an RSS reader if you want to get all articles from a site. I’m using https://github.com/FreshRSS/FreshRSS, which has the option to retrieve data from sites using XPath in case they don’t offer RSS.
If you still want to go the DB route, then as others have mentioned, since it’ll be local and single user, sqlite is the best option.
I’d still encourage you to use an existing project, and if it’s open source you can easily contribute the code you would have written anyway, improving it for the next person with your exact needs.
(Just paid attention to your username :P
I also love matcha, not an addict tho haha)
Start by learning Docker. You don’t have to selfhost anything yet; just learn to run a container, especially for automated stuff. Then learn to build images and use docker compose.
Also you could start checking out some form of infrastructure as code. I usually hear about Ansible and NixOS.
This gives you a way to easily redeploy your services on any hardware.
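For practicing the compose workflow, a minimal file like this is enough (the service and image are just an example; `traefik/whoami` is a tiny demo web server):

```yaml
# docker-compose.yml -- practice with `docker compose up -d`,
# `docker compose logs`, and `docker compose down`.
services:
  whoami:
    image: traefik/whoami
    ports:
      - "8080:80"        # host port 8080 -> container port 80
    restart: unless-stopped
```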