High uptime is bad: it means you don’t update your kernel
ArkScript lang developer, split keyboard fanatic
On my own server at home, yes. Because it’s important for me to know what’s going on, and not discover something by chance weeks later.
I use camelCase for methods and functions, snake_case for variables, and PascalCase for constants. Why? I don’t really know, it makes for a nice distinction I guess.
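A quick sketch of that convention, with made-up names (note it’s a personal style, not e.g. PEP 8, which would use UPPER_SNAKE_CASE for constants and snake_case for functions):

```python
# Hypothetical names, purely to illustrate the naming convention above.

MaxRetries = 3  # PascalCase for constants


def fetchData(retry_count):  # camelCase for functions and methods
    remaining_tries = MaxRetries - retry_count  # snake_case for variables
    return remaining_tries


print(fetchData(1))  # → 2
```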
If you are interested in tiny Lisp-like languages, this GitLab repo could be of interest to you.
Full disclosure: I came across it a few years back, as I am the maintainer of ArkScript (which will get faster and better in v4, so the data about it there is accurate as a baseline of “it should be at least this good, but can be even better now”).
You could consider markdown extensions that help you write and visualize!
Like this one: https://github.com/MeanderingProgrammer/render-markdown.nvim
To me, it’s a card grabber disguised as a game
I was thinking more of just running Docker containers on macOS
But running a Linux like Asahi is an option
Joke’s on them, I don’t read ads!
Last time I checked, they were working on Forgejo runners/actions!
I think I’m more fed up with people making those claims, “Rust will change everything”, when, in fact, it will rule out many if not most memory corruption bugs, as you said. Reading your comment, I see now it’s the mentality of “everything needs to be in Rust” that bothers me the most, which in fact means “Rust can bring memory safety” and not “Rust will replace everything”. Alas, I’m seeing it used time and time again as the latter instead of the former.
I’m getting fed up with all those articles: “Rust x something: the future?”, “I rewrote <cli tool> in Rust, it’s now memory safe”. I get Rust’s safety guarantees and all, but that doesn’t automatically make everything great, right? You can still write shit code in any language that can `rm -rf` your whole disk, or leave security gaps here and there without intending to.
I mounted a server’s disk in rescue mode, since I needed to extract everything (the provider didn’t have an option to dump everything as a zip). Then I installed an FTP server, added a user/pass, and it worked.
But I couldn’t access the files on the original disk, even though I could see them. So I just chgrp/chown’d the original files. Since the disk was only “mounted” in the rescue system’s /mnt, I thought it was fine (at the time, I thought permissions were volatile, stored separately from the files). I could now download the entire disk, yay!
Upon booting the original disk again, a bunch of errors: the shell not starting, tools not running, because they were owned by my user and not root…
Well, we reinstalled the whole server from scratch that day.
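The root cause, for anyone curious: file ownership lives in the inode on the disk itself, next to the file data, so a chown done through a rescue-mode mount persists when the disk boots normally. A minimal sketch (using a throwaway temp file):

```python
import os
import tempfile

# Ownership (st_uid / st_gid) is on-disk inode metadata, not volatile
# per-mount state: a chown made while a disk is mounted under /mnt in
# rescue mode is still there when the disk is booted normally.
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name

info = os.stat(path)
print(info.st_uid, info.st_gid)  # the owner recorded on disk

os.unlink(path)
```

A safer way to pull the files without touching ownership would have been to read them as root (e.g. tar them up with sudo, preserving owners) rather than chown’ing the originals.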
The reMarkable runs on Linux too! It’s an e-ink paper tablet
Ask yourself: do you really need a performance boost, or are you just chasing numbers to avoid a non-existent problem?
Right on point; I used it for years as my daily driver in terms of WM, but never went very far in terms of customisation. Now is maybe the time to look at it again, thanks for the link!
I would have thought that i3wm would use a lot less memory, given how basic it is.
It’s a post from 2 years ago, so if nothing has changed as of today, well, I guess they didn’t succeed in updating the docs
Nginx Proxy Manager can do all of the routing for you if you are using Docker, through a graphical interface, without touching config files. It sits on top of nginx, so you get all its benefits!
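For reference, a minimal docker-compose sketch to run it (image and ports per its quick-setup docs; the volume paths are just examples):

```yaml
services:
  npm:
    image: jc21/nginx-proxy-manager:latest
    restart: unless-stopped
    ports:
      - "80:80"    # proxied HTTP traffic
      - "443:443"  # proxied HTTPS traffic
      - "81:81"    # the admin web UI
    volumes:
      - ./data:/data                   # app config and database
      - ./letsencrypt:/etc/letsencrypt # issued certificates
```

Once it’s up, you point your DNS entries at the host and add proxy hosts from the web UI on port 81.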