• 1 Post
  • 27 Comments
Joined 11 months ago
Cake day: December 27th, 2023

  • you’re welcome.

    what i’d suggest: a general rule i like to follow is to use a test system for everything new. but that does not need to be a full separate system every time.

    let’s say you have your mailbox and want to try fetching new mail from it using fetchmail. you can use the uidl mechanism to fetch every mail only once and otherwise leave them all on the server, but i like it a bit more secure: create a second email address/account at your mail provider’s service only for testing. that way you can test the mechanisms however you like without even touching your real inbox (maybe even fill it up with large emails and see how the system reacts; i once had an email account with a cheap provider that deadlocked the inboxes when full…). then, when everything is as you want it, switch the account and password (or create another config file for fetchmail) and you’re done. every change (not only fetchmail things) can be tested this way before going live. filtering could be done with procmail, for example, but if the mda called by procmail somehow exits with success while the email really wasn’t delivered, the email might get lost forever, depending on the settings of course. so fiddling with new stuff always carries the risk of not fiddling correctly ;-)
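    just as an illustration, a minimal sketch of what such a test config could look like, assuming a pop3 provider; hostname, account and password are placeholders, not from a real setup:

      # ~/.fetchmailrc  (chmod 600, otherwise fetchmail refuses to run)
      poll pop.example.org protocol pop3 uidl
          user "testbox@example.org" password "changeme"
          keep                            # leave all mail on the server
          mda "/usr/bin/procmail -d %T"   # hand each mail to procmail for filtering

    once the test account behaves as expected, only the poll line and the credentials need to change for the real account.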

    have fun !


  • it’s possible to tell your mta (like postfix) to use another mta for all mail, or only for some domains etc., so using a third party as the internet-facing service, fetching the mail with fetchmail and storing it in a dovecot server is easy. on the sending side you could use your standard email client (i.e. thunderbird on a pc or k9-mail on a smartphone) to send through your own postfix instance, the one that also sits on the server hosting your dovecot service. that mta takes the mail and delivers it by rules, which could simply be relaying all outgoing email through the mta of your freemailer using the username/password of your account. i am doing this, but the “external” mail systems are my own servers as well; i just don’t want emails to stay too long on VMs in the datacenter where i have no access to the physical disks in case something goes wrong.
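    the relay part in postfix looks roughly like this (relay host and credentials are placeholders, the details depend on your freemailer):

      # /etc/postfix/main.cf (excerpt)
      relayhost = [smtp.freemailer.example]:587
      smtp_sasl_auth_enable = yes
      smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
      smtp_sasl_security_options = noanonymous
      smtp_tls_security_level = encrypt

      # /etc/postfix/sasl_passwd (then run: postmap /etc/postfix/sasl_passwd)
      [smtp.freemailer.example]:587    yourname@freemailer.example:yourpassword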

    a raspberry pi is sufficient for such a setup (i am using a pi4 currently, but for email only i’d say a 3 or older would do too), and adding a disk via usb makes storage huge and cheap; i use two usb ssds in a raid1 for storage. that server could be reachable only through vpn if you wish, depending on your skills and needs (i mainly use ssl client certificates, which are supported by k9mail and thunderbird, so it fits seamlessly to connect through a haproxy that authenticates those certificates before proxying the plain connection to the pi). clients like thunderbird can offline-store all emails (configure download-or-not per imap folder), making searches easy and quick, while my k9 client can search locally or on the server if needed.
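    a rough haproxy sketch of that client-certificate part, assuming haproxy terminates tls and forwards plain imap to the pi (addresses, ports and file paths are placeholders):

      # /etc/haproxy/haproxy.cfg (excerpt)
      frontend imaps_in
          bind :993 ssl crt /etc/haproxy/server.pem ca-file /etc/haproxy/clients-ca.pem verify required
          mode tcp
          default_backend pi_imap

      backend pi_imap
          mode tcp
          server raspi 192.168.1.10:143   # plain imap on the pi behind the proxy

    verify required means connections without a certificate signed by your own client ca never reach the pi at all.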

    maybe adjust the maximum mail size of your own mta to exactly match (or be slightly below) that of the freemailer you use, to prevent surprises with big emails that later turn out to be unsent.
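    for postfix that would be something along these lines (the size is just an example value, check what your freemailer actually allows):

      # cap the message size at roughly 25 MiB (example value)
      postconf -e 'message_size_limit = 26214400'
      postfix reload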

    it’s possible to have a nextcloud instance on that same pi that acts as a web mailer, just in case (i really don’t need it, but i’ve set it up anyway). nextcloud is also great for syncing/backing up files, pictures, contacts, notes, todo lists and the calendar of your phone (i use davx5, opentasks and foldersync for that). there are other webmailers available, but installing/using nextcloud is not a too bad idea either ;-)

    i also suggest setting up some automatic offsite backup with snapshots of that pi, to cover the emails as well as the setup and its configs ;-)


  • smb@lemmy.ml to Linux@lemmy.ml · A word about systemd · 1 month ago

    one example of a program that did multiple things is sfdisk: it used to make the kernel reload the new partition table, but that was not its main job, which was only changing them. that extra functionality moved to blockdev, which is closer to doing such things, as it also triggers flushing buffers and, i think, setting read/write status. i am fully ok with that change, as it moves code out of a program that doesn’t need it into another that already does similar things, so other partitioning programs like gdisk, fdisk or parted could go the same way and the maintainers of the reread-partition-table code can concentrate on one solution in one place (in userspace) instead of opening issues at an unknown number of projects that also alter partitions. the “do one thing” paradigm is good for the developers who maintain the code, and i very much appreciate their work. if instead you only want one-day flies that either die or take huge amounts of resources just to be kept alive (picture a mayfly in an emergency room, attached to a heart-lung machine while surgeons rush around trying to lengthen its life by a few more seconds), then you are fine with monolithic tools that can hardly be maintained and suck all day, because no one wants to fix any bugs, or cannot do so without creating new ones due to the tight internal dependency hell.
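    just for illustration, the call that covers the re-read part nowadays (the device name is only an example):

      # ask the kernel to re-read the partition table of a disk
      blockdev --rereadpt /dev/sda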

    the point is not a lack of examples of doing it wrong, but where one wants to be heading.


  • smb@lemmy.ml to Linux@lemmy.ml · A word about systemd · 1 month ago

    Lol what???

    wouldn’t that be the definition of stable?

    the computer on voyager 2 has been running for 47 years now. they might have rebooted some parts meanwhile, but overall that is a long time, and if a program is free of bugs, the time it can run depends only on the durability of the hardware and on protection from cosmic rays (which, afaik, were the main problems the voyager probes faced, not bugs). that could be quite long if the hardware is protected from hazardous environments and maybe uses optoelectronics. the point is that bug-free software can run forever, depending only on hardware durability and energy supply; beyond that, no humans are needed for a veery long time ;-)


  • smb@lemmy.ml to Linux@lemmy.ml · A word about systemd · 1 month ago

    However, systemd makes the system much more secure and reliable as it is

    less secure and less reliable day by day, you mean? systemd has introduced needless dependencies ever since its very beginning, as if that were its sole intention, and those dependencies have already been used for wide attacks; exactly the kind of attacks that the people working hard to remove unneeded dependencies for security reasons meant to prevent with things like “do one thing only” (though security was not the number one reason for that principle, i think). systemd instead went ‘let’s add another level to that exponential dependency tree from the insecurity hell’, and it felt like they did this stupid thing intentionally every month for a decade or more.

    and stability… if you don’t monitor what systemd does, you’ll never know how bad it actually is. i’ve written custom scripts to monitor systemd’s failures (failures at a very primitive part of its job), and there are hundreds of them per day (usually around 200 to 300, sometimes more) across all our systems, and that is for one particular(!) measurement only, one that was breaking service stability, for which i wrote a measure-and-fix-plus-monitor workaround. other defects were not monitored, only silently fixed by workarounds, so they remain unnumbered systemd bugs/instabilities in the dark that stole a lot of work capacity…

    if you run distros with systemd, unreliability is your daily experience, unless you don’t really care or have never experienced stability before. stability can look like running a service (a single process) for 8 years without any interruption, and when it suddenly stops you go “was that an attack? the process died, how could that be? were there any connections from outside at that moment?”. i’m not advocating skipping updates for that long, but “stability” itself CAN mean that if you don’t stop a process, it will still be running in 10000+ years, maybe millions; it is more likely that humans drive themselves extinct first than that such a process “just dies” from a bug. systemd, on the other hand, randomly stops things that were running fine, for no reason, roughly once a month (varying in what it actually stops; twice it even stopped ssh on my servers, leaving me asking myself whether i should create yet another workaround for systemd’s bugginess so it doesn’t lock me out of the network again, or rather go for the real solution to most* of the systemd problems, *see below). this happens on the few standard installs i personally have, because i didn’t yet have a way to automatically replace the provider-installed distro on VMs in the datacenter. i want that replacement automated for the same reason i don’t like systemd: it causes manual work for something that should be automated. thanks to systemd’s perpetual instability i now have that in place, and every second spent getting rid of systemd has been worth it 100k times over. even that does not solve all systemd-introduced problems, as the xz attack showed (a systemd dependency on xz made the infected xz library useful to the attacker during compile time of the sshd binary, with which the attacker could then infect the newly built sshd binary). one can still be attacked through systemd’s dependency hell even without using systemd oneself: the build machines used for your distro could be affected/infected by systemd’s needless dependencies when they “also” compile for systemd-affected distributions, so there is a risk of becoming a victim of needless systemd dependencies while not using systemd at all. and the fact that the public solution to that attack was not the removal of needless dependencies that were only included as source for superfluous third-party “needs” made clear that systemd is an overall problem for security that will not be solved quickly; it will stay, just like all the windows insecurities will stay for as long as they wish to push them onto their “users”.

    systemd reducing overall security, plus its unreliability combined with some built-in impediments (i.e. when debugging its defects), is what drove me away from it. there are solutions that are far more stable, far more secure (and far better documented, btw) and do not pull in needless dependencies, which reduces risks and attack vectors and increases overall debuggability, i.e. through deterministic behaviour, to give an easy example. and none of the promises that are important to me have been fulfilled by systemd yet. drop-in replacement? i have heard that lie thousands of times, but in the last decade i have not experienced it a single time in a distro, and it does not seem it will ever be included/finished.

    for windows users or windows admins, a linux with systemd on it IS an improvement in stability, security and of course updating, yes. but none of that comes from systemd; rather the opposite is the case, systemd reduces it month by month. that is my experience, and it is the most important experience for me. i don’t care what lies the whitepapers tell or which broken promises are believed by anyone or by the masses; i want secure and stable servers and services, and systemd does not fit any of these goals. the time when it was still “young” and early problems could be accepted in the hope they would soon be fixed is gone, and those fixes never appeared.



  • smb@lemmy.ml to Memes@lemmy.ml · *Permanently Deleted* · 2 months ago

    the OS was not the comparison, but the hardware it runs on (just as @Freefall said). also, you seem to be wrong with your other assumption:

    And both those devices are tied to a specific OS.

    which seems not to be the case, as install instructions for another OS for the mentioned device can be found here (i didn’t try it, though):

    https://wiki.lineageos.org/devices/pdx215/

    lineage os is still an “android”, but from another vendor with a clearly different approach than the original firmware, and what hinders you from writing bsd drivers and compiling a bsd kernel for it instead? so i count the Xperia 1 III as NOT bound to any OS or OS vendor.

    but despite the much longer possible support/security, the freedom of choice and the endless other possibilities that often come along with a free OS choice, these pure and great advantages weren’t even mentioned there. so it wasn’t an OS comparison, and it also wasn’t a comparison of being bound to an OS vs. the absence of the vendor-lock-in limitation jungle.


  • you should definitely know what type of authentication you use (my opinion)!! the agent can hold the key forever, so if you are simply not asked again when connecting once more, that’s what the agent is for. however it lives only in ram, so stopping the process or rebooting ends that, of course. if you didn’t reboot meanwhile, maybe try unloading all keys from it (ssh-add -D, then check with ssh-add -L) and see what the next login is like.

    btw: i use ControlMaster/ControlPath (with timeouts) to further reduce the number of passwordless logins and to speed things up when running scripts or things like ansible, monitoring via ssh etc. everything then goes through the already open channel, no authentication is needed for the second connection any more, and it gets really fast.
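    a minimal sketch of such a config (the socket path and the timeout are just what i’d pick as an example):

      # ~/.ssh/config (excerpt)
      Host *
          ControlMaster auto
          ControlPath ~/.ssh/ctl-%r@%h:%p
          ControlPersist 10m    # keep the shared connection open 10 minutes after last use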


  • The whole point of ssh-agent is to remember your passphrase.

    replace “passphrase” with “private key” and you’re very correct.

    passphrases used to log in to servers via PasswordAuthentication are not stored in the agent. i might be wrong about the technical details of how the private key is actually stored in ram by the agent, but in the context of ssh passphrases that could be used directly to log in to servers, saying the agent stores passphrases is at least a bit misleading.

    what you want is:

    • use key authentication, not passwords
    • disable PasswordAuthentication on the server once you have set up and secured (some sort of backup) ssh access with keys instead of passwords (see the sketch after this list).
    • if you always want to provide a short password for login, then don’t use an agent, i.e. unset that environment variable and check ssh_config
    • give your private key a passphrase that fits your needs (the average time it should take attackers to guess it vs. the time you need overall to exchange the pubkey on all your servers)
    • change the private key immediately every time someone might have had access to the passphrase-protected privkey file
    • do not give others access to your account on your pc, so you don’t have to change your private key too often.
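    a minimal sketch of the first two points (server name and key type are just examples; test key login in a second session before closing the one you are working in):

      # on the client: generate a passphrase-protected key and copy the public part over
      ssh-keygen -t ed25519
      ssh-copy-id -i ~/.ssh/id_ed25519.pub user@server.example

      # /etc/ssh/sshd_config (server, excerpt) - reload sshd afterwards
      PubkeyAuthentication yes
      PasswordAuthentication no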

    also an idea:

    • use a token that stores the private key AND is PIN-protected, in the sense that it locks itself after a few tries with a wrong pin. this way the “password” you need to enter for logins can be minimal while the private key is protected from being copied. but even then one should not let others have access to the same machine (of course not as root) or account (as the same user; better not at all), as an unlocked token could also be used to place a second, attacker-provided key on the server you wanted to protect.

    it all depends on the level of security you want to achieve. additional TOTP could improve security too (but beware that some authenticator providers have “sharing” features which could compromise the TOTP token even before its first use).


  • My theory is that you already have something providing ssh agent service

    in the past, some xserver environments started an ssh-agent for you just in case, and for some reason i don’t remember that was annoying, so i disabled it and started my own agent in my shell environment the way i wanted it.

    another possibility is that there are other agents, like the gpg-agent, which afaik can also handle ssh keys.

    but i would also look into $HOME/.ssh/config to see whether something is configured there that matches the hostname, the ip, or (with wildcards) parts of them and could interfere with key selection, as the .ssh/id_rsa key should IMHO always be tried if key auth is possible and no (matching) key is known to the ssh process, unless something is configured otherwise…

    not sure whether a system-wide /etc/ssh/ssh_config could interfere there too; maybe have a look there as well, as this behaviour seems a bit unexpected if nothing was configured specially to cause it.
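    an example of the kind of (hypothetical) entry that could silently change key selection, plus how to watch what ssh actually tries (names are placeholders):

      # ~/.ssh/config (hypothetical entry)
      Host *.example.org
          IdentityFile ~/.ssh/some_other_key
          IdentitiesOnly yes    # only offer the key(s) listed here

      # show which keys ssh offers during a login attempt
      ssh -v user@host.example.org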


  • smb@lemmy.ml to Programmer Humor@programming.dev · "prompt engineering" · 7 months ago

    that a moderately clever human can talk them into doing pretty much anything.

    besides that, LLMs are good enough to let moderately clever humans believe they actually got an answer that was more than guessing and probabilities based on millions of troll messages, advertising lies, fantasy books, scammer webpages, fake news, astroturfing, propaganda of past centuries including currently made-up narratives, and a quite long prompt invisible to that human.

    cheerio!





  • i am happy with a raspberry pi setup connected to a VLAN switch: the internet is behind a modem (in bridged mode) connected via ethernet to one switch port, while the raspi routes everything through one tagged physical GB switch port. the setup works fine with two raspis and failover, without tcp disconnections during an actual failover, only a few seconds of delay when that happens; voip calls basically recover after seconds and streaming is not affected, while in a game a second of outage might already be too much. however, as such hardware failures happen rarely, i am running only one of them anyway.

    for the firewall i am using shorewall, while for some special routing i also use the unbound dns resolver (one can easily configure static results for any record) and haproxy with sni inspection for specific https routing in the rather specialized setup i have.
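    the static-records part in unbound looks roughly like this (zone names and addresses are placeholders):

      # /etc/unbound/unbound.conf (excerpt)
      server:
          local-zone: "home.lan." static
          local-data: "nas.home.lan. IN A 192.168.10.5"
          local-data: "printer.home.lan. IN A 192.168.10.6"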

    my wifi is done by openwrt, but i only use it for having separate wifis bridged to their own vlans.

    this setup allows for multi-zone networks at home, like a wifi for visitors with daily changing passwords and another for chromecast or home automation, each with its own rules, hardware redundancy and special tweaking. everything that runs on gnu/linux is possible, including pihole, wireguard, ddns solutions, traffic statistics, traffic shaping/QoS, traffic dumps or even SSL interception if you really want to import your own CA into your phone and see what data your phone’s apps (those that don’t use certificate pinning) transfer when calling home, and much more.

    however, regarding ddns it sometimes feels safer and more reliable to have a somehow reserved IP that does not change. some providers offer rather cheap tunnels for this purpose. i once had a free (ipv6) tunnel at hurricane electric (besides another one for IPv4), but now i use VMs in data centers.

    i do not see any ready-made product that is this flexible. to me the best ready-made router system seems to be openwrt: you are not bound to a hardware vendor, you get security updates for longer than with any commercial product, you can copy your config 1:1 to a new device even if the hardware changes, and you have the possibility to add packages with special features.

    “openwrt” is IMHO the most flexible ready-made solution for long-term use. “pfsense” is also very worth looking at and has some similarities to openwrt while being different.



  • smb@lemmy.ml to Linux@lemmy.ml · Btw · 8 months ago

    a woman would take care of a literal horse instead of going to therapy. i don’t see anything wrong there either.

    just that a horse is way more expensive, cannot be put aside for a week of vacation (could a notebook be put aside?), and one cannot make backups of horses or carry them along when visiting friends. horses are way more cute, though.


  • smb@lemmy.ml to Memes@lemmy.ml · Well shit it took me a minute. · 8 months ago

    if you have adhd and asperger’s or better, you might end up training yourself to ignore google’s, microsoft’s or other f’ed-up so-called “autocorrection” and to ‘automatically’ overlook single mismatching words while still recognizing the sentence as a whole, without actually reading it word by word. some can read entire pages with a single look, without focusing on a specific region or i.e. the center, while still being able to rephrase the content of the entire page later.

    1up for all with special (dis-)abilities!!


  • sorry if i repeat someone’s answer, i did not read everything.

    it seems you want it for “work”, which suggests stability, and maybe something like LTS is sort of the way to go. this also implies older but stable packages. maybe better choose a distro that separates new features from bugfixes; that removes most of the hassle that comes with rolling releases (like every single bugfix coming with two new bugs, one removal/incompatible change of a feature you relied on, and at least one feature that cripples stability or performance while you cannot deactivate it… yet…)

    likely there is at least some software you will want to update outside the regular package repos, like i did for years with chromium, firefox and thunderbird, using a shell script that compared the current version with the latest remote one and downloaded and unpacked it if needed.

    however, maybe some things NEED a newer system than you currently have; if you need such software, consider running it in VMs, maybe using ssh and X11 forwarding (oh my, i still don’t use/need wayland *haha)

    as for me, i like to have some things shared anyway, like my emails in an IMAP store accessible from my mobile devices and some files synced across devices using nextcloud. maybe think outside the box from the beginning. no arch-like OS gives you the stability that long-established things like debian or redhat/centos offer, but be aware that some OSes might suddenly switch to rolling release (like centos did, i believe) or include rolling-release software made by third parties without respecting their own rules about unstable/testing/stable branches, and thus might cripple their stability by such decisions. better stay informed on whether what you update to really is what you want.

    but for stability (like at work) there is nothing more practical than ancient packages that still get security fixes.

    over roughly the last 15 years or more, i only reinstalled my workstation or laptop for:

    • hardware problems, mostly aged disks, like an ssd whose wear level was used up (while recovery from backup or direct syncing is not reinstalling, right?)
    • the OS becoming EOL. that’s it.

    if you choose to run servers and services like imap and/or nextcloud, there is some gain in being able to quickly switch the workstation without having to clone/copy everything; you only place some configs there and you’re done.

    A multi-OS setup is more likely to cover “all” needs, and tools like x2vnc exist and can be very handy then; i nearly forgot that i was working on two very different systems when i had such a setup.

    I would suggest making recovery easy: maybe put everything on a raid1 and make sure you have an offsite and an offline backup with snapshots, so if something breaks you just need to replace hardware. that’s the stability i want for the tools i work with, at least.

    if you want to use a rolling-release OS for something work-related, i would suggest making sure that no one external (their repo, package manager etc.) could ever prevent you from reinstalling the exact version you had at an exact point in time (snapshots of repos, install media etc.). then put everything into something like ansible and verify that reapplying old snapshots is straightforward for you; then (and not earlier) i would say such OSes are ok for something you consider as important as “work”. i tried arch linux at a time when they had already stopped supporting the old installer while the “new” installer wasn’t yet ready for use at all, so i never really got into long-term use of arch linux for something i rely on, because i couldn’t even install the second machine with the then-broken install procedure *haha

    i believe one should consider NOT tinkering too much on the workstation. having to fix something you personally broke “before” being able to work on something important is the opposite of awesome. better have a second machine instead, a swappable hard drive, or use VMs.

    The exact OS is IMHO not important. i personally use devuan, as it is not affected by some instability annoyances that are present in ubuntu and probably some more distros that use that same software. at work we monitor some of those bugs of that software within ubuntu, because they create extra hassle; we work around them, so it is mostly just a buggy, annoying thing visible in monitoring.


  • smb@lemmy.ml to Lemmy Shitpost@lemmy.world · Unnamed island · 9 months ago

    the island next to england is called iceland; it’s in the upper left corner. but please don’t use AIs, they always get it wrong and i.e. mark the wrong spot like in your picture.

    secretly: it’s really called Avalon, but don’t spread that, it’s a secret and should stay so 8-)


  • smb@lemmy.ml to Linux@lemmy.ml · When do I actually need a firewall? · edited · 9 months ago

    so here are some reasons for having a firewall on a computer that i did not read in the thread (i could have missed them). i had already written this once but lost the text before it was saved :( so here is a compact version:

    • having a second layer of defence, to limit the direct impact of i.e. supply-chain attacks like “upgrading” to a maliciously manipulated version.
    • controlling things tightly and reporting strange behaviour as an early warning sign ‘if’ something happens, no matter whether attacks or bugs (see the sketch after this list).
    • learning how to tighten security and knowing better what to do in case you need it some day.
    • sleeping more comfortably, knowing what you have done or prevented
    • compliance with some laws, or matching a customer’s buzzword wishes
    • the fun of doing it because you can
    • getting in touch with real-life side quests that you would never become aware of if you did not actively practice by hardening your system.
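    as an illustration of the “control things tightly and report” point, a small nftables sketch of an outbound default-deny with logging (the allowed ports are just examples, this is not a complete ruleset):

      # /etc/nftables.conf (excerpt, sketch only)
      table inet egress {
          chain output {
              type filter hook output priority 0; policy drop;
              oif "lo" accept
              ct state established,related accept
              udp dport 53 accept                  # dns
              tcp dport { 80, 443 } accept         # http/https
              log prefix "egress-drop: " counter   # anything else shows up in the log before the policy drops it
          }
      }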

    one side-quest example i stumbled upon: imagine an attacker has compromised the vendor of a software you use on your machine. this software eventually connects to some port, but pings the target first before doing so (whatever! you say). from time to time the ping does not go to the correct 11.22.33.44 of the service (a weather app maybe) but to 0.11.22.33. looks like a bug, you say, never mind.

    it could be something different. pinging an IP that does not exist ensures that the connection tracking of your router keeps the entry until it expires, opening a time window that is much easier to hit even if clocks are a bit out of sync.

    also, the attacker knows which IP gets pinged (but it’s an outbound connection to an unreachable IP, you say, what could go wrong?).

    let’s assume the attacker knows the external IP of your router by other means (i.e. you’ve sent an email to the attacker and your freemail provider hands your external router address to him inside a Received header, or the manipulated software updates a dyndns address, or the attacker just guesses that your router has an address in your provider’s dial-up range, no matter what).

    so the attacker knows when and from where (or from what range) you will ping an unreachable IP address, and in exactly what timeframe (the software runs from cron, or in user space, and pings the “buggy” IP address at exact times). within that timeframe the attacker sends an icmp unreachable packet to your router’s external address and puts the known buggy IP in the payload as the address that is unreachable. the router matches the payload of the packet, recognizes that it is related to the known connection-tracking entry, and forwards the icmp unreachable to your workstation, which in turn tells the application that the attacker’s IP address reports the buggy IP 0.11.22.33 as unreachable. since the source IP of that packet is the attacker’s IP, the software can then open a TCP connection to that IP on port 443 and follow the instructions the attacker sends it. sure, the attacker needs that backdoor to already exist and run on your workstation, and needs to know or guess your external IP address, but the actual behaviour of the software looks normal, a bit buggy maybe, and there is exactly no information within the software about where the command-and-control server would be, only that it will respond to the icmp unreachable packet it eventually receives. all connections are outgoing, yet the attacker “connects” to his backdoor on your workstation through your NAT “firewall” as if it did not exist, while hiding the backdoor behind an occasional ping to an address that does not respond, either because the IP does not exist, or because it cannot respond due to a ddos attack on the 100% sane IP that actually belongs to the service the app legitimately connects to, or due to a maintenance window the provider of the manipulated software officially announces. the attacker just needs that IP to not respond, or to respond sloowly, to increase the timeframe for connecting to his backdoor on your workstation before your router deletes the connection-tracking entry of that unlucky ping.

    if you don’t understand how that example works, that is absolutely normal, and i might be bad at explaining too. thinking out of the box, around corners that are only sometimes corners to think around, and only under very specific circumstances that could happen by chance or could be directly or indirectly under the attacker’s control while revealing the attacker’s location only in the exact moment of connection, is not an easy task, and it can really destroy the feeling of achievable security (aka the belief that one has some “control”). but this is not a common attack vector, only maybe an advanced one.

    sometimes side quests can be more “informative” than the main course ;-) so i would name that (“learn more”, not the example above) as the main good reason to install a firewall and other security measures on your pc, even if you’d think you’re okay without them.