• 0 Posts
  • 17 Comments
Joined 1 year ago
Cake day: June 16th, 2023


  • Yeah not sure I agree with all of this.

    When it comes to KDE this feels out of date. The GTK issues are not what they once were; KDE Plasma has good GTK themes that match the KDE ones. Nowadays I find the main issues are with Flatpak software not matching DE themes because they’re in a sandbox. I’ve had that issue on both KDE and Gnome 2-derived environments (I’ve never really gotten into Gnome 3). KDE also used to have a reputation for being slow and a resource hog; that’s inverted now - KDE has a good reputation, including for scaling down to lower-powered machines, while Gnome 3 seems to have a reputation as a resource hog?

    I have a KDE desktop environment and it’s very attractive, and I haven’t had any glitches beyond issues with Flatpak (VLC being a recent one that I managed to fix). I would say the mainstream DE themes work the same way a Windows theme does. The problems come when you go for super-niche attempts to pretty up the desktop - but you’d get similar issues if you tried that in Windows.
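
    For anyone hitting the same Flatpak theming issue, a rough sketch of the kind of workaround involved (the theme name and app ID below are just examples, so adjust for your own setup): Flatpak apps are sandboxed and can’t see the host’s theme directories, so you can grant access and force a theme with flatpak override. Qt apps like VLC need the matching Qt/KDE platform theme from Flathub rather than a GTK theme.

        # Let sandboxed apps read user-installed themes and icons
        flatpak override --user --filesystem=~/.themes --filesystem=~/.icons
        # Force a specific GTK theme for one app (theme and app ID here are examples)
        flatpak override --user --env=GTK_THEME=Breeze org.gnome.gedit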

    I agree regarding the professional apps. If you are tied into specific proprietary Windows software then Linux is difficult. The exception is Office 365, which is now both Windows and web-app based, and the web apps are close to feature parity with the desktop clients. The open source alternatives to proprietary Windows software can be very good, but there are often compromises (particularly collaboration, as that generally happens inside specific software’s walled gardens). Take LibreOffice: it’s very good and handles Office documents near seamlessly, but if your work uses Office then you lose the integration with OneDrive and Teams.

    In terms of Linux not supporting old software, I would caveat that this means old Linux software. It is very good at supporting other systems’ software through the various open source emulators etc. Also, Flatpak has changed things somewhat; software can come with its own set of libraries, although that does mean bloat in terms of space taken (and security issues and bugs, albeit limited to the app’s sandbox). And while Wine can be painful for some desktop apps, it is also very robust with a lot of software; it can be either a doddle or a nightmare. Meanwhile Proton has rapidly become very powerful when it comes to gaming.

    I disagree that it takes a lot of time to make basic things work. Generally Linux supports modern hardware well and I’ve had no issues myself with fresh installs across multiple different pieces of hardware (my custom desktop, a Raspberry Pi, and a living room PC). Printing/scanning remains probably the biggest issue, but I’ve not had to deal with that in a long time. Problem-solving bigger issues can be hard, though.


  • To answer your questions:

    When it comes to other distros: I currently use Linux Mint with the KDE Plasma desktop. The Debian/Ubuntu ecosystem is pretty easy to use and there are lots of guides out there for fixing/tinkering with Linux Mint (or Ubuntu, which largely also applies) because of their popularity. Lots of software is available as “.deb” packages, which can be installed easily on Linux Mint and other Debian-based systems including Ubuntu.
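
    For example, a downloaded .deb can be installed from the terminal with apt, which also pulls in its dependencies (the filename here is just a placeholder):

        sudo apt install ./some-package.deb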

    I’ve also been trying Nobara on a living room PC; that is Fedora-based. I like that too, although it has a very different package-manager setup.

    Whatever distro you choose, Flatpak is an increasingly popular way of installing software outside the traditional package managers. A Flatpak should just work on any distro. I would not personally recommend Snap, which is a similar system from Canonical (the people behind Ubuntu) but not as good in my opinion.
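
    As an illustration, installing an app from Flathub (the main Flatpak repository) looks the same on any distro - VLC here is just an example:

        # One-time setup if your distro doesn't ship with Flathub enabled
        flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
        # Install and run an app by its ID
        flatpak install flathub org.videolan.VLC
        flatpak run org.videolan.VLC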

    In terms of desktop environments, I like Linux Mint’s Cinnamon desktop, but have moved over to KDE having decided I prefer it after getting used to it on the Steam Deck. KDE has a Windows feel to it (although it’s very customisable and can be made to look like almost any interface). I’ve also used some of the lightweight environments like LXDE, XFCE etc - they’re nice and also customisable, but not as slick. With a decent graphics card, KDE can look very good on a desktop. The only desktop environment I personally don’t like is Gnome 3 (and the Unity shell from Ubuntu); that may just be personal preference, but if you’re coming from Windows I wouldn’t start with that desktop environment - it’s too much of a paradigm shift in my opinion. It is a popular desktop environment, however.
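
    If you want to try another desktop environment without reinstalling, on Debian/Ubuntu-based distros you can usually install one alongside your current one and pick it at the login screen. A sketch - package names vary slightly between distros:

        # KDE Plasma (a trimmed-down selection) and XFCE, for example
        sudo apt install kde-plasma-desktop
        sudo apt install xfce4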


  • Looking to make the switch (Linux@lemmy.ml)

    I’ve been dual booting Linux and Windows for maybe 10 years or so (and tinkered with Linux growing up before that). I think I’m maybe similar to you: technically apt when it comes to computers but not a programmer; I’m good at problem-solving issues with my computer and not afraid to “break” it.

    A few key things:

    • Make sure your important personal data, files etc. are kept secure and always backed up. This is probably obvious, but it lowers the threshold for tinkering and messing with the computer. I’ve reinstalled Windows and Linux multiple times, whether that’s getting round broken Windows updates, fixing Linux issues, or just switching which Linux distro I use. If you are confident your data is backed up, then reinstalling an OS is not a big deal.
    • Use multiple drives; don’t just partition one drive. Ideally each OS gets its own SSD; this will make dual booting much easier and also allows complete separation of issues. I have 4 drives in my PC currently - a 1TB SSD for Windows (the C drive), a 500GB SSD for Linux, and two 4TB data drives (one SSD, one a standard mechanical hard drive). SSDs are faster, but you can of course use a mechanical drive if you want.
    • When it comes to dual booting, if you have a separate Linux hard drive then Linux will only mess around with its own boot sector. It will just point at the Windows boot sector on the Windows hard drive - not touch it, but add it as an option to its boot menu. Then all you have to do is go into your BIOS and tell it to boot the Linux drive first, which will get you a boot menu to choose between Linux and Windows. Tinker with that boot menu (Grub2 usually) - I set mine to always boot the last OS selected, so I only have to think about the boot menu when I want to switch (see the sketch just after this list). Separate drives save you having to mess around with Windows recovery disks if things go wrong with the boot sector. One drive with a shared or multiple boot sectors can be messy.
    • Try a few distros using their live images. With most Linux distros you flash the image onto a USB stick and boot from it (or use VirtualBox in Windows to try Linux in a virtual machine), and it takes you into the full desktop environment running from the stick. You can then install from that, but you can also just use Linux that way. You can even run Linux entirely from those USB sticks (or an external drive) and get a feel for it, including installing more apps, upgrading etc., all using the USB stick as storage.
    • Also try a few different desktop environments and get a feel for which one you like. Most distros default to a particular desktop environment (Gnome, KDE, Cinnamon, etc). You only really need to test the desktop environments with one distro, as they’ll feel mostly the same in each distro.
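
    On the GRUB point above: the “boot the last selected OS” behaviour comes from two settings in /etc/default/grub. A minimal sketch, assuming a Debian/Ubuntu-based distro (others use grub-mkconfig instead of update-grub):

        # In /etc/default/grub
        GRUB_DEFAULT=saved
        GRUB_SAVEDEFAULT=true

        # Then regenerate the GRUB config
        sudo update-grub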

    If you know you want to use Pop_OS, then follow their guide on how to install. It’s generally very similar for all Linux OSs (there are other methods, but this is the simplest and most common):

    1. Download a disk image (ISO)
    2. Flash the disk image onto a spare USB stick. Balena Etcher is a very commonly used tool for this (see the sketch after this list for a terminal alternative).
    3. Restart your computer and go into your BIOS (usually the Del key just after reboot, sometimes Escape or F2) and change the boot order so that USB is first, above your hard drives.
    4. Insert the USB stick and restart the computer
    5. You should load into the Linux live environment set up by that distro. Pop_OS loads you directly into the installer; you can go to the desktop by clicking “Try Demo Mode” after setting up language and keyboard, or just continue installing.
    6. Select the hard drive you want to install onto. BE CAREFUL at this step; most installers are good at making clear which drives are which. The last thing you want to do is wipe a data drive or your main OS. Know your computer’s drives well, and if in doubt the safest thing is to unplug all the hard drives except the one you’re going to install Linux onto.
    7. Follow the installer set up (to create the main user account, etc) and install.
    8. After installing, reboot the system and go back into the BIOS. This time put your Linux drive at the top of the boot order (or below USB if you still want to boot other live images - remember to take out the stick! Generally it’s more secure to boot to a hard drive first and password-protect your BIOS, so people can only boot to USB when you decide). That’s it! Reboot, and select Linux from the new boot menu.
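
    On step 2: if you’d rather flash from a terminal than use Balena Etcher, dd works on any Linux system. Treat this as a sketch and double-check the device name - /dev/sdX is a placeholder, and dd will happily overwrite the wrong disk:

        # Identify your USB stick (a whole device like /dev/sdb, not a partition like /dev/sdb1)
        lsblk
        # Write the ISO to the stick (this erases it); replace the placeholder names
        sudo dd if=pop-os.iso of=/dev/sdX bs=4M status=progress conv=fsync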

    Linux has come a very long way when it comes to installing and setting up; installers are generally easy to use and work well, and hardware is usually recognised and set up for you. The exception will be your Nvidia graphics card - you will need to set up the Nvidia drivers. Pop_OS’s install guide shows how to do it.
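
    Pop_OS also offers an ISO with the Nvidia driver preinstalled, which is the easiest route. On Ubuntu/Mint-based systems the usual approach is the ubuntu-drivers tool - a sketch, assuming your distro ships it:

        # List the recommended proprietary drivers for your hardware
        ubuntu-drivers devices
        # Install the recommended driver automatically
        sudo ubuntu-drivers autoinstall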

    Hope that helps! Run out of characters!


  • This feels misleading? They’re claiming Linux has been hard-coded to 8 cores, but from what they describe in the article it is specifically the scaling of the scheduler?

    If I understood correctly, the more cores you have, the more you could scale up the time each individual task gets on a CPU core without the end user experiencing latency?

    I can see that would have a benefit in terms of user perception vs efficient use of processing time, but it doesn’t mean all the cores aren’t being used? It just means the kernel is still switching between tasks at, say, 5ms when it could be doing it at 20ms if you have lots of cores, and the user wouldn’t notice. I can imagine that would be more efficient, but it’s definitely not the same as being capped to 8 cores; all the cores and CPUs are being scheduled, just not in a way that might be the most optimal for some users.

    Is that right? I feel like the title massively overplays the issue if so. It should be fixed, but it doesn’t affect how many cores are used or even how fast they work, merely how big the chunks of time each task gets to run are, and how you can “hide” that from desktop users so the experience feels slick?
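
    To put toy numbers on it (this is not the kernel’s actual code, and the constants are made up): the idea as I understand it is that the scheduler scales its latency target by a factor derived from the CPU count, and that factor is capped at 8 - so machines with more cores get no further scaling, not fewer usable cores.

        #!/bin/bash
        # Toy sketch of a capped scaling factor - illustrative numbers only
        base_latency_ms=6
        cap=8
        ncpus=$(nproc)
        factor=$(( ncpus < cap ? ncpus : cap ))
        echo "CPUs: $ncpus, scaling factor: $factor (capped at $cap)"
        echo "Effective latency target: $(( base_latency_ms * factor )) ms"

    On a 16- or 64-core machine the factor stays at 8, which matches the “capped” behaviour described - the extra cores are still scheduled, they just don’t stretch the latency target further.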


  • I use Noto Sans, or the Liberation Sans / Liberation Serif fonts. I tend to have a mix, but Noto Sans for most desktop/GUI fonts.

    I also quite like Libre Caslon and EB Garamond as serif fonts for reading, so tend to use those with e-reader software or on my ereader device.

    I do install the old Microsoft Fonts just in case/out of habit but they seem to be disappearing from the internet fast now.
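
    For reference, on Debian/Ubuntu-based distros the classic Microsoft core fonts (Arial, Times New Roman etc) are still packaged as a downloader - a sketch, assuming the contrib/multiverse repository is enabled:

        sudo apt install ttf-mscorefonts-installer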


  • I think that’s true, but South Park’s humour has also changed over time. That’s the nature of satire - it lampoons human folly and vice, including the ideas of offence and moralising, which are so often borne out of hypocrisy.

    You mention taboo topics like 9/11 as if it’s a no-go area, but actually that has been a rich source of comedy and satire due to the level of hypocrisy displayed around it. The hypocrisy of uber-patriotism, religious nationalism, racism (you mention people having to be careful about the target and culture of jokes, but many groups found the exact opposite after 9/11 - certain ethnic and religious groups were all tarred with the same brush, particularly in the US) and more. Even the idea of self-censorship out of fear of causing offence. Some of this is being replayed right now with a contemporary conflict.

    South Park is in a similar tradition to other satire such as Private Eye in the UK, or The Onion, or various other TV shows. South Park is just a sometimes more extreme version, more willing to be deliberately offensive. But satire moves with the times like any other type of humour.


  • Essentially the engine is more modern, the graphics are nicer, and the simulation seems to be better. It is also hopefully a better base for newer mods.

    I love CS1, but the engine is 8 years old and PC gamers in particular have been hitting the engine’s limits in multiple ways for years. There were also some fundamental design decisions which limited the scope of what could be done with the game going forward - it is definitely time for a sequel.

    To be fair though, CS1 is going nowhere and has a massive amount of content available for it (including the huge body of free community content). It will probably take a couple of years before CS2 surpasses it. For console gamers, though, CS2 will probably quickly surpass what CS1 was able to offer.


  • I get where you're coming from, but in fairness the model can work. Cities Skylines 1's DLCs mostly added substantial content to the game, which over time built it into what it is today. At launch CS1 was a good game, far better than the premium Sim City 2013. I have over 1000hrs in the game, so for me it was good value; and a lot of people bought the game over the years at heavy discounts with a lot of the DLCs bundled.

    The downside with this model is when publishers release half-baked games and withhold core game mechanics to engineer DLC. From what they've shown of Skylines 2 that doesn't seem to be the case - it seems to be a fully featured city builder with more at launch than CS1 had. Obviously it will depend what the game is actually like at launch, and there are obvious hooks for DLC already.

    Compare that to a game like Sim City 2013: that released as a premium game with a shitty reduced scope, basic missing features and an always-online DRM requirement, got one crappy expansion, and was then completely abandoned by EA in a sorry state despite selling 2 million copies.

    If I had to pick a model I'd pick Paradox's.


  • I am confused by this post; there are 4 ways to add search engines to Firefox:

    1. From the settings page via the "add search engine" button, to pick one from the Firefox add-ons site. This is the "main" route for most users, as it ensures you're adding engines from a trusted source (so you won't accidentally add a fake version of a popular search engine that scrapes your data).

    2. Via the address bar. Any website that supports OpenSearch can be added by right clicking the address bar and selecting "add search engine name".

    3. Via the Mycroft project website, where almost any search engine in the directory can be added to Firefox.

    4. Via bookmarks and keywords. This is slightly more involved, but almost any engine can be added this way (see the example after this list).
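
    As an example of method 4 (the URL and keyword are just an illustration): create a bookmark whose location contains %s where the search terms should go, then set a keyword in the bookmark's properties:

        Location: https://duckduckgo.com/?q=%s
        Keyword:  dd

    Typing "dd some search terms" in the address bar then runs that search.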

    Android Firefox offers slightly different routes but again any search engine can be added. It is a bit more involved though.

    Firefox includes certain search engines by default as it gets revenue from the search engine providers for doing so, and Mozilla is transparent about this. Although Mozilla is independent, the Google search engine deal remains one of its biggest sources of income. That's how it survives.

    The default add-ons site meanwhile is a compromise between security and convenience for the majority of users, but people are not locked in to it and other search providers are not locked out of it.

    The Mullvad browser is modified Firefox btw, as is the Tor Browser it's itself based on. I don't know how much either contributes to the Mozilla Foundation. Tor is an open source project, but Mullvad is a commercial enterprise.


  • They haven’t u-turned, they have delayed implementation.

    This is a prime example of enshittification. What benefit do Unity’s users get in exchange for this new charge? Nothing - all this does is increase Unity’s income.

    The reason they need to change their income model makes sense - their primary income comes from helping mobile game makers push ads, and that is both volatile and threatened by privacy pushes, by Apple in particular.

    The commercial motivation makes sense, and yet it’s another example of how it leads to the enshittification of a service or product.


  • This doesn’t make sense. It’s more likely we’ll pack more into high-end devices than say goodbye to them for tasks like gaming.

    Computing power has been constantly improving for decades and miniaturisation is part of that. I have desktop PCs at work in small form factors that are more powerful than the gaming PC I used to have 10 years ago. It’s impressive how far things have come.

    However, at the bleeding edge of CPUs, GPUs and APUs, high-powered kit needs more space for very good reasons. One is cooling - if you want to push any chip to its limits you’ll generate heat, so you need space to cool it. The vast majority of the space in my desktop is for fans and airflow. Even the vast majority of the bulk of my graphics card is actually space for cooling.

    The second is scale - in a small form factor device you cram in as much as you can, and these days you can get a lot in a small space. But in my desktop gaming tower I’m not constrained by such limits. So I have space for a high quality power supply unit, a spacious motherboard with a wealth of options for expansion, a large graphics card so I can have a cutting edge chip and keep it cool, space for multiple storage devices, and also lots and lots of fans plus a cooling system for the CPU.

    Yes, in 5 years a smaller device will be more capable for today’s games. But the cutting edge will also have moved on and you’ll still need a cutting edge large form factor device for the really bleeding edge stuff. Just as now - a gaming laptop or a games console is powerful but they have hard upper limits. A large form factor device is where you go for high end experiences such as the highest end graphics and now increasingly high fidelity VR.

    The exception is that many computing tasks no longer need anything like high-end hardware (office software, web browsing, 4K movies), and other tasks largely don’t (like video editing), so big desktops are becoming more niche in the sense that high-end gaming is their main use for many home users. That’s been a long-running trend, and it’s not related to APUs.

    The other exception is cloud streaming of games and offloading processing into the cloud. In my opinion that is what will really bring an end to needing large form factor devices. We’re not quite there yet, but I suspect that is what will really push form factors down, rather than APUs etc.