• 0 Posts
  • 43 Comments
Joined 1 year ago
Cake day: August 25th, 2023



  • lol. Did this in my old building - the dryer was on an improperly rated circuit and the breaker would trip half the time, eating my money and leaving wet clothes.

    It was one of the old “insert coin, push metal chute in” types. Turns out you could bend a coat hanger and fish it through a hole in the back to trip the lever that the push mechanism was supposed to engage. Showed everyone in the building.

    The landlord came by the building a month later and asked why there was no money in the machines. I told him, “we all started going to the laundromat down the street because it was cheaper.”



  • The feature is explicit sync, which is a brand-new graphics stack API that would fix some issues with NVIDIA rendering under Wayland.

    It’s not a big deal. Canonical basically said “this isn’t a bug fix or a security patch, so it’s not getting backported into our LTS release.” If you want it, you have to install GNOME/mutter from source, switch operating systems, or just wait a few months for the next Ubuntu release.



  • Reddit has way more data than you would have been exposed to via the API, though: they can look at things like the user’s ASN (is the traffic coming from a datacenter?) and whether they were using a VPN, and they track behavioral signals like scroll position, cursor movements, read time before posting a comment, how long it takes to type that comment, etc. (a rough sketch of how such signals could be combined is at the end of this comment).

    > no one at reddit is going to hunt these sophisticated bots because they inflate numbers

    You are conflating “don’t care about bots” with “don’t care about showing bot-generated content to users”. If the latter increases activity and engagement, there is no reason to put a stop to it. However, when it comes to building predictive models, running A/B tests, and making other internal decisions, they have a vested financial interest in focusing on organic users: how humans interact with humans and/or bots is meaningful data; how bots interact with other bots is not.
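    A minimal Python sketch of how behavioral signals like these could be folded into a crude bot-likelihood score. Every field name, weight, and threshold below is invented purely for illustration; Reddit’s actual models are not public.

```python
from dataclasses import dataclass

@dataclass
class SessionSignals:
    # All fields and thresholds are hypothetical, purely for illustration.
    from_datacenter_asn: bool  # IP's ASN belongs to a hosting provider
    on_vpn: bool               # known VPN exit node
    scroll_events: int         # scroll events recorded before posting
    read_time_s: float         # seconds between page load and submitting the comment
    typing_time_s: float       # seconds spent typing the comment
    comment_chars: int         # length of the submitted comment

def bot_likelihood(s: SessionSignals) -> float:
    """Crude additive score in [0, 1]; higher means more bot-like."""
    score = 0.0
    if s.from_datacenter_asn:
        score += 0.35
    if s.on_vpn:
        score += 0.10
    if s.scroll_events == 0:
        score += 0.15  # replied without ever scrolling the page
    if s.read_time_s < 2.0:
        score += 0.20  # replied almost instantly after the page loaded
    if s.typing_time_s > 0 and s.comment_chars / s.typing_time_s > 20:
        score += 0.20  # more than 20 chars/sec is implausibly fast typing
    return min(score, 1.0)

if __name__ == "__main__":
    suspicious = SessionSignals(True, False, 0, 1.2, 3.0, 400)
    print(bot_likelihood(suspicious))  # ~0.9
```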



  • > To compare every comment on reddit to every other comment in reddit’s entire history would require an index

    You think that in Reddit’s 20-year history no one has thought of indexing comments for data-science workloads? A cursory glance at their engineering blog indicates they already perform far more computationally demanding tasks on comment data for content filtering (see the toy index sketch at the end of this comment).

    > you need to duplicate all of that data in a separate database and keep it in sync with your main database without affecting performance too much

    Analytics workflows are never run on the production database; they run on read replicas, which are built asynchronously from the transaction logs precisely so they don’t affect read/write performance on the primary (see the replica-routing sketch at the end of this comment).

    > Programmers just do what they’re told. If the managers don’t care about something, the programmers won’t work on it.

    Reddit’s entire monetization strategy is collecting user data and selling it to advertisers; it’s incredibly naive to think that they don’t have a vested interest in identifying organic engagement.
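    To make the indexing point concrete, here is a toy Python sketch of a shingle-based inverted index: a new comment is only compared against comments it shares at least one shingle with, rather than against the entire comment history. The class, thresholds, and similarity measure are invented for illustration and say nothing about how Reddit actually does this.

```python
from collections import defaultdict

def shingles(text: str, k: int = 3) -> set[int]:
    """Hash every k-word shingle of a comment into an integer."""
    words = text.lower().split()
    return {hash(" ".join(words[i:i + k])) for i in range(max(len(words) - k + 1, 1))}

class CommentIndex:
    """Toy near-duplicate index: shingle hash -> set of comment ids."""

    def __init__(self) -> None:
        self.postings: dict[int, set[str]] = defaultdict(set)  # shingle hash -> {comment_id}
        self.stored: dict[str, set[int]] = {}                  # comment_id -> shingle set

    def add(self, comment_id: str, text: str) -> None:
        sh = shingles(text)
        self.stored[comment_id] = sh
        for h in sh:
            self.postings[h].add(comment_id)

    def near_duplicates(self, text: str, threshold: float = 0.8) -> list[str]:
        sh = shingles(text)
        # Only candidates that share a shingle get scored, not the full history.
        candidates = {cid for h in sh for cid in self.postings.get(h, ())}
        hits = []
        for cid in candidates:
            other = self.stored[cid]
            jaccard = len(sh & other) / len(sh | other)
            if jaccard >= threshold:
                hits.append(cid)
        return hits

if __name__ == "__main__":
    idx = CommentIndex()
    idx.add("c1", "this is the way")
    idx.add("c2", "completely unrelated text about coin-op dryers")
    print(idx.near_duplicates("this is the way"))  # ['c1']
```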
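    And a minimal sketch of the read-replica pattern described above: analytics queries go to a replica that is kept up to date asynchronously from the primary’s transaction log, so they never compete with production reads and writes. The hostnames, credentials, table name, and the choice of PostgreSQL/psycopg2 are all assumptions for illustration.

```python
import psycopg2  # assumes a PostgreSQL setup; any driver with primary/replica DSNs works the same way

# Hypothetical connection strings; only the routing matters.
PRIMARY_DSN = "host=db-primary.internal dbname=app user=app"    # handles all writes
REPLICA_DSN = "host=db-replica-01.internal dbname=app user=ro"  # async copy built from the WAL

def run_analytics_query(sql: str, params=()):
    """Heavy read-only queries hit the replica so they can't slow the primary down."""
    with psycopg2.connect(REPLICA_DSN) as conn:
        with conn.cursor() as cur:
            cur.execute(sql, params)
            return cur.fetchall()

def write(sql: str, params=()):
    """Writes always go to the primary; the replica catches up asynchronously."""
    with psycopg2.connect(PRIMARY_DSN) as conn:
        with conn.cursor() as cur:
            cur.execute(sql, params)

if __name__ == "__main__":
    # Hypothetical 'comments' table, just to show an analytics-style query.
    top_authors = run_analytics_query(
        "SELECT author_id, count(*) FROM comments GROUP BY author_id ORDER BY 2 DESC LIMIT 10"
    )
    print(top_authors)
```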