• @xavier666@lemm.ee
    19 • 11 months ago

    The amount of data that needs to be exchanged under this approach does not scale. Assume there are 3 instances with 100 users each. Even if lots of users upvote/post/comment, the traffic is exchanged only between those 3 servers. But with 300 single-user instances, the same traffic and storage must be duplicated to every instance, which can create a huge load for everyone, both sender and receiver, and might not be viable in the long run. PS: I am assuming that the instances periodically update content by fetching the deltas.
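
    A back-of-envelope sketch of the fan-out argument above, using the comment's hypothetical numbers (300 users split across 3 instances vs. 300 single-user instances), assuming each activity is copied to every other instance that subscribes to the community:

    ```python
    # Rough fan-out comparison for federated delivery (hypothetical numbers
    # from the comment; real subscription patterns vary per community).

    def deliveries_per_activity(num_instances: int) -> int:
        """Copies sent per activity: one to each remote subscribed instance."""
        return num_instances - 1

    # Scenario A: 300 users concentrated on 3 instances.
    fanout_a = deliveries_per_activity(3)    # 2 copies per activity
    # Scenario B: the same 300 users on 300 single-user instances.
    fanout_b = deliveries_per_activity(300)  # 299 copies per activity

    print(fanout_b / fanout_a)  # ~150x more network deliveries per activity
    ```

    Same user activity, same content, but the number of copies on the wire (and in storage) grows with the number of instances, which is the duplication the comment is worried about.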

    • @jcg@halubilo.social
      4 • 11 months ago

      I am assuming that the instances periodically update content by fetching the deltas.

      That’s incorrect: so far there is no batching set up for sending multiple posts at once, and the exchange is initiated by the sending server, not the receiving server.
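
      A minimal sketch of what push-style delivery means in practice: the sender makes one request per activity per subscriber inbox, rather than the receiver fetching deltas. Names and the `post` callback are hypothetical; real ActivityPub delivery uses signed HTTP POSTs of JSON activities.

      ```python
      # Sketch of push-style federation: the ORIGIN server initiates one
      # request per (activity, inbox) pair -- no batching, no polling.
      # All names here are illustrative, not Lemmy's actual internals.

      def deliver(activity: dict, subscriber_inboxes: list[str], post=None) -> list[tuple]:
          """Push a single activity to every subscriber inbox, one request each."""
          send = post or (lambda url, body: None)  # stand-in HTTP client
          sent = []
          for inbox in subscriber_inboxes:
              send(inbox, activity)  # in reality: a signed HTTP POST of JSON
              sent.append((inbox, activity["id"]))
          return sent

      out = deliver({"id": "act-1", "type": "Like"},
                    ["https://a.example/inbox", "https://b.example/inbox"])
      print(len(out))  # 2 separate deliveries for one activity
      ```

      With 300 subscribing instances, that loop runs 300 times for every single upvote, which is why the lack of batching matters for the scaling question above.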

    • @interdimensionalmeme@lemmy.ml
      3 • 11 months ago

      Just go to your average big, popular subreddit and look at all the text of every post and comment for the week. That’s still a minuscule amount of data: a few megabytes uncompressed.

      And Lemmy won’t get to that point of popularity and traffic for a very long time.

      And even then, it’s an easy problem to solve. Each instance creates a chunk of a day’s data, signs it, and shares it over a BitTorrent-like protocol. Even NNTP, a massively archaic piece of infrastructure, can manage this; it would be a piece of cake for Lemmy.
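
      A toy sketch of the signed-daily-chunk idea, under stated assumptions: the chunk format and function names are invented, and a plain HMAC stands in for a real public-key signature (a real instance would use something like Ed25519 so peers can verify without a shared secret). The signed digest is what could then be announced for torrent-style distribution.

      ```python
      # Toy daily-chunk signer (illustrative only -- not a real Lemmy feature).
      # HMAC-SHA256 stands in for a proper asymmetric signature.
      import hashlib
      import hmac
      import json

      def make_daily_chunk(day: str, activities: list[dict], key: bytes) -> dict:
          """Bundle one day's activities into a canonical, signed blob."""
          payload = json.dumps({"day": day, "activities": activities},
                               sort_keys=True).encode()
          sig = hmac.new(key, payload, hashlib.sha256).hexdigest()
          return {"day": day, "payload": payload, "sig": sig}

      def verify_chunk(chunk: dict, key: bytes) -> bool:
          """Recompute the MAC and compare in constant time."""
          expected = hmac.new(key, chunk["payload"], hashlib.sha256).hexdigest()
          return hmac.compare_digest(expected, chunk["sig"])

      chunk = make_daily_chunk("2023-07-01",
                               [{"id": "act-1", "type": "Post"}],
                               b"instance-key")
      print(verify_chunk(chunk, b"instance-key"))  # True
      ```

      Receivers would fetch the chunk from any peer, verify the signature against the origin instance's key, and ingest the whole day in one transfer instead of thousands of individual pushes.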