• Rentlar@beehaw.org

    I agree with your overall point that AI isn't actually learning (I'd describe it as optimizing).

    However, I will say that inferring from what is not said is tricky to apply generally, and you do exactly that in your reply by jumping to the following conclusion:

    "The fact that they refused to reply hints that the reply would be against their best interests, either lying in a liable way or saying the truth and potentially ruining their investment."

    This is dangerous and can be used disingenuously, and I discourage using it in our discourse.

    • Lvxferre@lemmy.ml

      I do agree with you that it's tricky to apply, but it's still useful; and while the danger you're talking about is real, it has more to do with the certainty assigned to the inference than with the inference itself.

      That's why I said it "hints that the reply…" instead of "means", and that the reason Google answered is "likely related": both words are there for a good reason, to highlight that this is not a firm conclusion. As in: it might be wrong, and both words acknowledge that.

      Even though it's not solid info, just an inference, I still felt it was worth sharing, for two reasons that make the lack of reply noteworthy:

      • Google, OpenAI and Meta/Facebook are roughly in the same situation (contacted by the author due to LLM development), and yet only one answered. Why?
      • Politicians and corporations are generally eager to advertise their stuff, but extra careful about what they say on the record.