• 0 Posts
  • 219 Comments
Joined 1 year ago
Cake day: June 1st, 2023

  • I did actually, and it worked, though they may have changed it by now.

    Think I have a screenshot somewhere…

    Edit: they’ve definitely altered the way it works. I’m sure there’s a way to get around whatever guardrails they added with enough creativity, unless they’ve completely rebuilt the model and removed any programming training data.





  • “An LLM making business decisions has no such control or safety mechanisms.”

    I wouldn’t say that - there’s nothing preventing them from building in (stronger) guardrails and retraining the model based on input.

    If it turns out the model suggests that someone kill themselves in response to very specific input, shouldn’t the company be held accountable for retraining the model and preventing that from happening again?

    From an accountability perspective, there’s no difference between a text-generating machine and a soda-dispensing machine.

    The owner and builder should be held accountable, which puts a financial incentive on making these tools more reliable and safer. You wouldn’t excuse Tesla from accountability when their self-driving kills someone because they didn’t test it enough or build in enough safeguards – that’d be insane.
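
    The guardrails mentioned above can be sketched as a simple post-generation output filter. This is a hypothetical toy illustration using keyword matching; real deployments use trained safety classifiers, and all names here (`apply_guardrail`, `SELF_HARM_PATTERNS`, `SAFE_RESPONSE`) are made up for the sketch.

    ```python
    # Toy sketch of an output-side guardrail: scan the model's response
    # for self-harm phrasing and substitute a safe refusal.
    # Keyword matching is a stand-in for a proper safety classifier.

    SELF_HARM_PATTERNS = [
        "kill yourself",
        "end your life",
        "harm yourself",
    ]

    SAFE_RESPONSE = (
        "I can't help with that. If you're struggling, please reach out "
        "to a crisis line or someone you trust."
    )

    def apply_guardrail(model_output: str) -> str:
        """Return the model output, or a safe refusal if it trips a pattern."""
        lowered = model_output.lower()
        if any(pattern in lowered for pattern in SELF_HARM_PATTERNS):
            return SAFE_RESPONSE
        return model_output
    ```

    Filtering at the output layer like this works even when the model itself hasn’t been retrained, which is why vendors can bolt on new restrictions between model versions.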