• artaxadepressedhorse@lemmyngs.social · 1 year ago

    I keep seeing people post this same idea, and I see no proof that it would actually happen.

    Why would you need "real" CP if there's like-for-like-quality AI CP out there?

    Also, aside from going out of our way to wreck the lives of individuals who look at the stuff, are there any actual concrete stats showing that we're preventing any significant amount of real-life child abuse by giving up privacy rights or paying FBI agents to post CP online and entrap people? I don't get behind the "if it theoretically helped one single child, I'd genocide a nation…" BS. I want to see what we've gained so far from these policies before I agree to give the govt more power by expanding them.

    • Radiant_sir_radiant@beehaw.org · 1 year ago

      This is a difficult one to get morally 'right', but I can see how legal (or at least not-all-out-illegal) AI CP could make the situation worse. First, given today's technological advances, it will be next to impossible for law enforcement to reliably distinguish between illegal real CP and not-illegal artificial CP, meaning images and videos of actual child abuse could no longer be used as evidence in court, as the defendant can simply claim they're AI-generated.
      Second, while a lot of consumers of CP might be happy with AI material, I expect that for a substantial number, the real thing will be considered superior or a special treat… much as many consumers of 'normal' porn prefer amateur porn over mass-produced studio flicks.
      The two combined would mean there's still a considerable market for real CP, but the prosecution of child abusers would be much, much harder.

      • Krauerking@lemy.lol · 1 year ago

        See, the biggest issue is that there isn't an easy way to test any hypothesis here, for one pretty big, obvious reason if you look at it.

        If you get it wrong building a battery, you maybe burn a building down; if you get it wrong trying to cure pedophilia, you end up with a molested or hurt kid at worst. And a lot more people are gonna have strong emotions about the child than about a building, even if more lives are lost in the fire.

        It's such a big emotionally charged thing to get wrong. How do you agree to take the risk when no one would feel comfortable with the worst outcome?

        So instead it's easy, and potentially just proper, to push it aside and blanket-say "bad". And I hate black-or-white issues. But it's impossible to answer without doing, and impossible to do without an answer.

        • Radiant_sir_radiant@beehaw.org · 1 year ago

          > See the biggest issue is that there isn't an easy way to test any hypothesis here.

          If I had to speculate, I could see both turning out to be true. There are probably some pedophiles for whom AI CP will help handle the urge, and some for whom the readily available content will make actual abuse seem more morally acceptable. But then again, we'll probably never know for sure unless we find some measurable criteria, as in your nice battery example. Criteria such as "is the building on fire" give you quick, near-immediate feedback on whether or not you've been successful.

          The discussion reminds me of the never-ending debate on whether drugs should be legal, though. If there are to be tests with AI CP, could there be a setup similar to supplying recovering heroin addicts (and only them) with methadone? That would allow the tests to be conducted in a controlled environment, with a control group and according to reproducible criteria.