A new report warns that the proliferation of child sexual abuse images on the internet could become much worse if something is not done to put controls on artificial intelligence tools that generate deepfake photos.
See, the biggest issue is that there isn't an easy way to test any hypothesis here, for what is a pretty big and obvious problem once you look at it.
If you get it wrong building a battery, you maybe burn a building down; if you get it wrong trying to treat pedophilia, you end up with a molested or hurt kid at worst. And a lot more people are going to have strong emotions about the child than about the building, even if more lives are lost in the fire.
It's such a big, emotionally charged thing to get wrong. How do you agree to take the risk when no one would feel comfortable with the worst outcome?
So instead it's easy, and potentially even proper, to push it aside and give a blanket "bad". I hate black-and-white issues, but this one is impossible to answer without doing, and impossible to do without an answer.
If I had to speculate, I could see both turning out to be true. There are probably some pedophiles whom AI CP will help manage the urge, and some for whom the readily available content will make actual abuse seem more morally acceptable. But then again, we'll probably never know for sure unless we find some criteria like in your nice battery example. A criterion such as "is the building on fire" gives you near-immediate feedback on whether or not you've been successful.
The discussion reminds me of the never-ending debate on whether drugs should be legal, though. If there were to be tests with AI CP, could there be a setup similar to supplying recovering heroin addicts (and only them) with methadone? That would allow the tests to be conducted in a controlled environment, with a control group and according to reproducible criteria.