Is your enhancement related to a problem? Please describe.
One issue with AI-generated (or augmented) content is that it's currently hard for a consumer of that content to tell, merely by reading or viewing it, whether it was AI generated. Put more simply, without taking some action here we're potentially contributing to the erosion of trust in online content and interactions. With ClassifAI we have the opportunity to help by (1) tracking what content is generated or augmented via AI and (2) providing mechanisms to alert users on the front-end when viewing that content. While we cannot prevent deepfakes and similar malevolent actors, we can still contribute as best we can to improving trust in content and interactions online.
Designs
As originally referenced in #398 (comment), we can look to provide a way to track what content has been generated or augmented with AI and then display attribution where that data exists. While we have an image generation credit and license in the caption of generated images, we could go further with the following:

- A credit overlay on the front-end for images (where said data exists)

While the CAI tooling does not yet appear to support similar tracking and attribution for text-based content, we could still look to add some custom ClassifAI meta to posts where AI tooling is used, tracking what was done, when, and by which users, so that data could be presented or utilized later on (e.g. post footnotes referencing AI features used in generating the post content, or reporting on which features are most used).
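As a rough illustration only: the custom post meta described above could store one entry per AI action. Every key and value below is a hypothetical sketch, not an existing ClassifAI schema, but it captures the three pieces of data called out (what was done, when, and by which user) plus the provider, which reporting would likely need:

```json
{
  "classifai_ai_usage": [
    {
      "feature": "excerpt_generation",
      "provider": "openai",
      "action": "generated",
      "user_id": 12,
      "timestamp": "2024-01-15T14:31:00Z"
    }
  ]
}
```

Storing this as an array keyed by action would let both front-end attribution (footnotes) and admin reporting read from the same meta without separate bookkeeping.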
Describe alternatives you've considered
No response
Code of Conduct