At Positive, we are committed to ensuring that Uma, our AI assistant, complies with the requirements of the EU AI Act by operating as a transparent, safe, and ethical solution.
Uma is classified as a low-risk AI system, and we apply the measures necessary to maintain that compliance.
Classification of Uma as a low-risk AI system under the EU AI Act.
Uma is used exclusively for business-related tasks within the Positive Platform, such as generating insights, suggestions, and contextual assistance based on user input.
Uma is not used for high-risk applications such as biometric surveillance, critical decision-making in healthcare, law enforcement, or other regulated high-risk domains.
Uma has been internally assessed against the risk criteria defined by the EU AI Act and is categorized as a low-risk AI system based on its intended use and scope.
Clear communication about AI-powered functionalities and AI-generated outputs.
Uma ensures that users can always identify when they are interacting with an AI system, in accordance with Article 50 of the EU AI Act.
– Interactions involving AI are clearly labeled as “AI Assistant” wherever they appear.
– Content generated by the AI assistant is identifiable, readable, and fully copyable.
Ensuring safe, transparent, and ethical use of AI features, including the recognition and handling of potential errors.
Uma is designed to provide clear and reliable outputs for specific tasks.
However, like any AI system, it may occasionally produce errors, inaccuracies, or incomplete interpretations.
– Transparency: Users are explicitly informed that AI-generated outputs are suggestions and require user validation to ensure accuracy.
– Safety: By acknowledging potential errors, Uma encourages users to review and verify outputs before making decisions.
– Ethics: AI-generated outputs are designed to assist users without replacing their judgment, ensuring users retain full control over decisions.
– Error handling: Feedback mechanisms are available to allow users to report errors, contributing to the continuous improvement of the system.
Transparency in the event of substantial modifications or transfer of AI systems.
We comply with Articles 3 and 25 of the EU AI Act, which require clear documentation whenever the AI system is substantially modified or transferred to a third party.
We maintain and provide technical documentation, including system design documents and risk assessments, so that any third party performing such modifications can meet the applicable AI Act obligations.
By structuring our measures around these key controls, Positive ensures that Uma is a reliable, transparent, and EU AI Act–compliant AI solution.