Uma

At Positive, we are committed to ensuring that Uma, our AI assistant, complies with the requirements of the EU AI Act by providing a transparent, safe, and ethical solution.
Uma is classified as a low-risk AI system, and we apply the necessary measures to ensure its compliance.

Risk category

Control

Classification of Uma as a low-risk AI system under the EU AI Act.


What does this mean?

Uma is used exclusively for business-related tasks within the Positive Platform, such as generating insights, suggestions, and contextual assistance based on user input.


These use cases are considered low-risk under the EU AI Act guidelines.

Uma is not used for high-risk applications such as biometric surveillance, critical decision-making in healthcare, law enforcement, or other regulated high-risk domains.

How we comply

Uma has been internally assessed against the risk criteria defined by the EU AI Act and is categorized as a low-risk AI system based on its intended use and scope.

Transparency commitments

Control

Clear communication about AI-powered functionalities and AI-generated outputs.


What does this mean?

Uma ensures that users can always identify when they are interacting with an AI system, in accordance with Article 50 of the EU AI Act.

How we comply

– Interactions involving AI are clearly labeled as “AI Assistant” wherever they appear.
– Content generated by the AI assistant is identifiable, readable, and fully copyable.
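
To make the two points above concrete, the sketch below shows one way an AI-generated message could be rendered with an explicit “AI Assistant” label and a copy control in a web front end. This is a minimal illustration only: the AssistantMessage type, the renderAssistantMessage function, and the CSS class name are hypothetical assumptions, not the actual Positive Platform implementation.

```typescript
// Hypothetical sketch: rendering an AI-generated message with an explicit
// "AI Assistant" label and a copy affordance. Names and structure are
// illustrative assumptions, not the actual Positive Platform code.

interface AssistantMessage {
  id: string;
  text: string;          // AI-generated content
  generatedByAI: true;   // flag that triggers the visible label
}

function renderAssistantMessage(msg: AssistantMessage): HTMLElement {
  const container = document.createElement("article");
  container.dataset.messageId = msg.id;
  container.setAttribute("aria-label", "AI Assistant message");

  // Visible label so users always know the content is AI-generated.
  const label = document.createElement("span");
  label.className = "ai-assistant-label";
  label.textContent = "AI Assistant";

  // Plain-text body so the output stays readable and selectable.
  const body = document.createElement("p");
  body.textContent = msg.text;

  // Explicit "copy" control in addition to normal text selection.
  const copyButton = document.createElement("button");
  copyButton.textContent = "Copy";
  copyButton.addEventListener("click", () => {
    void navigator.clipboard.writeText(msg.text);
  });

  container.append(label, body, copyButton);
  return container;
}
```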


Safety and ethical use assurance

Control

Ensuring safe, transparent, and ethical use of AI features, including the recognition and handling of potential errors.

What does this mean?

Uma is designed to provide clear and reliable outputs for specific tasks.

However, like any AI system, it may occasionally produce errors, inaccuracies, or incomplete interpretations.


How we comply


– Transparency: Users are explicitly informed that AI-generated outputs are suggestions and require user validation to ensure accuracy.
– Safety: By acknowledging potential errors, Uma encourages users to review and verify outputs before making decisions.
– Ethics: AI-generated outputs are designed to assist users without replacing their judgment, ensuring users retain full control over decisions.
– Error handling: Feedback mechanisms are available to allow users to report errors, contributing to the continuous improvement of the system.
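
As a purely illustrative sketch of how such an error-report feedback mechanism could be wired up, the snippet below submits a structured report about a specific AI output. The /api/ai-feedback endpoint, the AIOutputFeedback shape, and all field names are hypothetical assumptions, not the actual Positive Platform API.

```typescript
// Hypothetical sketch of a user feedback report for an AI-generated output.
// The endpoint and field names are illustrative assumptions, not the real API.

interface AIOutputFeedback {
  messageId: string;                              // which AI output is being reported
  category: "inaccurate" | "incomplete" | "other";
  comment?: string;                               // optional free-text description
  reportedAt: string;                             // ISO 8601 timestamp
}

async function reportAIOutputError(feedback: AIOutputFeedback): Promise<void> {
  // Send the report to a (hypothetical) feedback endpoint, feeding the
  // continuous-improvement loop described above.
  const response = await fetch("/api/ai-feedback", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(feedback),
  });
  if (!response.ok) {
    throw new Error(`Feedback submission failed: ${response.status}`);
  }
}

// Example usage:
void reportAIOutputError({
  messageId: "msg-123",
  category: "inaccurate",
  comment: "The suggested total does not match the underlying data.",
  reportedAt: new Date().toISOString(),
});
```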

Modification and transfer obligations

Control

Transparency in the event of substantial modifications or transfer of AI systems.


What does this mean?

Uma must comply with Articles 3 and 25 of the EU AI Act, which require clear documentation for any substantial modification of the AI system or for its transfer to a third party.

How we comply

We maintain and provide technical documentation, such as system design and risk assessments, to ensure that any third party performing modifications complies with applicable AI Act obligations.

By structuring our measures around these key controls, Positive ensures that Uma is a reliable, transparent, and EU AI Act–compliant AI solution.
