
Conversation

@krishvoor

This PR aims to increase the limits and requests for backend-cron
so that it can be deployed on ppc64le.

Signed-off-by: Krishna Harsha Voora krishvoor@in.ibm.com

@openshift-ci

openshift-ci bot commented Oct 22, 2021

Hi @krishvoor. Thanks for your PR.

I'm waiting for a 3scale member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@krishvoor
Author

Hi @eguzki 👋
Can you please help review this PR?

@miguelsorianod
Contributor

Hi @krishvoor,

Can you run `make templates` and include the changes in the PR?

Thank you.

This PR aims to increase the limits and requests for backend-cron
so that it can be deployed on ppc64le.

Signed-off-by: Krishna Harsha Voora <krishvoor@in.ibm.com>
```diff
 Limits: v1.ResourceList{
-	v1.ResourceCPU:    resource.MustParse("150m"),
-	v1.ResourceMemory: resource.MustParse("150Mi"),
+	v1.ResourceCPU:    resource.MustParse("500m"),
+	v1.ResourceMemory: resource.MustParse("500Mi"),
```
Contributor

@miguelsorianod commented Oct 25, 2021


I understand the values chosen for the resource requests were what was needed to be able to start the pod on ppc64.
What's the reasoning behind the chosen resource limit values?

Author

In our smoke testing on ppc64le, we could reproduce a successful deployment only with these resource allocations.
Anything less than the chosen values results in an OOMKilled situation.
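
For reference, a minimal sketch of what the backend-cron container's resources section ends up looking like with the increased values, using the request and limit figures quoted later in the thread; this is illustrative and not copied from the generated templates:

```yaml
# Illustrative only: backend-cron resources with the increased defaults discussed in this PR.
# Figures are taken from the summary later in the thread; exact generated values may differ.
resources:
  requests:
    cpu: 100m      # up from 50m
    memory: 100Mi  # up from ~40Mi
  limits:
    cpu: 500m      # up from 150m
    memory: 500Mi  # up from 150Mi; this is the part that avoids the OOMKill on ppc64le
```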

@qlty-cloud-legacy

Code Climate has analyzed commit 43bd69d and detected 0 issues on this pull request.

View more on Code Climate.

@miguelsorianod miguelsorianod merged commit 10413ca into 3scale:master Oct 25, 2021
@thomasmaas
Contributor

Am I correct to understand that we bump the minimal needed resources for all users, while they are only needed for ppc64le? Would it make sense/be possible to specify resources depending on the OS, or to make them editable and document how users running ppc64le should raise them? (Not for 2.11.1, but in a future update.)

@eguzki
Member

eguzki commented Oct 25, 2021

Actually, this is not the first time we have increased the default limits of this very same component for the 2.11.0 release. It was done here: #592

Some notes:

  • The delta added to the required resources is really small, and only for one component, backend-cron, which is not a CPU- or memory-"greedy" process.
    CPU: from 50 millicores to 100 millicores
    MEM: from 40 MB to 100 MB

  • The limits, which are what ppc64le really needs to avoid OOM issues, have been increased to the minimum required, which is reasonable.
    CPU: from 150m to 500m. I guess this is not really required and is somewhat arbitrary.
    MEM: from 150Mi to 500Mi. This is really the fix for the OOM.

Having said that, it is all about the defaults. With the given defaults, 3scale can be deployed on x86_64 and ppc64le with the minimum required increased only a little (by 50 millicores of CPU and 60 MB of memory).

The customization of the limits and requests through the CR has been available for some time now. It would require a documentation note, though.
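
As an illustration of that customization, a minimal sketch of overriding the backend-cron resources through the APIManager CR; the field path spec.backend.cronSpec.resources is an assumption here and should be checked against the APIManager CRD reference before use:

```yaml
# Hypothetical example: per-deployment resource overrides via the APIManager CR.
# The spec.backend.cronSpec.resources path is an assumption; verify it against the CRD docs.
apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: example-apimanager
spec:
  wildcardDomain: example.com
  backend:
    cronSpec:
      resources:
        requests:
          cpu: 100m
          memory: 100Mi
        limits:
          cpu: 500m
          memory: 500Mi
```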

Regarding OS-dependent limit values, one question: can we assume that all nodes in a cluster run the same OS arch?
If the answer is no, then it is hard for the operator to tell what the target OS arch is. It would require using the node affinity mechanism to assign pods to specific nodes.

On the other hand, if the answer is yes and all nodes are running the expected OS arch, then implementing defaults that depend on the OS would make sense, to lower the required resources for x86_64.
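
For completeness, a minimal sketch of the node affinity mechanism mentioned above, pinning a pod to ppc64le nodes via the standard kubernetes.io/arch node label; this is generic Kubernetes configuration, not something the operator generates today:

```yaml
# Generic Kubernetes example: constrain scheduling to ppc64le nodes via the standard arch label.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/arch
              operator: In
              values:
                - ppc64le
```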
