After deploying a Mendix app, the environment shows the Runtime still spinning, but the application is running well.

Hi,

After deploying the Mendix app, the environment shows the Runtime still spinning with the message "The resource is still processing:", but the application is running well.

When checking the operatorconfiguration pod logs, the following error is shown:

{"level":"error","ts":"2026-04-10T13:53:30Z","logger":"controller_runtime","msg":"Failed to load Pod status","Runtime.Namespace":"xxx-xx-xxx","Runtime.Name":"xxx","Pod.Name":"xxx-master-g34545-trtrt","error":"pod (xx.xxx.xx.xxx) runtime status cannot be retrieved: invalid response code: 500 Internal Server Error"}

Stack trace from the same log entry:

gitlab.rnd.mendix.com/digital_ecosystems/mendix-operator/util/mxlog.Logger.logErrorMessage
    /usr/src/gitlab.rnd.mendix.com/digital_ecosystems/mendix-operator/util/mxlog/mxlog.go:168
gitlab.rnd.mendix.com/digital_ecosystems/mendix-operator/util/mxlog.Logger.Error
    /usr/src/gitlab.rnd.mendix.com/digital_ecosystems/mendix-operator/util/mxlog/mxlog.go:130
gitlab.rnd.mendix.com/digital_ecosystems/mendix-operator/controllers/runtime.(*RuntimeReconciler).getReplicaStatuses
    /usr/src/gitlab.rnd.mendix.com/digital_ecosystems/mendix-operator/controllers/runtime/runtime_controller.go:530
gitlab.rnd.mendix.com/digital_ecosystems/mendix-operator/controllers/runtime.(*RuntimeReconciler).Reconcile
    /usr/src/gitlab.rnd.mendix.com/digital_ecosystems/mendix-operator/controllers/runtime/runtime_controller.go:275
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile
    /usr/src/gitlab.rnd.mendix.com/digital_ecosystems/mendix-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:114
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
    /usr/src/gitlab.rnd.mendix.com/digital_ecosystems/mendix-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:311
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
    /usr/src/gitlab.rnd.mendix.com/digital_ecosystems/mendix-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:261
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2
    /usr/src/gitlab.rnd.mendix.com/digital_ecosystems/mendix-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:222

Please suggest!

Thank you in advance.

Regards,
Anup Adkar
asked
2 answers

Hi Anup Adkar,


This is a Mendix Operator to Runtime pod communication issue on Kubernetes/Private Cloud. The app is running fine, but the Mendix Operator can't poll the Runtime pod's status API (it gets HTTP 500), so the portal stays stuck spinning. From your message I conclude that:

1. The pod is up and reachable.

2. The app is serving end users fine.

3. The m2ee admin/status endpoint inside the pod is unhealthy.


The first thing to check is the After Startup microflow: disable it, then redeploy and test.


Other checks worth doing:

1. Runtime version mismatch

2. Pod under memory pressure

3. Wrong adminPort in the MendixApp CR
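
For the memory-pressure check in particular, a couple of standard kubectl commands can help; the pod and namespace names below are placeholders, and `kubectl top` requires metrics-server to be installed in the cluster:

```shell
# Show current CPU/memory usage of the pods in the app's namespace
kubectl top pod -n <namespace>

# Look for OOMKilled restarts or memory-related events on the runtime pod
kubectl describe pod <runtime-pod> -n <namespace> | grep -iE "oom|memory|restart"
```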


If none of that helps, kindly share the runtime logs. I hope this helps!

answered

Hi,


This behavior typically indicates a runtime status reporting issue in the Mendix Operator, not an actual application problem.

Since your application is running fine, the issue is not with deployment itself but with how the Operator retrieves pod runtime status.

Root cause

The key error is:

runtime status cannot be retrieved: invalid response code: 500 Internal Server Error

This means the Mendix Operator is trying to query the runtime (via the runtime status endpoint), but the pod is returning HTTP 500, so the Operator cannot update the status correctly. As a result, the environment keeps showing “The resource is still processing”.

Common reasons

  1. Runtime status endpoint temporarily failing: the app is up, but the internal status endpoint used by the Operator is not responding correctly.
  2. Networking / service mesh / ingress issues: if you are using Istio, Nginx, or any proxy, internal calls from Operator → Runtime may fail, leading to 500 responses.
  3. Mendix Operator / Runtime version mismatch: incompatibility between the Operator and the Mendix Runtime can cause status parsing failures.
  4. Heavy startup or blocked threads: if the runtime is under load or stuck during health checks, the status endpoint may fail intermittently.


Checks and fixes

1. Verify runtime health endpoint manually

From within the cluster:

kubectl exec -it <pod> -- curl http://localhost:8080/

(or relevant status/health endpoint)

Check if it returns 200 consistently.
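
To see whether failures are intermittent rather than constant, a small polling loop can help. This is a sketch: the pod name and port 8080 are placeholders, so adjust the URL to whatever status endpoint your setup actually exposes:

```shell
# Poll the endpoint five times and print only the HTTP status code each time;
# anything other than a steady run of 200s points at an unstable endpoint.
for i in 1 2 3 4 5; do
  kubectl exec <pod> -- curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080/
  sleep 5
done
```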

2. Check runtime logs

Look for:

  • Exceptions during startup
  • Thread blocking / long-running tasks
  • Errors around request handling
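
A quick way to pull and filter those logs (pod and namespace names are placeholders):

```shell
# Fetch the last hour of runtime pod logs and surface likely problems
kubectl logs <runtime-pod> -n <namespace> --since=1h | grep -iE "error|exception|blocked"
```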

3. Check Operator ↔ Runtime connectivity

  • Ensure no network policies are blocking internal calls
  • If using service mesh (Istio), check sidecar configs
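
Two commands for inspecting this (names are placeholders; adjust to your cluster):

```shell
# List NetworkPolicies in the app namespace that could block Operator -> Runtime calls
kubectl get networkpolicy -n <namespace>

# If Istio is in use, check whether a sidecar container was injected into the runtime pod
kubectl get pod <runtime-pod> -n <namespace> -o jsonpath='{.spec.containers[*].name}'
```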

4. Restart Operator pod

Sometimes the Operator gets into a bad reconciliation state:

kubectl rollout restart deployment mendix-operator
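
After the restart, it is worth confirming the Operator comes back cleanly and that the status errors stop (namespace is a placeholder; the deployment name assumes the default from the command above):

```shell
# Wait until the restarted Operator deployment is fully rolled out
kubectl rollout status deployment mendix-operator -n <namespace>

# Then tail its logs to see whether the "Failed to load Pod status" errors recur
kubectl logs deployment/mendix-operator -n <namespace> -f
```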

5. Validate version compatibility

Ensure:

  • Mendix Runtime version
  • Mendix Operator version

are officially compatible.
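
One way to find the installed Operator version is to read its image tag; the deployment and namespace names here are assumptions, so adjust them to your cluster:

```shell
# Print the Operator container image, whose tag usually carries the version
kubectl get deployment mendix-operator -n <namespace> \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
```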

6. Temporary workaround

If everything else is fine:

  • Restart the runtime pod
  • Or redeploy

This usually clears the “processing” state.
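
A minimal sketch of the pod-restart workaround (pod name is a placeholder). Deleting the pod is safe here because its controlling Deployment/StatefulSet recreates it automatically:

```shell
# Force-recreate the runtime pod; its controller brings up a fresh replacement
kubectl delete pod <runtime-pod> -n <namespace>
```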

This is a status synchronization issue between Mendix Operator and Runtime, not a functional problem with your app. The runtime is healthy, but the Operator cannot retrieve its status due to intermittent 500 responses. Fixing connectivity, health endpoint stability, or restarting the Operator typically resolves it.


answered