Drop JMS "load balancer"
Look into load-balancing implementations. Do we really need the middleware API server to implement JMS?
Wouldn't it be enough to configure the API server's nginx as the load balancer and have it forward requests to one of the worker IPs? Is that easily scalable, or would we need to hardcode the IPs into the nginx config?
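As a rough sketch of what that would look like (worker IPs and the `/api/` path are placeholders): with open-source nginx the workers are listed statically in an `upstream` block, so adding or removing a worker means editing the config and reloading nginx.

```nginx
# Hypothetical upstream of worker nodes; IPs/ports are placeholders.
upstream workers {
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
    server 10.0.0.13:8080;
}

server {
    listen 80;

    location /api/ {
        # Round-robin across the workers by default.
        proxy_pass http://workers;
    }
}
```

So yes, with plain open-source nginx the IPs are effectively hardcoded; DNS-based resolution or config-generation tooling are the usual ways to avoid that.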
I'd like this because it would mean we can drop the KTOR API server and JMS.
On the other hand, all the workers would then be API servers. Also, the current setup gives us strict FIFO for requests (once a worker finishes, it takes the next job). With nginx load balancing we would instead have N queues that are processed independently. For instance, with two queues (A, B) and `len(A) = 10, len(B) = 5`, it can happen that A is emptied first while B's jobs run longer. Or perhaps not? https://guo365.github.io/study/nginx.org/en/docs/admin-guide/HTTP%20load%20balancer.html#maxconns
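Regarding the queue-imbalance worry: `least_conn` combined with `max_conns` gets closer to the single-queue behavior, since nginx then routes each request to the least-busy worker and caps in-flight jobs per worker. One caveat I'm fairly sure of: in open-source nginx, requests that exceed `max_conns` on all workers are rejected (no live upstream) rather than queued; the `queue` directive that buffers overflow requests is an NGINX Plus feature. A sketch (IPs are placeholders):

```nginx
upstream workers {
    least_conn;                         # pick the worker with fewest active connections
    server 10.0.0.11:8080 max_conns=1;  # at most one in-flight job per worker
    server 10.0.0.12:8080 max_conns=1;
    # Open-source nginx rejects requests once every worker is at max_conns;
    # the `queue` directive (NGINX Plus only) would hold them instead.
}
```

So without Plus we'd still need something in front to absorb bursts, which is part of what JMS gives us today.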