  • I am scaling the application horizontally only. At the server level, I am just trying to give different percentages of resources to the different services using some "Quality of Service" mechanism, so that on average each server can handle a greater number of requests. Commented Oct 20, 2020 at 12:07
  • Unless the application has different pricing tiers with their own QoS definitions, a basic load balancer that distributes load evenly should do. Beyond that, you can add a feedback mechanism to bring up more instances; for example, if all of your machines are at roughly 75% load, you can spin up a couple more. That is a lot simpler, and almost any LB can be used. If you split your nodes so that some have more resources to provide better QoS, tag those resources and assign a weighted priority to those nodes; most cloud LBs provide this (see the sketch after this list). Commented Oct 20, 2020 at 16:05
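A minimal sketch of the two ideas from the comment above, assuming a hypothetical fleet where each node reports its current CPU load: backends are picked with a weight proportional to their resources (the weighted-priority idea), and a new instance is requested once average load crosses a threshold (the feedback idea). The names `Node`, `pick_backend`, and `maybe_scale_up` are illustrative and not part of any particular load balancer's API.

```python
import random
from dataclasses import dataclass
from typing import List

@dataclass
class Node:
    name: str      # illustrative identifier
    weight: int    # higher weight = more resources = more traffic
    load: float    # current CPU load, 0.0 - 1.0

def pick_backend(nodes: List[Node]) -> Node:
    """Weighted random choice: nodes with more resources receive proportionally more requests."""
    return random.choices(nodes, weights=[n.weight for n in nodes], k=1)[0]

def maybe_scale_up(nodes: List[Node], threshold: float = 0.75) -> bool:
    """Feedback check: signal that another instance is needed once average load reaches the threshold."""
    avg_load = sum(n.load for n in nodes) / len(nodes)
    return avg_load >= threshold

# Example: two large nodes and one small node, all fairly busy.
fleet = [Node("large-1", weight=4, load=0.80),
         Node("large-2", weight=4, load=0.78),
         Node("small-1", weight=1, load=0.70)]

target = pick_backend(fleet)   # usually one of the large (higher-weight) nodes
if maybe_scale_up(fleet):      # average load is about 0.76, so this fires
    print("bring up another instance")
```

In practice you would not hand-roll this: the weighted selection corresponds to weight settings on the LB's backend pool, and the threshold check corresponds to an autoscaling rule driven by the same load metric.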