Autoscaling in Netifi or K8S?

Hi guys,

How do I enable autoscaling in Netifi with round-robin distribution? Should I delegate that to K8S? If so, how do I change the port number of a newly started service?

Thank you guys!


Hi @yjonckhe!

Your questions are very good, and I’d like to make sure we answer them thoroughly. In addition to the response below, I have some follow-up questions:

  1. What programming languages are your applications written in? This matters so that I can provide you with code examples if needed; for example, if you happen to be writing your applications in Java, we have existing discovery libraries I should tell you about.

  2. Where/how are you running your K8S clusters? Cloud providers often have their own particular way of doing things, so this answer can help me get you a real working example of how you might go about solving your challenges.


If you use our Helm chart, we currently rely on DaemonSets, dedicated hosts, and cluster scaling to autoscale the brokers. We did this to keep things simple, and to get around a lot of the insanity involved in most Kubernetes network deployments, along with their performance issues.

That said, the more we know about how you intend to deploy things, the better we can help provide you answers, and the Netifi Broker already has some built in solutions to help.

So let’s assume that we can scale the brokers with little effort. The next interesting challenge is discoverability, which breaks down into two initial problems:

  1. How do I find my 1st broker?
  2. How do I find my Nth broker?

To find our first broker, we wrote a handful of discovery strategies that the brokers themselves can be configured to use. For example, our helm chart configures the brokers with our Kubernetes discovery mechanism which uses the services API to add newly discovered broker nodes to the Netifi Broker client.
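To make the idea concrete, here is a minimal sketch of the broker pool that such a discovery strategy feeds. The `BrokerPool` class and the broker addresses are hypothetical illustrations, not Netifi's actual client API, which manages this internally:

```java
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Hypothetical sketch: a pool of broker addresses that a discovery
// mechanism (e.g. a Kubernetes services watcher) keeps up to date.
public class BrokerPool {
    private final Set<String> brokers = new LinkedHashSet<>();

    // Called whenever the discovery strategy sees a broker endpoint.
    // Returns false if the broker was already known.
    public synchronized boolean addBroker(String address) {
        return brokers.add(address);
    }

    public synchronized List<String> knownBrokers() {
        return List.copyOf(brokers);
    }

    public static void main(String[] args) {
        BrokerPool pool = new BrokerPool();
        // First broker found via the configured discovery strategy.
        pool.addBroker("broker-0.netifi.svc:8001");
        // Nth brokers arrive later, from discovery or Broker Info messages.
        pool.addBroker("broker-1.netifi.svc:8001");
        pool.addBroker("broker-1.netifi.svc:8001"); // duplicate, ignored
        System.out.println(pool.knownBrokers().size()); // prints 2
    }
}
```

The important property is that the client only needs one working seed address; every broker discovered afterward is simply added to the pool.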

Finding the Nth broker can happen in one of two ways:

  1. The discovery library continues to update the Netifi Broker client with newly discovered nodes.
  2. The client receives Broker Info messages from the brokers it’s already connected to.

Using the Broker Info messages for continued discovery, and thus load balancing, also makes choices like using our static discovery mechanism reasonable, because the client would only ever rely on the IP address or DNS record for that first connection. After that, the health and gossip of the pooled brokers keep the clients and brokers always connected.
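Since the original question asked about round-robin distribution, here is a rough sketch of what round-robin selection over such a broker pool looks like. This is only the concept; the names here are hypothetical and the Netifi client implements its own load balancing internally:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical illustration of round-robin selection over a list of
// broker addresses maintained by the discovery mechanism.
public class RoundRobinSelector {
    private final AtomicInteger counter = new AtomicInteger();

    // Returns the next broker in rotation; safe for concurrent callers.
    public String next(List<String> brokers) {
        int i = Math.floorMod(counter.getAndIncrement(), brokers.size());
        return brokers.get(i);
    }

    public static void main(String[] args) {
        RoundRobinSelector selector = new RoundRobinSelector();
        List<String> brokers = List.of("broker-0:8001", "broker-1:8001");
        System.out.println(selector.next(brokers)); // broker-0:8001
        System.out.println(selector.next(brokers)); // broker-1:8001
        System.out.println(selector.next(brokers)); // broker-0:8001
    }
}
```

Because the pool grows as new brokers gossip their presence, newly scaled-up brokers join the rotation automatically without any client reconfiguration.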

For those who are Netifi Enterprise customers, we’ve also been experimenting with offering a Netifi Native Discovery mechanism so that you would never need to worry about the initial Broker discovery challenge, and your applications could be agnostic to their compute platform for discovering brokers as well.

Hi Alan, thanks for your answer !

We use only Java, and OpenShift on premises (the cloud is not yet permitted, for some stupid reasons).

We are searching for the easiest mechanism for autoscaling. Without autoscaling, we could simply start as many instances of our microservices as we think we might need, but that is a waste of resources. That being said, if autoscaling involves a lot of complex tasks, we would prefer to work that way…

Awesome, I’ll open an internal Jira ticket to investigate reworking the Helm chart to not use DaemonSets, so that the Horizontal Pod Autoscalers can work off custom metrics from the brokers. Hopefully you’re running OpenShift 4.1, or can influence folks to upgrade, because that feature is still in Technology Preview, and you’re definitely going to want it, not just for our brokers, but for your own applications as well.

So my current “strategy” will be to manually start many brokers (well above our estimated needs, to ensure there is no bottleneck there), and I will use the pod-autoscaling feature of OpenShift for my own microservices.

I’ll continue to investigate Netifi; this is such an exciting technology!!! Keep up the good work guys!!