Feature request
We want to manage a remote user cluster that is owned by a customer.
If we can connect the node-controller to a remote Kubernetes cluster, then we can remove the workload from that cluster.
The node-controller holds the credentials for the Citrix ADC, so we don't have to find a way to hide these credentials.
The same functionality exists in the citrix-ingress-controller:
https://github.com/citrix/citrix-helm-charts/blob/master/citrix-ingress-controller/values.yaml#L42
Current situation
Currently the setup only supports local development with a kubeconfig or running inside a Kubernetes cluster.
- The namespace is pulled from `/var/run/secrets` inside the node-controller pod.
- The k8s API connection has only 2 options built in to connect to Kubernetes:
  - The kubeconfig located at `~/.kube/config`
  - The in-cluster configuration pulled from `/var/run/secrets` inside the node-controller pod