Hi there,
So far so good with your docs:
- I created a cluster, added a node, deployed a Dashboard, a (secured) Helm/Tiller and an Ingress.
- I bought a nice .ovh domain that points to my NODES_URL.
- I deployed your hello app and made it available at http://hello.DOMAIN.ovh:32080
Now I'd like to issue an HTTPS certificate with cert-manager (deployed easily with its Helm chart). So I:
- deployed cert-manager with helm chart
- created a "letsencrypt-staging" ClusterIssuer
- added a Certificate for hello.DOMAIN.ovh
- modified my Ingress to include this Certificate
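For reference, the staging issuer and certificate described above might look roughly like the sketch below. This is an assumption based on the cert-manager `certmanager.k8s.io/v1alpha1` API that was current at the time; the resource names, email, and secret names are placeholders, not taken from the original post:

```yaml
# ClusterIssuer pointing at the Let's Encrypt staging endpoint (http-01 challenge)
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: you@example.com            # placeholder
    privateKeySecretRef:
      name: letsencrypt-staging       # secret storing the ACME account key
    http01: {}
---
# Certificate for the hello app's domain, issued by the ClusterIssuer above
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: hello-tls                     # placeholder
spec:
  secretName: hello-tls               # TLS secret the Ingress will reference
  issuerRef:
    name: letsencrypt-staging
    kind: ClusterIssuer
  dnsNames:
  - hello.DOMAIN.ovh
  acme:
    config:
    - http01:
        ingressClass: nginx
      domains:
      - hello.DOMAIN.ovh
```

The http-01 self check in this setup requires Let's Encrypt (and cert-manager itself) to reach the challenge path on port 80, which is exactly what the NodePort-only Ingress below cannot provide.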
But as the Ingress is using NodePorts 32080/32443, I'm having some trouble with the ACME validation, which gives me:
> http-01 self check failed for domain "hello.DOMAIN.ovh"
But the Ingress does not allow a NodePort setup with ports under 30000.
What am I doing wrong? Should I use another mode for the Ingress? Which one?
Thanks for your help!
Ingress on ports 80/443 for cluster
Some news: I found a way to get a bit closer to my goal.
Instead of deploying the Ingress with the Helm chart, I re-deployed it manually, following https://github.com/kubernetes/ingress-nginx/blob/master/docs/deploy/index.md#bare-metal :
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml
Then, as a last step, I edited service-nodeport.yaml to add the IPs of my nodes:
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/baremetal/service-nodeport.yaml -o service-nodeport.yaml
kubectl get -o jsonpath='{.status.addresses[?(@.type=="InternalIP")].address}' $(kubectl get nodes -o name)
Add to service-nodeport.yaml:
> externalIPs:
> - X.X.X.X
> - X.X.X.X
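Assembled, the edited service-nodeport.yaml ends up looking roughly like this. This is a sketch based on the bare-metal NodePort manifest of that era, not the exact file; the `externalIPs` values are the node IPs gathered with the jsonpath command above, and the selector labels are assumed to match the upstream controller deployment:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: NodePort
  externalIPs:                 # traffic to these node IPs on port/targetPort is routed to the pods
  - X.X.X.X
  - X.X.X.X
  ports:
  - name: http
    port: 80                   # exposed on the externalIPs, bypassing the 30000+ NodePort range
    targetPort: 80
    protocol: TCP
  - name: https
    port: 443
    targetPort: 443
    protocol: TCP
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
```

The `externalIPs` field is what makes ports 80/443 reachable here: kube-proxy accepts traffic arriving at those IPs on the service ports themselves, independent of the NodePort allocation.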
Then :
kubectl apply -f service-nodeport.yaml
It worked, and the Ingress now listens on ports 80 and 443 on all my nodes.
But I suspect this may not be the cleanest way... any advice or remarks?
Thanks,
Until the LoadBalancer is released, ports 80/443 are reachable this way. Thanks @GuillaumeL30!
I'm interested in such a solution, as I currently don't use the Kubernetes scalability features: I have a trivial application, not scalable at all, so a single node is enough. Thus, I don't need a LoadBalancer.
Following this tutorial, I found that some parts require updates:
* the mandatory URL is https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
* the service-nodeport.yaml is now located at https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/baremetal/service-nodeport.yaml
But after doing this, ports 80 and 443 seem to remain unreachable, even though the ingress-nginx-controller is running and the NodePort service is applied.
Any tips?
The ports were closed. Looking deeper into my config, I realized that the `Service` didn't have a selector.
After fixing that, I was able to access my service.
To expose:
```sh
kubectl expose deploy/nginx-ingress-controller -n ingress-nginx --external-ip=X.X.X.X --type=NodePort
```
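For the record, `kubectl expose` builds the Service's selector from the deployment's pod labels, which is exactly the piece that was missing. A hand-written equivalent would look roughly like this sketch; the labels and the external IP are illustrative and must be adapted to match your actual controller pods:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  type: NodePort
  externalIPs:
  - X.X.X.X
  selector:                          # the missing piece: must match the controller pod labels
    app.kubernetes.io/name: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
```

Without the selector, the Service gets no Endpoints, so traffic hitting the external IP has nowhere to go, which is why the ports appeared closed.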
Thanks @GuillaumeL30!