This is rather misleading, and I say that as a fan of Istio.
For example, the claim here is that Ambassador has 46 code contributors and Istio Gateway has 264. That number for Istio is for the whole project, which a couple lines above is said to be primarily a service mesh (east-west). Counting those contributions as Istio…
containerPort needs to match the port the image actually listens on (1001). Basically you will have
containerPort: 1001 in the pod spec, and then a service with
targetPort: 1001 and
port: 80. That will expose your service on port 80 regardless of what port the image itself listens on, and the service does not need to be a NodePort… the default service type (ClusterIP) is fine.
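As a rough sketch of that wiring (the `my-app` name and image are placeholders, assuming the image listens on 1001):

```yaml
# Container listens on 1001; the Service maps port 80 to it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:latest        # hypothetical image
          ports:
            - containerPort: 1001     # must match the port the image listens on
---
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  # type defaults to ClusterIP; NodePort is not required
  selector:
    app: my-app
  ports:
    - port: 80          # port clients hit the Service on
      targetPort: 1001  # forwarded to the container's port
```

The selector is what ties the Service to the pods; the two port fields do the 80→1001 translation.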
In that case, Ingress would be most useful for creating an API gateway layer where you have a consistent hostname and do path-based routing. If you want each entrypoint to have its own hostname and you aren’t using pricey load balancers, Ingress won’t really solve much of a problem: you can just deploy each service on a dedicated NodePort (Kubernetes term) and point a proxy at that.
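A minimal sketch of that API-gateway pattern, one hostname with path-based routing (the hostname, service names, and paths are all placeholders):

```yaml
# One consistent hostname; paths fan out to different backend services.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-gateway
spec:
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /users
            pathType: Prefix
            backend:
              service:
                name: users-svc     # hypothetical service
                port:
                  number: 80
          - path: /orders
            pathType: Prefix
            backend:
              service:
                name: orders-svc    # hypothetical service
                port:
                  number: 80
```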
Right, you always need load balancing, and you need to make sure your load balancer scales as well. AWS does make it easy by offering ELB, which is essentially just an autoscaling load balancer, hence the “Elastic” in the name. You can always use nginx or haproxy or other more “enterprisey” solutions to load balance across your ingress controller instances, but at that…
Your ingress controller handles requests and gets them to your deployments, but you still need a way to reach the ingress controller pods themselves. Where you would normally have an ELB per service, ingress allows you to have a single ELB in front of the ingress controller, and it does the rest.
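That single entry point is typically just one LoadBalancer-type Service in front of the controller pods. A sketch, assuming ingress-nginx as the controller (the namespace and labels below follow its usual conventions, but check your own install):

```yaml
# One LoadBalancer Service (one ELB on AWS) fronting the ingress
# controller pods; all Ingress-routed traffic enters through it.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https
```

Every per-service ELB you would otherwise pay for collapses into this one; the controller then routes by host and path according to your Ingress resources.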