This repository has been archived by the owner on Oct 3, 2020. It is now read-only.

Working behind haproxy v.1.5.18 #255

Open

maprager opened this issue Jan 12, 2020 · 8 comments

@maprager

When routing via HAProxy v1.5.18, the browser seems to get stuck and never shows the cluster.
My guess is that this happens because the /events call is never-ending and doesn't seem to close.
Does anyone have a good solution for this?
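
To illustrate what I mean (hostname is a placeholder; this just assumes the stock /events endpoint):

  # -N disables curl's buffering so the server-sent events show up as they arrive
  curl -N -H 'Accept: text/event-stream' http://kube-ops-view.example.org/events
  # the connection stays open indefinitely by design - it's a stream, not a request/response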

@hjacobs (Owner) commented Jan 13, 2020

@maprager would this HAProxy configuration for server-sent events (SSE) help? (/events is nothing else than an SSE stream.)
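
Roughly something like this (an untested sketch - server names and addresses are placeholders; the point is that the long-lived /events response must not run into the inactivity timeouts):

  defaults
    mode http
    option http-server-close
    timeout connect 5s
    timeout client 24h   # SSE connections stay open; a short client timeout cuts the stream
    timeout server 24h   # likewise on the server side

  backend kube_ops_view
    server ops1 10.0.0.1:8080 check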

@maprager (Author)

Unfortunately, this did not help...
I have this in the haproxy config:
defaults
  mode http
  log global
  option httplog
  option tcplog
  option dontlognull
  option http-server-close
  option forwardfor except 127.0.0.0/8
  option redispatch
  retries 3
  timeout http-request 20s
  timeout queue 1m
  timeout connect 30s
  timeout client 50s
  timeout server 50s
  timeout check 20s
  timeout client-fin 30s
  maxconn 3000

with the backend configured thus:

# All the backends
backend eks_ingress_be_kubedb
  balance roundrobin
  option httplog
  timeout tunnel 10h
  http-request set-header host kube-ops-view.router.blah.blah.blah
  server kubedb kube-ops-view.router.blah.blah.blah:80 weight 10 check port 80

@megabreit

@maprager Does this happen after quite some time, or directly after starting the pod?
It sounds a bit like issue #251 or #240.
I see this on Openshift 3.11, which comes with haproxy by default, albeit a newer version (1.8.17).

@maprager (Author)

Hi - this happens immediately when trying to access the pod via haproxy; the pod itself starts up fine.
I am running without the redis container, on purpose.

@maprager (Author)

Running directly via a tunnel works fine - it's only via haproxy that it doesn't...
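
By tunnel I mean a plain port-forward, e.g. (deployment name may differ in your setup):

  kubectl port-forward deployment/kube-ops-view 8080:8080
  # open http://localhost:8080/ - the cluster renders and /events streams fine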

@megabreit

Hm... then it's probably not the same issue. All I can say is that it's working with haproxy 1.8 on Openshift. But there are reasons for running such an old version... hopefully.

@maprager (Author) commented Jan 26, 2020 via email

@megabreit

I'm no haproxy guy - mine is generated by Openshift. Hopefully I found all the necessary parts:

<snip>
global
  maxconn 20000

  daemon
  ca-base /etc/ssl
  crt-base /etc/ssl
  # TODO: Check if we can get reload to be faster by saving server state.
  # server-state-file /var/lib/haproxy/run/haproxy.state
  stats socket /var/lib/haproxy/run/haproxy.sock mode 600 level admin expose-fd listeners
  stats timeout 2m

  # Increase the default request size to be comparable to modern cloud load balancers (ALB: 64kb), affects
  # total memory use when large numbers of connections are open.
  tune.maxrewrite 8192
  tune.bufsize 32768

  # Prevent vulnerability to POODLE attacks
  ssl-default-bind-options no-sslv3

# The default cipher suite can be selected from the three sets recommended by https://wiki.mozilla.org/Security/Server_Side_TLS,
# or the user can provide one using the ROUTER_CIPHERS environment variable.
# By default when a cipher set is not provided, intermediate is used.
  # Intermediate cipher suite (default) from https://wiki.mozilla.org/Security/Server_Side_TLS
  tune.ssl.default-dh-param 2048
  ssl-default-bind-ciphers ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS

defaults
  maxconn 20000

  # Add x-forwarded-for header.

  # To configure custom default errors, you can either uncomment the
  # line below (server ... 127.0.0.1:8080) and point it to your custom
  # backend service or alternatively, you can send a custom 503 error.
  #
  # server openshift_backend 127.0.0.1:8080
  errorfile 503 /var/lib/haproxy/conf/error-page-503.http

  timeout connect 5s
  timeout client 30s
  timeout client-fin 1s
  timeout server 30s
  timeout server-fin 1s
  timeout http-request 10s
  timeout http-keep-alive 300s

  # Long timeout for WebSocket connections.
  timeout tunnel 1h

frontend public

  bind :80
  mode http
  tcp-request inspect-delay 5s
  tcp-request content accept if HTTP
  monitor-uri /_______internal_router_healthz

  # Strip off Proxy headers to prevent HTTpoxy (https://httpoxy.org/)
  http-request del-header Proxy

  # DNS labels are case insensitive (RFC 4343), we need to convert the hostname into lowercase
  # before matching, or any requests containing uppercase characters will never match.
  http-request set-header Host %[req.hdr(Host),lower]

  # check if we need to redirect/force using https.
  acl secure_redirect base,map_reg(/var/lib/haproxy/conf/os_route_http_redirect.map) -m found
  redirect scheme https if secure_redirect

  use_backend %[base,map_reg(/var/lib/haproxy/conf/os_http_be.map)]

  default_backend openshift_default

# public ssl accepts all connections and isn't checking certificates yet certificates to use will be
# determined by the next backend in the chain which may be an app backend (passthrough termination) or a backend
# that terminates encryption in this router (edge)
frontend public_ssl

  bind :443
  tcp-request  inspect-delay 5s
  tcp-request content accept if { req_ssl_hello_type 1 }

  # if the connection is SNI and the route is a passthrough don't use the termination backend, just use the tcp backend
  # for the SNI case, we also need to compare it in case-insensitive mode (by converting it to lowercase) as RFC 4343 says
  acl sni req.ssl_sni -m found
  acl sni_passthrough req.ssl_sni,lower,map_reg(/var/lib/haproxy/conf/os_sni_passthrough.map) -m found
  use_backend %[req.ssl_sni,lower,map_reg(/var/lib/haproxy/conf/os_tcp_be.map)] if sni sni_passthrough

  # if the route is SNI and NOT passthrough enter the termination flow
  use_backend be_sni if sni

  # non SNI requests should enter a default termination backend rather than the custom cert SNI backend since it
  # will not be able to match a cert to an SNI host
  default_backend be_no_sni

# Plain http backend or backend with TLS terminated at the edge or a
# secure backend with re-encryption.
backend be_edge_http:ocp-ops-view:kube-ops-view
  mode http
  option redispatch
  option forwardfor
  balance leastconn

  timeout check 5000ms
  http-request set-header X-Forwarded-Host %[req.hdr(host)]
  http-request set-header X-Forwarded-Port %[dst_port]
  http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
  http-request set-header X-Forwarded-Proto https if { ssl_fc }
  http-request set-header X-Forwarded-Proto-Version h2 if { ssl_fc_alpn -i h2 }
  http-request add-header Forwarded for=%[src];host=%[req.hdr(host)];proto=%[req.hdr(X-Forwarded-Proto)];proto-version=%[req.hdr(X-Forwarded-Proto-Version)]
  cookie e1a16e62e3813c8e0c40999b324731ce insert indirect nocache httponly secure
  server pod:kube-ops-view-6cf6d4d6fb-9rxxx:kube-ops-view:10.x.x.x:8080 10.x.x.x:8080 cookie a67b9db2c182c5109f2999b487f568cf weight 256
<snip>

There is a hardware load balancer in front of Openshift; haproxy is used as the ingress router, with 3 instances running. Self-signed certificates are used on the frontend, and ocp-ops-view is edge-terminated. Your config probably differs in certain places.
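
If it does turn out to be a timeout after all, on Openshift the per-route override would be my first try - a sketch, the value is just an example:

  # raise the haproxy server-side timeout for only the kube-ops-view route
  oc annotate route kube-ops-view --overwrite haproxy.router.openshift.io/timeout=1h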
