Thursday, July 25, 2024

Squid configuration in 2024

Configuring Squid in 2024 should be easy, but over 20 years' worth of posts describing problems and solutions, combined with the ongoing evolution of Squid, can make it difficult to work out which configuration option you need and how to use it. This is especially true for SSL.

SSL support in Squid made some serious strides in versions 3 and 4 and has settled down since then, yet the correct configuration can be unclear because so many posts date from that version 3 to 4 transition period.

For example, there are so many posts about configuring SSL-Bump or Peek and Splice that it is easy to become confused. Most of that external configuration is now taken care of internally by modern versions of Squid, and the most configuration you are likely to need is generating host certificates.

In this example, the (internal) parent listens on port 443 and redirects any port 80 traffic to 443. The Squid server runs inside a Kubernetes StatefulSet whose Service redirects port 80 traffic to the pod on port 3128, and port 443 to 3129, since the pod does not run as root and so cannot listen on privileged ports (see the Service sketch after the configuration below).

# Only allow clients from the internal network
acl localnet src 10.0.0.0/8
# Forward requests to the internal parent over TLS on port 443
cache_peer parent.example.com parent 443 0 no-query default ssl name=myAccel no-digest tls-cert=/etc/squid/certs/tls.crt tls-key=/etc/squid/certs/tls.key
cache_peer_access myAccel allow localnet
cache_peer_access myAccel deny all
# Plain HTTP listener (the Service maps port 80 to 3128)
http_port 3128 accel defaultsite=parent.example.com no-vhost
# TLS listener (the Service maps port 443 to 3129), generating host certificates on the fly
https_port 3129 accel defaultsite=parent.example.com no-vhost generate-host-certificates=on tls-cert=/etc/squid/certs/tls.crt tls-key=/etc/squid/certs/tls.key
# Helper that generates per-host certificates, backed by a 20MB database
sslcrtd_program /usr/lib64/squid/security_file_certgen -s /var/cache/squid/ssl_db -M 20MB
# Accept upstream certificate errors (e.g. self-signed internal certificates)
sslproxy_cert_error allow all
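
One thing worth noting: the certificate database referenced by sslcrtd_program must exist before Squid starts, or the helper will fail. A minimal sketch of initialising it, using the same path and size as the configuration above (run as the user Squid runs as):

# Create the on-disk certificate database used by the certgen helper
/usr/lib64/squid/security_file_certgen -c -s /var/cache/squid/ssl_db -M 20MB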
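
For completeness, the Kubernetes Service doing the port mapping described earlier might look something like the following. This is a sketch only; the names and the selector label are assumptions for illustration, not taken from the real deployment:

apiVersion: v1
kind: Service
metadata:
  name: squid                # hypothetical name
spec:
  selector:
    app: squid               # hypothetical label on the StatefulSet pods
  ports:
    - name: http
      port: 80               # external port 80 ...
      targetPort: 3128       # ... maps to the unprivileged http_port
    - name: https
      port: 443              # external port 443 ...
      targetPort: 3129       # ... maps to the unprivileged https_port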


The only other interesting piece of configuration is that the Squid server is part of a Load Balanced Domain Name / DNS Traffic Control (LBDN) service, so a certificate is created with the Kubernetes cert-manager with the commonName set to the FQDN of the LBDN.
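
As a rough sketch, the cert-manager Certificate resource could look something like this; the resource name, FQDN, and issuer are hypothetical values for illustration:

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: squid-tls                  # hypothetical name
spec:
  secretName: squid-tls            # Secret mounted into the pod at /etc/squid/certs
  commonName: proxy.example.com    # FQDN of the LBDN (hypothetical value)
  dnsNames:
    - proxy.example.com
  issuerRef:
    name: internal-ca              # hypothetical issuer
    kind: ClusterIssuer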

Sunday, June 30, 2024

TIL - load testing or benchmarking client limits

I was load testing a Puppet Forge implementation today and hit some odd errors when I ran each load test immediately after the previous one finished. The errors would not happen if I waited a few minutes (maybe 5) between tests.

This was odd behaviour, but digging through Google with the client error "Failed to open TCP connection to … (Cannot assign requested address)", I landed on this Stack Overflow answer (https://stackoverflow.com/a/31877033/14784297), which implied the error mostly comes from a lack of ephemeral ports (correct) and, more interestingly, that after a TCP connection is closed, it remains in a TIME_WAIT state for about 2 minutes:

The reason of this problem is that for opening a TCP connection, the operating system allocates an ephemeral port (for the source port). It binds the socket to the allocated port. After the TCP connection is closed, the connection is left in TIME_WAIT state, typically for 2 minutes, due to historical reasons
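
On Linux you can see how tight that budget is: the default ephemeral range gives you roughly 28,000 ports, each tied up for about 2 minutes after its connection closes. A few commands to inspect (and, cautiously, tune) this on a Linux client:

# Show the ephemeral port range (default is roughly 32768-60999)
sysctl net.ipv4.ip_local_port_range

# Widen the range to allow more concurrent outbound connections
sysctl -w net.ipv4.ip_local_port_range="15000 64000"

# Allow reuse of sockets in TIME_WAIT for new outbound connections
sysctl -w net.ipv4.tcp_tw_reuse=1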

Count the number of connections stuck in TIME_WAIT:

netstat -naptu | grep -c TIME_WAIT
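
On newer systems, ss can filter on socket state directly, which is a bit more precise than grepping netstat output (-H suppresses the header line so the count is exact):

ss -Htan state time-wait | wc -l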