HAProxy SSL Cert

Using SSL Certificates with HAProxy | Servers for Hackers

January 22, 2018
Using HAProxy with SSL certificates, including SSL Termination and SSL Pass-Through.
Overview
If your application makes use of SSL certificates, then some decisions need to be made about how to use them with a load balancer.
A simple setup of one server usually sees a client’s SSL connection being decrypted by the server receiving the request. Because a load balancer sits between a client and one or more servers, where the SSL connection is decrypted becomes a concern.
There are two main strategies.
SSL Termination is the practice of terminating/decrypting an SSL connection at the load balancer, and sending unencrypted connections to the backend servers.
This means the load balancer is responsible for decrypting an SSL connection – a slow and CPU intensive process relative to accepting non-SSL requests.
This is the opposite of SSL Pass-Through, which sends SSL connections directly to the proxied servers.
With SSL Pass-Through, the SSL connection is terminated at each proxied server, distributing the CPU load across those servers. However, you lose the ability to add or edit HTTP headers, as the connection is simply routed through the load balancer to the proxied servers.
This means your application servers will lose the ability to get the X-Forwarded-* headers, which may include the client’s IP address, port and scheme used.
Which strategy you choose is up to you and your application’s needs. SSL Termination is the most typical setup I’ve seen, but SSL Pass-Through is likely more secure.
There is a combination of the two strategies, where SSL connections are terminated at the load balancer, adjusted as needed, and then proxied off to the backend servers as a new SSL connection. This may provide the best of both security and ability to send the client’s information. The trade off is more CPU power being used all-around, and a little more complexity in configuration.
An older article of mine on the consequences and gotchas of using load balancers explains these issues (and more) as well.
HAProxy with SSL Termination
We’ll cover the most typical use case first – SSL Termination. As stated, we need to have the load balancer handle the SSL connection. This means having the SSL Certificate live on the load balancer server.
We saw how to create a self-signed certificate in a previous edition of SFH. We’ll re-use that information for setting up a self-signed SSL certificate for HAProxy to use.
Keep in mind that for a production SSL Certificate (not a self-signed one), you won’t need to generate or sign a certificate yourself – you’ll just need to create a Certificate Signing Request (csr) and pass that to whomever you purchase a certificate from.
First, we’ll create a self-signed certificate for *.xip.io, which is handy for demonstration purposes, and lets us use the same certificate even when our server IP addresses change while testing locally. For example, if our local server exists at 192.168.33.10, but then our Virtual Machine IP changes to 192.168.33.11, we don’t need to re-create the self-signed certificate.
I use the xip.io service as it allows us to use a hostname rather than directly accessing the servers via an IP address, all without having to edit my computer’s hosts file.
As this process is outlined in a past edition on SSL certificates, I’ll simply show the steps to generate a self-signed certificate here:
$ sudo mkdir /etc/ssl/xip.io
$ sudo openssl genrsa -out /etc/ssl/xip.io/xip.io.key 1024
$ sudo openssl req -new -key /etc/ssl/xip.io/xip.io.key \
-out /etc/ssl/xip.io/xip.io.csr
> Country Name (2 letter code) [AU]:US
> State or Province Name (full name) [Some-State]:Connecticut
> Locality Name (eg, city) []:New Haven
> Organization Name (eg, company) [Internet Widgits Pty Ltd]:SFH
> Organizational Unit Name (eg, section) []:
> Common Name (e.g. server FQDN or YOUR name) []:*.xip.io
> Email Address []:
> Please enter the following ‘extra’ attributes to be sent with your certificate request
> A challenge password []:
> An optional company name []:
$ sudo openssl x509 -req -days 365 -in /etc/ssl/xip.io/xip.io.csr \
-signkey /etc/ssl/xip.io/xip.io.key \
-out /etc/ssl/xip.io/xip.io.crt
This leaves us with xip.io.csr, xip.io.key, and xip.io.crt files.
Next, after the certificates are created, we need to create a pem file. A pem file is essentially just the certificate, the key, and optionally certificate authorities concatenated into one file. In our example, we’ll simply concatenate the certificate and key files together (in that order) to create a xip.io.pem file. This is HAProxy’s preferred way to read an SSL certificate.
$ sudo cat /etc/ssl/xip.io/xip.io.crt /etc/ssl/xip.io/xip.io.key \
| sudo tee /etc/ssl/xip.io/xip.io.pem
When purchasing a real certificate, you won’t necessarily get a concatenated “bundle” file – you may have to concatenate the pieces yourself. However, many vendors do provide a bundle file. If they do, it might not have a .pem extension, but instead be a .bundle, .cert, .crt, or .key file, or some similar name for the same concept. This Stack Overflow answer explains that nicely.
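For instance, a purchased certificate plus its intermediate bundle and key might be combined like this (the file names here are hypothetical, just for illustration):
$ sudo cat example.com.crt intermediate-bundle.crt example.com.key \
| sudo tee /etc/ssl/example.com/example.com.pem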
In any case, once we have a pem file for HAProxy to use, we can adjust our configuration just a bit to handle SSL connections.
We’ll set up our application to accept both http and https connections. In the last edition on HAProxy, we had this frontend:
frontend localnodes
bind *:80
mode http
default_backend nodes
To terminate an SSL connection in HAProxy, we can now add a binding to the standard SSL port 443, and let HAProxy know where the SSL certificates are:
frontend localhost
bind *:80
bind *:443 ssl crt /etc/ssl/xip.io/xip.io.pem
mode http
default_backend nodes
In the above example, we’re using the backend “nodes”. The backend, luckily, doesn’t really need to be configured in any particular way. In the previous edition on HAProxy, we had the backend like so:
backend nodes
balance roundrobin
option forwardfor
option httpchk HEAD / HTTP/1.1\r\nHost:localhost
server web01 172.17.0.3:9000 check
server web02 172.17.0.3:9001 check
server web03 172.17.0.3:9002 check
http-request set-header X-Forwarded-Port %[dst_port]
http-request add-header X-Forwarded-Proto https if { ssl_fc }
Because the SSL connection is terminated at the Load Balancer, we’re still sending regular HTTP requests to the backend servers. We don’t need to change this configuration, as it works the same!
SSL Only
If you’d like the site to be SSL-only, you can add a redirect directive to the frontend configuration:
redirect scheme https if !{ ssl_fc }
Above, we added the redirect directive, which will redirect from “http” to “https” if the connection was not made over SSL. More information on ssl_fc is available here.
HAProxy with SSL Pass-Through
With SSL Pass-Through, we’ll have our backend servers handle the SSL connection, rather than the load balancer.
The job of the load balancer then is simply to proxy a request off to its configured backend servers. Because the connection remains encrypted, HAProxy can’t do anything with it other than redirect a request to another server.
In this setup, we need to use TCP mode over HTTP mode in both the frontend and backend configurations. HAProxy will treat the connection as just a stream of information to proxy to a server, rather than use its functions available for HTTP requests.
First, we’ll tweak the frontend configuration:
frontend localhost
bind *:80
bind *:443
option tcplog
mode tcp
default_backend nodes
This still binds to both port 80 and port 443, giving the opportunity to use both regular and SSL connections.
As mentioned, to pass a secure connection off to a backend server without decrypting it, we need to use TCP mode (mode tcp) instead. This also means we need to set the logging to tcp instead of the default (option tcplog). Read more on log formats here to see the difference between tcplog and httplog.
Next, we need to tweak our backend configuration. Notably, we once again need to change this to TCP mode, and we remove some directives to reflect the loss of ability to edit/add HTTP headers:
backend nodes
mode tcp
balance roundrobin
option ssl-hello-chk
server web01 172.17.0.3:443 check
server web02 172.17.0.4:443 check
As you can see, this is set to mode tcp – Both frontend and backend configurations need to be set to this mode.
We also remove option forwardfor and the http-request options – these can’t be used in TCP mode, and we couldn’t inject headers into a request that’s encrypted anyway.
For health checks, we can use ssl-hello-chk which checks the connection as well as its ability to handle SSL (SSLv3 specifically) connections.
In this example, I have two fictitious backend servers that accept SSL connections. If you’ve read the edition on SSL certificates, you can see how to integrate them with Apache or Nginx in order to create a web server backend that handles SSL traffic. With SSL Pass-Through, no SSL certificates need to be created or used within HAProxy. The backend servers can handle SSL connections just as they would if there were only one server in the stack without a load balancer.
Resources
HAProxy Official blog post on SSL Termination
SO Question: “What is a PEM file?”
Reading custom headers in Nginx – Not mentioned in this edition specifically, but useful in context of reading X-Forwarded-* headers sent to Nginx
So You Got Yourself a Load Balancer, an article about considerations to make in your applications when using a load balancer.
HAProxy SSL Termination

The HAProxy load balancer provides high-performance SSL termination, allowing you to encrypt and decrypt traffic.
You can quickly and easily enable SSL/TLS encryption for your applications by using HAProxy SSL termination. HAProxy is compiled with OpenSSL, which allows it to encrypt and decrypt traffic as it passes. In this blog post, you will learn how to set this up and why delegating this function to HAProxy simplifies your infrastructure.
The Benefits of SSL Termination
When you operate a farm of servers, it can be a tedious task maintaining SSL certificates. Even using a Let’s Encrypt Certbot to automatically update certificates has its challenges because, unless you have the ability to dynamically update DNS records as part of the certificate renewal process, it may necessitate making your web servers directly accessible from the Internet so that Let’s Encrypt servers can verify that you own your domain.
Enabling SSL on your web servers also costs more CPU usage, since those servers must become involved in encrypting and decrypting messages. That CPU time could otherwise have been used to do other meaningful work. Web servers can process requests more quickly if they’re not also crunching through encryption algorithms simultaneously.
The term SSL termination means that you are performing all encryption and decryption at the edge of your network, such as at the load balancer. The load balancer strips away the encryption and passes the messages in the clear to your servers. You might also hear this called SSL offloading.
SSL termination has many benefits. These include the following:
You can maintain certificates in fewer places, making your job easier.
You don’t need to expose your servers to the Internet for certificate renewal purposes.
Servers are unburdened from the task of processing encrypted messages, freeing up CPU time.
Enabling SSL with HAProxy
HAProxy version 1.5, which was released in 2014, introduced the ability to handle SSL encryption and decryption without any extra tools like Stunnel or Pound. Enable it by editing your HAProxy configuration file, adding the ssl and crt parameters to a bind line in a frontend section. Here’s an example:
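A minimal sketch of such a frontend (the proxy names, backend name, and certificate path are assumptions for illustration, not taken from the original post):
frontend fe_main
    bind :80
    bind :443 ssl crt /etc/haproxy/certs/site.pem
    default_backend web_servers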
The ssl parameter enables SSL termination for this listener. The crt parameter identifies the location of the PEM-formatted SSL certificate. This certificate should contain both the public certificate and private key. That’s it for turning on this feature. Once traffic is decrypted it can be inspected and modified by HAProxy, such as to alter HTTP headers, route based on the URL path or Host, and read cookies. The messages are also passed to backend servers with the encryption stripped away.
If you prefer to re-encrypt the data before relaying it to the backend servers – although you lose some of the benefits of SSL termination by doing so – you’d simply add an ssl parameter to your server lines in the backend section. Here’s an example:
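For instance, a sketch with assumed backend and server names:
backend web_servers
    balance roundrobin
    # 'ssl' re-encrypts traffic toward the server; recent HAProxy versions also
    # expect an explicit verify setting here (verification is covered next)
    server server1 10.0.0.10:443 ssl verify none check
    server server2 10.0.0.11:443 ssl verify none check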
When HAProxy negotiates the connection with the server, it will verify whether it trusts that server’s SSL certificate. If the server is using a certificate that was signed by a private certificate authority, you can either ignore the verification by adding verify none to the server line or you can store the CA certificate on the load balancer and reference it with the ca-file parameter. Here’s an example that references the CA PEM file:
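A sketch, assuming the CA bundle was stored at /etc/haproxy/certs/ca.pem:
backend web_servers
    balance roundrobin
    server server1 10.0.0.10:443 ssl verify required ca-file /etc/haproxy/certs/ca.pem check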
If the certificate is self-signed, in which case it acts as its own CA, then you can reference it directly.
Redirecting from HTTP to HTTPS
When a person types your domain name into their address bar, more likely than not, they won’t include the https:// prefix. So, they’ll be sent to the plain HTTP version of your site. When you use HAProxy for SSL termination, you also get the ability to redirect any traffic that is received at HTTP port 80 to HTTPS port 443.
Add an http-request redirect scheme line to route traffic from HTTP to HTTPS, like this:
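A sketch of how that might look inside the frontend that binds both ports (names and paths assumed):
frontend fe_main
    bind :80
    bind :443 ssl crt /etc/haproxy/certs/site.pem
    http-request redirect scheme https unless { ssl_fc }
    default_backend web_servers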
This line uses the unless keyword to check the ssl_fc fetch method, which returns true when the connection used SSL/TLS. If it wasn’t used, the request is redirected to the https scheme. Now, all traffic will end up using HTTPS.
This paves the way to adding an HSTS header, which tells a person’s browser to use HTTPS from the start the next time they visit your site. You can add an HSTS header by following the steps described in our blog post, HAProxy and HTTP Strict Transport Security (HSTS) Header in HTTP Redirects.
Limiting Supported Versions of SSL
As vulnerabilities are discovered in older versions of SSL and TLS, those versions are marked deprecated and should no longer be used. With HAProxy, you can allow only certain versions of SSL to be negotiated. Add an ssl-min-ver directive to a frontend, specifying the oldest version you want to support.
In the following example, only TLS version 1.2 and newer is allowed:
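Something along these lines (certificate path and names assumed):
frontend fe_main
    bind :443 ssl crt /etc/haproxy/certs/site.pem ssl-min-ver TLSv1.2
    default_backend web_servers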
Today, possible values for this parameter are:
SSLv3
TLSv1.0
TLSv1.1
TLSv1.2
TLSv1.3
You can set this for all proxies by adding it to your global section in the form of an ssl-default-bind-options directive, as shown:
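For example (a sketch; place it alongside your other global settings):
global
    ssl-default-bind-options ssl-min-ver TLSv1.2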
Limiting Supported Ciphers
In addition to setting the allowed versions of SSL and TLS, you can also set the encryption ciphers that you’ll use by adding a ciphers parameter to the bind line. These are set in preferred order, with fallback algorithms bringing up the end of the list. HAProxy will select the first cipher that the client also supports, unless the prefer-client-ciphers parameter is present, in which case the client’s preferred cipher is selected.
It looks like this:
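A sketch with an abbreviated, illustrative cipher list (pick your own list from the generator mentioned below):
frontend fe_main
    bind :443 ssl crt /etc/haproxy/certs/site.pem ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384
    default_backend web_servers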
Consider using the Mozilla SSL Configuration Generator when deciding which ciphers to include.
You can also set a default value by adding an ssl-default-bind-ciphers directive to your global section, as shown:
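Again a sketch, mirroring the illustrative cipher list above:
global
    ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256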
Choosing the Certificate with SNI
Server Name Indication (SNI) is a TLS extension that allows the browser to include the hostname of the site it is trying to reach in the TLS handshake information. You can use this to dynamically choose which certificate to serve.
Instead of setting the crt parameter to the path to a single certificate file, specify a directory that contains multiple PEM files. HAProxy will find the correct one by checking for a matching common name or subject alternative name.
Here’s an example:
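A sketch, assuming the PEM files live in /etc/haproxy/certs/:
frontend fe_main
    # a directory path loads every certificate in it; SNI selects the match
    bind :443 ssl crt /etc/haproxy/certs/
    default_backend web_servers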
If the client does not send SNI information, HAProxy uses the first file listed in the directory, sorted alphabetically. Therefore, it’s a good idea to name the PEM files so that the default certificate is listed first.
Supporting EC and RSA Certificates
HAProxy supports both Elliptic Curve (EC) and RSA certificates and will choose the one that the client supports. To enable this, store both certificates on the load balancer, giving them the same base file name but one with an .rsa extension and the other with an .ecdsa extension. Then, in the HAProxy configuration, reference the base name and leave off the extension, like this:
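For instance, assuming hypothetical files named example.com.pem.rsa and example.com.pem.ecdsa in /etc/haproxy/certs/, the bind line references the shared base name (this relies on HAProxy’s multi-certificate bundle handling, whose exact behaviour depends on the HAProxy and OpenSSL versions in use):
frontend fe_main
    bind :443 ssl crt /etc/haproxy/certs/example.com.pem
    default_backend web_servers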
Client Certificates
You can restrict who can access your application by giving trusted clients a certificate that they must present when connecting. HAProxy will check for this if you add a verify required parameter to the bind line, as shown:
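A sketch (the client CA path is an assumption; HAProxy requires a ca-file when verify required is set, which the next paragraph describes):
frontend fe_main
    bind :443 ssl crt /etc/haproxy/certs/site.pem verify required ca-file /etc/haproxy/certs/client-ca.pem
    default_backend web_servers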
In addition to verify required, you can use ca-file to set a path to a CA file in order to verify that the client’s certificate is signed and trusted. You can also include a crl-file parameter to indicate a certificate revocation list.
Conclusion
In this blog post, you learned how to enable SSL termination with HAProxy. Allowing HAProxy to manage encryption and decryption has several benefits, including reducing work done on your backend servers, making certificate maintenance easier, and avoiding exposing your servers directly to the Internet for certificate renewal purposes.
Want to stay up to date on topics like this? Follow us on Twitter and subscribe to this blog! You can also join the conversation on Slack.
Explore the advanced features available in HAProxy Enterprise. Contact us to learn more and sign up for a free trial.
HAProxy with SSL and Let's Encrypt - Gridscale

Secure HAProxy with SSL

Perhaps you’ve already tested a little with Let’s Encrypt or read my article on Nginx with Let’s Encrypt. That I am a big fan of HAProxy should have become clear by now. What I have not written about yet is securing HAProxy with SSL, and that is what this article covers. Let’s Encrypt is a certificate authority that provides simple and free SSL certificates. The CA is trusted by all relevant browsers, so you can use Let’s Encrypt to secure your web pages – at no cost.

Even more important to me personally: so far, I have always pointed my Icinga monitoring at every SSL certificate, so that I would be informed in time whenever a certificate threatened to expire. With that monitoring in place, the procedure always went like this:

Check the key length (whether it is still sufficient)
Possibly renew the SSL certificate, or create a new SSL key and a new SSL CRT
Pull out the credit card for Thawte or Comodo and wrestle with their clunky interfaces
Wait, wait, wait … and wait
Then, in a next step, switch the SSL certificates on the servers
Restart the services and check that everything is running

If you have more than a handful of SSL certificates for your servers and services, that gets tedious quickly. So some time ago I started replacing those SSL certificates with Let’s Encrypt certificates. I began with my web servers, and in the meantime I have already converted a few mail servers as well. Thanks to HAProxy, I can use the same configuration for almost anything that speaks layer 4.

This article is based on my article Installation of HAProxy on Debian. If you are in a hurry and do not want to read that article, simply install the HAProxy package.

Preparation

If I work with SSL on web servers and use HAProxy (or an F5 or Brocade load balancer) in front of them, I usually do not enable encryption on the web server itself. I terminate SSL on the load balancer and work with an unencrypted connection in the backend. Especially with hardware load balancers and sites with a large number of HTTPS sessions, this removes a lot of CPU load; load balancers mostly have CPU cards built specifically for SSL that handle many times the connections. Because the connections between HAProxy and the web servers run over a private network, the danger of a man-in-the-middle (MitM) attack is manageable. An attacker would first have to compromise my internal network to get at the unencrypted packets, and if that happens I have a completely different problem.

Create a self-signed SSL certificate

For the initial tests, first create a self-signed SSL certificate. In my example OpenSSL is not installed yet; if necessary, do that quickly now:

root@haproxy:~# apt-get -y install openssl

Use the following command to create your self-signed SSL certificate and place it in /etc/ssl/private/:

root@haproxy:~# openssl req -nodes -x509 -newkey rsa:2048 -keyout /etc/ssl/private/ -out /etc/ssl/private/ -days 30
Generating a 2048 bit RSA private key
................+++
................+++
writing new private key to ‘/etc/ssl/private/’
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter ‘.’, the field will be left blank.
Country Name (2 letter code) [AU]:DE
State or Province Name (full name) [Some-State]:NRW
Locality Name (eg, city) []:Cologne
Organization Name (eg, company) [Internet Widgits Pty Ltd]:gridscale Cloud Computing
Organizational Unit Name (eg, section) []:gridscale Beta Labs
Common Name (e.g. server FQDN or YOUR name) []:*
Email Address []:

You can fill all the properties of the certificate with your personal data, or simply leave them blank. We will replace this certificate with a Let’s Encrypt certificate later anyway.

Next, create a pem file by copying key and certificate into one file. That goes with:

root@haproxy:~# cat /etc/ssl/private/ /etc/ssl/private/ > /etc/ssl/private/

Configuration of HAProxy with SSL listener for HTTPS

On the HAProxy, you can use the following configuration to create a listener on port 443 (HTTPS) and instruct HAProxy to terminate SSL and work with HTTP:

frontend ssl_443
bind *:443 ssl crt /etc/ssl/private/
mode http
http-request set-header X-Forwarded-For %[src]
reqadd X-Forwarded-Proto:\ https
option http-server-close
default_backend ssl_443
backend ssl_443
balance leastconn
server web1 10.0.0.2:80 check
server web2 10.0.0.3:80 check

Short parameter check:

bind *:443 ssl crt /etc/ssl/private/ creates a listener on port 443 with the SSL certificate located at the given path. The mode http setting, as well as the header manipulations, I had already explained in the previous article.

option http-server-close is new. This parameter is not essential at first, but it allows keep-alive towards the web server, which makes the surfing experience on your websites a bit faster.

After restarting HAProxy, your web servers should be accessible on port 443. If you call the site directly, you can also see which additional headers are now present. Compare the responses over port 80 and over port 443.

Installation of Let’s Encrypt on HAProxy

On the HAProxy system, the Let’s Encrypt suite must be installed so that you can request SSL certificates. Since Let’s Encrypt issues domain-validated certificates, you first need a DNS entry pointing to the IP address of your HAProxy; I use a dedicated DNS name for this.

First, install a few tools on the HAProxy system that Let’s Encrypt needs later:

root@haproxy:~# apt-get -y install bc git

Then clone the Let’s Encrypt repository:

root@haproxy:~# git clone https://github.com/letsencrypt/letsencrypt /opt/letsencrypt

Now you have everything on the system that is needed for Let’s Encrypt. Temporarily switch your HAProxy off; there are still a few configurations necessary:

root@haproxy:~# /etc/init.d/haproxy stop

How the Let’s Encrypt Suite Works

Very roughly, this is how the Let’s Encrypt suite operates. The cloned suite works with so-called modules for various services and programs. These modules usually do all the work: they create the certificates, perform the validation, and finally configure your service. There is no such module for HAProxy. Therefore, it is helpful to understand how Let’s Encrypt works; with this knowledge you can build a workaround that works with HAProxy.

The Let’s Encrypt Auto module connects to the Let’s Encrypt servers and exchanges a token. This token is placed in a web server directory and then requested by the Let’s Encrypt servers. If the token matches, the Let’s Encrypt servers sign your certificate.

Prepare your HAProxy for Let’s Encrypt

Actually quite simple, if there were not a small catch. Your default HAProxy does not deliver any content itself but forwards the requests to the backend. The backend, in turn, neither speaks SSL (and does not create certificate requests), nor does it know anything about what you are doing on the HAProxy. In other words, the token cannot be exchanged and thus you get no signed certificate.

I work around this problem by instructing HAProxy to handle all requests to the Let’s Encrypt token directory differently from any other request. To do this, you first need the names of the directories that Let’s Encrypt requests on your web server. Now adjust the HAProxy configuration:

frontend port_80
bind *:80
acl lets_encrypt path_beg /.well-known/acme-challenge/
use_backend lets_encrypt if lets_encrypt
default_backend port_80
backend port_80
balance roundrobin
option httpchk HEAD / HTTP/1.0
server web2 10.0.0.3:80 check

Pay attention to the inserted ACL: if a request to the HAProxy begins with ‘/.well-known/acme-challenge/’ in the URI, it is routed to the backend ‘lets_encrypt’. A definition for the ‘lets_encrypt’ backend is still missing; probably the simplest possible backend entry for HAProxy is enough here:

backend lets_encrypt
server local localhost:60001

This tells HAProxy that all requests to this backend should be delivered via localhost on port 60001. What is still missing is a web server that actually answers on localhost port 60001 – that web server is provided by the Let’s Encrypt standalone mode itself.

Now start your HAProxy and check the log file under /var/log/ to see whether any errors are logged there. If you have the “admin stats” activated (more on that in an article about a few advanced HAProxy configurations), you should now see the new configuration on the HAProxy admin page.

Then test whether the exception rule in the HAProxy works: call a URL beneath /.well-known/acme-challenge/ on your domain in the browser. If you have done everything right, you get an error (503 Service Unavailable from the HAProxy), because currently no service is listening on localhost port 60001, so HAProxy cannot forward the request.

Let’s Encrypt Certificate Request

Now comes the penultimate step: requesting the Let’s Encrypt certificate. First create the directory /etc/letsencrypt/ (if it does not already exist) and create a configuration file cli.ini:

rsa-key-size = 4096
email =
authenticator = standalone
standalone-supported-challenges = http-01
You can also pass the parameters to the CLI tool if you prefer to work without a configuration file. Now go to the directory /opt/letsencrypt (reminder: this is where you cloned the Let’s Encrypt suite) and execute the following command. Replace --domains with the DNS name for which you want to request a certificate:

root@haproxy:/opt/letsencrypt# ./letsencrypt-auto certonly --domains <your DNS name> --renew-by-default --http-01-port 60001 --agree-tos

Voilà! In /etc/letsencrypt/live/ you will find a directory named after your DNS name, and in it the key, certificate, chain, etc.

Now the new certificate only has to be wired into the HAProxy configuration. Unfortunately, you cannot use the files created by Let’s Encrypt directly, since HAProxy needs everything in one file. I put all certificates for HAProxy under /etc/haproxy/cert. I think this is a clear structure, but DevOps regularly breaks the “right” structure, so just think about how you want to organize it yourself.

root@haproxy:~# mkdir -p /etc/haproxy/cert
root@haproxy:~# cat /etc/letsencrypt/live/<your DNS name>/fullchain.pem /etc/letsencrypt/live/<your DNS name>/privkey.pem > /etc/haproxy/cert/<your DNS name>.pem

Then edit the listener for port 443 in your configuration:

frontend ssl_443
# bind *:443 ssl crt /etc/ssl/private/
bind *:443 ssl crt /etc/haproxy/cert/
default_backend ssl_443

And restart HAProxy. Then test whether you can reach your DNS name via HTTPS.

Last Step: Automatic Renewal of Certificates

Let’s Encrypt only issues certificates with a lifetime of 90 days, so you must renew each certificate within this period. The easiest way to do this is to put the above command into a cronjob.

If you want to create a lot of certificates for a domain, a warning: Let’s Encrypt enforces a so-called rate limit. This rate limit counts the certificate requests per domain (caution: not per subdomain!); each request for a subdomain counts toward the counter of the main domain. Do not ask the Let’s Encrypt servers for a renewal too often (otherwise you are out of luck), but also not too rarely (otherwise your certificate suddenly becomes invalid).

I use a small shell script for automation. You can find it in our BitBucket repository and download it directly from the URL. Just download it to /opt:

root@haproxy:~# wget -O /opt/le-renew-webroot

Edit the parameter “exp_limit” in the script and set it, e.g., to any value between 10 and 30. A brief overview of what the script does:

I read the existing directories in /etc/letsencrypt/live
For each directory I check the key and CA file
In the CA file I look up how long the certificate is still valid. If it is longer than exp_limit, I jump to the next one.
If the certificate is valid for less than exp_limit, I read the certificate names (you can store more than one DNS name per certificate)
Then I generate the Let’s Encrypt command and try to have the certificate renewed
If this is successful, I swap the certificate in HAProxy and reload the service. If the operation fails, I try to restore the previous certificate.

The script was only hacked together quickly and should serve as an example. It would certainly be good to build in a little more error handling. The following scenarios are not handled:

If the DNS is no longer valid for a domain, Let’s Encrypt cannot perform domain validation and acknowledges with an error. It would be better if the configured domain names were checked for their registry status (i.e. in the root zone of the respective NIC) and the responsible nameservers were then queried for the configured records.
There is no error handler for HAProxy in the error and info reporting of the script.

In brief: use the script carefully, or extend it where necessary. If I have a quiet hour, I may rewrite the bash script as a Python script and fix these shortcomings. If you want to do the work, I would be glad to receive your version, which I will publish here.

If you still want to trust my script, simply create a cronjob so it runs once per day.
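A minimal cron sketch for that daily run (the schedule, cron file name, and log path are my own assumptions; the script path matches the wget target above):
# /etc/cron.d/le-renew -- check and renew certificates once per day at 03:30
30 3 * * * root /opt/le-renew-webroot >> /var/log/le-renew.log 2>&1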
