How to host an ‘https’ service for a Python application served with Waitress

Hyeoungho Bae
9 min read · Aug 21, 2020


Serving Python Application with SSL/TLS using NGINX reverse proxy

If you want to publish your Python application, one of your choices is a Waitress + Flask configuration. The unfortunate thing is that Waitress does not support SSL/TLS-secured connections (or ‘https’). There are a couple of documents that explain this situation, and some partial information on how to build the service, but I personally could not find a well-summarized guide for people like me. That is why I am writing this. I hope it helps anyone who experiences the same problem.

To cope with this limitation, you can use NGINX as a reverse proxy that handles the certificate/key part and passes the remaining plain request on to Waitress, which then handles it as ordinary ‘http’.

There are a couple of steps to accomplish this, and I will also provide some debugging tips that I used. I assume that you already have an issued certificate. In this scenario, I will use a certificate exported from a Windows IIS server, since I wanted to use the same certificate and the same server name for a different service hosted on WSL (Windows Subsystem for Linux).

The contents will be divided as below:

  1. Export the .pfx file to .crt and .key files
  2. Define the server entry for NGINX
  3. Test from the client

Exporting a .pfx File into .crt and .key Files

In the Windows IIS Server Management Tool, you can locate your existing server SSL certificate, as in the window below:

Export certificate from IIS Manager

Using the ‘Export’ menu, you can generate the .pfx file, which contains the certificate along with the public and private keys; you need to separate these for NGINX. When you export the file, the application will ask you for a password. Note the password, since you will use it when you separate the cert and key files from the .pfx file.

On your WSL shell, run the commands below to do the job (they will ask for the password you just set above). You may want to save a certificate/key pair per application; otherwise, things get a bit chaotic once you start hosting tens of different applications.

$ openssl pkcs12 -in [cert folder]/cert.pfx -clcerts -nokeys -out [folder to save the pair]/cert.crt

$ openssl pkcs12 -in [cert folder]/cert.pfx -nocerts -nodes -out [folder to save the pair]/cert.rsa
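If you prefer to script this split, a rough Python alternative to the two openssl commands uses the third-party `cryptography` package (an assumption of mine — it is not part of the original workflow; the function name and file names are placeholders):

```python
# Split cert.pfx into a PEM certificate and an unencrypted PEM key,
# mirroring the two openssl pkcs12 invocations above.
from pathlib import Path

from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.serialization import pkcs12

def split_pfx(pfx_path: str, password: bytes, crt_out: str, key_out: str) -> None:
    key, cert, _extras = pkcs12.load_key_and_certificates(
        Path(pfx_path).read_bytes(), password
    )
    # Client certificate in PEM form (equivalent of -clcerts -nokeys)
    Path(crt_out).write_bytes(cert.public_bytes(serialization.Encoding.PEM))
    # Unencrypted private key (equivalent of -nocerts -nodes)
    Path(key_out).write_bytes(
        key.private_bytes(
            serialization.Encoding.PEM,
            serialization.PrivateFormat.TraditionalOpenSSL,
            serialization.NoEncryption(),
        )
    )
```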

However, there is one more step you should do: concatenate the root and intermediate certificates. If you check the ‘Certification Path’ of your server certificate in the IIS manager window, you will find a hierarchy of certificates like the picture below.

Certification Path: your server certificate is the one at the bottom

If you use the server certificate only, you will see the message below when you try to verify the certificate on your client machine.

$ openssl s_client -connect your.server.com:[port]

CONNECTED(00000005)
depth=0 CN = your.server.com
verify error:num=20:unable to get local issuer certificate
verify return:1
depth=0 CN = your.server.com
verify error:num=21:unable to verify the first certificate
verify return:1

….

-----END CERTIFICATE-----

---

Peer signing digest: SHA256
Server Temp Key: X25519, 253 bits
---
SSL handshake has read 2763 bytes and written 386 bytes
Verification error: unable to verify the first certificate

The error message above basically means that openssl could not verify the certificate, since it cannot find the intermediate or root certificate that issued the server certificate. Without knowing who vouches for the certificate, it cannot verify its validity, which makes sense.

You can check this locally by running the command below:

$ openssl verify cert.crt

CN = your.server.com
error 20 at 0 depth lookup: unable to get local issuer certificate
error cert.crt: verification failed

OK, then it is time to make a working certificate by concatenating all the certificates shown in the path diagram. Starting from the root, you can export each certificate using the export wizard.

Export certificates in the certification path

After this, you will have a list of .cer files. Remember the order of these files, which matters when you concatenate them: NGINX expects a single bundled certificate file and will complain if the order is incorrect.

Let’s say you have the files below on hand:

root.cer (root certificate)

intermediate.cer (intermediate certificate)

cert.crt (the server certificate generated from the .pfx file)

Then you need to convert the first two files into the .pem format, which openssl supports:

$ openssl x509 -inform der -in [exported .cer file] -out [your .pem file]

Once you have all the converted certificates on hand, you want to validate that they pass the verification process. In my case:

$ openssl verify -CAfile root.pem -untrusted intermediate.pem cert.crt

cert.crt: OK

Then it is time for concatenation. The order is bottom-up, meaning you start from the server certificate and end with the root certificate:

$ cat cert.crt intermediate.pem root.pem > chained.pem
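If you prefer to script this step, the same leaf-first ordering applies. A small Python sketch (the file names are assumed from the steps above):

```python
# Concatenate PEM files leaf-first: server cert, then intermediate, then root.
from pathlib import Path

def build_chain(out_path: str, *pem_paths: str) -> str:
    """Write the concatenation of pem_paths to out_path and return it."""
    chained = "".join(Path(p).read_text() for p in pem_paths)
    Path(out_path).write_text(chained)
    return chained

# Usage (assumed file names):
# build_chain("chained.pem", "cert.crt", "intermediate.pem", "root.pem")
```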

Now you have a concatenated certificate chain that clients can verify, plus the key that matches it. This cert/key pair goes under the server section of the NGINX configuration:

ssl_certificate [folder to save the pair]/chained.pem;

ssl_certificate_key [folder to save the pair]/cert.rsa;

You can also reference this wonderful guide on SSL certificates, which helped me a lot.

Define the server entry for NGINX

NGINX sits between a client and your web application on your server machine to provide various types of control over the traffic. It can route certain types of traffic (for example, as a reverse proxy, as in this scenario) or even load-balance traffic across your applications.

In this scenario, we will configure NGINX as a reverse proxy. As mentioned earlier, Waitress does not support SSL/TLS-based connections, which means it only speaks the ‘unsecured’ protocol even if you add the extra ‘s’ to your URL (https). After the reverse proxy configuration, though, you need to inform Waitress that its ‘url_scheme’ is ‘https’.

First, you need to install NGINX on your Linux system. If your package index is not up to date, you may hit errors during installation, so update first and then install:

$ sudo apt-get update

$ sudo apt-get install nginx

Then disable the default virtual host:

$ sudo unlink /etc/nginx/sites-enabled/default

Now you generate the reverse-proxy configuration:

$ sudo vi /etc/nginx/sites-available/reverse-proxy.conf

error_log /var/log/nginx/error.log debug;
access_log /var/log/nginx/access.log;

server {
    listen your.server.com:443 ssl;
    server_name your.server.com;

    access_log /your/application/Log/access.log;

    ssl_certificate [folder to save the pair]/chained.pem;
    ssl_certificate_key [folder to save the pair]/cert.rsa;
    ssl_ciphers HIGH:!aNULL:!MD5;

    location / {
        proxy_pass http://0.0.0.0:[PORT]/;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

There are a couple of things to note.

  1. Logging: This is very important, especially when you want to understand why your service is blocked by NGINX. There are three log locations you can reference: error, access, and application access, shown in the example above in that order. For the error log in particular, you can specify the logging level; if you don’t, it will give you informative messages only. Debug is the level to use when you want to understand an underlying connection issue.
  2. Listen: The default port for SSL connections is 443, but you can change the port number depending on your design and firewall configuration. If you change it, NGINX will listen on that port instead of the default.
  3. Location: This is where requests get routed, and it is the core part of the reverse proxy. The first slash matches the requested URL path after the server name, and ‘proxy_pass’ specifies where to route that traffic. The address shown in the example (0.0.0.0) reaches the local machine here, since Waitress runs on the same host; 127.0.0.1 (the loopback address) is the more conventional choice.
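On the application side, the header set by proxy_set_header can be read back. A hedged Flask sketch (the route name is my own, not from the original service):

```python
# Hypothetical Flask view that prefers the client address forwarded by
# NGINX (X-Real-IP) over the proxy's own address.
from flask import Flask, request

app = Flask(__name__)

@app.route("/whoami")
def whoami():
    # Behind the reverse proxy, remote_addr is NGINX itself, not the
    # real client — hence the header set by proxy_set_header above.
    client_ip = request.headers.get("X-Real-IP", request.remote_addr)
    return {"client_ip": client_ip}
```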

To run the reverse proxy server, follow the steps below.

First, test your configuration (luckily, NGINX provides this functionality built in). If there is no syntax problem, it will print messages like the below:

$ sudo nginx -t

Enter PEM pass phrase:

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok

nginx: configuration file /etc/nginx/nginx.conf test is successful

If the concatenation order was wrong, you will see the error message below. The test fails because NGINX tries to match the private key against the very first certificate in the chained file; if the server certificate is not first, it throws an error like this:

nginx: [emerg] SSL_CTX_use_PrivateKey_file("[folder to save the pair]/cert.rsa") failed (SSL: error:0B080074:x509 certificate routines:X509_check_private_key:key values mismatch)
nginx: configuration file /etc/nginx/nginx.conf test failed

Then start or restart the NGINX service to apply the changes you made to the configuration file. (You should do this whenever you change any NGINX configuration file.)

$ sudo /etc/init.d/nginx restart

Note: On a full Linux system, you can use the ‘service’ command, as in service nginx restart. However, WSL has a known problem with that command, which forced me to use the workaround above.


Test from the client

It is the moment of truth. You can simply use your browser or another GUI tool to validate the service, but I prefer using openssl and curl before running my client application.

OpenSSL is quite useful for understanding what is wrong with your certificate. Since certificate-related issues account for a huge portion of the problems in establishing an SSL-based service, I used it to debug most of the issues I found. curl is also useful since it lets you exercise RESTful APIs with header and body options.

First, confirm that your certificate can be validated on the client side. If your server is reachable and the certificate is valid, you will see output similar to the below:

$ openssl s_client -connect your.server.com:[port]

CONNECTED(00000005)
depth=2 C = IE, O = ***, OU = **, CN = **** Root
verify return:1
depth=1 C = **, ST = ***, L = **, O = ***, OU = ***, CN = *****
verify return:1
depth=0 CN = your.server.com
verify return:1
---
Certificate chain
 0 s:/CN=your.server.com
   i:/C=US/ST=***/L=**/O=**/OU=***/CN=****
 1 s:/C=**/ST=***/L=****/O=***/OU=****/CN=**
   i:/C=IE/O=***/OU=***/CN=****
 2 s:/C=IE/O=***/OU=****/CN=****
   i:/C=IE/O=***/OU=***/CN=***
---
Server certificate
-----BEGIN CERTIFICATE-----

-----END CERTIFICATE-----

---
No client certificate CA names sent
Peer signing digest: SHA256
Server Temp Key: X25519, 253 bits
---
SSL handshake has read 5124 bytes and written 386 bytes
Verification: OK

OK, near the bottom of this output (not exactly the bottom of the entire output, though), you will find that the verification is OK. In the verification steps at the top, you will also find that ‘verify return’ is ‘1’. Don’t be afraid: it actually means we passed the verification process (according to this thread).
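This reachability check can also be scripted with Python’s standard ssl module, which performs the same chain and hostname verification against the system CA bundle (the function name is my own):

```python
# Stdlib sketch of the s_client check: open a TLS connection and let the
# default context verify the chain and hostname (raises on any failure).
import socket
import ssl

def check_cert(host: str, port: int = 443) -> dict:
    ctx = ssl.create_default_context()  # system CA bundle, full verification
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()  # dict of subject/issuer/validity
```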

If your application is a RESTful API and you want a test run before actually calling it from your client application, you can do so using curl.

$ curl -v -H "Content-Type: application/json" --data @testinput.json -X POST https://your.server.com:[port]/your_service_name
*   Trying **.**.**.***…
* TCP_NODELAY set
* Connected to your.server.com (**.**.**.***) port **** (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/certs/ca-certificates.crt
    CApath: /etc/ssl/certs
* (304) (OUT), TLS handshake, Client hello (1):
* (304) (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ****
* ALPN, server accepted to use http/1.1
* Server certificate:
*  subject: CN=your.server.com
*  start date: ***
*  expire date: ***
*  subjectAltName: host "your.server.com" matched cert's "your.server.com"
*  issuer: *****
*  SSL certificate verify ok.
> POST /predict_bugclass HTTP/1.1
> Host: your.server.com:[port]
> User-Agent: curl/7.58.0
> Accept: */*
> Content-Type: application/json
> Content-Length: 2324
> Expect: 100-continue
>
< HTTP/1.1 100 Continue
* We are completely uploaded and fine
< HTTP/1.1 200 OK
< Server: nginx/1.14.0 (Ubuntu)
< Date: Fri, 21 Aug 2020 21:17:21 GMT
< Content-Type: application/json
< Content-Length: 353
< Connection: keep-alive
< Sp-Location: *****
<
{"field name":"category","prediction":"AppFunc,AppFunc,AppFunc,DeviceInst,AppFunc,AppFunc,Crash,Crash,DeviceInst,Crash,Crash,Crash,Crash,OsPerf,AppFunc;Crash,Crash,AppFunc;Crash,AppFunc,AppFunc,DeviceInst,AppFunc,DeviceInst,AppFunc,AppFunc,Crash,Crash,AppFunc,Crash,Crash,Crash,OsPerf,AppFunc,Crash","prediction module":"Bug Classifier","success":true}
* Connection #0 to host your.server.com left intact

Yes! The server application (an ML classifier, as you can see from the output) responds properly to my ‘https’ request, and the client could verify that the connection was securely established using the certificates, which is great!
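If you would rather script this smoke test, the same call can be made with Python’s standard library (the URL and payload below are placeholders for your own):

```python
# Stdlib equivalent of the curl invocation above; the service URL and
# payload are placeholders.
import json
import urllib.request

def post_json(url: str, payload: dict) -> dict:
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # urlopen verifies the server certificate against the system CA bundle,
    # so a broken chain fails here just as it would in curl.
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Usage (hypothetical):
# post_json("https://your.server.com/your_service_name", {"text": "..."})
```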

Now we are ready to use the service from the client node. Thanks for reading to the end; I hope this helps people like me who struggled to piece together a solution from documents spread out here and there.


Hyeoungho Bae

Software Engineer interested in Optimization, Natural Language Processing, Data Augmentation (https://www.linkedin.com/in/hyeoungho-wayne-bae-8b395724/)