Set up a CDN for Plex with CloudFlare & NGINX


Difficulty: Moderate


Requirements:

  • A second IP (this can be done with one, but that is not in the scope of this guide)
  • A domain
  • A free CloudFlare account

CloudFlare has the effect of forcing certain levels of TLS encryption on the client. Older clients (such as some SmartTVs) do not support the minimum level of TLS required by CloudFlare, which may prevent them from connecting to the server. You can bypass this by rolling your own proxy where you control the level of security.
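For reference, when you run your own proxy the TLS floor is just a directive in the nginx config. A sketch only (which protocols you allow is your call; the older ones are weak and should only be enabled if a legacy device truly needs them):

```nginx
# Inside your nginx server block: permit older TLS versions for legacy clients.
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
```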

I modified this a bit from the GitHub entry posted here, along with the corresponding post on Reddit. This may take a good amount of time to set up, so make sure you read through and understand the instructions before starting! Sure, this might be a bit convoluted, and setting up Apache and nginx to work alongside each other is a bit of a pain, but with QuickBox utilizing Apache and Plex not playing well with it, there are few options.

Once you understand what’s involved, grab a cup of coffee and let’s get to work!

1. Sign up for CloudFlare

The first step is to sign up for an account at CloudFlare and move the nameservers of your domain over to the ones provided during CloudFlare's setup. Once CloudFlare is set up, add your failover IP as a record on a new subdomain, e.g. plex.yourdomain.com (a placeholder used throughout this guide; substitute your own). Until we set up Let's Encrypt for the subdomain, ensure server traffic is not being routed through CloudFlare just yet (grey cloud icon). Further, under the Crypto tab, ensure SSL is set to either Full or Full (Strict).

2. Add a second IP to your server

As your new nameservers propagate throughout the network, take this time to bring up the second IP on your server. We need this second IP to bind an instance of NGINX to, so that there are no conflicts with our currently running Apache server. If you want to reverse proxy apache through nginx, you could run this all off a single IP, but that won’t be covered here.
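For the curious, the single-IP variant works by re-binding Apache to a high local port and letting nginx sit in front of it. A rough sketch only (the 8080 port is an assumption):

```nginx
# nginx terminates port 80 and hands requests to Apache on 127.0.0.1:8080
server {
    listen 80;
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```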

The setup of an IP may vary from host to host; however, for a dedicated machine running Ubuntu, this method should work for most. A failover IP can be brought online by editing /etc/network/interfaces to add a new address to your network interface.

sudo nano /etc/network/interfaces

Insert the following at the bottom of the eth0 stanza (before the IPv6 configuration if your server supports it). Replace IP.OF.FAIL.OVER with the IP you were given by your provider, and replace eth0 with the name of your adapter if necessary (e.g. enp2s0):

up ip addr add IP.OF.FAIL.OVER/32 dev eth0
down ip addr del IP.OF.FAIL.OVER/32 dev eth0

Save and exit. Bring the new address online with the command

sudo ip addr add IP.OF.FAIL.OVER/32 dev eth0

You should now be able to ping your new IP. Confirm with a local test:

ping -c 3 IP.OF.FAIL.OVER

If your new IP fails to respond to ping, consult your host's documentation for help.
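If you end up scripting this step, it's worth sanity-checking the address before handing it to ip. A minimal, self-contained sketch (the 192.0.2.10 address is a documentation placeholder, not a real failover IP):

```shell
# Placeholder from the documentation range; substitute your real failover IP.
FAILOVER_IP="192.0.2.10"

# Reject anything that is not four dot-separated number groups
# before it can end up bound to the interface.
if echo "$FAILOVER_IP" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}$'; then
    echo "ok: $FAILOVER_IP looks like an IPv4 address"
    # sudo ip addr add "$FAILOVER_IP/32" dev eth0   # run this part on the server
else
    echo "refusing malformed address: $FAILOVER_IP" >&2
    exit 1
fi
```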

3. Bind Apache to the main IP of your server

If you don't know the main IP address of your server, you can attempt to be lazy and grab it with this one-liner:

sudo ifconfig | grep -m1 "inet addr" | cut -d: -f2 | cut -d" " -f1

(On newer releases where ifconfig is gone, hostname -I | awk '{print $1}' will get you there.)

If that doesn’t work, check the email that your host sent you :sweat_smile:

Now we need to edit a few files to prevent Apache2 from binding to all available interfaces:

sudo nano /etc/apache2/ports.conf

It should look something like this:

# If you just change the port or add more ports here, you will likely also
# have to change the VirtualHost statement in
# /etc/apache2/sites-enabled/000-default.conf

Listen 80

<IfModule ssl_module>
        Listen 443
</IfModule>

<IfModule mod_gnutls.c>
        Listen 443
</IfModule>

# vim: syntax=apache ts=4 sw=4 sts=4 sr noet

Alter the file as such:

# If you just change the port or add more ports here, you will likely also
# have to change the VirtualHost statement in
# /etc/apache2/sites-enabled/000-default.conf

Listen ORIG.IP.OF.SRV:80

<IfModule ssl_module>
        Listen ORIG.IP.OF.SRV:443
</IfModule>

<IfModule mod_gnutls.c>
        Listen ORIG.IP.OF.SRV:443
</IfModule>

# vim: syntax=apache ts=4 sw=4 sts=4 sr noet

Save, exit and load up the vhost files. Note that on a stock Apache install the port 80 vhost lives in 000-default.conf, while the SSL vhost is in default-ssl.conf:

sudo nano /etc/apache2/sites-enabled/000-default.conf
sudo nano /etc/apache2/sites-enabled/default-ssl.conf

Find the lines that read <VirtualHost *:80> and <VirtualHost *:443>. Replace the asterisks with the ORIG.IP.OF.SRV:

<VirtualHost ORIG.IP.OF.SRV:80>
<VirtualHost ORIG.IP.OF.SRV:443>
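A quick way to double-check the ports.conf edit is to grep for any Listen directive that is still bare (a bare "Listen 80" binds every interface and will clash with nginx). The sketch below is self-contained, checking an inline copy with a 203.0.113.5 placeholder address; point CONF at /etc/apache2/ports.conf when you run it for real:

```shell
# Write an example ports.conf to a temp file so the check can be demonstrated.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
Listen 203.0.113.5:80
<IfModule ssl_module>
        Listen 203.0.113.5:443
</IfModule>
EOF

# A Listen line consisting of only a port number means "all interfaces".
if grep -Eq '^[[:space:]]*Listen[[:space:]]+[0-9]+[[:space:]]*$' "$CONF"; then
    RESULT=warning
    echo "warning: Apache still binds to all interfaces"
else
    RESULT=ok
    echo "ok: every Listen directive is pinned to an IP"
fi
rm -f "$CONF"
```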

Now, stop Apache2.

sudo systemctl stop apache2

4. Install NGINX, Let’s Encrypt and bind to the failover IP
Start by installing nginx:

sudo apt update
sudo apt install nginx

Also, if you do not have Let's Encrypt installed, now would be the time to do that:

sudo apt install letsencrypt

Now, we need to alter nginx to bind to our new IP and also connect with the Let’s Encrypt servers so that they can issue you an SSL certificate.

sudo nano /etc/nginx/sites-enabled/default

Near the top of the file you should see

server {
        listen 80 default_server;

Insert your failover IP and the .well-known location for Let’s Encrypt:

server {
        listen IP.OF.FAIL.OVER:80 default_server;

        location ~ /.well-known {
                allow all;
        }

(leave the rest of the server block as-is)

Save and exit. Run the command sudo nginx -t to ensure your configuration is valid. If yes, hooray! Let’s take this opportunity to restart nginx and bring our apache2 server back online.

sudo systemctl restart nginx
sudo systemctl restart apache2

Ensure there are no conflicts and both services are currently running (systemctl status apache2 & systemctl status nginx)

Now we can use Let's Encrypt to grab an SSL certificate. Make sure your DNS is pointing at your failover IP (grey cloud, not routed through CloudFlare):

sudo letsencrypt certonly -a webroot --webroot-path=/var/www/html -d plex.yourdomain.com

If all goes well, you now have shiny new SSL certs for your subdomain.
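Keep in mind these certificates expire after 90 days. A renewal cron entry is worth adding while you're here; this is only a sketch (the schedule and the nginx reload step are assumptions, adjust to taste):

```
# /etc/cron.d/letsencrypt-renew (sketch)
# Try renewal twice a day; reload nginx so it picks up fresh certs.
17 3,15 * * * root letsencrypt renew --quiet && systemctl reload nginx
```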

Now we will beef up security just a bit more:

sudo bash
cd /etc/letsencrypt/live/plex.yourdomain.com

Check the issuing authority of your certificate:

openssl x509 -noout -text -in fullchain.pem | grep Issuer:

You’ll see something like:

Issuer: C=US, O=Let's Encrypt, CN=Let's Encrypt Authority X3

Download the corresponding intermediate pem from letsencrypt.org/certs:

wget -O chain.pem "https://letsencrypt.org/certs/lets-encrypt-x3-cross-signed.pem"

Change x3 if necessary to correlate to the authority who issued your certificate.

Now we will generate a dhparam for nginx:

mkdir -p /etc/nginx/ssl
cd /etc/nginx/ssl
openssl dhparam -out dhparam.pem 2048

We now have everything we need to configure our Plex Reverse Proxy:

cd /etc/nginx/sites-enabled
wget -O plex.conf <raw link to the plex.conf from the GitHub entry mentioned above>

Now we must alter the file to include our failover IP, ssl certificates and dhparam.

nano plex.conf

Find the two listen directives:

listen 80;
listen 443 ssl http2;

and insert your failover IP:

listen IP.OF.FAIL.OVER:80;
listen IP.OF.FAIL.OVER:443 ssl http2;

Next, find the SSL certificate directives and insert your Let's Encrypt certificates:

ssl_certificate /etc/letsencrypt/live/plex.yourdomain.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/plex.yourdomain.com/privkey.pem;

Now add our trusted certificate (the chain.pem we downloaded earlier):

ssl_trusted_certificate /etc/letsencrypt/live/plex.yourdomain.com/chain.pem;

And the dhparam a bit further down:

ssl_dhparam /etc/nginx/ssl/dhparam.pem;
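Pieced together, the TLS-relevant part of plex.conf now looks roughly like this (a sketch only; plex.yourdomain.com is a stand-in placeholder for your own subdomain):

```nginx
server {
    listen IP.OF.FAIL.OVER:443 ssl http2;
    server_name plex.yourdomain.com;   # placeholder

    ssl_certificate         /etc/letsencrypt/live/plex.yourdomain.com/fullchain.pem;
    ssl_certificate_key     /etc/letsencrypt/live/plex.yourdomain.com/privkey.pem;
    ssl_trusted_certificate /etc/letsencrypt/live/plex.yourdomain.com/chain.pem;
    ssl_dhparam             /etc/nginx/ssl/dhparam.pem;

    # (the remaining proxy settings from plex.conf stay as downloaded)
}
```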

That’s all the edits we need to make to this file so save and exit. Test your config and cross your fingers!

nginx -t

If all goes well, start your server

systemctl start nginx

Now your Plex should be accessible via https://plex.yourdomain.com

5. Alter your Plex Server settings

Make sure "Show Advanced Settings" is on. Under the Network tab, add a custom server access URL:

https://plex.yourdomain.com:443

Then, under Remote Access, disable remote access.

6. Update firewall to prevent external pinging of port 32400

iptables -A INPUT -p tcp -s localhost --dport 32400 -j ACCEPT
iptables -A INPUT -p tcp --dport 32400 -j DROP

The first rule ensures our localhost is still able to talk with Plex, which allows our proxy to communicate with the internal server; the second rule prevents all other access. As such, you should still be able to access your Plex installation both through app.plex.tv and your custom URL. Confirm that you can, then make these rules persistent across reboots with iptables-persistent:

apt install iptables-persistent

Choose yes during installation to save your current iptables rules (note: if you have any fail2ban rules, they may get included here. Make sure there are no firewall rules you don't want to save, though you can remove unintentional additions later if needed).

At this point, no data should be served from your server via port 32400, and all traffic should flow exclusively through port 443. You can verify this with tcptrack: your server will not establish connections via port 32400 during playback, though clients will still try (you'll see the attempts stuck in a SYN state).
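One caveat worth flagging: the rules above are IPv4-only. If your server also answers on a public IPv6 address, mirror them in iptables-persistent's v6 rules file; a sketch of the fragment (the file path assumes iptables-persistent's default layout):

```
# /etc/iptables/rules.v6 (fragment)
*filter
-A INPUT -p tcp -s ::1 --dport 32400 -j ACCEPT
-A INPUT -p tcp --dport 32400 -j DROP
COMMIT
```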

7. Enable the CloudFlare CDN for your subdomain

Change over that grey cloud icon to the orange one!

Your data is now being proxied through CloudFlare’s servers. Congratulations!! Once the DNS switches over to your CloudFlare IP you should see a noticeable improvement in your speeds and jitter.

Please note this may not be perfect - I will do my best to answer questions and fix any errors in the guide. Please let me know if you run into issues!

If you're worried about sending all of your data through CloudFlare, the same basic principle could apply to a VPS such as Linode, Vultr or others.


so after doing all this i learned it's not great for sharing lol, only good for browser watching XD so not for me SAD Dtech

BAD for sharing XD

Why is it bad for sharing?

If you look at the reddit comments you see people using it for media players and not just web:

Perhaps you have that CF setting set wrong?

maybe… idk, it just did not work for me sharing-wise…

would not shock me if i broke something XD

I have QB in an LXC container now as I work my way out of QB and into a full Docker config. I use nginx as the reverse proxy for the Dockers, so I'll try to give this a go in that config shortly.


This is working well for Roku, Chromecast and Plex Web. I don’t have any other clients to test locally.

This is why we set the custom access URL - it tells the client where to go for the data. Most clients will still ping the server on port 32400 which is why we have to prevent external access.

I actually had this setup running for a couple weeks, but no streams were getting sent through the proxy, only library data. The firewall was the final piece of the puzzle and is not an optional step. Alternatively, if you have concerns about your shares, you can simply block port 32400 on your local network. However, this shouldn't be necessary, and these steps should enable CF support for all clients.

just curious … seems promising… but what does the average user in this community gain from going down this road? seems like it adds an added layer of complexity … some people have reported better performance just from downgrading their plex clients… any thoughts on that? thanks!! again love all the work you guys do… that goes w/o saying… :slight_smile:

I use an OVH server in France. Without the CDN enabled, I usually pull down speeds between 400-1800kb/s from my server. When routed through CloudFlare, Plex rockets to 8MB/s-15MB/s. This translates to insanely faster buffering (most lower bitrate files start within a second) as well as reduced jitter (no random dropouts during streaming).

For me the difference is night and day. Higher bitrate files (e.g. 20mbps) were not doable (or only under optimal circumstances, but the network would typically change before I was done watching an hour-long file). I have no issues now.

Some may see improvements simply with the reverse proxy and not routing their data through CF, though the benefits of CF are very clear in my case - I simply don’t see saturated links at any hour of the day at my location from an OVH France server.

Edit: And you are completely right - it is convoluted. This would be significantly less so if QB ran off nginx (which is a thought kicking around somewhere in the back of my head)


interesting so i wonder if many of the problems i was experiencing w/ OVH would go away if i switched to nginx? perhaps good looking into :slight_smile: i’d love to be able to go back to streaming REMUXes w/ minimal problems.

also would it be possible to go back and forth once this is in place for easy A/B testing? or would i be stuck to this once its completed?

thanks Liza.

you forgot an n, im setting up now and got to that step and noticed

on a side note, if i just activated my 2nd ip minutes before doing this guide, will it take a little bit before it works? i followed step by step and both nginx -t tests showed settings working, but i am unable to load the url. im not too technical with domain stuff.


I’m on OVH, albeit France instead of BHS. I think it depends a bit on what CloudFlare can do for you - for me it is a lot. While CF states that you can use unlimited bandwidth, they also mention that it is a shared service and if you affect users around you they reserve the right to suspend or terminate your account. Remuxes are very data heavy, but if you aren’t streaming them all day every day, then I think you should be able to slip under the radar. Conversely, if CF ever does anything to your account, it would be pretty easy to switch over to a VPS like Vultr where you have a known quantity of bandwidth at your disposal.

As for this, it should be as easy as switching CloudFlare routing on and off through your panel. Ping the server to check whether it's routing through CF or your origin IP (it sometimes takes a few moments for these changes to propagate).

This is related to DNS propagation. It may take a bit of time to get it going. I prefer to use a shorter TTL (time to live) in these circumstances (5 minutes or so) as it helps propagate the change throughout the DNS servers faster. As long as ping is up on the IP, it should propagate through the network eventually.

thanks everything is working here, will have to test out some devices now!

remind me again why the need for CloudFlare? Just curious, because it sounds like people like to use NGINX for performance over Apache. Is CloudFlare required in order to run Apache and nginx side by side?


I look forward to your results!

CloudFlare is the CDN - they are the one responsible for routing your data over a faster network and making the server appear much closer to you geographically speaking. Ping to server without CloudFlare: 160ms. With CloudFlare: 15ms. Using them as a proxy server will change the routing of your data to one that hopefully is much better connected than the one provided solely by your DC/ISP.

It would be the same idea as using a VPN (or reverse proxy) to improve speeds.


so far it works on browser, phone, xbox one, and roku 3.

does not work on my samsung tv, which is a problem because that's the main thing i watch on. i have ssl set to full (strict), but the server just shows up as the red X while it works fine on the roku 20 ft away. I am thinking the samsung plex app is more limited than the other apps.

Ah! I forgot I have a Samsung SmartTV too (far prefer the Roku myself). I can't even communicate with the plex website to log in to my account right now. Do you have any other servers (shares or otherwise) on your account to verify it's localized to the proxied server? I have a feeling something fishy is going on with the login portal at the moment.

my local server and a remote server set up the regular way both work, but the CF setup is not working on a 2015 JS model. my friend is having no issues playing on his KS (2016) model samsung, which uses a different app (new style, like roku/xbox). I am thinking there are limitations in this outdated app?

the roku is nice, but I prefer to be a 1 remote kind of guy especially when I am laying in bed.