Nginx Proxy Manager ignoring ports

Soldato
Joined: 10 Sep 2009 · Posts: 2,565 · Location: United Kingdom
I have been trying to set up a proxy for some services on my server via Tailscale so I can use nice DNS names and not deal with ports. I got one service to work, but everything else is ignoring the port.

I am running Nginx in Docker Compose on OMV. All my services are also on Docker on OMV, except Home Assistant, which is in a VM on the same server, and another service on a different server that also runs Docker Compose on OMV. I can access them on the local network and via the Tailscale DNS or IP, using the correct port. My DNS is from Cloudflare. I created an SSL cert with their token, but it doesn't matter whether I use it or not; I still have the same problem.

For example, I created a proxy host pointing domain.com to localhost on port 81. When accessing domain.com, it is redirected to localhost instead of localhost:81. I also tried setting the IP to the Docker container name instead of localhost, since they are on the same network, but I still get directed to localhost.

As of right now, I have to use domain.com:port to access the correct service, which is not ideal.



I hope this explains it well since I am a networking noob.
 

Yeah, it can be very tricky.

If you share your docker compose for NPM (Nginx Proxy Manager), that might help. By default things wouldn't work as you expect, but they are fairly easily sorted..

I assume the intention is this is only reverse proxying your various services within your tailnet?
 
LLMs are a godsend for fixing stuff like this: just bang your docker compose into ChatGPT or Claude or something and get it to sort it for you. Obviously don't be sticking any sensitive information in it.
 
Yeah, the goal is only to proxy the services. I moved Nginx to my Home Assistant as an add-on, but the problem persisted. The Nginx admin page on port 81 is the only thing I got working.

Code:
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: unless-stopped

    ports:
      # These ports are in format <host-port>:<container-port>
      - '81:81'    # Public HTTP Port
      - '443:443'  # Public HTTPS Port
      - '82:82'    # Admin Web Port
      # Add any other Stream port you want to expose
      # - '21:21'  # FTP

    environment:
      TZ: "Europe/London"

      # Uncomment this if you want to change the location of
      # the SQLite DB file within the container
      # DB_SQLITE_FILE: "/data/database.sqlite"

      # Uncomment this if IPv6 is not enabled on your host
      # DISABLE_IPV6: 'true'

    volumes:
      - /nginx-data:/data
      - /nginx-letsencrypt:/etc/letsencrypt
This was the compose YAML file. The LLM is useless, by the way; it never says anything useful, or nothing I haven't already tried.
 
Before we go head-first into YAML land.

Can you share some images of how you've set the hosts up in NPM?
What are your DNS records looking like?

Start at the shallow end before you go deep.
 


This is the image of my DNS record on Cloudflare. The content is the IP address of my server.


This is my Nginx setup. I have disabled SSL for now and it still doesn't work; only the NPM DNS entry works.
 
One obvious thing is the ports.

Presuming your DNS records are mainly pointing at your OMV IP address, then using "jellyfin.tail.xxx" as the URL, I would expect browsers to try port 80 first (regular HTTP). In your case, that would effectively resolve to the IP of your OMV instance, and port 80 is used by OMV for its web interface. This matches the observed behaviour of 'accessing localhost instead of localhost:port', i.e. the request is not reaching nginx at all..

This is compounded by looking at the NPM installation documentation which shows a docker compose of:

Code:
services:
  app:
    image: 'docker.io/jc21/nginx-proxy-manager:latest'
    restart: unless-stopped
    ports:
      - '80:80'
      - '81:81'
      - '443:443'
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
In this case, from the container's point of view:
Port 80 = HTTP
Port 81 = NPM Web interface
Port 443 = HTTPS

However, in your case, you've seemingly randomly mapped different container (internal) ports:
Code:
# These ports are in format <host-port>:<container-port>
- '81:81' # Public HTTP Port
- '443:443' # Public HTTPS Port
- '82:82' # Admin Web Port

Issues (IMO)
1. You couldn't use port 80 on the 'host' side because that is in use by the OMV web interface, so you've thought, oh, I'll map port 81 for HTTP and 82 for the web interface.. however, that (under normal docker compose on a normal installation) should have yielded:
Code:
# These ports are in format <host-port>:<container-port>
- '81:80' # Public HTTP Port
- '443:443' # Public HTTPS Port
- '82:81' # Admin Web Port
i.e. you can't change the container ports; they are fixed by the container. What you can do is change the host port to map them however you want..
So in your case, http://omv-ip:81 would launch NPM's web interface, not be its public HTTP port..
http://omv-ip:82 wouldn't go to the NPM web interface because port 82 in the container goes nowhere..


2. The next issue is, as I first mentioned, that I think browsers will default to port 80.. that port is in use by OMV for its web interface, and so nginx is not even getting a sniff..
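To illustrate the mapping rule from issue 1 with a throwaway example (the service and image here are just for demonstration, not part of your stack), only the left-hand side of each mapping is yours to choose:

Code:
services:
  whoami:
    image: traefik/whoami   # tiny demo web app that listens on port 80 internally
    ports:
      # <host-port>:<container-port>
      # left side: any free port on the host, your choice
      # right side: fixed by whatever the app listens on inside the container
      - '8080:80'   # browse to http://host-ip:8080 and Docker forwards to port 80 inside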


The solution in my opinion is to use a custom Docker network of type MACVLAN (or IPVLAN).. this would in effect give your container its own IP address on the network, and it can then use ports 80/443/81 without any issues..
Your DNS then needs to point to the new IP address for the NPM container.. and in NPM's proxy hosts you might want to use the IP address of your OMV instance for each service rather than localhost (127.0.0.1)

Here's Copilot's attempt (I assumed you'd want an IP address within the same subnet as your OMV host):

Code:
services:
  nginx-proxy-manager:
    image: jc21/nginx-proxy-manager:latest
    container_name: nginx-proxy-manager
    restart: unless-stopped
    networks:
      custom_macvlan:
        ipv4_address: 192.168.1.50   # Change this to an unused IP in your host's subnet
    ports:
      - "81:81"    # Admin UI
      - "80:80"    # HTTP
      - "443:443"  # HTTPS
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt


networks:
  custom_macvlan:
    driver: macvlan
    driver_opts:
      parent: eth0   # Replace with your host's network interface
    ipam:
      config:
        - subnet: 192.168.1.0/24    # Your host's subnet
          gateway: 192.168.1.1      # Your host's gateway

This way, when you put in jellyfin.tail.xxx it will be resolved to the NPM IP on port 80 by default, and nginx will take over from there..
(And you'll access the NPM web interface for configuration using the new NPM IP:81.)
 
Nginx is installed on the Home Assistant VM now, because it crashed overnight for no reason, the server couldn't recreate the container, and it got stuck. I had to hard reset the server to get it running again. I am kind of scared of it now lol. I think now it's on the VM it's safer, because crashing the VM does nothing. Port 80 is free on that VM since HA uses port 8123, so the address for Nginx (npm.ha.xxx) works. But the other stuff always goes to the OMV web UI. What you are saying is that it always tries port 80 first, which is OMV, and since it can connect, it just does that and doesn't go on to the port of the services?
 
If Home Assistant is in a VM, then it should have its own IP address (different from OMV), in which case your DNS needs to point to the Home Assistant VM IP address.. and fix your ports to be:

Code:
    ports:
      - "81:81"    # Admin UI
      - "80:80"    # HTTP
      - "443:443"  # HTTPS
so port 80 (host, i.e. your Home Assistant VM) maps to port 80 of NPM..

If you have the DNS pointing to the Home Assistant IP, then in a web browser npm.ha.xxx should resolve to the Home Assistant IP and the browser will try port 80 first; if that is mapped to the container correctly, nginx should take over..

Finally, your NPM proxy host configs need to point to the IP of OMV instead of 127.0.0.1 for the services running on OMV..
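For what it's worth, an NPM proxy host entry ultimately boils down to an nginx server block along these lines (an illustrative sketch, not the exact file NPM generates; the IP and port are placeholders you'd swap for your OMV host's LAN IP and the service's port):

Code:
server {
    listen 80;
    server_name jellyfin.tail.xxx;

    location / {
        # "Forward Hostname / IP" and "Forward Port" from the NPM form;
        # this must be the OMV host's LAN IP, not 127.0.0.1
        proxy_pass http://192.168.1.10:8096;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}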

I mentioned it in the other thread, and hindsight is great, but OMV was a nightmare for me for stability, mainly as I easily broke it too.. Unraid has been exemplary since I've had it.. although it costs, running VMs and containers and configuring stuff is child's play, and it's a great stepping stone to learning more about containers and Linux, with so many step-by-step video guides to get you going..
 
So all of your Cloudflare CNAMEs are set to point to your NPM?

You have done a full install onto a VM though, and not run it via Docker? I thought NPM was "supposed" to be run via Docker?

As an example, try pointing at something else, e.g. create something like firewall.mydomain.com and proxy your port 80 through to your firewall's web GUI, and see if it works. I *think* it probably will, in which case that would push me towards blaming the 127.0.0.1 IP address, as you're asking the proxy to proxy traffic to itself. It wouldn't explain why NPM works, but that's what's different from my setup.

If NPM is in Docker and you were pointing the proxy hosts at 127.0.0.1, then that is an incorrect config, as you are pointing clients at your NPM Docker container as localhost (remember, it's a container), not the actual host IP it resides on.
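As an aside, if a containerised NPM did need to reach a service running on its own Docker host, one option (Docker 20.10 or later; a sketch, untested against this setup) is the special host-gateway mapping rather than 127.0.0.1:

Code:
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    extra_hosts:
      # makes host.docker.internal resolve to the Docker host's IP
      # inside the container, so proxy hosts can target
      # host.docker.internal:<port> instead of 127.0.0.1
      - 'host.docker.internal:host-gateway'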
 
He installed it as an app in Home Assistant (it's in the add-on store in Home Assistant), so presumably it's running directly on the host network, which should make ports 80/81/443 map correctly, and yes, I also pointed out he'd need to update his proxy hosts to not be localhost..
 
To my knowledge, add-ons in HA are basically Docker containers. The proxy project is on hold right now because the server is undergoing maintenance after a disaster, and I am too scared to do anything before I finish backing up. You can find the details of my woes in the other thread.

I will come back to the proxy after the server is stable again.

The record and host proxy settings in the HA add-on are identical to the Docker settings I posted. But I still have the same issue where only the DNS to the NPM UI is correct, and the others go to the OMV UI on port 80. The .ha record is the VM's IP; .rev and .tail are the IPs of the other servers (there are two servers). The HA VM is on the .tail server.
 
The record and host proxy settings are identical in the HA add on to the Docker settings I posted. But I still have the same issue where only the DNS to the NPM UI is correct and the others goes to the OMV UI on port 80. The .ha is record is the VM's IP. .rev and .tail are the IP of the other servers. (there are two servers) HA VM is on the .tail server.

This is most likely the bit breaking it then. Even though your NPM is hosted on your HA box, localhost to NPM (127.0.0.1) is the NPM container, not the HA box as a wider object. The container doesn't share the same IP as your host machine.
 
He installed it as an app in Home Assistant (its in the addon store in Home Assistant) so presumably it's running directly on the host network, so that should make the port 80/81/443 map correctly, and yes, I also pointed out he'd need to update his proxy hosts to not be localhost..
Docker on a host doesn't share the IP. It gets ports forwarded to it on a private interface which is NAT'd outbound, unless specifically told to use host as its network configuration. This is why NPM works and the others don't. Simply, the others don't live in the same container that NPM does.
 
My bad, I assumed Home Assistant installed add-ons as native apps, so that 'host' literally meant the host machine's network, not Docker's host network type.. I didn't realise it used containers..

However, ignoring the host aspect, I know why the proxy hosts wouldn't work; this is why I said:

Finally, your NPM proxy host configs need to point to the IP of OMV instead of 127.0.0.1 for the services running on OMV..

For the very reasons you have mentioned. So I think I get that (I'm no expert, but have built up some experience lately).

But if you are an expert on Docker networking: on Unraid, I have some containers set to use the Docker 'host' network type, and those are accessible from outside the server using the server's IP and the port the service is listening on, as if the container is sharing the host's network stack just like a native app would..
This is what Docker's documentation alludes to, does it not? https://docs.docker.com/engine/network/drivers/host/
"If you use the host network mode for a container, that container's network stack isn't isolated from the Docker host (the container shares the host's networking namespace), and the container doesn't get its own IP-address allocated. For instance, if you run a container which binds to port 80 and you use host networking, the container's application is available on port 80 on the host's IP address."
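In compose terms that host mode is just the following (a sketch; note that with host networking any ports: section is ignored, since nothing needs mapping):

Code:
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: unless-stopped
    network_mode: host   # shares the host's network namespace; no port mappings needed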

I'm still learning, so always happy to be educated..

Anyway, glad an expert finally came along, I'll let you take over..
 

I am no expert and I am piecing bits of knowledge together.

Unraid is slightly different in that it has the option for host, so yes, in this instance the Docker container DOES share the same IP as the host. This is not the typical behaviour of a standard Docker installation though.
 
Don't worry, then; the first issue he resolved anyway by moving NPM to Home Assistant, and on the second issue of proxy host endpoints we both agree.

I'm always learning. I'm fairly OK on the mainstream stuff, but have had to learn more about Docker networking lately due to using it more extensively.. I found Network Chuck's very caffeine-fuelled primer quite useful


 
Yeah, Chuck is pretty good for getting your first thing or two up and running; the rest is more experimentation: make it work, break it, figure out how to fix it, why that fixed it, why that broke it, and work from there.

I moved all my home server stuff onto Docker containers, initially Unraid Dockers via Community Apps, and more recently have gone to Proxmox with LXC. From there I've learned enough to spin up some decent usable services in my professional life.

As I said, no expert but if stuff breaks, I can probably dig myself out of the hole.
 