
reverseproxy: Set Host to {upstream_hostport} automatically if TLS #7454

Merged
mholt merged 1 commit into master from better-proxy-tls-host-default on Feb 9, 2026

Conversation

@francislavoie (Member)

For almost the entire life of Caddy v2, we've recommended setting header_up Host {upstream_hostport} when configuring reverse_proxy for HTTPS upstreams.

I think it's time we make this the default. It's sensible for the Host header to match the upstream address when we know the server is configured with TLS. If the user needs something different, that's fine: their own header_up rules are applied afterwards and take priority, e.g. header_up Host {host} to restore the client's original host. In our experience, though, that is rarely the correct configuration.

This also fixes some major footguns: forgetting to set header_up Host {upstream_hostport} can cause tricky misbehaviour, depending on how the upstream handles the Host header.
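To make the change concrete, here's a minimal sketch of the new default (the upstream address is a placeholder):

```caddyfile
example.com {
	# With this change, an HTTPS upstream implicitly behaves as if
	#     header_up Host {upstream_hostport}
	# were set, so this line alone now sends
	# "Host: backend.internal:8443" upstream:
	reverse_proxy https://backend.internal:8443
}
```

Previously, the header_up Host {upstream_hostport} line had to be written out explicitly inside the reverse_proxy block to get this behaviour.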

Assistance Disclosure

I used Copilot to iterate on the changes, but finished and tested it by hand.

@francislavoie francislavoie added this to the v2.11.0 milestone Jan 30, 2026
@francislavoie francislavoie added the feature ⚙️ New feature or request label Jan 30, 2026
@mholt (Member) commented Feb 3, 2026

It probably is a good idea as part of a "well-behaved proxy" for the SNI and Host header to be in agreement by default.

For context for other visitors: we do know of at least one situation where it's too easy to misconfigure in a way that causes a security issue, without this patch. It's not a security fix in and of itself, but the goal is to prevent misconfigurations that are too easy, and which do cause security issues.

@mholt mholt merged commit 2ae0f7a into master Feb 9, 2026
29 checks passed
@mholt mholt deleted the better-proxy-tls-host-default branch February 9, 2026 20:06
@mholt (Member) commented Feb 9, 2026

Thanks, Francis. Appreciate the discourse on this and landing on a good solution!

This was referenced Feb 20, 2026
@francislavoie francislavoie mentioned this pull request Feb 20, 2026
francislavoie added a commit to caddyserver/website that referenced this pull request Feb 22, 2026
francislavoie added a commit to caddyserver/website that referenced this pull request Feb 23, 2026
@timkgh commented Feb 23, 2026

Why for HTTPS upstreams only? It is also useful for plain HTTP upstreams, when Caddy is used to provide HTTPS.

@francislavoie (Member, Author)

No, because the vast majority of upstreams expect to get the original Host, not something dumb like localhost:8080 as the Host.

ElioDiNino added a commit to ElioDiNino/Homelab that referenced this pull request Feb 24, 2026
Due to caddyserver/caddy#7454 we need to set
trusted origins for Portainer and since TrueNAS does not offer a
similar option, we instead override the header to what it was prior to
v2.11.1.
@fooflington

For us this was a breaking change -- upstream over https expecting the original host header, rightly or wrongly.

Ideally that would have been a thing to do in a major version update rather than sneaking it into a minor release.

@BWC-Michael

> For us this was a breaking change -- upstream over https expecting the original host header, rightly or wrongly.
>
> Ideally that would have been a thing to do in a major version update rather than sneaking it into a minor release.

Same for me. I'm still trying to wrap my head around it, and it hit me by surprise.

@wernermaes

> For us this was a breaking change -- upstream over https expecting the original host header, rightly or wrongly.
>
> Ideally that would have been a thing to do in a major version update rather than sneaking it into a minor release.

Same here apparently.

Would header_up Host X-Forwarded-Host fix this?

@fooflington

I've used header_up Host {http.request.host} with success

@francislavoie (Member, Author) commented Feb 24, 2026

We have no such thing as "major" releases; we don't follow semver. But you can consider 2.11 as effectively "major", where "2" is the "product version" (our versioning is more like product.major.minor). We did this because it closes a security issue and simplifies the most common config pattern. Your case is the outlier.

As I wrote above, use header_up Host {host} to recover the prior behaviour. It's also explained in the docs right here https://caddyserver.com/docs/caddyfile/directives/reverse_proxy#https (also linked to in the PR description above).
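For anyone landing here, a minimal sketch of that opt-out (hostnames are placeholders):

```caddyfile
example.com {
	reverse_proxy https://backend.internal:8443 {
		# Restore the pre-2.11 behaviour: forward the client's
		# original Host header instead of the upstream address.
		header_up Host {host}
	}
}
```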

@dewet22 commented Feb 24, 2026

FWIW, this also broke out of the blue for me, for at least 3 different vendors at last count (Dell iDRAC, TrueNAS, Unifi). The fix was re-adding header_up Host {host} for all cases.

@mholt (Member) commented Feb 24, 2026

Wow. How are your backends possibly functioning AND preserving security using the client-facing hostname with a TLS certificate??

Indeed, though, this was a security-related fix since insecure configurations were too easy and non-obvious to make in some situations without this change.

I would be interested in more details, to know whether the broken use cases were legitimate, or whether they were already insecure/misconfigured in the first place. I would not want this change to break any legitimate configurations.

@fooflington commented Feb 25, 2026

> I would be interested in more details ...

For us, we've been using Caddy as a Layer 7 front-end load balancer on Docker Swarms.

Some of the backend services within the swarm are running a piece of software which requires its incoming connections to be over HTTPS but we didn't want to have the extra burden of managing the PKI for these, effectively, internal services on a closed network -- so we use self-signed certificates over the internal Docker network between Caddy and the services (and use tls_insecure_skip_verify to allow it to work).

The application behind requires the public hostname to be passed through (and doesn't support the use of X-Forwarded-Host).

We don't consider this insecure per se, as the network is entirely within the Docker environment, so the risk of MITM is next to nothing.

This looked broadly like:

example.com {
    reverse_proxy https://example_backend {
        transport http {
            tls_insecure_skip_verify
        }
    }
}

@dewet22 commented Feb 25, 2026

My use-case is broadly the same as @fooflington's – the inner TLS uses either vendor-generated or otherwise "nonsense" PKI because I don't want to manage it (mostly because some of these vendors don't have an easy programmatic way to interact with certs) and is the entire point of putting Caddy in front of them: to be certain that I have a sane, unified TLS front to the world. I do pin their certificates though, to manage the MITM risk. My config stanzas all tend to look like this:

*.mydomain.uk {
	...
	@truenas host truenas.mydomain.uk
	handle @truenas {
		reverse_proxy https://truenas.lab:8443 {
			transport http {
				tls_server_name localhost
				tls_trust_pool file /config/certs/truenas.pem
			}
		}
	}
	...
}

Without the header_up Host {host} fix I saw behaviours like:

  • websocket requests causing 500s in Unifi's nginx (that was particularly hard to debug because the rest of the UI loads fine)
  • TrueNAS giving redirects using the internal (.lab) hostname instead of the external hostname, breaking the sign-in flow (when not on the same local network)
  • Dell iDRAC just getting completely confused and 500ing everything (it seems very sensitive to the Host header being set just right, matching both its self-signed PKI and its configuration settings)

@mboisson commented Feb 25, 2026

Same use cases as above, this broke our config.

@mholt (Member) commented Feb 26, 2026

As Francis said, this is a security-adjacent fix. While not directly a vulnerability, the previous default made it far too easy to have security bypasses, and was observed in at least one scenario. When proxy servers connect TLS using one hostname, then send HTTP requests for another, this can often lead to unexpected access to resources gated by the TLS connection.

tls_insecure_skip_verify should be irrelevant to this issue unless the SAN on the self-signed certs isn't correct (and why would it be incorrect, since you control the PKI)? tls_insecure_skip_verify is literally a security "off" switch. I would also emphasize, for your sake, that "next-to-nothing" is not nothing... TLS is just security theater when tls_insecure_skip_verify is on.

Since you're already using certificates, you can use valid certificates, no? Add the CA to the trust store if you have to, like @dewet22 did, it's 1 line of config or you can do it on the system.

I think the certificates thing is a red herring.

The real issue is that the backend applications are requiring a hostname that they are not being exposed on, AND they do not support de-facto standard proxy headers to get the true hostname. Probably upstream bug reports/requests to those applications should be filed to support proxying.


As mentioned by @fooflington, the fix is to use this line in your reverse_proxy block:

header_up Host {hostport}

(I'm tweaking it a bit to use {hostport} since that carries through the entire Host header rather than just the hostname part.)

We feel it is better for a few applications to need this line in order to function (where it's immediately obvious whether they work or not) than for possibly many applications to need the inverse in order to be secure (which is NOT obvious).

However, note that setting this line will cause Caddy to connect upstream to a server in a way where the TLS ServerName and the HTTP request Host header hostnames differ.
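Put together, a sketch of such a block (the upstream address is a placeholder):

```caddyfile
example.com {
	reverse_proxy https://backend.internal:8443 {
		# Forward the client's original host and port; note the TLS
		# ServerName (backend.internal) and the Host header will differ.
		header_up Host {hostport}
	}
}
```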

Let me know if there are any further issues we missed.

@mboisson commented Feb 26, 2026

I don't think anyone is saying this is a bad change per se. What people are saying is that it landed in what is expected to be a minor release, while it is a breaking change. I think this is an excellent example of why semver should be adopted. People would be a lot more careful when upgrading from Caddy 2.x to 3.x; the upgrade would likely not be automatic, and it wouldn't have caught so many people off guard with transparent automated upgrades of minor versions, leaving them scratching their heads about what suddenly broke in their setup.

@mholt (Member) commented Feb 26, 2026

Thanks for clarifying. Unfortunately, semver is not a practical option, since each major bump would instantly break every Caddy plugin; they would all have to update before you could use them, and you could never use plugins built for Caddy v2 with Caddy v3.

We wrote about this somewhere (I can't remember, I'd have to find it) why we don't use semver. But ultimately it boils down to how there's so many dimensions of "user-facing" or "exported" or "public" when it comes to a complex web server project such as this one, that the semantics can't be distilled down to a single number.

One thing I've even considered doing is making up our own versioning scheme, where we try to enumerate all those dimensions (JSON config, CLI, RESTful admin API, UNIX APIs such as signals, layer 4 behaviors such as TCP/UDP bindings, layer 7 behaviors for HTTP, ACME and TLS-automation related behaviors, etc.) into a single massive version string, like 31.217.49.2.15.66.12 -- but not only is this ridiculous and tedious to actually assess, it doesn't work with the Go modules versioning scheme, which is proper semver. (We'd also probably want a leading number that just increments for every version bump so the version strings can be sorted. Then there's pre-releases like -beta, and it just gets so complicated.)

We do try and make a meaningful distinction between patch and minor releases, like 2.11 is going to be a "major" release compared to 2.11.1, which is mostly just fixes (maaaaybe little new features, or only experimental ones, if any).

If there are any other suggestions for improvement I'm happy to hear them. Sorry for the breakage.
