reverseproxy: Set Host to {upstream_hostport} automatically if TLS #7454
It probably is a good idea as part of a "well-behaved proxy" for the SNI and Host header to be in agreement by default. For context for other visitors: we do know of at least one situation where it's too easy to misconfigure in a way that causes a security issue, without this patch. It's not a security fix in and of itself, but the goal is to prevent misconfigurations that are too easy, and which do cause security issues.
Thanks, Francis. Appreciate the discourse on this and landing on a good solution!
Why for HTTPS upstreams only? It is also useful for plain HTTP upstreams, when Caddy is used to provide HTTPS.
No, because the vast majority of plain-HTTP upstreams expect to get the original `Host` header.
Due to caddyserver/caddy#7454 we need to set trusted origins for Portainer, and since TrueNAS does not offer a similar option, we instead override the header to what it was prior to v2.11.1.
For us this was a breaking change -- upstream over HTTPS expecting the original Host header, rightly or wrongly. Ideally that would have been a thing to do in a major version update rather than sneaking it into a minor release.
Same for me. I'm still in the process of wrapping my head around it, and it hit me by surprise.
Same here, apparently. Would `header_up Host X-Forwarded-Host` fix this?
I've used `header_up Host {host}` to restore the previous behavior.
We have no such thing as "major" releases; we don't follow semver. But you can consider 2.11 as effectively "major", where "2" is the "product version". As I wrote above, use `header_up Host {host}` to restore the previous behavior.
FWIW, this also broke out of the blue for me, for at least 3 different vendors at last count (Dell iDRAC, TrueNAS, Unifi). The fix was re-adding `header_up Host {host}`.
Wow. How are your backends possibly functioning AND preserving security while using the client-facing hostname with a TLS certificate? Indeed, though, this was a security-related fix, since insecure configurations were too easy and non-obvious to make in some situations without this change. I would be interested in more details, to know whether the broken use cases were legitimate or whether they were already insecure/misconfigured in the first place. I would not want this to break any legitimate configurations.
For us, we've been using Caddy as a Layer 7 front-end load balancer on Docker Swarms. Some of the backend services within the swarm run software which requires its incoming connections to be over HTTPS, but we didn't want the extra burden of managing the PKI for these, effectively, internal services on a closed network. So we use self-signed certificates over the internal Docker network between Caddy and the services (and use `tls_insecure_skip_verify` to allow it to work). The application behind requires the public hostname to be passed through (and doesn't support the use of `X-Forwarded-Host`).

We don't consider this insecure per se, as the network is entirely within the Docker environment, so the risk of MITM is next to nothing.

This looked broadly like:
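A sketch of the kind of Caddyfile described above (hostnames, service names, and ports are placeholders; under v2.11.1+ the `header_up Host {host}` line restores the previous default of passing the public hostname through):

```Caddyfile
app.example.com {
	reverse_proxy https://internal-service:8443 {
		# Pass the client-facing hostname to the backend (the pre-2.11.1 default)
		header_up Host {host}

		transport http {
			# Accept the self-signed certificate on the internal Docker network
			tls_insecure_skip_verify
		}
	}
}
```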
My use case is broadly the same as @fooflington's: the inner TLS uses either vendor-generated or otherwise "nonsense" PKI, because I don't want to manage it (mostly because some of these vendors don't have an easy programmatic way to interact with certs), and that is the entire point of putting Caddy in front of them: to be certain that I have a sane, unified TLS front to the world. I do pin their certificates, though, to manage the MITM risk.

My config stanzas all tend to look like this (without the `Host` override, these devices break):
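A hedged sketch of such a stanza, with the pinning done by trusting the vendor's self-signed certificate directly (the hostname, upstream address, and certificate path are invented placeholders):

```Caddyfile
idrac.example.com {
	reverse_proxy https://10.0.0.5 {
		# Send the original client-facing hostname upstream
		header_up Host {host}

		transport http {
			# Trust only the vendor's self-signed certificate (pinning)
			tls_trust_pool file /etc/caddy/pins/idrac.pem
		}
	}
}
```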
Same use cases as above, this broke our config. |
As Francis said, this is a security-adjacent fix. While not directly a vulnerability, the previous default made it far too easy to create security bypasses, and this was observed in at least one scenario. When proxy servers negotiate TLS using one hostname but then send HTTP requests for another, this can often lead to unexpected access to resources gated by the TLS connection.
Since you're already using certificates, you can use valid certificates, no? Add the CA to the trust store if you have to, like @dewet22 did; it's one line of config, or you can do it on the system.

I think the certificates thing is a red herring. The real issue is that the backend applications require a hostname that they are not being exposed on, AND they do not support de-facto standard proxy headers to get the true hostname. Upstream bug reports/requests should probably be filed with those applications to support proxying.

As mentioned by @fooflington, the fix is to add this line to your `reverse_proxy` config: `header_up Host {host}` (I'm tweaking it a bit to use the `{host}` placeholder).

We feel it is better for a few applications to need this line to function (where the breakage is obvious) than for possibly many applications to have to do the inverse to be secure (where the breakage is NOT obvious). However, note that setting this line will cause Caddy to connect upstream to a server in a way where the TLS ServerName and the HTTP request's `Host` header differ.

Let me know if there are any further issues that we missed.
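Concretely, opting out of the new default looks like this (a minimal sketch; the site and upstream addresses are placeholders):

```Caddyfile
example.com {
	reverse_proxy https://backend.internal:8443 {
		# Override the new v2.11.1 default and send the client's original
		# Host upstream. Note the TLS ServerName (backend.internal) and the
		# Host header will then differ.
		header_up Host {host}
	}
}
```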
I don't think anyone is saying that this is a bad change per se. What people are saying is that it landed in what is expected to be a minor release, while it is a breaking change. I think this is an excellent example of why semver should be adopted. People would be a lot more careful when upgrading from Caddy 2.x to 3.x, and such an upgrade would likely not be automatic; with transparent automated upgrades of minor versions, this caught many people off guard and left them scratching their heads about what suddenly broke in their setup.
Thanks for clarifying. Unfortunately, semver is not a practical option, since each bump would break every Caddy plugin instantly. They would all have to update before you could use them, and you could never use plugins built for Caddy v2 with Caddy v3. We wrote about why we don't use semver somewhere (I can't remember where; I'd have to find it). Ultimately it boils down to the fact that there are so many dimensions of "user-facing" or "exported" or "public" in a complex web server project such as this one that the semantics can't be distilled down to a single number.

One thing I've even considered is making up our own versioning scheme, where we try to enumerate all those dimensions (JSON config, CLI, RESTful admin API, UNIX APIs such as signals, layer 4 behaviors such as TCP/UDP bindings, layer 7 behaviors for HTTP, ACME and TLS-automation related behaviors, etc.) into a single massive version string.

We do try to make a meaningful distinction between patch and minor releases: 2.11 is going to be a "major" release compared to 2.11.1, which is mostly just fixes (maaaaybe little new features, or only experimental ones, if any).

If there are any other suggestions for improvement, I'm happy to hear them. Sorry for the breakage.
For almost the entire life of Caddy v2, we've had a recommendation to set `header_up Host {upstream_hostport}` when configuring `reverse_proxy` for HTTPS. I think it's time that we make this the default.

It's sensible to make the `Host` header match the upstream address when we know the server is configured with TLS. If the user needs something different, that's fine: their own `header_up` rules are applied afterwards and take priority, e.g. `header_up Host {host}` to reset it to the default host. In our experience, though, that is rarely the correct choice.

This also fixes some major footguns when you forget to set `header_up Host {upstream_hostport}`, which can cause tricky misbehaviour depending on the upstream's handling of the `Host` header.

Assistance Disclosure
I used Copilot to iterate on the changes, but finished and tested it by hand.