Help: Media content "should be served over HTTPS"

I’m getting errors that seem to be preventing the dynamic background media image on my dashboard from displaying. It has been working for a long time, so I’m not sure what has changed. Could it have to do with issues with Hubitat’s OAuth servers? Or something else?

The image shows when I enter the URL directly into my browser. But it doesn’t show on the preview when entering the media item:

Ideas?

Try clicking the Learn More link and following the steps there.

In the first line of your browser logs screenshot, it looks like the browser is trying to automatically upgrade the media request from HTTP to HTTPS and failing. I would double-check the Mixed Content setting for the site in Chrome (per the linked article), as you would need to explicitly allow that.

Does it make a difference that this is happening in both Chrome and Safari? And I have enabled insecure content in Chrome already…I don’t know if there are similar steps that I would need to take in Safari though.

Most modern browsers have started limiting Mixed Content, so yes, it’s likely an issue with Safari too. I don’t know offhand if there’s a setting to allow Mixed Content in the latest Safari versions, as I’m out of the office at the moment (on vacation).

The “correct” approach to resolving this, in the eyes of the browser developers, is for all resources to be served over SSL. And while I think that’s laudable for most company websites and web apps, it doesn’t take into consideration this use case, where a user wants to display their own content from a local source.

On that same topic, if the content comes from a Hubitat app (or driver via the API), you could use the cloud endpoints, which are served with their own valid SSL cert.

There are other approaches, like proxying the requests through a server with a valid SSL certificate (I do this on my network with Caddy 2), and I’m happy to share more details about alternative approaches if you have questions.

Looks like I may need to try this alternative approach. I can’t serve via the cloud endpoint because HE limits the cloud message size.

Ok, sorry to keep hounding you about this issue. I am finally trying to get this to work. The main issue is that my dashboard images aren’t auto-refreshing. In the JavaScript console I see the following error, caused by an attempt to call the cloud Hubitat Maker API endpoint (the media item in SharpTools points to that endpoint as the URL of the media item):

[Screenshot: JavaScript console error]

I don’t see anywhere to set headers on the hubitat maker API side?

Is this cross-site issue related to the mixed content issue? If so, would the Caddy2 solution you mention above fix it?

EDIT: I just bought a domain. Never done that before. I’ll have to look for a guide on what to do with it now… GoDaddy.com shows my domain is secured with an SSL certificate, but I’m not sure how to use that certificate for serving my own content locally (without having to buy hosting space).

Yeah, I’m tired of this mixed content issue. I have a domain now and am all ears for more details if you’re able to share!

I thought you were calling the API of your custom Hubitat App directly (eg. OAuth enabled on the app so you could use a token and call it directly)?

I’m having a bit of trouble understanding the context. Since the request should be done from within an <img> tag rather than via JavaScript or something else, I’m surprised that error message would come up. It’s a bit hard to tell without more context though.

I wouldn’t have thought it would matter in this particular case, but you could also check the CORS setting in the Maker API to allow sharptools.io, as perhaps that setting is used internally by the Maker API for other things as well.

Caddy is a feature-rich reverse proxy, so it’s very likely you could find a workaround using it.
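For example, here’s a hedged sketch of a Caddyfile that would put the Maker API behind HTTPS and add the CORS header at the same time. The domain and hub IP are placeholders I made up, and I haven’t tested this exact snippet against the Maker API:

```
# Hypothetical: hub.example.com resolves to the machine running Caddy,
# and 192.168.1.50 is the Hubitat hub's LAN address.
hub.example.com {
  # Allow the dashboard origin to read cross-origin responses
  header Access-Control-Allow-Origin "https://sharptools.io"

  # Forward everything (including /apps/api/...) to the hub over plain HTTP
  reverse_proxy http://192.168.1.50
}
```

With something like this in place, the media item URL would point at https://hub.example.com/... instead of the hub’s HTTP address, which sidesteps the Mixed Content issue entirely.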

I run Caddy2 as a Docker container, but they have a bunch of other installation methods available depending on your preferred approach:
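For what it’s worth, here’s a minimal docker-compose sketch along the lines of what I run, using the official caddy image. The service name, ports, and volume paths are just examples, though /etc/caddy/Caddyfile and /data are the image’s documented defaults:

```yaml
# Hypothetical docker-compose sketch for running Caddy 2.
# /data holds the automatically obtained certificates, so it should persist.
services:
  caddy:
    image: caddy:2
    restart: unless-stopped
    ports:
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - caddy_data:/data
      - caddy_config:/config

volumes:
  caddy_data:
  caddy_config:
```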

I originally used my own SSL certificate with Caddy, but Caddy can automatically grab a valid SSL certificate for you and automatically renew it.

Here’s an example of my redacted Caddyfile for your reference. I added a bunch of comments throughout explaining things:

# I set the global email address property for the ACME challenge to be used
# for getting SSL certificates
{
  email "your-email@example.com"
}

# I have multiple domains and I wanted different configurations for each domain.
example1.com:443 {
  # I don't expose port 80 publicly, so I disable that and Caddy automatically 
  # tries the TLS challenge (port 443) for getting the SSL certificate for the domain
  tls {
    issuer acme {
      disable_http_challenge
    }
    issuer zerossl {
      disable_http_challenge
    }
  }
  # Note that you would need to expose port 443 publicly for the TLS challenge to work.
  # And would need to point the domain DNS to the public IP Caddy is exposed on.
  #
  # You *can* use other `tls` options like providing your own valid SSL certificate
  # tls /path/to/cert.crt /path/to/cert.key
  # 
  # Or use the `tls internal` option to use self-signed certificates, but then you would
  # need to add the certificate or CA to *each* client's trust store.
  # tls internal
  #
  # I haven't tried it with Caddy (though I have used it elsewhere), but you can also use
  # a DNS challenge to get the SSL certificate, but I suspect the various DNS providers
  # aren't built into the standard Caddy binary, so you might need to build your own Caddy
  # Docker image with the required DNS provider plugin to use that approach
  # 
  # Similar to the DNS challenge method above, you could use a separate ACME compatible client
  # to perform the DNS challenge from outside Caddy and then provide the certificate to a known
  # location on the server for Caddy to use per above. 

  route { 
    header -server # remove the 'caddy' server header (no need to identify what server we are using publicly)

    # For this particular server, I have multiple different internal services exposed at different
    # paths, so I reverse proxy each path of my choice to the internal services - note that the 
    # internal service must support having a custom base path / path prefix for this to work
    reverse_proxy /service1* http://internal-ip-1:port1
    reverse_proxy /service2* http://internal-ip-2:port2
    reverse_proxy /service3* http://internal-ip-3:port3

    # I just have this in as a simple testing tool to check if the server is up
    # If I hit the /ping path, it will respond with "pong"
    respond /ping "pong"

    # You can also serve static files from a directory on the server - for example, I used this
    # to serve a custom Minecraft map generated for my son's Minecraft world so I could browse it from my phone / tablet
    file_server /webmap/* {
        root /www/
    }

    # Not explicitly necessary, but I like to have a catch-all for any paths that don't match the above
    # and return a 404 Not Found
    respond * "Not Found" 404 {
      close
    }
  }
}

# Another example server, but in this case, I have a single internal service that I want to expose
# while still having my special /ping path for testing
example2.com:443 {
  tls {
    issuer acme {
      disable_http_challenge
    }
    issuer zerossl {
      disable_http_challenge
    }
  }

  # Note the use of a catch-all, so it forwards all paths onto the internal service
  reverse_proxy /* http://internal-service:port

  # But my explicit /ping path still works for testing, since Caddy orders `respond`
  # before `reverse_proxy` in its default directive order
  respond /ping "pong"
}

Oh, right, I am :slight_smile:
My dashboard background image URL is the following (redacted version):

http://192.168.1.XXX.nip.io/apps/api/917/DynamicImageSwitcher?access_token=XXXXXXXXXXXXX

I have narrowed it down to seemingly being an issue only on my Mac mini (the one that matters). On my other computers, the dashboard background loads just fine without errors. But on my Mac mini:

(1) In the Safari JavaScript console, I get a “The certificate for this server is invalid” error, but the “server” is the same URL with https instead of http. I get the same error when I put the https URL directly into the Safari browser, but the http URL works fine.

(2) In the Chrome JavaScript console, I get an “ERR_ADDRESS_UNREACHABLE” error no matter which URL (https or http) I put in there. In fact, it gives me the same error even when I just enter my Hubitat hub’s base URL…

Any suggestions on how to handle this specific issue?

EDIT: well, I’m seeing some other issues on the Mac mini too - let me see if there’s something else sinister going on here computer wide first. Hang tight.

Ok, I am still just getting this problem on my Mac mini, but I think it must be because it has the latest browser versions installed. Even though I have enabled Mixed Content in Chrome, it still blocks the background image since it’s coming from http, not https. I’m not sure why that Insecure Content setting is not working in Chrome. And apparently Safari doesn’t allow it either.

Interestingly enough, when I try the nip.io workaround, Chrome gives the ERR_ADDRESS_UNREACHABLE error. And Safari says the certificate is invalid.

Any ideas?

Two possible workarounds besides setting up Caddy would be:
(1) have my app endpoint send an HTTP redirect to the desired image URL (though I would have to search the Hubitat forums to see if this is possible, and I’m actually not sure it would work with SharpTools, depending on how media items are rendered)
(2) have the variables-in-media-items feature request implemented :slight_smile:

That browser error would typically indicate a networking issue. IIRC, another user ran into something similar and it ended up being a DNS issue for them (they were using Pi-hole or AdGuard or something that needed an exception set).

If I remember correctly, I had some intermittent issues with Chrome when I was first setting up my split-brain DNS at my house. There’s a default setting where Chrome tries to use its own “DNS over HTTPS” resolver instead of your standard DNS resolution… which can cause it to resolve hostnames with Google’s DNS servers instead of your local network’s DNS.

If you go to chrome://settings/security (Privacy and Security > Security) is the Use Secure DNS setting enabled under the Advanced section?

I had issues where even after I would disable it, it would end up getting re-enabled on its own every once in a while and I would have to toggle it back off.

When Insecure Content is set to Allow, that should bypass the Mixed Content restriction and allow HTTP content within an HTTPS page.

Per the linked community post above, Chrome has a bug in newer versions where using IP addresses for the URL does not work when Insecure Content is set to Allow… which is why using a wildcard DNS provider like nip.io or st-ip.net is suggested as a workaround.

Of course, the fundamental networking would need to be working first – in other words, you should be able to access the image directly within your browser using one of the wildcard DNS options before trying it as an embedded background / media item in a dashboard.
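As a quick sanity check of the wildcard DNS piece itself: nip.io just embeds the IP directly in the hostname, so you can build the URL with simple string substitution. The hub IP and token below are placeholders, and the app path is taken from your earlier example:

```shell
# nip.io-style hostnames embed the target IP directly in the name.
# HUB_IP is a placeholder for your hub's LAN address.
HUB_IP="192.168.1.50"
echo "http://${HUB_IP}.nip.io/apps/api/917/DynamicImageSwitcher?access_token=TOKEN"

# Any public resolver should answer <ip>.nip.io with <ip> itself, e.g.:
#   dig +short 192.168.1.50.nip.io
```

If the `dig` lookup resolves but the browser still reports ERR_ADDRESS_UNREACHABLE, that points back at something on the machine (DNS over HTTPS, a permission, etc.) rather than the workaround itself.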

It sounds like Safari is trying to upgrade the protocol from HTTP to HTTPS but since the “server” (your app endpoint) doesn’t have a valid certificate it won’t render. I suspect that the cert could be added as trusted to your Mac and that might bypass the issue.

I’m not able to reproduce that issue in Safari on Mac with other servers that are HTTP-only, like my Blue Iris instance. I wonder if Safari is trying HTTPS behind the scenes with your setup: since the Hubitat hub can respond on HTTPS using its default self-signed (invalid) certificate, Safari ends up upgrading the connection and ultimately failing. With my Blue Iris instance, there is no response on HTTPS, so Safari doesn’t upgrade and keeps using the existing HTTP connection.
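If you want to check that theory, you could probe the hub directly from the Mac’s terminal. This is just a rough sketch with a placeholder LAN IP; it only checks whether anything answers on each protocol:

```shell
# Does the hub respond on plain HTTP? (placeholder LAN IP)
curl -sI -m 5 http://192.168.1.50/ | head -n 1

# Does it also respond on HTTPS with its self-signed cert?
# -k skips certificate validation so the invalid cert doesn't abort the check.
curl -skI -m 5 https://192.168.1.50/ | head -n 1
```

If the second command gets a response, that would support the idea that Safari has an HTTPS endpoint available to upgrade to, just with a certificate it won’t trust.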

Ok, I think I’m back to normal. It turns out there was a new permission introduced with the latest macOS that requires you to give Chrome permission to access local network devices. Same issue as mentioned here. Back to serving mixed content in Chrome, so for now at least I’ll save myself the hassle of Caddy.
