• 0 Posts
  • 76 Comments
Joined 2 years ago
Cake day: September 25th, 2023


  • HA doesn’t need either of these, but if you want an SSL certificate (to run over HTTPS instead of plain HTTP) it is bound to a domain name, which must be public unless you want to get into the territory of adding your own custom certificate authority to each of your devices. That name is resolved by a public DNS server. You asked how to use it when the internet is down: in that case a public DNS server is not reachable, so you need your own resolver on the local network.

    The reverse proxy is useful when you have a bunch of web services and want to protect all of them with HTTPS. Instead of deploying the certificate to each of them, you add the HTTPS layer at the reverse proxy, which queries the servers behind it in plain HTTP. The reverse proxy also makes handling subdomains easier: instead of distinguishing the different services by port number, you can give each one a name like https://ha.my.domain/ and https://feedreader.my.domain/
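
    As a sketch, a Caddy config for that layout could look like this (the domain names and backend addresses are placeholders; Caddy obtains and renews the certificates on its own, and port 8123 is Home Assistant’s default):

```
ha.my.domain {
    # plain HTTP to the backend; HTTPS terminates here at the proxy
    reverse_proxy 192.168.1.10:8123
}

feedreader.my.domain {
    reverse_proxy 192.168.1.11:8080
}
```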

    If you just have Home Assistant and don’t care about HTTPS, the easiest option is local resolution: modern OSes advertise the device’s name on the network and it can be resolved under the .local domain. But if you configured HTTPS to use https://name.duckdns.org/ you’ll see an error when you try https://name.local/, because your browser sees a mismatch between the name in the certificate and the name you are connecting to. You can always ignore this error and move on, but that mostly defeats the point of HTTPS.


  • It makes sense to handle certificate renewal wherever your reverse proxy is, simply because certificates are easier to install that way. If Home Assistant is all you have, let it handle renewal itself. The day you start hosting more stuff and put it all behind a single reverse proxy (Caddy and nginx are the most popular options), you can move certificate handling to the machine running the proxy.

    To make your Home Assistant reachable even when the internet is down, you just need a local DNS server that resolves yourdomain.duckdns.org to your local IP. This is most easily configured on the router, but many stock firmwares don’t allow it. Another option is to install a DNS server somewhere (Pi-hole is the most famous; I personally use blocky) and configure your router to advertise that DNS server instead of its own.
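
    For illustration, Pi-hole (which uses dnsmasq underneath) accepts a one-line override like this; the filename, domain, and IP are placeholders:

```
# /etc/dnsmasq.d/99-local-overrides.conf (hypothetical filename)
# answer with the local IP instead of the public one duckdns would return
address=/yourdomain.duckdns.org/192.168.1.10
```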




  • I have an initramfs script which knows half of the decryption key and fetches the other half from the internet.

    My threat model is: I want to be able to safely dispose of my drives, and anyone who steals my NAS needs to connect it to a network similar to mine (same gateway and subnet), before I delete the second half of the key, in order to get my data.
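
    A minimal sketch of that initramfs hook, with the fetch stubbed out (every name and value here is invented; a real hook would fetch the half over the network and pipe the combined key to cryptsetup):

```shell
#!/bin/sh
# Hypothetical sketch of the split-key scheme; all names and values are made up.

fetch_remote_half() {
  # In a real initramfs hook this would be a network fetch, e.g.:
  #   wget -qO- http://key.example.com/half.txt
  # Stubbed here so the sketch is self-contained.
  echo "fedcba9876543210"
}

LOCAL_HALF="0123456789abcdef"       # baked into the initramfs image
REMOTE_HALF="$(fetch_remote_half)"  # only reachable from the "right" network
KEY="${LOCAL_HALF}${REMOTE_HALF}"

# The combined key would then unlock the volume, e.g.:
#   printf '%s' "$KEY" | cryptsetup open /dev/sda2 cryptroot --key-file=-
printf 'combined key length: %s\n' "${#KEY}"
```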


  • It is not just a matter of how many ports are open; it is about the attack surface. You can have a single port 443 open with the best reverse proxy, but if a crappy app behind it allows remote code execution you are fucked no matter what.

    Each open port exposes one or more services on the internet. You have to decide how much you trust each of those services to be secure, and how much you trust your password.

    While we can agree that SSH is a very safe service, if you allow password login for root and the password is “root”, the first scanner that passes by will take control of your server.

    As others mentioned, putting everything behind a VPN is the best way to reduce the attack surface: VPN software is usually written with security in mind, so you reduce the risk of zero-day attacks. Many VPNs also use certificates to authenticate the user, making guessed credentials virtually impossible.
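
    For the SSH example above, the weak configuration is disabled with two standard OpenSSH options:

```
# /etc/ssh/sshd_config
PermitRootLogin no          # no direct root login at all
PasswordAuthentication no   # keys only, nothing to brute-force
```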




  • The really important things (essentially only photos) are backed up on a separate USB drive and remotely on Backblaze. Around one terabyte costs $2-3 per month (you also pay per operation, so it depends on how frequently you trigger the backup). You want to search for “cold storage”, which is the name for cloud storage that is infrequently accessed (in other words, more storage than bandwidth). As a bonus, if you use rclone you can encrypt your data before sending it to the cloud.
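
    A sketch of an rclone setup for this (remote names, bucket, and credentials are placeholders; the crypt remote wraps the B2 remote so data is encrypted client-side before upload):

```
# ~/.config/rclone/rclone.conf
[b2]
type = b2
account = <application-key-id>
key = <application-key>

[b2-crypt]
type = crypt
remote = b2:my-backup-bucket
# rclone stores this obscured; generate it with `rclone obscure`
password = <obscured-password>
```

    A backup run is then just something like `rclone sync /photos b2-crypt:photos`.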



  • I remember reading a post on Mastodon explaining that no motherboard validates the Secure Boot keys’ expiration dates, because otherwise it wouldn’t boot the first time the BIOS battery goes empty and the internal clock gets reset. The post was well written and cited some sources, but I didn’t try to verify those claims.




  • I remember searching for a similar workaround in the past. I’m not sure parallel will work, because if I recall correctly the whole automation is blocked on error. A workaround I found suggested on the HA website (but never tried) is to put the command that may fail in a script and run that script as “fire and forget” from the automation. If the automation doesn’t wait for the script to finish, it won’t detect the error either. But, as others pointed out, try to make the Zigbee network more stable first.
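
    The fire-and-forget pattern looks roughly like this (entity names are made up): calling the script via script.turn_on returns immediately, whereas calling script.flaky_zigbee_command directly would wait for it and propagate its errors into the automation:

```
# automation: kick off the flaky command without waiting for it
- alias: "Night routine"
  trigger:
    - platform: time
      at: "23:00:00"
  action:
    - service: script.turn_on
      target:
        entity_id: script.flaky_zigbee_command
    - service: light.turn_off        # still runs even if the script errors
      target:
        entity_id: light.bedroom
```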




  • I’d say the most important takeaway of this approach is to stop all the containers before the backup. Some applications (like databases) are extremely sensitive to data corruption: if you simply `cp` while they are running, you may copy files of the same program at different points in time and end up with a corrupted backup. It is also worth mentioning that a backup is only good if you verify that you can restore it. There are so many issues you can discover the first time you restore a backup; you want to be sure you discover them while you still have the original data.
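
    A hedged sketch of that stop-copy-start cycle (the paths and compose project are assumptions; by default it only prints the commands so you can review them first):

```shell
#!/bin/sh
# Hypothetical backup wrapper; paths and compose project are assumptions.
set -e

COMPOSE_DIR="/srv/stack"
ARCHIVE="/backups/appdata-$(date +%Y%m%d).tar.gz"
DRY_RUN="${DRY_RUN:-1}"    # set DRY_RUN=0 to actually execute

run() {
  if [ "$DRY_RUN" = "1" ]; then echo "would run: $*"; else "$@"; fi
}

run docker compose --project-directory "$COMPOSE_DIR" stop   # quiesce writers
run tar czf "$ARCHIVE" "$COMPOSE_DIR/appdata"                # consistent copy
run docker compose --project-directory "$COMPOSE_DIR" start  # back online
```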



  • The decryption key is more than 20 random characters, so getting only half of it is no big deal, and it doesn’t look like anything interesting.

    It is on the internet mostly because I don’t have anything else to host it on locally. But I see some benefits: I wanted the server to be available immediately after a power failure. If it fetches the key from the internet, I just need the router to be online; if it fetched it from the local network, I would need another always-on server with an unencrypted disk.