Some services run really well behind a reverse proxy on 443, but others can become a real hassle… And sometimes just opening other ports would be easier than trying to configure everything to work through 443.

An example that comes to mind is SSH: yes, you can use sslh to forward SSH requests coming in on 443 to port 22, but it’s so much easier to just leave 22 open…

Now, for SSH, if you have certificate authentication or a strong password, I think you can feel quite safe, but what about other random ports? What risks am I exposing my server to if I open some of them when a service needs it? Is the effort of trying to pass everything through 443/80 worth it?

  • SmokeyDope@lemmy.world · 12 minutes ago

    Less danger than the opsec nerds hype up, but enough of a concern that you want at least a reverse proxy. The new FOSS replacement for Cloudflare on the block is Anubis (https://github.com/TecharoHQ/anubis). While I’m not the biggest fan of watching the chibi anime mascot wag its finger at me for a second or two while it tests the connection, I can’t deny the results: it seems effective enough that all the cool kids in the FOSS circles are switching to it over Cloudflare.

    I just learned how to get my first website and domain set up locally this summer, so there’s some network admin stuff I’m still figuring out. I don’t have any complex scripting or PHP, so all the bots scanning for admin pages are never going to hit anything; they just pollute the logs. People are up in arms about scraping bots these days, but when I was a kid, allowing your site to be indexed and crawled was what let people discover it through search engines. I don’t care if botnets scan through my permissively licensed public writing.

  • horse@feddit.org · 40 minutes ago

    Personally I don’t forward ports for anything that only I am supposed to access (such as SSH). Instead I connect to my home network via VPN and establish the connection from the inside. I just have an allow-all rule from the VPN subnet to my main one, but you could also allow things selectively if you don’t want everything accessible via VPN. Using the VPN has the added bonus of ensuring everything goes through a secure tunnel when I’m connecting from a public network.
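
    A minimal WireGuard client config sketch of that kind of setup (the addresses, hostname, and key placeholders below are hypothetical, not this commenter’s actual values); the AllowedIPs line is what routes both the VPN subnet and the home LAN through the tunnel:

    [Interface]
    Address = 10.8.0.2/24
    PrivateKey = <client private key>

    [Peer]
    PublicKey = <server public key>
    Endpoint = home.example.org:51820
    # route the VPN subnet plus the main home subnet through the tunnel
    AllowedIPs = 10.8.0.0/24, 192.168.1.0/24
    PersistentKeepalive = 25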

  • bigfondue@lemmy.world · 9 hours ago

    If you can disable IPv4 on sshd, then it really isn’t an issue. I know, security through obscurity isn’t robust, but when I had sshd listening on IPv4 I was getting around 6-10 failed login attempts a minute. People iterate through all the IPv4 addresses, since there are only 4,294,967,296 possible addresses. There are 340,282,366,920,938,463,463,374,607,431,768,211,456 possible IPv6 addresses, so the chance of someone randomly stumbling upon your address is fucking astronomically tiny. Since I disabled IPv4 a couple of years ago, I’ve had exactly zero failed logins that weren’t me being a sloppy typist.
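
    For anyone wanting to try the same thing, a minimal sshd_config sketch along those lines (assuming your host and clients are reachable over IPv6):

    # /etc/ssh/sshd_config: bind sshd to IPv6 only
    AddressFamily inet6
    ListenAddress ::

    Restart sshd afterwards and confirm a working IPv6 session before closing your existing one.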

    • ganymede@lemmy.ml · 8 hours ago

      People iterate through all the IPv4 addresses since there are only 4,294,967,296 possible addresses. There are 340,282,366,920,938,463,463,374,607,431,768,211,456 possible IPv6 addresses

      i love your thinking!!

      do you have a backup in case you accidentally find yourself locked out from an ipv4-only network?

      • bigfondue@lemmy.world · 6 hours ago

        Not really. My home network doesn’t have any port forwarding, so nothing is exposed. I have a VPS, but nothing really important is on there, and I pretty much exclusively use it from home. Anyway, all those failed logins were just trying defaults like user admin, password admin. If you have a strong password or an SSH key it really doesn’t matter, but I just hated knowing people were trying to get in, even if it was just half-assed attempts to find an unsecured machine.

        If I really needed IPv4 to get in, I could always just log onto Vultr’s web console and enable it again.

  • blargh513@sh.itjust.works · 9 hours ago

    Get a WAF. Sophos firewall is free if you want to DIY; if not, use Cloudflare.

    Opening ports, logging, monitoring, nailing up allow-listed IP addresses, and dicking around with fail2ban is such a timesuck. None of that crap will stop something from exploiting a vulnerability.

    Some things are worth farming out to a third party. Plus, you can just point your DNS entry at them and be mostly done. No more dynamic IP BS.

    • sfjvvssss@lemmy.world · 3 hours ago

      A WAF won’t magically solve your problems and free you from your attack surface. To be effective it needs context about the application and a lot of tuning. Your public-facing services should be treated, configured, and maintained as such. I’m not sure whether you include a WAF in the stuff that won’t stop exploitation of vulns, but it definitely belongs there: yes, it can decrease volume and make exploitation a bit harder, but that’s usually it. Also, don’t just drop in proprietary third-party stuff and hope it solves your problems.

      • blargh513@sh.itjust.works · 1 hour ago

        It isn’t a magic solution, no, but you have a lot more control than with crummy layer-3 firewall rules and endless lists.

        The big players have far more data about what “bad” looks like. Either we play whack-a-mole with outdated tools and techniques, or we get smart and learn to use what’s available.

        Self-hosting doesn’t mean we go backward in sophistication and difficulty; it means embracing modern solutions.

        In the dinosaur days we had primitive tools, but so did the attackers. We cannot hope to self-host with any measure of security if we bring piss to a shitfight.

  • Itdidnttrickledown@lemmy.world · 11 hours ago

    Move the port to a high port. Install fail2ban and set it to ban quickly. The downside is that if you fat-finger your login more than a couple of times it might ban you; I have a whitelist on mine of the IP addresses I know I will be logging in from. I also run TCP wrappers, which far too many people screech about being deprecated; it works, and if set up properly it logs all login attempts. I get about three or four attempts a month on my random high port. Of course, most of this depends on you connecting from known addresses or subnets.

    I only keep the SSH login as a backup. I run WireGuard with the port set to something other than the default, which lets me get into my home network quickly. While it’s always possible some bug will let someone else in one day, it works as well as any other solution.
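
    A minimal fail2ban jail sketch for that kind of setup (the port number, ban times, and whitelist address below are placeholders, not this commenter’s actual values):

    # /etc/fail2ban/jail.local
    [sshd]
    enabled  = yes
    port     = 49222                      # the non-standard high port sshd listens on
    maxretry = 3                          # ban quickly after a few failures
    bantime  = 3600
    findtime = 600
    ignoreip = 127.0.0.1/8 203.0.113.10   # whitelist the addresses you log in from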

  • vane@lemmy.world · 17 hours ago

    You can reverse proxy ports other than 443 and, for example, upstream SSH. The advantage of having a reverse proxy in front of everything is having all traffic in one place so you can manage it; that’s why, for example, Kubernetes has an ingress server. Here’s an nginx / OpenResty example upstreaming SSH; you can also restrict traffic to a limited set of IPs, etc.

    # nginx "stream" block: raw TCP proxying, no HTTP involved
    stream {
        upstream ssh {
            server          127.0.0.1:22;   # the real sshd
        }

        server {
            listen          2222;           # public port the proxy listens on
            proxy_pass      ssh;            # pass the TCP stream straight to sshd
        }
    }
    
    • dontblink@feddit.it (OP) · 11 hours ago

      As far as I knew, reverse proxies could only handle traffic coming in on 443 or 80; I didn’t know they could listen on other ports as well!

      The main reason I was using a reverse proxy in the first place is that I had everything behind Cloudflare, and Cloudflare can only proxy and give you SSL for traffic that goes through 443, so I could make Caddy listen on 443 and then forward to the relevant ports.
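
      For reference, that kind of Caddy setup is usually just a couple of lines per service; a sketch with hypothetical hostnames and backend ports (Caddy terminates TLS on 443 and proxies to the internal port):

      git.example.com {
          reverse_proxy 127.0.0.1:3000
      }
      media.example.com {
          reverse_proxy 127.0.0.1:8096
      }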

      But this leaves out everything that needs to go somewhere other than 443 and requires its own standalone SSL certificate, which is a bit cumbersome. Perhaps those services could be proxied with something other than Cloudflare, hopefully giving SSL to everything…

      I’m not sure I understood the upstream ssh thing; what do you actually do?

      • vane@lemmy.world · 10 hours ago

        This is an nginx / OpenResty config. upstream is just the definition of a server (or a group of servers, if you do load balancing); you can specify load-balancing strategies and so on, or use it when you want to separate the server layer from the proxy layer.

        stream {
          upstream something {
             server xxx:123;       # backend one
             server yyy:321;       # backend two (round-robin by default)
          }
          server {
             listen 666;           # port the proxy listens on
             proxy_pass something; # forward the raw TCP stream to the upstream group
          }
        }
        

        https://docs.nginx.com/nginx/admin-guide/load-balancer/tcp-udp-load-balancer/

        I use OpenResty with autossl, which renews certificates automatically. The only problem is maintaining the subdomain allow-list; otherwise bots will DDoS Let’s Encrypt with random domain names, and after some quota they will soft-ban you for a week from creating certificates for new domains/subdomains.

  • sadfitzy@ttrpg.network · 22 hours ago

    Opening ports essentially allows other computers on the internet to initiate a connection with yours.

    It’s only dangerous if a service running on those ports can be exploited.

    • ganymede@lemmy.ml · 8 hours ago

      to reduce attack-surface, if there’s no reason for the port to be open, don’t open it.

    • medem@lemmy.wtf · 14 hours ago

      This, coupled with the fact that firewalls are protocol-agnostic. You can, for instance, use ‘port https’ in your Packet Filter config instead of ‘port 443’, but that simply means that PF will block/pass traffic to whatever service is bound to that particular port, NOT HTTPS connections in general.
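
      A quick Packet Filter sketch of what that means in practice (an illustrative rule, not from this comment): ‘https’ is just an alias for port 443 taken from /etc/services, so PF passes any TCP traffic to that port regardless of what is actually listening there.

      # /etc/pf.conf
      pass in on egress proto tcp from any to any port https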

    • roofuskit@lemmy.world · 18 hours ago

      “If” is not the correct word choice. It’s only dangerous when a service on the port gets exploited.

  • Onomatopoeia@lemmy.cafe · 19 hours ago

    About 5 years ago I opened a port to run a test.

    Within hours it was getting hammered (probably by scripts) trying to figure out what that port was forwarded to, and trying to connect.

    I closed the port about a week later, but not before that poor consumer router was overwhelmed by the hits.

    Even after closing it, I’d still get hammered with scans occasionally for the next 2 years.

    There are tools out there continually looking for open ports; the results probably get added to a database, and hackers, script kiddies, whoever, will try to get in.

    What’s interesting is that I did the same thing around 2000 on a DSL connection (which was very much a static address) and it wasn’t an issue, even though there were fewer always-on consumer connections.

  • lorentz@feddit.it · 1 day ago

    It is not just a matter of how many ports are open; it is about the attack surface. You can have a single 443 open behind the best reverse proxy, but if you have a crappy app behind it which allows remote code execution, you are fucked no matter what.

    Each open port exposes one or more services to the internet. You have to decide how much you trust each of those services to be secure, and how much you trust your password.

    While we can agree that SSH is a very safe service, if you allow password login for root and the password is “root”, the first scanner that passes by will get control of your server.

    As others mentioned, having everything behind a VPN is the best way to reduce the attack surface: VPN software is usually written with safety in mind, so you reduce the risk of zero-day attacks. Also, many VPNs use certificates to authenticate the user, making guessed access virtually impossible.

  • tvcvt@lemmy.ml · 22 hours ago

    There’s definitely nothing magic about ports 443 and 80. The risk is always that the underlying service will expose a vulnerability through which attackers could find a way in. Any port presents an opportunity for attack; the security of the service behind it is what makes it safe or not.

    I’d argue that long-tested services like SSH, absent misconfiguration, are at least as safe as most reverse proxies. That doesn’t mean people won’t try to break in via port 22. They sure will; they try on web ports too.

  • ryokimball@infosec.pub · 20 hours ago

    If you are trying to access several different services on your home network over the internet, you are better off setting up a home VPN than trying to manage multiple public-facing services. The more you publish directly to the public, the more difficult it is to keep up with everything; it likely needlessly expands your threat exposure. Plus, you never know when a new exploit will be published against one of the services you have exposed.

    • 0x0@lemmy.zip · 1 day ago

      setting up a home VPN then trying to manage multiple public facing services.

      You mean than? Not being anal about it, but it does change the meaning.

    • Dagnet@lemmy.world · 1 day ago

      Self-hosting newbie here. What if those services are Docker containers? Wouldn’t the threat be isolated from the rest of the machine?

      • Onomatopoeia@lemmy.cafe · 19 hours ago

        Others have clarified, but I’d like to add that security isn’t one thing - it’s done in layers, so each layer protects against potential failures in another layer.

        This is called the Swiss cheese model of risk mitigation.

        If you take a bunch of random slices of Swiss cheese and stack them up, how likely is it that a single hole goes through every layer?

        Using more layers reduces the risk of “hole alignment”.

        Here’s an example model:

        So a router that has no open ports, then a mesh VPN (wireguard/Tailscale) to access different services.

        That VPN should have rules that only specific ports may be connected to specific hosts.

        Hosts are on an isolated network (could be VLANS), with only specific ports permitted into the VLAN via the VPN (service dependent).

        Each service and host should use unique names for admin/root, with complex passwords, and preferably 2FA (or in the case of SSH, certs).

        Admin/root access should be limited to local devices, and if you want to get really restrictive, specific devices.

        In the enterprise it’s not unusual to have an admin password management system where you have to request an admin password for a specific system, for a specific period of time (delivered via a secure mechanism, sometimes in person). This is logged, and when the requested time frame expires the password is changed.

        Everyone’s risk model and Swiss cheese layering will fall somewhere on this scale.
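
        As a purely illustrative sketch of the “only specific ports to specific hosts” layer above, an nftables forward chain on the VPN/VLAN router might look something like this (the addresses and ports are hypothetical):

        table inet filter {
          chain forward {
            type filter hook forward priority 0; policy drop;
            ct state established,related accept
            # wg0 clients may reach only these service ports on these hosts
            iifname "wg0" ip daddr 192.168.20.10 tcp dport 443 accept
            iifname "wg0" ip daddr 192.168.20.11 tcp dport 22 accept
          }
        }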

      • Technus@lemmy.zip · 1 day ago

        No. Docker containers aren’t a full sandbox. There’s a number of exploits that can break out of a container and gain root access to the host.

        • Possibly linux@lemmy.zip · 1 day ago

          Yes and no

          Breaking out of Docker in a real-life context would require either a massive misconfiguration or a major security vulnerability. Chances are an attacker won’t get much in the way of lateral movement, but it is always good to have defense in depth.

          • Technus@lemmy.zip · 1 day ago

            If someone’s self-hosting, I’d be willing to bet they don’t have the same hardened config or isolation that a cloud provider would.

            • Possibly linux@lemmy.zip · 21 hours ago

              Docker restricts the permissions of software running in the container. It is hardened by default and you need to manually grant permissions in some rare cases.
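
              If you want to tighten a container further yourself, a few commonly used flags (an illustrative invocation, not tied to any particular service):

              docker run --read-only --cap-drop=ALL \
                --security-opt no-new-privileges \
                --name app myimage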

      • non_burglar@lemmy.world · 1 day ago

        Is your container isolated from your internal network?

        If I were to compromise your container, I’d immediately pivot to other systems on your private network.

        Why do the difficult thing of breaking out of a container when there’s a good chance I can use the credentials I got by breaking into your container to access other systems on your network?

      • oddlyqueer@lemmy.ml · 1 day ago

        It’s an extra hurdle, but it’s far from a guaranteed barrier. There’s a whole class of exploits called container escapes (or hypervisor escapes if you’re dealing with old-school VMs) that specifically focus on escalating an attack from a compromised container into whatever machine is hosting the container.

  • skankhunt42@lemmy.ca · 1 day ago

    It’s not so much about the ports; it’s about what you’re running that’s accessible to the public.

    If you have a single website on 443 and SSH on 22 (or a non-standard port like 6543), you’re generally considered safe. That’s two services, and someone would need to attack one of the two to get in.

    If you have a VPN on 4567 and everything behind the VPN, then someone would need to hack the VPN to get in.

    If you have 100 different things behind 443, then someone just needs to find a hole in one of them to get in.

    Generally SSH, nginx, and a VPN are all safe, and they should each be on their own ports.

    • sfjvvssss@lemmy.world · 1 day ago

      Sorry to nitpick, but I feel like being precise here is important. Nginx is a project, SSH a protocol, and a VPN an overlay network, so more of a concept. All three can be run anywhere on the spectrum between quite secure and super insecure. Also, safe and secure are two different things; I guess you meant secure, so no big deal.

    • ryannathans@aussie.zone · 1 day ago

      Exposing SSH is not recommended; it’s a hot attack target. Expose a VPN and use that to SSH in.

        • sfjvvssss@lemmy.world · 1 day ago

          While this helps get the volume down, it just adds a layer of obscurity, and the service behind it should still be treated and maintained as if it were fully public-facing.
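
          For context, the kind of port-knocking setup discussed here typically looks like this knockd sketch (the sequence, paths, and firewall command are placeholders): the port stays closed until a client hits the secret knock sequence, at which point the daemon opens it for that source IP.

          # /etc/knockd.conf
          [options]
            logfile = /var/log/knockd.log

          [openSSH]
            sequence    = 7000,8000,9000
            seq_timeout = 5
            tcpflags    = syn
            command     = /sbin/iptables -I INPUT -s %IP% -p tcp --dport 22 -j ACCEPT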

          • ganymede@lemmy.ml · 5 hours ago

            while the most bare bones knocking implementation may be classed as obscurity, there’s certainly plenty of implementations which i wouldn’t class as obscurity.

            • sfjvvssss@lemmy.world · 3 hours ago

              Does this method use a cryptographically secure secret which is transmitted encrypted? If not, it is obscurity. If yes, just use normal secure authentication if your goal is security. If you want to get volume down and maybe reduce your risk, feel free to use such things but you should not apply the security label to it.

              • ganymede@lemmy.ml · 3 hours ago

                would you classify out of band whitelisting by IP (or other session characteristic[s]) as having no security merit whatsoever?

                would you classify it as purely a decision regarding network congestion & optimisation?

                you’re ofc free to define these things however you wish, but in a form which is helpful to OP’s question i’m not sure i follow you.

                • sfjvvssss@lemmy.world · 2 hours ago

                  I just wanted to make clear that port knocking is obscurity and maintaining and configuring your still public facing services in a secure manner is essential. There are best practices which I did not define and are applicable here.

                  If you whitelist your IP that of course helps but I am not sure what that has to do with port knocking. Whitelisting an IP after it knocked right, that would be obscurity. Whitelisting an IP after it authenticated through a secure connection with secure credentials? Why not just use VPN?

                  I am also not directly commenting on OP’s question, as I try to tackle misconceptions in the comments.

          • JackbyDev@programming.dev · 16 hours ago

            I think people get too defensive about security by obscurity not being security. It’s still better for things to be obscure, it’s just not sufficient. A hidden lock to open a door is marginally better than a lock on the door. A hidden button to open a door isn’t secure though, of course.

            But at the same time, I fully understand why it’s stressed so much. People tend to make analogies in their mind to the physical world. The digital world is so different though. An example I use often is you can’t jiggle every doorknob in the world to see if it’s unlocked, but it’s (relatively) easy to check every IPv4 address for an open port to some database with default credentials.

            • 4am@lemmy.zip · 16 hours ago

              Security through obscurity is hammered into newbies as being bad because it’s often a “quicker and easier” solution and we don’t want anyone thinking they could just do that and be done with it.

              You have to learn the proper way to do it; obscurity only buys you time. Maybe.

    • rumba@lemmy.zip · 1 day ago

      Everything you expose is fine until somebody finds a zero day.

      Everything these days is built from a ton of publicly maintained packages. All it takes is for one of those packages to fall into the wrong hands and get a malicious update, which happens all the time.

      If you’re going to expose the web yourself, use Anubis and fail2ban.

      Put everything that doesn’t absolutely need to be publicly reachable behind a VPN.

      Keep all of your software updated, constant vigilance.

  • Possibly linux@lemmy.zip · 1 day ago

    With SSH it is easier to do key authentication; certificate authentication is supported but is a little more hassle. Don’t use password authentication, as it is deprecated and not secure.
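
    A minimal sketch of the key-only setup (assuming OpenSSH; generate a key, copy it over, then turn password logins off):

    ssh-keygen -t ed25519
    ssh-copy-id user@server

    # /etc/ssh/sshd_config
    PasswordAuthentication no
    KbdInteractiveAuthentication no
    PermitRootLogin prohibit-password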

    The key with SSH (OpenSSH specifically) is that it is heavily audited, so it is unlikely to have any issues. The problem is when you start exposing self-hosted services with a lot of attack surface. You need to be very careful when exposing services, as web services are very hard to secure and can be the source of a compromise that you may or may not be aware of.

    It is much safer to use an overlay VPN or some other frontend for authentication, like mTLS or an authenticated reverse proxy.
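
    For the mTLS option, an nginx sketch (the paths and backend are hypothetical): clients without a certificate signed by your CA are rejected before they ever reach the application.

    server {
        listen 443 ssl;
        ssl_certificate         /etc/ssl/server.crt;
        ssl_certificate_key     /etc/ssl/server.key;
        ssl_client_certificate  /etc/ssl/clients-ca.crt;
        ssl_verify_client on;               # require a valid client certificate
        location / {
            proxy_pass http://127.0.0.1:8080;
        }
    }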

  • Lka1988@lemmy.dbzer0.com · 19 hours ago

    The only ports I have open are 80 and 443, and 80 just redirects to 443.
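
    (For anyone curious, that 80-to-443 redirect is typically just a tiny server block; a sketch assuming nginx:)

    server {
        listen 80 default_server;
        return 301 https://$host$request_uri;
    }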

    I also have a BeamMP server that has to have a port open because that’s just how it works, but that VM sits on its own DMZ’d VLAN, and I only open the port when I’m actively playing the game.

  • Sanctus@anarchist.nexus · 1 day ago

    It just widens your attack surface for the ghost army of bots that roam the net knocking on ports; you don’t want to be someone else’s sap. I would imagine most home attacks fall into three categories: script kiddies just war driving, targeted attacks on someone specific, or plain ol’ looking for sensitive docs for identity theft or something. It’s still the net, man. If you leave your ass hanging out, someone’s gonna bite it in a new way every time.