
Nice list! I'd say SQLite with WAL is the biggest money saver mentioned.

One note: you can absolutely use Python or Node just as well as Go. Hetzner, for example, offers machines with 4 GB RAM, 2 CPUs, and 10 TB of network traffic (then $1/TB egress) for $5.

Two disclaimers for VPS:

If you're using a dedicated server instead of a cloud server, don't forget to back up your DB to a Storage Box often ($3/mo for 1 TB; use rsync). It's good practice either way, but cloud instances seem more resilient to hardware faults. Also avoid their object store.
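Storage Boxes speak rsync over SSH, so a cron'd one-liner is enough. A sketch, with a placeholder Storage Box user and paths:

    # nightly from cron; u123456 and the paths are placeholders
    rsync -az /var/backups/db/ u123456@u123456.your-storagebox.de:db-backups/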

You are responsible for security. I saw good devs skip basic SSH hardening and get infected by bots in <1 hr. My go-to move when I spin up servers is a two-stage Terraform setup: first I set up SSH with only my IP allowed, then I set up Tailscale and shut down the public SSH entrypoint completely.
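A minimal sketch of stage one, assuming the hcloud Terraform provider and a placeholder IP:

    # Stage 1: cloud firewall that only admits my IP on port 22.
    # Stage 2 (once Tailscale is up) removes this rule entirely.
    resource "hcloud_firewall" "ssh_bootstrap" {
      name = "ssh-bootstrap"
      rule {
        direction  = "in"
        protocol   = "tcp"
        port       = "22"
        source_ips = ["203.0.113.10/32"] # placeholder: your IP only
      }
    }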

Take care and have fun!



Personally, for backups I'd avoid using a product provided by the same company as the VM I'm backing up. You should be defending against the individual VM suffering corruption of some kind, against needing to roll back to a previous version because of an error you made, and finally against your VM provider taking a dislike to you (rationally or otherwise) and shutting down your account.

If you're backing up to a third party, losing your account isn't a disaster: bring up a VM somewhere else, restore from backups, redirect DNS, and you're up and running again. If the backups are on a disk you can't access anymore, then a minor issue has just escalated into an existential threat to your company.

Personally I use Backblaze B2 for my offsite backups because they're ridiculously cheap, but other options exist and Restic will write to all of them near-identically.
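The B2 setup really is just a few commands; bucket name and credentials below are placeholders:

    export B2_ACCOUNT_ID="..."     # placeholder credentials
    export B2_ACCOUNT_KEY="..."
    export RESTIC_PASSWORD="..."   # encrypts the repo; don't lose it
    restic -r b2:my-bucket:servers/app init      # once, to create the repo
    restic -r b2:my-bucket:servers/app backup /srv/app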


About security, a wall-of-shame story:

Once I had a PostgreSQL DB with a default password on a new VPS, and I forgot to disable password-based login, on a server with no domain. It got hacked in a day and was being used as a bot server. And that was 10 years ago.

I recently deployed a server and was getting SSH login attempts within an hour, and it didn't have a domain. Fortunately, I'd learned my lesson and turned off password-based login as soon as the server was up and running.

And similar attempts bogged my desktop down to a halt.

Having a machine open to the world is now very scary. Thank God a service like Tailscale exists.


I've had SSH, SMTP, POP3, HTTP, HTTPS and many other services open to the world since the '90s. I have fail2ban running. It is not that scary.
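The stock sshd jail covers most of it; something like this in jail.local (the retry and ban values are a matter of taste):

    # /etc/fail2ban/jail.local
    [sshd]
    enabled  = true
    maxretry = 5
    findtime = 10m
    bantime  = 1h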


Yes, changing the SSH port and running fail2ban on the server completely stopped those pesky SSH login attempts.

But on my home computer I don't want to be bothered with all the security effort and want to keep it simple. I have plans to put up an isolated server setup someday, but I'm too broke right now and looking for a job. Heh.

I have seen people using simple password-based authentication with really simple passwords. I always go and fix that first. It's all too common, which is why it's scary.


Also: strong, random-looking passwords for droplets or apps, saved in a text file. Use the DigitalOcean guide on setting up a Linux box securely, plus the UFW firewall. Then lighttpd, BunnyCDN (especially for SSL), and periodic updates.

Works so well that it's easy to forget they're running.


Nothing would happen; SSH is designed to be open to the world. Using Tailscale or a VPN to hide your IP is fine, but using Tailscale SSH, maybe not.


Well, continuous attempts definitely bogged down my desktop pretty badly. Also, getting OOM on a 64 GB machine multiple times a day is quite annoying.

And one simple mistake, and we're screwed.


If sshd is OOMing on 64 GB, something else is going on…


Well, after changing the SSH port to something really big, the OOM and heavy CPU usage stopped. I was still using that public IP, so I concluded it was not an inside job.

There were like thousands of requests in an hour, and that went on continuously before I changed the port.


Yeah, that sounds quite annoying, but it has nothing to do with SSH log noise. Maybe investigate what's causing the OOM. I have multiple 1 GB VPSes with SSH open to the world and they never OOM, and they're obviously not just running SSH. It sounds like you've been compromised.


The number of attempts was staggering though; I think there were requests every second, non-stop.

Once I changed the SSH port to a large number, the OOM and heavy CPU usage stopped and never came back. So I think I'm safe, though I keep an eye on the logs and watch for any unknown processes, but I've never seen anything out of the ordinary.

The 64 GB machine is my dev machine. My IDE (IntelliJ) runs on a high-memory config and I run some heavy processes, so combined with the SSH spam it could have gone OOM. I still run all of those things without any issues now.


> Nice list! I'd say SQLite with WAL is the biggest money saver mentioned.

Funny you should say that. I migrated an old Django website to a slightly more modern architecture (Docker Compose with uvicorn instead of bare-metal uWSGI) the other day, and while doing that I noticed that it doesn't need PostgreSQL at all. The old server already had it installed, so it was the lazy choice.

I just dumped all the data and loaded it into an SQLite database with WAL, and it's much easier to maintain and back up now.
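For anyone curious, enabling WAL from Python is a single pragma, and the mode is persisted in the database file, so it sticks for every later connection. A sketch with a placeholder path:

    import sqlite3

    conn = sqlite3.connect("app.db")            # placeholder path
    conn.execute("PRAGMA journal_mode=WAL")     # persisted in the db file
    conn.execute("PRAGMA synchronous=NORMAL")   # common pairing with WAL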


Yep, it literally is a one-file backup. And at runtime it's so much faster for apps where write serialisation is acceptable.
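The sqlite3 CLI can even snapshot a live database consistently (file names here are placeholders):

    sqlite3 app.db ".backup app-backup.db"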


> You are responsible for security. I saw good devs skip basic SSH hardening and get infected by bots in <1 hr. My go-to move when I spin up servers is a two-stage Terraform setup: first I set up SSH with only my IP allowed, then I set up Tailscale and shut down the public SSH entrypoint completely.

Note that you don't need all of that to keep your SSH server secure. Just having a good password (ideally on a non-root account) is more than enough.


Disable password auth and go key-based; it's easier and more secure.
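It's roughly three lines in sshd_config and a reload:

    # /etc/ssh/sshd_config
    PasswordAuthentication no
    PubkeyAuthentication   yes
    PermitRootLogin        prohibit-password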


I'd call it unnecessary exposure. Under both modern threat models and classic cybernetic models (check out the law of requisite variety), removing as much attack surface as possible is optimal. Disabling passwords in SSH especially is infosec 101 these days. No need to worry about brute-force attacks, credential stuffing, or simple human error, which was the cause of all the attacks I've seen directly.

It's easy enough to add a small Terraform config to make your setup at least key-based.
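Something like this, assuming the hcloud provider (names, image, and server type are placeholders):

    resource "hcloud_ssh_key" "me" {
      name       = "laptop"
      public_key = file("~/.ssh/id_ed25519.pub")
    }

    resource "hcloud_server" "app" {
      name        = "app-1"
      image       = "debian-12"
      server_type = "cx22"
      ssh_keys    = [hcloud_ssh_key.me.id]  # provisioned with keys, no password login
    }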


I need more info about devs getting infected over SSH in less than an hour. Unless they had a comically weak root password or left VNC open, I don't believe it at all.


Yes, the <1 hr case was a weak root password. All the attacks I've seen directly were user error. The point is effectively removing attack surfaces rather than enhancing security on needlessly exposed internet-facing protocols.


It must have been comically weak, like "root", "password" or something like that.


You'll get thousands of attacks a day (and it's been years since I last did this, so it's probably worse now). They try the list of the 1,000 or so most common passwords across the whole internet. It works often enough to be cost-effective.


Yeah, exactly. If your password can be brute-forced in 1,000 or so attempts, you have bigger problems than not having fail2ban on SSH. The parent comment was suggesting someone was hacked in an hour for leaving SSH on default settings, and that's obviously not true.


You're misreading my point. I didn't recommend fail2ban or claim that any machine without it is as good as compromised. I recommended removing the attack surface entirely by not exposing SSH to the public internet. The point is removing an attack surface completely instead of relying on operator competency.

Relying on a 'sane password' is like seeing the stat '1 out of 10 cars is left unlocked' and commenting 'Yeah, but those people are stupid, I'd never forget to lock mine!'. While maybe true, it's irrelevant. It's objectively safer to keep the car in a private garage (Tailscale) than to leave it on a public street. Feel free to leave your car wherever.


Particularly as VPS providers typically auto-assign a random root password, which suggests the weak one was deliberately set.


The first step is to get SSH set up correctly, and the second step is to enable a firewall that blocks incoming connections on everything except the key ports (SSH, but on a different port / web / SSL). This immediately eliminates a swathe of issues!
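With ufw that's a handful of commands (the SSH port number here is just an example):

    ufw default deny incoming
    ufw default allow outgoing
    ufw allow 2222/tcp   # SSH moved off the default port
    ufw allow 80/tcp
    ufw allow 443/tcp
    ufw enable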


Also use fail2ban, if nothing else to decrease the amount of junk in the logs.


> Also avoid their object store.

Curious as to why you say this. I'm using Litestream to back up to Hetzner object storage, and it's been working well so far.

I guess it's probably more expensive than just a storage box?

Not sure, but I also don't have to set up cron jobs and the like.
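For reference, the Litestream config stays tiny. A sketch where the bucket and endpoint are placeholders, with credentials coming from the LITESTREAM_ACCESS_KEY_ID / LITESTREAM_SECRET_ACCESS_KEY environment variables:

    # /etc/litestream.yml
    dbs:
      - path: /var/lib/app/app.db
        replicas:
          - type: s3
            bucket: my-backups
            path: app.db
            endpoint: https://fsn1.your-objectstorage.com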


Historical reliability and compatibility. They claimed they were S3-compatible, but they required deprecated S3 SDKs, plus advanced S3 features are unimplemented (though at least they document it [0]). There were constant timeouts on object creation and updates, very slow speeds, and overall instability. Even now, if you check out r/hetzner on Reddit, you'll see it's a reliability nightmare (but take that with a grain of salt; nobody reports a lack of problems). Not as relevant for DB backups, but billing is dumb: even if you upload a 1 KB file, they charge you for 64 KB.

At least with a Storage Box you know it's just a dumb storage box. And you can SSH, SFTP, Samba and rsync to it reliably.

[0] https://docs.hetzner.com/storage/object-storage/supported-ac...


Does WAL really offer multiple concurrent writers? I know little about DBs, and from a couple of Google searches people say it allows concurrent reads while a write is happening, but no concurrent writers?

Not everybody says so... So, can anyone explain the right way to think about WAL?


No, it does not allow concurrent writes (with some exceptions if you get into it [0]). You should generally use it only if write serialisation is acceptable. Reads and writes are concurrent, except for the commit stage of writes, which SQLite tries to keep short but which is workload- and storage-dependent.

Now, this is a more controversial take and you should always benchmark against your own traffic projections, but:

consider that if you don't have a ton of indexes, the raw throughput of SQLite is so good that on many access patterns you'd already have to shard a Postgres instance anyway before SQLite's single-writer limitation became the bottleneck.

[0] https://www.sqlite.org/src/doc/begin-concurrent/doc/begin_co...


No it doesn't - it allows a single writer and concurrent READs at the same time.


Thanks! I run SQLite in "production" too (is it production if you have no visitors?) with WAL mode enabled, but I had to work around concurrent writes, so I was really confused. I may have misunderstood the comments.


Writes are super fast in SQLite even if they are not concurrent.

If you were seeing errors due to concurrent writes, you should adjust the busy_timeout setting.
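In Python that's either the connect timeout (in seconds) or the pragma (in milliseconds); a sketch with a placeholder path:

    import sqlite3

    conn = sqlite3.connect("app.db", timeout=5.0)  # wait on locks up to 5 s
    conn.execute("PRAGMA busy_timeout = 5000")     # same idea, in ms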


Thanks, I'll have a look. For now I've just had a sane retry strategy. Not that I have any traffic, mind you :-)))
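FWIW, a retry strategy like that can stay tiny. A sketch with a hypothetical helper and exponential backoff:

    import random
    import sqlite3
    import time

    def write_with_retry(conn, sql, params=(), attempts=5):
        # Retry a write when the database is briefly locked.
        for i in range(attempts):
            try:
                with conn:  # commits on success, rolls back on error
                    conn.execute(sql, params)
                return
            except sqlite3.OperationalError as e:
                if "locked" not in str(e) or i == attempts - 1:
                    raise
                time.sleep(0.05 * 2**i + random.random() * 0.05)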


SQLite + Litestream for backups.


When creating a VPS on Hetzner, it lets you configure key-only auth by default.


From memory, this is the case on DO as well.



