But why? If you don’t need moving parts, don’t use moving parts. Simplicity is king.
Notesnook locked my session every other week, requiring me to authenticate again. And it doesn’t even do this like every other app out there: it first asks for the TOTP code and then for the password, the reverse of how password managers work, so it takes a lot of manual back and forth.
So quickly writing a thought down sends me into a chain of context switches and completely disrupts me.
I gave up after a while. Happened on all my devices too, so it wasn’t a weird setup either.
If you run it in old-school CGI mode, no, because each request spawns a new process. But that’s nowhere near state-of-the-art. Typically you would still have a long-running process somewhere that could manage a connection pool. No idea if PHP actually does that, though. I can’t imagine it doesn’t, however, since PHP would be slaughtered in benchmarks if there were no way to keep connections (or pools) open across requests.
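To illustrate the idea, here’s a minimal sketch in Python rather than PHP, with an in-memory SQLite database standing in for a real one (everything here is illustrative, not how any particular PHP runtime does it): the pool is built once when the long-running process starts, and each request only borrows a connection instead of paying the connect cost.

```python
import queue
import sqlite3  # stand-in for a real DB driver; the point is the pool, not the DB

# Built once when the long-running process starts, NOT once per request.
POOL_SIZE = 4
_pool: "queue.Queue[sqlite3.Connection]" = queue.Queue()
for _ in range(POOL_SIZE):
    _pool.put(sqlite3.connect(":memory:", check_same_thread=False))

def handle_request(user_id: int) -> list:
    """One incoming request: borrow a connection, use it, give it back."""
    conn = _pool.get()      # no connect() cost here, just a queue pop
    try:
        return conn.execute("SELECT ?", (user_id,)).fetchall()
    finally:
        _pool.put(conn)     # the connection survives for the next request

# In CGI mode this whole module would be re-imported (and the pool rebuilt)
# for every single request, which is exactly the overhead a long-running
# process avoids.
if __name__ == "__main__":
    print(handle_request(42))  # [(42,)]
```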
Anything that loads in under about 100 ms feels instant to the user, especially a page load.
True, but it accumulates. Every ms I save on templating I can “waste” on I/O, DB, upstream service calls, etc.
For a bit of templating? Yes! What drives response times up is typically the database or some RPC, both of which are out of PHP’s control, so I assume these were not factored in (because PHP can’t win anything there in a comparison).
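To make the “it accumulates” point concrete, a toy budget (every number here is made up for illustration): whatever templating doesn’t eat is left over for the database and upstream calls.

```python
# Toy latency budget; every number is hypothetical.
perceived_instant_ms = 100   # the "feels instant" threshold from above

db_query_ms = 40             # imaginary database query
upstream_rpc_ms = 35         # imaginary upstream service call

for templating_ms in (5, 20):
    spent = db_query_ms + upstream_rpc_ms + templating_ms
    print(f"templating {templating_ms:>2} ms -> {perceived_instant_ms - spent} ms headroom left")
# templating  5 ms -> 20 ms headroom left
# templating 20 ms -> 5 ms headroom left
```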
Pff. I know someone who generated programs using XSLT.
What exactly do you mean? Typically you go to a website, register the domain, set up payment and then set up the nameserver. No need to install anything on your end.
Same with hosting. You sign up, set up payment, order a machine (root or virtual), and then you get SSH credentials and are good to go.
Where did you look? The backend had a commit 2 days ago, the frontend rewrite one 2 weeks ago. Looks active.
Damn, I never saw it that way. In that regard the EU regulation could actually harm the browser market, because it lowers the incentive for service providers to support anything but Chrome. At the moment that would exclude all iPhone users (which hurts business, because that’s a lot of users with deep pockets). But with Chrome allowed on iOS, they could simply shrug and tell their users to install it. 😐️
It’s easier to share (so my wife can maintain and use the collection as well) and I am not locked into the Apple ecosystem. Plus, I can then define different views and queries.
Could work. Maybe also something like AirTable (SeaTable probably) for the metadata. Hmm.
Sounds good. Probably less work than writing something from scratch. Thanks 🙂
I have probably around 40 T-shirts (most with some funny or technical print), obviously several business shirts in different colors, different trousers, etc.
I have difficulty keeping an overview and keep wearing only a very small subset. I also don’t want to pull everything out of the wardrobe each time and sift through it to figure out what would fit my mood today.
For textiles (bed linens especially) the problem is mostly that the care labels become unreadable after a while, and then I have no clue what I can wash at 95°C, what only at 60°C, and what will only survive 40°C.
I could likely also abuse Grocy. But I think for a wardrobe a visual gallery is important. I’ll take a look at ERPNext nonetheless… maybe it has some picture-centric view.
They all offer managed Kubernetes, so that would be my common denominator.
If no legal issues stand in your way and your uptime requirements warrant the investment, you can design and host your system across multiple providers. So instead of “just” going multi-datacenter within, for example, Azure, you go multi-datacenter across Azure, AWS, GCP, etc.
Isn’t part of “the cloud” being able to scale? That only works if there is a large(r) shared infrastructure layer. Of course I can have my own datacenter where I host my clustered services. But if I decide I need 20% more resources, I need to order and set up 20% more machines. On the other hand, if I just keep 20% more machines idling around for the chance that I might need to scale up, I waste a lot of money.
Same experience here. S3 is essentially a key/value store to simply put and retrieve large values/blobs. Everything resembling filesystem features is just convention over how keys are named. Communication uses HTTP, so there is a lot of overhead when working with it as an FS.
On the web you can use these properties to your advantage: you can talk to S3 with simple HTTP clients, you can use reverse proxies, you can put a CDN in front and have a static file server.
But FS utils are almost always optimized for instant block-based access and fast metadata responses. Something as simple as a `find` will fuck you over with S3.
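As a concrete example of why, here’s a small boto3 sketch (the bucket name and key prefix are hypothetical): listing is a paginated HTTP API that filters on key prefixes, so a recursive `find`-style walk turns into one round trip per 1,000 keys instead of a handful of cheap local metadata reads.

```python
import boto3  # AWS SDK; the bucket and prefix below are made up

s3 = boto3.client("s3")
BUCKET = "my-example-bucket"

# There are no directories, only keys. "photos/2024/cat.jpg" lives next to
# "photos/2023/dog.jpg"; the slashes are just characters in the key name.
paginator = s3.get_paginator("list_objects_v2")

# A find-style recursive walk: every page of (up to) 1,000 keys is a separate
# HTTP round trip, which is why this gets slow on large "trees".
for page in paginator.paginate(Bucket=BUCKET, Prefix="photos/"):
    for obj in page.get("Contents", []):
        print(obj["Key"], obj["Size"])

# The "one directory level" illusion: Delimiter="/" makes S3 group keys by
# prefix and return the groups as CommonPrefixes.
resp = s3.list_objects_v2(Bucket=BUCKET, Prefix="photos/", Delimiter="/")
for cp in resp.get("CommonPrefixes", []):
    print("pseudo-directory:", cp["Prefix"])
```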
ZFS. I want snapshots, volumes, encryption, etc. btrfs fucked me over too often. I also prefer the semantics of the `zfs` and `zpool` utils and the way mount points are handled. Thanks to ZFSBootMenu I can even have `/boot` as a ZFS volume and therefore have it included in my snapshots. And did I mention that all of that is encrypted? Anyway. Love it.
There’s nothing wrong with UDP. At least not that I know of.