Hello,
I’m currently using Minio as an easy way to store and serve my images. To make things simpler, everything is set to public, so that just with the URL you can access an image directly. While it’s working great for my website, by setting everything public anyone can easily see ALL the images. So my question is: what is the best way to set up my Node.js app as a proxy? Does that mean going through the full S3-protocol hell, or is there a simpler solution?
PS: I have a lot of images, so setting everything up inside the Node app is not possible.
Very cool! Thanks for posting this. Minio was great, but they started tailoring it to enterprise clients, and it’s become more and more annoying to keep running in a homelab. (Security is 100% a great thing, but forcing high levels of security on me when I’m running 2 containers in a compose stack, where the Minio container will never have exterior access… eh, I just gave up.) So I’m happy there’s one tailored a bit more towards self-hosters.
We ship a single dependency-free binary that runs on all Linux distributions
It’s like 20 years of security awareness vanished in an instant.
Looks like it’s all Rust, so your distro would just ship a binary too.
Why?
Dependency-free doesn’t mean they don’t have dependencies; it just means they bundle them all into the executable. When there is a security vulnerability in a library on your Linux system, the vendor of your distribution (Canonical, Red Hat, SUSE) takes care that it is fixed, and all dependent software and libraries are then fixed as well. All of them? Not the ones which have been bundled into an executable. First someone needs to find out that you are affected, and then the maintainer has to update the dependency manually. Often they can only do this after a coordinated release of the fix by the major distributors, which can leave you vulnerable no matter how fast the maintainer is. This is how things work on Windows. (This was a short summary.)
This is the same with every Docker container.
Yes, in the sense that you are responsible for updating the Docker container, and often this can lead to vulnerable containers. No, in the sense that it is much easier to scan for dependencies inside a Docker container and identify vulnerabilities. Also, most containers are based on a Linux distribution, so those distribute the security fixes for specific libraries; all you have to do is update the base image.
You know that you can configure Minio to serve images only for authenticated requests, right?
Don’t reinvent the wheel unless you have a very good reason to do so.
Why do you use Minio for image serving? There are much better ways to do so: Nextcloud, Immich, Photoprism and others…
I’m not entirely sure what you’re seeking to accomplish here - are you looking to just impose authorization on a subset of the images? Probably those should be in a non-public bucket for starters.
Looking to only give certain people access to files and also have a nicer UI (a la Google Drive / Photos)? Maybe plain S3 isn’t the play here and a dedicated application is needed for that subset.
Pre-signed URLs may also be useful for what you’re trying to solve: https://docs.min.io/docs/javascript-client-api-reference.html#presignedGetObject
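For illustration, a minimal sketch of that approach, assuming the `minio` and `express` npm packages and a private bucket named `images` (the endpoint, credentials and the auth check are placeholders to adapt to your setup):

```js
const express = require('express');
const Minio = require('minio');

// Placeholder endpoint/credentials - point this at your own Minio instance.
const minioClient = new Minio.Client({
  endPoint: '127.0.0.1',
  port: 9000,
  useSSL: false,
  accessKey: process.env.MINIO_ACCESS_KEY,
  secretKey: process.env.MINIO_SECRET_KEY,
});

const app = express();

// The bucket stays private; the app decides who may see what and hands out
// a URL that expires after 10 minutes.
app.get('/images/:name', (req, res) => {
  // TODO: your own auth/session check goes here
  minioClient.presignedGetObject('images', req.params.name, 600, (err, url) => {
    if (err) return res.status(404).end();
    res.redirect(url);
  });
});

app.listen(3000);
```

The nice part is that the image bytes still come straight from Minio; the Node app only signs short-lived URLs instead of proxying every download.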
Hi, just a disclaimer: I’ve never used Minio in a serious/professional project, only for home fiddling and school projects, so I may not be suggesting the optimal way to do things.
You can create Access Keys (Service Accounts) in Minio for your application, and make the bucket(s) accessible only by this account (private with an access policy, I guess). In your JS app, if I remember correctly, you should be able to specify an access key to use for the connection.
Access keys are credentials, so be sure to store them (and pass them around) safely.
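To give that a bit of shape, here is a minimal sketch of the proxy idea under those assumptions (the `minio` and `express` npm packages, a private bucket called `images`, and the access key kept in environment variables); the route and the auth check are placeholders you’d replace with your own logic:

```js
const express = require('express');
const Minio = require('minio');

// Minio is only reachable by the app (e.g. on the internal network), never
// directly from the internet. Credentials are the app's service account keys.
const minioClient = new Minio.Client({
  endPoint: '127.0.0.1',
  port: 9000,
  useSSL: false,
  accessKey: process.env.MINIO_ACCESS_KEY,
  secretKey: process.env.MINIO_SECRET_KEY,
});

const app = express();

// Every image request goes through your own auth logic, then the object is
// streamed straight from the private bucket to the client.
app.get('/images/:name', (req, res) => {
  // TODO: replace with your real session/authorization check
  if (!req.headers.authorization) return res.status(401).end();

  minioClient.getObject('images', req.params.name, (err, stream) => {
    if (err) return res.status(404).end();
    stream.pipe(res);
  });
});

app.listen(3000);
```

Compared to pre-signed URLs, every download flows through Node here, which costs some bandwidth and CPU on the app, but the storage itself is never exposed directly.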
I hope this gives you at least a general direction to investigate, happy tinkering!