I wouldn’t mind seeing a new microkernel. The Ethernet stack and drivers could be rebooted module by module without taking down the whole computer.
There are advantages to each.
IMO there are two underrated benefits:
- It enforces separation of concerns
- It provides options to ops.
Designing for microservices doesn’t mean you need to deploy them as microservices. You can deploy the system as a monolith and configure it to skip the network stack.
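A minimal sketch of that idea (all names here are hypothetical): the call site depends on an interface, and a config flag decides whether it’s backed by an in-process object or a network client.

```python
# Hypothetical sketch: the same service contract satisfied either
# in-process (monolith deployment) or over the network (microservice
# deployment), chosen by configuration.
from typing import Protocol


class UserService(Protocol):
    def get_name(self, user_id: int) -> str: ...


class LocalUserService:
    """In-process implementation -- used when deployed as a monolith."""
    def __init__(self) -> None:
        self._users = {1: "alice"}

    def get_name(self, user_id: int) -> str:
        return self._users[user_id]


class RemoteUserService:
    """Network-backed implementation -- used when deployed as services."""
    def __init__(self, base_url: str) -> None:
        self.base_url = base_url

    def get_name(self, user_id: int) -> str:
        # would be an HTTP GET to f"{self.base_url}/users/{user_id}"
        raise NotImplementedError("network path sketched, not implemented")


def make_user_service(deploy_as_monolith: bool) -> UserService:
    # the deployment flag is the only thing that changes; call sites don't
    if deploy_as_monolith:
        return LocalUserService()
    return RemoteUserService("http://users.internal")


svc = make_user_service(deploy_as_monolith=True)
print(svc.get_name(1))  # same call site either way
```

The calling code never knows which deployment it’s running in, so the decision can be flipped per environment.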
I very much agree with designing things in the style of microservices, in terms of having isolated components that can be reasoned about independently. In my experience, this is the only way to keep large projects manageable. Incidentally, this is also why I’ve come to appreciate the functional approach with immutability as the default. It makes it much easier to write largely stateless code where all the IO happens at the edges, and then you just pass your context around explicitly through pure functions.
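A tiny sketch of the "IO at the edges" shape (the context fields are made up): the core functions are pure and take/return an immutable context; only the outer layer touches the outside world.

```python
# Sketch of "IO at the edges": pure core functions take a context and
# return a new one; nothing in the core mutates state or does IO.
from dataclasses import dataclass, replace


@dataclass(frozen=True)  # immutable by default
class Ctx:
    user: str
    balance: int


def apply_deposit(ctx: Ctx, amount: int) -> Ctx:
    # pure: no IO, no mutation -- returns a fresh context
    return replace(ctx, balance=ctx.balance + amount)


def apply_fee(ctx: Ctx, fee: int) -> Ctx:
    return replace(ctx, balance=ctx.balance - fee)


# IO happens only at the edge: load state, thread the context through
# pure functions, then persist/print the result.
ctx = Ctx(user="alice", balance=100)   # pretend this came from the DB
ctx = apply_fee(apply_deposit(ctx, 50), 5)
print(ctx.balance)  # 145
```

Since each step is a pure function of its inputs, every transition is trivially testable without mocks.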
A coder after my own heart. State machines are the bane of my existence.
There is no one-size-fits-all architecture. Microservices are fine, but probably not for you.
It’s really not, you’ll be thankful you have it once the system grows too big.
5-minute build-and-test times vs. 1-hour build times.
I know that can be achieved by setting a monolith up to be more segregated in design, but my experience so far is that that rarely happens.
Microservice architecture forces the segregation, which helps keep me sane (:
Exactly! Monoliths can work in theory but, in practice, end up becoming bloated messes since it’s just easier to do so.
And in practice microservices become a fragile mess, and new products take longer to develop due to less code sharing and higher complexity.
Ofc not always the case, just like large monoliths can exist without being a mess.
I somewhat agree, but I find that the added complexity is segmented: you shouldn’t need to care about anything but the contracts themselves when working within a microservice.
That means less code to take into account, less spaghetti and an easier time with local testing.
Micro services also have a ton of advantages at the infrastructure level.
Imo if you’re doing it right, your monolith is also broken up into chunks that are segmented with clearly defined APIs and well tested (APIs in this context being whatever your public functions/methods/top-level objects are). With clean internal APIs and properly segmented code it should be easy to read and do what you need.
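For example, a monolith "chunk" with a clearly defined internal API might look like this (hypothetical billing module): the public function is the contract, and underscore-prefixed names are implementation details no other module should touch.

```python
# billing.py -- hypothetical monolith module with a clean internal API.
# Only quote_total() is the contract; everything prefixed with _ is an
# implementation detail other modules shouldn't reach into.

_TAX_RATE = 0.2  # internal detail, not part of the API


def _apply_tax(amount: float) -> float:
    return amount * (1 + _TAX_RATE)


def quote_total(items: list[float]) -> float:
    """Public API: code outside this module calls only this."""
    return round(_apply_tax(sum(items)), 2)


print(quote_total([10.0, 5.0]))  # 18.0
```

The convention is social rather than enforced, which is exactly the point being made: a monolith *can* be this clean, it just takes discipline.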
I don’t know if I agree with the infra level. What makes you say it has advantages there?
The two biggest advantages of microservices in my mind are that you can use different tools/languages for different jobs, and that they make it easier for multiple teams to work in parallel. The two biggest disadvantages in my mind are that you lose code sharing, and that services become more siloed to different teams, which can make it more difficult to roll out changes that need multiple services updated.
There is also the messaging problem with microservices: message passing goes through the network rather than in memory (e.g. calling the user_service object vs. the user_service microservice).
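To make that contrast concrete, here’s a hedged sketch (names and endpoint made up): the in-process path is a plain method call, while the network path adds serialization, a round trip, and whole new failure modes for the same logical operation.

```python
import json

# In-process: a plain method call -- no serialization, and it can't
# partially fail the way a network hop can.
class UserServiceObj:
    def get_email(self, user_id: int) -> str:
        return f"user{user_id}@example.com"


email = UserServiceObj().get_email(42)


# Over the network: the same operation now involves serialization, a
# socket round trip, and handling for timeouts, retries, and 5xx errors.
def call_user_service(user_id: int, timeout_s: float = 2.0) -> str:
    payload = json.dumps({"user_id": user_id})  # serialize the request
    # resp = http_post("http://user-service/get_email", payload, timeout_s)
    # ...plus timeout, retry, and error handling...
    raise NotImplementedError("network path sketched, not implemented")
```

The network version isn’t just slower; it forces the caller to reason about partial failure, which the in-memory call never had to.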
One other big disadvantage of a monolith I can think of is build time, and that developer tools can struggle with them. A lot more files/objects to keep track of, and it can often make for an annoying development flow.
My preference is to monolith most things and only split off something into a micro service if you really get a big benefit from another tool or language for a specific task.
Microservices are a lot easier to scale out since they behave independently from each other; you can have different levels of replication and concurrency based on the traffic that each part of your system receives.
Something that I think is pretty huge is that, done right, you end up with a bunch of smaller databases, meaning you can save a lot of money by having different levels of security and replication depending on how sensitive the data is.
This last part also helps with data residency issues, which is becoming a pretty big deal for the EU.
Something to consider is that a monolith can have different entry points and a focused area of work. For example, my web application monolith can also have email workers and background job processors, all with different container specs and scaling, but sharing a code base.
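A rough sketch of that pattern (role names are made up): every container runs the same image, and only the role argument differs, so web, email, and job workers scale independently while sharing all the code.

```python
# Hypothetical single-codebase app with multiple entry points. Each
# container runs the same image with a different role, so each role gets
# its own replica count and CPU/memory limits while sharing the code.
def run_web() -> str:
    return "serving HTTP"


def run_email_worker() -> str:
    return "draining email queue"


def run_background_jobs() -> str:
    return "processing jobs"


ENTRY_POINTS = {
    "web": run_web,
    "email": run_email_worker,
    "jobs": run_background_jobs,
}


def main(role: str) -> str:
    # e.g. one container spec runs main("web"), another runs main("jobs")
    # with different scaling settings
    return ENTRY_POINTS[role]()


print(main("web"))
```

You get independent scaling per workload without giving up a shared codebase or a shared database.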
And coming from a background where I work heavily with Postgres, a bunch of smaller segregated databases sounds like a nightmare data-integrity-wise. Although I’m sure it can be done cleanly, there are big advantages to having all your tables in one database.
The main problem with microservice architecture is around orchestration. People tend to downplay the complexity involved in making sure all the services are running and talking to each other. On top of that, you have a lot of overhead in having to make endpoints and client calls along with all the security concerns where it would just be a simple function call otherwise. Finally, services often end up talking to the same database, and then you just end up with your shared state in the db which largely defeats the point.
This approach has some benefits to it. You can write different services in different languages. Different teams can be responsible for maintaining each service. The scope of the code can be kept contained reducing mental overhead. However, that has to be weighed against the downsides as well. At the end of the day, whether this is the right architecture really depends on the problem being solved, and the team solving it.
I’ve worked on projects where microservices resulted in a complete disaster and that ended up being rewritten as monoliths, and ones where splitting things up worked fairly well.
What I’ve found works best is having services that encapsulate some particular functionality that’s context free. For example, a service that can generate PDFs for reports that can be reused by a bunch of apps that can send it some Markdown and get a PDF back. Having a service bus of such services gives you a bunch of reusable components, and since they don’t have any business logic in them, you don’t have to touch them often. However, any code that deals with a particular business workflow is much better to keep all in one place.
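A hedged sketch of what calling such a context-free service might look like (the endpoint and payload shape are assumptions, not a real API): the business logic stays in the calling app, and only rendering is delegated.

```python
import json

# Hypothetical client for a context-free PDF service: it knows nothing
# about any business workflow, it just turns Markdown into a PDF.
def request_pdf(markdown: str) -> bytes:
    body = json.dumps({"markdown": markdown})
    # resp = http_post("http://pdf-service/render", body)
    # return resp.content  # raw PDF bytes
    raise NotImplementedError("sketch only")


# The business logic lives in the calling app, not in the service:
def monthly_report(rows: list[tuple[str, int]]) -> str:
    lines = ["# Monthly report", ""]
    lines += [f"- {name}: {count}" for name, count in rows]
    return "\n".join(lines)


md = monthly_report([("signups", 120), ("churn", 7)])
# pdf = request_pdf(md)  # any app can reuse the same rendering service
```

Because the service holds no business logic and no state, it rarely changes, which is exactly what makes it safe to share across many apps.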
And the dev experience IMO is much better. I don’t have to deploy a huge ass service to test a tiny feature.
If you have to deploy your service to test features instead of being able to test them locally while developing them then you have a really poor dev workflow.
If you don’t have a staging environment for doing integration testing of your feature in a non dev environment, you have a poor dev workflow. I never said I don’t test locally. And even then, I don’t want to run a huge monolith in my local environment if I don’t work with 90% of it.
Nowhere did I say you shouldn’t have a staging environment. However, if you can develop and test changes locally then by the time it goes to staging, the code should already be in good shape most of the time. Staging is like your guardrail, it shouldn’t be part of your main dev loop.
Meanwhile, not sure what the issue is with running a monolith locally. The reality is that even large applications aren’t actually that big in absolute terms. Having to run a bunch of services locally to test things end to end is certainly not any easier either.
If you need to manually run a bunch of services locally, then you’re doing it wrong.
Do tell how you do end to end testing without running services locally.