Looking back at last year's achievements, I would like to share my introduction to serverless.
I'll start with the helicopter view: we had an MVP project implemented as a dozen .NET Web API services (.NET Core 3.1). The services were supposed to be resilient and scalable, and the initial idea was to run them on a Kubernetes cluster. Sounds nice and simple? In theory, yes... but we quickly found out that the system heavily depends on the platform, in this case Kubernetes. That means every bit of noise in the cluster has a direct impact on the services running on it. Here are a couple of things to consider the next time you choose an orchestration strategy:
First, you will still have to manage the Kubernetes cluster even though it is "managed". There is no set-and-forget shortcut here; Kubernetes is a complex platform. Even if you only spend half an hour a day monitoring your cluster and checking the alerts, it is still a considerable effort.
Secondly, the cloud provider, Azure in our case, forces you to upgrade managed Kubernetes from time to time. Yes, you will be notified a couple of months in advance, but honestly, you can't prepare yourself for the unknown. If you have ever upgraded a production Kubernetes cluster, you know this feeling of uncertainty: you can never be sure which of your services will still run after the upgrade. Add the cluster's internal services such as DNS and networking, and it can easily become a nightmare.
Thirdly, service meshes are tricky. We introduced Istio as a service mesh for the cluster to improve observability and security and to monitor internal communication between our services. It is definitely a very powerful tool; however, we faced several problems with customization. The reason is that most of these tools are still shipped as beta, which can mean bugs plus a lack of documentation and community support. Implementing the standard use cases is quite straightforward, but once you implement custom things it gets worse. Introducing such an infrastructure tool into your system only amplifies the two points mentioned above.
This situation only reinforced our skepticism about the choice we had made with Kubernetes. For a traffic-intensive system, high availability is a major quality attribute, and Kubernetes is such a complex and modular system that it takes quite an effort to tune it for specific needs. That effort, those resources, and that learning curve were things we didn't have.
We decided to go for Azure Functions and serverless as the target architecture for several reasons:
- We don't actually need to monitor and maintain the platform (Kubernetes in our case).
- We can easily scale each component up separately if needed.
- With serverless we do not share the same platform. Yes, it might all run on the same Azure servers internally, but it is still FaaS (Function as a Service) rather than PaaS.
- Deployment is simple compared to the container approach: just build and copy files. This allows us to roll out services in minutes, regardless of region.
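As a rough sketch of that last point: with the Azure Functions Core Tools, a rollout can be as simple as the two commands below (the Function App name is a placeholder):

```shell
# Build the project in Release configuration.
dotnet build --configuration Release

# Publish the build output straight to the Function App.
# "my-function-app" is a placeholder for your own Function App name.
func azure functionapp publish my-function-app
```

No image build, no registry push, no manifest changes.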
Getting started with Azure Functions was easy, though. We introduced ourselves to the new dotnet-isolated worker model, which was super easy to set up. The isolated mode lets you enjoy recent .NET features that were lacking in previous versions of Azure Functions, such as dependency injection. For more about dotnet-isolated, refer to this guide.
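For illustration, a minimal `Program.cs` for the isolated worker model looks roughly like this; the repository interface and implementation are made-up names for the example:

```csharp
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

var host = new HostBuilder()
    // Wires up the isolated worker runtime and its bindings.
    .ConfigureFunctionsWorkerDefaults()
    .ConfigureServices(services =>
    {
        // Regular .NET dependency injection, just like in ASP.NET Core.
        // IOrderRepository/OrderRepository are hypothetical examples.
        services.AddSingleton<IOrderRepository, OrderRepository>();
        services.AddHttpClient();
    })
    .Build();

host.Run();
```

Registered services can then be constructor-injected into your function classes.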
Here are several things we struggled with:
Different mindset - not an obvious point at first glance. Serverless requires a different way of thinking when it comes to structuring your logic, especially for long-running methods. There are a couple of limitations you should be aware of when migrating to Azure Functions: the execution duration limit and the connection count limit. The best example is a worker that gets data from a database, processes it, and saves it back. The no-brainer solution might be to wrap your code in one console app and declare a hosted service that fetches the data, processes it, and saves it. In a serverless approach, however, depending on the load, this will not work due to the duration limitation, and possibly the connection limitation as well. You will have to split it into several functions, perhaps with a message queue in the middle: one function gets data from the database and sends it to a processing queue, and a second function picks up the messages, processes them, and saves the results.
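Sketched in dotnet-isolated Azure Functions, that split might look like the following; the queue name, schedule, and repository calls are illustrative, and error handling is omitted:

```csharp
using Microsoft.Azure.Functions.Worker;

public class OrderProcessing
{
    // Function 1: runs on a schedule, reads pending work from the database
    // and fans it out to a storage queue instead of processing everything
    // in one long-running loop.
    [Function("EnqueuePendingOrders")]
    [QueueOutput("orders-to-process")]
    public string[] EnqueuePendingOrders(
        [TimerTrigger("0 */5 * * * *")] TimerInfo timer)
    {
        // Hypothetical repository call; each returned id becomes one queue message.
        return OrderRepository.GetPendingOrderIds();
    }

    // Function 2: triggered once per message, so each invocation stays
    // well within the execution duration limit.
    [Function("ProcessOrder")]
    public void ProcessOrder(
        [QueueTrigger("orders-to-process")] string orderId)
    {
        // Process a single item and save the result back to the database.
        OrderRepository.ProcessAndSave(orderId);
    }
}
```

The queue also gives you retries and scale-out for free: the platform spins up more instances of the second function when the queue grows.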
Bigger DevOps effort - considering the above, you will probably split your application into several smaller functions, and a CI/CD pipeline should be created for each of them. The more functions you create, the more DevOps effort you need.
Shared logic - suppose you have 20 different functions structured as 20 different projects, following the microservices approach. All of them use the same code snippets, which you naturally don't want to replicate 20 times. So you go for a shared NuGet package, and everything is clean and nice. Besides the obvious advantage, however, this has its own downsides: every time you improve or fix the shared code, you have to update and redeploy all 20 projects.
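Concretely, each of the 20 function projects ends up referencing the shared package like this (package name and version are placeholders), and every fix means bumping this version everywhere and redeploying:

```xml
<!-- In each of the 20 function .csproj files -->
<ItemGroup>
  <PackageReference Include="MyCompany.Shared" Version="1.2.3" />
</ItemGroup>
```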
Learning curve - you will need some time to bring your functions to a production-ready state, polishing middlewares, logging, and metrics.
As an outcome, I would say I'm rather happy with the choice. However, do not consider serverless (or microservices) a "silver bullet" for everything. We found relevant arguments for the migration, and so should you.
Debugging microservices is always tricky and time-consuming, especially without proper code-level logging in place.
Meet Konso. It was developed as a solution for this challenge and can save your team up to 30% of development effort. 🎯🎉
Its key features are:
🔥 Centralized logging for your microservices
🔥 Tracing with metrics and value tracking events
🔥 Ad-Hoc events exploration
🔥 Saving your queries
🔥 Creating alerts and getting notified
You can start collecting your project's logs in 5 minutes. 🕐💪
To learn more about the logging tool, book a free demo.
To get started for free, create your free account now.