Working with Chalice
I started at Circles Consulting in August and have mostly been working on our internal systems. Today I want to talk about our backend framework, Chalice, and how it differs from the more popular Node.js.
A lot of software developed today is web-based, meaning that you access it with your web browser instead of installing it on your computer. This creates a separation between the code that runs in your browser, the frontend, and the code that runs on the server that sends you the data, the backend.
There are of course a lot of backend options available, and Circles had opted for a less common one for their internal system. Chalice is a Python framework for building web backends that are deployed to Amazon Web Services. While a Chalice backend has many similarities to Node.js backend frameworks such as Express, there are differences as well, and I want to address some of them here.
A central problem in backend development is that things have to happen in parallel: if multiple users access the website at the same time, the backend must be implemented so that one user does not have to wait for another user's request to complete before sending their own. Node.js accomplishes this with asynchronicity: when a request has to do something time-consuming, like accessing a database, the operation is queued and the process continues handling other requests, picking up the result once it is available. Node.js also uses a thread pool: JavaScript itself runs on a single main thread, while heavy operations like input/output are delegated to worker threads, freeing the main thread to handle incoming requests.
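Node's event-loop model has a close analogue in Python's asyncio (which Chalice itself does not rely on, but which makes the idea concrete). In this stdlib-only sketch, two simulated slow operations overlap instead of running back to back:

```python
import asyncio
import time

async def fetch(label: str, delay: float) -> str:
    # Stand-in for a slow operation such as a database query.
    await asyncio.sleep(delay)
    return label

async def main() -> list[str]:
    # Both "requests" are queued on the event loop and overlap,
    # so the total wall time is roughly 0.2 s, not 0.4 s.
    start = time.perf_counter()
    results = await asyncio.gather(fetch('a', 0.2), fetch('b', 0.2))
    assert time.perf_counter() - start < 0.35  # concurrent, not sequential
    return results

print(asyncio.run(main()))  # → ['a', 'b']
```

While one operation is waiting, the event loop is free to serve the other, which is exactly how a single-threaded Node.js server keeps many users from queuing behind each other.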
Most programmers know what AWS Lambda is, but for all other readers: Lambda is a serverless service for hosting runnable functions. This means you do not have to manage a server, or even a container: you just provide the source code, and the platform runs it when certain conditions are met. In the case of Chalice, that means whenever a request is made to the website's backend.
This way you do not have to take care of parallelism yourself, since the Lambda platform handles it for you: each request triggers a separate invocation of your function, isolated from the other invocations running at the same time. This also makes the solution auto-scaling, since Lambda allocates memory and processing power for each invocation separately.
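The practical consequence is that a handler should be written as a self-contained function: all of its state is local to one call, so concurrent invocations cannot interfere with each other. This stdlib sketch (the handler and events are invented for illustration) mimics three invocations running in parallel:

```python
from concurrent.futures import ThreadPoolExecutor

def handler(event: dict) -> dict:
    # A Lambda-style handler: all state is local to this call,
    # so concurrent invocations cannot step on each other.
    return {'total': sum(event['values'])}

# Simulate three concurrent invocations, as Lambda would run them.
events = [{'values': [1, 2]}, {'values': [3, 4]}, {'values': [5]}]
with ThreadPoolExecutor() as pool:
    results = list(pool.map(handler, events))
print(results)  # → [{'total': 3}, {'total': 7}, {'total': 5}]
```

On Lambda, each of these calls would run in its own isolated execution environment with its own memory and CPU allocation, which is why no locking or request queue appears anywhere in the code.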
There are downsides as well: Lambda is billed per invocation and by execution time, so more traffic means higher costs. Programming errors can also lead to functions invoking new Lambda functions recursively, driving costs up quickly. But all in all, I think Chalice is a viable option as a backend framework, if Python is the tool of your choice.
Written by Kalle Laukkanen, Senior Consultant