
Serverless architectures: where to start? 

Published on 28 May 2025

Serverless is a cloud development model that allows developers to create and run applications without having to manage the underlying infrastructure (servers, OS, etc.). The cloud provider takes care of everything, including scaling, server management and high availability. But where do you start? Which cloud platform should you choose? And for what uses? Find out about best practice, the pitfalls to avoid, and follow our step-by-step roadmap...


Serverless is now a major trend in cloud computing. More and more organisations are adopting this model to gain greater agility and reduce their infrastructure costs.

The serverless market will be worth around $25 billion by 2025, driven by demand for agility and cost reduction. For IT Departments, this represents a strategic lever, but also a challenge to master.

According to a Datadog report from 2023, the majority of companies using AWS or Google Cloud already have at least one serverless deployment, and almost half of Azure users are doing the same.

But what exactly is serverless, where does it come from and what is it changing for IT professionals?

What is serverless?

Serverless is a cloud-native development model that enables applications to be created and run without the need to manage physical or virtual servers.

Contrary to what the term suggests, we're not talking about a total absence of servers, but rather a complete abstraction of their management. The cloud provider takes care of provisioning (preparing and configuring the necessary hardware and software resources), maintenance, scaling and availability of the underlying infrastructure.

The developer concentrates exclusively on the code and business logic, which is often packaged in the form of functions or containers. The application runs on demand, in response to events, and billing is based on actual usage, which optimises costs, particularly for irregular or unpredictable workloads.

Difference from standard cloud computing

In a standard cloud computing model such as IaaS (Infrastructure-as-a-Service), users buy units of capacity in advance. They pay a public cloud provider for permanently active server components to run their applications. It is their responsibility to increase server capacity during periods of high demand and to reduce it when it is no longer needed. The cloud infrastructure remains active even when the application is not in use, which can generate unnecessary costs.

Serverless, on the other hand, is an event-driven execution model. Applications respond to demand and adapt automatically as required. When a serverless function is inactive, it costs nothing. The cloud provider manages the infrastructure, including scaling and maintenance. The main objective is to allow developers to concentrate on the code of their applications while the provider manages the underlying infrastructure.

Serverless applications are deployed in containers that start up automatically on request.

The different types of serverless computing

Serverless mainly falls into two categories:

Function as a Service (FaaS) is often considered the heart of serverless: application functions are triggered by events and executed ephemerally and statelessly in temporary containers. The server-side code remains the responsibility of the developer, but its execution is managed by the cloud platform in containers that retain no data between two calls. Each invocation therefore starts in a fresh environment, which calls for short tasks that do not depend on data held in persistent local memory.

Backend-as-a-Service (BaaS) refers to ready-to-use backend services provided by a third party: managed databases, authentication, file storage, real-time messaging, etc. The developer consumes an API without having to deploy these components themselves.
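The statelessness of FaaS can be illustrated with a minimal sketch (the two-argument handler signature follows AWS Lambda's Python convention; the event fields are illustrative). Module-level state may survive between "warm" invocations of the same container, but is wiped whenever a new container is cold-started, so business logic must never rely on it:

```python
# Minimal FaaS-style handler sketch (AWS Lambda Python signature assumed).
# Any module-level variable may survive between "warm" invocations of the
# same container, but disappears on every cold start, so durable state must
# live in an external service (database, queue, object store).

warm_invocations = 0  # lives only as long as this one container

def handler(event, context):
    global warm_invocations
    warm_invocations += 1  # useful for caching, never for business state
    return {
        "statusCode": 200,
        "body": f"processed event {event.get('id', 'unknown')}",
        "warm_invocations_in_this_container": warm_invocations,
    }
```

Two calls that land on the same warm container will see the counter increase; two calls routed to different containers will each see it start from zero, which is exactly why the model demands stateless design.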

What are the origins of serverless?

The concept of serverless is the result of a gradual evolution in the world of cloud computing. Long before the term became popular, the idea of relieving developers of server management already existed in other forms.

In the late 2000s, Platform as a Service (PaaS) offerings such as Google App Engine and Heroku offered a foretaste of serverless, enabling applications to be deployed without managing the infrastructure. However, the term "serverless" itself did not appear until later, to designate an even more granular and event-driven approach to cloud computing.

The turning point came in 2014, when Amazon Web Services launched AWS Lambda. For many, this is the concrete starting point for serverless as we understand it today. AWS Lambda introduced a model for executing functions on demand, triggered by events, with per-invocation billing and completely transparent autoscaling. No need to start or stop VMs: the code runs in an ephemeral container managed by AWS and scales automatically.

Since then, the serverless ecosystem has gone from strength to strength. New services have emerged to complement the execution of functions: serverless databases (AWS DynamoDB in on-demand mode, Azure Cosmos DB serverless), event-stream processing (AWS Kinesis, Google Cloud Pub/Sub + Functions) and serverless orchestrators (AWS Step Functions, Azure Logic Apps).

Open source solutions have also emerged (OpenFaaS, Apache OpenWhisk used by IBM Cloud) to enable serverless outside the large clouds.

What are the advantages of serverless?

Serverless has been so successful because it offers numerous advantages to development teams and businesses. Here are the main benefits:

- No server management, more focus on the application. The first, obvious advantage is that serverless frees developers from system administration tasks. No need to configure or maintain machines, OS patches or middleware: the cloud takes care of it. As a result, the team can concentrate on application logic, functionality and user experience, accelerating development.

- Automatic scalability and elasticity. Serverless functions and services automatically adapt to the load. Whether there are 10 requests a day or 10 million, the platform dynamically allocates the instances needed to process the requests in parallel. This instant elasticity is a major advantage when it comes to absorbing unexpected traffic peaks without manual intervention. For example, an e-commerce website hosted via an API Gateway + Lambda functions will be able to absorb a spike in visits during Black Friday without the developers having to plan for overcapacity: the functions multiply on demand. Conversely, in off-peak periods, no resources will be running unnecessarily. Sizing is always "just right", which improves service availability without any particular architectural effort.

- Pay-as-you-go, cost optimisation. The business model for serverless is generally pay-per-use: billing is on demand, according to actual consumption. Unlike a traditional server that is billed by the hour (even if it does nothing), a serverless function only costs something when it is executed. Resources are never idle; they are only activated on demand. Customers pay only for the resources actually used (CPU time, memory, number of calls), not for idle time. This model can generate substantial savings, particularly for intermittent or unpredictable workloads. For example, an application whose activity is highly variable over time costs much less with serverless than with a permanently allocated server.

- Performance and speed of deployment. Serverless platforms take advantage of cloud capabilities to offer very short response times. A function starts almost instantaneously (a few milliseconds for a 'hot' container), and even when a new instance is cold-started, the delay remains low in most environments (particularly with interpreted languages). What's more, since many components are already provided by the supplier (load balancing, CDN, authentication, etc. via managed services), the time required to develop and deploy a new application is reduced. You can go from an idea to a working prototype in a matter of hours. For example, a developer can code and deploy a complete REST API in just a few minutes using AWS Lambda and API Gateway, whereas configuring a traditional server stack would have taken days. The rapid provisioning of serverless (no delay in ordering a server or starting up a VM) makes for very short development cycles.

- Good DevOps practices by design. Serverless also encourages good DevOps practices: infrastructure as code (IaC), frequent deployments, and close collaboration between devs and ops. Developers integrate the infrastructure (albeit managed by the cloud) directly into their development cycle via frameworks (Serverless Framework, AWS SAM, Terraform, etc.). Because the operational barrier to entry is lower, a stronger DevOps culture and a more democratic path to production often emerge. For example, a developer can easily keep the configuration of a function and its triggers in the same code repository as the application, promoting autonomy and responsibility over the entire lifecycle.

- High availability and fault tolerance. By design, serverless services are distributed over the cloud provider's infrastructure, which ensures redundancy. A deployed function is generally replicated across several zones to be tolerant of hardware failures. Providers often guarantee high availability (for example, AWS Lambda is deployed across several availability zones by default). The developer needs to make no extra effort to benefit from this robustness. What's more, the stateless nature facilitates failover in the event of a problem: a function instance that fails does not affect other calls, and orchestration systems can automatically restart failed executions. Coupled with other managed services (replicated database, durable object storage, etc.), the result is an architecture that is inherently resilient to regional or hardware failures, without having to configure failover manually.

- Efficiency and eco-design. Using only the resources that are strictly necessary has both ecological and technical benefits. Technically, it eliminates idle resources: the infrastructure is shared as much as possible between customers and allocated on the fly, which improves the overall efficiency of data centres. From a software eco-design perspective, serverless avoids continuously running servers that are rarely used, thereby helping to reduce energy wastage. The major providers optimise their facilities for energy consumption, so delegating computing to these platforms can reduce the carbon footprint of your application. This is an increasingly popular argument: a well-designed serverless application minimises CPU and memory use, which, multiplied at scale, has a lower environmental impact.
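The pay-per-use billing described above can be made concrete with a back-of-the-envelope calculation. The rates below are illustrative, roughly modelled on AWS Lambda's published request and GB-second prices; check the provider's current price list before relying on them:

```python
# Back-of-the-envelope pay-per-use cost, using indicative AWS Lambda-style
# rates (illustrative values - verify against the provider's price list).
PRICE_PER_MILLION_REQUESTS = 0.20    # USD, illustrative
PRICE_PER_GB_SECOND = 0.0000166667   # USD, illustrative

def monthly_cost(invocations, avg_duration_s, memory_gb):
    """Cost of a function billed purely on actual usage: zero when idle."""
    request_cost = invocations / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    compute_cost = invocations * avg_duration_s * memory_gb * PRICE_PER_GB_SECOND
    return round(request_cost + compute_cost, 2)

# 3 million invocations of 200 ms at 512 MB comes to a few dollars a month,
# and an idle month costs exactly nothing.
print(monthly_cost(3_000_000, 0.2, 0.5))  # -> 5.6
```

The same workload on a permanently allocated server would be billed around the clock, which is why irregular traffic is the sweet spot for this model.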

Naturally, these advantages come with trade-offs and limits (detailed below in the pitfalls to avoid). But in many scenarios, the gains in productivity, cost and flexibility make serverless an attractive choice compared with traditional architectures on dedicated servers or even managed containers.

Where to start?

Given the promise of serverless, it may be tempting to adopt it straight away. However, it's important to get off to a well-considered start. Where do you start when you want to embark on serverless? The first step is to gain a clear understanding of the concept and its use cases. It is advisable to read up on how serverless functions and their event models work, and to look at the examples provided by the major providers (AWS, Azure, GCP) to understand how this architecture differs from a traditional approach. In other words, you need to adopt the serverless mindset: think in events, think stateless. There is no longer a permanent server on which to store sessions or temporary files, for example.

Next, you need to identify a simple, appropriate initial use case to get the hang of it. Rather than switching an entire critical application directly to serverless, it makes sense to choose a small-scale pilot project. For example, this could be the development of a small internal API, a scheduled automation script or an isolated data-processing task. The aim is to familiarise yourself with the development-deployment cycle and its specific features (log management, monitoring, permissions management) without major risk. Ideally, this first project should have characteristics that suit serverless: a variable or infrequent load (to take advantage of pay-per-use), no need to keep state in memory between two executions, and no millisecond-critical performance requirements (to tolerate possible cold starts). A common example is a small PDF report generation service triggered on demand, or a function that runs every night to clean a database: well-defined tasks adapted to the serverless model.

Finally, where to start also involves preparing your development environment. This generally means opening an account with a cloud provider (or reusing your company's account), installing the appropriate command-line tools (e.g. AWS CLI, Azure CLI, Google Cloud SDK) and possibly a deployment framework. Most platforms offer free or very low-cost tiers for getting started, so you can experiment without budget constraints. For example, AWS Lambda offers one million free invocations per month, which is more than enough for initial testing. We therefore recommend taking advantage of these free tiers. To sum up: start with training, choose a small project, acquire the necessary tools and accounts, then move on to concrete action.

Getting started with serverless: a multi-stage roadmap

Once the foundations have been laid and a target use case identified, it's time to get down to business. To help you get started, here's a step-by-step roadmap to get you off to an effective start with serverless. This step-by-step guide will help you to develop your skills and avoid common pitfalls:

Step 1: Choosing your supplier and environment

Start by selecting the serverless platform that makes the most sense for you. The choice often depends on your context: if your company already has a strong presence on AWS, AWS Lambda will be a natural choice; if you work in a Microsoft environment, Azure Functions fits in well; for specific needs or an attachment to Google Cloud, Google Cloud Functions or Cloud Run will be appropriate. Each major cloud has its strengths, but to get you started, they all offer rich documentation and ready-to-use examples. Once you've made your choice, make sure you install the appropriate development tools (CLI, SDK) and configure your access credentials (API keys, etc.). Prepare your local environment with the language you are going to use (Python, Node.js, C#, etc.); choose a supported language that you already know, so that you don't have to learn serverless and a new language at the same time.

Step 2: Deploying an initial "Hello World" function

Start with a minimal example to validate your development chain and understand the complete cycle. For example, create a small function that simply returns a "Hello World" message or today's date. Use the provider's web console or command-line tools to deploy this function. On AWS Lambda, this can be done directly via the console in just a few clicks, or via the aws lambda create-function command. On Azure, you can use VS Code with the Azure Functions extension to initiate a local function project and publish it. The purpose of this step is to check that you know how to package your code, deploy it and run it in the cloud. Test the invocation of your function: if it's an HTTP function, call its public URL; if it's event-triggered, use a test mechanism (AWS Lambda offers a test tool in the console). Once "Hello World" is working, you've reached an important psychological milestone: your code is running "somewhere in the cloud" without you having configured a server!
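A complete "Hello World" for this step can be as small as the sketch below (using the AWS Lambda Python handler signature; the same code deploys via the console or aws lambda create-function):

```python
import json
from datetime import date

# Minimal "Hello World" handler (AWS Lambda Python signature assumed).
# An HTTP-style response body is returned as a JSON string.
def lambda_handler(event, context):
    name = (event or {}).get("name", "World")
    return {
        "statusCode": 200,
        "body": json.dumps({
            "message": f"Hello {name}",
            "date": date.today().isoformat(),
        }),
    }
```

Invoking it with the test event {"name": "Serverless"} should return a 200 status and the message "Hello Serverless", which is enough to confirm that packaging, deployment and invocation all work end to end.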

Step 3: Add a trigger and integrate a service

Enrich your example by adding a real trigger event and, possibly, integration with a cloud service. For example, connect your function to an API Gateway to trigger it with an HTTP REST request (the typical case for creating a serverless API). Or configure a time trigger (a cron) if your function needs to run on a schedule: on Azure Functions, this is an easy-to-configure Timer Trigger; on AWS, you use EventBridge (formerly CloudWatch Events) to schedule execution. You could also link your function to a storage event: for example, an image dropped into an S3 bucket automatically triggers a Lambda image-processing function. Take advantage of this step to call a managed service, such as a database or an external API, from the function, so you understand how to manage outgoing calls and permissions. For example, your function could write a record to DynamoDB (AWS's NoSQL database) or Firestore on GCP, or send a message to a queue (Azure Service Bus, AWS SQS). You will learn how to configure security roles (IAM on AWS, the Identity/Access Management equivalents on GCP/Azure) to give your function the necessary rights, without opening up more than necessary. This step consolidates your control of the interactions between the function and the ecosystem.
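The S3-to-DynamoDB scenario above can be sketched as follows. The event parsing follows the standard S3 notification shape; the table name "uploads" is hypothetical, and the boto3 write is shown as a comment so the logic can be exercised locally without an AWS account:

```python
# Sketch of an S3-triggered function that records each uploaded file in a
# DynamoDB table (the table name "uploads" is a made-up example). The actual
# boto3 call is commented out so the parsing logic runs locally.
def handler(event, context):
    items = []
    for record in event.get("Records", []):
        s3 = record["s3"]  # standard S3 event notification structure
        items.append({
            "bucket": s3["bucket"]["name"],
            "key": s3["object"]["key"],
            "size": s3["object"].get("size", 0),
        })
    # In the cloud, persist each item with boto3 (the function's IAM role
    # must allow dynamodb:PutItem on the table):
    #   import boto3
    #   table = boto3.resource("dynamodb").Table("uploads")
    #   for item in items:
    #       table.put_item(Item=item)
    return {"processed": len(items), "items": items}
```

Separating the event parsing from the cloud call like this also makes the permissions lesson visible: the only extra right the function needs is the one the commented line uses.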

Step 4: Structure and deploy using a framework

As soon as you go beyond a single simple function, it's good practice to automate deployment using descriptive code (Infrastructure as Code). Use a tool like Serverless Framework, AWS SAM (Serverless Application Model), Terraform, or your cloud's native tool (Azure ARM/Bicep, Google Cloud Deployment Manager). Take your project and transform it so that the configuration (triggers, linked resources) is described in a file (YAML, JSON or HCL depending on the tool) and the entire deployment can be done with a single command. For example, with Serverless Framework, you write a serverless.yml file that defines the function, its runtime and the event that triggers it; the serverless deploy command then creates everything automatically. This step may seem like an extra effort, but it gets you used to managing your code and configuration in a versioned way, and will make it much easier to evolve towards several functions and environments (staging, production). This is also the time to organise your code into modules and manage dependencies (for example, use a requirements.txt in Python or a package.json in Node.js to embed the libraries the function needs).
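A serverless.yml for the Serverless Framework might look like the illustrative sketch below (service name, region and handler path are examples; adapt them to your project):

```yaml
# serverless.yml - illustrative sketch; names and region are examples
service: hello-api

provider:
  name: aws
  runtime: python3.12
  region: eu-west-1

functions:
  hello:
    handler: handler.lambda_handler   # handler.py, function lambda_handler
    events:
      - httpApi:
          path: /hello
          method: get
```

Running serverless deploy against this file provisions the function, the HTTP route and the associated plumbing in one command, and the file itself lives in version control alongside the code.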

Step 5: Test, monitor and optimise

Once your function (or small set of functions) has been deployed via a framework, make sure you put suitable test and monitoring tools in place. Test locally if possible: most frameworks offer emulators or local test tools to simulate execution (for example, the serverless invoke local command lets you test a Lambda locally). Set up unit tests on the logic of your functions, and integration tests that actually invoke the function deployed in the cloud to check its behaviour. On the monitoring side, familiarise yourself with the provider's tools: CloudWatch Logs for AWS Lambda, Application Insights for Azure Functions, etc. Check that your function writes useful logs (for example via console.log or print) and that you can consult these logs in the event of a problem. Also keep an eye on the basic metrics: number of invocations, average duration, any errors or timeouts. This step teaches you how to debug in a serverless environment, which is a little different from traditional debugging on a server (you can't simply connect to a remote machine). Take the opportunity to optimise the configuration: for example, adjust the memory allocated to the function (more memory can speed up CPU-bound execution at the cost of a higher unit price; there is often an optimum point to be found). Watch for cold starts: if you notice a noticeable delay on the first invocation after a period of inactivity, consider solutions to mitigate it (keeping a function 'warm' with regular calls, or, on AWS Lambda, enabling Provisioned Concurrency to pre-allocate a permanently active instance if really necessary).
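Unit-testing a handler locally, as recommended above, needs no cloud at all: invoke the handler with a fabricated event. The handler below is a made-up example (a trivial VAT calculation) just to show the pattern:

```python
import unittest

# Hypothetical handler used only to demonstrate local unit testing:
# it validates its input and applies a 20% surcharge.
def handler(event, context):
    amount = event.get("amount", 0)
    if amount < 0:
        return {"statusCode": 400, "body": "amount must be non-negative"}
    return {"statusCode": 200, "body": str(amount * 1.2)}

class HandlerTest(unittest.TestCase):
    def test_nominal(self):
        # A fabricated event stands in for the real trigger payload.
        response = handler({"amount": 100}, None)
        self.assertEqual(response["statusCode"], 200)
        self.assertEqual(response["body"], "120.0")

    def test_rejects_negative(self):
        self.assertEqual(handler({"amount": -1}, None)["statusCode"], 400)

if __name__ == "__main__":
    unittest.main()
```

The same fabricated-event technique works for S3, queue or timer payloads: copy a sample event from the provider's documentation and feed it straight to the handler in your test suite.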

Step 6: Extending the architecture and industrialising

Once you have successfully completed an initial pilot project, you can progressively extend the use of serverless to other components of your system. Identify other functionalities or microservices in your application that could benefit from migration to serverless. For example, after a simple API, perhaps migrate image processing, or set up an asynchronous notification system via functions. Build a more complete architecture bit by bit, while ensuring overall consistency. At this stage, you should also industrialise your CI/CD pipelines for your functions: integrate deployment into your workflows (for example, automatically deploy the function to the test platform on each commit to the corresponding branch). Make sure a configuration management strategy (environment variables, for example) and a security strategy (encrypted secrets, key rotation) are in place for more advanced use in production. Finally, document best practice in your business context, train your colleagues and share feedback. The adoption of serverless may involve a cultural change (infrastructure managed by code, greater dependence on a supplier), so it is important to support the team through this transition.

By following this roadmap, you will gradually move from being a novice to a seasoned serverless practitioner. Each step consolidates your skills and ensures that you don't skip any stages, thereby reducing the risk of error when adopting this new approach.

Which serverless platform should you choose?

The choice of platform is a key issue when it comes to serverless. There are numerous serverless offerings, and the choice will depend on technical, strategic and sometimes personal criteria. The three major cloud providers - Amazon Web Services (AWS), Microsoft Azure and Google Cloud Platform (GCP) - largely dominate the landscape, with mature solutions integrated into their respective ecosystems. But we shouldn't forget the other players and specialist platforms, which may be suitable for certain uses.

Here's an overview of the main serverless platforms and the factors to consider:

  • AWS Lambda (AWS): the pioneer and most widely used offering on the market. AWS Lambda benefits from a very large ecosystem of integrated services (API Gateway to turn a function into a web API, S3 to trigger on file addition, DynamoDB Streams, CloudWatch Events, etc.). The community around AWS Lambda is massive, which means numerous examples, tutorials and third-party tools (Serverless Framework was born with AWS Lambda as its first target). AWS also offers complementary services such as AWS Step Functions for orchestrating multiple Lambdas in a workflow, or Amazon EventBridge for cross-application event management. If you're looking for versatility and maturity, Lambda is an excellent choice. On the limitations side, AWS Lambda imposes a maximum execution time (15 minutes per invocation) and has historically suffered from somewhat longer cold starts for certain runtimes (e.g. Java or .NET). But AWS has introduced optimisations (Provisioned Concurrency) and even the possibility of packaging functions as Docker images for greater flexibility.
  • Azure Functions (Microsoft Azure): very well integrated into the Microsoft ecosystem, Azure Functions is a natural choice for companies already using Azure or Microsoft tools (.NET, Visual Studio). It supports numerous languages (C#, JavaScript/TypeScript, Python, Java, PowerShell, etc.) and integrates easily with other Azure services (Azure Event Grid for events, Azure Cosmos DB for input/output, Azure DevOps for CI/CD, etc.). Azure Functions is unique in that it can be hosted not only in 100% serverless mode (on-demand consumption plan) but also in dedicated or premium plans that provide pre-allocated instances, which can reduce cold-start latency. For a .NET developer or a Microsoft 365 context, this is often the ideal platform. The development experience, particularly via Visual Studio Code, is very user-friendly for creating and deploying Functions. Azure has also focused on developer productivity: local debugging, testing with Azure Functions Core Tools, and so on.
  • Google Cloud Functions and Cloud Run (Google Cloud): Google offers two approaches. Cloud Functions is the direct equivalent of Lambda/Azure Functions: a managed, event-driven function. Cloud Run is a serverless service for containers: you deploy a Docker image (containing a small web server or application, for example), and Google runs it serverlessly (with autoscaling and pay-per-use too). Cloud Run has become very popular because it combines the ease of serverless with the flexibility of containers (unrestricted choice of language, ability to include binaries, etc.).
  • Other platforms: beyond the 'Big Three' of the cloud, there are other serverless offerings to consider. IBM Cloud Functions is based on Apache OpenWhisk and also offers multi-language FaaS. Oracle Cloud has a Functions service (based on Fn Project). Alibaba Cloud in China has Function Compute. Also worth noting is Cloudflare Workers, which takes an original approach by running code at the edge (edge computing), as close as possible to users; it excels for very low-latency requirements on distributed content. Cloudflare Workers uses a different model (isolation via V8 isolates) and primarily supports JavaScript/TypeScript. It is a good choice for generating web pages dynamically at the edge of the network, for example, or implementing globally distributed APIs. In a different vein, modern PaaS services such as Netlify Functions or Vercel Functions offer integrated serverless for web developers, coupled with the deployment of static sites.

So how do you choose?

The main criterion will often be existing knowledge and the ecosystem. If your team is already trained in AWS and makes extensive use of its services, chances are that AWS Lambda will suit you just fine (especially as many third-party tools target AWS first). Likewise, don't underestimate vendor lock-in: if proprietary lock-in worries you, be aware that migrating a serverless application from one provider to another is not trivial. It may be easier to remain consistent with your main cloud.

Each platform has its own specific features (time and memory limits, supported formats, configuration modes). It's best to study them in advance to see which one suits your needs. For example, AWS Lambda imposes a maximum of 15 minutes per execution, GCP Cloud Functions 9 minutes, Azure Functions 10 minutes (on the consumption plan); if you are planning longer processing times, Cloud Run or Azure Functions Premium would be more suitable. Another example concerns start-up performance: AWS functions on a Node.js or Python runtime start very quickly, whereas a Java function may take longer; Azure Functions offers a very efficient .NET runtime if you code in C#, because the platform is optimised for .NET.

Cost and billing model can also come into play: the big three are broadly similar in terms of pay-per-use (a few tens of cents per million invocations, plus CPU/memory time). However, there may be differences in the details: Google Cloud Functions may or may not include outgoing network calls in the billing, AWS charges separately for traffic via API Gateway, and so on. Depending on your use case (for example, a heavily used API where the main cost may be API Gateway itself), this may influence your choice.

To sum up, when choosing a serverless platform, take into account your technological environment (affinity with AWS/Azure/GCP), the targeted use cases (simple functions vs. more complex containers, global real-time requirements, etc.), the technical characteristics (supported languages, limits, performance) and, if necessary, the costs. The good news is that it's entirely possible to succeed with any of the major platforms: they're all tried and tested. Some organisations even adopt a multi-cloud strategy, exploiting each platform for what it does best (e.g. Cloudflare Workers for the edge + AWS Lambda for the internal backend). As a beginner, however, it's advisable to concentrate on one platform at first, to build up your skills, before eventually exploring another.

In the space of just a few years, serverless has become an established pillar of modern computing. By freeing developers from server management, it enables them to innovate faster and at lower cost. Serverless is now applied to a wide range of use cases: from the web to data, from automation to IoT, via machine learning. Of course, there is no one-size-fits-all solution, and serverless has its limitations, but properly managed, it offers a powerful lever for transforming information systems. So, whether you're a developer, architect or technical manager, it's highly likely that now - and even more so in the future - you'll cross paths with a serverless architecture.

Our expert

Made up of journalists specialising in IT, management and personal development, the ORSYS Le mag editorial team [...]
