Historically, cloud usage has involved a significant amount of system operations (SysOps) activity: provisioning, configuring, monitoring, and managing virtual machines in the compute ecosystem. The more complex the workload to be processed, the more complex SysOps becomes, especially in big data environments. From an economic standpoint, too, it was critical to get these system configurations right so as not to overpay for unnecessary resources. SysOps activities were also a major overhead for software engineers wanting to deliver software faster, and often required IT infrastructure specialists to support the setup.
Serverless computing promises to deliver faster and more reliable software solutions while significantly reducing the time spent configuring cloud infrastructure for scalability. It offers a new paradigm: small code blocks that execute in response to events or to specific requests made over HTTP/HTTPS.
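To make the paradigm concrete, here is a minimal sketch of one such small code block, an HTTP-triggered function. The function name and payload shape are illustrative assumptions, not tied to any specific provider.

```python
import json

def handle_request(payload):
    """A small code block the platform runs once per incoming request.

    `payload` stands in for the parsed JSON body of an HTTPS call; the
    provider invokes this on demand, with no server provisioned ahead of
    time, and billing stops when the function returns.
    """
    name = payload.get("name", "world")
    return json.dumps({"message": f"hello, {name}"})
```

In a real deployment the provider's runtime would wrap this function behind an HTTPS endpoint; the developer writes only the body above.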
Why is it important as a paradigm?
Serverless computing is a logical evolution of the microservices approach to architecting software. The idea here is to let the cloud service provider provision and manage the underlying compute infrastructure and let the developer focus purely on the functionality that is to be delivered. This has several benefits.
- Assured scalability: To start with, scalability is assured. The typical setup of auto-scaling on cloud compute clusters is relatively time-consuming and needs careful monitoring and fine-tuning over time; public cloud providers have responded by integrating monitoring and optimization tools that suggest configuration improvements to users. Serverless computing sidesteps this burden, since the provider scales the functions automatically.
- Ideal for event-driven scenarios: Serverless computing suits event-driven architectures such as those one might encounter, for example, in Internet of Things (IoT) scenarios. Traditional auto-scaling clusters can have warm-up times, and scaling, both up and down, may not be seamless. Serverless computing executes small blocks of code in response to event triggers, and you pay only for the fractional resource time actually consumed.
- Assemble a low-cost microservices architecture: With serverless computing, several cloud functions can execute in parallel or independently of each other in response to events or triggers, giving natural concurrency. Smaller code blocks deployed in serverless environments are also easier to test and manage. The cloud functions themselves can expose clean, representational state transfer (RESTful) interfaces to work with other such functions or other elements of an application. Developers can quickly assemble an architecture that mirrors microservices by deploying several cloud functions that work together, and several leading platform developers are adopting this strategy to deploy apps in a highly cost-efficient manner.
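The microservices-style composition described above can be sketched as two small, independently testable functions plus a thin orchestration step. All names and payload shapes here are illustrative assumptions; in a deployment, each function would sit behind its own RESTful endpoint rather than being called directly.

```python
import json

def price_order(event):
    """Function 1: compute an order total from line items."""
    items = event.get("items", [])
    total = sum(i["qty"] * i["unit_price"] for i in items)
    return {"total": round(total, 2)}

def format_invoice(event):
    """Function 2: render a priced order as a JSON invoice."""
    return json.dumps({
        "invoice_for": event.get("customer", "n/a"),
        "amount_due": event["total"],
    })

def checkout(event):
    """Orchestration: chain the two functions, as an API gateway or
    event bus would when each ran behind its own endpoint."""
    priced = price_order(event)
    return format_invoice({**event, **priced})
```

Because each function owns one small responsibility, it can be unit-tested, deployed, and billed independently of the others.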
One such use case is a “testing-on-demand” infrastructure: developers can use a serverless computing module that spins up the test environment in response to the upload of a test script. Serverless computing is similarly ideal for developing applications that work with next-generation user interfaces such as Alexa and Google Home, and for building chatbots. Simpler actions, such as emailing customers and updating inventory every time an order is confirmed, are most cost-effective when executed in a serverless environment using an AWS Lambda or a Google Cloud Function.
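The order-confirmation action above might look like the following Lambda-style handler. This is a hedged sketch: `send_email` and `update_inventory` are hypothetical stand-ins for calls to a real mail service and inventory store, and the event shape is an assumption.

```python
def send_email(address, subject):
    # Placeholder: a real implementation would call an email API here.
    return {"sent_to": address, "subject": subject}

def update_inventory(items):
    # Placeholder: a real implementation would decrement stock in a database.
    return {"adjusted": len(items)}

def handler(event, context=None):
    """Entry point the platform invokes once per confirmed order.

    The (event, context) signature follows the AWS Lambda Python
    convention; billing covers only the time this handler actually runs.
    """
    order = event["order"]
    receipt = send_email(order["email"], f"Order {order['id']} confirmed")
    stock = update_inventory(order["items"])
    return {"email": receipt, "inventory": stock}
```

No queue workers or always-on servers are provisioned for this; the handler exists only for the milliseconds each order confirmation takes to process.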
Despite these advantages, the serverless computing environment today has a few limitations, though these are likely to be remedied over time. To start with, there is a limit on the size of the code that can be deployed, and only a few programming languages are supported. The architectural approach has to be centered on the microservices paradigm from the outset, and developers and architects need to be very disciplined in how they use serverless computing: chunky code blocks and monolithic architectures should be avoided. Also, this is still an evolving technology, so its use in highly performance-sensitive systems is still open to debate.
Big data and data center