3 serverless downsides your cloud provider won’t mention

Serverless is a game changer. As we look to accelerate the post-pandemic movement to the cloud, we would love to eliminate the step of sizing the cloud resources we think our workloads will need.

Serverless automatically provisions the cloud resources a workload needs, such as storage and compute, and then deprovisions them once the workload finishes processing. Although some call this a lazy person's cloud platform service, removing the guesswork of provisioning the correct amount of resources will keep you out of trouble these days.

However, with all the upsides, there are always a few downsides. I have three to review with you.

Cold starts occur when the serverless platform must spin up a fresh execution environment before your function can run, which is common when the function hasn't been invoked recently or runs inside a virtual private cloud. The result is noticeable lag on that first request. If you remember starting your mom's Buick in high school, you're not far off.
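One common way to see cold starts in your own telemetry is to rely on the fact that module-level code runs once per execution environment, so the first invocation in a fresh environment can flag itself. A minimal sketch, assuming an AWS Lambda-style Python handler (the names here are illustrative, not a specific vendor API):

```python
import time

# Module-level code runs once per execution environment, so this
# timestamp marks when the environment was created.
_ENV_STARTED = time.time()
_invocation_count = 0

def handler(event, context=None):
    """Lambda-style handler that reports whether this call hit a cold start."""
    global _invocation_count
    _invocation_count += 1
    # The first invocation in a fresh environment is the cold start;
    # later invocations reuse the already-warm environment.
    return {
        "cold_start": _invocation_count == 1,
        "env_age_seconds": time.time() - _ENV_STARTED,
    }
```

Logging that flag alongside request duration makes the cold-start tax visible in whatever monitoring you already use.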

Moreover, different languages have different lags. Benchmark them and you'll get interesting results, such as Python being the fastest and .NET and Java being the slowest (just an example). You can use tools to analyze lag durations and determine the impact on your workloads. If you're invested in serverless at all, I suggest you look into those tools.
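Those benchmarks are straightforward to run yourself: invoke the function repeatedly, record each round trip, and compare percentiles rather than averages, since cold starts show up in the tail. A minimal sketch, with `invoke_function` as a placeholder for calling whatever deployed function you're measuring:

```python
import statistics
import time

def invoke_function(fn):
    """Placeholder: call your deployed function and wait for the result."""
    fn()

def benchmark(fn, runs=50):
    """Time repeated invocations and summarize the lag distribution."""
    durations = []
    for _ in range(runs):
        start = time.perf_counter()
        invoke_function(fn)
        durations.append(time.perf_counter() - start)
    durations.sort()
    return {
        "median_ms": statistics.median(durations) * 1000,
        # The worst few runs usually reflect cold starts.
        "p95_ms": durations[int(len(durations) * 0.95)] * 1000,
    }
```

A wide gap between the median and the 95th percentile is the telltale signature of cold-start lag.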

Distance latency is the lag introduced by how far the serverless function runs from its ultimate users. This should be common sense, but I see companies run serverless functions in Asia when the majority of their users are in the United States. The assumption is that bandwidth is not an issue, so they choose the region for convenience instead of utility, such as the admin being located in Asia, and don't consider the impact on users.
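The physics alone makes region choice matter: light in fiber travels at roughly two-thirds the speed of light in a vacuum, so distance sets a hard floor on round-trip time before any processing happens. A rough sketch of that lower bound (the coordinates and the ~200,000 km/s fiber speed are approximations for illustration):

```python
import math

FIBER_SPEED_KM_PER_S = 200_000  # roughly 2/3 the speed of light in a vacuum

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine distance between two points on Earth, in km."""
    r = 6371  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def min_rtt_ms(lat1, lon1, lat2, lon2):
    """Best-case round-trip time in ms, ignoring routing and processing."""
    distance = great_circle_km(lat1, lon1, lat2, lon2)
    return 2 * distance / FIBER_SPEED_KM_PER_S * 1000

# Approximate coordinates: New York (40.7, -74.0) vs. Singapore (1.35, 103.8).
# A US user hitting a function hosted in Singapore pays well over 100 ms of
# round-trip latency before the function even starts executing.
ny_to_singapore = min_rtt_ms(40.7, -74.0, 1.35, 103.8)
```

Real-world routing only adds to this floor, which is why placing functions near the user majority is the first optimization worth making.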

Copyright © 2020 IDG Communications, Inc.
