Concurrent execution of Monte Carlo serverless functions across AWS, Google, IBM, and Alibaba

Citation Author(s):
Sasko
Ristov
University of Innsbruck
Stefan
Pedratscher
University of Innsbruck
Thomas
Fahringer
University of Innsbruck
Submitted by:
Sashko Ristov
Last updated:
Fri, 09/11/2020 - 17:58
DOI:
10.21227/tz10-0f93
Data Format:
License:

Abstract 

Logs from running a Monte Carlo simulation as serverless functions in the Frankfurt, North Virginia, and Tokyo regions of four FaaS providers (AWS, Google, IBM, Alibaba).

Each execution is repeated 5 times (all are warm starts). 

The analysis is part of a manuscript submitted to IEEE TSC. 

Instructions: 

The zip file contains several types of datasets.

1. Logs contain the details of each execution on all providers and regions. Each column has a self-descriptive title. The first 1000 functions on AWS, 200 on Alibaba, 100 on Google, and 100 on IBM are all executed concurrently. The remaining functions are executed once some of the active functions finish, due to the concurrency limit of the respective provider.
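Since each log's column titles are self-descriptive, the files can be inspected generically. A minimal sketch, assuming the logs are CSV files (the actual file names and column titles must be taken from the archive itself):

```python
import csv

def load_log(path):
    """Read one provider/region log into a list of dicts keyed by the
    header row, so rows can be accessed by their column titles."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))
```

This keeps every column as a string; numeric fields (e.g. durations) would need an explicit conversion once the concrete column titles are known.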

2. Functions contain the Monte Carlo functions that are executed (in Python).
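The shape of such a function can be illustrated with a classic Monte Carlo estimator of pi. This is a hedged sketch only; the dataset's actual functions may use a different Monte Carlo workload and parameters:

```python
import random

def monte_carlo_pi(samples, seed=0):
    """Estimate pi by sampling points uniformly in the unit square and
    counting the fraction that falls inside the quarter circle."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / samples
```

The `samples` parameter controls the per-function workload, which is the kind of knob a scaling experiment would vary.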

Based on these logs, we evaluated our xAFCL service together with our new FaaS model and scheduler. 

3. Makespan<k> contains the measured makespan for each set of experiments with scaling factor k. Experiments are denoted N/r, where N is the number of functions distributed across the r regions; N = k*r for weak scaling and N = 12*r for strong scaling.
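The N = k*r (weak) and N = 12*r (strong) rules from the experiment naming above can be captured in a small helper, useful when iterating over the Makespan<k> files:

```python
def experiment_size(k, r, scaling="weak"):
    """Number of functions N for an experiment denoted N/r.

    Weak scaling fixes the per-region load:  N = k * r.
    Strong scaling fixes the total factor:   N = 12 * r.
    """
    if scaling == "weak":
        return k * r
    if scaling == "strong":
        return 12 * r
    raise ValueError("scaling must be 'weak' or 'strong'")
```

For example, with k = 4 and r = 3 regions, weak scaling gives the 12/3 experiment, while strong scaling gives 36/3 regardless of k.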

4. The ordering of the regions is given in the file xAFCLModelInputs.csv. 

5. Summary presents the average makespan achieved and the maximum throughput for each scaling factor k.