Replication Package of paper titled "Evaluating Performance and Resource Consumption of REST Frameworks and Execution Environments: Insights and Guidelines for Developers and Companies"

Citation Author(s):
Sergio Di Meglio
Luigi Libero Lucio Starace
Submitted by:
Sergio Di Meglio
Last updated:
Mon, 07/15/2024 - 06:47
DOI:
10.21227/3kde-v757

Abstract 

The REST (REpresentational State Transfer) paradigm has become essential for designing distributed applications that leverage the HTTP protocol, enabling efficient data exchange and the development of scalable architectures such as microservices. However, selecting an appropriate framework among the myriad available options, especially given the diversity of emerging execution environments, presents a significant challenge. Often, this decision neglects crucial factors such as performance and energy efficiency, favoring instead developer familiarity and popularity within the industry.


To address this, we conducted a comprehensive benchmark study using a prototype REST API application provided by an industry partner, which was implemented multiple times using different REST API frameworks.

We evaluated five different REST API frameworks across three popular programming languages, incorporating both traditional and emerging execution environments, resulting in twelve distinct configurations. Our results reveal significant differences in performance and computational resource consumption across different frameworks and execution environments, highlighting the necessity of making informed technology choices based on thorough analysis rather than convenience or familiarity.


In addition to our findings, we offer other contributions to the field: an automated pipeline that benchmarks different configurations with various frameworks and execution environments, and a reference benchmark REST API that can be used in other studies. This research provides valuable insights and tools for developers and organizations aiming to select high-performance, resource-efficient technologies that promote environmental sustainability and reduce operational costs.


Instructions: 

CONTENTS OF THIS REPLICATION PACKAGE 

  • The 'APPLICATIONS' folder contains the source code and Docker environments required to run the REST API under study, implemented using the Spring Boot, Micronaut, Django, Express, and Nest frameworks.
  • The 'TESTS' folder includes the performance test scripts we developed using the JMeter tool.
  • The 'RESULTS' folder contains the data collected for each configuration and test repetition, together with Excel files containing the performance and energy-consumption analyses.

  • The 'EXECUTION SCRIPTS' folder contains the bash scripts required for instrumenting and running the tests.  
  • The 'ANALYSIS SCRIPTS' folder contains the scripts for analyzing the data collected during the tests. Within it, there are two sub-folders:

    • 'time-request-analysis' includes two further folders: one with the raw outputs of the performance tests and the other with the R scripts used to analyze the data and generate the graphs.

    • 'statistical-analysis-scripts' contains the Python scripts used for the statistical analyses.

REQUIREMENTS

On the machine that will host the applications, the following requirements must be met:

On the machine that will run the tests, the following requirements must be met:


EXECUTION OF THE TEST SCRIPT FOR A GIVEN CONFIGURATION 

The pipeline is the same for every configuration. As a running example, suppose we want to run the test suite on the Micronaut-OpenJDK configuration.

1) To run the application with the desired execution environment, e.g., GraalVM, simply change the dockerfile argument in docker-compose.yml to Dockerfile.graalvm, as sketched below.
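For illustration, the relevant fragment of docker-compose.yml might look as follows (the service name and the default Dockerfile name are assumptions; adapt them to the actual file):

    services:
      rest-api:                            # hypothetical service name
        build:
          context: .
          dockerfile: Dockerfile.graalvm   # e.g., switched from Dockerfile.openjdk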

2) The APPLICATIONS folder, as well as the run_test.sh, check_cpu.sh and energy_collector_script.bash scripts, must be copied to the machine that will host the application, while the TESTS folder and the jmeter_runner.bash script must be copied to the machine that will run the tests. A possible way to do this is sketched below.
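For example, the files could be copied with scp (usernames, hostnames, and destination paths are placeholders):

    # copy to the machine hosting the application
    scp -r APPLICATIONS run_test.sh check_cpu.sh energy_collector_script.bash user@app-host:~/bench/

    # copy to the machine that will run the tests
    scp -r TESTS jmeter_runner.bash user@test-host:~/bench/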

3) Run the PerfMon server agent on the machine hosting the application with the following command:

./startAgent.sh --udp-port 0 --tcp-port 3450
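Optionally, you can check that the agent is listening on TCP port 3450 (this verification step is our suggestion and assumes the ss utility is available):

    ss -tln | grep 3450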

4) Before executing the run_test.sh script, the following must be changed inside it: the source_folder variable, which points to the folder containing the other bash scripts and in which the results will be stored, and the credentials and host used in the sshpass command that launches the test script residing on the second machine:

    sshpass -p 'your_password' ssh username@remote_host 'bash -s' < "your-path/jmeter_runner.bash" ${framework} ${env} ${i} ${test}
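After editing, the relevant lines of run_test.sh might look like the following (the path, password, username, and host are placeholders, consistent with the scp example above):

    source_folder="/home/user/bench"   # folder containing the other bash scripts; results are stored here
    sshpass -p 'your_password' ssh user@test-host 'bash -s' < "/home/user/bench/jmeter_runner.bash" ${framework} ${env} ${i} ${test}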

In addition, the script takes the following positional parameters: framework name, environment name, test file name, and the Docker folder. An example invocation is shown below:

./run_test.sh MICRONAUT OPENJDK micronaut-load-test.jmx /home/sergio/IdeaProjects/mensura-2023/FRAMEWORKS/polisportiva-micronaut

5) Likewise, before executing the jmeter_runner.bash script, the following variables must be changed inside it: test_path, which points to the folder where the tests are stored, and res_path, which points to the folder where the results will be stored. An example is sketched below.
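For instance (paths are placeholders):

    test_path="/home/user/bench/TESTS"    # folder containing the .jmx test plans
    res_path="/home/user/bench/RESULTS"   # folder where the results will be written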