spryker/robotframework-suite-tests

Automated tests built with the Robot Framework


This repository contains sets of API and UI tests, built on the Robot Framework. API tests use the RequestsLibrary in conjunction with Robot Framework, while UI tests rely on the Browser library (powered by Playwright).

Installation

Prerequisites

Robot Framework is implemented in Python, so installing it requires Python (or its alternative implementation, PyPy) to be installed first. It is also recommended to have the pip package manager available. Robot Framework requires Python 3.6 or newer.
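To confirm the prerequisites are in place, you can check the interpreter and package manager versions first (assuming python3 is the command for your Python 3 installation):

python3 --version
python3 -m pip --version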

  1. Install Robot Framework
python3 -m pip install -U robotframework
  2. Install RequestsLibrary
python3 -m pip install -U robotframework-requests
  3. Install DatabaseLibrary
python3 -m pip install -U robotframework-databaselibrary
  4. Install the Python SQL library that matches your database engine (a connection sketch follows this list)
    • Engine: MySQL
      python3 -m pip install PyMySQL
    • Engine: PostgreSQL
      python3 -m pip install psycopg2-binary
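The database engine only changes which driver module name is passed to DatabaseLibrary. A minimal connection check might look like the sketch below; the keywords are standard DatabaseLibrary keywords and the ${db_*} variables mirror the names used in the CLI examples later in this README, not necessarily how this repository wires them up.

*** Settings ***
Library    DatabaseLibrary

*** Test Cases ***
Database Connection Works
    # Use "pymysql" for MySQL or "psycopg2" for PostgreSQL
    Connect To Database    pymysql    ${db_name}    ${db_user}    ${db_password}    ${db_host}    ${db_port}
    Disconnect From Database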

Installation for UI tests

Installing for UI tests requires Robot Framework, RequestsLibrary, DatabaseLibrary, and the Browser library (powered by Playwright).

If you installed everything from the prerequisites, all you need to install is Node.js and the Browser library.

  1. Install Node.js®
  2. Install the Browser library
python3 -m pip install -U robotframework-browser
  3. Initialize the Browser library
rfbrowser init
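Once rfbrowser init has downloaded the browser binaries, a minimal UI test could look like the following sketch (the URL is a placeholder; the real suites take hosts from variables such as yves_env):

*** Settings ***
Library    Browser

*** Test Cases ***
Home Page Opens
    # Placeholder host; replace with your environment's Yves URL
    New Browser    chromium    headless=True
    New Page    http://yves.example.com
    Get Title    contains    Home
    Close Browser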

Installation for API tests

Installing for API tests requires Robot Framework, RequestsLibrary, JSONLibrary, and DatabaseLibrary.

If you installed everything from the prerequisites, all you need to install is the JSONLibrary.

  1. Install JSONLibrary
python3 -m pip install -U robotframework-jsonlibrary
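A minimal API test combining these libraries might look like the following sketch (the host and endpoint are placeholders; the real suites read hosts from variables such as glue_env):

*** Settings ***
Library    RequestsLibrary
Library    JSONLibrary

*** Test Cases ***
Abstract Product Is Returned
    # Placeholder Glue endpoint; replace with a real URL for your environment
    ${response}=    GET    http://glue.example.com/abstract-products/001
    Status Should Be    200    ${response}
    ${sku}=    Get Value From Json    ${response.json()}    $.data.attributes.sku
    Should Not Be Empty    ${sku}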

Automated installation

You can also run all of the installation steps in one go by executing the shell script install.sh.

How to run tests

Robot Framework test cases are executed from the command line, and the end result is, by default, an output file in XML format and an HTML report and log. After the execution, output files can be combined and otherwise post-processed with the Rebot tool.

Note: If you prefer to run tests using the default configuration of your local environment, see the Helper section below.

Synopsis

robot [options] data
python -m robot [options] data
python path/to/robot/ [options] data

Execution is normally started using the robot command created as part of installation. Alternatively it is possible to execute the installed robot module using the selected Python interpreter. This is especially convenient if Robot Framework has been installed under multiple Python versions. Finally, if you know where the installed robot directory exists, it can be executed using Python as well.

Regardless of execution approach, the path (or paths) to the test data to be executed is given as an argument after the command. Additionally, different command line options can be used to alter the test execution or generated outputs in many ways.

Basic usage example: robot -v env:{ENVIRONMENT} {PATH}

Supported CLI Parameters

CLI Examples

  • Execute all tests (positive and negative) in api/suite folder (all glue, bapi and sapi API tests that exist) via docker/sdk.
    docker/sdk exec robot-framework robot -v docker:True -v env:api_suite -d results -s '*'.tests.api.suite .
  • Execute all tests in api/b2b folder (all glue, bapi and sapi API tests that exist).
    robot -v env:api_b2b -d results -s '*'.tests.api.b2b .
  • Execute all tests in a specific folder (all API tests that exist inside the folder and sub-folders).
    robot -v env:api_b2b -d results -s '*'.tests.api.b2b.glue.access_token_endpoints .
  • Execute only positive tests in api folder (all positive API tests that exist, from all folders).
    robot -v env:api_suite -d results -s positive .
  • Execute all positive and negative API tests in tests/api/suite/glue/abstract_product_endpoints folder. Subfolders (other endpoints) will be executed as well.
    robot -v env:api_suite -d results -s '*'.tests.api.suite.glue.abstract_product_endpoints .
  • Execute all positive and negative API tests in tests/api/suite/glue/abstract_product_endpoints/abstract_products
    robot -v env:api_suite -d results -s '*'.tests.api.suite.glue.abstract_product_endpoints.abstract_products .
  • Execute all E2E UI tests for MP-B2B on specific cloud environment.
    robot -v env:ui_mp_b2b -v yves_env:http://yves.example.com -v zed_env:http://zed.example.com -v mp_env:http://mp.example.com -d results tests/ui/e2e/mp_b2b.robot
  • Execute all API tests for B2B on specific cloud environment with custom DB configuration.
    robot -v env:api_b2b -v db_engine:postgresql -v db_host:124.1.2.3 -v db_port:5336 -v db_user:fake_user -v db_password:fake_password -v db_name:fake_name -s '*'.tests.api.b2b.glue .

Supported Browsers in UI tests

Since Playwright ships with built-in binaries for all supported browsers, no additional drivers (such as geckodriver) are needed.

These browsers, which cover more than 85% of browsers used worldwide, can be tested on Windows, Linux, and macOS; there is no need for dedicated machines anymore.
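Browser selection happens inside the tests through the Browser library's New Browser keyword, for example (illustrative keyword calls, not repository-specific code):

New Browser    chromium    headless=True
New Browser    firefox    headless=True
New Browser    webkit    headless=True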

Helper

For local testing, all tests are commonly executed against default hosts. To avoid typos in execution commands, you can use the Makefile helper to start your runs quickly. Note: no installation is required on macOS and Linux systems; the make command is included in most Linux distributions by default. To run the Makefile on Windows, you need to install a program called "make".

Supported Helper commands
Helper Examples
  • Run all API tests for B2B on local environment
    make test_api_b2b
  • Run all UI tests for MP-B2C on local environment with disabled docker/sdk commands
    make test_ui_mp_b2c ignore_console=true
  • Run all UI tests for MP-B2C on local environment with enabled docker/sdk commands and specify your application location
    make test_ui_mp_b2c ignore_console=false project_location=/Users/your_user/projects/mp-b2b
  • Run all API tests for B2B on cloud environment
    make test_api_b2c glue_env=http://glue.example.com bapi_env=http://bapi.example.com sapi_env=http://sapi.example.com

Built-in libraries

External libraries that can be installed based on your needs

The full list can be found on the official website.

Automatically re-executing failed tests

There is often a need to re-execute a subset of tests, for example, after fixing a bug in the system under test or in the tests themselves. This can be accomplished by selecting test cases by names (--test and --suite options), tags (--include and --exclude), or by previous status (--rerunfailed or --rerunfailedsuites).

Combining re-execution results with the original results using the default approach for combining outputs does not work very well. The main problem is that you get separate test suites, and failures that may already have been fixed are still shown. In this situation it is better to use the --merge (-R) option to tell Rebot to merge the results instead. In practice this means that tests from the later runs replace tests in the original.

The message of the merged tests contains a note that results have been replaced. The message also shows the old status and message of the test.

Merged results must always have the same top-level test suite. Tests and suites in merged outputs that are not found in the original output are added to the resulting output.
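A typical re-execution and merge sequence looks like this (paths and the tests folder name are illustrative):

robot -d results --output original.xml tests/
robot -d results --rerunfailed results/original.xml --output rerun.xml tests/
rebot -d results --merge results/original.xml results/rerun.xml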

Viewing and Generating Keyword Documentation

Keywords used in the tests can and should be documented. If you add any new keywords to the files inside the 'common' folder, they should have a [Documentation] tag that describes what the keyword does and what its parameters mean, and gives an example of usage.
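As an illustration, a documented keyword could look like this (the keyword itself is hypothetical, not one taken from the 'common' folder):

*** Settings ***
Library    RequestsLibrary

*** Keywords ***
Response Status Should Be Ok
    [Documentation]    Verifies that the given RequestsLibrary response has HTTP status 200.
    ...    ``response`` is the response object returned by a request keyword.
    ...    Example: | Response Status Should Be Ok | ${response} |
    [Arguments]    ${response}
    Status Should Be    200    ${response}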

Documentation can be generated from these [Documentation] tags. Currently only common_api.robot has generated documentation. If you add new keywords, you should re-generate the documentation and commit it together with the other changes you made.

To generate the documentation for api, bapi, and sapi tests, use this command: libdoc resources/common/common_api.robot API_Keyword_Documentation.html

To view the documentation, just open the generated HTML file in any browser.

Output files

Several output files are created when tests are executed, and all of them are somehow related to test results.

Log files contain details about the executed test cases in HTML format. They have a hierarchical structure showing test suite, test case, and keyword details. Log files are needed nearly every time test results are investigated in detail. Even though log files also contain statistics, reports are better for getting a higher-level overview.

The command line option --log (-l) determines where log files are created. Unless the special value NONE is used, log files are always created and their default name is log.html.
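For example, the following runs write all output files into the results folder, first with a renamed log file and then with log generation disabled entirely (the variable and path are illustrative):

robot -d results -l detailed_log.html -v env:api_suite .
robot -d results -l NONE -v env:api_suite .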