Command Line Interface#
Invocation#
altwalker [...]
You can also invoke the command through the Python interpreter from the command line:
python -m altwalker [...]
Help#
Getting help on version, available commands, arguments or option names:
$ altwalker -v/--version
$ # show help message and all available commands
$ altwalker -h/--help
$ # show help message for the specified command
$ altwalker command_name -h/--help
Possible exit codes#
Running altwalker can result in five different exit codes:
Exit Code 0: Tests were successfully run and passed.
Exit Code 1: Tests were successfully run and failed.
Exit Code 2: Command line errors.
Exit Code 3: GraphWalker errors.
Exit Code 4: AltWalker internal errors.
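For example, you can branch on the exit code in a shell script or CI job. A minimal sketch; the test package, model file, and generator below are assumptions:
#!/bin/bash
# Run the tests and react to AltWalker's exit code.
# The 'tests' package, model file, and generator are assumptions.
altwalker online tests -m models/default.json "random(vertex_coverage(100))"
case $? in
    0) echo "Tests passed" ;;
    1) echo "Tests ran and failed" ;;
    2) echo "Command line error" ;;
    3) echo "GraphWalker error" ;;
    4) echo "AltWalker internal error" ;;
esac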
Commands#
altwalker#
A command line tool for running Model-Based Tests.
altwalker [OPTIONS] COMMAND [ARGS]...
Options
- -v, --version#
Show the version and exit.
- --log-level <log_level>#
Sets the AltWalker logger level to the specified level.
- Options:
CRITICAL | ERROR | WARNING | INFO | DEBUG | NOTSET
- --log-file <log_file>#
Sends logging output to a file.
- --graphwalker-log-level <graphwalker_log_level>#
Sets the GraphWalker logger level to the specified level.
- Default:
CRITICAL
- Options:
CRITICAL | ERROR | WARNING | INFO | DEBUG | NOTSET
Environment variables
- ALTWALKER_LOG_LEVEL
Provide a default for --log-level
- ALTWALKER_LOG_FILE
Provide a default for --log-file
- GRAPHWALKER_LOG_LEVEL
Provide a default for --graphwalker-log-level
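For example, a sketch of how the logging options and their environment-variable defaults fit together (the model file is an assumption; explicit options still override the environment defaults):
# Set defaults once via environment variables...
export ALTWALKER_LOG_LEVEL=DEBUG
export ALTWALKER_LOG_FILE=altwalker.log

# ...then every run picks them up; an explicit option takes precedence.
altwalker --log-level INFO check -m models/default.json "random(never)"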
Commands
- init
Initialize a new project.
- generate
Generate template code based on the models.
- check
Check and analyze models for issues.
- verify
Verify and analyze test code for issues.
- online
Generate and run a test path.
- offline
Generate a test path.
- walk
Run the tests with steps from a file.
altwalker init#
Initialize a new project.
altwalker init [OPTIONS] OUTPUT_DIR
Options
- -m, --model <model_paths>#
The model, as a graphml/json file.
- --git, -n, --no-git#
If set to true, initializes a git repository.
- Default:
True
- -l, --language <language>#
Configure the programming language of the tests.
- Options:
python | py | dotnet | csharp | c#
Arguments
- OUTPUT_DIR#
Required argument
Note
The -m/--model option is not required and can be used multiple times to provide multiple models.
Examples
$ altwalker init test-project -l python
The command will create a directory named test-project with the following structure:
test-project/
├── .git/
├── models/
│   └── default.json
└── tests/
    ├── __init__.py
    └── test.py
test-project: The project root directory.
models: A directory containing the model files (.json or .graphml).
tests: A Python package containing the test code.
tests/test.py: A Python module containing the code for the model(s).
If you don’t want test-project to be a git repository, run the command with --no-git:
altwalker init test-project -l python --no-git
Note
If you don’t have git installed on your machine, use the --no-git flag.
If you specify models (with the -m/--model option), init will copy the models into the models directory, and test.py will contain a template with all the classes and methods needed for the models:
altwalker init test-project -m ./first.json -m ./second.json -l python
The test-project directory will have the following structure:
test-project/
├── .git/
├── models/
│   ├── first.json
│   └── second.json
└── tests/
    ├── __init__.py
    └── test.py
altwalker generate#
Generate template code based on the models.
altwalker generate [OPTIONS] [OUTPUT_DIR]
Options
- -m, --model <model_paths>#
Required The model as a graphml/json file.
- -l, --language <language>#
Configure the programming language of the tests.
- Options:
python | py | dotnet | csharp | c#
Arguments
- OUTPUT_DIR#
Optional argument
Note
The -m/--model
is required and can be used multiple times to provide
multiple models. The generate
command will generate a class for each model
you provide.
Examples
altwalker generate . -m models/models.json
The command will create the tests directory, resulting in the following structure:
test-project/
├── models/
│   └── models.json
└── tests/
    ├── __init__.py
    └── test.py
For a models.json file with a simple model named Model, with an edge named edge_name and a vertex named vertex_name, test.py will contain:
class Model:
    def vertex_name(self):
        pass

    def edge_name(self):
        pass
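After generating the template, a typical follow-up is to check it against the same models with the verify command (paths mirror the example above):
$ altwalker verify tests -m models/models.json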
altwalker check#
Check and analyze models for issues.
altwalker check [OPTIONS]
Options
- -m, --model <models>#
Required The model as a graphml/json file followed by a generator with a stop condition.
- -b, --blocked#
Will filter out elements with the blocked property.
- Default:
False
Note
The -m/--model option is required and can be used multiple times to provide multiple models.
Note
For the -m/--model option you need to pass a model_path and a stop_condition.
model_path: The file (.json or .graphml) containing the model(s).
stop_condition: A string that specifies the generator and the stop condition.
For example: "random(never)" and "a_star(reached_edge(edge_name))", where random and a_star are the generators; never and reached_edge(edge_name) are the stop conditions.
Examples
$ altwalker check -m models/blog-navigation.json "random(never)" -m models/blog-post.json "random(never)"
Checking models syntax:
* models/blog-navigation.json::Navigation [PASSED]
* models/blog-post.json::PostBlog [PASSED]
Checking models against stop conditions:
No issues found with the model(s).
If the models are not valid, the command will return a list of errors:
$ altwalker check -m models/invalid.json "random(never)"
Checking models syntax:
* models/invalid.json::InvalidModel [FAILED]
Id 'e_0' is not unique.
altwalker verify#
Verify and analyze test code for issues.
altwalker verify [OPTIONS] TEST_PACKAGE
Options
- --suggestions, --no-suggestions#
If set, will print code suggestions for missing elements.
- Default:
True
- -m, --model <model_paths>#
Required The model as a graphml/json file.
- -x, -l, --executor, --language <executor_type>#
Configure the executor to be used.
- Default:
python
- Options:
http | python | py | dotnet | csharp | c#
- --executor-url <executor_url>#
Sets the URL for the executor.
- --import-mode <import_mode>#
Sets the importing mode for the Python language, which controls how modules are loaded and executed.
- Default:
importlib
- Options:
importlib | prepend | append
Arguments
- TEST_PACKAGE#
Required argument
Environment variables
- ALTWALKER_IMPORT_MODE
Provide a default for --import-mode
Note
The -m/--model option is required and can be used multiple times to provide multiple models.
Examples
$ altwalker verify tests -m models/default.json
Verifying code against models:
* ModelName [PASSED]
No issues found with the code.
The verify command will check that every element from the provided models is implemented in tests/test.py (models as classes and vertices/edges as methods inside the model class).
If methods or classes are missing, the command will return a list of errors and code suggestions to fix the errors:
Verifying code against models:
* ModelName [FAILED]
Expected to find method 'edge_A' in class 'ModelName'.
Expected to find method 'vertex_B' in class 'ModelName'.
Expected to find method 'vertex_A' in class 'ModelName'.
Expected to find class 'ModelName'.
Code suggestions:
# Append the following class to your test file.
class ModelName:
    def edge_A(self):
        pass

    def vertex_A(self):
        pass

    def vertex_B(self):
        pass
If you don’t need the code suggestions, you can use the --no-suggestions flag:
Verifying code against models:
* ModelName [FAILED]
Expected to find method 'edge_A' in class 'ModelName'.
Expected to find method 'vertex_B' in class 'ModelName'.
Expected to find method 'vertex_A' in class 'ModelName'.
Expected to find class 'ModelName'.
altwalker online#
Generate and run a test path.
altwalker online [OPTIONS] TEST_PACKAGE
Options
- --gw-host <gw_host>#
Sets the host of the GraphWalker REST service.
- --gw-port <gw_port>#
Sets the port of the GraphWalker REST service.
- Default:
8887
- -m, --model <models>#
Required The model as a graphml/json file followed by a generator with a stop condition.
- -e, --start-element <start_element>#
Sets the starting element in the first model.
- -x, -l, --executor, --language <executor_type>#
Configure the executor to be used.
- Default:
python
- Options:
http | python | py | dotnet | csharp | c#
- --executor-url <executor_url>#
Sets the URL for the executor.
- -o, --verbose#
Will also print the model data and the properties for each step.
- Default:
False
- -u, --unvisited#
Will also print the remaining unvisited elements in the model.
- Default:
False
- -b, --blocked#
Will filter out elements with the blocked property.
- Default:
False
- --report-path#
Report the execution path and save it into a file (path.json by default).
- --report-path-file <report_path_file>#
Set the report path file.
- --report-file <report_file>#
Save the report in a file.
- --report-xml#
Report the execution path and save it into a file (report.xml by default).
- --report-xml-file <report_xml_file>#
Set the xml report file.
- --import-mode <import_mode>#
Sets the importing mode for the Python language, which controls how modules are loaded and executed.
- Default:
importlib
- Options:
importlib | prepend | append
Arguments
- TEST_PACKAGE#
Required argument
Environment variables
- ALTWALKER_IMPORT_MODE
Provide a default for --import-mode
Note
The -m/--model option is required and can be used multiple times to provide multiple models.
Note
For the -m/--model option you need to pass a model_path and a stop_condition.
model_path: The file (.json or .graphml) containing the model(s).
stop_condition: A string that specifies the generator and the stop condition.
For example: "random(never)" and "a_star(reached_edge(edge_name))", where random and a_star are the generators; never and reached_edge(edge_name) are the stop conditions.
Examples
$ altwalker online tests -m models.json "random(vertex_coverage(30))" --gw-port 9999
Running:
[2019-02-07 12:56:42.986142] ModelName.vertex_A Running
[2019-02-07 12:56:42.986559] ModelName.vertex_A Status: PASSED
...
Status: True
If you use the -o/--verbose flag, the command will print for each step the data (the data for the current model) and the properties (the properties of the current step defined in the model):
[2019-02-18 12:53:13.721322] ModelName.vertex_A Running
Data:
{
    "a": "0",
    "b": "0",
    "itemsInCart": "0"
}
Properties:
{
    "x": 1,
    "y": 2
}
If you use the -u/--unvisited flag, the command will print for each step the current list of all unvisited elements:
[2019-02-18 12:55:07.173081] ModelName.vertex_A Running
Unvisited Elements:
[
    {
        "elementId": "v1",
        "elementName": "vertex_B"
    },
    {
        "elementId": "e0",
        "elementName": "edge_A"
    }
]
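The report options listed above can also be combined with online. A sketch, assuming the tests package and models/default.json from the earlier examples; --report-path writes the executed path to path.json by default and --report-xml writes report.xml by default:
$ altwalker online tests -m models/default.json "random(vertex_coverage(100))" --report-path --report-xml --report-file report.txt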
altwalker offline#
Generate a test path.
altwalker offline [OPTIONS]
Options
- -f, --output-file <output_file>#
Output file.
- -m, --model <models>#
Required The model as a graphml/json file followed by a generator with a stop condition.
- -e, --start-element <start_element>#
Sets the starting element in the first model.
- -o, --verbose#
Will also print the model data and the properties for each step.
- Default:
False
- -u, --unvisited#
Will also print the remaining unvisited elements in the model.
- Default:
False
- -b, --blocked#
Will filter out elements with the blocked property.
- Default:
False
Note
The -m/--model option is required and can be used multiple times to provide multiple models.
Note
For the -m/--model option you need to pass a model_path and a stop_condition.
model_path: The file (.json or .graphml) containing the model(s).
stop_condition: A string that specifies the generator and the stop condition.
For example: "random(never)" and "a_star(reached_edge(edge_name))", where random and a_star are the generators; never and reached_edge(edge_name) are the stop conditions.
Warning
1. If your model(s) use guards and the test code updates the model data, the offline command may produce invalid paths.
2. The never and time_duration stop conditions cannot be used with the offline command, only with the online command.
Example
$ altwalker offline -m models/login.json "random(length(5))"
[
    {
        "id": "v_0",
        "modelName": "LoginModel",
        "name": "v_start"
    },
    {
        "id": "e_0",
        "modelName": "LoginModel",
        "name": "e_open_app"
    },
    {
        "id": "v_1",
        "modelName": "LoginModel",
        "name": "v_app"
    },
    {
        "actions": [
            "isUserLoggedIn = true;"
        ],
        "id": "e_1",
        "modelName": "LoginModel",
        "name": "e_log_in"
    },
    {
        "id": "v_1",
        "modelName": "LoginModel",
        "name": "v_app"
    }
]
If you want to save the steps in a .json file you can use the -f/--output-file <FILE_NAME> option:
altwalker offline -m models/login.json "random(length(5))" --output-file steps.json
If you use the -o/--verbose flag, the command will add for each step data (the data for the current model), actions (the actions of the current step as defined in the model) and properties (the properties of the current step as defined in the model).
$ altwalker offline -m models/login.json "random(length(5))" --verbose
[
    {
        "data": {
            "JsonContext": "org.graphwalker.io.factory.json.JsonContext@55704859",
            "isUserLoggedIn": "false"
        },
        "id": "v_0",
        "modelName": "LoginModel",
        "name": "v_start",
        "properties": []
    },
    {
        "data": {
            "JsonContext": "org.graphwalker.io.factory.json.JsonContext@55704859",
            "isUserLoggedIn": "false"
        },
        "id": "e_0",
        "modelName": "LoginModel",
        "name": "e_open_app",
        "properties": []
    },
    {
        "data": {
            "JsonContext": "org.graphwalker.io.factory.json.JsonContext@55704859",
            "isUserLoggedIn": "false"
        },
        "id": "v_1",
        "modelName": "LoginModel",
        "name": "v_app",
        "properties": []
    },
    {
        "data": {
            "JsonContext": "org.graphwalker.io.factory.json.JsonContext@55704859",
            "isUserLoggedIn": "false"
        },
        "id": "e_4",
        "modelName": "LoginModel",
        "name": "e_for_user_not_logged_in",
        "properties": []
    },
    {
        "data": {
            "JsonContext": "org.graphwalker.io.factory.json.JsonContext@55704859",
            "isUserLoggedIn": "false"
        },
        "id": "v_3",
        "modelName": "LoginModel",
        "name": "v_logged_out",
        "properties": []
    }
]
If you use the -u/--unvisited flag, the command will add for each step the current list of all unvisited elements, the number of elements and the number of unvisited elements.
$ altwalker offline -m models/login.json "random(length(1))" --unvisited
[
    {
        "id": "v_0",
        "modelName": "LoginModel",
        "name": "v_start"
    }
]
altwalker walk#
Run the tests with steps from a file.
altwalker walk [OPTIONS] TEST_PACKAGE STEPS_FILE
Options
- -x, -l, --executor, --language <executor_type>#
Configure the executor to be used.
- Default:
python
- Options:
http | python | py | dotnet | csharp | c#
- --executor-url <executor_url>#
Sets the URL for the executor.
- --import-mode <import_mode>#
Sets the importing mode for the Python language, which controls how modules are loaded and executed.
- Default:
importlib
- Options:
importlib | prepend | append
- --report-path#
Report the execution path and save it into a file (path.json by default).
- --report-path-file <report_path_file>#
Set the report path file.
- --report-file <report_file>#
Save the report in a file.
- --report-xml#
Report the execution path and save it into a file (report.xml by default).
- --report-xml-file <report_xml_file>#
Set the xml report file.
Arguments
- TEST_PACKAGE#
Required argument
- STEPS_FILE#
Required argument
Environment variables
- ALTWALKER_IMPORT_MODE
Provide a default for --import-mode
Examples:
Usually the walk command will execute a path generated by the offline command, but it can execute any list of steps that respects that format.
$ altwalker walk tests steps.json
Running:
[2019-02-15 17:18:09.593955] ModelName.vertex_A Running
[2019-02-15 17:18:09.594358] ModelName.vertex_A Status: PASSED
[2019-02-15 17:18:09.594424] ModelName.edge_A Running
[2019-02-15 17:18:09.594537] ModelName.edge_A Status: PASSED
[2019-02-15 17:18:09.594597] ModelName.vertex_B Running
[2019-02-15 17:18:09.594708] ModelName.vertex_B Status: PASSED
Status: True
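A typical end-to-end workflow is to generate the path once with offline and then replay it with walk; a sketch assuming the model file and generator from the offline examples:
$ altwalker offline -m models/login.json "random(length(5))" --output-file steps.json
$ altwalker walk tests steps.json --report-xml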