Usage
To run a scenario file, use the following command:
./scenario-runner --scenario ${SCENARIO_JSON_FILE}
Where:
--scenario
: File to load the scenario from. The file must be in JSON format. If the resources in the SCENARIO_JSON_FILE are not specified with absolute paths, their relative paths will be resolved against the parent directory of the SCENARIO_JSON_FILE.
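For example (the paths here are illustrative), if the scenario file is /work/scenarios/conv.json and a resource inside it specifies "src": "inputs/input.npy", the runner resolves that path to /work/scenarios/inputs/input.npy.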
For more details, see the help output:
./scenario-runner --help
Usage: ./scenario-runner [--help] [--version] --scenario VAR [--output VAR] [--profiling-dump-path VAR] [--pipeline-caching] [--clear-pipeline-cache] [--cache-path VAR] [--fail-on-pipeline-cache-miss] [--perf-counters-dump-path VAR] [--log-level VAR] [--wait-for-key-stroke-before-run] [--dry-run] [--disable-extension VAR...]... [--enable-gpu-debug-markers] [--session-memory-dump-dir VAR] [--repeat VAR] [--capture-frame]
Optional arguments:
-h, --help shows help message and exits
-v, --version prints version information and exits
--scenario file to load the scenario from. File should be in JSON format [nargs=0..1] [default: ""]
--output output folder [nargs=0..1] [default: ""]
--profiling-dump-path path to save runtime profiling [nargs=0..1] [default: ""]
--pipeline-caching enable the pipeline caching
--clear-pipeline-cache clear pipeline cache
--cache-path set pipeline cache location [nargs=0..1] [default: "/tmp"]
--fail-on-pipeline-cache-miss ensure an error is generated on a pipeline cache miss
--perf-counters-dump-path path to save performance counter stats [nargs=0..1] [default: ""]
--log-level set logging level [nargs=0..1] [default: "debug"]
--wait-for-key-stroke-before-run Wait for a key stroke before run
--dry-run Setup pipelines but skip the actual execution
--disable-extension specify extensions to disable out of the following: VK_EXT_custom_border_color, VK_EXT_frame_boundary, VK_KHR_maintenance5, VK_KHR_deferred_host_operations, [nargs: 0 or more] [may be repeated]
--enable-gpu-debug-markers enable GPU debug markers
--session-memory-dump-dir path to dump the contents of the sessions ram after inference completes
--repeat Repeat count for scenario execution [nargs=0..1] [default: 1]
--capture-frame Enable RenderDoc integration for frame capturing
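For example, the following invocation (the scenario file and output folder names are placeholders) writes output files to results/ and executes the scenario five times:
./scenario-runner --scenario scenario.json --output results/ --repeat 5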
Resource memory aliasing
It can be useful for resources to share the same underlying memory. Memory aliasing allows inputs and outputs to be shared between shaders and VGF workloads.
To enable the resource memory aliasing feature, in the scenario JSON file each resource that shares memory must have a memory_group field whose id names the shared memory object that the resources will use. Only one resource in a single memory_group can have a src file.
The following example shows how to set up a tensor resource that aliases an image resource.
{
  "commands": [],
  "resources": [
    {
      "image": {
        "shader_access": "readonly",
        "dims": [ 1, 64, 10, 1 ],
        "format": "VK_FORMAT_R32_SFLOAT",
        "src": "input.dds",
        "uid": "input_image",
        "memory_group": {
          "id": "group0"
        }
      }
    },
    {
      "tensor": {
        "shader_access": "writeonly",
        "dims": [ 1, 10, 64, 1 ],
        "format": "VK_FORMAT_R32_SFLOAT",
        "dst": "output.npy",
        "uid": "output_tensor",
        "memory_group": {
          "id": "group0"
        }
      }
    }
  ]
}
This example performs no calculations; it simply reads the input image data and saves it to the output.npy file. If the image data includes padding, the padding is discarded when saving to the NumPy file.
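One way to inspect the result (assuming a Python environment with NumPy is available) is to load the saved file and print its shape and data type:
python3 -c "import numpy as np; a = np.load('output.npy'); print(a.shape, a.dtype)"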
The following example shows a more realistic use of the memory aliasing feature. A preprocessing shader takes an image as its input and writes an image as its output. That output image is aliased to a tensor, which in turn is used as the input of a VGF dispatch whose input and output are tensors.
{
  "commands": [
    {
      "dispatch_compute": {
        "bindings": [
          { "set": 0, "id": 0, "resource_ref": "input_image" },
          { "set": 0, "id": 1, "resource_ref": "output_image" }
        ],
        "shader_ref": "image_shader",
        "rangeND": [64, 64, 1]
      }
    },
    {
      "dispatch_graph": {
        "bindings": [
          { "set": 0, "id": 2, "resource_ref": "input_tensor" },
          { "set": 0, "id": 3, "resource_ref": "output_tensor" }
        ],
        "shader_ref": "tensor_vgf",
        "rangeND": [64, 64, 1]
      }
    }
  ],
  "resources": [
    {
      "shader": {
        "uid": "image_shader",
        "src": "imageShader.spv",
        "entry": "main",
        "type": "SPIR-V"
      }
    },
    {
      "graph": {
        "uid": "tensor_vgf",
        "src": "tensorVgf.vgf"
      }
    },
    {
      "image": {
        "uid": "input_image",
        "dims": [1, 64, 64, 1],
        "shader_access": "image_read",
        "format": "VK_FORMAT_R16G16B16A16_SFLOAT"
      }
    },
    {
      "image": {
        "uid": "output_image",
        "dims": [1, 64, 64, 1],
        "mips": false,
        "format": "VK_FORMAT_R16G16B16A16_SFLOAT",
        "shader_access": "writeonly",
        "memory_group": {
          "id": "group0"
        }
      }
    },
    {
      "tensor": {
        "uid": "input_tensor",
        "dims": [1, 64, 64, 4],
        "format": "VK_FORMAT_R16_UINT",
        "shader_access": "readwrite",
        "memory_group": {
          "id": "group0"
        }
      }
    },
    {
      "tensor": {
        "uid": "output_tensor",
        "dims": [1, 64, 64, 4],
        "format": "VK_FORMAT_R16_UINT",
        "shader_access": "readwrite",
        "dst": "outputTensor.npy"
      }
    }
  ]
}
In this example, the Scenario Runner automatically inserts a memory barrier between the shader and graph dispatches so that the output image data is correctly shared with the "input_tensor" resource. Because NumPy files only allow single-component data types, the example uses the VK_FORMAT_R16_UINT type and multiplies the innermost "dims" value by 4 to approximate the VK_FORMAT_R16G16B16A16_SFLOAT type that the images use.
In general, the innermost dimension of the tensor must match the number of components of the image data type. The size of the tensor data type must also match the size of the image data type component.
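Applied to the example above, each VK_FORMAT_R16G16B16A16_SFLOAT texel holds four 16-bit components (8 bytes), and each group of four VK_FORMAT_R16_UINT tensor elements also occupies 4 × 2 = 8 bytes, so the [1, 64, 64, 1] image and the [1, 64, 64, 4] tensor describe the same 64 × 64 × 8 = 32768 bytes of memory.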
Using Emulation and Validation layers
- You can use the ML SDK Emulation Layer to enable the Scenario Runner to run on platforms that do not support the Tensor and Graph Vulkan® extensions. You must build the Emulation Layer before using it. To enable the Emulation Layer on Linux, set the following environment variables:
LD_LIBRARY_PATH=path/to/build/sw/vulkan-ml-emulation-layer/build/graph/:path/to/build/sw/vulkan-ml-emulation-layer/build/tensor/:$LD_LIBRARY_PATH
VK_LAYER_PATH=path/to/build/sw/vulkan-ml-emulation-layer/build/graph/:path/to/build/sw/vulkan-ml-emulation-layer/build/tensor/
VK_INSTANCE_LAYERS=VK_LAYER_ML_Graph_Emulation:VK_LAYER_ML_Tensor_Emulation
- To check for correct usage of the Scenario Runner’s Vulkan® API calls, you can use the Vulkan® Validation Layers. To enable the Vulkan® Validation Layers on Linux, set the following environment variables:
LD_LIBRARY_PATH={PATH_TO_VALIDATION_LAYERS}/build:$LD_LIBRARY_PATH
VK_LAYER_PATH={PATH_TO_VALIDATION_LAYERS}/build/layers
VK_INSTANCE_LAYERS=VK_LAYER_KHRONOS_validation
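For example, with the Validation Layer variables exported, the runner is then launched as usual (the scenario file name is a placeholder); the same pattern applies to the Emulation Layer variables listed above:
export LD_LIBRARY_PATH={PATH_TO_VALIDATION_LAYERS}/build:$LD_LIBRARY_PATH
export VK_LAYER_PATH={PATH_TO_VALIDATION_LAYERS}/build/layers
export VK_INSTANCE_LAYERS=VK_LAYER_KHRONOS_validation
./scenario-runner --scenario scenario.json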