
Torizon IDE Backend Architecture and Internals


Article updated at 19 Oct 2021


Providing the capability of running applications inside containers gives Torizon the ability to support many different languages and programming environments.

This, unfortunately, does not mean that everything will work out of the box.

Some effort is required to manage different aspects:

  • Build code for the right target (for languages that are compiled to native code)
  • Host the application inside a container
  • Deploy code and container to the target in an efficient way
  • Support remote debugging

Configuring all those things is time-consuming, error-prone, and may lead to non-optimal scenarios where users have to deal with too many details every time they build and run their applications. They may even end up with very inefficient development setups: long wait times after each minor change to the code, lots of manual operations to perform every time the application should run on the target, or no chance at all to debug code interactively.

Supporting all available editors, IDEs, and development environments would not be possible, but Toradex provides an open-source backend that can be integrated into all the development environments that provide some kind of extensibility and support for invoking REST-based APIs.

Toradex currently provides fully implemented extensions for:

  • Visual Studio 2019 (C/C++ using MSBuild)
  • Visual Studio Code (C/C++ with different build systems, Python and C#/.NET Core)

This article describes the features and architecture of the Torizon IDE backend. It can be useful for users of the extensions who want to understand the system and leverage its capabilities, and for developers willing to better integrate their favorite editor/IDE/language with Torizon.

The purpose of the IDE backend is to simplify the tasks of packaging an application as a container and deploying, running, and debugging it on a target device.

The IDE backend is implemented as a Python application called moses. The backend is designed to run in the background and receive requests via a REST API.

The API is declared using OpenAPI v2.0 (the same version supported by Docker) and can be found in the source tree under moses/swagger.yaml.

The IDE backend has been tested on Linux and Windows. It runs on the developer's machine and connects securely to the target devices via SSH.

This article complies with the Typographic Conventions for Torizon Documentation.

Basic Concepts

Before looking into the implementation, it can be useful to understand some basic concepts used by the IDE backend.

Debug/Release Configurations

The backend can deploy applications in debug or release mode, so many of the settings you’ll see in this document can be specified for "debug" configuration, for "release" configuration, or as “common”, using the same settings for both configurations.

When looking for a setting, the backend first tries to find a value specific to the current configuration. If it does not find one, it tries the common settings, and if this one also does not exist, it uses the default value.
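
This lookup order can be sketched in Python. This is a minimal illustration of the fallback rule, not the backend's actual code:

```python
def lookup(settings: dict, configuration: str, key: str, default=None):
    """Resolve a setting: configuration-specific value first, then "common",
    then the default value."""
    for section in (configuration, "common"):
        value = settings.get(section, {}).get(key)
        if value is not None:
            return value
    return default

settings = {"debug": {"ports": {"2222/tcp": None}}, "common": {"user": "torizon"}}
lookup(settings, "debug", "ports")                  # found in the debug section
lookup(settings, "release", "user")                 # falls back to common
lookup(settings, "release", "shell", default="sh")  # neither found: default value
```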


Platforms

A platform defines a class of applications (ex: C/C++ console applications, Qt for Python applications, etc.) and a CPU architecture (ex: ARM32, ARM64).

A platform can support one or more runtimes.

The runtime can be defined as a development language (C/C++, Python), a framework (.NET, ASP.NET), or a combination of the two (C/C++ for Qt, Qt for Python, etc.).

The goal of a platform is to provide a basic template that simplifies the steps required to build, deploy, run and debug an application on a device running Torizon.

A platform defines a base container image (optionally different images for debug and release configurations). This image is described by a Dockerfile that can be built as-is, but can also be used as a template to add application-specific features (more about this later).

Optionally, a platform can also provide an SDK. This is, again, a container template, one that configures the right environment for building an application without requiring the setup of complex toolchains on the developer's PC.

Platforms can be compatible with only some modules. For example, a 64-bit platform will support only 64-bit capable CPUs.

In addition to the container templates, the platform can also provide parameters used to run the container, additional scripts, and docker-compose files to run additional services and containers required by a specific kind of application (for example, to start the Wayland compositor when running a graphical application).

The platforms are defined using YAML and text files. Toradex-provided platforms are in the "platforms" subfolder of the Moses setup and should not be edited by users. Users may add additional platforms under the folder defined in the TIE_PLATFORMSPATH environment variable (if the variable is not defined, a directory named .moses/platforms inside the user's home folder will be used).

Platforms also have a set of generic properties that may be used by the IDE plugin to configure, for example, compiler parameters or other options.

Tags

Tags can be used in YAML files, dockerfiles, docker-compose files, and scripts; they are replaced at build/run time with values configured for a specific application.

Tags use the following format:

#%object.property%#

Properties can be exposed by platform and application objects. A detailed list will be in the reference section of this document.

Platform configuration files

The YAML file contains different parameters, described in the following paragraphs.

Platform description

  • standard: set to true for platforms that are part of the default setup; these should not be changed by users and may be updated by Toradex in new releases of the ide-extensions
  • name: short mnemonic name
  • description: longer and more detailed description used by the user interface
  • version: version of the platform

standard: true
name: python3 arm64v8
version: "1.0"
description: minimal python3 setup on debian

Supported devices

Platforms may require a specific CPU architecture or a specific module to work; this can be specified using the parameters described in this section.

  • supportedmodels: may contain a list of PID4 codes, or "*" to enable the platform for all modules
  • unsupportedmodels: list of modules that are not compatible with the platform; useful when the incompatibility comes from missing features (ex: the module has no GPU)
  • supportedarchitectures: restricts platform compatibility to a specific CPU architecture. Currently supported values are aarch64 and armv7l. This is mutually exclusive with the supportedmodels/unsupportedmodels parameters

supportedmodels: ["*"]
unsupportedmodels: []
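
As an illustration, the filtering these parameters imply could be implemented as below. This is a sketch; the function name and structure are hypothetical, not the backend's actual code:

```python
def platform_supports(platform: dict, model: str, architecture: str) -> bool:
    """Check whether a device (PID4 model, CPU architecture) matches the
    platform's compatibility parameters."""
    arches = platform.get("supportedarchitectures")
    if arches is not None:
        # supportedarchitectures is mutually exclusive with the model lists
        return architecture in arches
    if model in platform.get("unsupportedmodels", []):
        return False
    supported = platform.get("supportedmodels", [])
    return "*" in supported or model in supported

platform_supports({"supportedmodels": ["*"], "unsupportedmodels": ["0028"]},
                  "0028", "armv7l")  # model is explicitly unsupported
```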

Application Container information

The properties described here are used to build the container that will host the application running on the target device. Some of these parameters support different values for debug and release configurations (marked D/R below).

  • architecture: docker architecture used to build the container. Unfortunately Docker, Debian, and the kernel use different naming conventions for architectures; for this parameter the docker one is used (linux/arm or linux/arm64)
  • baseimage (D/R): tags of the base image used to build the container; the parameter is a list with the repository name first and the tag second
  • container (D/R): name of the template file used to build the container image; the file must be in the same folder as the YAML file

architecture: linux/arm64
baseimage:
    - torizon/arm64v8-debian-base
    - buster
container:
  debug: debug.dockerfile
  release: release.dockerfile


Runtimes

Runtimes are used by IDE extensions to filter platforms by language or technology.

  • runtimes: list of runtimes supported by the platform (ex: python3, dotnet, etc.)

runtimes:
  - python3

Container execution

A dockerfile defines the container image contents, but you may also need some additional information about how to run a specific container. The parameters described in this paragraph configure how the container should be executed.

  • ports (D/R): ports exposed by the container, as a dictionary with the port as key and a null or integer value. If the value is null, the port will be assigned at runtime by docker
  • volumes (D/R): additional mountpoints for docker volumes or local folders shared inside the container, as a dictionary with the volume name/local folder as key and the container folder as value
  • devices (D/R): list of device entries that should be re-created in the container
  • networks (D/R): list of networks that the container will be connected to
  • extraparms (D/R): additional, less common parameters, organized in a dictionary with the parameter name as key and the value as a YAML string, formatted according to the docker documentation
  • privileged: can be set to true to run the container in privileged mode (not recommended)

ports:
  common: { "6502/tcp": null }
  debug: {}
  release: {}
volumes:
  common: { "/tmp": "/tmp" }
  debug: {}
  release: {}
devices:
  common: [ "/dev/ttyUSB0" ]
  debug: []
  release: []
networks:
  common: []
  debug: []
  release: []
extraparms:
  common: {
    device_cgroup_rules: '[ "c 199:* rmw" , "c 226:* rmw" ]'
  }
  debug: {}
  release: {}
privileged: false
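
To make the mapping concrete, here is how such parameters translate into arguments for the Docker SDK for Python (client.containers.run). The helper below is a sketch for illustration, not the backend's actual implementation:

```python
def docker_run_kwargs(ports: dict, volumes: dict, devices: list,
                      extraparms: dict, privileged: bool = False) -> dict:
    """Build keyword arguments for docker-py's client.containers.run()."""
    kwargs = {
        "detach": True,
        # {"6502/tcp": None} lets docker assign the host port at runtime
        "ports": ports,
        "volumes": {src: {"bind": dst, "mode": "rw"} for src, dst in volumes.items()},
        "devices": [f"{d}:{d}:rwm" for d in devices],
        "privileged": privileged,
    }
    kwargs.update(extraparms)  # ex: device_cgroup_rules, network_mode, ...
    return kwargs

# usage with the Docker SDK for Python:
# client = docker.from_env()
# client.containers.run(image_tag, **docker_run_kwargs(...))
```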


SDK

Some platforms may provide a build environment to simplify building the application code. This is done using an SDK container. Depending on the IDE used, the container may be accessed directly (Visual Studio Code) or via SSH (Visual Studio 2019).

  • usesdk: must be set to true to enable SDK usage
  • usessh: set to true if the SDK exposes an SSH connection; the backend will take care of assigning a port to it
  • sdkbaseimage (D/R): base image for the SDK container, as a list of strings with the repository as first entry and the tag as second
  • sdkcontainer (D/R): template dockerfile used to build the SDK container image; the file must be in the platform folder
  • sdkcontainerusername: username of the container user (if SSH is used). Default is "build"
  • sdkcontainerpassword: password used for the SSH connection. Default is "build"

usesdk: true
usessh: false
sdkbaseimage:
    - torizon/debian-cross-toolchain-arm64
    - buster
sdkcontainer:
  common: sdk.dockerfile
  debug: null
  release: null

Custom properties

Some additional properties may be defined to support platform-specific features; for example, a C/C++ platform may specify the prefix of its toolchain. Those properties can be used in the container templates or accessed by specific IDE extensions.

  • props (D/R): a dictionary with property names and their string values

props:
  common:
    linkeroptions: "-Wl,--dynamic-linker=/lib/"
    prefix: "aarch64-linux-gnu-"
    vscode-arch: "arm64"
    gdb-arch: "aarch64"
    cmake-arch: "aarch64"
    debian-arch: "arm64"
    intellisensemode: "gcc-arm64"
  debug: {}
  release: {}

Additional scripts

Sometimes, to run or debug a container, you may need to run additional containers (ex: weston, if your container uses UI services via wayland) or to perform additional operations on the host OS (ex: creating a folder that will be bind-mounted in the container). For this purpose, users can define additional scripts and add the corresponding files to the platform folder.

  • dockercomposefile (D/R): docker-compose file used to start/stop additional services needed to run the platform container
  • startupscript (D/R): script that is executed before starting the application
  • shutdownscript (D/R): script that is executed when the application is stopped (can be used to perform cleanup operations)

dockercomposefile:
  common: null
  debug: null
  release: null
startupscript:
  common: null
  debug: null
  release: null
shutdownscript:
  common: null
  debug: null
  release: null

Container templates

Each platform can provide container templates for the target (in release and debug configurations) and, optionally, for the SDK.

Container templates are dockerfiles with some tags that will be replaced with platform or application configuration (discussed later) properties at build time.

Those tags are delimited by the #% and %# escape sequences. For example:

#%platform.baseimage%#

Below is a container template for a generic Debian C/C++ platform. As you can see, the template does not add any package on top of the base container, but tags can be redefined at the application level to add packages and provide some configuration.

FROM --platform=#%platform.architecture%# #%platform.baseimage%#


ENV DEBIAN_FRONTEND="noninteractive"



RUN if [ ! -z "#%application.extrapackages%#" ]; then \
    apt-get -q -y update \
    && apt-get -q -y install #%application.extrapackages%# \
    && rm -rf /var/lib/apt/lists/* ; \
    fi


USER #%application.username%#

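The tag replacement performed at build time can be sketched in Python. This is illustrative only, not the backend's actual implementation:

```python
import re

# matches #%object.property%# where object is "platform" or "application"
TAG = re.compile(r"#%(platform|application)\.([A-Za-z0-9_-]+)%#")

def expand_template(template: str, platform: dict, application: dict) -> str:
    """Replace #%object.property%# tags with configured property values."""
    objects = {"platform": platform, "application": application}
    return TAG.sub(lambda m: str(objects[m.group(1)].get(m.group(2), "")), template)

line = expand_template(
    "FROM --platform=#%platform.architecture%# #%platform.baseimage%#",
    {"architecture": "linux/arm64", "baseimage": "torizon/arm64v8-debian-base:buster"},
    {},
)
# line == "FROM --platform=linux/arm64 torizon/arm64v8-debian-base:buster"
```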

A debug container usually also includes components required by the IDE to be able to debug an application running inside that container.

For example, for Visual Studio C/C++, it will need an SSH server and gdb.


SDK containers

Some kinds of applications have to be compiled to native code before they can be deployed to a target.

For some languages (C#, Go, etc.), installing the compiler and keeping multiple versions of the development environment is not too complicated, and may even be managed directly by the IDE.

For other languages (C and C++ for example), configuring and installing the toolchain is not simple on a Linux machine and almost impossible on a Windows one.

Containers can be used to host a development environment tailored to a specific application.

This allows the installation of the same set of libraries required by the components installed in the runtime container, making builds simpler and consistent across different machines.

In some cases (like on Visual Studio 2019) SDKs are accessed via SSH. In any case, compilers and tools will run inside the container, in a sandboxed environment.

The SDK can also be configured using a template file, and it is possible to have different templates for debug and release builds.

Below is an example of a generic SDK container for debian-based applications. The base image already includes multi-arch support and the cross-compiler, so the SDK just installs the additional “-dev” packages required by a specific application.

FROM #%platform.sdkbaseimage%#

# commands that should be run before installing packages (ex: to add a feed or keys)

RUN apt-get -q -y update \
    && apt-get -q -y install \
    gdb rsync \
    && rm -rf /var/lib/apt/lists/*

RUN if [ ! -z "#%application.devpackages%#" ]; then \
    apt-get -q -y update \
    && apt-get -q -y install #%application.devpackages%# \
    && rm -rf /var/lib/apt/lists/* ; \
    fi

Define a custom platform

If you have multiple applications using the same libraries or sharing the same configuration, you may want to define your own custom platform.

To do this you have to provide a config.yaml file, a dockerfile template (or one for release and one for debug), and, optionally, an SDK template - that may also have debug and release variants.

You need to put those files inside a subfolder of .moses/platforms in your home folder. You have to use a unique name for the folder; this will be the platform id.

The IDE backend does not dynamically reload platforms, so you have to restart it to ensure that your new platforms are ready to be used.

The easiest way to create a new custom platform is to take one of the existing ones (stored under the platforms folder of the backend installation or source repo) and clone it, changing it to match your requirements.
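
The cloning step could be scripted as follows. This is a sketch; the helper name is illustrative and the paths follow the defaults described above:

```python
import shutil
from pathlib import Path

def clone_platform(source: Path, new_id: str) -> Path:
    """Copy an existing platform folder under ~/.moses/platforms.
    The destination folder name becomes the new platform id."""
    dest = Path.home() / ".moses" / "platforms" / new_id
    shutil.copytree(source, dest)
    # next: edit dest / "config.yaml" (name, description, base images, ...)
    # and restart the IDE backend so the new platform is loaded
    return dest
```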

You may also use a custom platform when you move to production, to ensure that no components/versions are changed by updates of the IDE extensions.

Application Configuration

An application configuration takes the generic definition provided by the platform and configures it for a specific scenario.

Usually, the Platform defines the general support required for a specific runtime/language, and the Application Configuration provides the additional details required to run a specific user application.

This "layered" approach avoids duplication of the basic settings, while leaving plenty of room for customization so you can run your code exactly the way you want.

The application configuration object always references a platform.

Application configuration is meant to be kept together with the application code, in a subfolder.

You may have multiple application configurations in the same codebase, each in its own separate subfolder.

The IDE backend service does not load all application configurations at startup: this would be time-consuming, and the service has no way to know where those configuration files are on your filesystem. So, before operating on an application object, it must be loaded by pointing the service to the right folder. This is usually done automatically by the IDE extensions.

The service can create an application, populating its folder and assigning it a unique id. This id will then be used when naming images, containers, etc., to avoid clashes with non-unique names from other apps.

Application configuration contains the same information related to container startup as the platform does (parameters can override or be in addition to those in the platform configuration, more on this in the following chapters).
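
A sketch of that merge rule, assuming application entries replace platform entries with the same key and extend the rest (consistent with how ports are described in the tags reference at the end of this article):

```python
def merge_run_settings(platform_value, application_value):
    """Application-level entries are merged with the platform ones:
    same keys are replaced, new keys/items are added."""
    if isinstance(platform_value, dict):
        return {**platform_value, **application_value}
    # list-valued settings (devices, networks): append the new items
    return list(platform_value) + [v for v in application_value if v not in platform_value]

merge_run_settings({"6502/tcp": None}, {"6502/tcp": 6502, "80/tcp": None})
# the application overrides the platform's port mapping and adds a new one
```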

Most of those parameters can be changed by the end-user using the IDE plugin's user interface; others, like image IDs, are used internally by the system.

The application configuration also contains keys used to automate ssh connections to the target container.

The application object is not only a data container; it also provides actions to:

  • Build the container for a specific configuration (debug/release) - This involves generating a real dockerfile by replacing tags inside the platform's template and then running docker build
  • Deploy the container to a specific device (more on devices later) - The container is deployed over SSH via docker save/docker load, which avoids uploading/downloading the image to/from a docker registry. The image will be deployed only if it is not already on the target
  • Run/stop the container on a target - Creates an instance of the image
  • Build the SDK container - A dockerfile is generated from the template and then built. If the SDK container was already running it will be restarted
  • Run/stop the SDK container on the developer's PC - Starts and stops the SDK container, enabling an SSH connection if needed


Application configuration files

The application configuration file is stored in the application configuration's root folder.

It also contains RSA keys, which make it not very readable; users should change the information inside it only via the IDE plugins.

Changing those files while the backend is running may lead to unpredictable results, since it is not guaranteed that manual changes will be preserved when the system needs to store additional information in the file.

If an SCM system is used, it's a good idea to store these files together with the code; this will allow all users to build the images using the same IDs.

When this is not desirable (ex: when publishing an application as open-source on GitHub), the system provides features to remove all the IDs and re-generate them on the next re-opening.

See below an example of the application config.yaml file.

# unique id of the application, assigned on creation time
id: 4b83c734-6675-42ea-aeff-441e21c64f1f
# platform used as base for this application
platformid: arm64v8-debian-base_buster
# user account used to run the application inside the container
username: torizon
# this is updated when service changes the application via REST
# calls, this will allow build systems to decide when a rebuild
# of the images is required
# Date is changed only when properties that may impact image
# build are modified
modificationdate: '2019-12-20T06:36:37.962539'
# information used to start the application container
# this information will be merged with the one provided
# by the platform (each field has debug, release and common parts)
devices:
    common: []
    debug: []
    release: []
ports:
    common: {}
    debug: {}
    release: {}
volumes:
    common: {}
    debug: {}
    release: {}
networks:
    common: []
    debug: []
    release: []
extraparms:
    common: {}
    debug: {}
    release: {}
# scripts and docker-compose file used to start additional
# containers/servers application-provided ones will override
# platform ones
dockercomposefile:
    common: null
    debug: null
    release: null
startupscript:
    common: null
    debug: null
    release: null
shutdownscript:
    common: null
    debug: null
    release: null
# custom application properties that can be used as tags in dockerfile
# templates
props:
    common:
        arg: ''
        buildcommands: ''
        buildfiles: ''
        command: ''
        devpackages: ''
        env: ''
        expose: ''
        extrapackages: ''
        preinstallcommands: ''
        sdkpostinstallcommands: ''
        sdkpreinstallcommands: ''
        targetfiles: ''
    debug:
        arg: 'ARG SSHUSERNAME=#%application.username%#'
    release: {}
# ID of the last built image (those should not be changed by users)
    debug: sha256:53df69db9df438b205a07c548d104872df861edb10cafbf7722215b46156f216
    release: null
    debug: dummy001_arm32v7-wayland-base-no-ssh_bullseye_debug_663389d5-2bd1-4935-b80b-c4ce4bcf839a
    release: dummy001_arm32v7-wayland-base-no-ssh_bullseye_release_663389d5-2bd1-4935-b80b-c4ce4bcf839a
# ssh information
privatekey: '-----BEGIN RSA PRIVATE KEY-----
    -----END RSA PRIVATE KEY-----'
publickey: ssh-rsa ...
# information about SDK image, those should not be changed by users.
    debug: sha256:40eb3348849f0431273e42769be50bf0872969e7beeac4d8875b2c1390e5bb47
    release: ''
    debug: dummy001_arm32v7-wayland-base-no-ssh_bullseye_debug_663389d5-2bd1-4935-b80b-c4ce4bcf839a_sdk_image
    release: dummy001_arm32v7-wayland-base-no-ssh_bullseye_release_663389d5-2bd1-4935-b80b-c4ce4bcf839a_sdk_image
# address of the SDK container (localhost:<port>)
    debug: null
    release: null

Work folder

During image builds or other operations, the backend needs to generate or acquire additional files, for example, the Dockerfile used to build an image, or files that need to be included into it or deployed to the target.

This kind of content can be re-generated at any time and is stored in a subfolder of the directory that hosts config.yaml, named "work". This folder can be safely ignored by backup or SCM systems.


Devices

The device object can be used to control and monitor processes, images, and containers running on a Torizon device.

It's also used during deployment and debug to actually deploy an application (defined via Application Configuration and Platform) to an actual running device.

Each device is identified using its unique Toradex serial number (PID8 - 8 digits).

Devices can be detected using a serial or network connection.

On detection, the device will be configured by enabling the Docker TCP/IP interface and adding the keys for automated SSH login to the selected user account (by default: torizon).

After configuration, the device will be rebooted. No software will be installed on the target device.

Any further connection will be performed over SSH, so a network connection is required to use the device with the IDE backend.

Devices can be used to implement monitoring functionality from an IDE, showing the status of the device in terms of resources, processes, and containers.

Detected devices are saved in folders under .moses/devices folder in the user’s home.

Each folder will be named using the device's unique ID (serial number). This will allow the system to provide some information about them even when they are offline.


Device configuration files

The device configuration file is created automatically on detection, and users should not edit it directly.

An exception could be the hostname, to replace it with an IP if your network does not resolve names correctly.

See below an example of a device config.yaml file.

# this may need to be edited if your system can't resolve hostnames
hostname: apalis-imx6-05040105.local
# descriptive name, used for UI, can be changed to a more descriptive one
name: Toradex Apalis iMX6Q/D Module on Apalis Evaluation Board(05040105)
# Toradex model id (PID4)
model: 0028
# CPU architecture
architecture: armv7l
# HW/SW version information
hwrev: V1.1C
kernelrelease: 5.4.2-0.0.0-devel+git.0a15b6b8f633
kernelversion: '#1 SMP Fri Dec 6 13:39:24 UTC 2019'
torizonversion: '19700101000000'
# user account information
username: torizon
homefolder: /home/torizon
# security keys
privatekey: '-----BEGIN RSA PRIVATE KEY-----
    -----END RSA PRIVATE KEY-----'
publickey: ...

The device provides actions to:

  • Read information about memory and storage
  • Read information about images and running containers
  • Delete images
  • Control containers (stop, start, delete)
  • Read the list of running processes on the host or on a specific container


Architecture

The IDE backend application runs on the developer's PC and provides an HTTP/REST interface, by default on port 5000.

The API is defined using OpenAPI, and the information is exchanged in JSON format over the HTTP connection.

The backend server accepts only local connections.

It will talk to different entities:

  • IDE plugin(s) to expose its features to the development environment
  • Local instance of docker for building containers and running SDK instances
  • Remote device(s) main OS to monitor resources and processes
  • Remote device(s) main OS to transfer container images and applications
  • Remote instance(s) of docker to create and monitor container instances

The server communicates with a local docker instance using standard docker APIs. This will use a socket on Linux and a TCP local connection on Windows.

The IDE can communicate with SDK containers running on the PC or with remote containers on the device; the extension will set up the configuration to make this process transparent to the end-user.

Ports and protocols used will depend on the specific runtime/debugger used.

Platforms should provide a debug configuration that sets this up in a way that is transparent to the end-user. Separate configurations are provided for debug and release, allowing users to avoid shipping any debugging components with the production version of the application.

Communication with SDK containers can happen via SSH (Visual Studio 2019) or by executing processes inside the container (Visual Studio Code).

Application deployment is controlled by the IDE plugin and can be done to a folder on the host that is then synchronized with the device (via rsync) and mounted inside the container (for debugging), or directly inside the container (for release).

Deploying to a shared folder during debugging can be more efficient, especially for applications that need external resources that do not change after each build.

The IDE backend also provides functions to generate a docker command line or a docker-compose file that can be used to run the containerized application. This is useful for running it on devices that are not configured for debugging (testing or production).

Application build

During the build step, the dockerfile template provided by the platform is converted into a complete dockerfile by replacing the tags with current values of application/platform properties.

Then the docker instance on the developer's PC is used to build the container image.

On Windows, the system automatically uses emulation to build an ARM container on an x86/x64 machine.

On Linux, emulation is enabled during Visual Studio Code startup.

Optionally the application code can be built using an SDK container providing the right toolchain for the target, including the right set of libraries matching the components in the target container.

This is what currently happens for C/C++ applications built using Visual Studio 2019 or Visual Studio Code.

Application deployment

First, the system checks if the application's container is already running and if the running instance is using the latest image (each image has a unique SHA256 identifier).

If this is not the case, or if the container is not running at all, the system deploys the new image over SSH using the docker save and docker load features. This avoids the need to upload and download the image to a remote docker registry over a potentially slow and unreliable internet connection.

Alternatively, it is possible to enable deployment via a docker registry. This is usually faster (mostly when minimal changes need to be deployed), but may require some additional configuration on the developer's PC to expose the ports used by the registry. See the Visual Studio Code Extension for Torizon article for more information.
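
The save/load flow can be sketched with the Docker SDK for Python. This is an illustration, not the backend's code: the backend connects over SSH, while this sketch reaches the device's Docker TCP interface directly, and the address and port are assumptions:

```python
def needs_transfer(local_image_id: str, remote_image_ids: list) -> bool:
    """Transfer only if the image's SHA256 id is not already on the target."""
    return local_image_id not in remote_image_ids

def deploy_image(image_name: str, device_address: str):
    import docker  # Docker SDK for Python

    local = docker.from_env()
    # assumed: the device exposes the Docker TCP interface on port 2375
    remote = docker.DockerClient(base_url=f"tcp://{device_address}:2375")
    image = local.images.get(image_name)
    if needs_transfer(image.id, [i.id for i in remote.images.list()]):
        # docker save on the development PC, docker load on the device
        remote.images.load(b"".join(image.save()))
```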

The application container is started according to the platform/application configuration.

In debug mode, the application is deployed using rsync between the developer's PC (or the SDK container) and a folder on the target or inside the remote container.

Application debug

For debug deployments, the system will not start the application itself, but it will start what is needed to allow the IDE’s debugger to connect and debug it.

This will depend on the development environment and runtime, and the platform’s base template should take care of adding and running the right components.

The debug platform for C/C++ applications in Visual Studio, for example, configures an SSH server and adds gdb to the image.

In Visual Studio Code:

  • C/C++ debugging uses gdbserver running on the target and gdb running in the SDK container
  • python debugging uses ptvsd (the Python debugger for Visual Studio)
  • .NET debugging uses an ssh connection to start vsdbg


REST API

The OpenAPI definition is self-documenting, and all API functions can be viewed and tested using the moses service itself through its Swagger API documentation panel.

Start the service and point your browser to http://localhost:5000/api/ui/ to see the APIs grouped by tag, corresponding to the entities described in the first chapter, and be able to call them and see the returned values.
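
From a script, calling the backend looks like the sketch below. The /api base path matches the Swagger UI address above, but the concrete endpoint paths should be taken from swagger.yaml; the one shown here is only a placeholder:

```python
import json
from urllib.request import urlopen

def backend_url(path: str, port: int = 5000) -> str:
    """Build the URL of a backend endpoint (the backend listens on localhost)."""
    return f"http://localhost:{port}/api{path}"

def backend_get(path: str):
    """Call the locally running IDE backend and decode the JSON response."""
    with urlopen(backend_url(path)) as response:
        return json.load(response)

# backend_get("/version")  # placeholder path: check swagger.yaml for the real ones
```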

Clients for the APIs can be automatically generated using different tools; the current Python and C# clients are generated using OpenAPI Generator CLI running in a container.

Each client also includes some documentation that can be used to better understand how to use the generated entities.

Command line interface

If you need to integrate the container build/deployment operations into CI/CD pipelines, you can use the command line wrapper for the ide-backend API. The tool is named tdskt (tdskt.exe on Windows) and provides access to most of the API's features. Torizon IDE Backend Command-line Interface provides an extended description of all the features of the tool.

Tags Reference

These tags can usually be modified using the IDE plugins' user interface, but in some specific scenarios it may be necessary to edit them manually inside the YAML configuration files.

| Tag | Type | Description |
| --- | --- | --- |
| platform.id | string | Unique id of the platform (folder name) |
| platform.name | string | Mnemonic name of the platform |
| platform.version | string | Version of the platform |
| platform.folder | path | Absolute path of the folder where the platform configuration is stored (can be used to add files to a container) |
| platform.baseimage | string | Base image of the container template (used in the FROM clause of the Dockerfile) |
| platform.sdkbaseimage | string | Base image of the SDK template (can be empty if the platform does not support an SDK) |
| platform.runtimes | string[] | Runtimes supported by the image. Currently supported runtimes are: ccpp, ccpp-no-ssh, python3, dotnet, aspnet |
| platform/application.ports | key/value pairs | Ports exposed by the container (those configured by the application configuration are merged with those provided by the platform, replacing entries with the same keys and adding the others) |
| platform/application.volumes | key/value pairs | Volumes mounted in the container ("key" is the local path or volume name, "value" is the path inside the container plus, optionally, ",ro" to mount it read-only) |
| platform/application.devices | string[] | List of paths of devices that should be mapped inside the container (ex: /dev/gpiochip0) |
| platform/application.networks | string[] | List of networks that should be connected to the container. For a network created by a docker-compose script associated with the application configuration, you have to prepend the application id to the actual name |
| platform/application.extraparams | key/value pairs | This tag can be used to specify additional custom settings. Check the Docker Python API documentation for the list of supported parameters. "Key" must be the parameter name, "value" must be the YAML representation of the value. For example, to set host network mode, add "network_mode" as key and "host" as value |
| platform/application.startupscript | relative path | Script launched before starting the container; tags can be used inside the script. The script must be in the same folder as the platform/application config file or in a subfolder, and the path must be relative. If the script is specified for both platform and application, only the application one is executed (but it can invoke the platform one, which is parsed and copied to the target anyway) |
| platform/application.shutdownscript | relative path | Script launched after the container has been stopped; tags can be used inside the script. The script must be in the same folder as the platform/application config file or in a subfolder. If the script is specified for both platform and application, only the application one is executed (but it can invoke the platform one, which is parsed and copied to the target anyway) |
| platform/application.dockercomposefile | relative path | docker-compose script used to start other containers required to run the application; tags can be used inside the script. The script must be in the same folder as the platform/application config file or in a subfolder, and the path must be relative. If the compose file is specified for both platform and application, only the application one is used |
| application.id | string | Application unique id (also used as a prefix for docker-compose created resources like volumes or networks) |
| application.expose | docker command | Ports exposed by the application in the format "EXPOSE NN NN", where NN are port numbers (ex: "EXPOSE 80 8080") |
| application.arg | docker command | Docker build arguments in the format ARG NAME=VALUE. Multiple values can be specified. Useful only if you plan to use the generated Dockerfile in a standalone build |
| application.env | docker command | Environment variables in the format ENV NAME=VALUE. Multiple entries can be specified and VALUE can contain other tags (ex: ENV FOLDER="/home/dummy" FILE="filename") |
| application.preinstallcommands | docker command | Commands executed during the container build before any package installation. The format must be the one used in Dockerfiles. Can be used to add Debian package feeds to the apt list, add security keys, etc. |
| application.extrapackages | string | Additional packages that should be installed inside the container. Multiple packages can be specified, separated by spaces |
| application.devpackages | string | Development packages that will be installed in the SDK container. If a package has architecture-specific versions, you have to specify the correct architecture (ex: libopencv:armhf or libopencv:arm64) |
| application.sdkpackages | string | Additional packages that will be installed in the SDK container. Can be used to install additional tools or compilers |
| application.buildfiles | docker command | Adds additional files to the image using ADD or COPY commands. Files must be placed inside the application configuration folder |
| application.buildcommands | docker command | Commands executed after all packages have been installed and the debugger and services configured. This gives you a chance to change the configuration before the actual command is executed |
| application.targetfiles | docker command | Commands executed at the end of the build; can be used to add files to the container (ex: pre-configuring services or overriding default configuration files) |
| application.targetcommands | docker command | Command executed when the container runs; can be used to override execution of the application in release containers |
| application.appname | string | Mnemonic name of the application; for applications created using Visual Studio Code it matches the folder name |
| application.exename | string | Relative path (from the application install folder) of the executable started when the container starts. Used only by VS Code |
| application.appargs | string | Optional arguments that should be passed to the application |
| application.username | string | Username used to run the container CMD. Other commands are executed as root |
| application.sdkpreinstallcommands | docker command | Commands executed before installing packages into the SDK container; can be used to add Debian feeds or keys |
| application.sdkpostinstallcommands | docker command | Commands executed after devpackages and sdkpackages have been installed |
| application.main | string | Used only for Python applications. Name of the Python file containing the main entry point |
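To illustrate how several of these tags combine, a hypothetical application configuration fragment might look like the following. The exact file layout and key nesting are assumptions; refer to a configuration actually generated by the IDE extension for the authoritative structure:

```yaml
# Hypothetical application configuration fragment (YAML)
ports:
  "8080/tcp": "8080"              # container port -> host port
volumes:
  "/home/user/data": "/data,ro"   # host path -> container path, mounted read-only
devices:
  - "/dev/gpiochip0"              # GPIO device mapped into the container
extraparams:
  network_mode: host              # passed through to the Docker Python API
extrapackages: "nano curl"        # extra Debian packages, space-separated
```

Each extraparams value must be the YAML representation of the corresponding Docker Python API parameter value, so strings, lists, and mappings are all allowed where the API accepts them.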

Both applications and platforms provide a generic entry named “props” where you can specify your own properties; these are replaced as tags using the same logic applied to the standard tags.
In the extension UI, they are referenced as "custom properties".
You can define your custom tags and use them in your Dockerfile templates or inside other tags.
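As an example, a custom property defined under "props" could be consumed from a Dockerfile template. Both the property name and the exact tag path used to reference it below are illustrative assumptions; check a template generated by the extension for the precise tag syntax on your version:

```yaml
# In the application YAML configuration:
props:
  extratool: "htop"
```

```dockerfile
# In the Dockerfile template, the tag expands to the property's value
# (tag path shown here is an assumption):
RUN apt-get update && apt-get install -y --no-install-recommends #%application.extratool%#
```

Because custom properties go through the same substitution engine as the standard tags, they can also be referenced from startup/shutdown scripts and docker-compose files.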