Utilities Tasks

Logging

Log

The Log task writes a message to the workflow log.

task: "Utilities/Log@1"
name: log
inputs:
  message: "Hello World"

Mapping

Map

The Map task maps values from one object to another, resolving templated expressions along the way. The result of the mapping is stored in the output variable.

task: "Utilities/Map@1"
name: map
inputs:
subject: "Hello World"
emailBody: "Hello {{subject}}"
outputs:
- name: "result"
mapping: "result"

Output:

{
  "result": {
    "subject": "Hello World",
    "emailBody": "Hello Hello World"
  }
}

Error

The Error task throws an error with the given message.

task: "Utilities/Error@1"
name: error
inputs:
message: "Error message"

HTTP Request

The HTTP Request task handles REST API communications with advanced features like retry policies, response caching, and event tracking.

Key Features:

  • Automatic content handling for JSON/x-www-form-urlencoded/XML
  • Exponential backoff retry mechanism
  • Response caching with multiple expiration strategies
  • Security header sanitization for event logging
  • Multi-format response parsing (JSON, XML, PDF base64 encoding)

YAML Structure:

task: "Utilities/HttpRequest@1"
name: httpRequest
inputs:
url: "https://api.example.com/data"
method: "POST"
contentType: "application/json" # Supported: application/json, application/x-www-form-urlencoded
retryOptions:
limit: 3 # Max retry attempts
codes: [500, 503] # HTTP status codes triggering retries
enableActionEvents: true
eventName: "API_CALL_EVENT"
responseContentType: "application/json" # Auto-detected from response headers
cache:
enabled: true
duration: "1h" # Supports "EndOfDay|TZ" (e.g., "EndOfDay|CST")
useSlidingExpiration: false # Reset expiration window on access
headers:
- name: "Authorization"
value: "Bearer ${API_TOKEN}"
body:
key: "value"
outputs:
- name: "response"
mapping: "response"

Attribute Details:

| Input Parameter | Description |
| --- | --- |
| retryOptions.limit | Maximum retry attempts using exponential backoff (2^attempt seconds) |
| cache.duration | Supports ISO 8601 durations or "EndOfDay" with an optional timezone |
| contentType | Defaults to JSON. Form-encoded data requires key-value pairs in body |
| responseContentType | Fallback if the Content-Type header is missing. Supports JSON/XML auto-conversion |
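
For example, the following sketch (the task name and endpoint are hypothetical) caches a GET response until end of day, Central Time, using the "EndOfDay|TZ" duration format from the table above:

task: "Utilities/HttpRequest@1"
name: getDailyRates # hypothetical task name
inputs:
  url: "https://api.example.com/rates" # placeholder endpoint
  method: "GET"
  cache:
    enabled: true
    duration: "EndOfDay|CST" # expires at end of day in the given timezone
    useSlidingExpiration: false
outputs:
  - name: "rates"
    mapping: "response"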

Response Structure:

{
  "response": {
    "statusCode": 200,
    "headers": {
      "content-type": "application/json"
    },
    "body": {
      "data": "payload" // Type varies by content:
                        // - JSON: object
                        // - XML: converted JSON
                        // - PDF: base64 string
                        // - Other: raw string
    }
  }
}

Best Practices:

  1. Use EndOfDay caching for daily batch operations
  2. Avoid caching POST/PUT methods except for idempotent operations
  3. Use environment variables for sensitive headers like Authorization
  4. Enable retry for 5xx status codes and network failures
  5. Use sliding expiration for frequently accessed static data

Error Handling:

  • Throws WorkflowRuntimeException for non-2xx responses
  • Retries follow pattern: 2s, 4s, 8s delays by default
  • Failed requests generate action events with sanitized details

Circuit Breaker:

HTTP requests include an automatic per-host circuit breaker that prevents cascading failures when a remote service is down or rate-limiting.

| Behavior | Details |
| --- | --- |
| Failure threshold | 5 consecutive failures per host opens the circuit |
| Break duration | 30 seconds by default |
| 429 handling | Honors the Retry-After header; the break duration uses the server-specified delay |
| Tripping errors | HttpRequestException, HTTP 429 (Too Many Requests), 502 (Bad Gateway), 503 (Service Unavailable) |
| Recovery | Circuit closes automatically after the break duration expires; the next successful request clears the failure counter |

When the circuit is open for a host, subsequent HTTP requests to that host fail immediately without making a network call, protecting both the workflow engine and the remote service.

Download File

The Download File task handles file downloads with caching and retry capabilities.

task: "Utilities/DownloadFile@1"
name: downloadFile
inputs:
url: "https://jsonplaceholder.typicode.com/todos/1.zip"
fileName: "file.zip" # Optional - File name to save the downloaded file as
method: "POST"
headers:
- name: "Content-Type"
value: "application/json"
body:
name: "John Doe"
email: ""
decompress:
enabled: true
type: "zip" # Optional - Type of compression to use (zip, gzip, tar, rar, 7z)
filter: "*.csv" # Optional - Filter to apply to the decompressed files
cache:
enabled: true
duration: "1h" # Supports ISO 8601 or "EndOfDay|TZ"
useSlidingExpiration: false
retryOptions:
limit: 3
codes: [500, 503]
outputs:
- name: "files"
mapping: "files"
- name: "folder"
mapping: "folder"

Attribute Details:

| Input Parameter | Description |
| --- | --- |
| cache.duration | File cache duration (same format as HTTP Request) |
| cache.enabled | Enable disk caching of downloaded files |
| retryOptions.limit | Max retry attempts with exponential backoff |
| decompress | Decompress the downloaded file if it is compressed |

Output:

{
  "files": ["/temp/unzipped/file1", "/temp/unzipped/file2"],
  "folder": "/temp/unzipped"
}

CsvParse

The CsvParse task (Utilities/CsvParse@1) parses CSV data from a URL (http://, https://, or file://) into a list of dictionary records. It supports streaming parsing, custom delimiters, explicit column definitions, and built-in deduplication via distinct.

Inputs:

| Input | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| url | string | yes | | URL to the CSV file (http://, https://, or file://) |
| hasHeader | bool | no | true | Whether the first row contains column headers |
| delimiter | string | no | , | Field delimiter. Supports escape sequences (e.g., \t for tab) |
| columns | string[] | no | | Explicit column names. When provided, the file header row is ignored and these names are used instead |
| distinct | string[] | no | | Deduplicate records by the specified fields. When set, only unique combinations are returned, projected down to just the distinct fields |

Output: result.records (array of dicts), result.count (int), result.hasRecords (boolean).

Behavior: Column headers are automatically converted to camelCase (e.g., a "State Code" header becomes stateCode). Empty or whitespace-only headers become UnnamedColumn{index}. Blank rows are skipped.

# Basic CSV parse
task: "Utilities/CsvParse@1"
name: parseCsv
inputs:
  url: "https://example.com/data.csv"
outputs:
  - name: "records"
    mapping: "result.records"

# Tab-delimited with explicit columns
task: "Utilities/CsvParse@1"
name: parseTsv
inputs:
  url: "{{ fileUrl }}"
  delimiter: "\\t"
  columns: ["code", "name", "country"]
outputs:
  - name: "records"
    mapping: "result.records"

# Extract distinct states from postal codes
task: "Utilities/CsvParse@1"
name: parsePostalCodes
inputs:
  url: "{{ fileUrl }}"
  distinct: ["stateCode", "stateName", "countryCode"]
outputs:
  - name: "states"
    mapping: "result.records"

GroupBy

The GroupBy task (Utilities/GroupBy@1) groups a collection of dictionaries by one or more fields, producing a list of { key, values } groups. Useful for splitting imported data into logical batches before processing each group in a loop.

Inputs:

| Input | Type | Required | Description |
| --- | --- | --- | --- |
| collection | List<Dictionary> | yes | The collection of records to group |
| by | string[] | yes | Field names to group by (case-insensitive matching) |

Output: items (array of { key, values } groups), count (number of groups).

task: "Utilities/GroupBy@1"
name: groupByCustomer
inputs:
collection: "{{ data.parseCsv.records }}"
by: ["customerId"]
outputs:
- name: "groups"
mapping: "items"
- name: "groupCount"
mapping: "count"

Set Variable

The Set Variable task sets a workflow variable. In the example below, value extends an existing object (getData.results) and overrides individual fields via mapping entries.

task: "Utilities/SetVariable@1"
name: setVariable
inputs:
name: "getData.results"
value:
extends: "getData.results"
mapping:
- name: "name"
value: "John Doe"
- name: "email"
value: "john.doe@example.com"

Sequence Number/Get

The SequenceNumber/Get task returns the next value of the named sequence.

task: "SequenceNumber/Get@1"
name: getSequenceNumber
inputs:
sequenceNumber: "OrderNumber"

Utilities/Export

The Utilities/Export task is used to export data to a file. The exported file URL is stored in the output variable.

task: "Utilities/Export@1"
name: export
inputs:
name: "contacts" # Name of the export, added to the file name
fileType: "Csv" # File type to export (Csv, Xlsx, Json)
data:
- name: "John Doe"
email: "john.doe@example.com"
outputs:
- name: "fileUrl"
mapping: "fileUrl" # URL of the exported file

Utilities/Lock (Proposed)

The Utilities/Lock workflow task acquires a lock on a named resource, retrying at the configured interval until the lock is obtained or the timeout elapses.

task: "Utilities/Lock@1"
name: lock
inputs:
resource: "resourceName"
lockDuration: 60 # Optional - Lock duration in seconds
timeout: 60 # Optional - Timeout in seconds
retryInterval: 5 # Optional - Retry interval in seconds

Utilities/Unlock (Proposed)

The Utilities/Unlock workflow task releases a lock previously acquired with Utilities/Lock.

task: "Utilities/Unlock@1"
name: unlock
inputs:
resource: "resourceName"

Utilities/ValidateHMAC@1

The Utilities/ValidateHMAC task validates an HMAC signature. It is typically used to verify webhook requests.

task: "Utilities/ValidateHMAC@1"
name: validateHMAC
inputs:
secret: "your-secret-key"
algorithm: "SHA256"
signature: "signature"
payload: "payload"
outputs:
- name: "isValid"
mapping: "isValid"

Utilities/ValidateReCaptcha@1

The Utilities/ValidateReCaptcha task validates a reCAPTCHA response. It is typically used to verify that a form submission came from a human.

task: "Utilities/ValidateReCaptcha@1"
name: validateReCaptcha
inputs:
secret: "your-recaptcha-secret"
token: "token"
throwException: true # Optional - Throw an exception if the validation fails
version: "v3" # Optional - Version of the ReCaptcha (v2, v3)
outputs:
- name: "isValid"
mapping: "isValid"

Utilities/Template

The Utilities/Template task renders a Handlebars template.

Inputs:

  • template - The template to render. Use the $raw property to pass templated strings.
  • data - The data to render the template with.

Outputs:

  • result - The rendered template.

task: "Utilities/Template@1"
name: template
inputs:
  template:
    $raw: "{{name}} is {{age}} years old."
  data:
    name: "John Doe"
    age: 30
outputs:
  - name: "result"
    mapping: "result"

UnzipFile

The UnzipFile task (Utilities/UnzipFile@1) extracts ZIP archives to a temporary directory within the workflow execution context. Extracted file paths are returned so subsequent tasks (e.g., Utilities/CsvParse@1, PostalCodes/Import@1) can access them via file:// URLs.

Inputs:

| Input | Type | Required | Description |
| --- | --- | --- | --- |
| filePath | string | one of filePath/fileUrl | Local file path to a ZIP file |
| fileUrl | string | one of filePath/fileUrl | URL to a ZIP file (http://, https://, or file://) |
| filePattern | string | no | Glob-style filter for extracted files (e.g., *.csv, data_*.json). When omitted, all files are returned |

Output: Files (string[] — absolute paths), Count (int). Temp directory is auto-cleaned when workflow completes.

task: "Utilities/UnzipFile@1"
name: extract
inputs:
filePath: "{{ download.downloadZip.response.FilePath }}"
filePattern: "*.csv"
outputs:
- name: csvFiles
mapping: "Files"

Import Tasks

Import tasks handle bulk data ingestion. They share a common pattern: accept data from a file URL, stream, or in-memory collection; delegate to a MediatR command; and return a standardized result with added, updated, and errors counts. All support file:// URLs via UrlStreamHelper.

Common Output Format

{
  "result": {
    "success": true,
    "added": 150,
    "updated": 30,
    "errors": [],
    "totalProcessed": 180,
    "hasErrors": false
  }
}

Order/Import@1

Imports orders from CSV, JSON, or XLSX files.

| Input | Type | Required | Description |
| --- | --- | --- | --- |
| organizationId | int | yes | Organization context (auto-injected) |
| fileUrl | string | one of fileUrl/stream/orders | URL to the import file |
| fileType | FileType | no | csv, json, or xlsx. Auto-detected from the URL when omitted |
| stream | Stream | one of fileUrl/stream/orders | Direct stream input (requires fileType) |
| orders | List<Dictionary> | one of fileUrl/stream/orders | In-memory order data |
| options | ImportOrderOptions | no | Import options (match-by fields, update behavior) |
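
A minimal sketch of a file-based order import (the file URL is a placeholder; the result mapping follows the Common Output Format above):

task: "Order/Import@1"
name: importOrders
inputs:
  organizationId: "{{ organizationId }}"
  fileUrl: "https://example.com/orders.csv" # placeholder URL
  fileType: "csv"
outputs:
  - name: "importResult"
    mapping: "result"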

States/Import@1

Imports US states / provinces / regions. Incoming states are deduplicated by match key before processing.

| Input | Type | Required | Description |
| --- | --- | --- | --- |
| organizationId | int | yes | Organization context |
| fileUrl | string | one of fileUrl/stream/states | URL to the import file |
| fileType | FileType | no | Auto-detected from the URL when omitted |
| states | List<Dictionary> | one of fileUrl/stream/states | In-memory state data |
| matchByFields | string[] | no | Fields used to match existing states for upsert |
| updateIfExists | bool | no | Defaults to true. Whether to update matched states |

PostalCodes/Import@1

Imports postal/ZIP codes with state association. Match-key fields are not overwritten on re-import.

| Input | Type | Required | Description |
| --- | --- | --- | --- |
| organizationId | int | yes | Organization context |
| fileUrl | string | one of fileUrl/stream/postalCodes | URL to the import file |
| fileType | FileType | no | Auto-detected from the URL when omitted |
| postalCodes | List<PostalCodeExportDto> | one of fileUrl/stream/postalCodes | In-memory postal code data |
| matchByFields | string[] | no | Fields used for matching. Falls back to fipsCode when matching states |

TrackingEvent/Import@1

Imports tracking events for a specific order. Automatically links events to commodities via CommodityId foreign key.

| Input | Type | Required | Description |
| --- | --- | --- | --- |
| orderId | int | yes | Target order ID |
| events | List<Dictionary> | yes | Tracking event data |
| matchByFields | string[] | no | Default: ["eventDefinitionName", "eventDate"] |
| skipIfExists | bool | no | Defaults to true. Skip events that already exist |
| createEventDefinitions | bool | no | Defaults to true. Auto-create missing event definitions |
| eventDefinitionDefaults | Dictionary | no | Default values for new event definitions |
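
A minimal sketch of an in-memory event import; the event name and date are illustrative, and the field names follow the default matchByFields:

task: "TrackingEvent/Import@1"
name: importTrackingEvents
inputs:
  orderId: "{{ orderId }}"
  events:
    - eventDefinitionName: "Delivered" # illustrative event
      eventDate: "2024-01-15T10:30:00Z"
outputs:
  - name: "importResult"
    mapping: "result"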

Example: Full ZIP-to-Import Pipeline

activities:
  - name: download
    steps:
      - task: "Utilities/HttpRequest@1"
        name: downloadZip
        inputs:
          url: "{{ sourceUrl }}"
          method: "GET"
          saveToFile: true

      - task: "Utilities/UnzipFile@1"
        name: extract
        inputs:
          filePath: "{{ download.downloadZip.response.FilePath }}"
          filePattern: "*.csv"

      - task: "Utilities/CsvParse@1"
        name: parsePostalCodes
        inputs:
          url: "file://{{ download.extract.Files[0] }}"
          distinct: ["stateCode", "stateName", "countryCode"]

      - task: "States/Import@1"
        name: importStates
        inputs:
          organizationId: "{{ organizationId }}"
          states: "{{ download.parsePostalCodes.states }}"

      - task: "PostalCodes/Import@1"
        name: importPostalCodes
        inputs:
          organizationId: "{{ organizationId }}"
          fileUrl: "file://{{ download.extract.Files[0] }}"
          fileType: "csv"

Move File

The Move File task moves or renames a file in storage.

task: "Utilities/MoveFile@1"
name: moveFile
inputs:
sourcePath: "uploads/temp/file.pdf"
destinationPath: "documents/orders/{{ orderId }}/file.pdf"

Inputs

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| sourcePath | string | Yes | Source file path in storage |
| destinationPath | string | Yes | Destination file path in storage |

ResolveTimezone

The ResolveTimezone task resolves the IANA timezone identifier and current UTC offset for a given geographic coordinate (latitude/longitude). It uses the GeoTimeZone library for offline timezone boundary lookup.

task: "Utilities/ResolveTimezone@1"
name: resolveTimezone
inputs:
latitude: "{{ Data.GetPostalCode.postalCode.location.y }}"
longitude: "{{ Data.GetPostalCode.postalCode.location.x }}"
outputs:
- name: timezone
mapping: "timezoneId?"
- name: offset
mapping: "utcOffset?"

Inputs

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| latitude | double (or string) | Yes | Geographic latitude. Strings are parsed automatically |
| longitude | double (or string) | Yes | Geographic longitude. Strings are parsed automatically |

Outputs

| Output | Type | Description |
| --- | --- | --- |
| timezoneId | string | IANA timezone identifier (e.g., America/Chicago, Europe/Berlin) |
| utcOffset | double | Current UTC offset in hours (e.g., -6 for CST, 1 for CET). Accounts for DST |
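
For example, a coordinate in Chicago resolved during standard time would produce:

{
  "timezoneId": "America/Chicago",
  "utcOffset": -6
}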

Error Handling

Throws a WorkflowRuntimeException if latitude or longitude is missing or cannot be parsed to a number.