Utilities Tasks
Logging
Log
The Log task writes a message to the workflow log.
task: "Utilities/Log@1"
name: log
inputs:
message: "Hello World"
Mapping
Map
The Map task maps values from one object to another. The result of the mapping is stored in the output variable.
task: "Utilities/Map@1"
name: map
inputs:
subject: "Hello World"
emailBody: "Hello {{subject}}"
outputs:
- name: "result"
mapping: "result"
Output:
{
"result": {
"subject": "Hello World",
"emailBody": "Hello Hello World"
}
}
Error
The Error task is used to throw an error.
task: "Utilities/Error@1"
name: error
inputs:
message: "Error message"
HTTP Request
The HTTP Request task handles REST API communications with advanced features like retry policies, response caching, and event tracking.
Key Features:
- Automatic content handling for JSON/x-www-form-urlencoded/XML
- Exponential backoff retry mechanism
- Response caching with multiple expiration strategies
- Security header sanitization for event logging
- Multi-format response parsing (JSON, XML, PDF base64 encoding)
YAML Structure:
task: "Utilities/HttpRequest@1"
name: httpRequest
inputs:
url: "https://api.example.com/data"
method: "POST"
contentType: "application/json" # Supported: application/json, application/x-www-form-urlencoded
retryOptions:
limit: 3 # Max retry attempts
codes: [500, 503] # HTTP status codes triggering retries
enableActionEvents: true
eventName: "API_CALL_EVENT"
responseContentType: "application/json" # Auto-detected from response headers
cache:
enabled: true
duration: "1h" # Supports "EndOfDay|TZ" (e.g., "EndOfDay|CST")
useSlidingExpiration: false # Reset expiration window on access
headers:
- name: "Authorization"
value: "Bearer ${API_TOKEN}"
body:
key: "value"
outputs:
- name: "response"
mapping: "response"
Attribute Details:
| Input Parameter | Description |
|---|---|
| retryOptions.limit | Maximum retry attempts using exponential backoff (2^attempt seconds) |
| cache.duration | Supports ISO 8601 durations or "EndOfDay" with optional timezone |
| contentType | Defaults to JSON. Form-encoded data requires key-value pairs in body |
| responseContentType | Fallback if Content-Type header missing. Supports JSON/XML auto-conversion |
Response Structure:
{
"response": {
"statusCode": 200,
"headers": {
"content-type": "application/json"
},
"body": {
"data": "payload" // Type varies by content:
// - JSON: object
// - XML: converted JSON
// - PDF: base64 string
// - Other: raw string
}
}
}
Best Practices:
- Use EndOfDay caching for daily batch operations
- Avoid caching POST/PUT methods except for idempotent operations
- Use environment variables for sensitive headers like Authorization
- Enable retry for 5xx status codes and network failures
- Use sliding expiration for frequently accessed static data
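An EndOfDay duration can be read as "expire at the next local midnight in the stated timezone". Below is a minimal sketch of that computation, assuming IANA zone names; the engine's handling of abbreviations like CST may differ.

```python
from datetime import datetime, time, timedelta
from zoneinfo import ZoneInfo

def end_of_day_expiry(now: datetime, tz_name: str) -> datetime:
    """Absolute cache expiration at the next local midnight in tz_name."""
    local = now.astimezone(ZoneInfo(tz_name))
    # Midnight at the start of the following local day
    return datetime.combine(local.date() + timedelta(days=1), time.min,
                            tzinfo=local.tzinfo)

now = datetime(2024, 6, 1, 18, 30, tzinfo=ZoneInfo("America/Chicago"))
print(end_of_day_expiry(now, "America/Chicago"))
```

This is why EndOfDay suits daily batch feeds: every consumer sees the same cached payload until the local day rolls over, regardless of when they first requested it.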
Error Handling:
- Throws WorkflowRuntimeException for non-2xx responses
- Retries follow pattern: 2s, 4s, 8s delays by default
- Failed requests generate action events with sanitized details
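The default retry schedule follows 2^attempt seconds, which produces the 2s, 4s, 8s pattern for a limit of 3. A small sketch of that schedule:

```python
def retry_delays(limit: int, base: int = 2) -> list[int]:
    """Seconds to wait before each retry attempt: base^1, base^2, ..."""
    return [base ** attempt for attempt in range(1, limit + 1)]

print(retry_delays(3))  # → [2, 4, 8]
```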
Circuit Breaker:
HTTP requests include an automatic per-host circuit breaker that prevents cascading failures when a remote service is down or rate-limiting.
| Behavior | Details |
|---|---|
| Failure threshold | 5 consecutive failures per host opens the circuit |
| Break duration | 30 seconds by default |
| 429 handling | Honors Retry-After header — the break duration uses the server-specified delay |
| Tripping errors | HttpRequestException, HTTP 429 (Too Many Requests), 502 (Bad Gateway), 503 (Service Unavailable) |
| Recovery | Circuit closes automatically after break duration expires; next successful request clears the failure counter |
When the circuit is open for a host, subsequent HTTP requests to that host fail immediately without making a network call, protecting both the workflow engine and the remote service.
Download File
The Download File task handles file downloads with caching and retry capabilities.
task: "Utilities/DownloadFile@1"
name: downloadFile
inputs:
url: "https://jsonplaceholder.typicode.com/todos/1.zip"
fileName: "file.zip" # Optional - File name to save the downloaded file as
method: "POST"
headers:
- name: "Content-Type"
value: "application/json"
body:
name: "John Doe"
email: ""
decompress:
enabled: true
type: "zip" # Optional - Type of compression to use (zip, gzip, tar, rar, 7z)
filter: "*.csv" # Optional - Filter to apply to the decompressed files
cache:
enabled: true
duration: "1h" # Supports ISO 8601 or "EndOfDay|TZ"
useSlidingExpiration: false
retryOptions:
limit: 3
codes: [500, 503]
outputs:
- name: "files"
mapping: "files"
- name: "folder"
mapping: "folder"
Attribute Details:
| Input Parameter | Description |
|---|---|
cache.duration | File cache duration (same format as HTTP Request) |
cache.enabled | Enable disk caching of downloaded files |
retryOptions.limit | Max retry attempts with exponential backoff |
decompress | Decompress the downloaded file if the file is compressed |
Output:
{
"files": ["/temp/unzipped/file1", "/temp/unzipped/file2"],
"folder": "/temp/unzipped"
}
CsvParse
The CsvParse task (Utilities/CsvParse@1) parses CSV data from a URL (http://, https://, or file://) into a list of dictionary records. It supports streaming parsing, custom delimiters, explicit column definitions, and built-in deduplication via distinct.
Inputs:
| Input | Type | Required | Default | Description |
|---|---|---|---|---|
| url | string | yes | | URL to the CSV file (http://, https://, or file://) |
| hasHeader | bool | no | true | Whether the first row contains column headers |
| delimiter | string | no | , | Field delimiter. Supports escape sequences (e.g., \t for tab) |
| columns | string[] | no | | Explicit column names. When provided, the file header row is ignored and these names are used instead |
| distinct | string[] | no | | Deduplicate records by the specified fields. When set, only unique combinations are returned, projected down to just the distinct fields |
Output: result.records (array of dicts), result.count (int), result.hasRecords (boolean).
Behavior: Column headers are automatically converted to camelCase. Empty/whitespace headers become UnnamedColumn{index}. Blank rows are skipped.
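The header normalization described above can be sketched as follows; the exact casing rules applied by the task are an assumption here:

```python
import re

def normalize_header(header: str, index: int) -> str:
    """camelCase a CSV header; blank headers become UnnamedColumn{index}."""
    if not header or not header.strip():
        return f"UnnamedColumn{index}"
    parts = [p for p in re.split(r"[\s_\-]+", header.strip()) if p]
    first, *rest = parts
    return (first[0].lower() + first[1:]
            + "".join(w[0].upper() + w[1:].lower() for w in rest))

print(normalize_header("State Code", 0))  # → stateCode
print(normalize_header("   ", 3))         # → UnnamedColumn3
```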
# Basic CSV parse
task: "Utilities/CsvParse@1"
name: parseCsv
inputs:
url: "https://example.com/data.csv"
outputs:
- name: "records"
mapping: "result.records"
# Tab-delimited with explicit columns
task: "Utilities/CsvParse@1"
name: parseTsv
inputs:
url: "{{ fileUrl }}"
delimiter: "\\t"
columns: ["code", "name", "country"]
outputs:
- name: "records"
mapping: "result.records"
# Extract distinct states from postal codes
task: "Utilities/CsvParse@1"
name: parsePostalCodes
inputs:
url: "{{ fileUrl }}"
distinct: ["stateCode", "stateName", "countryCode"]
outputs:
- name: "states"
mapping: "result.records"
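The distinct behavior (deduplicate and project down to the listed fields) can be modeled like this; `distinct_records` is an illustrative helper, not part of the task API:

```python
def distinct_records(records: list[dict], fields: list[str]) -> list[dict]:
    """Keep the first occurrence of each field combination, projected
    down to just the distinct fields."""
    seen: set[tuple] = set()
    out: list[dict] = []
    for rec in records:
        key = tuple(rec.get(f) for f in fields)
        if key not in seen:
            seen.add(key)
            out.append({f: rec.get(f) for f in fields})
    return out
```

For the postal-code example above, thousands of rows collapse to one record per unique state, which is exactly what a downstream States/Import@1 step needs.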
GroupBy
The GroupBy task (Utilities/GroupBy@1) groups a collection of dictionaries by one or more fields, producing a list of { key, values } groups. Useful for splitting imported data into logical batches before processing each group in a loop.
Inputs:
| Input | Type | Required | Description |
|---|---|---|---|
| collection | List<Dictionary> | yes | The collection of records to group |
| by | string[] | yes | Field names to group by (case-insensitive matching) |
Output: items (array of { key, values } groups), count (number of groups).
task: "Utilities/GroupBy@1"
name: groupByCustomer
inputs:
collection: "{{ data.parseCsv.records }}"
by: ["customerId"]
outputs:
- name: "groups"
mapping: "items"
- name: "groupCount"
mapping: "count"
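The grouping semantics (case-insensitive field matching, { key, values } output) can be sketched as:

```python
def group_by(collection: list[dict], by: list[str]) -> list[dict]:
    """Group dicts by the `by` fields (case-insensitive key lookup);
    returns a list of {key, values} groups in first-seen order."""
    groups: dict[tuple, dict] = {}
    for rec in collection:
        lowered = {k.lower(): v for k, v in rec.items()}
        key = tuple(lowered.get(f.lower()) for f in by)
        group = groups.setdefault(key, {"key": dict(zip(by, key)), "values": []})
        group["values"].append(rec)
    return list(groups.values())
```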
Set Variable
The Set Variable task is used to set a variable.
task: "Utilities/SetVariable@1"
name: setVariable
inputs:
name: "getData.results"
value:
extends: "getData.results"
mapping:
- name: "name"
value: "John Doe"
- name: "email"
value: "john.doe@example.com"
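One plausible reading of extends plus mapping is "copy the referenced value and apply the mapped overrides on top of each record". The sketch below models that reading for a list-valued variable; the task's actual merge semantics are an assumption here:

```python
def extend_records(base: list[dict], mappings: list[dict]) -> list[dict]:
    """Copy each record and overlay the mapped name/value pairs
    (assumed SetVariable extends/mapping semantics)."""
    overrides = {m["name"]: m["value"] for m in mappings}
    return [{**rec, **overrides} for rec in base]
```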
Sequence Number/Get
The SequenceNumber/Get task is used to obtain the next sequence number.
task: "SequenceNumber/Get@1"
name: getSequenceNumber
inputs:
sequenceNumber: "OrderNumber"
Utilities/Export
The Utilities/Export task is used to export data to a file. The exported file URL is stored in the output variable.
task: "Utilities/Export@1"
name: export
inputs:
name: "contacts" # Name of the export, added to the file name
fileType: "Csv" # File type to export (Csv, Xlsx, Json)
data:
- name: "John Doe"
email: "john.doe@example.com"
outputs:
- name: "fileUrl"
mapping: "fileUrl" # URL of the exported file
Utilities/Lock (Proposed)
The Utilities/Lock workflow task is used to lock a resource.
task: "Utilities/Lock@1"
name: lock
inputs:
resource: "resourceName"
lockDuration: 60 # Optional - Lock duration in seconds
timeout: 60 # Optional - Timeout in seconds
retryInterval: 5 # Optional - Retry interval in seconds
Utilities/Unlock (Proposed)
The Utilities/Unlock workflow task is used to unlock a resource.
task: "Utilities/Unlock@1"
name: unlock
inputs:
resource: "resourceName"
Utilities/ValidateHMAC@1
The Utilities/ValidateHMAC task is used to validate an HMAC signature. It is typically used to validate webhook requests.
task: "Utilities/ValidateHMAC@1"
name: validateHMAC
inputs:
secret: "your-secret-key"
algorithm: "SHA256"
signature: "signature"
payload: "payload"
outputs:
- name: "isValid"
mapping: "isValid"
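Conceptually, validation recomputes the HMAC over the payload with the shared secret and compares it to the supplied signature in constant time. A sketch assuming a hex-encoded SHA-256 signature (the task's actual signature encoding may differ):

```python
import hashlib
import hmac

def validate_hmac(secret: str, payload: str, signature: str) -> bool:
    """Recompute the SHA-256 HMAC of payload and compare in constant time."""
    expected = hmac.new(secret.encode(), payload.encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```

The constant-time comparison matters: a naive `==` can leak how many leading characters matched via response timing.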
Utilities/ValidateReCaptcha@1
The Utilities/ValidateReCaptcha task is used to validate a reCAPTCHA response token. It is typically used to verify that form submissions come from a human rather than a bot.
task: "Utilities/ValidateReCaptcha@1"
name: validateReCaptcha
inputs:
secret: "your-recaptcha-secret"
token: "token"
throwException: true # Optional - Throw an exception if the validation fails
version: "v3" # Optional - Version of the ReCaptcha (v2, v3)
outputs:
- name: "isValid"
mapping: "isValid"
Utilities/Template
The Utilities/Template task is used to render a template using handlebars.
Inputs:
- template - The template to render. Use the $raw property to pass templated strings.
- data - The data to render the template with.
Outputs:
- result - The rendered template.
task: "Utilities/Template@1"
name: template
inputs:
template:
$raw: "{{name}} is {{age}} years old."
data:
name: "John Doe"
age: 30
outputs:
- name: "result"
mapping: "result"
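Handlebars rendering substitutes {{placeholders}} from the data object. A deliberately minimal stand-in for illustration (real Handlebars also supports helpers, block expressions, and HTML escaping):

```python
import re

def render(template: str, data: dict) -> str:
    """Replace {{name}} placeholders with values from data."""
    return re.sub(r"\{\{\s*(\w+)\s*\}\}",
                  lambda m: str(data.get(m.group(1), "")),
                  template)

print(render("{{name}} is {{age}} years old.",
             {"name": "John Doe", "age": 30}))
# John Doe is 30 years old.
```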
UnzipFile
The UnzipFile task (Utilities/UnzipFile@1) extracts ZIP archives to a temporary directory within the workflow execution context. Extracted file paths are returned so subsequent tasks (e.g., Utilities/CsvParse@1, PostalCodes/Import@1) can access them via file:// URLs.
Inputs:
| Input | Type | Required | Description |
|---|---|---|---|
| filePath | string | one of filePath/fileUrl | Local file path to a ZIP file |
| fileUrl | string | one of filePath/fileUrl | URL to a ZIP file (http://, https://, or file://) |
| filePattern | string | no | Glob-style filter for extracted files (e.g., *.csv, data_*.json). When omitted, all files are returned |
Output: Files (string[] — absolute paths), Count (int). Temp directory is auto-cleaned when workflow completes.
task: "Utilities/UnzipFile@1"
name: extract
inputs:
filePath: "{{ download.downloadZip.response.FilePath }}"
filePattern: "*.csv"
outputs:
- name: csvFiles
mapping: "Files"
Import Tasks
Import tasks handle bulk data ingestion. They share a common pattern: accept data from a file URL, stream, or in-memory collection; delegate to a MediatR command; and return a standardized result with added, updated, and errors counts. All support file:// URLs via UrlStreamHelper.
Common Output Format
{
"result": {
"success": true,
"added": 150,
"updated": 30,
"errors": [],
"totalProcessed": 180,
"hasErrors": false
}
}
Order/Import@1
Imports orders from CSV, JSON, or XLSX files.
| Input | Type | Required | Description |
|---|---|---|---|
| organizationId | int | yes | Organization context (auto-injected) |
| fileUrl | string | one of fileUrl/stream/orders | URL to import file |
| fileType | FileType | no | csv, json, or xlsx. Auto-detected from URL when omitted |
| stream | Stream | one of fileUrl/stream/orders | Direct stream input (requires fileType) |
| orders | List<Dictionary> | one of fileUrl/stream/orders | In-memory order data |
| options | ImportOrderOptions | no | Import options (match-by fields, update behavior) |
States/Import@1
Imports US states / provinces / regions. Incoming states are deduplicated by match key before processing.
| Input | Type | Required | Description |
|---|---|---|---|
| organizationId | int | yes | Organization context |
| fileUrl | string | one of fileUrl/stream/states | URL to import file |
| fileType | FileType | no | Auto-detected from URL when omitted |
| states | List<Dictionary> | one of fileUrl/stream/states | In-memory state data |
| matchByFields | string[] | no | Fields used to match existing states for upsert |
| updateIfExists | bool | no | true (default). Whether to update matched states |
PostalCodes/Import@1
Imports postal/ZIP codes with state association. Match-key fields are not overwritten on re-import.
| Input | Type | Required | Description |
|---|---|---|---|
| organizationId | int | yes | Organization context |
| fileUrl | string | one of fileUrl/stream/postalCodes | URL to import file |
| fileType | FileType | no | Auto-detected from URL when omitted |
| postalCodes | List<PostalCodeExportDto> | one of fileUrl/stream/postalCodes | In-memory postal code data |
| matchByFields | string[] | no | Fields used for matching. Falls back to fipsCode when matching states |
TrackingEvent/Import@1
Imports tracking events for a specific order. Automatically links events to commodities via CommodityId foreign key.
| Input | Type | Required | Description |
|---|---|---|---|
| orderId | int | yes | Target order ID |
| events | List<Dictionary> | yes | Tracking event data |
| matchByFields | string[] | no | Default: ["eventDefinitionName", "eventDate"] |
| skipIfExists | bool | no | true (default). Skip events that already exist |
| createEventDefinitions | bool | no | true (default). Auto-create missing event definitions |
| eventDefinitionDefaults | Dictionary | no | Default values for new event definitions |
Example: Full ZIP-to-Import Pipeline
activities:
- name: download
steps:
- task: "Utilities/HttpRequest@1"
name: downloadZip
inputs:
url: "{{ sourceUrl }}"
method: "GET"
saveToFile: true
- task: "Utilities/UnzipFile@1"
name: extract
inputs:
filePath: "{{ download.downloadZip.response.FilePath }}"
filePattern: "*.csv"
- task: "Utilities/CsvParse@1"
name: parsePostalCodes
inputs:
url: "file://{{ download.extract.Files[0] }}"
distinct: ["stateCode", "stateName", "countryCode"]
- task: "States/Import@1"
name: importStates
inputs:
organizationId: "{{ organizationId }}"
states: "{{ download.parsePostalCodes.states }}"
- task: "PostalCodes/Import@1"
name: importPostalCodes
inputs:
organizationId: "{{ organizationId }}"
fileUrl: "file://{{ download.extract.Files[0] }}"
fileType: "csv"
Move File
Move or rename a file in storage.
task: "Utilities/MoveFile@1"
name: moveFile
inputs:
sourcePath: "uploads/temp/file.pdf"
destinationPath: "documents/orders/{{ orderId }}/file.pdf"
Inputs
| Parameter | Type | Required | Description |
|---|---|---|---|
| sourcePath | string | Yes | Source file path in storage |
| destinationPath | string | Yes | Destination file path in storage |
ResolveTimezone
Resolves the IANA timezone identifier and current UTC offset for a given geographic coordinate (latitude/longitude). Uses the GeoTimeZone library for offline timezone boundary lookup.
task: "Utilities/ResolveTimezone@1"
name: resolveTimezone
inputs:
latitude: "{{ Data.GetPostalCode.postalCode.location.y }}"
longitude: "{{ Data.GetPostalCode.postalCode.location.x }}"
outputs:
- name: timezone
mapping: "timezoneId?"
- name: offset
mapping: "utcOffset?"
Inputs
| Parameter | Type | Required | Description |
|---|---|---|---|
| latitude | double (or string) | Yes | Geographic latitude. Strings are parsed automatically |
| longitude | double (or string) | Yes | Geographic longitude. Strings are parsed automatically |
Outputs
| Output | Type | Description |
|---|---|---|
| timezoneId | string | IANA timezone identifier (e.g., America/Chicago, Europe/Berlin) |
| utcOffset | double | Current UTC offset in hours (e.g., -6 for CST, 1 for CET). Accounts for DST |
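The utcOffset output corresponds to the zone's offset at lookup time. As an illustration, here is how the offset can be derived for a known IANA id (the actual task first resolves the id from coordinates via GeoTimeZone):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

def utc_offset_hours(timezone_id: str, when: datetime) -> float:
    """Current UTC offset in hours for an IANA zone, DST-aware."""
    offset = when.astimezone(ZoneInfo(timezone_id)).utcoffset()
    return offset.total_seconds() / 3600

# America/Chicago is UTC-6 in winter (CST) and UTC-5 in summer (CDT)
print(utc_offset_hours("America/Chicago",
                       datetime(2024, 1, 15, 12, tzinfo=ZoneInfo("UTC"))))
```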
Error Handling
Throws a WorkflowRuntimeException if latitude or longitude is missing or cannot be parsed to a number.