Overview

run_dl_job is Stage 3b of the pipeline (alternative to run_ml_job). It triggers the dl-job pipeline job, which trains a PyTorch neural network on the feature-engineered dataset and generates per-ticker price predictions. The tool blocks until the job completes and returns the output blob URL.

Parameters

feature_url (string, required)
Blob storage URL pointing to a feature_engine_*.json file. This is the output_url returned by run_feature_worker.

data_extractor_url (string, required)
Blob storage URL pointing to a data_extractor_*.json file. This is the output_url returned by run_data_extraction. Required alongside feature_url for proper train/test date alignment.

config (object, required)
Neural network configuration.
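A minimal sketch of assembling the three required parameters into one request payload. The helper name and the URL sanity checks are illustrative, not part of the tool; only the three parameter names come from the list above.

```python
# Sketch: assemble and sanity-check the run_dl_job parameters.
# build_dl_job_request is a hypothetical helper, not part of the tool itself.

REQUIRED_KEYS = ("feature_url", "data_extractor_url", "config")

def build_dl_job_request(feature_url: str, data_extractor_url: str, config: dict) -> dict:
    payload = {
        "feature_url": feature_url,
        "data_extractor_url": data_extractor_url,
        "config": config,
    }
    # All three parameters are required; fail fast on missing values.
    for key in REQUIRED_KEYS:
        if not payload[key]:
            raise ValueError(f"missing required parameter: {key}")
    # Light check that each URL points at the expected blob type.
    if "feature_engine_" not in feature_url:
        raise ValueError("feature_url should point to a feature_engine_*.json blob")
    if "data_extractor_" not in data_extractor_url:
        raise ValueError("data_extractor_url should point to a data_extractor_*.json blob")
    return payload
```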

Returns

{
  "status": "Succeeded",
  "output_url": "https://stmcpfabricdev.blob.core.windows.net/data/nn_engine_20260307_125841.json",
  "output_name": "nn_engine_20260307_125841.json",
  "execution_name": "dl-job-abc123xyz"
}
Field            Description
status           Job terminal status (Succeeded)
output_url       Full HTTPS URL to the output blob — pass to run_po_job
output_name      Blob filename
execution_name   Job execution ID for audit/debugging
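A sketch of validating the terminal payload and extracting the field the next stage needs. The function is illustrative; the response shape and the nn_engine_ naming pattern come from the example above.

```python
import re

# Sketch: check a run_dl_job result and pull out the output_url that
# downstream steps consume. parse_dl_job_result is illustrative only.

def parse_dl_job_result(result: dict) -> str:
    if result.get("status") != "Succeeded":
        raise RuntimeError(f"dl-job did not succeed: {result.get('status')}")
    output_url = result["output_url"]
    output_name = result["output_name"]
    # Output blobs follow the pattern nn_engine_<YYYYMMDD>_<HHMMSS>.json.
    if not re.fullmatch(r"nn_engine_\d{8}_\d{6}\.json", output_name):
        raise ValueError(f"unexpected output blob name: {output_name}")
    return output_url  # pass this to run_po_job
```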

Example — LSTM (per-ticker)

{
  "feature_url": "https://stmcpfabricdev.blob.core.windows.net/data/feature_engine_20260307_125103.json",
  "data_extractor_url": "https://stmcpfabricdev.blob.core.windows.net/data/data_extractor_20260307_124937.json",
  "config": {
    "NN params": {
      "Dot prediction": {
        "status": true,
        "current option": "lstm",
        "global_settings": { "device": "auto", "seed": 42 },
        "data_processing": { "scaler": "robust", "lookback": 10, "batch_size": 64 },
        "training_params": {
          "epochs": 20,
          "learning_rate": 0.001,
          "patience": 5,
          "optimizer": "adam",
          "weight_decay": 0.0,
          "loss_type": "mse"
        },
        "models_params": {
          "lstm": { "hidden_dims": [64, 32], "dropout": 0.2 }
        }
      }
    },
    "Backtest params": {
      "Learning_end": null,
      "Testing_end": null
    }
  }
}
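Deeply nested configs like this are easier to keep consistent when built programmatically. A sketch of a builder that reproduces the LSTM example above; make_lstm_config and its keyword defaults are illustrative, while all key names and values mirror the JSON example.

```python
# Sketch: build the per-ticker LSTM config programmatically instead of
# hand-writing the nested JSON. make_lstm_config is a hypothetical helper.

def make_lstm_config(hidden_dims=(64, 32), dropout=0.2, epochs=20,
                     learning_rate=0.001, lookback=10) -> dict:
    return {
        "NN params": {
            "Dot prediction": {
                "status": True,
                "current option": "lstm",
                "global_settings": {"device": "auto", "seed": 42},
                "data_processing": {"scaler": "robust", "lookback": lookback,
                                    "batch_size": 64},
                "training_params": {
                    "epochs": epochs,
                    "learning_rate": learning_rate,
                    "patience": 5,
                    "optimizer": "adam",
                    "weight_decay": 0.0,
                    "loss_type": "mse",
                },
                "models_params": {
                    "lstm": {"hidden_dims": list(hidden_dims), "dropout": dropout},
                },
            }
        },
        # None mirrors the null values in the JSON example above.
        "Backtest params": {"Learning_end": None, "Testing_end": None},
    }
```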

Example — Portfolio Transformer

{
  "feature_url": "...",
  "data_extractor_url": "...",
  "config": {
    "NN params": {
      "Dot prediction": {
        "status": true,
        "current option": "portfolio_transformer",
        "global_settings": { "device": "auto", "seed": 42 },
        "data_processing": { "scaler": "minmax", "lookback": 12, "batch_size": 64 },
        "training_params": {
          "epochs": 5,
          "learning_rate": 0.0005,
          "patience": 10,
          "optimizer": "sgd",
          "weight_decay": 0.0001,
          "loss_type": "sharpe"
        },
        "models_params": {
          "portfolio_transformer": {
            "d_model": 64,
            "nhead": 4,
            "num_layers": 2,
            "dropout": 0.4
          }
        }
      }
    }
  }
}

Resources

Resource             Value
Container Apps Job   dl-job
Container name       dl-job
Env vars injected    FEATURE_URL, DATA_EXTRACTOR_URL, CONFIG
Output blob prefix   nn_engine_
Timeout              600 seconds
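Since the job receives its inputs as injected environment variables, the container side can read them as sketched below. The variable names come from the table; the parsing code is an assumption, not the job's actual implementation.

```python
import json
import os

# Sketch: how the dl-job container might read its injected inputs.
# Env var names are from the Resources table; the parsing is illustrative.

def read_job_inputs(env=os.environ) -> tuple[str, str, dict]:
    feature_url = env["FEATURE_URL"]
    data_extractor_url = env["DATA_EXTRACTOR_URL"]
    config = json.loads(env["CONFIG"])  # assume CONFIG arrives as a JSON string
    return feature_url, data_extractor_url, config
```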

Next Step

Pass output_url to run_po_job as input_url.
If you prefer classical machine learning, use run_ml_job instead — it produces an ml_engine_*.json blob equally compatible with run_po_job.
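The hand-off described above can be sketched end to end. Both functions here are stubs standing in for the real pipeline tools, which run remote Container Apps jobs; the blob URL is a placeholder.

```python
# Sketch: chaining run_dl_job into run_po_job. Stubs only -- the real
# tools block on remote jobs and return payloads like the Returns example.

def run_dl_job(feature_url, data_extractor_url, config):
    # Stub: the real tool blocks until the dl-job completes.
    return {
        "status": "Succeeded",
        "output_url": "https://example.invalid/data/nn_engine_20260307_125841.json",
    }

def run_po_job(input_url):
    # Stub: the real tool consumes an nn_engine_*.json (or ml_engine_*.json) blob.
    return {"status": "Succeeded", "input": input_url}

dl_result = run_dl_job("feature_url...", "data_extractor_url...", {"NN params": {}})
# The dl-job output_url becomes the run_po_job input_url.
po_result = run_po_job(input_url=dl_result["output_url"])
```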