Overview
run_dl_job is Stage 3b of the pipeline (alternative to run_ml_job). It triggers the dl-job pipeline job, which trains a PyTorch neural network on the feature-engineered dataset and generates per-ticker price predictions.
The tool blocks until the job completes and returns the output blob URL.
Parameters
feature_url — Blob storage URL pointing to a feature_engine_*.json file. This is the output_url returned by run_feature_worker.
data_extractor_url — Blob storage URL pointing to a data_extractor_*.json file. This is the output_url returned by run_data_extraction; required alongside feature_url for proper train/test date alignment.
config — Neural network configuration. Its top-level "NN params" object holds a "Dot prediction" block containing all training and architecture settings:
status — Enable or disable this prediction block. Default: true.
current option — Model architecture to use. Supported values:
"lstm" — Multi-layer LSTM (per-ticker)
"cnn" — 1D Convolutional Network (per-ticker)
"transformer" — Transformer Encoder (per-ticker)
"lstm_cnn" — Hybrid LSTM + CNN (per-ticker)
"mlp" — Multi-layer Perceptron (per-ticker)
"portfolio_lstm" — LSTM trained across all tickers jointly
"portfolio_transformer" — Transformer trained across all tickers jointly
global_settings — Runtime settings:
device — Compute device. "auto" selects CUDA if available, else CPU. Default: "auto".
seed — Random seed for reproducibility. Default: 42.
data_processing — Data preparation settings:
scaler — Feature scaler. Options: "standard", "minmax", "robust", "maxabs". Default: "robust".
lookback — Sequence window length (number of past days fed to the model). Default: 10.
batch_size — Mini-batch size for training. Default: 64.
training_params — Optimizer and training-loop settings:
epochs — Maximum number of training epochs. Default: 20.
learning_rate — Learning rate. Default: 0.001.
patience — Early stopping patience (epochs without validation improvement). Default: 5.
optimizer — Optimizer. Options: "adam", "sgd", "rmsprop". Default: "adam".
weight_decay — L2 regularization coefficient. Default: 0.0.
loss_type — Loss function. Options: "mse", "mae", "huber", "directional", "sharpe", "sortino", "mahalanobis". Default: "mse" for per-ticker models; "mahalanobis" for portfolio models.
models_params — Architecture-specific hyperparameters. Only the block matching the current option value is used:
lstm / portfolio_lstm
hidden_dims (int array) — hidden layer sizes, e.g. [64, 32]
dropout (float) — dropout rate, e.g. 0.2
cnn
filters (int array) — number of filters per conv layer, e.g. [32, 64]
kernel_size (int) — convolution kernel size, default 3
dropout (float) — dropout rate
pool_size (int) — max-pool size, default 2
transformer / portfolio_transformer
d_model (int) — model embedding dimension, e.g. 64
nhead (int) — number of attention heads, e.g. 4
num_layers (int) — number of encoder layers, e.g. 2
dim_feedforward (int) — feedforward dimension, e.g. 128
dropout (float) — dropout rate
lstm_cnn
conv_filters (int) — number of conv filters, e.g. 64
lstm_units (int) — LSTM hidden size, e.g. 64
dropout (float) — dropout rate
mlp
hidden_dims (int array) — hidden layer sizes, e.g. [128, 64]
dropout (float) — dropout rate
Backtest params — Optional date range hints for train/test split alignment:
Learning_end — End of the training window. Format: YYYY-MM-DD. Pass null to infer from data.
Testing_end — End of the testing window. Format: YYYY-MM-DD. Pass null to infer from data.
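To make the lookback setting concrete: the model never sees the raw series directly, but overlapping windows of the previous lookback days, each paired with the next day's value as the target. A minimal sliding-window sketch (illustrative only — the job's actual preprocessing is internal to dl-job):

```python
import numpy as np

def make_sequences(prices: np.ndarray, lookback: int = 10):
    """Turn a 1-D price series into (X, y) pairs: each row of X holds
    `lookback` past days; y is the value on the following day."""
    X = np.stack([prices[i : i + lookback] for i in range(len(prices) - lookback)])
    y = prices[lookback:]
    return X, y

prices = np.arange(15, dtype=float)      # toy series: 0.0 .. 14.0
X, y = make_sequences(prices, lookback=10)
print(X.shape, y.shape)                  # (5, 10) (5,)
```

A 15-day series with lookback=10 therefore yields only 5 training samples — short histories combined with large lookback values shrink the effective training set quickly.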
Returns
{
  "status": "Succeeded",
  "output_url": "https://stmcpfabricdev.blob.core.windows.net/data/nn_engine_20260307_125841.json",
  "output_name": "nn_engine_20260307_125841.json",
  "execution_name": "dl-job-abc123xyz"
}
status — Job terminal status (Succeeded)
output_url — Full HTTPS URL to the output blob; pass to run_po_job
output_name — Blob filename
execution_name — Job execution ID for audit/debugging
Example — LSTM (per-ticker)
{
  "feature_url": "https://stmcpfabricdev.blob.core.windows.net/data/feature_engine_20260307_125103.json",
  "data_extractor_url": "https://stmcpfabricdev.blob.core.windows.net/data/data_extractor_20260307_124937.json",
  "config": {
    "NN params": {
      "Dot prediction": {
        "status": true,
        "current option": "lstm",
        "global_settings": { "device": "auto", "seed": 42 },
        "data_processing": { "scaler": "robust", "lookback": 10, "batch_size": 64 },
        "training_params": {
          "epochs": 20,
          "learning_rate": 0.001,
          "patience": 5,
          "optimizer": "adam",
          "weight_decay": 0.0,
          "loss_type": "mse"
        },
        "models_params": {
          "lstm": { "hidden_dims": [64, 32], "dropout": 0.2 }
        }
      }
    },
    "Backtest params": {
      "Learning_end": null,
      "Testing_end": null
    }
  }
}
Example — Transformer (portfolio-wide)
{
  "feature_url": "...",
  "data_extractor_url": "...",
  "config": {
    "NN params": {
      "Dot prediction": {
        "status": true,
        "current option": "portfolio_transformer",
        "global_settings": { "device": "auto", "seed": 42 },
        "data_processing": { "scaler": "minmax", "lookback": 12, "batch_size": 64 },
        "training_params": {
          "epochs": 5,
          "learning_rate": 0.0005,
          "patience": 10,
          "optimizer": "sgd",
          "weight_decay": 0.0001,
          "loss_type": "sharpe"
        },
        "models_params": {
          "portfolio_transformer": {
            "d_model": 64,
            "nhead": 4,
            "num_layers": 2,
            "dropout": 0.4
          }
        }
      }
    }
  }
}
Resources
Container Apps Job — dl-job
Container name — dl-job
Env vars injected — FEATURE_URL, DATA_EXTRACTOR_URL, CONFIG
Output blob prefix — nn_engine_
Timeout — 600 seconds
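Inside the container, the job reads those three injected environment variables. A minimal sketch of that handoff, assuming CONFIG arrives as a JSON string (the actual dl-job entrypoint is internal; only the variable names and config keys come from this document):

```python
import json
import os

def load_job_inputs(env=os.environ):
    """Read the env vars the Container Apps Job injects into dl-job.
    CONFIG is assumed here to be the serialized config object."""
    return {
        "feature_url": env["FEATURE_URL"],
        "data_extractor_url": env["DATA_EXTRACTOR_URL"],
        "config": json.loads(env["CONFIG"]),
    }

# Stand-in values for illustration (the real job gets these injected):
fake_env = {
    "FEATURE_URL": "https://example.invalid/feature_engine_x.json",
    "DATA_EXTRACTOR_URL": "https://example.invalid/data_extractor_x.json",
    "CONFIG": '{"NN params": {"Dot prediction": {"current option": "lstm"}}}',
}
inputs = load_job_inputs(fake_env)
print(inputs["config"]["NN params"]["Dot prediction"]["current option"])  # lstm
```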
Next Step
Pass output_url to run_po_job as input_url.
If you prefer classical machine learning, use run_ml_job instead — it produces
an ml_engine_*.json blob equally compatible with run_po_job.
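The Stage 3b → Stage 4 handoff can be sketched client-side. Here `call_tool` is a hypothetical stand-in for however your client invokes pipeline tools; only the tool names and field names ("status", "output_url", "input_url") come from this document:

```python
def run_stage_3b_then_4(call_tool, feature_url, data_extractor_url, config):
    """Run deep-learning prediction, then feed its output blob into
    portfolio optimization. `call_tool(name, args)` is hypothetical."""
    dl = call_tool("run_dl_job", {
        "feature_url": feature_url,
        "data_extractor_url": data_extractor_url,
        "config": config,
    })
    if dl["status"] != "Succeeded":
        raise RuntimeError(f"dl-job failed: {dl}")
    # The nn_engine_* blob becomes run_po_job's input:
    return call_tool("run_po_job", {"input_url": dl["output_url"]})
```

Swapping the first call for run_ml_job leaves the rest unchanged, since run_po_job accepts either engine's output blob.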