# Configure execution behavior
Use this page when you need PyRel to retry network-related job submission failures, send execution logs through Python logging, or collect execution metrics for debugging.
If you don’t need those behaviors, you can usually leave the execution settings alone.
This guide shows how to configure these optional settings in `raiconfig.yaml` and in Python code.
## Prerequisites

- You have access to a Snowflake account with the RelationalAI Native App installed. If you are unsure, contact your Snowflake administrator.
- You have a working PyRel installation. See Set Up Your Environment for instructions.
## What execution configuration covers

These settings apply to the Submit RelationalAI job step of the PyRel workflow.
You can use execution settings to control retries and enable or disable execution logs and metrics:
- Retries let PyRel re-submit a job after short-lived problems, with a configurable delay between attempts.
- Execution logs show structured log messages about what PyRel is doing.
- Metrics collection records SDK-side counters and timings you can inspect in Python.
Use the sections below to configure these settings in raiconfig.yaml or in Python code.
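Taken together, the three settings live under the same `execution` block. A combined `raiconfig.yaml` fragment might look like this (the nesting under `connections` follows the examples in this guide; adjust it to match your own file):

```yaml
connections:
  # ...
  execution:
    retries:
      enabled: true
    logging: true
    metrics: true
```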
## Configure retries

Turn on retries when you want PyRel to try job submission again after a short-lived problem. Retries help scripts, notebooks, and scheduled jobs continue through brief interruptions. They are disabled by default, but you should enable them for production workloads.
For most workloads, turn retries on and keep the default settings. Only change the other retry settings if you need a different attempt limit or backoff window.
To configure retries:
1. Enable retries in `raiconfig.yaml`:

   ```yaml
   connections:
     # ...
     execution:
       retries:
         enabled: true
   ```

   If you only want the default retry behavior, you can stop here. PyRel keeps the default values for the other retry settings.
2. Optionally tune the other retry settings:

   Skip this step unless the default values are not a good fit for your workload. By default, PyRel uses `max_attempts: 3`, `base_delay_s: 0.25`, `max_delay_s: 5.0`, and `jitter: 0.2`.

   ```yaml
   connections:
     # ...
     execution:
       retries:
         enabled: true
         max_attempts: 5
         base_delay_s: 0.25
         max_delay_s: 5.0
         jitter: 0.2
   ```

   - `max_attempts` defaults to `3`. PyRel counts the first try as one of those attempts.
   - `base_delay_s` defaults to `0.25` seconds. PyRel increases the wait time after each failed try.
   - `max_delay_s` defaults to `5.0` seconds. PyRel does not wait longer than this before it tries again.
   - `jitter` defaults to `0.2`. This adds a small random amount to the wait time so that many jobs do not all retry at the exact same moment.
3. Confirm that PyRel loaded your retry settings:

   ```python
   from relationalai.semantics import Model

   m = Model("MyModel")
   print(m.config.execution.retries.enabled)
   print(m.config.execution.retries.max_attempts)
   print(m.config.execution.retries.base_delay_s)
   print(m.config.execution.retries.max_delay_s)
   print(m.config.execution.retries.jitter)
   ```

- This step confirms that PyRel loaded the retry policy you configured.
- A real retry only occurs when a request fails with a retryable transient error, so this example does not demonstrate a retry in progress.
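To make the timing settings concrete, the backoff schedule they describe can be sketched in plain Python. This is an illustration of capped exponential backoff with jitter, not PyRel's actual implementation; the exact formula PyRel uses internally is an assumption here.

```python
import random

def retry_delays(max_attempts=3, base_delay_s=0.25, max_delay_s=5.0, jitter=0.2):
    """Illustrative backoff schedule: double the base delay after each failed
    attempt, cap it at max_delay_s, and add a small random jitter.
    This mirrors the retry settings described above; the exact formula PyRel
    uses internally is an assumption."""
    delays = []
    for attempt in range(1, max_attempts):  # no delay after the final attempt
        delay = min(base_delay_s * (2 ** (attempt - 1)), max_delay_s)
        delays.append(delay + random.uniform(0, jitter))
    return delays

print(retry_delays())                 # two waits for the default three attempts
print(retry_delays(max_attempts=5))   # four waits, the later ones approaching the cap
```

With the defaults, a job is tried at most three times, waiting roughly a quarter second, then half a second, between tries.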
Set `execution.retries` using `ExecutionConfig` or a plain Python dict.
These examples assume you already have a valid connection configured through file discovery.
1. Enable retries in code:

   ```python
   from relationalai.config import ExecutionConfig, create_config
   from relationalai.semantics import Model

   cfg = create_config(
       execution=ExecutionConfig(
           retries=ExecutionConfig.RetriesConfig(
               enabled=True,
           ),
       ),
   )

   m = Model("MyModel", config=cfg)
   print(m.config.execution.retries.enabled)
   ```

   If you only want the default retry behavior, you can stop here. PyRel keeps the built-in values for the other retry settings. Retries are disabled by default; set `enabled=False` only if you want to turn them off explicitly.

   You can also configure the same setting with a dict:

   ```python
   from relationalai.config import create_config
   from relationalai.semantics import Model

   cfg = create_config(
       execution={
           "retries": {
               "enabled": True,
           },
       },
   )

   m = Model("MyModel", config=cfg)
   print(m.config.execution.retries.enabled)
   ```
2. Optionally tune the other retry settings:

   Skip this step unless the default values are not a good fit for your workload. By default, PyRel uses `max_attempts=3`, `base_delay_s=0.25`, `max_delay_s=5.0`, and `jitter=0.2`.

   ```python
   from relationalai.config import ExecutionConfig, create_config
   from relationalai.semantics import Model

   cfg = create_config(
       execution=ExecutionConfig(
           retries=ExecutionConfig.RetriesConfig(
               enabled=True,
               max_attempts=5,
               base_delay_s=0.25,
               max_delay_s=5.0,
               jitter=0.2,
           ),
       ),
   )

   m = Model("MyModel", config=cfg)
   print(m.config.execution.retries.enabled)
   print(m.config.execution.retries.max_attempts)
   print(m.config.execution.retries.base_delay_s)
   print(m.config.execution.retries.max_delay_s)
   print(m.config.execution.retries.jitter)
   ```

   You can also configure the same settings with a dict:

   ```python
   from relationalai.config import create_config
   from relationalai.semantics import Model

   cfg = create_config(
       execution={
           "retries": {
               "enabled": True,
               "max_attempts": 5,
               "base_delay_s": 0.25,
               "max_delay_s": 5.0,
               "jitter": 0.2,
           },
       },
   )

   m = Model("MyModel", config=cfg)
   print(m.config.execution.retries.enabled)
   print(m.config.execution.retries.max_attempts)
   print(m.config.execution.retries.base_delay_s)
   print(m.config.execution.retries.max_delay_s)
   print(m.config.execution.retries.jitter)
   ```

   - `max_attempts` sets the total number of times PyRel will try to submit the job, including the first try.
   - `base_delay_s` sets the initial wait time between tries. PyRel increases that wait time after each failed try.
   - `max_delay_s` sets the longest time PyRel will wait before it tries again.
   - `jitter` adds a small random amount to the wait time so that many jobs do not all retry at the exact same moment.

- These checks confirm that PyRel loaded the retry policy you configured.
- A real retry only occurs when a request fails with a retryable transient error, so these examples do not demonstrate a retry in progress.
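The overall pattern these settings control can be shown as a self-contained retry loop. This sketch retries a hypothetical `submit` callable on `ConnectionError` using the same four parameters; it is an illustration of the general technique, not PyRel's internal retry code.

```python
import random
import time

def submit_with_retries(submit, max_attempts=3, base_delay_s=0.25,
                        max_delay_s=5.0, jitter=0.2):
    """Call submit(), retrying on ConnectionError with capped exponential
    backoff plus jitter. A sketch of the general pattern, not PyRel's code."""
    for attempt in range(1, max_attempts + 1):
        try:
            return submit()
        except ConnectionError:
            if attempt == max_attempts:
                raise  # out of attempts: surface the error to the caller
            delay = min(base_delay_s * (2 ** (attempt - 1)), max_delay_s)
            time.sleep(delay + random.uniform(0, jitter))

# Hypothetical flaky submission: fails twice with a transient error, then succeeds.
calls = {"n": 0}

def flaky_submit():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient network failure")
    return "job-ok"

print(submit_with_retries(flaky_submit))  # succeeds on the third attempt
```

Note that only transient errors are retried; a non-retryable failure (here, any exception other than `ConnectionError`) propagates immediately.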
## Enable or disable execution logs

Execution logs are structured log messages that PyRel emits while it runs RelationalAI jobs. Enable them when you want to see what PyRel is doing during job submission and execution. Turning this on makes the logs available through Python's logging system, so you also need to configure Python logging to display them.
The steps below show both config methods, first in `raiconfig.yaml` and then in Python code. Follow them to enable execution logs, confirm that PyRel loaded the setting, and run a simple query to trigger a job.
1. Set `execution.logging` in `raiconfig.yaml`:

   ```yaml
   connections:
     # ...
     execution:
       logging: true
   ```

2. Confirm that PyRel loaded the configured value:

   ```python
   from relationalai.semantics import Model

   m = Model("MyModel")
   print(m.config.execution.logging)
   ```

3. Run a simple query to trigger a job:

   ```python
   import logging

   from relationalai.semantics import Model

   logging.basicConfig(level=logging.INFO)
   logging.getLogger("relationalai.client.execution").setLevel(logging.INFO)

   m = Model("MyModel")

   # Run a query to trigger a job and look for execution logs in the output.
   m.select("hello world").to_df()
   ```
Set `execution.logging` using `ExecutionConfig` or a plain Python dict.
These examples assume you already have a valid connection configured through file discovery.
1. Create the config with a typed config class:

   ```python
   from relationalai.config import ExecutionConfig, create_config

   cfg = create_config(execution=ExecutionConfig(logging=True))
   ```

2. Alternatively, create the config with a dict:

   ```python
   from relationalai.config import create_config

   cfg = create_config(execution={"logging": True})
   ```

3. Confirm that PyRel loaded the configured value:

   ```python
   from relationalai.semantics import Model

   m = Model("MyModel", config=cfg)
   print(m.config.execution.logging)
   ```

4. Run a simple query to trigger a job:

   ```python
   import logging

   from relationalai.semantics import Model

   logging.basicConfig(level=logging.INFO)
   logging.getLogger("relationalai.client.execution").setLevel(logging.INFO)

   m = Model("MyModel", config=cfg)

   # Run a query to trigger a job and look for execution logs in the output.
   m.select("hello world").to_df()
   ```
- The config check confirms that PyRel loaded `execution.logging`.
- The query step triggers a job so you can look for execution logs in the output.
- Each execution log line includes the request type, the operation name, how long the operation took, and `meta`, which is extra context PyRel attaches to the request.
- These logs summarize what happened, but they do not show the full SQL text or full HTTP request details such as headers or bodies.
- If a job fails, check the exception message returned by the backend for more detailed error information.
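Because PyRel routes execution logs through Python's standard logging system, ordinary logging handlers work here too. This sketch attaches a `FileHandler` to the `relationalai.client.execution` logger named in the steps above so that log records go to a file; the file path is an arbitrary example.

```python
import logging

# Route PyRel execution logs to a file instead of (or in addition to) stderr.
logger = logging.getLogger("relationalai.client.execution")
logger.setLevel(logging.INFO)

handler = logging.FileHandler("pyrel_execution.log")  # example path
handler.setFormatter(
    logging.Formatter("%(asctime)s %(levelname)s %(name)s %(message)s")
)
logger.addHandler(handler)

# Any jobs you run after this point write their execution logs to the file.
logger.info("logging configured")  # sample record to verify the handler works
```

Handlers attached this way apply process-wide, so configure them once near the start of your script or notebook.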
## Enable or disable metrics collection

Execution metrics are small counters and timings that the PyRel SDK collects while it runs RelationalAI jobs. Enable them when you want a quick way to inspect how often operations run and how long they take. PyRel keeps these metrics in the current Python process and does not export them automatically.
The steps below show both config methods, first in `raiconfig.yaml` and then in Python code. Follow them to enable execution metrics, confirm that PyRel loaded the setting, and run a simple query to trigger a job.
1. Set `execution.metrics` in `raiconfig.yaml`:

   ```yaml
   connections:
     # ...
     execution:
       metrics: true
   ```

2. Confirm that PyRel loaded the configured value:

   ```python
   from relationalai.semantics import Model

   m = Model("MyModel")
   print(m.config.execution.metrics)
   ```

3. Run a simple query to trigger a job:

   ```python
   from relationalai.semantics import Model

   m = Model("MyModel")
   m.select("hello world").to_df()
   ```
Set `execution.metrics` using `ExecutionConfig` or a plain Python dict.
These examples assume you already have a valid connection configured through file discovery.
1. Create the config with a typed config class:

   ```python
   from relationalai.config import ExecutionConfig, create_config

   cfg = create_config(execution=ExecutionConfig(metrics=True))
   ```

2. Alternatively, create the config with a dict:

   ```python
   from relationalai.config import create_config

   cfg = create_config(execution={"metrics": True})
   ```

3. Confirm that PyRel loaded the configured value:

   ```python
   from relationalai.semantics import Model

   m = Model("MyModel", config=cfg)
   print(m.config.execution.metrics)
   ```

4. Run a simple query to trigger a job:

   ```python
   from relationalai.semantics import Model

   m = Model("MyModel", config=cfg)
   m.select("hello world").to_df()
   ```
- The config check confirms that PyRel loaded `execution.metrics`.
- The query step triggers a job with metrics enabled.
- These metrics are local to the current Python process and are not exported automatically.
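This guide does not show the API for reading PyRel's collected metrics, so the block below does not use it. As a conceptual stand-in, process-local "counters and timings" of the kind described above can be modeled with a dict and `time.perf_counter`; every name here is hypothetical.

```python
import time
from collections import defaultdict

# Conceptual stand-in for process-local metrics: counters and timings kept
# in the current Python process. This is NOT PyRel's metrics API.
metrics = {"counts": defaultdict(int), "timings_s": defaultdict(list)}

def timed(op_name, fn, *args, **kwargs):
    """Run fn, counting the call and recording its wall-clock duration."""
    start = time.perf_counter()
    try:
        return fn(*args, **kwargs)
    finally:
        metrics["counts"][op_name] += 1
        metrics["timings_s"][op_name].append(time.perf_counter() - start)

# Hypothetical example operation, timed twice.
timed("submit_job", sum, range(1000))
timed("submit_job", sum, range(1000))

print(metrics["counts"]["submit_job"])          # how often the operation ran
print(sum(metrics["timings_s"]["submit_job"]))  # total time spent in it
```

Because the data lives only in this process's memory, it disappears when the process exits, which matches the process-local behavior described above.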