
Configure execution behavior

Use this page when you need PyRel to retry network-related job submission failures, send execution logs through Python logging, or collect execution metrics for debugging. If you don’t need those behaviors, you can usually leave the execution settings alone. This guide shows how to configure these optional settings in raiconfig.yaml and in Python code.

Before you begin, make sure that:

  • You have access to a Snowflake account with the RelationalAI Native App installed. If you are unsure, contact your Snowflake administrator.
  • You have a working PyRel installation. See Set Up Your Environment for instructions.

These settings apply to the Submit RelationalAI job step in the PyRel workflow:

  1. Load and validate configuration
  2. Build model and query
  3. Submit RelationalAI job
  4. Run job with reasoners
  5. Materialize results

You can use execution settings to control retries and enable or disable execution logs and metrics:

  • Retries let PyRel re-submit a job after short-lived problems, with a configurable delay between attempts.
  • Execution logs show structured log messages about what PyRel is doing.
  • Metrics collection records SDK-side counters and timings you can inspect in Python.

Use the sections below to configure these settings in raiconfig.yaml or in Python code.

Turn on retries when you want PyRel to try job submission again after a short-lived problem. Retries help scripts, notebooks, and scheduled jobs continue through brief interruptions. They are disabled by default but should be enabled for production workloads.

For most workloads, turn retries on and keep the default settings. Only change the other retry settings if you need a different attempt limit or backoff window.

To configure retries:

  1. Enable retries in raiconfig.yaml:

    connections:
      # ...
      execution:
        retries:
          enabled: true

    If you only want the default retry behavior, you can stop here. PyRel keeps the default values for the other retry settings.

  2. Optionally tune the other retry settings:

    Skip this step unless the default values are not a good fit for your workload. By default, PyRel uses max_attempts: 3, base_delay_s: 0.25, max_delay_s: 5.0, and jitter: 0.2.

    connections:
      # ...
      execution:
        retries:
          enabled: true
          max_attempts: 5
          base_delay_s: 0.25
          max_delay_s: 5.0
          jitter: 0.2
    • max_attempts defaults to 3. PyRel counts the first try as one of those attempts.
    • base_delay_s defaults to 0.25 seconds. PyRel increases the wait time after each failed try.
    • max_delay_s defaults to 5.0 seconds. PyRel does not wait longer than this before it tries again.
    • jitter defaults to 0.2. This adds a small random amount to the wait time so many jobs do not all retry at the exact same moment.
  3. Confirm that PyRel loaded your retry settings:

    from relationalai.semantics import Model
    m = Model("MyModel")
    print(m.config.execution.retries.enabled)
    print(m.config.execution.retries.max_attempts)
    print(m.config.execution.retries.base_delay_s)
    print(m.config.execution.retries.max_delay_s)
    print(m.config.execution.retries.jitter)
    • This step confirms that PyRel loaded the retry policy you configured.
    • A real retry only occurs when a request fails with a retryable transient error, so this example does not demonstrate a retry in progress.
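PyRel's exact backoff formula is not shown on this page, but the settings above map onto a standard exponential-backoff-with-jitter pattern. The following standalone sketch (not PyRel code) uses the same setting names to show how the wait between attempts grows:

    import random

    def retry_delays(max_attempts=3, base_delay_s=0.25, max_delay_s=5.0, jitter=0.2):
        """Yield the wait time before each retry (exponential backoff with jitter)."""
        for attempt in range(1, max_attempts):  # no delay before the first attempt
            delay = min(base_delay_s * (2 ** (attempt - 1)), max_delay_s)
            # Add up to `jitter` seconds of random noise so many jobs
            # do not all retry at the exact same moment.
            yield delay + random.uniform(0, jitter)

    for i, d in enumerate(retry_delays(max_attempts=5), start=1):
        print(f"wait before retry {i}: ~{d:.2f}s")

With the defaults, the delays grow roughly 0.25 s, 0.5 s, 1.0 s, and so on, capped at max_delay_s, with a small random offset added to each.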

Execution logs are structured log messages that PyRel emits while it runs RelationalAI jobs. Enable them when you want to see what PyRel is doing during job submission and execution. Turning this on makes the logs available through Python’s logging system, so you also need to configure Python logging to display them.

Follow the steps below to enable execution logs, confirm that PyRel loaded the setting, and run a simple query to trigger a job.

  1. Set execution.logging in raiconfig.yaml:

    connections:
      # ...
      execution:
        logging: true
  2. Confirm that PyRel loaded the configured value:

    from relationalai.semantics import Model
    m = Model("MyModel")
    print(m.config.execution.logging)
  3. Run a simple query to trigger a job:

    import logging
    from relationalai.semantics import Model
    logging.basicConfig(level=logging.INFO)
    logging.getLogger("relationalai.client.execution").setLevel(logging.INFO)
    m = Model("MyModel")
    # Run a query to trigger a job and look for execution logs in the output.
    m.select("hello world").to_df()
  • The config check confirms that PyRel loaded execution.logging.
  • The query step triggers a job so you can look for execution logs in the output.
  • Each execution log line includes the request type, the operation name, how long the operation took, and meta, which is extra context PyRel attaches to the request.
  • These logs summarize what happened, but they do not show the full SQL text or full HTTP request details such as headers or bodies.
  • If a job fails, check the exception message returned by the backend for more detailed error information.
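Because these logs flow through Python's standard logging module, you can route them with ordinary handlers. For example, here is a minimal sketch (the file name is arbitrary) that also writes execution logs to a file, using the same logger name as step 3 above:

    import logging

    # Route PyRel execution logs to a file in addition to the console.
    logger = logging.getLogger("relationalai.client.execution")
    logger.setLevel(logging.INFO)

    handler = logging.FileHandler("pyrel_execution.log")
    handler.setFormatter(
        logging.Formatter("%(asctime)s %(levelname)s %(name)s: %(message)s")
    )
    logger.addHandler(handler)

    logger.info("handler attached")  # later execution logs also land in the file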

Execution metrics are small counters and timings that the PyRel SDK collects while it runs RelationalAI jobs. Enable them when you want a quick way to inspect how often operations run and how long they take. PyRel keeps these metrics in the current Python process and does not export them automatically.

Follow the steps below to enable execution metrics, confirm that PyRel loaded the setting, and run a simple query to trigger a job.

  1. Set execution.metrics in raiconfig.yaml:

    connections:
      # ...
      execution:
        metrics: true
  2. Confirm that PyRel loaded the configured value:

    from relationalai.semantics import Model
    m = Model("MyModel")
    print(m.config.execution.metrics)
  3. Run a simple query to trigger a job:

    from relationalai.semantics import Model
    m = Model("MyModel")
    m.select("hello world").to_df()
  • The config check confirms that PyRel loaded execution.metrics.
  • The query step triggers a job with metrics enabled.
  • These metrics are process-local to the current Python process.
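To illustrate what process-local counters and timings mean, here is a small hand-rolled sketch. The class and method names are hypothetical, not PyRel's API; they only show the kind of data the SDK keeps in the current Python process:

    import time
    from collections import defaultdict

    class ProcessLocalMetrics:
        """Hypothetical sketch of SDK-side counters and timings kept in-process."""

        def __init__(self):
            self.counters = defaultdict(int)   # how often each operation ran
            self.timings = defaultdict(list)   # how long each run took, in seconds

        def count(self, name):
            self.counters[name] += 1

        def time(self, name, fn, *args):
            start = time.perf_counter()
            result = fn(*args)
            self.timings[name].append(time.perf_counter() - start)
            self.count(name)
            return result

    metrics = ProcessLocalMetrics()
    metrics.time("submit_job", lambda: sum(range(1000)))
    print(metrics.counters["submit_job"])       # number of runs
    print(sum(metrics.timings["submit_job"]))   # total time spent

Because the data lives in ordinary Python objects, it disappears when the process exits, which is what "process-local" and "not exported automatically" mean above.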