torch._dynamo

Warning

This module is an early prototype and is subject to change.

torch._dynamo.allow_in_graph(fn)[source]

Customize which functions TorchDynamo will include in the generated graph. Similar to torch.fx.wrap().

torch._dynamo.allow_in_graph(my_custom_function)

@torch._dynamo.optimize(...)
def fn(a):
    x = torch.add(a, 1)
    x = my_custom_function(x)
    x = torch.add(x, 1)
    return x

fn(...)

Will capture a single graph containing my_custom_function().

torch._dynamo.disallow_in_graph(fn)[source]

Customize which functions TorchDynamo will exclude from the generated graph, forcing a graph break on them.

torch._dynamo.disallow_in_graph(torch.sub)

@torch._dynamo.optimize(...)
def fn(a):
    x = torch.add(a, 1)
    x = torch.sub(x, 1)
    x = torch.add(x, 1)
    return x

fn(...)

Will break the graph on torch.sub(), giving two graphs, each containing a single torch.add() op.

torch._dynamo.forbid_in_graph(fn)[source]

Customize which functions TorchDynamo will assert are not present while tracing.

If you want a graph break on this function instead, use disallow_in_graph().
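
For example (a minimal sketch; my_unsupported_function is a hypothetical placeholder):

torch._dynamo.forbid_in_graph(my_unsupported_function)

@torch._dynamo.optimize(...)
def fn(x):
    # Tracing raises an assertion if my_unsupported_function is reached,
    # rather than breaking the graph around it.
    return my_unsupported_function(x)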

torch._dynamo.graph_break()[source]

Force a graph break
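
For example (a minimal sketch, following the style of the examples above):

@torch._dynamo.optimize(...)
def fn(x):
    x = torch.add(x, 1)
    torch._dynamo.graph_break()  # splits tracing here, producing two graphs
    x = torch.add(x, 1)
    return x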

torch._dynamo.mark_dynamic(t, index)[source]

Mark a tensor as having a dynamic dim.

[Note - on the state of mark_dynamic]

The behavior of having a dynamic dimension on a tensor is governed by a few factors:

  1. torch._dynamo.config.dynamic_shapes being True or False:

    a) dynamic_shapes=True - dynamic_shapes must be True for mark_dynamic to work.

    b) dynamic_shapes=False - This config will raise an exception when used in conjunction with mark_dynamic. We will eventually support this.

  2. If the dimension is fully constrained - as in, it does not allow more than a single value in both eager (torch.compile, torch._dynamo.optimize) mode and export mode (torch._dynamo.export) - we will raise an error.

  3. If the dimension is partially constrained - allowing at least 2 values but not the full unbounded range of shapes - eager will pass it through, but export will raise an error.

  4. Attempts to trace this function will explicitly raise. As such, all calls to mark_dynamic must be made before torch.compile (see the sketch after this list).
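
A minimal sketch, assuming dynamic shapes are enabled in torch._dynamo.config:

def fn(x):
    return x * 2

t = torch.randn(8, 16)
torch._dynamo.mark_dynamic(t, 0)  # dim 0 may vary across calls; dim 1 stays static

compiled_fn = torch.compile(fn)   # all mark_dynamic calls must come first
compiled_fn(t)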

torch._dynamo.mark_static(t, index=None)[source]

Mark a tensor as having a static dim.

This will prevent us from attempting to compile it dynamically when dynamic=True; this can improve trace-time performance.

This has lower precedence than mark_dynamic.
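
A minimal sketch, assuming index=None marks every dim static:

t = torch.randn(8, 16)
torch._dynamo.mark_static(t, 1)  # treat dim 1 as static even under dynamic=True
torch._dynamo.mark_static(t)     # with index=None, mark every dim static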

torch._dynamo.optimize(backend='inductor', *, nopython=False, guard_export_fn=None, guard_fail_fn=None, disable=False, dynamic=False)[source]

The main entrypoint of TorchDynamo. Performs graph capture and calls backend() to optimize the extracted graphs.

Parameters:
  • backend – One of two things: either a function/callable taking a torch.fx.GraphModule and example_inputs and returning a Python callable that runs the graph faster (additional context for the backend, such as torch.jit.fuser("fuser2"), can be provided by setting the backend_ctx_ctor attribute; see AOTAutogradMemoryEfficientFusionWithContext for usage), or a string backend name in torch._dynamo.list_backends(). See the sketch after the example below for a minimal custom backend.

  • nopython – If True, graph breaks will be errors and there will be a single whole-program graph.

  • disable – If True, turn this decorator into a no-op

  • dynamic – If True, turn on dynamic shapes support

Example Usage:

@torch._dynamo.optimize()
def toy_example(a, b):
    ...
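
A custom backend can be as simple as a function that inspects the captured graph and returns a callable to run it (a minimal sketch; a real backend would return an optimized callable instead of gm.forward):

def my_compiler(gm: torch.fx.GraphModule, example_inputs):
    gm.graph.print_tabular()  # inspect the captured ops
    return gm.forward         # run the captured graph as-is

@torch._dynamo.optimize(my_compiler)
def toy_example(a, b):
    return a + b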

torch._dynamo.optimize_assert(backend, *, hooks=Hooks(guard_export_fn=None, guard_fail_fn=None), export=False, export_constraints=None, dynamic=False)[source]

The same as torch._dynamo.optimize(backend, nopython=True)

torch._dynamo.export(f, *args, aten_graph=False, decomposition_table=None, tracing_mode='real', constraints=None, **kwargs)[source]

Export an input function f to a format that can be executed outside of PyTorch using the FX graph.

Parameters:
  • f (callable) – A PyTorch function to be exported.

  • *args – Variable length argument list to be passed to the function f.

  • aten_graph (bool) – If True, exports a graph with ATen operators. If False, exports a graph with Python operators. Default is False.

  • decomposition_table (dict) – A dictionary that maps operators to their decomposition functions. Required if aten_graph or tracing_mode is specified. Default is None.

  • tracing_mode (str) – Specifies the tracing mode. Must be set to "real" if decomposition_table is not specified. If decomposition_table is specified, the options are "symbolic" or "fake". Default is "real".

  • **kwargs – Arbitrary keyword arguments to be passed to the function f.

Returns:

A tuple (graph, guards), where graph is an FX graph representing the execution of the input PyTorch function with the provided arguments and options, and guards are the guards accumulated while tracing f.

Raises:
  • AssertionError – If decomposition_table or tracing_mode is specified without setting aten_graph=True, or if the graph breaks during tracing in export.

  • AssertionError – If Dynamo's input and output are not consistent with the traced input/output.

Return type:

Tuple[GraphModule, Set[Guard]]
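
A minimal sketch of exporting a simple function:

def f(x):
    return x.cos() + x.sin()

graph, guards = torch._dynamo.export(f, torch.randn(3))
graph.print_readable()       # inspect the captured FX graph
out = graph(torch.randn(3))  # the GraphModule is directly callable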

torch._dynamo.run(fn=None)[source]

Don't do any dynamic compiles; just use prior optimizations.
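
A minimal sketch (assuming fn was previously compiled, e.g. via torch._dynamo.optimize; frames without a cached compilation are expected to run eagerly):

frozen_fn = torch._dynamo.run(fn)
frozen_fn(torch.randn(4))  # reuses cached graphs, never triggers new compiles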

torch._dynamo.disable(fn=None, recursive=True)[source]

Decorator and context manager to disable TorchDynamo

If recursive=True, Dynamo is completely skipped on the decorated function frame as well as the recursively invoked functions.

If recursive=False, Dynamo skips frames associated with the function code, but still processes recursively invoked frames.
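
A minimal sketch of both modes:

@torch._dynamo.disable
def fully_skipped(x):
    return x + 1       # Dynamo skips this frame and everything it calls

def inner(x):
    return x * 2

@torch._dynamo.disable(recursive=False)
def shallow_skipped(x):
    return inner(x)    # this frame is skipped, but inner may still be traced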

torch._dynamo.reset()[source]

Clear all compile caches and restore initial state
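
For example, to force fresh compiles between experiments (a minimal sketch; "eager" is a built-in debug backend):

def fn(x):
    return x + 1

inp = torch.randn(4)
torch.compile(fn)(inp)
torch._dynamo.reset()  # clear caches before trying a different backend
torch.compile(fn, backend="eager")(inp)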

class torch._dynamo.OptimizedModule(mod, dynamo_ctx)[source]

Wraps the original nn.Module object and later patches its forward method with the optimized self.forward method.
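
For example (a minimal sketch; torch.compile on an nn.Module returns this wrapper):

mod = torch.nn.Linear(4, 4)
opt_mod = torch.compile(mod)
isinstance(opt_mod, torch._dynamo.OptimizedModule)  # True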

torch._dynamo.register_backend(compiler_fn=None, name=None, tags=())[source]

Decorator to add a given compiler to the registry to allow calling torch.compile with string shorthand. Note: for projects not imported by default, it might be easier to pass a function directly as a backend and not use a string.

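A minimal sketch (my_backend is a hypothetical name; assuming the backend registers under the function name when name is not given, and noting that returning gm.forward runs the graph unchanged):

@torch._dynamo.register_backend
def my_backend(gm, example_inputs):
    return gm.forward  # a real backend would return an optimized callable

@torch.compile(backend="my_backend")
def fn(x):
    return x + 1
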
torch._dynamo.list_backends(exclude_tags=('debug', 'experimental'))[source]

Return valid strings that can be passed to:

torch.compile(..., backend="name")
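
For example:

torch._dynamo.list_backends()                 # non-debug, non-experimental backends
torch._dynamo.list_backends(exclude_tags=())  # include debug/experimental backends too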
