Welcome to Efemarai’s user guide!

Efemarai is an advanced tool for visualizing, inspecting and debugging machine learning code written in Python using libraries such as NumPy and PyTorch. In this guide you will learn in depth about Efemarai, the core concepts behind it, the capabilities it provides you with and how to use it in your projects.

Before you continue, you might want to quickly go through the Getting Started page or open the Python API or the Assertions Library catalog as a reference while you go through this guide.

Getting Started

Setup Efemarai and generate your first visualizations.

Python API

Complete reference of the Efemarai Python API.

Assertions Library

Complete catalog of the Assertions Library.

Core Capabilities

Debugging is hard. Debugging machine learning (ML) code is even harder as the standard debugging techniques are not adequate for the complexity and scale of modern machine learning. The crux of the problem is the mismatch of abstraction levels. When we describe machine learning models we use terms with strong mathematical connotation (e.g. ‘graph’, ‘tensor’, ‘gradient’), but when we debug we are forced to use computer science terms that are more generic (e.g. ‘value’, ‘array’, ‘call stack’). Technically, a tensor is a multidimensional array and a computational graph defines a sequence of function calls, but by speaking about our models with more generic terms we throw away lots of semantic content which is absolutely crucial when we want to find and fix bugs in our ML code.

Efemarai’s core purpose is to make debugging ML code faster and easier by reducing this abstraction gap. This is achieved by focusing on three core capabilities that form a solid foundation enabling code and model introspection that goes well beyond pure debugging. Efemarai is an indispensable tool for tackling hard research problems (e.g. investigating explainability or robustness) as it will undoubtedly provide you with a deeper understanding of your models.


Visualize

Visualizing data and computations as close as possible to what they actually represent, mean or do is a crucial step towards efficient ML code development and debugging. Efemarai can automatically build intuitive 3D visualizations of large multidimensional tensors as well as complex computational graphs. These visualizations let you identify issues with your data or ML code at a glance.


Inspect

When debugging it is important to have access to each and every value consumed or produced by your code. Efemarai allows you to effortlessly inspect any tensor or tensor element in just a few clicks. Even tensors that are not explicitly handled by your code, such as gradient tensors, can also be quickly inspected.


Assert

Quickly detecting violations of your assumptions about the data or the behaviour of your code is absolutely essential for bug fixing. Efemarai allows you to easily express assertions about tensors and functions working with tensors. All assertions are automatically monitored and whenever any one of them is violated your code is paused and visualized such that you can investigate what caused the error.

Inspecting Tensors

Inspecting a tensor is similar to printing it and looking at the values. However, instead of looking at terminal printouts, you get to explore an interactive 3D visualization of the tensor. All you need to do is call inspect instead of print.


Remember to start the Efemarai daemon and launch Efemarai (see Getting Started) before you run any of the code in this guide.


import numpy as np
import efemarai as ef

tensor = np.random.rand(3, 4, 5)
ef.inspect(tensor)


import torch
import efemarai as ef

tensor = torch.rand(3, 4, 5)
ef.inspect(tensor)

Any Efemarai function that accepts NumPy arrays also works with PyTorch tensors. Run the code above, switch to your browser and you should be able to see the tensor (press v for a better view angle).

UI Basics

Efemarai’s UI allows you to explore and interact with the automatically generated 3D visualizations. You can navigate within the 3D view with Shift + LeftMouse to rotate, Shift + RightMouse to translate and Scroll to zoom in or out. You can also press v to automatically rotate the camera to a dimetric view.

In order to read the value of a tensor element simply hover over the corresponding voxel — this will display its value (and index) as well as highlight its position within the tensor histogram.

Each corner of the UI contains menus and controls dedicated to a particular functionality provided by Efemarai.

Inspect List (top left)

See all inspected tensors and switch between them (or the computational graph). Remove inspected tensors once you no longer need them.

Tensor Menu (top right)

Dive deeper into exploring a tensor. Configure how it is visualized, show its gradient, or check the distribution of its values.

Execution Control (bottom left)

Pause or resume the execution of your program. Monitor assertions and highlight errors.

Point of View Gauge (bottom right)

Keep track of the view orientation such that you know the direction of each tensor axis. Click on any of the rotating colored points to quickly set the point of view.


Gradients are a key component of any continuous optimization procedure. While you can manually inspect tensors that contain gradient information, Efemarai makes it much easier by automatically tracking tensor gradients when available and allowing you to inspect them with a single click.

To toggle the gradient of a tensor you can press the gradient symbol at the top of the Tensor Menu or press g.

The following example creates and inspects two tensors, one with and one without a gradient.

import torch
import efemarai as ef

shape = (10, 10, 10)

tensor = torch.rand(*shape)
ef.inspect(tensor, name="Without gradient", wait=True)

tensor = torch.rand(*shape, requires_grad=True)
tensor.grad = torch.randn(*shape)
ef.inspect(tensor, name="With gradient", wait=True)

To keep track of what tensor you are inspecting you can provide a tensor name to inspect. Additionally, we have provided another named argument wait=True to inspect() which tells Efemarai to pause the execution of your program while you inspect the tensor. In order to resume the execution you can either press the ‘play’ button on the Execution Control panel (bottom left corner) or, similarly to pdb and ipdb, press c to continue.

So when you run the code above you will see the visualization of the first tensor and your program will be paused. If you switch to the gradient of that tensor (press g) you will see an empty tensor with no data. Since the inspected tensor does not have an associated gradient, its gradient is assumed to be a tensor of the same size, but with no data.

Press c to resume the execution of your program and you will see the visualization of the second tensor. Now this tensor does contain gradient information so you can visualize it by pressing g. If you look at the gradient histogram in the Tensor menu you will clearly see that the gradient values come from a normal distribution as we expected.

Tensor Intersections

Hovering over a voxel displays its value, but what if you want to check the value of an ‘internal’ voxel that cannot simply be hovered over? With Efemarai you can easily visualize the intersections of a tensor with the planes perpendicular to the world axes. You can turn on all three intersections by enabling the Intersections section of the Tensor Menu or by clicking on any of the X, Y or Z buttons within the section. Drag the sliders to move the intersections along the axes.

If you are not sure which intersection you are looking at simply check the Point of View Gauge (bottom right corner) as it shows you the direction of each axis. From the screenshot above you can see that we have hovered over a voxel on the intersection along the Z axis with index (4, 5, 2) and value of ~0.4.

View Dimensions

So far we have only visualized 3-dimensional tensors, but Efemarai can visualize tensors of greater dimensionality.

Multidimensional Tensors

Efemarai can visualize 6-dimensional tensors as a ‘cube of cubes’ that we refer to as a ‘grid of cells’. Each cell can contain a 3-dimensional tensor and the grid has 3 dimensions along which the cells are arranged. The 3 most significant dimensions of a tensor are spanned by the grid, whereas the 3 least significant dimensions are spanned by each cell.

As a result, each one of the 3D axes (annotated as X, Y, Z) is used for up to two tensor dimensions (one grid and one cell dimension) as illustrated in the diagram below. For example, the most significant grid dimension and the most significant cell dimension are arranged along the Z axis.

View dimensions diagram.
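As a quick sketch of this mapping in code (plain NumPy, with illustrative variable names that are not part of the Efemarai API), the grid and cell shapes of a 6-dimensional tensor are simply the first and last three entries of its shape:

```python
import numpy as np

# A 6-dimensional tensor: the 3 most significant dimensions span the
# grid of cells, the 3 least significant dimensions span each cell.
tensor = np.random.rand(2, 3, 4, 5, 6, 7)

grid_shape = tensor.shape[:3]   # dimensions along which cells are arranged
cell_shape = tensor.shape[3:]   # dimensions spanned within each cell

print(grid_shape, cell_shape)   # (2, 3, 4) (5, 6, 7)
```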

Additionally, it is common to process entire input batches instead of individual inputs. In that case the most significant dimension of a batched tensor is assumed to contain the batch elements. It is possible to visualize the entire tensor or individual batch elements by setting the batch index of that element from the Graph Menu. Thus you can visualize batched 6-dimensional tensors. In other words Efemarai is capable of visualizing up to 7-dimensional tensors.

Let’s have a look at some examples to better understand how multidimensional tensors are visualized. Pay attention to the View Dimensions section of the Tensor Menu in the following screenshots.

import torch
import efemarai as ef

for i in range(1, 7):
    shape = (3,) * i
    tensor = torch.rand(shape)
    ef.inspect(tensor, name="{}D".format(i))


Let’s consider the following example inspecting a 4-dimensional tensor.

import torch
import efemarai as ef

shape = (10, 5, 3, 7)
num_elements = 10 * 5 * 3 * 7

data = torch.linspace(0, 1, num_elements).view(*shape)
noise = torch.rand(*shape)

ef.inspect(data + noise)

As you already know, tensor intersections are helpful if you want to see the values ‘inside’ the tensor. However, Efemarai provides an even faster and easier way to inspect all tensor values at a glance. In order to minimize voxel overlap, it is possible to alter the way tensor axes are assigned to view dimensions. It is useful to think about each view dimension as an empty slot that can be filled with a tensor axis.

In the screenshot above, you can see that the axes of the 4-dimensional tensor are visualized along the 4 least significant view dimensions. If you click on any of the tensor axes (the blue numbers shown in the view dimension slots) a new empty tensor axis will be inserted. This is equivalent to using newaxis in NumPy or unsqueeze in PyTorch. As a result, you can visualize overlapping voxels along an empty view dimension, like so

Now there are no voxels that you cannot inspect just by hovering over them. If you click on the unsqueezed tensor axis (the blue view dimension slot with a minus sign) you can remove the new axis and see the tensor in its original shape. More importantly, we can improve the way this tensor is visualized even further!


You can not only click on tensor axes to insert new ones, but also drag and drop them to different view dimensions, which is equivalent to transposing the underlying tensor. To better utilize the visualization area in our example we can perform two more axis swaps and then press the Z axis button in the Point of View Gauge. The result is a visualization that allows you to clearly see and inspect any tensor element you would like.
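In code terms, these two UI operations correspond to familiar array manipulations. A small NumPy sketch (PyTorch's unsqueeze and permute behave analogously):

```python
import numpy as np

t = np.random.rand(10, 5, 3, 7)

# Clicking on a tensor axis inserts a new empty axis, equivalent to
# np.newaxis in NumPy or unsqueeze in PyTorch:
expanded = t[:, np.newaxis]        # shape (10, 1, 5, 3, 7)

# Dragging an axis to another view dimension transposes the tensor,
# equivalent to np.transpose in NumPy or permute in PyTorch:
swapped = t.transpose(0, 1, 3, 2)  # shape (10, 5, 7, 3)

print(expanded.shape, swapped.shape)
```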

As shown in the screenshots above, when you hover over a voxel you see its value and its index. To make working with multidimensional tensors easier, the index is split into a grid index (displayed below the value) and a cell index (displayed at the bottom) which, when concatenated in that order, give the full element index.
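A small NumPy sketch of how the split works (the indices here are made up for illustration):

```python
import numpy as np

t = np.arange(2 * 3 * 4 * 5 * 6 * 7).reshape(2, 3, 4, 5, 6, 7)

grid_index = (1, 2, 3)   # displayed below the hovered value
cell_index = (4, 5, 6)   # displayed at the bottom of the view

# Concatenating the grid index and the cell index gives the full
# element index into the underlying tensor.
full_index = grid_index + cell_index
print(t[full_index])     # 5039
```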

Tensor Views

Tensors typically contain data that represents something; they rarely are just a bunch of values with no assigned meaning. For example, in computer vision tensors often represent images, in natural language processing they represent words and in generative models — probability distributions. Having to mentally translate the raw tensor data into what it actually stands for can be quite demanding. This error-prone process will most certainly distract you and slow you down when developing and debugging ML code.

To help you with that, Efemarai introduces the notion of Tensor Views — one tensor (the same data) can be viewed in many different ways (views) depending on what it represents.

One way to add a new view to a tensor is to set the view argument of inspect. For example, you can view a tensor as an image with the following code. This does not remove the raw tensor view, so you can easily switch between the two by selecting the one you need from the Tensor Menu.

import torch
import efemarai as ef

tensor = torch.rand(3, 32, 32)
ef.inspect(tensor, view=ef.View.Image)

Another way to add a new Tensor View without changing your code is to use the user interface. Whenever you want to add a new Tensor View you can select from a list of all available tensor views that are applicable to the corresponding tensor. Removing a tensor view can also be done with a single mouse click.

Below you can find a list of all available tensor views and how you can use them.

Raw View

Every tensor has at least one Tensor View and that is the Raw View. Each voxel is colored according to its value based on a colormap.

Tensor raw view.

Image View

Any 3-dimensional tensor of shape (C, H, W) or any batched 4-dimensional tensor of shape (B, C, H, W), where C is divisible by 3, can be visualized as an RGB image.

ef.inspect(image, view=ef.View.Image)
Tensor image view.

Need a Tensor View tailored to your data?

Scanning Computational Graphs

So far you saw how to inspect tensors and the data they hold. In this section you will learn how to visualize and explore the entire computational graph constructed by your code.

Efemarai can monitor the execution of your code and automatically build an interactive 3D visualization of the executed computational graph.

Let’s consider the following neural network that we will visualize and explore (it is based on a PyTorch tutorial). It’s a fairly standard convolutional neural network with 2 convolutional layers, max pooling, 3 fully connected layers and ReLU activations.

import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 5 * 5)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

Assuming that the training subset of the CIFAR10 dataset is loaded in trainloader, the network is instantiated as model and an optimizer is created as optimizer, we can use the following code for training:

for data, target in trainloader:
    optimizer.zero_grad()

    output = model(data)
    loss = F.cross_entropy(output, target)

    loss.backward()
    optimizer.step()

Now, visualizing the model is as easy as importing the efemarai package and adding a single line of code telling Efemarai to scan the execution of the forward and backward passes.

import efemarai as ef

for data, target in trainloader:
    optimizer.zero_grad()

    with ef.scan():
        output = model(data)
        loss = F.cross_entropy(output, target)
        loss.backward()

    optimizer.step()



This example is included with the efemarai Python package and you can run it with


This will automatically generate an interactive 3D visualization of the constructed computational graph and pause the program execution. Switch to your browser and explore the visualization (remember Shift + LeftMouse to rotate, Shift + RightMouse to move, v to move the camera to a dimetric view, scroll to zoom).

You have already seen tensor visualizations, what you have not seen yet though, are the blue cubes — they represent functions. Here is how the code from above relates to the visualized graph

Unfolding & Folding Functions

Efemarai automatically visualizes calls to differentiable functions or methods. Protected or private methods (as per the leading underscores convention) are visualized only if they are implemented in a file somewhere within the current working directory. In that way, distracting implementation details of the machine learning library that you are using are concealed.

As you can see from the definition of the Net class above, its forward method actually calls other methods. You can quickly explore what other methods were called simply by left clicking on the model function. This results in unfolding the graph, showing you a more detailed visualization. You can fold the graph again by right clicking on any of the functions being called by the forward method.

You can continue unfolding functions by clicking on them until you unfold the entire graph. However, there is an easier way — open the Graph Menu in the top right corner and click the unfold button at the top. Now you should see the entire computational graph that your code constructs (you might want to zoom out).

You can fold the entire graph back to its initial state by clicking the fold button next to the unfold one. In general, tensors between function nodes contain data, while tensors below each function are function parameters or transformations thereof.

Inspecting Tensors in the Graph

When you click on a tensor the Tensor Menu pops up just below the Graph Menu. You can inspect any tensor in exactly the same way as you saw in the Inspecting Tensors section while seeing the entire graph. This is especially useful if you need to debug a function and cross check its inputs and outputs. However, if you need to isolate a tensor from the rest of the graph you can click on it while holding the Ctrl key (i.e. Ctrl + LeftMouse) or click on the magnifying glass button in the Tensor Menu.

In our example, the input tensor is actually an image so we should add an image view to it. You’ve seen how you can add a tensor view from the Tensor Menu or with the inspect function. You can also add tensor views from your code when scanning computational graphs with the add_view function:

import efemarai as ef

for data, target in trainloader:
    optimizer.zero_grad()

    ef.add_view(data, ef.View.Image)

    with ef.scan():
        output = model(data)
        loss = F.cross_entropy(output, target)
        loss.backward()

    optimizer.step()



Similarly to tensors, you can view the gradients of every tensor in the graph by clicking on the gradient symbol at the top of the Graph Menu or by pressing g.

Visualizing the gradients of the entire graph is incredibly useful, especially when you want to figure out why your model does not converge during training.


We usually train models by going over the data in batches. By default, when visualizing a computational graph you see only the first item in the batch. You can change which batch element you want to see by using the slider in the Graph Menu

or, alternatively, show all items of the batch (in this example the batch size is 3) by clicking on the batch button (or pressing b)

When using inspect, tensors are never visualized as batched. However, Efemarai automatically detects whether a tensor is batched when scanning a computational graph. Typically, data tensors are batched while parameters and buffers are not.

Need a custom visualization of your model?

Tensor & Function Assertions

In order to quickly detect problems in your code Efemarai enables you to easily specify various data and model assertions through a highly expressive range of assertion types. Efemarai supports two main types of assertions – Tensor and Function Assertions.

  • Tensor Assertions let you specify various constraints that must hold true for any tensor (including gradients) that’s part of the computational graph.
  • Function Assertions let you specify constraints on any function called during the construction of the computational graph.

To make debugging even more effortless, Efemarai ships with an Assertions Library – a collection of assertions detecting common problems such as vanishing gradients or NaN values. Running your model through the extensive set of checks specified in the Assertions Library is as easy as calling:


In this guide, however, we will dive deeper into how assertions work such that you can start writing your own assertions that are highly customized for your data and models.

Execution Stages

Efemarai breaks down the process of scanning a computational graph into 4 stages as shown below

Computational graph scan stages.

Any code that you write within the ef.scan context is executed during the Forward Pass. Once your code calls backward(), Efemarai automatically transitions to the Backward Pass stage and records all calculated gradients. The Start and Complete stages are needed for some bookkeeping performed by Efemarai.

Depending on the type of constraint an assertion expresses, it can be staged to be checked at any of the following moments during a computational graph scan

Assertion execution stages.

Tensor Assertions staged for execution after the Forward Pass have access only to tensors calculated during the Forward Pass. Tensor Assertions staged before the Backward Pass have access only to gradients calculated before the current Backward Pass (i.e. during the previous one), while those staged after it have access to the gradients calculated during the current Backward Pass. Tensor Assertions staged for execution after the scan is complete have access to all tensors and gradients calculated during both the Forward and Backward passes. Additionally, all Function Assertions are executed after the full scan is complete.

Staging assertions at various points of the graph scan is quite useful since whenever an assertion error occurs the scanning process is paused, the computational graph is visualized and all assertion errors that have occurred so far during the scan are displayed. Thus, you always have access to the most relevant information directly causing the error.

Assertion Errors

For example purposes, let’s modify the training code snippet from above and explicitly introduce a mistake in the code as well as corrupt the input data.

import efemarai as ef

for data, target in trainloader:
    # optimizer.zero_grad()
    data[0, 0, 16, 0] = float("inf")

    ef.add_view(data, ef.View.Image)

    with ef.scan():
        output = model(data)
        loss = F.cross_entropy(output, target)
        loss.backward()

    optimizer.step()


There is a small number of assertions in the Assertions Library that are enabled by default – they detect NaN and Inf values as well as non-zero gradients at the start of the backward pass. So if you run the code above you will see the following visualization

According to the Execution Menu (bottom left corner) two assertions (executed right after the forward pass) have been violated – there are tensors containing NaN and Inf values. Firstly, you can see the orange voxel in the image that we explicitly set to +Inf. Secondly, you can see that some tensors are completely red meaning that they contain only NaN values. Sometimes it is not so easy to spot the tensor/function violating an assertion, so you can just hover over the assertion error which will highlight the visible error nodes.

You can also highlight all error nodes in the graph, unfolding it as necessary, by clicking on the highlight button next to each violated assertion. If you click on it again you’ll fold the graph back to its initial state.

You can also see that the loss at the end is NaN, which must affect the gradients. However, if you press g to toggle the gradients now you will see empty tensors. This is because the failed assertions are staged for execution just after the forward pass, before any gradients are actually calculated, so the scanning of the graph is paused at that point. If you pay closer attention to the top of the Execution Menu you will see a red label tag just next to the ‘After Forward’ title which indicates the execution stage at which the graph scan is paused. To continue the scan simply press c or click on the play button in the Execution Menu.

Now you should see another assertion error that occurred after the backward pass. Toggle the gradients with g and unfold a couple of functions to see the parameter tensors. Surprisingly, a single +Inf value has resulted in pretty much all parameter gradients to be full of NaN values.

Press c again to complete the scan of the graph. Press c once more to start the second iteration. You will quickly see that the same assertion errors as the ones in the previous iteration, appear after the forward pass. Now press c again and you will see a new error that appears before the backward pass.

This assertion ensures that zero_grad() is called before backward() and we have deliberately commented it out. You can see that the gradient for each torch.nn.Parameter has not been cleared and so the NaN values from the previous iteration are still present. The other gradients are empty since this assertion error is detected before the actual backward pass.

Finally, if you press c one more time you will see the assertion errors that appear after the Backward Pass as well.

So with just a few clicks you can trace very precisely how errors propagate throughout your model during training. The assertion errors that you see in the Execution Menu are all of the assertions available by default.

Writing Assertions

An assertion in Efemarai is any class that inherits from one of the Efemarai base assertion classes and implements the check method. While the input arguments to the check method depend on the assertion class you inherit from, the output should always be a single boolean indicating whether the assertion is true or not. The following sections provide an overview of the various assertion types supported by Efemarai.


For full implementation details of the supported assertion types see Python API.

Tensor Assertions

Tensor assertions let you specify various constraints on any tensor that your code works with. Depending on the complexity of the assertion that you want to create you should inherit from one of the following base tensor assertion classes.


This is the simplest possible assertion that works on individual tensors with the following signature of the check method

def check(self, tensor) -> bool

Depending on when the assertion is staged to be checked, the tensor argument can be a data or parameter tensor (if staged after the forward pass) or gradient tensor (if staged before or after the backward pass). You can specify when a TensorAssertion is to be checked by providing a value for the check_on argument at construction time.

For example, let’s have a look at the assertion detecting non-zero gradients before the backward pass.

from efemarai.assertion import ExecutionStage, TensorAssertion

class NoNonZeroGradientsAssertion(TensorAssertion):
    def __init__(self):
        super().__init__(
            check_on=ExecutionStage.BeforeBackward,
            message="Non-zero gradients detected.",
            hint="Did you forget to call 'zero_grads()'?",
        )

    def check(self, gradient):
        return not (gradient != 0).any().item()

If you want an assertion to be executed at multiple stages you can provide a tuple instead of a single execution stage. Additionally, if you omit the check_on argument, its default value is (ExecutionStage.AfterForward, ExecutionStage.AfterBackward), ensuring the tensor assertion is applied to all tensors in your computational graph. The other arguments that we have provided specify the message and hint to be displayed when the assertion is violated.
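For instance, the core logic of a NaN-detecting check (one of the default assertions mentioned earlier) can be sketched standalone in plain NumPy. This is an illustrative function, not the actual Assertions Library implementation; in Efemarai it would be the body of the check method of a TensorAssertion subclass with a suitable check_on tuple:

```python
import numpy as np

def no_nan_check(tensor):
    # check() body of a hypothetical NoNaNAssertion, staged (via
    # check_on) after both the forward and backward pass so it is
    # applied to data tensors and gradients alike.
    return not np.isnan(tensor).any()

print(no_nan_check(np.ones((2, 2))))           # True
print(no_nan_check(np.array([1.0, np.nan])))   # False
```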


Often you might want to express an assertion regarding the parameters (and their gradients) of your model. Parameter assertions are only checked after the entire graph scan is complete so there is no need to provide the check_on argument. The check method that you should implement has the following signature

def check(self, parameter, gradient) -> bool

Parameter assertions are also very useful if you want to keep track of parameter changes over time, since you can store the values of the parameter as the check method is called for the same parameter at every training iteration.
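As a standalone sketch of this idea (illustrative class and argument names, written in plain NumPy so it runs without Efemarai; a real implementation would subclass ParameterAssertion), here is check logic that stores the previous parameter values and flags suspiciously large updates between consecutive iterations:

```python
import numpy as np

class ParameterDriftCheck:
    """check() logic of a hypothetical ParameterAssertion that stores
    parameter values across iterations and flags suspiciously large
    updates between consecutive checks."""

    def __init__(self, max_step=1.0):
        self.previous = None
        self.max_step = max_step

    def check(self, parameter, gradient):
        ok = True
        if self.previous is not None:
            # Largest element-wise change since the previous iteration.
            ok = float(np.abs(parameter - self.previous).max()) <= self.max_step
        self.previous = np.array(parameter, copy=True)
        return ok

drift = ParameterDriftCheck(max_step=1.0)
print(drift.check(np.zeros(3), None))        # True — nothing to compare yet
print(drift.check(np.full(3, 0.5), None))    # True — step of 0.5
print(drift.check(np.full(3, 5.0), None))    # False — step of 4.5 exceeds 1.0
```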


If you want to express an assertion with respect to any tensor node (not just parameters) in your computational graph that depends both on the value of the tensor and its gradient you can inherit from TensorNodeAssertion and implement a check method with the following signature

def check(self, id, tensor, gradient) -> bool

where id is a unique string assigned to that node. The tensor node id is preserved throughout consecutive graph scans as long as the same graph (i.e. a graph with the same nodes and topology) is being scanned. Thus you can track changes in any node of your computational graph throughout the entire course of training. Tensor node assertions are checked after the graph scan is complete, so you do not need to provide the check_on argument at construction time.
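A standalone sketch of such per-node tracking (illustrative names and thresholds, plain NumPy; in Efemarai this logic would go into the check method of a TensorNodeAssertion subclass):

```python
import numpy as np

class NodeHistoryCheck:
    """check() logic of a hypothetical TensorNodeAssertion that records
    the peak magnitude of each tensor node across graph scans and flags
    a node whose magnitude keeps growing scan after scan."""

    def __init__(self, window=3):
        self.history = {}   # node id -> list of peak magnitudes
        self.window = window

    def check(self, id, tensor, gradient):
        peaks = self.history.setdefault(id, [])
        peaks.append(float(np.abs(tensor).max()))
        recent = peaks[-self.window:]
        # Fail once we see `window` strictly increasing peaks in a row.
        if len(recent) == self.window and all(
            a < b for a, b in zip(recent, recent[1:])
        ):
            return False
        return True

tracker = NodeHistoryCheck(window=3)
print(tracker.check("node-0", np.array([1.0]), None))  # True
print(tracker.check("node-0", np.array([2.0]), None))  # True
print(tracker.check("node-0", np.array([4.0]), None))  # False — growing peaks
```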

Function Assertions

Function assertions let you specify various constraints on any function executed as part of the computational graph. You can express various constraints on the function inputs, outputs and corresponding gradients. Importantly, function assertions are not checked on the fly as functions are being called, but are all checked at once at the end of the graph scan.


The simplest function assertion lets you implement a check method with the following signature

def check(self, output, *args, **kwargs) -> bool

where output is the value that the function has returned and the other arguments are the input arguments that have been passed to the function during the construction of the computational graph. There is one extra keyword argument containing gradient information that is automatically passed by Efemarai which we will explain later on as we have a look at an example.

Moreover, you should specify which functions are targeted by the assertion you are creating. You can do that by setting the targets argument at construction time. A function assertion target can be 1) any function (as long as it is part of the computational graph), 2) any torch.nn.Module instance (the assertion is run on the __call__ method) or 3) torch.nn.Module subclass (meaning that the assertion is run for all instances of that class).

To get a better idea, let’s consider an example function assertion ensuring that the implementation of the ReLU activation function is correct. For clarity we will consider only a subset of all the checks needed to verify that the ReLU function is implemented correctly, namely the two following constraints:

1) the output of a ReLU unit must be zero if and only if the input is nonpositive;
2) the gradient with respect to the input must be zero if the input is nonpositive.

import torch
from efemarai.assertion import FunctionAssertion

class NoBuggyReLUs(FunctionAssertion):
    def __init__(self):
        super().__init__(
            targets=(torch.nn.functional.relu, torch.nn.ReLU),
            message="Buggy ReLU units detected.",
            icon="bug",
        )

    def check(self, output, x, inplace=False, grads=None):
        # Constraint 1)
        if torch.logical_xor(output == 0, x <= 0).any():
            return False

        # Constraint 2)
        grad_x = grads[x]
        indices = (x <= 0).nonzero(as_tuple=True)
        if (grad_x[indices] != 0).any():
            return False

        return True

First, let’s have a look at the constructor. The targets initialisation argument tells Efemarai to run this assertion on any call to the function torch.nn.functional.relu or on any instance of the torch.nn.ReLU class. Additionally, we provide a message and set the icon displayed in the Execution Menu when this assertion is violated to be a bug icon (you can choose any icon from the material design icons cheatsheet).

Now let’s have a closer look at the check method. As explained above, the first argument after self is the return value of the function, followed by the input arguments. Both torch.nn.functional.relu and torch.nn.ReLU.forward take a single tensor as input, which we denote with x. If they were to take a different number of input arguments you could use variadic arguments (*args). torch.nn.ReLU.forward does not take any other arguments, but torch.nn.functional.relu takes a keyword argument inplace, so we should add it to the signature of the check method. Importantly, the grads keyword argument is not an input argument to the function. It is a dictionary automatically provided by Efemarai that maps any input or output tensor to its gradient. We use it in the example code above to get the gradient with respect to x. If we did not need any gradient information (and since we do not care about the inplace keyword) we could have simply used the following signature

def check(self, output, x, **kwargs) -> bool

and not worry about any of the keyword arguments.


Usually, when you develop your own module you might want access not just to its inputs and outputs, but to its internals as well. In order to do that you should inherit from the ModuleAssertion class. It works exactly like the FunctionAssertion class, but it can only target torch.nn.Module instances or subclasses and the check method signature is

def check(self, module, output, *args, **kwargs) -> bool

where module is the current module that is being checked. Thus, you can express all sorts of constraints based on the internal state of your module.
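As an illustration, here is check logic for a hypothetical module assertion verifying that a BatchNorm-style module's internal running statistics stay finite. It is written against a minimal duck-typed stub so it runs without Efemarai or PyTorch; a real implementation would subclass ModuleAssertion and target e.g. torch.nn.BatchNorm2d:

```python
import numpy as np

def running_stats_finite_check(module, output, *args, **kwargs):
    # check() body of a hypothetical ModuleAssertion targeting modules
    # that keep running statistics (e.g. batch normalization layers).
    return bool(
        np.isfinite(module.running_mean).all()
        and np.isfinite(module.running_var).all()
    )

class FakeBatchNorm:
    # Minimal stand-in for a module with internal running statistics.
    def __init__(self, mean, var):
        self.running_mean = np.asarray(mean)
        self.running_var = np.asarray(var)

healthy = FakeBatchNorm([0.0, 0.1], [1.0, 0.9])
broken = FakeBatchNorm([0.0, np.nan], [1.0, 0.9])

print(running_stats_finite_check(healthy, output=None))  # True
print(running_stats_finite_check(broken, output=None))   # False
```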


The FunctionNodeAssertion, similarly to the TensorNodeAssertion, allows you to track any function node in your computational graph throughout the entire training period. In order to do that, Efemarai gives you a unique ID associated with each targeted node that is preserved between consecutive graph scans (as long as the same graph is constructed). The signature of the check method is

def check(self, id, output, *args, **kwargs) -> bool

where id is a string that represents the unique function node ID.
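A minimal standalone sketch (hypothetical names, plain NumPy) of what per-function-node tracking could look like; in Efemarai this would be the check method of a FunctionNodeAssertion subclass:

```python
import numpy as np

class OutputNormLog:
    """check() logic of a hypothetical FunctionNodeAssertion that logs
    the output norm of every targeted function node across scans and
    fails if a node ever produces an all-zero output."""

    def __init__(self):
        self.norms = {}   # node id -> list of output norms per scan

    def check(self, id, output, *args, **kwargs):
        norm = float(np.linalg.norm(output))
        self.norms.setdefault(id, []).append(norm)
        return norm > 0.0

log = OutputNormLog()
print(log.check("relu-1", np.array([1.0, 2.0])))   # True
print(log.check("relu-1", np.array([0.0, 0.0])))   # False — all-zero output
```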

Registering Assertions

Once you have implemented an assertion you should let Efemarai know by registering it before running any graph scans. Let’s register the NoBuggyReLUs assertion from above.

import efemarai as ef

To remove an assertion you should deregister it by either providing the assertion instance or its class to deregister_assertion


You can also get a list of all registered assertion instances with


Final Words

Well done for making it through the entire guide!

Hopefully, by now you have a much better idea of how Efemarai works and what it has to offer, and you will no longer have to spend numerous hours searching for and fighting elusive bugs and problems in your ML code!

If you have any questions that we have not answered do not hesitate to Contact Us.