LVModels.jl Documentation

Neural Differential Equation Model

LVModels.NDE (Type)
NDE{P,R,A,K} <: AbstractNDEModel

Model for setting up and training Neural Differential Equations.

Fields:

  • p: Parameter struct instance
  • prob: DEProblem
  • alg: Algorithm to use for the solve command
  • kwargs: Any additional keyword arguments that are passed on to the solve command (e.g. sensealg)

Constructors

  • NDE(prob; alg=Tsit5(), kwargs...)
  • NDE(model::NDE; alg=model.alg, kwargs...) remakes the model with a different solver and/or keyword arguments

Input / call

An instance of the model is called with a trajectory pair (t, x), where t holds the timesteps over which the NDE is integrated and x is a trajectory of size N x ... x N_t, whose first slice along the last dimension, x[:, ..., 1], is taken as the initial condition.

source
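The input convention above can be illustrated with a small standard-library sketch (the names here are illustrative; the actual NDE type lives in LVModels): for a 2-dimensional system sampled at 5 timesteps, x has size 2 × 5 and the first slice along the last dimension is the initial condition.

```julia
# Sketch of the (t, x) call convention: t holds the integration
# timesteps, x the trajectory with time as the last dimension.
t = range(0.0, 1.0; length=5)
x = rand(2, 5)

# Generic extraction of the first slice along the last dimension,
# mirroring x[:, ..., 1] for any number of state dimensions:
x0 = selectdim(x, ndims(x), 1)

size(x0)   # (2,) — one value per state dimension
```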

Training Parameters

LVModels.LearnableParams (Type)
mutable struct LearnableParams{T}

Learnable parameters while training the model.

Fields:

  • θ: Neural network weights
  • θ1: Decay rates vector

Constructor

  • LearnableParams(θ, θ1)
source
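A minimal sketch of such a parameter container, assuming θ holds the flattened network weights and θ1 the per-species decay rates (the struct name here is illustrative, not the package implementation):

```julia
# Mutable so both fields can be updated in place during training.
mutable struct MyLearnableParams{T}
    θ::Vector{T}    # neural network weights
    θ1::Vector{T}   # decay rates
end

p = MyLearnableParams(randn(Float32, 10), Float32[0.5, 0.3])
length(p.θ), length(p.θ1)
```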

Data Manipulation

LVModels.sparsify_data (Function)
sparsify_data(sol; fraction=1.0)

Randomly sparsifies only the last dimension of the solution data from an n-dimensional system by masking a fraction of its values.

Arguments

  • sol: A solution object containing the time array (sol.t) and state values.
  • fraction: The probability (between 0 and 1) of keeping a data point unmasked in the last dimension. Default is 1.0 (no sparsification).

Returns

A tuple (t, X_sparse, mask), where:

  • t: The time array from the solution.
  • X_sparse: The sparsified state data array, where only the last dimension is masked.
  • mask: A binary mask array applied to the last dimension, indicating retained (1) and masked (0) values.
source
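The masking strategy described above can be sketched with the standard library alone (function and variable names are illustrative, not the package implementation): each point along the last (time) dimension is kept with probability fraction, and the same binary mask is returned alongside the data.

```julia
# Keep each entry of the last dimension with probability `fraction`.
function sparsify_last_dim(X::AbstractArray; fraction=1.0)
    n_t = size(X, ndims(X))
    mask = rand(n_t) .<= fraction                        # 1 = keep, 0 = masked
    shape = ntuple(i -> i == ndims(X) ? n_t : 1, ndims(X))
    return X .* reshape(mask, shape), mask               # broadcast over time
end

X = rand(2, 100)
X_sparse, mask = sparsify_last_dim(X; fraction=0.7)
```

With fraction=1.0 the mask is all ones and the data is returned unchanged, matching the documented default of no sparsification.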

LVModels.get_mask_for_batch (Function)
get_mask_for_batch(batch_t, global_mask, t0, dt)

Extracts a slice of the global mask corresponding to a given batch time vector.

Arguments

  • batch_t: A vector of time points for the current batch.
  • global_mask: A binary mask array corresponding to the full time series.
  • t0: The starting time of the global time series.
  • dt: The time step interval between consecutive points in the global time series.

Returns

A mask slice corresponding to the time points in batch_t, extracted from global_mask.

source
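The index arithmetic behind this extraction can be sketched as follows, assuming a uniform global grid t0, t0 + dt, t0 + 2dt, ... (names here are illustrative):

```julia
# Map each batch time point onto the global grid and pull the matching
# mask entries (1-based indices).
function mask_for_batch(batch_t, global_mask, t0, dt)
    idx = round.(Int, (batch_t .- t0) ./ dt) .+ 1
    return global_mask[idx]
end

global_mask = [1, 0, 1, 1, 0, 1]
mask_for_batch([0.2, 0.3, 0.4], global_mask, 0.0, 0.1)   # → [1, 1, 0]
```

Rounding rather than truncating keeps the lookup robust to small floating-point error in the batch time points.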

Loss Function

LVModels.loss (Function)
loss(m, batch, truth, batch_mask; λ1=0f0, λ2=0f0)

Computes the loss for an n-dimensional system based on predicted values, ground truth, and a mask for the batch. The loss consists of mean squared errors (MSE) for each dimension, along with regularization terms for model parameters.

Arguments

  • m: A model producing predictions for each dimension of the system.
  • batch: The input data batch to the model.
  • truth: The ground truth values for the n-dimensional system.
  • batch_mask: A mask that is applied to the batch to handle missing or masked data points.
  • λ1: The weight for the L1 regularization term (default is 0).
  • λ2: The weight for the L2 regularization term (default is 0).

Returns

The total loss, which is the sum of:

  • Mean squared error (MSE) for each dimension (x, y, ..., n),
  • L1 regularization of model parameters (if λ1 > 0),
  • L2 regularization of model parameters (if λ2 > 0).
source
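The structure of such a loss, masked MSE plus optional L1/L2 penalties, can be sketched in plain Julia. This is a simplification of the documented signature: it takes the model's predictions directly instead of (m, batch), and θ stands in for the model parameters.

```julia
# Masked mean squared error with optional L1/L2 regularization.
function masked_loss(pred, truth, batch_mask, θ; λ1=0f0, λ2=0f0)
    # Only unmasked entries contribute to the error.
    mse = sum(abs2, (pred .- truth) .* batch_mask) / max(sum(batch_mask), 1)
    return mse + λ1 * sum(abs, θ) + λ2 * sum(abs2, θ)
end

pred  = [1.0 2.0; 3.0 4.0]
truth = [1.0 2.5; 3.0 4.0]
mask  = [1 1; 1 0]
masked_loss(pred, truth, mask, [0.1, -0.2])   # plain masked MSE, λ1 = λ2 = 0
```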

Model Performance Evaluation

LVModels.plot_model_performance (Function)
plot_model_performance(sol, t, X_sparse, train_70, model, U_re, U_truth, mask, dt)

Generates and returns four plots to evaluate model performance of an n-dimensional system:

  1. Trajectories Plot: Compares predicted trajectories with ground truth for all dimensions (first 70 points).
  2. Interaction Terms Plot: Compares neural network outputs with expected interaction terms (first 70 points).
  3. L2 Error Plot: Computes and visualizes the total L2 error across all dimensions at each time step (first 70 points).
  4. Reconstructed Solution Plot: Compares the full ground truth trajectory with NODE predictions over all time points.

Arguments

  • sol: Original solution object.
  • t: The time vector associated with the original solution object.
  • X_sparse: The sparsified ground truth.
  • train_70: Training data batches of size 70.
  • model: The trained model used to predict system trajectories.
  • U_re: A function that reconstructs interaction terms from the ANN parameters.
  • U_truth: True interaction terms.
  • mask: The global mask used to indicate available data points.
  • dt: Time step interval between consecutive points.

Returns

A tuple containing four plots:

  1. plt_traj: The trajectories plot (first 70 points).
  2. plt_interaction: The interaction terms plot (first 70 points).
  3. plt_l2_error: The L2 error plot (first 70 points).
  4. plt_re: The reconstructed solution plot (full time range).
source
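The quantity visualized in the L2 error plot (plot 3) is the Euclidean distance between prediction and truth across all state dimensions at each time step. A stdlib sketch of that computation, with illustrative names:

```julia
# Per-timestep L2 error: columns are time steps, rows are state dimensions.
function l2_error_per_step(pred::AbstractMatrix, truth::AbstractMatrix)
    return vec(sqrt.(sum(abs2, pred .- truth; dims=1)))
end

pred  = [0.0 3.0; 0.0 4.0]
truth = zeros(2, 2)
l2_error_per_step(pred, truth)   # → [0.0, 5.0]
```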

Symbolic Regression

LVModels.perform_symbolic_regression (Function)
perform_symbolic_regression(X_sparse, dx, niterations=100; binary_operators=[+, *, -], unary_operators=[])

Perform symbolic regression to extract interpretable equations from a neural ODE model.

Arguments

  • X_sparse: Input features for regression.
  • dx: The learned interaction terms from the neural ODE model.
  • niterations: Number of iterations for the symbolic regression search (default=100).
  • binary_operators: List of binary operators to use in equation search (default [+, *, -]).
  • unary_operators: List of unary operators to use in equation search (default []).

Returns

  • hall_of_fame: Best equations found for each dimension.
  • pareto_frontiers: Pareto-optimal equations for each dimension.
source
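A hedged sketch of how such a search might be driven with SymbolicRegression.jl, which the default operators suggest this function wraps. Exact keyword names can differ between package versions, so treat this as illustrative rather than the package's implementation:

```julia
using SymbolicRegression

X  = rand(2, 100)   # input features: state values (dims × samples)
dx = rand(100)      # target: learned interaction term for one dimension

# Restrict the search to the documented default operator set.
options = Options(binary_operators=[+, *, -], unary_operators=[])

hall_of_fame = equation_search(X, dx; niterations=100, options=options)
pareto = calculate_pareto_frontier(hall_of_fame)
```

In practice one search is run per dimension of the system, yielding the per-dimension hall of fame and Pareto frontiers described above.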