echotorch.nn

Echo State Layers

ESNCell

class nn.ESNCell(input_dim, output_dim, spectral_radius=0.9, bias_scaling=0, input_scaling=1.0, w=None, w_in=None, w_bias=None, w_fdb=None, sparsity=None, input_set=[1.0, -1.0], w_sparsity=None, nonlin_func=torch.tanh, feedbacks=False, feedbacks_dim=None, wfdb_sparsity=None, normalize_feedbacks=False, seed=None, w_distrib='uniform', win_distrib='uniform', wbias_distrib='uniform', win_normal=(0.0, 1.0), w_normal=(0.0, 1.0), wbias_normal=(0.0, 1.0), dtype=torch.float32)

Echo State Network layer

forward(u, y=None, w_out=None, reset_state=True)

Forward pass.
:param u: Input signal
:param y: Target output signal for teacher forcing
:param w_out: Output weights for teacher forcing
:param reset_state: Whether to reset the hidden state before processing the input
:return: Resulting hidden states

static generate_gaussian_matrix(size, sparsity, mean=0.0, std=1.0, dtype=torch.float32)

Generate a Gaussian-distributed weight matrix.
:param size: Shape of the matrix to generate
:param sparsity: Connection sparsity (None for a dense matrix)
:param mean: Mean of the Gaussian distribution
:param std: Standard deviation of the Gaussian distribution
:param dtype: Data type of the generated matrix
:return: The generated matrix

static generate_uniform_matrix(size, sparsity, input_set)

Generate a uniformly-distributed weight matrix.
:param size: Shape of the matrix to generate
:param sparsity: Connection sparsity (None for a dense matrix)
:param input_set: Set of values to draw the weights from
:return: The generated matrix

static generate_w(output_dim, w_distrib='uniform', w_sparsity=None, mean=0.0, std=1.0, seed=None, dtype=torch.float32)

Generate the internal weight matrix W.
:param output_dim: Number of reservoir units (W is output_dim x output_dim)
:param w_distrib: Weight distribution (defaults to 'uniform')
:param w_sparsity: Connection sparsity of W (None for a dense matrix)
:param mean: Mean of the normal distribution
:param std: Standard deviation of the normal distribution
:param seed: Random seed
:param dtype: Data type of the generated matrix
:return: The generated weight matrix

get_spectral_radius()

Get W’s spectral radius.
:return: The spectral radius of the internal weight matrix W

init_hidden()

Initialize the hidden layer.
:return: The initialized hidden state

reset_hidden()

Reset the hidden layer state.

set_hidden(x)

Set the hidden layer state.
:param x: New hidden state

static to_sparse(m)

Convert a dense matrix to a sparse tensor.
:param m: Dense matrix to convert
:return: The sparse representation of m
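
Example — a minimal usage sketch of ESNCell based on the signature above; the (batch, time steps, input_dim) input layout is assumed from EchoTorch's conventions:

    import torch
    import echotorch.nn as etnn

    # Reservoir cell: 1-dimensional input, 100 reservoir units.
    esn_cell = etnn.ESNCell(input_dim=1, output_dim=100, spectral_radius=0.9)

    # One sine-wave time series, assumed layout (batch, time steps, input_dim).
    u = torch.sin(torch.linspace(0, 100, 1000)).view(1, 1000, 1)

    # Run the reservoir and collect the hidden states for every time step.
    hidden_states = esn_cell(u)
    print(hidden_states.size())
    print(esn_cell.get_spectral_radius())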

ESN

class nn.ESN(input_dim, hidden_dim, output_dim, spectral_radius=0.9, bias_scaling=0, input_scaling=1.0, w=None, w_in=None, w_bias=None, w_fdb=None, sparsity=None, input_set=[1.0, -1.0], w_sparsity=None, nonlin_func=torch.tanh, learning_algo='inv', ridge_param=0.0, create_cell=True, feedbacks=False, with_bias=True, wfdb_sparsity=None, normalize_feedbacks=False, softmax_output=False, seed=None, washout=0, w_distrib='uniform', win_distrib='uniform', wbias_distrib='uniform', win_normal=(0.0, 1.0), w_normal=(0.0, 1.0), wbias_normal=(0.0, 1.0), dtype=torch.float32)

Echo State Network module

finalize()

Finalize training: compute the output weights by solving the linear system (LU factorization).

forward(u, y=None, reset_state=True)

Forward pass.
:param u: Input signal
:param y: Target outputs (for training)
:param reset_state: Whether to reset the hidden state before processing the input
:return: Output signal, or hidden states during training

get_spectral_radius()

Get W’s spectral radius.
:return: The spectral radius of the internal weight matrix W

get_w_out()

Get the output weight matrix Wout.
:return: The output weight matrix

hidden

Hidden layer state.
:return: The current hidden state

reset()

Reset the learning state so that training can start over.

reset_hidden()

Reset the hidden layer state.

set_w(w)

Set the internal weight matrix W.
:param w: New internal weight matrix

w

Internal (hidden) weight matrix W.
:return: The internal weight matrix

w_in

Input weight matrix Win.
:return: The input weight matrix
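
Example — a sketch of the usual train / finalize / predict cycle with ESN, assuming the (batch, time steps, input_dim) layout and the ridge-regression readout ('inv') documented above:

    import torch
    import echotorch.nn as etnn

    # ESN with a ridge-regression readout (learning_algo='inv').
    esn = etnn.ESN(input_dim=1, hidden_dim=100, output_dim=1,
                   spectral_radius=0.9, learning_algo='inv', ridge_param=1e-5)

    # Toy task: predict a phase-shifted copy of a sine wave.
    t = torch.linspace(0, 100, 1200)
    u_train = torch.sin(t).view(1, -1, 1)
    y_train = torch.sin(t + 0.5).view(1, -1, 1)

    # Feeding inputs together with targets accumulates the statistics
    # needed for the closed-form readout solution.
    esn(u_train, y_train)
    esn.finalize()            # solve for the output weights

    # After finalize(), a forward call without targets returns predictions.
    y_pred = esn(u_train)
    print(y_pred.size(), esn.get_w_out().size())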

LiESNCell

class nn.LiESNCell(leaky_rate=1.0, train_leaky_rate=False, *args, **kwargs)

Leaky-Integrated Echo State Network layer

forward(u, y=None, w_out=None, reset_state=True)

Forward pass.
:param u: Input signal
:param y: Target output signal for teacher forcing
:param w_out: Output weights for teacher forcing
:param reset_state: Whether to reset the hidden state before processing the input
:return: Resulting hidden states
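
The leaky-integrated cell blends the previous reservoir state with the new activation instead of replacing it. The sketch below is an illustrative version of this update rule, not the library's exact implementation; all names are hypothetical:

    import torch

    def leaky_update(x, u, w, w_in, w_bias, leaky_rate=0.5, nonlin=torch.tanh):
        # Illustrative leaky-integrated update:
        #   x(t) = (1 - a) * x(t-1) + a * f(W_in u(t) + W x(t-1) + b)
        x_new = nonlin(w_in.mv(u) + w.mv(x) + w_bias)
        return (1.0 - leaky_rate) * x + leaky_rate * x_new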

LiESN

class nn.LiESN(input_dim, hidden_dim, output_dim, spectral_radius=0.9, bias_scaling=0, input_scaling=1.0, w=None, w_in=None, w_bias=None, sparsity=None, input_set=[1.0, -1.0], w_sparsity=None, nonlin_func=torch.tanh, learning_algo='inv', ridge_param=0.0, leaky_rate=1.0, train_leaky_rate=False, feedbacks=False, wfdb_sparsity=None, normalize_feedbacks=False, softmax_output=False, seed=None, washout=0, w_distrib='uniform', win_distrib='uniform', wbias_distrib='uniform', win_normal=(0.0, 1.0), w_normal=(0.0, 1.0), wbias_normal=(0.0, 1.0), dtype=torch.float32)

Leaky-Integrated Echo State Network module
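
LiESN is used like ESN above. A minimal sketch, assuming the same (batch, time steps, input_dim) layout, with a leaky rate below 1.0 to slow the reservoir dynamics:

    import torch
    import echotorch.nn as etnn

    # Leaky rate below 1.0 slows the reservoir dynamics, which can help
    # with slowly varying signals; the rest mirrors the ESN example above.
    liesn = etnn.LiESN(input_dim=1, hidden_dim=100, output_dim=1,
                       leaky_rate=0.2, learning_algo='inv', ridge_param=1e-5)

    t = torch.linspace(0, 100, 1200)
    u_train = torch.sin(t).view(1, -1, 1)
    y_train = torch.sin(t + 0.5).view(1, -1, 1)

    liesn(u_train, y_train)   # accumulate training statistics
    liesn.finalize()          # solve for the output weights
    y_pred = liesn(u_train)   # predictions after training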