AnomalyDAEBase#

class pygod.nn.AnomalyDAEBase(in_dim, num_nodes, emb_dim=64, hid_dim=64, dropout=0.0, act=<function relu>, **kwargs)[source]#

Bases: Module

Dual Autoencoder for Anomaly Detection on Attributed Networks

AnomalyDAE is an anomaly detector that consists of a structure autoencoder and an attribute autoencoder, which learn node embeddings and attribute embeddings jointly in a shared latent space. The structure autoencoder uses graph attention layers. The reconstruction mean squared errors of the two decoders are defined as the structure anomaly score and the attribute anomaly score, respectively, with two additional penalties on the reconstructed adjacency matrix and node attributes (forcing their entries to be nonzero).

See [FZL20] for details.

Parameters:
  • in_dim (int) – Input dimension of model.

  • num_nodes (int) – Number of input nodes or batch size in minibatch training.

  • emb_dim (int) – Embedding dimension of model. Default: 64.

  • hid_dim (int) – Hidden dimension of model. Default: 64.

  • dropout (float, optional) – Dropout rate. Default: 0.0.

  • act (callable activation function or None, optional) – Activation function if not None. Default: torch.nn.functional.relu.

  • **kwargs (optional) – Other parameters of torch_geometric.nn.GATConv.
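
A minimal usage sketch (not part of the original documentation): constructing the module on a toy graph and running a full-batch forward pass. The graph, shapes, and hyperparameter values below are illustrative assumptions.

import torch
from pygod.nn import AnomalyDAEBase

# Toy graph: 5 nodes with 4 attributes each (illustrative values).
num_nodes, in_dim = 5, 4
x = torch.rand(num_nodes, in_dim)
edge_index = torch.tensor([[0, 1, 2, 3],
                           [1, 2, 3, 4]])

model = AnomalyDAEBase(in_dim=in_dim, num_nodes=num_nodes,
                       emb_dim=64, hid_dim=64, dropout=0.0)

# Full-batch forward pass: batch_size equals the number of nodes.
x_, s_ = model(x, edge_index, batch_size=num_nodes)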

forward(x, edge_index, batch_size)[source]#

Forward computation.

Parameters:

  • x (torch.Tensor) – Input attribute embeddings.

  • edge_index (torch.Tensor) – Edge index.

  • batch_size (int) – Batch size.

Returns:

  • x_ (torch.Tensor) – Reconstructed attribute embeddings.

  • s_ (torch.Tensor) – Reconstructed adjacency matrix.
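
As described above, the reconstruction errors of the two decoders can be combined into per-node anomaly scores. A hedged sketch, where s is the dense adjacency matrix of the input graph and alpha is an illustrative weighting factor, not a parameter of this module:

import torch

def anomaly_scores(x, x_, s, s_, alpha=0.5):
    # Per-node attribute reconstruction error.
    attr_err = torch.sqrt(torch.sum((x - x_) ** 2, dim=1))
    # Per-node structure reconstruction error.
    stru_err = torch.sqrt(torch.sum((s - s_) ** 2, dim=1))
    # Weighted combination; alpha is an assumed example weight.
    return alpha * attr_err + (1 - alpha) * stru_err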

static process_graph(data)[source]#

Obtain the dense adjacency matrix of the graph.

Parameters:

data (torch_geometric.data.Data) – Input graph.
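
A sketch of equivalent preprocessing using torch_geometric.utils.to_dense_adj; storing the dense matrix on the Data object under the attribute s is an assumption made for illustration.

import torch
from torch_geometric.data import Data
from torch_geometric.utils import to_dense_adj

edge_index = torch.tensor([[0, 1, 2, 3],
                           [1, 2, 3, 4]])
data = Data(x=torch.rand(5, 4), edge_index=edge_index)

# Convert the sparse edge_index into a dense [num_nodes, num_nodes] matrix.
data.s = to_dense_adj(data.edge_index)[0]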