Latent space documentation #15

Open · wants to merge 3 commits into main from latent_space_documentation

Conversation

punkduckable (Collaborator):

I added documentation (comments, docstrings, type hints, and formatting) to latent_space.py.

The code is functionally almost identical, save for a few minor tweaks:

Removed the MultiHeadAttention activation function option. This isn't really an activation function, even though it is listed as one in the torch API. It is used to implement an attention layer in a transformer block, it introduces a whole new set of parameters (for the key, query, and value weight matrices), and it is really designed to map finite sequences to finite sequences. I do not think it makes sense to include it in the MLP class as an activation option, so I removed it (see the sketch at the end of this description).

Removed apply_attention. Since the MultiHeadAttention activation option is gone, this function is no longer needed.

Otherwise, all changes are cosmetic (e.g., comments + formatting).

I also fixed a small bug in gplasdi.py: np.Inf is deprecated and no longer exists; it has been replaced with np.inf.
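For illustration, here is a minimal sketch of the distinction (the shapes and sizes below are made up, not taken from latent_space.py): an activation is a stateless, element-wise map, while torch.nn.MultiheadAttention is a learnable layer that maps sequences to sequences and expects separate query, key, and value inputs.

    import torch

    # An activation function: stateless, element-wise, shape-preserving.
    act = torch.nn.Tanh()
    x   = torch.randn(5, 10)
    y   = act(x)                            # same shape as x, no learnable parameters

    # MultiheadAttention: a learnable layer with key/query/value weight matrices
    # that operates on (sequence length, batch, embedding dimension) inputs.
    attn = torch.nn.MultiheadAttention(embed_dim = 10, num_heads = 2)
    seq  = torch.randn(7, 5, 10)
    out, weights = attn(seq, seq, seq)      # self-attention: query = key = value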

dreamer2368 (Collaborator) commented Oct 24, 2024:

This PR will be rebased and merged after PR #11.
There are code conflicts in this PR, which happen after other PRs are merged into the main branch. Since this branch is already based on the sphinx branch (PR #11), it technically does not have a conflict; GitHub just does not automatically resolve the past commits. Once PR #11 is merged, rebasing the last 3 commits onto the (merged) main branch will not require any editing.

dreamer2368 (Collaborator):

@punkduckable, you did not resolve any conflicts while rebasing. There are conflict messages scattered throughout the code.

This won't happen if you rebase the branch with only your own commits. You needed to run the following git commands:

    git checkout latent_space_documentation
    # Interactive rebase: pick only the last 3 commits, drop everything else.
    git rebase -i main latent_space_documentation

But since you already rebased without resolving, rebasing may still leave those unresolved conflict markers, which need to be resolved manually. If you need help with this overall, I can help you over WebEx.

combination and then return a list housing the latent space ICs.


-----------------------------------------------------------------------------------------------
dreamer2368 commented Oct 29, 2024:

I think this format is not recognized by sphinx and makes the param_grid description hidden on the API website (see this webpage).

Removing line 51, reducing line 53 to the same indentation as line 52, and writing lines 55-60 as below will resolve the issue:

    Arguments
    ---------
    param_grid : :obj:`numpy.ndarray`
        A 2d numpy.ndarray object of shape (number of parameter combinations) x (number of
        parameters).

    ...

It'd be great if you could also specify the type, but I don't think it's necessary.

I thought I was following numpy style in src/lasdi/timing.py, but it was actually google style. Since your comments are already close to numpy style, we can stick with it. I'll fix src/lasdi/timing.py in the future.

autoencoder: The actual autoencoder object that we use to map the ICs into the latent space.


-----------------------------------------------------------------------------------------------
dreamer2368 commented Oct 29, 2024:

Same as above; change lines 63-66 as below:

    Returns
    -------
    Z0 : :obj:`numpy.ndarray`
        A list of numpy ndarray objects whose i'th element holds the latent space initial condition

    ...
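Stitching the two suggestions above together (the physics entry below is my paraphrase of the surrounding diff, not quoted from it), the full docstring would look roughly like:

    def initial_condition_latent(param_grid  : np.ndarray,
                                 physics     : Physics,
                                 autoencoder : torch.nn.Module) -> list[np.ndarray]:
        """
        Maps a set of initial conditions for the fom to initial conditions for the latent space.

        Arguments
        ---------
        param_grid : :obj:`numpy.ndarray`
            A 2d numpy.ndarray object of shape (number of parameter combinations) x (number of
            parameters).
        physics : :obj:`Physics`
            A Physics object used to generate the fom initial condition for each parameter
            combination.
        autoencoder : :obj:`torch.nn.Module`
            The actual autoencoder object that we use to map the ICs into the latent space.

        Returns
        -------
        Z0 : :obj:`numpy.ndarray`
            A list of numpy ndarray objects whose i'th element holds the latent space initial
            condition for the i'th parameter combination.
        """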

Robert Stephany added 3 commits October 29, 2024 08:58
I also removed the MultiheadAttention activation type (plus the associated apply_attention function) because they did not make sense in this class.

I have now documented the MLP class, but need to add documentation to the autoencoder class.
I added comments + docstrings to the Autoencoder class.
gplasdi had an instance of np.Inf (which is deprecated). I also fixed a typo in latent_spaces.py.
punkduckable force-pushed the latent_space_documentation branch from cc77c3f to 782f57a on October 29, 2024 15:59


# -------------------------------------------------------------------------------------------------
# initial_conditions_latent function
dreamer2368:

In order to be parsed in the API document as a one-line description, this line can be moved to line 43 with an additional empty line, something like below:

    def initial_condition_latent(param_grid     : np.ndarray, 
                                 physics        : Physics, 
                                 autoencoder    : torch.nn.Module) -> list[np.ndarray]:
        """initial_conditions_latent function

        This function maps a set of initial conditions for the fom to initial conditions for the
        ...



# -------------------------------------------------------------------------------------------------
# MLP class
dreamer2368:

Similar to initial_condition_latent. Move line 99 below line 102 as a """ ... """ comment, as below:

    class MultiLayerPerceptron(torch.nn.Module):
        """ MLP class """

        def __init__(   self, 

self.fcs += [torch.nn.Linear(layer_sizes[k], layer_sizes[k + 1])]
self.fcs = torch.nn.ModuleList(self.fcs)
self.init_weight()
-------------------------------------------------------------------------------------------
dreamer2368 commented Oct 29, 2024:

Same as above for the function arguments.

below the threshold.


-------------------------------------------------------------------------------------------
dreamer2368:

Same as above for returns.

"""

# Run checks.
# Make sure the reshape index is either 0 (input to 1st layer) or -1 (output of last
dreamer2368:

I don't think we need this long an explanation for the two lines 172-173. Lines 160-163 simply repeat what the code is doing. I suggest:

If the reshape index and shape are provided for input (index 0) or output (index -1),
we will either flatten the input ndarray or reshape the output 1d array into the provided ndarray shape.
Either way, the specified ndarray shape (reshape_shape) should match the size of the input/output layer.
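As a quick illustration of the behavior being described (the shapes here are hypothetical, and this is only a sketch of the flatten/split logic, not the class's actual code):

    import torch

    reshape_shape = [4, 5]                      # hypothetical shape of one fom frame
    x = torch.randn(3, 4, 5)                    # a batch of 3 frames

    # reshape_index == 0: check the trailing dimensions of x against reshape_shape,
    # then flatten them into the input of the first layer. torch.Tensor.view avoids
    # copying data (it requires a contiguous tensor).
    assert list(x.shape[-len(reshape_shape):]) == reshape_shape
    x_flat = x.view(x.shape[0], -1)             # shape (3, 20)

    # reshape_index == -1: split the last dimension of the network output back into
    # reshape_shape.
    y = torch.randn(3, 20)
    y_nd = y.view(y.shape[0], *reshape_shape)   # shape (3, 4, 5)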



-------------------------------------------------------------------------------------------
Arguments
dreamer2368:

Same as above for arguments.



-------------------------------------------------------------------------------------------
Returns
dreamer2368:

Same as above for returns.



-------------------------------------------------------------------------------------------
Arguments
dreamer2368:

Same as above for arguments.



-------------------------------------------------------------------------------------------
Returns
dreamer2368:

Same as above for returns.



# -------------------------------------------------------------------------------------------------
# Autoencoder class
dreamer2368:

Same as for the MLP class: move line 305 below line 308 as a """ ... """ comment.

self.use_multihead = False
super(MultiLayerPerceptron, self).__init__()

# Note that layer_sizes specifies the dimensionality of the domains and co-domains of each
dreamer2368 commented Oct 29, 2024:

This comment describes a member variable, but it is not parsed by the API document. It can be parsed if the comment is written after the code as a """ ... """ comment, as below:

self.n_layers : int = len(layer_sizes) - 1;
"""
Note that layer_sizes specifies the dimensionality of the domains and co-domains of each
layer. Specifically, the i'th element specifies the input dimension of the i'th layer,

...
"""

# while the final element specifies the dimensionality of the co-domain of the final layer.
# Thus, the number of layers is one less than the length of layer_sizes.
self.n_layers : int = len(layer_sizes) - 1;
self.layer_sizes : list[int] = layer_sizes
dreamer2368:

It'd be great if you could add a description for each class member variable, but it's not a must.

if (self.reshape_index == 0):
# make sure the input has a proper shape
# Make sure the input has a proper shape. There is a lot going on in this line; let's
dreamer2368:

I think most of these comments somewhat repeat what is written earlier in the arguments/returns section. I think we can reduce them to one sentence, as below:

For a k-dimensional self.reshape_shape, make sure the last k dimensions of x have the same shape.

# we use torch.Tensor.view instead of torch.Tensor.reshape,
# in order to avoid data copying.

# Now that we know the final k dimensions of x have the correct shape, let's squeeze
dreamer2368 commented Oct 29, 2024:

Same as above, how about this sentence?

For a k-dimensional self.reshape_shape, reshape and flatten the last k dimensions of x into a 1d array that fits the first layer.
Note that we use torch.Tensor.view instead of torch.Tensor.reshape in order to avoid data copying.

if (self.reshape_index == -1):
# we use torch.Tensor.view instead of torch.Tensor.reshape,
# in order to avoid data copying.
# In this case, we need to split the last dimension of x, the output of the final
dreamer2368:

Maybe use keywords only?

split the last dimension of x, the output of the final layer, to match the reshape_shape.
Note that we use torch.Tensor.view instead of torch.Tensor.reshape in order to avoid data copying.

class Autoencoder(torch.nn.Module):
def __init__(self, physics : Physics, config : dict) -> None:
"""
dreamer2368 commented Oct 29, 2024:

Currently this is somehow not parsed by the API. I found that we need to turn on an option. We can just leave it as is and wait until I push a new PR for it, or it'd be great if you could add a one-liner at line 28 of docs/source/conf.py:

    autoapi_python_class_content = 'both'
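For context, a sketch of how that one-liner might sit among the autoapi settings in docs/source/conf.py (the extensions and autoapi_dirs lines here are assumptions, not copied from the repo):

    # docs/source/conf.py (sketch)
    extensions   = ['autoapi.extension']    # assumed; the project already uses sphinx-autoapi
    autoapi_dirs = ['../../src']            # assumed path
    autoapi_python_class_content = 'both'   # render both the class docstring and __init__ docstring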



-------------------------------------------------------------------------------------------
Arguments
dreamer2368:

Same as above for arguments


physics: A "Physics" object that holds the fom solution frames. We use this object to
determine the shape of each fom solution frame. Recall that each Physics object has a
corresponding PDE. We
dreamer2368:

Is there a sentence missing here at the end?



-------------------------------------------------------------------------------------------
Returns
dreamer2368:

Same as above for Returns.

# each point).
self.qgrid_size : list[int] = physics.qgrid_size;

# The product of the elements of qgrid_size is the number of dimensions in each fom
dreamer2368:

Move these comments after line 364 as a """ ... """ comment so that they can be parsed into the API.

self.decoder = MultiLayerPerceptron(layer_sizes[::-1], act_type,
reshape_index=-1, reshape_shape=self.qgrid_size,
threshold=threshold, value=value, num_heads=num_heads)
# A Physics object's qgrid_size is a list of integers specifying the shape of each frame of
dreamer2368:

Move these comments after line 364 as a """ ... """ comment so that they can be parsed into the API.

I personally think these belong in the Physics class comments, but we can leave them here. It'd be great if we could refer readers to the Physics class.
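For concreteness, a small sketch of the relationship the qgrid_size comments describe (the grid shape is hypothetical):

    import numpy as np

    qgrid_size = [16, 16]                   # hypothetical shape of one fom solution frame
    frame_dim  = int(np.prod(qgrid_size))   # 256: the number of dimensions in each fom frame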



-------------------------------------------------------------------------------------------
Arguments
dreamer2368:

Same as above for arguments



-------------------------------------------------------------------------------------------
Returns
dreamer2368:

Same as above for returns



-------------------------------------------------------------------------------------------
Arguments
dreamer2368:

Same as above



-------------------------------------------------------------------------------------------
Returns
dreamer2368:

Same as above



-------------------------------------------------------------------------------------------
Arguments
dreamer2368:

Same as above



-------------------------------------------------------------------------------------------
Returns
dreamer2368:

Same as above

dreamer2368 (Collaborator) left a comment:

@punkduckable, I reviewed through the comments. Most of them look good; they just need some format changes to be parsed into the API.

I thought I had commented src/lasdi/timing.py in numpy style, but it was actually google style. I'll change this later.
