Conversation
@@ -0,0 +1,7119 @@
Line #1. `def compare_motif_list(df_motifs_a, df_motifs_b, motif_scoring_metric=motif_scoring_KL_divergence, plot_motif_probs=False):`

- Are `df_motifs_a` and `df_motifs_b` going to be pandas Series?
- We should add a `Callable` type hint to `motif_scoring_metric`.
- We should add a `bool` type hint to `plot_motif_probs`.
- We should add the `torch.Tensor` type hint to the function output.
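Assuming the reviewer's guesses hold (pandas Series inputs, a tensor output), the annotated signature might look like the sketch below. The default metric is only a placeholder and the bodies are elided; `TYPE_CHECKING` keeps pandas and torch out of the runtime import path for this sketch.

```python
from __future__ import annotations

from typing import TYPE_CHECKING, Callable

if TYPE_CHECKING:  # hints resolve lazily; pandas/torch not imported at runtime here
    import pandas as pd
    import torch


def motif_scoring_KL_divergence(p, q):
    """Placeholder for the notebook's default scoring metric."""
    ...


def compare_motif_list(
    df_motifs_a: pd.Series,
    df_motifs_b: pd.Series,
    motif_scoring_metric: Callable = motif_scoring_KL_divergence,
    plot_motif_probs: bool = False,
) -> torch.Tensor:
    """Compare two motif distributions (body elided in this sketch)."""
    ...
```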
Line #1. `def metric_comparison_between_components(original_data, generated_data, x_label_plot, y_label_plot):`

- We should add the `Dict` type hint to `original_data` and `generated_data`.
- If I understand correctly, `x_label_plot` and `y_label_plot` are strings (type: `str`)? If yes, we should add that.
- We should add the `None` type hint to the function output.
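If the label parameters are indeed strings, the annotated signature would look like this sketch (body elided):

```python
from __future__ import annotations

from typing import Dict


def metric_comparison_between_components(
    original_data: Dict,
    generated_data: Dict,
    x_label_plot: str,
    y_label_plot: str,
) -> None:
    """Plotting helper; returns nothing, so the output hint is None."""
    ...
```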
Line #1. `def one_hot_encode(seq, nucleotides, max_seq_len):`

- Are the `seq` and `nucleotides` parameters lists or strings? The corresponding type hint should be added.
- We should add the `int` type hint to `max_seq_len`.
- We should add the `np.ndarray` type hint to the function output (viz. `-> np.ndarray`).
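One possible resolution of the reviewer's question, assuming both `seq` and `nucleotides` are strings (they could just as well be `List[str]`; check the call sites):

```python
from __future__ import annotations

from typing import TYPE_CHECKING

if TYPE_CHECKING:  # numpy only needed for the type hint, not at runtime here
    import numpy as np


def one_hot_encode(seq: str, nucleotides: str, max_seq_len: int) -> np.ndarray:
    """One-hot encode a nucleotide sequence (body elided in this sketch)."""
    ...
```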
Line #1. `def log(t, eps = 1e-20):`

We should add the `torch.Tensor` type hint to the parameter `t` and the function output.
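Concretely, the suggested annotation would read as follows (body elided; `eps` also picks up a `float` hint for free):

```python
from __future__ import annotations

from typing import TYPE_CHECKING

if TYPE_CHECKING:  # torch only needed for the type hints in this sketch
    import torch


def log(t: torch.Tensor, eps: float = 1e-20) -> torch.Tensor:
    """Numerically safe logarithm (body elided in this sketch)."""
    ...
```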
The same goes for all class methods except `update_average`. As they don't return any values, we should add `None` to the method output.
Line #2. `def __init__(self, beta):`

- We should ideally add class docstrings. The markdown comments above would be ideal.
- We should add a `float` type hint to the `beta` parameter.
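Both suggestions together might look like this sketch. The class name `EMA` and the docstring wording are assumptions here (the reviewer says the notebook's own markdown comments should supply the docstring text):

```python
from __future__ import annotations


class EMA:
    """Exponential moving average of model weights.

    Docstring text is illustrative; the reviewer suggests reusing the
    notebook's markdown comments here.
    """

    def __init__(self, beta: float) -> None:
        # beta is the smoothing factor for the moving average
        self.beta = beta
```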
Line #7. `def update_model_average(self, ma_model, current_model):`

We should add `nn.Module` type hints to `ma_model` and `current_model`, and `None` to the method output.
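As a sketch (the enclosing class name `EMA` is assumed, and the method body is elided):

```python
from __future__ import annotations

from typing import TYPE_CHECKING

if TYPE_CHECKING:  # torch only needed for the type hints in this sketch
    from torch import nn


class EMA:
    """Sketch of the EMA helper; only the signature under review is shown."""

    def update_model_average(
        self, ma_model: nn.Module, current_model: nn.Module
    ) -> None:
        # walk paired parameters and update the moving-average copy in place
        ...
```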
Can we also do garbage collection and empty the CUDA cache after each step through the dataloader, viz.

```python
for epoch in range(...):
    for idx, sample in enumerate(dataloader):
        ...
        # ⭐️⭐️ Garbage Collection
        torch.cuda.empty_cache()
        _ = gc.collect()
```
Concrete improvements from the previous notebook:
The main goal of this notebook is: