Conversation

@hcl1992 hcl1992 commented Dec 23, 2015

Replace blobs_lr with lr_mult in readme.md.

jeffdonahue and others added 30 commits August 25, 2015 17:58
Fix the MVNLayer tests so they actually test what they claim.

MVNLayer fixes: sum_multiplier_ sized correctly; backward gradient calculation.

Gradient calculation per analysis of seanbell, found here:
#1938

Fixes according to review comments.
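
For reference, the corrected gradient has the standard form for mean-variance normalization. The sketch below uses my own notation rather than the patch's: it assumes the variance-normalizing case, with y_i = (x_i - mu)/sigma computed over N elements and upstream gradient g_i = dL/dy_i.

```latex
% Backward pass for mean-variance normalization, y_i = (x_i - \mu)/\sigma,
% with \mu and \sigma computed over N elements and g_i = \partial L/\partial y_i:
\[
\frac{\partial L}{\partial x_i}
  = \frac{1}{\sigma} \left(
      g_i
      - \frac{1}{N} \sum_j g_j
      - y_i \cdot \frac{1}{N} \sum_j g_j \, y_j
    \right)
\]
% Without variance normalization (\sigma \equiv 1), only mean subtraction
% remains: \partial L/\partial x_i = g_i - \frac{1}{N} \sum_j g_j.
```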
Draw Deconvolution layers like Convolution layers
…==1 in SPPLayer

Also, do nothing in SPPLayer Reshape if the layer was already reshaped once and the bottom size is unchanged.
Fix SPPLayer top blob num and address `pyramid_height_ == 1`
Give the python layer parameter/weight blobs.
Fix EmbedLayer compiler warning for unused variable.
Previously, the prefetch GPU -> top GPU and prefetch CPU -> prefetch GPU copies were launched concurrently in separate streams, allowing the next batch to be copied in before the current one was read.

This patch explicitly synchronizes the prefetch -> top copy with respect to the host, preventing the CPU -> GPU copy from being launched until its completion.
Fix a recently introduced race condition in DataLayer
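
A minimal sketch of the synchronization pattern behind this fix; the buffer and stream names here are illustrative, not Caffe's actual members:

```cpp
#include <cuda_runtime.h>
#include <cstdio>
#include <cstdlib>

#define CUDA_CHECK(call)                                            \
  do {                                                              \
    cudaError_t err = (call);                                       \
    if (err != cudaSuccess) {                                       \
      fprintf(stderr, "CUDA error: %s\n", cudaGetErrorString(err)); \
      exit(EXIT_FAILURE);                                           \
    }                                                               \
  } while (0)

int main() {
  const size_t bytes = (1 << 20) * sizeof(float);
  float *prefetch_gpu, *top_gpu, *prefetch_cpu;
  CUDA_CHECK(cudaMalloc(reinterpret_cast<void**>(&prefetch_gpu), bytes));
  CUDA_CHECK(cudaMalloc(reinterpret_cast<void**>(&top_gpu), bytes));
  CUDA_CHECK(cudaMallocHost(reinterpret_cast<void**>(&prefetch_cpu), bytes));

  // Two separate streams, as in the buggy code path.
  cudaStream_t copy_stream, transfer_stream;
  CUDA_CHECK(cudaStreamCreate(&copy_stream));
  CUDA_CHECK(cudaStreamCreate(&transfer_stream));

  // prefetch GPU -> top GPU copy in one stream...
  CUDA_CHECK(cudaMemcpyAsync(top_gpu, prefetch_gpu, bytes,
                             cudaMemcpyDeviceToDevice, copy_stream));
  // The fix: synchronize this copy with respect to the host, so the
  // CPU -> GPU transfer below cannot start overwriting prefetch_gpu
  // while the device-to-device copy is still reading it.
  CUDA_CHECK(cudaStreamSynchronize(copy_stream));

  // ...prefetch CPU -> prefetch GPU copy in another stream, now safe.
  CUDA_CHECK(cudaMemcpyAsync(prefetch_gpu, prefetch_cpu, bytes,
                             cudaMemcpyHostToDevice, transfer_stream));
  CUDA_CHECK(cudaStreamSynchronize(transfer_stream));

  CUDA_CHECK(cudaStreamDestroy(copy_stream));
  CUDA_CHECK(cudaStreamDestroy(transfer_stream));
  CUDA_CHECK(cudaFree(prefetch_gpu));
  CUDA_CHECK(cudaFree(top_gpu));
  CUDA_CHECK(cudaFreeHost(prefetch_cpu));
  return 0;
}
```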
Compute backward for negative lr_mult
Replaces CAffe_POSTFIX -> Caffe_POSTFIX.
This fixes a memory leak by using delete[] rather than plain delete.
Cleanup: Fixup capitalisation of Caffe_POSTFIX.
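
The general rule behind this leak fix, as a minimal standalone example (the element type is illustrative):

```cpp
#include <string>

int main() {
  std::string* arr = new std::string[8];  // array new allocates 8 strings
  // delete arr;  // WRONG: scalar delete on an array-new allocation is
  //              // undefined behavior and typically skips the element
  //              // destructors, leaking their resources
  delete[] arr;   // correct: array delete matches array new
  return 0;
}
```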
Commit 4227828 changed the default snapshot format from HDF5 to BINARYPROTO to fix #2885. This broke the cifar10 examples, which relied on the old default.

This commit specifies the snapshot_format explicitly, since the rest of the example relies on it being HDF5.
Fix some doxygen warnings about an undocumented argument in Blob and
incorrect documentation for SoftmaxWithLossLayer::Forward_cpu().
cifar10: Fix examples by setting snapshot_format.
Fix memory leak in convert_mnist_siamese_data.
Add extra OpenBLAS include search path
cdoersch and others added 28 commits November 22, 2015 14:47
Better normalization options for SoftmaxWithLoss layer
The file `examples/lenet/lenet_stepearly_solver.prototxt` was introduced in #190 by mistake, since stepearly was never actually merged.
Remove bogus stepearly in MNIST example
Skip python layer tests if WITH_PYTHON_LAYER unset
replace snprintf with a C++98 equivalent
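
One common C++98-compatible replacement for snprintf-style formatting is std::ostringstream; this sketch is illustrative and may not match the commit's exact substitution:

```cpp
#include <sstream>
#include <string>
#include <iostream>

// Build a formatted string without snprintf: type-safe, and no
// fixed-size buffer to size correctly or overflow.
std::string make_name(const char* prefix, int index) {
  std::ostringstream oss;
  oss << prefix << "_" << index;
  return oss.str();
}

int main() {
  std::cout << make_name("batch", 42) << std::endl;  // prints "batch_42"
  return 0;
}
```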
Deprecated OpenCV consts leading to compilation error
Safely create temporary files and directories
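
On POSIX systems, the usual way to make this safe is mkstemp/mkdtemp, which create the file or directory atomically instead of guessing a free name first; this sketch assumes that approach, with illustrative paths:

```cpp
#include <cstdio>
#include <cstdlib>
#include <unistd.h>

int main() {
  // mkstemp rewrites the XXXXXX suffix in place and atomically creates
  // and opens the file, avoiding the tmpnam()-style race between
  // picking a name and creating it.
  char file_template[] = "/tmp/caffe_test.XXXXXX";
  int fd = mkstemp(file_template);
  if (fd == -1) { perror("mkstemp"); return 1; }
  close(fd);

  // mkdtemp does the same for directories.
  char dir_template[] = "/tmp/caffe_test_dir.XXXXXX";
  if (mkdtemp(dir_template) == NULL) { perror("mkdtemp"); return 1; }
  return 0;
}
```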
No more monolithic includes: split layers into their own headers for modular inclusion and build.
Remove dead preprocessor code for number of CUDA threads
[build] Display and store cuDNN version numbers for CMake
don't divide by 0 duration when downloading model binary
Remove hamming_distance and popcount
Fix compatibility issues with extract_features
A Python script for at-a-glance net summary
[docs] fix typo in interfaces.md
Add a macro to check the current cuDNN version
Add ifdef in CuDNNConvolutionLayer for cuDNN v4
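
Version gating of this kind looks roughly like the following; cuDNN encodes its version as major*1000 + minor*100 + patch in CUDNN_VERSION, and the macro below is a sketch in the spirit of the commit, not necessarily its exact code:

```cpp
#include <cudnn.h>

// A minimum-version check reduces to one comparison against
// cuDNN's CUDNN_VERSION define.
#define CUDNN_VERSION_MIN(major, minor, patch) \
    (CUDNN_VERSION >= ((major) * 1000 + (minor) * 100 + (patch)))

#if CUDNN_VERSION_MIN(4, 0, 0)
// Code paths that need cuDNN v4 or newer (e.g. in CuDNNConvolutionLayer)
// are compiled only when the installed headers are new enough.
#endif
```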
models/finetune_flickr_style/deploy.prototxt uses lr_mult now.
Replace blobs_lr with lr_mult in readme.md.
@seanbell

I assume this PR was a mistake?

@seanbell seanbell closed this Dec 23, 2015
