
Add clip layer#3791

Closed
hberntsen wants to merge 1 commit into BVLC:master from hberntsen:clip

Conversation

@hberntsen

This PR adds a clipping layer which clips the bottom data to a certain range. I used this to ensure the output of the network is in a certain range after it was trained.
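The behavior described here can be sketched in a few lines of NumPy (this is an illustrative sketch, not the PR's C++ implementation; the names `clip_forward`, `clip_backward`, `min_val`, and `max_val` are my own, not the layer's actual parameters):

```python
import numpy as np

def clip_forward(bottom, min_val, max_val):
    # Clamp every element of the bottom blob into [min_val, max_val].
    return np.clip(bottom, min_val, max_val)

def clip_backward(bottom, top_diff, min_val, max_val):
    # The gradient passes through only where the input was not clipped;
    # outside the range the layer is flat, so the gradient is zero.
    mask = (bottom >= min_val) & (bottom <= max_val)
    return top_diff * mask

x = np.array([-2.0, -0.5, 0.5, 2.0])
y = clip_forward(x, -1.0, 1.0)                    # -> [-1., -0.5, 0.5, 1.]
g = clip_backward(x, np.ones_like(x), -1.0, 1.0)  # -> [0., 1., 1., 0.]
```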

@seanbell

Neat! This can be used to implement relu6: https://www.tensorflow.org/versions/r0.7/api_docs/python/nn.html#relu6
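As a quick illustration of that point, relu6 is just a clip to the range [0, 6] (the `relu6` helper below is my own, mirroring the TensorFlow op, not a Caffe API):

```python
import numpy as np

def relu6(x):
    # relu6(x) = min(max(x, 0), 6), i.e. a clip to [0, 6]
    return np.clip(x, 0.0, 6.0)

out = relu6(np.array([-1.0, 3.0, 8.0]))  # -> [0., 3., 6.]
```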

@coallaoh

Would be nice to have this. Why is it not merged yet?

@stoneyang
Contributor

Still not merged?

@Noiredd
Member

Noiredd commented Mar 27, 2018

I pulled this locally and resolved the conflicts (just a matter of fixing the param IDs in caffe.proto). It works well except for the backward pass: the code seems mathematically fine, yet I keep getting the same failure every time:

[ RUN      ] NeuronLayerTest/3.TestClipGradient
./include/caffe/test/test_gradient_check_util.hpp:175: Failure
The difference between computed_gradient and estimated_gradient is 0.052634884643931734, which exceeds threshold_ * scale, where
computed_gradient evaluates to 0,
estimated_gradient evaluates to 0.052634884643931734, and
threshold_ * scale evaluates to 0.001.
debug: (top_id, top_data_id, blob_id, feat_id)=0,33,0,33; feat = -1.0094736511535607; objective+ = -1.9989473023071214; objective- = -2

It's not random - always the same numbers (top_data_id and feat_id = 33). @hberntsen are you willing to help with this?

The reason for this is that sometimes an input value lands near the kink where the function's derivative is discontinuous (it is always the same element because we keep the random seed fixed). There, the numeric gradient estimate differs from the computed analytic value by a rather large margin. I can think of two hacky ways around this:

  • make the GradientChecker's stepsize smaller, thus reducing the probability of generating an input value near a discontinuous region,
  • find and remove such values from the input blob.

I will submit a new PR, cherry-picking @harm-nedap's commit and adding this fix to it.
