automatic-differentiation
Here are 169 public repositories matching this topic...
I'm using TF 2.0 and I get an error when importing tangent, because its list of non-differentiable functions (line 60) includes tf.to_float, which is deprecated:
https://www.tensorflow.org/versions/r1.14/api_docs/python/tf/to_float
I found that mod2pi is not implemented yet, although mod works. Is there a list of implemented functions? A minimal working example:
using Zygote
# This works
gradient(x -> mod(x, 2pi), 1.0)
# This does not
gradient(x -> mod2pi(x), 1.0)
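One possible workaround, shown only as a sketch and not an official fix: away from the wrap-around points, mod2pi is locally the identity shifted by a constant, so its derivative is 1, and Zygote's public @adjoint macro can register that directly.
using Zygote

# Sketch of a manual adjoint for mod2pi (an assumption, not part of Zygote):
# away from multiples of 2π the function is locally x ↦ x - 2πk, so the
# pullback passes the incoming cotangent through unchanged.
Zygote.@adjoint mod2pi(x) = mod2pi(x), ȳ -> (ȳ,)

gradient(mod2pi, 1.0)   # (1.0,)
The derivative is undefined exactly at multiples of 2π, where this sketch silently returns 1.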
The README.md in branch 3.0.x mentions ND4J as a related project in that it "provides numerical computing used in DeepLearning.scala". This is currently true for DL4J, but it appears that Compute.scala is git-submoduled into the root of DeepLearning.scala...
So which is it? Is the README.md right, an…
The docs generated for #431 at https://mratsim.github.io/Arraymancer/pca.html have broken formatting:
Currently, if a user wants to determine from the documentation whether a particular operation is supported on a device (e.g., in the default.qubit plugin), they will find a number of gates/operations/observables listed, but under different names than those used in PennyLane. This may be confusing to users. We should state very clearly…
Description
In the constrain_XXX functions used to map unconstrained parameters to constrained parameters in Stan, the value being constrained is required to have the same type as the log density target being incremented. These functions should allow the two types to vary independently: the target will always be at least double, whereas the variable being constrained might be int when used in t…
I know I have opened rather a lot of issues in the last 24 hours (and have a few more to go), but I just wanted to comment that I think you have some of the nicest documentation I have ever read for any C++ project. It's well put together and builds on itself very nicely so it is easy(ish) to see how to build up from simple hello world examples to something a bit more involved.
The documentation currently describes how to add custom derivative definitions using DiffRules. However, it seems that this only covers basic custom derivatives; for example, I don't know how this approach would support a black-box function (e.g. an external binary) that returns both the fun…
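For context, the basic route the documentation covers looks roughly like the sketch below; MyPkg and myfun are hypothetical names, and the limitation raised above is that the derivative must be expressible as a plain Julia expression.
using DiffRules

module MyPkg
    myfun(x) = sin(x)   # hypothetical scalar function standing in for a real one
end

# Register d/dx myfun(x) = cos(x). This covers simple symbolic rules, but not a
# black-box routine that computes the value and derivative together.
DiffRules.@define_diffrule MyPkg.myfun(x) = :(cos($x))

DiffRules.diffrule(:MyPkg, :myfun, :x)   # => :(cos(x))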
I'm curious whether your vision includes making this a feature-complete NN training framework.
What is the master plan: integrating with Torch/TF/MXNet, or building a hardware-level compilation framework from scratch?
Also, what are the standards and code of conduct for contributions from the community?
I'm totally convinced of its capability and believe it can fill the missing link between horiz…
Example:
auto const xml = R"urdf(
<robot name="test">
  <link name="base">
  </link>
  <link name="link1">
    <inertial>
      <origin xyz="0. 0. 0."/>
      <mass value="1."/>
      <inertia ixx="1." ixy="0." ixz="0." iyy="1." iyz="0." izz="1."/>
    </inertial>
  </link>
  <joint name="joint0" type="continuo…
With the increasing number of processors being used in many simulations, especially with exch2 grids, nPx quickly goes beyond 999, and the formatted write to msgBuf in ini_procs.F starting here:
https://github.com/MITgcm/MITgcm/blob/07e785229e35cf2d8247b74b6d9d95d2c3adb417/eesupp/src/ini_procs.F#L269
produces "***" and error messages that clutter the output, making it annoying to search.
The following works for gradient!():
using DiffBase, ReverseDiff
f(x) = sum(sin, x) + prod(tan, x)*sum(sqrt, x);
x = rand(4);
result = DiffBase.GradientResult(x);
rcfg = ReverseDiff.GradientConfig(x);
ReverseDiff.gradient!(result, f, x, rcfg);
DiffBase.value(result)
DiffBase.gradient(result)
However, the Hessian analogue of the above fails:
using DiffBase
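The snippet is cut off here; for reference, here is a sketch of what the Hessian analogue would look like under the same DiffBase/ReverseDiff API (reconstructed from the gradient version above, not copied from the report; ReverseDiff documents that a HessianConfig used with a DiffResult should be built from both the result and the input):
using DiffBase, ReverseDiff
f(x) = sum(sin, x) + prod(tan, x)*sum(sqrt, x);
x = rand(4);
result = DiffBase.HessianResult(x);
hcfg = ReverseDiff.HessianConfig(result, x);
ReverseDiff.hessian!(result, f, x, hcfg);
DiffBase.value(result)
DiffBase.gradient(result)
DiffBase.hessian(result)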
We should make N a type parameter, so that Taylor1 produces an object of type Taylor{1}, and TaylorN an object of type Taylor{N}, where N is the number of variables.
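As a rough illustration of what that parametric layout could look like (field names and constructors here are purely illustrative, not the actual TaylorSeries.jl internals):
# Sketch only: a single parametric type carrying the number of variables N.
struct Taylor{N,T<:Number}
    coeffs::Vector{T}   # truncated expansion coefficients
    order::Int          # truncation order
end

# The current names would then become thin aliases / constructors:
const Taylor1{T} = Taylor{1,T}          # single-variable series
# TaylorN(...) would build a Taylor{N,T} with N equal to the number of variables.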
Changes to Docs
A lot has changed since the docs were first written. #152 addresses a number of things, but there are a few more that we might want to consider:
- changing all references to autodiff / automatic differentiation to AD / algorithmic differentiation, with a terminology box somewhere in the docs explaining what we're on about.
- In the "On writing good rrule and frule" section, we should consi…
In operations_broadcast_test.go there are some tests that are not yet filled in. The point is to test that broadcasting works for different shapes. The semantics of broadcasting probably aren't clear, so please do send me a message about anything.
This is a good first issue for anyone looking to get involved.