
AutomaticDifferentiation

Templated C++ forward-mode automatic differentiation.

There are two versions:

  • a scalar one,
  • a vectorized one.

The class is a simple one: no expression templates are used. It is, however, a template, meaning that any base numeric type can be used with it. It has been successfully tested with boost::multiprecision::mpfr.

Scalar version

The scalar version makes it very easy to produce higher-order derivatives.

Vector version

The vectorized version is harder to make work with higher-order derivatives, but it allows the simultaneous computation of the full gradient in a single function call, making it more efficient than backward automatic differentiation. It currently depends on Eigen for the vectorized part.