AutomaticDifferentiation

Templated C++ forward automatic differentiation.

There are two versions:

  • a scalar one,
  • a vectorized one.

The class is a simple one; no expression templates are used. It is, however, a template, meaning that any base numeric type can be used with it. It has been successfully tested with boost::multiprecision::mpfr.

Scalar version

The scalar version makes it very easy to produce higher-order derivatives.

Vector version

The vectorized version is harder to make work with higher-order derivatives, but it allows the full gradient to be computed simultaneously, in a single function call, making it more efficient than backward automatic differentiation. It currently depends on Eigen for the vectorized part.