
AutomaticDifferentiation

Templated C++ forward automatic differentiation.

There are two versions:

  • a scalar one, and
  • a vectorized one.

The class is a simple one; no expression templates are used. It is, however, a template, so any base numeric type can be used with it. It has been successfully tested with boost::multiprecision::mpfr.

Scalar version

The scalar version makes it very easy to produce higher-order derivatives.

Vector version

The vectorized version is harder to use for higher-order derivatives, but it computes the full gradient simultaneously, in a single function call, making it more efficient than backward automatic differentiation. It currently depends on Eigen for the vectorized part.