The Australian National University

Floating-point numbers and polynomial approximation

Sylvain Chevillard (ENS Lyon)

MSI Computational Mathematics Seminar Series

DATE: 2008-02-25
TIME: 11:00:00 - 12:00:00
LOCATION: John Dedman G35

ABSTRACT:
When it comes to implementing a numerical algorithm for computing a mathematical function f (such as exp, sin, erf, etc.), f is frequently replaced by a well-chosen function p. This function p should be such that, for all x, p(x) is a sufficiently good approximation of f(x).
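As a small illustration of this idea (not taken from the talk), the Python sketch below evaluates a degree-4 Taylor polynomial of exp with Horner's scheme and compares it with math.exp near 0; the coefficients and interval are my own choices.

    import math

    # Degree-4 Taylor coefficients of exp around 0 -- a simple stand-in for a
    # carefully chosen approximation p on a small interval.
    coeffs = [1.0, 1.0, 1.0 / 2.0, 1.0 / 6.0, 1.0 / 24.0]

    def p(x):
        """Evaluate the polynomial with Horner's scheme (highest degree first)."""
        acc = 0.0
        for c in reversed(coeffs):
            acc = acc * x + c
        return acc

    x = 0.1
    print(p(x), math.exp(x), abs(p(x) - math.exp(x)))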

Generally, p is chosen to be a polynomial, in particular because there are many powerful algorithms for the evaluation of polynomials. The coefficients of this polynomial have to be stored in the computer's memory and hence have a finite representation of the form:

± 1.b1 b2 ... b(t-1) * 2^e

where b1, b2, ..., b(t-1) are bits in {0, 1}. Such a number is called a floating-point number with precision t.
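To make "precision t" concrete, the sketch below (my illustration, not part of the abstract) rounds an arbitrary real number to the nearest number of this form using Python's math.frexp and math.ldexp; the function name round_to_precision is made up for this example.

    import math

    def round_to_precision(x, t):
        """Round x to the nearest floating-point number with a t-bit significand,
        i.e. a number of the form +/- 1.b1 b2 ... b(t-1) * 2^e."""
        if x == 0.0:
            return 0.0
        m, e = math.frexp(x)              # x = m * 2**e with 0.5 <= |m| < 1
        scaled = round(m * 2.0 ** t)      # keep t significant bits of the significand
        return math.ldexp(scaled, e - t)

    # 1/3 rounded to 8 bits of precision: 0.3333... -> 0.333984375 = 171/512
    print(round_to_precision(1.0 / 3.0, 8))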

We address the problem of finding a very good polynomial approximation to a function f, under the constraint that the coefficients of the polynomial must be floating-point numbers. Using lattice reduction theory (and in particular the LLL algorithm due to A. Lenstra, H. Lenstra and L. Lovász), we propose an algorithm for computing such a polynomial. We will show how the algorithm works on a complete example.
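To see why this constraint matters, the naive strategy is to compute a good real-coefficient approximation first and then round each coefficient independently to the target precision; the lattice-based method presented in the talk improves on this. The sketch below is my own baseline illustration (not the speaker's algorithm): it fits exp on [0, 1] by least squares with numpy and shows how rounding the coefficients, using the round_to_precision sketch above, degrades the approximation error.

    import numpy as np

    xs = np.linspace(0.0, 1.0, 200)
    fx = np.exp(xs)

    # Degree-5 least-squares fit with real (double-precision) coefficients.
    V = np.vander(xs, 6, increasing=True)          # columns 1, x, x^2, ..., x^5
    coeffs, *_ = np.linalg.lstsq(V, fx, rcond=None)

    # Round each coefficient independently to 12 bits of precision.
    rounded = np.array([round_to_precision(c, 12) for c in coeffs])

    err_real    = np.max(np.abs(V @ coeffs  - fx))
    err_rounded = np.max(np.abs(V @ rounded - fx))
    print(err_real, err_rounded)   # naive rounding visibly worsens the max error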
BIO:
http://perso.ens-lyon.fr/sylvain.chevillard/index_en.php
