Karim Belabas on Wed, 12 Feb 2014 14:25:40 +0100


Re: precision of result when adding t_REAL with lots of cancellation

* Peter Bruin [2014-02-12 13:12]:
> Bonjour Denis et pari-dev,
> The following issue came up when I tried to apply Denis Simon's
> 2-descent program for elliptic curves over number fields (ell.gp) to a
> certain elliptic curve over the quadratic field with defining polynomial
> y^2 - 229.  The linear polynomial b in ZZ[y] below takes positive values
> at both roots of y^2 - 229, but one of these values is very small.
> Denis's program takes this into account and increases the precision if
> b(y) is exactly 0 or has precision less than 10 digits.  In the example
> below, there is a huge cancellation between two real numbers that only
> differ by one bit, but PARI rounds up the precision to 19 digits.
> A quick fix for Denis's program is to increase the bound for the
> precision test in ell.gp:nfrealsign() from 10 to 38 digits.

A better fix would be for us to export nfsign(). Then

  K = bnfinit(y^2 - 229);
  b = -1554544300737274875964190134520312870631312460283689944298138572669148295776039072867720281361776956435252620954745928376624817557704277432961924925312*y + 23524523971732905757341977352314040726186200302188191824300117738073539522011689544444863977622786771332621915440577829842674416407299864303146477224320;

(13:26) gp > nfsign(K, b)
  ***   at top-level: nfsign(K,b)
  ***                 ^-----------
  *** nfsign: precision too low in nfsign_arch.

(13:26) gp > \p115
   realprecision = 115 significant digits
(13:26) gp > K = nfnewprec(K);
(13:27) gp > nfsign(K, b)
%7 = Vecsmall([0, 0])

Something like

      iferr(nfsign(K, b),
            e,
            if (errname(e) != "e_PREC", error(e));
            default(realprecision, 2*precision(1.));
            K = nfnewprec(K));

would work more reliably.  
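Wrapped in a loop, this becomes a self-contained retry (a sketch; the `done` flag and the implicit "double until it works" policy are mine, not existing library code):

  my(S, done = 0);
  until (done,
    iferr(S = nfsign(K, b); done = 1,
          e,
          \\ rethrow anything that is not a precision error
          if (errname(e) != "e_PREC", error(e));
          \\ double the working precision and recompute K's data
          default(realprecision, 2*precision(1.));
          K = nfnewprec(K)));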

That one must currently change the global variable 'realprecision' for this is
an unfortunate, and serious, known bug. Having a way to locally change
'defaults' (or at least 'realprecision', e.g. myprec() or mybitprec()) has
been on the TODO list for some time.
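Until such a mechanism exists, one can at least save and restore the global
default by hand (a sketch; the 115 digits is simply the value that happened to
suffice in the transcript above):

  my(old = default(realprecision));
  default(realprecision, 115);
  K = nfnewprec(K);          \\ recompute K's floating point data
  S = nfsign(K, b);
  default(realprecision, old);  \\ restore the global default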

> However, I am wondering if in general the current PARI behaviour of rounding
> up the precision of the result of t_REAL + t_REAL is really desirable in 
> cases where the actual precision is extremely small (say at most four bits).
> Would it be reasonable to round the precision _down_ in such cases?
> This would mean that in some computations slightly more precision will
> be lost than necessary, and maybe this case distinction has a negative
> effect on speed, but erring on the side of caution seems possibly
> worthwhile here.

Relying on the PARI "floating point philosophy" for "real world" programs
is a bug in itself. It just cannot work reliably.
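To see why: after a catastrophic cancellation, the stored precision of a
t_REAL no longer reflects how many of its bits are meaningful (an
illustration; the numbers are arbitrary):

  \p 19
  a = Pi * 10^15;
  b = -truncate(a) * 1.;  \\ agrees with a in almost all bits
  c = a + b;              \\ catastrophic cancellation: only a few
                          \\ significant digits of c are correct
  precision(c)            \\ yet the reported precision stays (nearly) full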

One must proceed as usual in numerical analysis: avoid catastrophic
cancellation a priori, bound the sizes and errors in terms of known quantities
(here the sup norm of b and K.roots), then use multiprecision at the
appropriate 'realprecision' to guarantee results. Using interval
arithmetic as a black box would not magically solve such instances: one
would obtain [-infinity, +infinity] for all practical purposes (e.g. in
this case, it would not be possible to determine the sign of the real
embeddings of b).
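Concretely, for the situation quoted above one can choose 'realprecision'
from the data before evaluating anything (a heuristic sketch; safedigits and
its 38-digit safety margin are ad hoc, not part of ell.gp):

  \\ hypothetical helper: digits needed to evaluate b at K.roots safely,
  \\ from the sup norm of b's coefficients plus a safety margin
  safedigits(b) =
  {
    my(H = vecmax(apply(abs, Vec(b))));
    ceil(log(H) / log(10)) + 38;
  }
  default(realprecision, safedigits(b));
  K = nfnewprec(K);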

Independently, in this particular case, the appearance of such a large
'b' might be a design problem. Libpari programs almost never use huge
algebraic numbers in this (expanded) form, but restrict to
(possibly large) factorizations in terms of (small) S-units.
This is unfortunately not easy to do in GP. But most functions, e.g.
nfsign here (or ideallog mentioned yesterday), accept such factorizations
as inputs.
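For instance, nfsign also accepts a two-column factorization matrix
(elements and exponents), so a large product can be handled without ever
expanding it (schematic; g1 and g2 are hypothetical small elements of K):

  g1 = y + 15; g2 = 2 - y;   \\ hypothetical small elements of K
  fa = [g1, 10; g2, -3];     \\ famat: represents g1^10 * g2^(-3)
  nfsign(K, fa)              \\ signs at the real places, product never expanded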


Karim Belabas, IMB (UMR 5251)  Tel: (+33) (0)5 40 00 26 17
Universite Bordeaux 1          Fax: (+33) (0)5 40 00 69 50
351, cours de la Liberation    http://www.math.u-bordeaux1.fr/~kbelabas/
F-33405 Talence (France)       http://pari.math.u-bordeaux1.fr/  [PARI/GP]