Peter Bruin on Wed, 05 Feb 2014 15:43:49 +0100


 Re: bb_field.{sub,div}

Hi Bill,

Let's ignore bb_field for the moment, because you raise an interesting
point about solving linear systems.

>> At least for subtraction there is one (and at the moment admittedly
>> only one).  Also, I did not make the patch for abstract reasons, but
>> because I was writing a function gen_matinvimage() (doing as the name
>> suggests) that would profit from both sub() and div().
>
> What will be the difference with gen_Gauss ?

The same as the difference between inverseimage() and gauss()
(matinverseimage() and matsolve() in GP): the latter is only for solving
A*X = B with A invertible.  In my application, A is non-square with full
column rank (i.e. A is the matrix of an injective but non-surjective
linear map) and the column space of B is known to be contained in that
of A, so there exists a unique X solving the system.
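To make the setting concrete, here is a small Python sketch (not GP/PARI code; the function name and the toy elimination routine are mine, purely for illustration) of solving A*X = b over a prime field when A is non-square with full column rank, raising an error when b is not in the column space of A:

```python
# Illustrative sketch: solve A*x = b over F_p where A is m x n with
# m >= n and rank n, so a solution (if it exists) is unique.

P = 7  # a small prime field F_7

def solve_full_column_rank(A, b, p=P):
    """Return the unique x with A*x = b over F_p, or raise ValueError
    if b is not in the column space of A."""
    m, n = len(A), len(A[0])
    # augmented matrix, reduced mod p
    M = [[A[i][j] % p for j in range(n)] + [b[i] % p] for i in range(m)]
    row = 0
    for col in range(n):
        # full column rank guarantees a pivot exists in every column
        piv = next(i for i in range(row, m) if M[i][col] != 0)
        M[row], M[piv] = M[piv], M[row]
        inv = pow(M[row][col], p - 2, p)  # inverse via Fermat's little theorem
        M[row] = [x * inv % p for x in M[row]]
        for i in range(m):
            if i != row and M[i][col] != 0:
                f = M[i][col]
                M[i] = [(M[i][j] - f * M[row][j]) % p for j in range(n + 1)]
        row += 1
    # the rows below the n pivots must have reduced to zero, else
    # the system has no solution
    if any(M[i][n] != 0 for i in range(n, m)):
        raise ValueError("b is not in the column space of A")
    return [M[i][n] for i in range(n)]

# a 3x2 matrix of rank 2 over F_7, and b = A * (2, 5)
A = [[1, 2], [3, 4], [5, 6]]
x_true = [2, 5]
b = [sum(A[i][j] * x_true[j] for j in range(2)) % P for i in range(3)]
print(solve_full_column_rank(A, b))  # -> [2, 5]
```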

In earlier PARI versions (maybe 2-3 years ago), matsolve() could solve
such a system, but at some point it stopped working and I quickly found
out that matinverseimage() provided the functionality that I needed.
I have now switched from matrices with t_FFELT entries to FlxqM for
efficiency reasons.  I did try FlxqM_gauss(), but it crashed; I did not
check exactly why, but I assume it is because that function also
implicitly requires A to be invertible.  (Which is presumably why there
are also separate functions Flm_gauss() and Flm_invimage().)

Hence I tried to write a bb_field analogue of inverseimage(), and that
worked fine.  In the case where the matrix is invertible, it actually
often seems to be faster than gauss(), although I haven't done any
systematic tests.  An example with my experimental gen_matinvimage()
patch applied:

gp> F=ffgen(27);
gp> M=matrix(100,100,i,j,random(F));
time = 8 ms.
gp> N=matrix(100,100,i,j,random(F));
time = 4 ms.
gp> matsolve(M,N);
time = 808 ms.
gp> matinverseimage(M,N);
time = 441 ms.

It would also be great (at least for me) if gauss()/gen_Gauss() could be
adapted to handle the situation where A is not necessarily invertible,
but I suspected that this could hurt performance and/or complicate the
desired behaviour (reporting an error if the solution does not exist or
is not unique).

>> Last week I also wrote a bb_field implementation of Strassen's matrix
>> multiplication algorithm (for personal use; as far as I understand it
>> is considered to be outside the scope of PARI).
>
> I do not think it is outside the scope of PARI. Rather so far we did
> not have any application for it.

I see.  But any application of matrix multiplication (over exact rings;
fast multiplication is numerically less stable) is of course a potential
application of Strassen's algorithm, since one can simply switch between
classical and Strassen multiplication inside the multiplication routine,
depending on whether the dimensions exceed some tuning parameter.  From
some small experiments that I did, it seems that over finite fields (I
used a field of size 29^10), Strassen multiplication already becomes
worthwhile for dimensions larger than about 16 (using the classical
algorithm for sizes <= 16 as base cases).
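For illustration, here is a hedged Python sketch (again not PARI code; the names and structure are mine, and the cutoff of 16 is just the value from the experiment above) of that classical/Strassen switch, over plain integers as the exact ring:

```python
import random

# Tuning parameter: below this size, fall back to classical multiplication.
CUTOFF = 16

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_sub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_mul_classical(A, B):
    n, m, k = len(A), len(B), len(B[0])
    return [[sum(A[i][t] * B[t][j] for t in range(m)) for j in range(k)]
            for i in range(n)]

def mat_mul(A, B):
    """Multiply square matrices whose size is a power of two, switching
    from Strassen to the classical algorithm below CUTOFF."""
    n = len(A)
    if n <= CUTOFF:
        return mat_mul_classical(A, B)
    h = n // 2
    # split both matrices into quadrants
    A11 = [r[:h] for r in A[:h]]; A12 = [r[h:] for r in A[:h]]
    A21 = [r[:h] for r in A[h:]]; A22 = [r[h:] for r in A[h:]]
    B11 = [r[:h] for r in B[:h]]; B12 = [r[h:] for r in B[:h]]
    B21 = [r[:h] for r in B[h:]]; B22 = [r[h:] for r in B[h:]]
    # the seven Strassen products
    M1 = mat_mul(mat_add(A11, A22), mat_add(B11, B22))
    M2 = mat_mul(mat_add(A21, A22), B11)
    M3 = mat_mul(A11, mat_sub(B12, B22))
    M4 = mat_mul(A22, mat_sub(B21, B11))
    M5 = mat_mul(mat_add(A11, A12), B22)
    M6 = mat_mul(mat_sub(A21, A11), mat_add(B11, B12))
    M7 = mat_mul(mat_sub(A12, A22), mat_add(B21, B22))
    # recombine the quadrants of the product
    C11 = mat_add(mat_sub(mat_add(M1, M4), M5), M7)
    C12 = mat_add(M3, M5)
    C21 = mat_add(M2, M4)
    C22 = mat_add(mat_sub(mat_add(M1, M3), M2), M6)
    return [C11[i] + C12[i] for i in range(h)] + \
           [C21[i] + C22[i] for i in range(h)]

# sanity check against the classical algorithm, on 32x32 matrices with
# entries roughly the size of elements of a field of 29^10 elements
n = 32
A = [[random.randrange(29**10) for _ in range(n)] for _ in range(n)]
B = [[random.randrange(29**10) for _ in range(n)] for _ in range(n)]
assert mat_mul(A, B) == mat_mul_classical(A, B)
```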

However, after that I committed some industrial espionage and took a
look at what FLINT does; it uses Kronecker substitution to reduce to
multiplying matrices with integer coefficients.  This could actually be
a more promising way of speeding up matrix multiplication over finite
fields (one could then still apply Strassen, but I suspect the
dimensions at which this becomes profitable are much larger).
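For completeness, a minimal Python sketch of the Kronecker-substitution idea for a single polynomial product over F_p (my own toy code, not FLINT's; in the matrix case one would pack every entry in the same way and then multiply matrices of integers):

```python
# Kronecker substitution: evaluate both polynomials at a large power of
# two, do one big-integer multiplication, and read the product
# coefficients back off as base-2^b "digits".

p = 29

def kronecker_mul(f, g, p=p):
    """Multiply f, g in F_p[x], given as coefficient lists (low degree
    first), via a single integer multiplication."""
    # each coefficient of f*g (before reduction mod p) is at most
    # min(len(f), len(g)) * (p-1)^2, so b bits per digit avoid carries
    b = (min(len(f), len(g)) * (p - 1) ** 2).bit_length()
    pack = lambda poly: sum(c << (b * i) for i, c in enumerate(poly))
    n = pack(f) * pack(g)          # the one big-integer multiplication
    mask = (1 << b) - 1
    out = []
    while n:
        out.append((n & mask) % p)  # extract one coefficient, reduce mod p
        n >>= b
    return out

def schoolbook_mul(f, g, p=p):
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, c in enumerate(g):
            out[i + j] = (out[i + j] + a * c) % p
    return out

f = [3, 0, 7, 28]   # 3 + 7x^2 + 28x^3
g = [1, 5, 2]       # 1 + 5x + 2x^2
assert kronecker_mul(f, g) == schoolbook_mul(f, g)
```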

Cheers,

Peter
