Bill Allombert on Fri, 15 May 2009 22:40:06 +0200
Re: Another problem with matrix inversion
On Fri, May 15, 2009 at 05:34:09AM +0200, Lorenz Minder wrote:
> Hi,
>
> I take it, the API for FpM_gauss() must not change, right?

Yes, it should stay as documented, though you can change FpM_gauss_sp
as long as it does not affect FpM_gauss().

> > At this point the only trouble is to pick the fastest algorithm
> > depending on the matrices' entries.
>
> I did some tests to see how this behaves for t_INTMOD over small moduli.
> I took m = 2^8 - 1 = 3*5*17 as an example. I created random invertible
> matrices mod m of size n * n.
>
> Here are the results:
>
> matadjoint technique:
>   n = 150, stack usage between 128MB and 256MB, running time 67s.
>   n = 170, stack usage between 256MB and 512MB, running time 112s.
> chinese remaindering:
>   n = 150, stack usage < 8MB, running time 0.15s.
>   n = 170, stack usage < 8MB, running time 0.227s.
>
> For me, the most worrying thing here is the excessive memory usage,
> which suggests that something with n = 200 variables or so will be
> difficult to solve on a typical PC.
>
> The numbers suggest that for t_INTMOD, the splitting trick, or possibly
> adapting matsolvemod(), will fare much better, unless matadjoint() can
> be sped up by orders of magnitude and its memory usage can be contained.

This is in part because RgM_inv calls Flm_inv, which is optimized for
modular arithmetic with long integers instead of computing with t_INTMODs.
Flm matrices are between 7 and 9 times smaller than t_INTMOD matrices,
and operations can be done "in place".

Cheers,
Bill.
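For readers outside the thread, the "chinese remaindering" approach benchmarked above can be sketched as follows. This is not the PARI implementation, just a minimal illustration in Python of the idea: for a squarefree modulus such as m = 255 = 3*5*17, invert the matrix modulo each prime factor separately (cheap, word-sized arithmetic) and recombine the entries with the Chinese Remainder Theorem. All function names here are made up for the sketch.

```python
# Sketch: matrix inversion mod a squarefree composite via per-prime
# inversion + CRT. Requires Python 3.8+ for pow(x, -1, p).

def mat_inv_mod_p(A, p):
    """Gauss-Jordan inversion of the square matrix A modulo a prime p."""
    n = len(A)
    # Augmented matrix [A | I], entries reduced mod p.
    M = [[A[i][j] % p for j in range(n)] + [int(i == j) for j in range(n)]
         for i in range(n)]
    for col in range(n):
        # Find a nonzero pivot and swap it into place.
        piv = next(r for r in range(col, n) if M[r][col] % p != 0)
        M[col], M[piv] = M[piv], M[col]
        inv = pow(M[col][col], -1, p)          # modular inverse of pivot
        M[col] = [x * inv % p for x in M[col]]
        for r in range(n):                     # clear the column elsewhere
            if r != col and M[r][col]:
                f = M[r][col]
                M[r] = [(x - f * y) % p for x, y in zip(M[r], M[col])]
    return [row[n:] for row in M]              # right half is A^-1 mod p

def crt(residues, primes):
    """Combine residues mod pairwise-coprime primes (Garner's method)."""
    x, m = 0, 1
    for r, p in zip(residues, primes):
        x += m * ((r - x) * pow(m, -1, p) % p)
        m *= p
    return x % m

def mat_inv_mod_m(A, primes):
    """Invert A mod prod(primes): invert mod each prime, then CRT entrywise."""
    invs = [mat_inv_mod_p(A, p) for p in primes]
    n = len(A)
    return [[crt([inv[i][j] for inv in invs], primes) for j in range(n)]
            for i in range(n)]
```

Only O(n^2) full-size integers ever exist (the final CRT-combined result); all the elimination work happens on word-sized residues, which is the same reason the Flm_inv path in PARI is so much leaner than t_INTMOD arithmetic.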