Hi,
I am working on a package for matrix functions and noticed that some of its functionality might be relevant for scipy as well. In particular, I wrote a more accurate version of `scipy.linalg.funm`. The relevant code I would like to put into a pull request is extracted at https://github.com/nextdorf/matrix-functions/blob/linalg/matrixfuncs/_linalg.py (it's just a draft atm, i.e. it currently lacks documentation).
The draft consists of the function `funm`, a class `Multiplicity` for representing and constructing the geometric and algebraic eigenvalue multiplicities of a matrix, and a helper function `cluster`.
- `cluster` takes the eigenvalues, groups values that are close together, and returns the counts. It essentially constructs the algebraic multiplicity.
- `Multiplicity` is just a named tuple for representing and constructing the eigenvalue multiplicity. Afaik neither scipy nor numpy provides the features to do that directly, but for the way my replacement for `funm` works, the multiplicity is essential. To find the geometric multiplicity, `scipy.linalg.orth` is applied to the eigenspace of the respective eigenvalues. (A rough sketch of both helpers follows below.)
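To make the idea concrete, here is a minimal sketch of how such a clustering and the geometric multiplicity could be computed. The names, tolerance, and return layout are illustrative assumptions on my part, not the actual API of the draft:

```python
import numpy as np
import scipy.linalg

def cluster(eigvals, tol=1e-8):
    """Group eigenvalues lying within `tol` of each other; return one
    representative per cluster plus the cluster sizes, i.e. the algebraic
    multiplicities. Illustrative only; the draft may do this differently."""
    reps, counts = [], []
    for lam in eigvals:
        for i, rep in enumerate(reps):
            if abs(lam - rep) < tol:
                counts[i] += 1   # close enough: same cluster
                break
        else:
            reps.append(lam)     # start a new cluster
            counts.append(1)
    return np.array(reps), np.array(counts)

def geometric_multiplicity(eigvecs_for_cluster):
    """Eigenvectors belonging to one clustered eigenvalue, as columns.
    scipy.linalg.orth returns an orthonormal basis of their span, so the
    number of its columns is the geometric multiplicity."""
    return scipy.linalg.orth(eigvecs_for_cluster).shape[1]
```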
My replacement for `funm` unfortunately does not have the same function signature as scipy's `funm`. The reason is that scipy's `funm` cannot actually handle defective matrices. Consider for example the following snippet:

```python
>>> A = np.array(((2, 1), (0, 2)))
>>> scipy.linalg.expm(A)
array([[7.3890561, 7.3890561],
       [0.       , 7.3890561]])
>>> scipy.linalg.funm(A, np.exp)
array([[7.3890561, 0.       ],
       [0.       , 7.3890561]])
```

Any general implementation of `funm` would need to involve derivatives of the function you want to apply to the matrix (see the Jordan decomposition). I solved this in my package by utilizing the external package numdifftools, but I'm not sure how welcome additional dependencies are if not strictly necessary. With the current implementation the user would need to provide the derivatives themselves.
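For illustration, here is one way the caller could assemble the required list of derivatives, either analytically or via numdifftools; the exact argument layout of the draft's `funm` may well differ, so treat the final call as hypothetical:

```python
import numpy as np
import numdifftools as nd

f = np.exp

# Analytically: for exp every derivative is exp again, so for a largest
# eigenvalue multiplicity of 2 the list [f, f'] is simply [np.exp, np.exp].
derivs_analytic = [np.exp, np.exp]

# Numerically, if an extra dependency such as numdifftools were acceptable
# (here for f and f', i.e. a largest multiplicity of 2):
derivs_numeric = [f] + [nd.Derivative(f, n=k) for k in range(1, 2)]

# Hypothetical call into the draft's funm (actual signature may differ):
# F = funm(A, derivs_analytic)
```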
**Accuracywise**
My implementation seems to agree very well with expm, logm, sqrtm, sinm, etc. It handles small, large, diagonal, defective, real, and complex matrices.
**Stabilitywise**
I noticed some problems with constructing the `Multiplicity` when eigenvalues are close but not close enough to be considered identical. I don't really have a solution for that case other than allowing the user to pass their own `Multiplicity` object. Besides that edge case, the implementation seems to be stable.
**Performancewise**
`linalg.orth` makes `Multiplicity.from_matrix` very costly. That might be sped up by constructing a basis with the Gram-Schmidt method instead (sketched below), but for the first draft I decided to just use the dedicated function. The most costly calls in `funm` seem to be `tensordot` and `solve`, but there I don't see much room for a speedup. All in all, according to timeit, calling my `funm` takes between 0% and 30% longer than scipy's `funm`.
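As a rough illustration of that possible speedup, a modified Gram-Schmidt pass over the eigenvectors of one cluster could replace `scipy.linalg.orth`; this is only a sketch under the assumption that just the orthonormal basis (or its column count) is needed, not the draft's actual code:

```python
import numpy as np

def gram_schmidt(V, tol=1e-12):
    """Orthonormal basis of the column span of V via modified Gram-Schmidt.
    Columns whose remainder falls below `tol` are treated as linearly
    dependent and dropped."""
    basis = []
    for v in V.T:                       # iterate over the columns of V
        w = v.astype(complex)
        for q in basis:
            w = w - np.vdot(q, w) * q   # remove the component along q
        norm = np.linalg.norm(w)
        if norm > tol:
            basis.append(w / norm)
    return np.column_stack(basis) if basis else np.empty((V.shape[0], 0))
```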
**Theoretical basis**
The underlying algorithm essentially builds on the Cayley–Hamilton theorem and the existence of the Jordan decomposition. Together they imply that any f(M) is a linear combination of the first n powers of M, with coefficients that depend linearly on the values of f, f', f'', … at the eigenvalues. The order of derivatives needed for each eigenvalue depends on its multiplicity.
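To make the linear-combination claim concrete, here is a tiny worked example for the 2x2 defective matrix from the snippet above. This is my own illustration of the general idea, not code taken from the draft:

```python
import numpy as np

# Cayley-Hamilton: for a 2x2 matrix, f(A) = c0*I + c1*A. The defective
# eigenvalue lam = 2 has multiplicity 2, so c0 and c1 are fixed by f and f':
#   c0 + c1*lam = f(lam)
#   c1          = f'(lam)
A = np.array([[2., 1.],
              [0., 2.]])
lam = 2.0
f, df = np.exp, np.exp             # f = exp, hence f' = exp as well

c1 = df(lam)
c0 = f(lam) - c1 * lam
F = c0 * np.eye(2) + c1 * A        # equals expm(A): [[e^2, e^2], [0, e^2]]
```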
I asked for feedback on Discord and was advised to open a discussion here, since `funm` was last modified by Pearu 24 years ago. This would be my first contribution to a large repository like SciPy, so I'd greatly appreciate any feedback, both on the implementation and on the contribution process!