| Bytes | Lang | Time | Link |
|---|---|---|---|
| 199 | Python 3 | 250321T203644Z | CrSb0001 |
| 020 | APL (NARS) | 250323T104445Z | Rosario |
| 002 | Vyxal 3 | 250321T215932Z | Themooni |
| 020 | 05AB1E | 201021T085405Z | Kevin Cr |
| 350 | Fortran (GFortran) | 221228T102951Z | roblogic |
| 008 | PARI/GP | 221124T183720Z | Hugo Pfo |
| 003 | q | 221125T015852Z | cillianr |
| 023 | APL (Dyalog Unicode) | 201022T133837Z | ovs |
| 048 | Charcoal | 201021T135201Z | Neil |
| 232 | Scala | 201020T225057Z | user |
| 003 | Matlab | 201026T052953Z | Dmitry K |
| 029 | Excel | 201023T211807Z | Engineer |
| 061 | R | 201020T213940Z | Giuseppe |
| 188 | Python 2 | 201022T130358Z | lynn |
| 228 | Python 2 | 201021T142511Z | lynn |
| 057 | Octave | 201021T015709Z | Sisyphus |
| 019 | Ruby -rmatrix | 201021T094644Z | Razetime |
| 011 | SageMath | 201020T233130Z | Noodle9 |
| 002 | J | 201021T070327Z | Bubbler |
| 003 | Jelly | 201021T064123Z | Kevin Cr |
| 004 | MATL | 201021T022336Z | Mukundan |
| 169 | JavaScript (ES6) | 201020T230908Z | Arnauld |
| 003 | Julia 1.0 | 201020T215146Z | Kirill L |
| 005 | R | 201020T210613Z | Kirill L |
| 001 | APL (Dyalog Unicode) | 201020T204712Z | RGS |
| 007 | Wolfram Language (Mathematica) | 201020T204409Z | ZaMoC |
Python 3, 211 202 199 bytes
2025-03-22: -9 bytes
2025-03-25: -3 bytes (sub 200)
Currently loses to the most golfed Python solution by 11 bytes.
I know I'm losing to the sub-200-byte Python 2 answer, but I thought it was weird that there were no Python 3 answers, so I decided to try it out for myself. However, I did beat the 228-byte Python 2 solution. :))
Actually, I decided to do this because I wanted to use an inverted-matrix method for an answer to a different problem (even though it's most definitely inefficient compared to other methods that could be used) and ended up finding this question.
Solution
e=enumerate
m=lambda l,i,j:[r[:j]+r[j+1:]for r in l[:i]+l[i+1:]]
d=lambda l:sum([l[0][c]*(-1)**c*d(m(l,0,c))for c,_ in e(l)])or 1
lambda l:[[(-1)**(x+y)*d(m(l,x,y))/d(l)for x,_ in e(l)]for y,_ in e(l)]
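A quick sanity check of the solution, binding the final anonymous lambda to a name (`f` is my own name, added only for testing) and inverting the matrix from the challenge:

```python
e=enumerate
m=lambda l,i,j:[r[:j]+r[j+1:]for r in l[:i]+l[i+1:]]
d=lambda l:sum([l[0][c]*(-1)**c*d(m(l,0,c))for c,_ in e(l)])or 1
f=lambda l:[[(-1)**(x+y)*d(m(l,x,y))/d(l)for x,_ in e(l)]for y,_ in e(l)]

A=[[4,-3,0],[-4,-7,6],[5,7,6]]
inv=f(A)
print(inv[0])  # approximately [14/83, -3/83, 3/83]
```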
Old solution (258 bytes)
r=range
z=len
m=lambda l,i,j:[r[:j]+r[j+1:]for r in l[:i]+l[i+1:]]
d=lambda l:sum([l[0][c]*(-1)**c*d(m(l,0,c))for c in r(z(l))])if 2<z(l)else l[0][0]*l[1][1]-l[0][1]*l[1][0]
i=lambda l:[[(-1)**(x+y)*d(m(l,x,y))/d(l)for x in r(z(l))]for y in r(z(l))]if d(l)else 0
`m` is the lambda function for getting cofactor matrices, `d` is the lambda function for getting the determinant, and `i` is the lambda function that finally gets the inverse of the matrix.
I had originally used `e=enumerate` in the first line (without the `r=` or `z=`), but somehow this ended up costing me ~20-25 bytes over my 258-byte solution, so I used `r=range` and `z=len` to circumvent that 'issue'.
Then I looked at @lynn's answer and saw that `e=enumerate` is indeed more efficient - I just need to write `for [var],_ in e(l)`. After that, I
- Realized that the OP only wants it to handle matrices with non-zero determinants, so 13 bytes saved in the main lambda.
- Changed the check in the determinant function from `2<len(l)` to `1<len(l)` (and actually to `~-len(l)`, since I can then write `if~-len(l)else l[0][0]`).
- I then for really no reason decided to try out `if len(l)else 1`, which for some reason works. I have tested the current version with invertible 2x2, 3x3, 4x4, and 5x5 matrices to actually make sure this change works, and it seems that it does, but still.
As of 2025-03-22, I
- Have the determinant lambda do `l==[]or` at the beginning, as `True==1` in Python. (-7 bytes)
- Wrote the main function to compute the inverse anonymously - as I don't need to call it anywhere in the code. (-2 bytes)
As of 2025-03-25, I
- Rewrote `l==[]or` by writing `sum(...)or 1` instead of what I had before. This trick doesn't seem to work with @lynn's Python 2 answer, but they can still save 1 byte (to get 187) by rewriting the `d=lambda` function as follows: `d=lambda a:sum(b[0]*c(a,i,0)for i,b in e(a))or[]==a`
APL (NARS), 20 chars
{⍉⊃{⍵⌹k}¨,/=/¨⍳⍴k←⍵}
This idea came from seeing the answer at https://codegolf.stackexchange.com/a/213904/120560 ...
Here ⌹ is a builtin that computes the inverse matrix, but it also applies Cramer's rule (used dyadically, it solves a linear system).
It seems one can easily build the inverse of an n×n input matrix using Cramer's rule: solve the linear system for each row of the n×n identity matrix, build the matrix whose rows are the results, and transpose. So I use ⌹ only to apply Cramer's rule, not as the builtin inverse matrix... (a Cramer's-rule-only solution would be 25 chars in APL, if an answer without the builtin is needed).
Here =/¨⍳⍴k is the identity matrix with the dimensions of the matrix k.
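The same idea in a pure-Python sketch (the helper names `det` and `cramer` are mine): solve A·x = eᵢ via Cramer's rule for each row eᵢ of the identity matrix; each solution is a column of A⁻¹, so transposing the collected results gives the inverse.

```python
# Cramer's rule: x_j = det(A_j)/det(A), where A_j is A with column j replaced by b
def det(m):
    if len(m)==1:return m[0][0]
    return sum((-1)**c*m[0][c]*det([r[:c]+r[c+1:]for r in m[1:]])for c in range(len(m)))

def cramer(A,b):
    D=det(A)
    return [det([r[:j]+[b[i]]+r[j+1:]for i,r in enumerate(A)])/D for j in range(len(A))]

A=[[1,3,5],[7,5,1],[7,6,3]]
n=len(A)
cols=[cramer(A,[float(i==j)for j in range(n)])for i in range(n)]  # one solve per identity row
inv=[list(r)for r in zip(*cols)]                                  # results are columns: transpose
print(inv[0])  # first row of the inverse: [4.5, 10.5, -11.0]
```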
f←{⍉⊃{⍵⌹k}¨,/=/¨⍳⍴k←⍵}
aaa←3 3⍴ 1 3 5 7 5 1 7 6 3 2 1
aaa
┌3─────┐
3 1 3 5│
│ 7 5 1│
│ 7 6 3│
└~─────┘
f aaa
┌3──────────────┐
3 4.5 10.5 ¯11│
│ ¯7 ¯16 17│
│ 3.5 7.5 ¯8│
└~──────────────┘
⌹ aaa
┌3──────────────┐
3 4.5 10.5 ¯11│
│ ¯7 ¯16 17│
│ 3.5 7.5 ¯8│
└~──────────────┘
This is the test from the question: the first call takes float input, while the second returns a list of rational numbers because the last element is marked with "x".
f 3 3⍴ 4 ¯3 0 ¯4 ¯7 6 5 7 6
┌3──────────────────────────────────────────┐
3 0.1686746988 ¯0.03614457831 0.03614457831│
│ ¯0.1084337349 ¯0.04819277108 0.04819277108│
│ ¯0.0140562249 0.08634538153 0.08032128514│
└~──────────────────────────────────────────┘
f 3 3⍴ 4 ¯3 0 ¯4 ¯7 6 5 7 6x
┌3────────────────────┐
3 14r83 ¯3r83 3r83│
│ ¯9r83 ¯4r83 4r83│
│ ¯7r498 43r498 20r249│
└~────────────────────┘
Vyxal 3, 2 bytes
Þ⅟
Boring builtin, but I was testing the snippet builder, so I might as well post it.
<script type="vyxal3">
Þ⅟
</script>
<script>
args=[["[[4,-3,0],[-4,-7,6],[5,7,6]]"]]
</script>
<script src="https://themoonisacheese.github.io/snippeterpreter/snippet.js" type="module"/>
05AB1E, 38 22 21 20 bytes
˜nO/øтF©I2Føδ*O®}·s-
Port of @Sisyphus' Octave answer, so make sure to upvote him!!
-16 bytes thanks to @ovs.
Code explanation:
˜ # Flatten the (implicit) input-matrix to a single list
n # Square each value in this list
O # Take the sum (this is the trace of M*M')
/ # Divide each value in the (implicit) input-matrix by this trace
ø # Zip/transpose this matrix; swapping rows/columns
тF # Loop 100 times:
© # Store the current matrix in variable `®` (without popping)
I # Push the input-matrix
2F # Loop 2 times:
ø # Zip/transpose the top matrix; swapping rows/columns
δ # Apply double-vectorized with the top two matrices:
* # Multiply
O # Sum each inner row
® # Push the matrix from variable `®` again
}· # After the inner loop: double all values in matrix `®`
s # Swap so the calculated matrix VMV is at the top again
- # Subtract this VMV from the 2V
# (after the outer loop, the resulting matrix is output implicitly)
Original answer (38 bytes) and detailed explanation:
εUεX*O]Å\OIøs/тFxs©εUIøεX*O}U®øεX*O}}-
05AB1E has barely any useful builtins for matrices, not even matrix multiplication. So almost everything has to be done manually.
Since I'm an absolute noob in math, I'm gonna explain everything in full detail to help others like me who want to do this challenge without any builtins, and also to keep this answer self-contained.
Step 1) Matrix multiplication of the input-matrix \$M\$ with its transpose: \$M\times M'\$:
If we have matrices \$A\$ and \$B\$ and want to do the matrix multiplication \$AB\$, we take the dot product of the \$i^{th}\$ row of \$A\$ and the \$j^{th}\$ column of \$B\$ for every coordinate \$i,j\$ in the two matrices.
For example, if we use the matrix in the challenge description:
\$M = \left[\begin{matrix} 4 & -3 & 0 \\ -4 & -7 & 6 \\ 5 & 7 & 6 \end{matrix}\right]\$
We can for example calculate the values in the top row of the resulting \$M\times M'\$ matrix with:
Top-left: \$4\times4+-3\times-3+0\times0 = 25\$
Top-center: \$4\times-4+-3\times-7+0\times6=5\$
Top-right: \$4\times5+-3\times7+0\times6 = -1\$
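These dot products can be sketched in plain Python; note that \$(M\times M')[i][j]\$ is just the dot product of rows \$i\$ and \$j\$ of \$M\$, so no explicit transpose is needed:

```python
M=[[4,-3,0],[-4,-7,6],[5,7,6]]
# (M·M')[i][j] = dot product of row i and row j of M
MMT=[[sum(x*y for x,y in zip(r,s))for s in M]for r in M]
print(MMT[0])  # the top row computed above: [25, 5, -1]
```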
I've done matrix multiplication in 05AB1E before in this answer of mine, so I've used that code snippet here as well. Since we want to multiply the input-matrix by its transpose, we actually won't need the transpose builtin here.
ε # Map over each row of the (implicit) input-matrix
U # Pop and store the current row in variable `X`
ε # Map over each row of the (implicit) input-matrix again
X* # Multiply the values of the current row by the values at the same
# positions in row `X`
O # And take the sum of this row
] # Close both maps
Step 2) Take the trace of this new matrix: \$\text{tr}(M\times M')\$
The trace of a square matrix is basically the sum of its main diagonal (the values from the top-left to the bottom-right).
Å\ # Take the main diagonal of the matrix of step 1
O # And sum the values in this list together
Try the first two steps online.
Step 3) Divide all values in the transposed matrix by this trace we calculated:
I # Push the input-matrix
ø # Zip/transpose it; swapping rows/columns
s # Swap so the trace we calculated is at the top of the stack
/ # And divide each value in the transposed matrix by this trace
Try the first three steps online.
Step 4) Repeat the following steps (5 through 8) enough times for the answer to not change anymore:
Since this program isn't very fast in 05AB1E, I've decided to loop just 100 times, but this can be increased to improve the accuracy of the decimal results (I've verified with @Sisyphus' Octave answer that changing the 1e4 to 1e2 still gives the same result for most matrices).
тF # Loop 100 times:
I'm not sure if the values will eventually not change anymore if we loop enough times. If this is the case we could (in theory) save a byte by changing this тF to Δ (loop until the result no longer changes).
(Let's call the intermediate matrix inside this loop \$V\$ for the explanations of the following steps.)
Step 5) Double each value in the current matrix: \$2V\$:
x # Double each value in the current matrix V (without popping)
Try the first five steps online, excluding the loop of step 4.
Step 6) Do matrix multiplication again for \$VM\$ (where \$M\$ is the input-matrix):
s # Swap to take the non-doubled matrix V at the top again
© # Store this matrix V in variable `®` (without popping)
ε # Map over each row of matrix V:
U # Pop the current row, and store it in variable `X`
I # Push the input-matrix M
ø # Zip/transpose; swapping rows/columns
ε # Map over each row of this transposed matrix M':
X* # Multiply the values in the current row by row `X`
O # And take the sum
Try the first six steps online, excluding the loop of step 4.
Step 7) And do matrix multiplication yet again right after: \$VMV\$:
} # Close the inner map
U # Pop and store this as new `X`
® # Push the matrix V from variable `®`
ø # Zip/transpose; swapping rows/columns
ε # Map over each row of this transposed matrix V':
X* # Multiply the values in the current row by row `X`
O # And take the sum
}} # Close both the inner and outer maps
Try the first seven steps online, excluding the loop of step 4.
Step 8) Subtract the values at the same positions of these two matrices from one another: \$2V-VMV\$:
- # Subtract matrix VMV from 2V
Try the first eight steps online, excluding the loop of step 4.
And after the loop is done, the resulting matrix is output implicitly.
Fortran (GFortran), 376 350 bytes
real,allocatable,dimension(:,:)::A,B,U;read*,n;allocate(A(n,n));B=A;U=A;read*,((A(i,j),j=1,n),i=1,n)
do i=1,n;do j=1,n;U(i,j)=merge(1,0,i==j);B(i,j)=merge(1.,1/A(i,j),A(i,j)==0);enddo;enddo
do 4 l=1,99;do 4 i=1,n;do 4 j=1,n;s=U(i,j);do k=1,n;m=i;if(k/=m)s=s-A(i,k)*B(k,j);enddo;B(m,j)=s/A(i,m)
4 continue;print'(3(x,f9.6))',((B(i,j),j=1,n),i=1,n)
end
Adapted from here. Saved 26 bytes using merge instead of if.
PARI/GP, 8 bytes
f(A)=1/A
Built-in function; particularly efficient for matrices with integer matrix elements.
Most likely, the method used is similar to the one described in this reference: H. Haramoto, M. Matsumoto, "A p-adic algorithm for computing the inverse of integer matrices", Journal of Computational and Applied Mathematics, Volume 225, Issue 1, 1 March 2009, Pages 320-322.
APL (Dyalog Unicode), 27 23 bytes
-4 bytes thanks to Adám!
This implements the method advertised by Sisyphus.
⊢(⊢+⊢-⊢+.×+.×)⍣≡⍉÷,+.×,
A function that takes the matrix as the right argument.
+.× computes the dot product (sum of the element-wise product) of the flattened matrix , with the flattened matrix ,. This is the sum of the squared matrix entries, which is equal to \$\text{tr}(AA^T)\$.
⍉÷ divides the transposed matrix by the trace.
(⊢+⊢-⊢+.×+.×)⍣≡ is a function which is applied to the original matrix \$A\$ as a left argument (⊢) and \$A^T\div tr(AA^T)\$ as the right argument (⍉÷,+.×,):
⊢+⊢-⊢+.×+.× takes the current matrix \$V\$ on its right and the input matrix \$A\$ on its left and executes one iteration step:
+.× is the inner product of + and ×. Given two matrices, this calculates their product. In this case \$ A \times V \$.
⊢ is the right argument \$V\$, ⊢+.× the product \$V \times (A \times V)\$.
⊢- subtracts this from the right argument: \$V-V \times A \times V\$.
⊢+ adds this to the right argument: \$V+V-V \times A \times V\$.
⍣≡ applies the function on its left until the result doesn't change. Because of the way equality testing works in Dyalog APL, this actually terminates.
Charcoal, 48 bytes
≔Eθ∕Eθ§λκΣEXθ²ΣληFφUMηEκ⁻⊗μΣEθ×ΣEθ×§κς§ρπ§§ηπνIη
Try it online! Link is to verbose version of code. Explanation: Another port of @Sisyphus's answer.
≔Eθ∕Eθ§λκΣEXθ²Σλη
Transpose the input and divide it by the sum of squares of all the elements. Sadly neither sum nor divide fully vectorise, so I have to divide a row at a time and calculate the sum via a nested loop.
Fφ
Repeat 1000 times, which should be enough for floating-point precision.
UMηEκ⁻⊗μΣEθ×ΣEθ×§κς§ρπ§§ηπν
Calculate the matrix multiplication and subtraction in-place. Charcoal doesn't have any vector or matrix operations, so we have to loop over the rows and columns manually, but there are a couple of places where we can share variables which saves us a couple of bytes each.
Iη
Output the array. (Note that each element is output on its own line and each row is double-spaced from the previous.)
Scala, 237 232 bytes
Uses the method from Sisyphus's answer. Go upvote that!
m=>{val h=m.indices
Seq.iterate(m.transpose.map(_.map(_/m.flatten.map(x=>x*x).sum)),9999){v=>h.map(i=>h.map{j=>2*v(i)(j)-(h.map(k=>v(i).zip(m.transpose.apply(k))map(t=>t._1*t._2)sum),v.transpose.apply(j)).zipped.map(_*_).sum})}last}
h is just a range from 0 until n to reuse later (mostly because Scala doesn't have matrix multiplication builtins). The function makes a sequence of 9999 elements and takes the last element. The first element is the transpose of m divided by the trace of m times its transpose. Subsequent elements are calculated with 2*v-v*m*v, where v was the previous element.
To calculate \$V_0\$ (It turns out the trace of m times its transpose is just the sum of squares of all of m's cells):
m.transpose.map( //For every row in m's transpose
_.map( //For every cell in that row
_ / //Divide it by (trace(M * M's transpose))
m.flatten //Turn m into a 1D list
.map(x=>x*x) //Square each cell
.sum)) //Add them up
To calculate subsequent elements, we use \$2V - (VM)V\$, but you have to map over h instead of over v itself:
h.map(i => //For every i in [0, n)
h.map{j => //For every j in [0, n)
2*v(i)(j) - //2V at these coordinates minus
<(v * m * v)[i][j]> }) //v*m*v at these coordinates (see explanation below)
To calculate (v*m)[i]:
h.map(k => //k is the index of a row in [0, n)
v(i).zip( //Zip column i of v with
m.transpose.apply(k) //Row k of m (apply is used for indexing here)
) map(t=>t._1*t._2) //Multiply v(i)(j) with m(k)(i)
sum //Add them up
)
And getting the dot product of that with row j of v uses pretty much the same approach.
Scala, 346 342 bytes
Saved 4 bytes thanks to @corvus_192!
type M=Seq[Seq[Double]]
def c(m:M)={val I=m.indices;I.map(i=>I.map(j=>m(i)(j)*math.pow(-1,i+j)))}
def d(m:M):(M,Double)=if(m.size<2)m->m(0)(0)else{val I=m.indices
val M=I.map(i=>I.map{j=>d(I.filter(i!=_)map(k=>I.filter(j!=_)map(m(k))))._2})
c(M)->c(m).head.zip(M.head).map(t=>t._1*t._2).sum}
def i(m:M)=d(m)._1.transpose.map(_.map(_/d(m)._2))
As you can see, I'm not very good at math.
Matlab, 6 3 bytes
inv
Computes and prints the inverse of a square matrix. Pretty boring in-built solution. Thanks to @Bubbler for the clarification and -3 bytes.
Excel, 29 bytes
=MINVERSE(OFFSET(A2,,,A1,A1))
Straightforward application of the MINVERSE() function. It's boring but I got excited about Excel having a built-in for something. Input \$n\$ in A1, the matrix starting in A2, and the formula anywhere the spill won't interfere.
R, 72 61 bytes
function(A,V=t(A/sum(diag(A%*%t(A))))){for(i in 1:1e4)V=2*V-V%*%A%*%V;V}
Porting Sisyphus' answer is not futile at all...and thanks to Sisyphus for -11 bytes.
It uses the observation that \$\text{Tr}(AA^T)=\sum\limits_{i,j}a_{ij}^2\$.
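The observation is easy to verify numerically (a pure-Python sketch): the \$(i,j)\$ entry of \$AA^T\$ is the dot product of rows \$i\$ and \$j\$ of \$A\$, so the trace sums each row's dot product with itself.

```python
A=[[4,-3,0],[-4,-7,6],[5,7,6]]
# A·Aᵀ has entries row_i(A)·row_j(A); the trace sums the diagonal entries
AAT=[[sum(x*y for x,y in zip(r,s))for s in A]for r in A]
trace=sum(AAT[i][i]for i in range(len(A)))
print(trace,sum(x*x for r in A for x in r))  # both 236
```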
R, 94 bytes
function(M)outer(k<-1:dim(M),k,Vectorize(function(j,i)det(M[-i,-j,drop=F])*(-1)^(i+j)))/det(M)
Thanks to Robin Ryder for fixing a bug and making this actually work.
Calculates \$A^{-1}\$ using the adjugate/determinant method.
Python 2, 188 bytes
lambda a:[[c(a,j,i)/d(a)for j,_ in e(a)]for i,_ in e(a)]
c=lambda a,i,j:(-1)**(i+j)*d([b[:j]+b[j+1:]for I,b in e(a)if i-I])
d=lambda a:a==[]or sum(b[0]*c(a,i,0)for i,b in e(a))
e=enumerate
The top lambda computes \$A^{-1} = \frac{1}{\det(A)}\text{adj}(A)\$.
d(a) computes the determinant and c(a,i,j) computes cofactors.
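The same lambdas also run under Python 3, where `/` is already true division; a quick check, binding the top lambda to the name `f` (my own name, for testing only):

```python
e=enumerate
c=lambda a,i,j:(-1)**(i+j)*d([b[:j]+b[j+1:]for I,b in e(a)if i-I])
d=lambda a:a==[]or sum(b[0]*c(a,i,0)for i,b in e(a))
f=lambda a:[[c(a,j,i)/d(a)for j,_ in e(a)]for i,_ in e(a)]

A=[[4,-3,0],[-4,-7,6],[5,7,6]]
inv=f(A)
print(inv[0])  # approximately [14/83, -3/83, 3/83]
```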
Python 2, 228 bytes
from random import*
a=input()
exec"""$:j,J=i,I;J+=[j==i $]
while~-all(I[i]$):shuffle(a)
$:
j,J=i,I
$:
if j-i:I[:]=[y-I[j]*x/J[j]for x,y in zip(J,I)]
$:print[x/I[i]for x in I][len(a):]""".replace("$","for i,I in enumerate(a)")
Augment the matrix with the identity matrix, then apply Gauss–Jordan elimination. I don't know if this is the shortest approach, but it's the one I wanted to try golfing down.
I use while not all(a[i][i]for i in r):shuffle(a) to move zeros off the diagonal. This loop will definitely terminate, because if there is no permutation of the rows of \$A\$ that makes the diagonal free of zeros, then \$\det(A)=0\$, which we are guaranteed is not the case. This can be seen from the Leibniz formula for \$\det(A)\$:
$$\det(A) = \sum_{\sigma \in S_n} \text{sgn}(\sigma) \prod_{i=1}^n a_{\sigma(i),i}$$
“There is no permutation \$\sigma\$ of the rows that makes the diagonal free of zeros” can be equivalently rephrased as “\$\prod_{i=1}^n a_{\sigma(i),i}\$ is always 0, for all \$\sigma\$” which causes this whole formula to be 0.
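A deobfuscated sketch of the same augment-and-eliminate approach in plain Python (the function and variable names are my own):

```python
from random import shuffle

def inv(a):
    n=len(a)
    # augment each row of A with the corresponding row of the identity matrix
    a=[list(map(float,row))+[float(i==j)for j in range(n)]for i,row in enumerate(a)]
    # permute rows until the diagonal is zero-free (terminates, since det(A) != 0)
    while not all(a[i][i]for i in range(n)):shuffle(a)
    for i in range(n):                 # Gauss-Jordan: clear column i outside the pivot row
        for j in range(n):
            if j!=i:
                f=a[j][i]/a[i][i]
                a[j]=[y-f*x for x,y in zip(a[i],a[j])]
    # divide each row by its pivot and keep the right half: that's A^-1
    return [[x/a[i][i]for x in a[i]][n:]for i in range(n)]

R=inv([[4,-3,0],[-4,-7,6],[5,7,6]])
print(R[0])  # approximately [14/83, -3/83, 3/83]
```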
Octave, 57 bytes
A=input('');V=A'/trace(A*A');for i=1:1e4V=2*V-V*A*V;end
V
This is not particularly well golfed, but I wanted to advertise an approach that could be useful for other non-builtin answers.
This uses the Hotelling-Bodewig scheme:
$$ V_{i+1} = V_i\left(2I - AV_i\right)$$
Which iteratively computes the inverse of a non-singular matrix. This is guaranteed to converge when \$\left\lVert I - AV_0\right\rVert < 1\$ (under a suitable matrix norm). Choosing \$V_0\$ is difficult, but F. Soleymani shows in "A New Method For Solving Ill-Conditioned Linear Systems" that the initial guess \$V_0 = \frac{A^T}{\text{tr}(AA^T)}\$ will always satisfy this condition, so the scheme is numerically stable.
What makes this a particularly attractive approach to other potential answers is that we don't require any builtin determinant or inverse functions. The most complex part is just matrix multiplication, since the transpose and trace are trivial to compute.
I have chosen 1e4 iterations here to make the runtime somewhat reasonable, although you could of course push it to 1e9 with no loss of byte count.
-10 bytes thanks to xnor for noting we don't need to construct an identity matrix.
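The scheme is easy to prototype in plain Python (helper names are mine), mirroring the Octave code above: start from \$V_0 = A^T/\text{tr}(AA^T)\$ and iterate \$V \leftarrow 2V - VAV\$.

```python
A=[[4.,-3.,0.],[-4.,-7.,6.],[5.,7.,6.]]
n=len(A)
mul=lambda X,Y:[[sum(X[i][k]*Y[k][j]for k in range(n))for j in range(n)]for i in range(n)]
# V0 = A' / tr(A·A'); the trace is just the sum of squared entries of A
tr=sum(x*x for r in A for x in r)
V=[[A[j][i]/tr for j in range(n)]for i in range(n)]
for _ in range(100):                  # V <- 2V - V·A·V
    VAV=mul(V,mul(A,V))
    V=[[2*V[i][j]-VAV[i][j]for j in range(n)]for i in range(n)]
print(V[0])  # approximately [14/83, -3/83, 3/83]
```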
Ruby -rmatrix, 23 19 bytes
->a{Matrix[*a].inv}
Returns the result as a Ruby matrix object.
-4 bytes from Dingus.
SageMath, 14 13 11 bytes
Saved a byte thanks to FryAmTheEggman!!!
Saved 2 bytes thanks to Sisyphus!!!
lambda M:~M
Inputs any square matrix and returns its inverse.
J, 2 bytes
%.
Same as APL, but more powerful, as J can produce an exact rational matrix when given a matrix of extended integers as input.
Jelly, 3 bytes
æ*-
Explanation:
# Full program taking a single integer-matrix as argument
æ* # Matrix exponentiation
- # with -1
# (after which the result is output implicitly)
MATL, 4 bytes
-1Y^
Explanation
-1Y^
-1 : Push -1 onto the stack
Y^ : Raise implicit input to -1 power
JavaScript (ES6), 169 bytes
This computes \$M^{-1} = \dfrac{\operatorname{adj}(M)}{\det(M)}\$
M=>M.map((r,y)=>r.map((_,x)=>D(h(M,x).map(r=>h(r,y)))*(x+y&1?-1:1)/D(M)),h=(a,n)=>a.filter(_=>n--),D=M=>+M||M.reduce((s,[v],i)=>s+(i&1?-v:v)*D(h(M,i).map(r=>h(r,0))),0))
R, 5 bytes
solve
Nothing new here...
Basically, the code solve(A, B) solves \$AX = B\$, but when \$B\$ is not given, it is treated as the identity matrix, thus giving us the inverse as the result.
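The same trick works in Python too, assuming NumPy is available: instead of calling an inverse function, ask the solver for \$X\$ in \$AX=I\$.

```python
import numpy as np

A=np.array([[4,-3,0],[-4,-7,6],[5,7,6]],float)
X=np.linalg.solve(A,np.eye(3))   # solving A·X = I gives X = A^-1
```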
APL (Dyalog Unicode), 1 byte (SBCS)
⌹
The domino primitive is a very interesting APL "built-in". It already featured in another 1-byte answer of my own where it was used to solve a least-squares problem. When applied to a square matrix, ⌹ tries to find the matrix inverse of its argument.
Many golfing languages will also have a built-in for this... But mind you, APL is not a golfing language, although it is terse enough to be very competitive and, in cases like this, win.
