| Bytes | Lang | Time | Link |
|---|---|---|---|
| 007 | Uiua | 241105T022648Z | nyxbird |
| 014 | Wolfram Language (Mathematica) | 240309T080007Z | lesobrod |
| 081 | Charcoal | 240227T083828Z | Neil |
| 038 | Pyth | 240227T154458Z | CursorCo |
| 023 | APL (Dyalog Unicode) | 240226T222400Z | Kamila S |
| 024 | Python + NumPy | 240227T035945Z | alephalp |
| 115 | JavaScript (Node.js) | 240227T033010Z | tsh |
Uiua, ~~8~~ 7 bytes
/+⌝⤸⊚2⤸
/+⌝⤸⊚2⤸
⤸ # re-orient the specified axes to the front
⌝⤸⊚2 # collapse the first two axes into one
/+ # sum along the new first axis
(⌝⤸ antiorient moves the front axes to the specified locations, and ⊚2 gives [0 0]. If multiple axes are moved to the same index, they're collapsed, giving the diagonal.)
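The same merge-axes-then-sum idea can be sketched in NumPy (variable names here are mine, not from the answer): repeating an index letter in `np.einsum` collapses those axes to their diagonal, and summing the merged axis performs the contraction.

```python
import numpy as np

t = np.arange(24).reshape(2, 3, 2, 2)

# Repeating the letter 'i' merges axes 0 and 2 into their diagonal,
# the analogue of collapsing two axes onto the same index.
d = np.einsum('iaib->abi', t)

# Summing along that merged axis then performs the contraction.
c = d.sum(axis=-1)
print(c.shape)  # (3, 2)
```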
Wolfram Language (Mathematica), 14 bytes
TensorContract
Axes are 1-indexed.
-5 thanks to @att
Try it online!
Charcoal, ~~103~~ ~~82~~ 81 bytes
≔LθεW⁺υΣθ«≔ιθ→»≔Eθ⟦⟧δUMEθ﹪÷κXε⮌…·⁰ⅈε∧⁼§ιη§ιζ⊞O§δ↨Φι∧⁻μη⁻μζε§θκUMδΣιFⅈ≔⪪δεδ⭆¹⊟⮌§δ⁰
Attempt This Online! Link is to verbose version of code. Explanation:
≔Lθε
Get the size of the tensor.
W⁺υΣθ«≔ιθ→»
Flatten the tensor into a list and get 1 less than the number of dimensions.
≔Eθ⟦⟧δ
Create a temporary holding list of lists for the elements of the tensor that will be summed to make the contracted tensor. This list is still the same size as the flattened tensor for now.
UMEθ﹪÷κXε⮌…·⁰ⅈε∧⁼§ιη§ιζ⊞O§δ↨Φι∧⁻μη⁻μζε§θκ
For each element of the tensor, calculate whether it contributes to the contracted tensor, and if so, append that element to the relevant list element.
UMδΣι
Sum each of the lists.
Fⅈ≔⪪δεδ
Unflatten the list back into a tensor of the original dimension.
⭆¹⊟⮌§δ⁰
Retrieve the first element of the first element to produce the final result.
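In ordinary Python, the same plan (flatten, decode each flat index, keep the elements whose coordinates on the two chosen axes agree, and sum them grouped by the remaining coordinates) might look like this; `contract_flat` and its argument names are my own:

```python
def contract_flat(flat, shape, ax1, ax2):
    # flat is the flattened tensor; shape holds its original dimensions.
    sums = {}
    for pos, v in enumerate(flat):
        # Decode the flat position into a multidimensional index.
        idx, rem = [], pos
        for n in reversed(shape):
            idx.append(rem % n)
            rem //= n
        idx.reverse()
        # Keep only elements on the diagonal of the two chosen axes,
        # keyed by the remaining coordinates.
        if idx[ax1] == idx[ax2]:
            key = tuple(c for k, c in enumerate(idx) if k not in (ax1, ax2))
            sums[key] = sums.get(key, 0) + v
    return sums

print(contract_flat([1, 2, 3, 4], (2, 2), 0, 1))  # {(): 5}
```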
The previous 103-byte version was a backport to TIO of a 79-byte version using my experimental multidimensional indexing branch:
≔θδW⁺υδ«≔⌊δδ→»≔X⁰⊕↔…⌊θ¹εFΦEXLθⅈ﹪÷ιXLθ…⁰ⅈLθ⁼§ιη§ιζ«≔⁺⟦⁰⟧Φι∧⁻λη⁻λζγ≔⊟γβ≔εαFγ≔§ακα≔θγFι≔§γκγ§≔αβ⁺§αβγ»⭆¹⊟ε
Try it online! Link is to verbose version of code. Explanation:
≔θδW⁺υδ«≔⌊δδ→»
Calculate the number of dimensions of the input tensor.
≔X⁰⊕↔…⌊θ¹ε
Generate a list of a zero tensor of two fewer dimensions as the initial value of the result tensor.
FΦEXLθⅈ﹪÷ιXLθ…⁰ⅈLθ⁼§ιη§ιζ«
Loop over the multidimensional indices of all the elements of the tensor where the coordinates in the two given axes are identical.
≔⁺⟦⁰⟧Φι∧⁻λη⁻λζγ≔⊟γβ≔εαFγ≔§ακα
Get the parent list of the corresponding element of the result tensor.
≔θγFι≔§γκγ§≔αβ⁺§αβγ
Get the element of the input tensor and add it to that result element.
»⭆¹⊟ε
Pretty-print the final result. (This is to show that the result is a single element when the input has two dimensions, as otherwise the output would be indistinguishable.)
Explanation of the 79-byte multidimensional indexing version:
≔θδW⁺υδ«≔⌊δδ→»
Calculate the number of dimensions of the input tensor as before.
≔×⁰…⌊θ¹ε
Generate a list of a zero initial result tensor. (This is slightly shorter because Times fully vectorises on that branch but it only vectorises over single dimensional arrays on TIO.)
FEXLθⅈ﹪÷ιXLθ…⁰ⅈLθ«
Loop over the multidimensional indices of the input tensor.
≔⁺⟦⁰⟧Φι∧⁻λη⁻λζδ
Get the multidimensional index of the result element.
¿⁼§ιη§ιζ§≔εδ⁺§εδ§θι
Update that element if the coordinates in the two given axes are identical.
»⭆¹⊟ε
Pretty-print the final result.
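The accumulate-into-a-zero-tensor plan of this version translates to Python roughly as follows (the function name is mine): start from a zero result tensor two dimensions smaller, then loop over all multidimensional indices and add in the elements whose coordinates on the two axes match.

```python
import itertools
import numpy as np

def contract(a, ax1, ax2):
    # Zero result tensor with the two contracted axes removed.
    keep = [n for k, n in enumerate(a.shape) if k not in (ax1, ax2)]
    out = np.zeros(keep, dtype=a.dtype)
    # Loop over every multidimensional index and accumulate the
    # elements whose coordinates on the two axes are identical.
    for idx in itertools.product(*map(range, a.shape)):
        if idx[ax1] == idx[ax2]:
            dest = tuple(c for k, c in enumerate(idx) if k not in (ax1, ax2))
            out[dest] += a[idx]
    return out

m = np.array([[1, 2], [3, 4]])
print(contract(m, 0, 1))  # a 0-d array holding the trace, 5
```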
Pyth, 38 bytes
.N?T:RtTtYN?tY:CN1Y?sIssNs.e@bkN:CMNT2
Defines a function `:` which takes three inputs: a nested list, the first axis, and the second axis. This assumes that the first axis is less than the second axis.
Explanation
The function operates recursively. The two axis inputs are T and Y. There are four cases to consider:
- In the base case, `T=0`, `Y=1`, and the matrix has depth 2. In this case we simply take the trace of the matrix, which is straightforward to implement.
- In the case that `T>0`, we map `:` over the list with axes `T-1` and `Y-1`.
- In the case that `T=0` and `Y>1`, we transpose the matrix and call `:` on the transposition with axes `1` and `Y`.
- And finally, in the case that `T=0`, `Y=1`, and the matrix has depth greater than 2, we map transposition over the list and call `:` on that with axes `0`, `2`.
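A direct Python transcription of the four cases (with `contract` standing in for `:`) may make the recursion easier to follow; like the original, it assumes the first axis is less than the second:

```python
def contract(n, t, y):
    if t > 0:
        # Recurse into each slice with both axes shifted down by one.
        return [contract(m, t - 1, y - 1) for m in n]
    if y > 1:
        # Swap axes 0 and 1 by transposing, then contract (1, y) instead.
        return contract([list(z) for z in zip(*n)], 1, y)
    if not isinstance(n[0][0], list):
        # Base case: depth 2 -- take the trace.
        return sum(row[k] for k, row in enumerate(n))
    # Depth > 2: transpose each slice (swapping axes 1 and 2),
    # then contract axes (0, 2).
    return contract([[list(z) for z in zip(*m)] for m in n], 0, 2)

print(contract([[1, 2], [3, 4]], 0, 1))  # 5
```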
.N # define :(N, T, Y)
?T # if T != 0:
:RtTtYN # map : over N with additional arguments T-1, Y-1
?tY # else if Y > 1:
:CN1Y # :(transpose(N), 1, Y)
?sIssN # else if matrix depth is 2:
s # sum of
.e N # map lambda k, b over the indices, values of N
@bk # b[k]
# else
CMN # map transposition over N
: T2 # :(above, T, 2)
APL (Dyalog Unicode), 23 bytes
+/⊢⍉⍨-⍨⍥≢∘⍴⌊{⍋⍋⍺∊⍨⍳≢⍴⍵}
Follows a golf suggested by Marshall and Adam. My original answer that shows the idea better:
{+/⍺⍉⍨(⍳r-2)@(⍵~⍨⍳r)⊢(r-2)@⍵⊢0⍴⍨r←⍴⍴⍺}
Tensor on the left, the two axes on the right. Works by noticing that tensor contraction of a tensor A over two axes is equivalent to a dyadic transposition with rank(A)-2 in place of both contracted axes and iota everywhere else, followed by a sum along the trailing axis. The transposition reduces the rank by one, and the sum on the trailing axis reduces it by one more, so the resulting tensor has rank rank(A)-2. Assumes ⎕IO←0.
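The rank bookkeeping described above can be mirrored in NumPy (function name mine): `np.diagonal` merges the two axes and moves the merged axis to the end, dropping the rank by one, and the trailing-axis sum drops it by one more.

```python
import numpy as np

def contract(a, ax1, ax2):
    # Merging the two axes into their diagonal reduces the rank by one;
    # the merged axis lands at the end of the shape.
    d = np.diagonal(a, axis1=ax1, axis2=ax2)
    # Summing along the trailing axis reduces the rank by one more,
    # so the result has rank(a) - 2 dimensions.
    return d.sum(axis=-1)

a = np.arange(36).reshape(3, 4, 3)
print(contract(a, 0, 2).shape)  # (4,)
```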
JavaScript (Node.js), 115 bytes
f=(m,x,y,j)=>(r=m.map?.((n,i)=>f(n,x-1,y-1,x?j:i)),y?x?r:r.reduce(a=(u,v)=>u.map?.((w,i)=>a(w,v[i]))??u+v):r[j])??m