Bytes  Language                        Time            Author
7      Uiua                            241105T022648Z  nyxbird
14     Wolfram Language (Mathematica)  240309T080007Z  lesobrod
81     Charcoal                        240227T083828Z  Neil
38     Pyth                            240227T154458Z  CursorCo
23     APL (Dyalog Unicode)            240226T222400Z  Kamila S
24     Python + NumPy                  240227T035945Z  alephalp
115    JavaScript (Node.js)            240227T033010Z  tsh

Uiua, 8 7 bytes

/+⌝⤸⊚2⤸

Try it!

/+⌝⤸⊚2⤸
       ⤸ # re-orient the specified axes to the front
  ⌝⤸⊚2  # collapse the first two axes into one
/+       # sum along the new first axis

(⌝⤸ antiorient moves the front axes to the specified locations, and ⊚2 gives [0 0]. If multiple axes are moved to the same index, they're collapsed, giving the diagonal.)
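
The same collapse-then-sum idea can be sketched in NumPy (my translation, not part of the answer): np.diagonal collapses the two given axes to their diagonal, and a sum along that axis performs the contraction.

```python
import numpy as np

# NumPy sketch of "collapse the two axes, then sum" (example tensor is mine):
a = np.arange(27).reshape(3, 3, 3)        # rank-3 tensor, contract axes 0 and 1
diag = np.diagonal(a, axis1=0, axis2=1)   # diagonal of axes 0 and 1, moved last
result = diag.sum(axis=-1)                # sum along the collapsed axis
assert np.array_equal(result, np.trace(a, axis1=0, axis2=1))
```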

Wolfram Language (Mathematica), 14 bytes

TensorContract

Axes are 1-indexed.
-5 thanks to @att
Try it online!

Charcoal, 103 82 81 bytes

≔LθεW⁺υΣθ«≔ιθ→»≔Eθ⟦⟧δUMEθ﹪÷κXε⮌…·⁰ⅈε∧⁼§ιη§ιζ⊞O§δ↨Φι∧⁻μη⁻μζε§θκUMδΣιFⅈ≔⪪δεδ⭆¹⊟⮌§δ⁰

Attempt This Online! Link is to verbose version of code. Explanation:

≔Lθε

Get the size of the tensor.

W⁺υΣθ«≔ιθ→»

Flatten the tensor into a list and get 1 less than the number of dimensions.

≔Eθ⟦⟧δ

Create a temporary holding list of lists for the elements of the tensor that will be summed to make the contracted tensor. This list is still the same size as the flattened tensor for now.

UMEθ﹪÷κXε⮌…·⁰ⅈε∧⁼§ιη§ιζ⊞O§δ↨Φι∧⁻μη⁻μζε§θκ

For each element of the tensor, calculate whether it contributes to the contracted tensor, and if so, append that element to the relevant list element.

UMδΣι

Sum each of the lists.

Fⅈ≔⪪δεδ

Unflatten the list back into a tensor of the original dimension.

⭆¹⊟⮌§δ⁰

Retrieve the first element of the first element to produce the final result.
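
Assuming a cubic nested-list tensor and 0-indexed axes (names are mine, not the Charcoal variables), the flatten-and-bucket approach above can be sketched in Python:

```python
from itertools import product

def contract(t, h, z):
    # Number of dimensions and side length of the (cubic) nested-list tensor.
    dims, size, u = 0, len(t), t
    while isinstance(u, list):
        dims, u = dims + 1, u[0]
    # Bucket every element whose coordinates on axes h and z agree,
    # keyed by its coordinates with those two axes removed.
    buckets = {}
    for idx in product(range(size), repeat=dims):
        if idx[h] == idx[z]:
            key = tuple(c for a, c in enumerate(idx) if a not in (h, z))
            v = t
            for c in idx:
                v = v[c]
            buckets[key] = buckets.get(key, 0) + v
    # Unflatten the buckets back into a tensor of dimension dims - 2.
    def build(prefix, depth):
        if depth == dims - 2:
            return buckets[tuple(prefix)]
        return [build(prefix + [i], depth + 1) for i in range(size)]
    return build([], 0)
```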

The previous 103-byte version was a backport to TIO of a 79-byte version that uses my experimental multidimensional indexing branch:

≔θδW⁺υδ«≔⌊δδ→»≔X⁰⊕↔…⌊θ¹εFΦEXLθⅈ﹪÷ιXLθ…⁰ⅈLθ⁼§ιη§ιζ«≔⁺⟦⁰⟧Φι∧⁻λη⁻λζγ≔⊟γβ≔εαFγ≔§ακα≔θγFι≔§γκγ§≔αβ⁺§αβγ»⭆¹⊟ε

Try it online! Link is to verbose version of code. Explanation:

≔θδW⁺υδ«≔⌊δδ→»

Calculate the number of dimensions of the input tensor.

≔X⁰⊕↔…⌊θ¹ε

Generate a list of a zero tensor of two fewer dimensions as the initial value of the result tensor.

FΦEXLθⅈ﹪÷ιXLθ…⁰ⅈLθ⁼§ιη§ιζ«

Loop over the multidimensional indices of all the elements of the tensor where the coordinates in the two given axes are identical.

≔⁺⟦⁰⟧Φι∧⁻λη⁻λζγ≔⊟γβ≔εαFγ≔§ακα

Get the parent list of the corresponding element of the result tensor.

≔θγFι≔§γκγ§≔αβ⁺§αβγ

Get the element of the input tensor and add it to that result element.

»⭆¹⊟ε

Pretty-print the final result. (This is to show that the result is a single element when the input has two dimensions, as otherwise the output would be indistinguishable.)

Explanation of the 79-byte multidimensional indexing version:

≔θδW⁺υδ«≔⌊δδ→»

Calculate the number of dimensions of the input tensor as before.

≔×⁰…⌊θ¹ε

Generate a list of a zero tensor as the initial result. (This is slightly shorter because Times fully vectorises on that branch, but it only vectorises over single-dimensional arrays on TIO.)

FEXLθⅈ﹪÷ιXLθ…⁰ⅈLθ«

Loop over the multidimensional indices of the input tensor.

≔⁺⟦⁰⟧Φι∧⁻λη⁻λζδ

Get the multidimensional index of the result element.

¿⁼§ιη§ιζ§≔εδ⁺§εδ§θι

Update that element if the coordinates in the two given axes are identical.

»⭆¹⊟ε

Pretty-print the final result.
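
The direct-accumulation scheme described above (zero result tensor, loop over multidimensional indices, add each matching element into the result index with the two axes dropped) can be sketched in Python; the axis names and cubic-tensor assumption are mine:

```python
from itertools import product

def contract(t, h, z):
    # Number of dimensions and side length of the (cubic) nested-list tensor.
    dims, size, u = 0, len(t), t
    while isinstance(u, list):
        dims, u = dims + 1, u[0]
    # Zero result tensor of two fewer dimensions.
    def zeros(d):
        return 0 if d == 0 else [zeros(d - 1) for _ in range(size)]
    res = zeros(dims - 2)
    for idx in product(range(size), repeat=dims):
        if idx[h] == idx[z]:
            # Multidimensional index of the result element: drop axes h and z.
            ridx = [c for a, c in enumerate(idx) if a not in (h, z)]
            v = t
            for c in idx:
                v = v[c]
            if not ridx:
                res += v          # rank-2 input: the result is a scalar
            else:
                p = res
                for c in ridx[:-1]:
                    p = p[c]
                p[ridx[-1]] += v
    return res
```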

Pyth, 38 bytes

.N?T:RtTtYN?tY:CN1Y?sIssNs.e@bkN:CMNT2

Try it online!

Defines a function : which takes three inputs: a nested list, the first axis, and the second axis. It assumes that the first axis is less than the second.

Explanation

The function operates recursively. The two axis inputs are T and Y. There are four cases to consider:

.N                                        # define :(N, T, Y)
  ?T                                      # if T != 0:
    :RtTtYN                               #   map : over N with additional arguments T-1, Y-1
           ?tY                            # else if Y > 1:
              :CN1Y                       #   :(transpose(N), 1, Y)
                   ?sIssN                 # else if matrix depth is 2
                         s                #   sum of
                          .e   N          #   map lambda k, b over the indices, values of N
                            @bk           #     b[k]
                                          # else
                                 CMN      #   map transposition over N
                                :   T2    #   :(above, T, 2)
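
The four cases can be sketched in Python (my 0-indexed translation of the recursion, with a helper transpose that swaps the first two axes):

```python
def transpose(M):
    # Swap the first two axes of a nested list.
    return [list(r) for r in zip(*M)]

def contract(N, T, Y):
    if T != 0:
        # Recurse into the first axis, shifting both target axes down.
        return [contract(n, T - 1, Y - 1) for n in N]
    if Y > 1:
        # Swap axes 0 and 1, then descend, moving axis Y closer to the front.
        return contract(transpose(N), 1, Y)
    if not isinstance(N[0][0], list):
        # Depth 2 with axes (0, 1): sum the main diagonal.
        return sum(N[k][k] for k in range(len(N)))
    # Deeper tensor with axes (0, 1): transpose each element and recurse.
    return contract([transpose(n) for n in N], T, 2)
```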

APL (Dyalog Unicode), 23 bytes

+/⊢⍉⍨-⍨⍥≢∘⍴⌊{⍋⍋⍺∊⍨⍳≢⍴⍵}

This follows a golf suggested by Marshall and Adam. My original answer, which shows the idea better:

{+/⍺⍉⍨(⍳r-2)@(⍵~⍨⍳r)⊢(r-2)@⍵⊢0⍴⍨r←⍴⍴⍺}

Tensor on the left, the two axes on the right. This works by noticing that contracting a tensor A over two axes is equivalent to a dyadic transposition with rank(A)-2 in place of the two given axes and iota everywhere else, followed by a sum along the trailing axis. The transposition reduces the rank by one, and the trailing-axis sum reduces it by one more, so the resulting tensor has rank rank(A)-2. Assumes ⎕IO of zero.
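
In NumPy terms (my translation, not part of the answer), repeating an output subscript in einsum plays the role of the repeated target position in the dyadic transpose: it collapses the two axes to their diagonal, and the trailing-axis sum then performs the contraction.

```python
import numpy as np

a = np.arange(24).reshape(2, 3, 2, 2)  # example tensor, contract axes 0 and 2

# Repeating 'i' on both contracted axes collapses them to the diagonal,
# placed last in the output, like a repeated axis in dyadic transpose.
diag = np.einsum('ijik->jki', a)   # shape (3, 2, 2): diagonal axis last
result = diag.sum(axis=-1)         # trailing-axis sum reduces rank once more
assert np.array_equal(result, np.trace(a, axis1=0, axis2=2))
```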

Python + NumPy, 24 bytes

-3 bytes thanks to @Mukundan314.

lambda a,i:a.trace(0,*i)

Attempt This Online!
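
A quick usage sketch (the example array is mine): a.trace(0, *i) expands to a.trace(offset=0, axis1=i[0], axis2=i[1]), so the two requested axes are passed straight through.

```python
import numpy as np

f = lambda a, i: a.trace(0, *i)   # the answer: offset 0, then the two axes

a = np.arange(8).reshape(2, 2, 2)
print(f(a, [0, 1]))               # sum of a[k, k, :] over k -> [6 8]
```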

JavaScript (Node.js), 115 bytes

f=(m,x,y,j)=>(r=m.map?.((n,i)=>f(n,x-1,y-1,x?j:i)),y?x?r:r.reduce(a=(u,v)=>u.map?.((w,i)=>a(w,v[i]))??u+v):r[j])??m

Attempt This Online!