| Bytes | Lang | Time | Author |
|---|---|---|---|
| 14 | K (ngn/k) | 241024T221622Z | att |
| 12 | iogii | 241025T220055Z | Darren S |
| 23 | JavaScript (ES6) | 241023T163346Z | Arnauld |
| 52 | Math | 241030T013654Z | ninjamar |
| 10 | Vyxal 3 | 241029T141954Z | Ginger |
| ? | C (gcc) | 241029T005054Z | AShelly |
| 44 | AWK | 241028T134916Z | xrs |
| 41 | Raku | 241028T101239Z | Mustafa |
| 18 | J | 241027T232913Z | south |
| 12 | Halfwit | 241024T153704Z | Kevin Cr |
| 6 | Set Theory: The Language | 241025T084026Z | RubenVer |
| 18 | Charcoal | 241024T075143Z | Neil |
| 8 | 05AB1E | 241024T145329Z | Kevin Cr |
| 13 | x86 32-bit machine code | 241024T142122Z | m90 |
| 107 | Go | 241023T153219Z | bigyihsu |
| 34 | R | 241023T185122Z | pajonk |
| 41 | Python | 241023T190321Z | Albert.L |
| 7 | Jelly | 241023T155751Z | UnrelatedString |
| 10 | Japt | 241023T145233Z | Shaggy |
| 30 | APL+WIN | 241023T155235Z | Graham |
| 41 | Google Sheets | 241023T151921Z | doubleunary |
K (ngn/k), ~~10+6 = 16~~ 8+6 = 14 bytes
Encoder:
+/~~!-2*
Decoder:
-1/!-:
-2* negative double
! range to/from 0
+/~~ count nonzero
!-: range -x...-1
-1/ convert from base -1
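A rough Python rendering of the two K programs, based on my reading of ngn/k's primitives (`!m` yields `0..m-1` for nonnegative `m` and `m..-1` for negative `m`; the positional base -1 convention below is an assumption that matches the results):

```python
def k_range(m):
    # ngn/k iota: !m is 0..m-1 for m >= 0, and m..-1 for m < 0
    return range(m) if m >= 0 else range(m, 0)

def encode(n):                       # +/~~!-2*  : count nonzero of !(-2n)
    return sum(x != 0 for x in k_range(-2 * n))

def decode(m):                       # -1/!-:    : base -1 decode of !(-m)
    digits = list(k_range(-m))       # -m .. -1  (empty when m == 0)
    # value = sum of digit * (-1)^place, least significant digit last
    return sum(d * (-1) ** (len(digits) - 1 - i) for i, d in enumerate(digits))
```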
iogii 12 bytes
encoder 6
2*:_(X
2* dup negate pred max
Max of 2* thing and its complement as others have done
decoder 6
}_bW1_
countTo negate backwards baseFrom 1 negate
Based on att's K solution
JavaScript (ES6), 23 bytes
-6 thanks to @Neil
-1 thanks to @m90
Encoder (12 bytes)
n=>n*2^n>>31
Decoder (11 bytes)
n=>n/2^-n%2
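The same bit tricks carry over to Python for inputs in the 32-bit range (JavaScript's bitwise operators coerce to 32-bit integers implicitly; in Python the truncation in `n/2` has to be made explicit with `//`):

```python
def encode(n):                  # JS: n => n*2 ^ n >> 31
    return (n * 2) ^ (n >> 31)  # n >> 31 is 0 for n >= 0, -1 for n < 0
                                # XOR with -1 is bitwise NOT: 2n -> -2n-1

def decode(n):                  # JS: n => n/2 ^ -n % 2
    return (n // 2) ^ -(n % 2)  # -(n % 2) is 0 for even n, -1 for odd n
```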
Math, 19 + 33 = 52 bytes
Encoder:
Actual source:
f(x)={x<0:-2x-1,2x}
Formatted LaTeX: $$ f(x)=\{x<0:-2x-1,2x\} $$
Decoder:
Actual source:
g(x)={0<=mod(x,2)<1:x/2,(x+1)/-2}
Formatted LaTeX: $$ g(x)=\{0\le \operatorname{mod}(x,2)<1:x/2,(x+1)/-2\} $$
*Desmos requires quite verbose LaTeX code, so the formatted LaTeX can't be copied directly into Desmos. I counted the equations in the actual source when determining my byte count.
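A direct Python transcription of the two piecewise definitions (my translation; for integer inputs, Desmos's mod behaves like Python's `%` here):

```python
def f(x):
    """Encoder: -2x-1 if x < 0, else 2x."""
    return -2 * x - 1 if x < 0 else 2 * x

def g(x):
    """Decoder: x/2 if x is even, else (x+1)/(-2)."""
    return x // 2 if x % 2 == 0 else (x + 1) // -2
```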
Vyxal 3, 5 + 5 = 10 bytes
Encoder: (Vyxal It Online!)
ÞṬN$Ḟ
Decoder: (Vyxal It Online!)
ÞṬN$i
Explanation of the encoder:
ÞṬN$Ḟ
$Ḟ # The index of the input in:
ÞṬ # The set of integers (0, 1, -1, 2, -2, ...)
N # Negated (0, -1, 1, -2, 2, ...)
The decoder works the same way, except with indexing instead of finding the index.
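The index/indexing pair can be sketched in Python with a generator for the negated integer sequence (names are mine, not Vyxal's):

```python
from itertools import count, islice

def integers_negated():
    """ÞṬN: the integers 0, 1, -1, 2, -2, ... with each term negated,
    giving 0, -1, 1, -2, 2, ..."""
    yield 0
    for k in count(1):
        yield -k
        yield k

def encode(n):   # $Ḟ: the index of n in the sequence
    return next(i for i, m in enumerate(integers_negated()) if m == n)

def decode(i):   # i: the element at index i
    return next(islice(integers_negated(), i, None))
```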
AWK, 21+23=44 bytes
Encoder
$0=$1>=0?2*$1:-2*$1-1
Decoder
$0=$1%2?-($1+1)/2:$1/2
Unified 48 bytes
$0=$1~/e/?$2>=0?2*$2:-2*$2-1:$2%2?-($2+1)/2:$2/2
Combined example: awk -f zigzag.awk <<< "d 19" or "e -10" for decode/encode respectively.
Raku, 14 + 27 = 41 bytes
Encoder:
.abs*2-($_ <0)
Decoder:
($_/2).ceiling*(1-2*($_%2))
Encoder: take twice the absolute value, then subtract 1 if the input is negative.
Decoder: ceiling-divide by 2, then set the sign according to the parity of the input.
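In Python the same two steps read as (my translation of the Raku):

```python
import math

def encode(n):                   # .abs*2 - ($_ < 0)
    return abs(n) * 2 - (n < 0)  # the boolean counts as 0 or 1

def decode(n):                   # ($_/2).ceiling * (1 - 2*($_ % 2))
    return math.ceil(n / 2) * (1 - 2 * (n % 2))  # sign flips when n is odd
```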
J, 8+10=18 bytes
Both functions use the same formulas as pajonk's R solution.
Encoder, 8 bytes
+:@|-<&0
+:@|-<&0
<&0 NB. less than 0?
+:@| NB. absolute value(|) then(@) double(+:)
- NB. subtraction
Decoder, 10 bytes
<.@-:-+:|]
<.@-:-+:|]
] NB. input
+: NB. double the input
| NB. mod
<.@-: NB. halve(-:) then(@) floor(<.), aka integer division
- NB. subtraction
Halfwit, ~~17~~ ~~16~~ 12 (5.5+6.5) bytes
Encoder:
+?><l[n+N
-4 bytes thanks to emanresuA.
Try it online or verify all test cases.
Decoder:
(>M<+N;>{</
Try it online or verify all test cases.
This language feels as inefficient as I remember. 😅
Explanation:
+ # Add the (implicit) input-bigint to itself to double it
? # Push the input again
>< # Push compressed 0
l # Check if the input is smaller than 0
[ # If it is indeed negative:
n+ # Add 1
N # And then negate it
# (after which the result is output implicitly)
( # Loop the (implicit) input-bigint amount of times:
>M< # Push compressed integer 1
+ # Add it to the current bigint
# (which uses the implicit input-bigint in the first iteration)
N # Then negate it
;>{< # After the loop: push compressed 2
/ # Integer-divide
# (after which the result is output implicitly)
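The decoder's loop trick (apply bitwise NOT once per unit of the input, then halve) can be sketched in Python:

```python
def decode(n):
    x = n
    for _ in range(n):     # add 1 then negate == bitwise NOT: x -> -x-1
        x = -(x + 1)
    # an even n ends back at n, an odd n ends at -(n+1);
    # either way the result divides by 2 exactly
    return x // 2
```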
Set Theory: The Language, 3+3 = 6 bytes*
Encode
Flags: -iℤ -oℕ
↣ℤℕ
Decode
Flags: -iℕ -oℤ
↣ℕℤ
Explanation
In STTL, everything is represented as a mathematical set. For normal operations to work, they use "universes", a sort of context. The function ↣ "inject" is biversal, i.e. it uses two universes. ℕ refers to the naturals universe and ℤ to the integers universe. So the encoder ↣ℤℕ injects integers onto naturals, which follows the challenge requirement and more importantly the usual ℕ ↔ ℤ bijection in mathematics. ↣ℕℤ is required to form a bijection with ↣ℤℕ, and therefore does the inverse operation (casting a natural to an integer is done with → "convert").
Running
Save the code to files and run sttl encode.sttl -iℤ -oℕ <number> and sttl decode.sttl -iℕ -oℤ <number>. Remember that negative numbers must be entered with ¯ so they aren't mistaken for flags.
Scoring
The flags are purely for nicer I/O; the two programs are standalone. If you feed the decoder the natural-number set representation (defined by 0 → ∅; n + 1 → n ∪ {n}), you get back the integer set representation (a pair of naturals as defined above, interpreted by subtracting their meanings; pairs are defined as (a, b) → {{a}, {a, b}}). The opposite applies to the encoder.
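The set representations in the scoring note can be modeled in Python with frozensets. This is a sketch of one consistent reading; the order of the pair components and the direction of the subtraction are my assumptions:

```python
def nat(n):
    """von Neumann natural: 0 -> the empty set, n+1 -> n ∪ {n}."""
    s = frozenset()
    for _ in range(n):
        s = s | frozenset([s])
    return s

def pair(a, b):
    """Kuratowski pair: (a, b) -> {{a}, {a, b}}."""
    return frozenset([frozenset([a]), frozenset([a, b])])

def as_integer(z):
    """An integer as a pair of naturals whose difference is z."""
    return pair(nat(max(z, 0)), nat(max(-z, 0)))
```

Note that `len(nat(n)) == n`, which is what makes the von Neumann encoding convenient.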
Charcoal, ~~22~~ 18 bytes
Encoder, 10 bytes
NθI⁻⊗↔θ‹θ⁰
Try it online! Link is to verbose version of code. Explanation: Port of @doubleunary and @pajonk's encoders.
Decoder, ~~12~~ 8 bytes
I↨…±N⁰±¹
Try it online! Link is to verbose version of code. Explanation: Port of @att's decoder.
05AB1E, ~~10~~ 8 (4+4) bytes
Encoder:
·D±M
Try it online or verify all test cases.
Decoder:
F±};
-2 bytes porting the decoder of @UnrelatedString's Jelly answer, so make sure to upvote that answer as well!
Try it online or verify all test cases.
Original decoder (6 bytes):
±‚;ʒ.ï
Outputs as a singleton.
Try it online or verify all test cases.
Explanation:
· # Double the (implicit) input-integer
D # Duplicate this
± # Take the bitwise-NOT of the copy: -n-1
M # Push a copy of the maximum value of the stack
# (which is output implicitly as result)
F # Loop the (implicit) input-integer amount of times:
± # Take the bitwise-NOT: -n-1
# (which uses the implicit input-integer in the first iteration)
}; # After the loop: halve the result
# (which is output implicitly as result)
± # Take the bitwise-NOT of the (implicit) input-integer
‚ # Pair it with the (implicit) input-integer
; # Halve each
ʒ # Filter this pair by:
.ï # Is it an integer?
# (after which the resulting singleton is output implicitly)
x86 32-bit machine code, 6 + 7 = 13 bytes
99 D1 E0 31 D0 C3
D1 E8 19 D2 31 D0 C3
Uses the regparm(1) calling convention – argument in EAX, result in EAX.
In assembly:
e: cdq # Sign-extend EAX into EDX:EAX.
# This sets every bit of EDX to the sign bit of EAX.
shl eax, 1 # Shift EAX left by 1 bit.
xor eax, edx # Exclusive-or EAX with EDX.
# This inverts all bits if the sign bit was 1.
# (In two's complement notation, that turns n into -n-1.)
# x becomes 2x if nonnegative, -2x-1 if negative.
ret # Return.
d: shr eax, 1 # Shift EAX right by 1 bit. The low bit goes into CF.
sbb edx, edx # Subtract EDX+CF from EDX, so it becomes -CF.
xor eax, edx # Exclusive-or EAX with EDX.
# This inverts all bits if the low bit was 1.
# 2n becomes n; 2n+1 becomes -n-1.
ret # Return.
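The register-level behavior above can be modeled in Python on 32-bit masked values (function and variable names are mine):

```python
MASK = 0xFFFFFFFF

def encode32(x):                 # cdq; shl eax,1; xor eax,edx; ret
    edx = MASK if x & 0x80000000 else 0   # cdq broadcasts the sign bit
    return ((x << 1) & MASK) ^ edx        # double, then NOT if negative

def decode32(x):                 # shr eax,1; sbb edx,edx; xor eax,edx; ret
    cf = x & 1                   # the low bit falls into the carry flag
    edx = MASK if cf else 0      # sbb edx,edx yields -CF (all ones or zero)
    return (x >> 1) ^ edx        # halve, then NOT if the input was odd

def to_signed(x):
    """Interpret a 32-bit value as two's complement."""
    return x - 0x100000000 if x & 0x80000000 else x
```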
This can be reduced to 10 bytes by providing the two functions together and sharing code, if that is allowed:
D1 E8 EB 01 C0 19 D2 31 D0 C3
The decoding function starts at the beginning, and the encoding function starts after the first 3 bytes.
In assembly:
d: shr eax, 1 # Shift EAX right by 1 bit. The low bit goes into CF.
.byte 0xEB # This byte and the next form JMP with displacement +1,
# jumping to the SBB instruction.
e: .byte 0x01 # This byte and the next form 'add eax, eax'.
.byte 0xC0 # The value of EAX is doubled. The high bit goes into CF.
sbb edx, edx # Subtract EDX+CF from EDX, so it becomes -CF.
xor eax, edx # Exclusive-or EAX with EDX.
ret # Return.
Go, 107 bytes
func e(a int)int{o:=a*2;if a<0{o=(-a)*2-1};return o}
func d(a int)int{o:=a/2;if a%2>0{o=-(a+1)/2};return o}
e encodes, and d decodes.
R, ~~36~~ 34 bytes
Encoder, 18 bytes
\(n)abs(n)*2-(n<0)
Or
\(n)2*max(n,-n-.5)
Decoder, ~~18~~ 16 bytes
\(n)n%/%2-n%%2*n
Python, encoder 22 bytes, decoder 19 bytes
e=lambda x:max(x+x,~x-x)
d=lambda x:x//2-x%2*x
Python, encoder 22 bytes, decoder 23 bytes
e=lambda x:max(x+x,~x-x)
d=lambda x:[x,-x][x&1]>>1
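Both variants can be sanity-checked with a round-trip over a small range, for example:

```python
e = lambda x: max(x + x, ~x - x)      # shared encoder: max(2x, -2x-1)
d1 = lambda x: x // 2 - x % 2 * x     # 19-byte decoder
d2 = lambda x: [x, -x][x & 1] >> 1    # 23-byte decoder (arithmetic shift)

for n in range(-100, 101):
    assert d1(e(n)) == n and d2(e(n)) == n

# the encoder hits each natural number exactly once (it's a bijection)
assert sorted(e(n) for n in range(-50, 50)) == list(range(100))
```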
Jelly, 4 + ~~6~~ ~~5~~ 3 = 7 bytes
Encoder:
Ḥ»~$
Ḥ Double the input.
» Take the maximum of that and
$ its
~ bitwise NOT. (-1-n)
Decoder:
~¡H
-1 and inspiration for another -1 thanks to Jonathan Allan
Full program only. Try it modified for a test harness!
~ Bitwise NOT the input
¡ a number of times equal to itself.
H Halve that result.
Japt, ~~8~~ ~~7~~ 5 + 5 = ~~13~~ ~~12~~ 10 bytes
Encoder
Ñ
w~U
Try it (includes all test cases)
Ñ\nw~U :Implicit input of integer U
Ñ :Multiply by 2
\n :Reassign to U
w :Maximum with
~U : Bitwise NOT of U
Decoder
Ï¶U}c
Try it (includes all test cases)
Ï¶U}c :Implicit input of integer U
Ï :Function taking a 0-based iteration index as argument
¶U : Is equal to U
} :End function
c :Get the first integer from the sequence [0,-1,1,-2,2,-3,3,...] that returns true
APL+WIN, 13 + 17 = 30 bytes
Index origin = 0
Encoder:- Prompts for integer
(0⌊×n)+2×|n←⎕
Try it online! Thanks to Dyalog Classic
Decoder:- Prompts for integer
1 ¯1[2|n]×⌈.5×n←⎕
Google Sheets, 17 + 24 = 41 bytes
Encoder
=2*abs(A1)-(A1<0)
Decoder
=int(B1/(2-4*isodd(B1)))
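Sheets' INT rounds down (toward negative infinity), which is what makes the decoder work; in Python terms (my translation):

```python
import math

def encode(n):                       # =2*abs(A1)-(A1<0)
    return 2 * abs(n) - (n < 0)

def decode(n):                       # =int(B1/(2-4*isodd(B1)))
    return math.floor(n / (2 - 4 * (n % 2)))   # divide by 2 or -2, then floor
```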
