| Bytes | Language | Time | Author |
|---|---|---|---|
| 28 | sh + coreutils | 220525T154941Z | matteo_c |
| 33 | APL (NARS) | 250906T092334Z | Rosario |
| 3 | Thunno 2 B | 230816T075040Z | The Thon |
| 15 | J | 221112T004911Z | naffetS |
| 9 | Pyth | 221111T210003Z | hakr14 |
| 6.585 | Fig | 221111T183730Z | Seggan |
| 21 | ><> | 221110T132651Z | mousetai |
| 32 | Knight | 220803T045904Z | Bubbler |
| 47 | Knight | 220802T221727Z | naffetS |
| 17 | Desmos | 220525T165707Z | naffetS |
| 39 | Red | 220612T051736Z | chunes |
| 10 | K (ngn/k) | 220611T155733Z | coltim |
| 26 | J-uby | 220602T052810Z | Razetime |
| 20 | StackCell (u32) | 220601T191429Z | Starwort |
| 23 | Factor | 220525T090139Z | chunes |
| 57 | Java 8 | 220525T075141Z | Kevin Cr |
| 60 | C (gcc) | 220525T153227Z | matteo_c |
| 49 | Rust | 220526T024835Z | Peter Co |
| 77 | JavaScript (ES6) | 220526T054358Z | liam-mil |
| 29 | C (clang) | 220525T134151Z | Noodle9 |
| 54 | Python 3 | 220525T062044Z | Unrelate |
| 26 | Ruby | 220525T072102Z | dingledo |
| 17 | BQN | 220525T163810Z | Dominic |
| 22 | Perl 5 + -p | 220525T143156Z | Dom Hast |
| 7 | x86 32-bit machine code | 220525T132403Z | m90 |
| 13 | Burlesque | 220525T104712Z | DeathInc |
| 5 | Vyxal s | 220525T060921Z | emanresu |
| 50 | R | 220525T082113Z | Dominic |
| 6 | MathGolf | 220525T085237Z | Kevin Cr |
| 6 | 05AB1E | 220525T070518Z | Kevin Cr |
| 107 | GeoGebra | 220525T073747Z | Aiden Ch |
| 32 | JavaScript (Node.js) | 220525T063610Z | Arnauld |
| 33 | Retina 0.8.2 | 220525T073948Z | Neil |
| 12 | Charcoal | 220525T072518Z | Neil |
| 13 | APL (dzaima/APL) | 220525T065906Z | Adá |
| 7 | Jelly | 220525T064003Z | Unrelate |
| 11 | Husk | 220525T062556Z | Dominic |
APL (NARS), 33 chars
{⎕AV[1+16⊥¨⌽¨{16 16⊤⍵}¨¯1+⎕AV⍳⍵]}
test:
f←{⎕AV[1+16⊥¨⌽¨{16 16⊤⍵}¨¯1+⎕AV⍳⍵]}
f 'debug'
FV&Wv
f 'bcd'
&6F
f '234'
#3C
f '7W66V77'
success
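For reference, the transformation all of these answers implement can be sketched in ungolfed Python (the function name is mine), checked against the same test cases:

```python
def reverse_hex_cipher(s):
    # For each character: format its code point as two hex digits,
    # swap the digits, and convert back to a character.
    return ''.join(chr(int(format(ord(c), '02x')[::-1], 16)) for c in s)

assert reverse_hex_cipher('debug') == 'FV&Wv'
assert reverse_hex_cipher('bcd') == '&6F'
assert reverse_hex_cipher('234') == '#3C'
assert reverse_hex_cipher('7W66V77') == 'success'
```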
Thunno 2 B, 3 bytes
ḤṃH
Explanation
ḤṃH # Implicit input
# Implicit cast to ordinals
Ḥ # Convert each to hexadecimal
ṃ # Reverse each string
H # Convert each from hexadecimal
# Implicit cast to characters
# Implicit output
Pyth, 9 bytes
mCi_.Hd16
Outputs a list of characters.
Explanation:
mCi_.Hd16 | Full program
mCi_.Hd16Q | with implicit variables
-----------+-----------------------------------
m Q | For each character d of the input,
.Hd | Convert to hexstring
_ | Reverse
i 16 | Convert from base 16
C | Convert to character
Fig, \$8\log_{256}(96)\approx 6.585\$ bytes
CmHe$mHC
Input as string, output as list of chars.
CmHe$mHC
C # Charcodes
mH # To hex
e$ # Reverse each
mH # From hex
C # To chars
><>, 22 21 bytes
-1 byte thanks to enigma
i:0(?;:a6+,$a6+:@%*+o
Explanation
a6+ pushes 16 (10 + 6, since a single ><> literal maxes out at 15). Essentially: take each code point, divide it by 16, take it mod 16 and multiply by 16, sum the two, and print.
Knight, 32 bytes
;=xP:Wx;O+A%*16Ax 255"\"=xGx 1Lx
Golfed Steffan's answer to minimize the use of auxiliary variables. y is not necessary if we print each char right away; i is not necessary since A implicitly grabs the 0th char from the current string and we can chop x directly.
Ungolfed:
; = x P x = a line of stdin
: W x While x is nonempty:
; O + A % * 16 A x 255 "\" Output chr(ord(x[0]) * 16 % 255) + "\"
(to suppress the implicit newline ending)
: = x G x 1 L x x = x[1:]
If output of each char separated by newline is allowed:
Knight, 29 bytes
;=xP:Wx;O A%*16Ax 255=xGx 1Lx
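The ord * 16 % 255 trick both Knight solutions rely on works because 256 ≡ 1 (mod 255): writing a code point as c = 16a + b, we get 16c = 256a + 16b ≡ a + 16b (mod 255), which is exactly the nibble swap. A quick Python check over every byte value (255 itself is the lone exception, but it never appears in ASCII input):

```python
# Verify that c*16 % 255 swaps the two hex nibbles of c
# for every byte value below 255.
for c in range(255):
    hi, lo = divmod(c, 16)
    assert c * 16 % 255 == lo * 16 + hi
```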
Knight, 47 bytes
;;;;=xP=y""=i~1W<=i+1iLx=y+yA%*16A Gx i 1 255Oy
Ungolfed & explained:
; = x PROMPT # x = input()
; = y "" # y = ""
; = i ~1 # i = -1
; WHILE (< (= i (+ 1 i)) (LENGTH x)) # while (i = i + 1) < length(x):
: = y (+ y (ASCII (% (* 16 (ASCII (GET x i 1))) 255))) # y = y + chr((ord(x[i]) * 16) % 255)
: OUTPUT y # print(y)
Desmos, 105 17 bytes
f(a)=mod(16a,255)
Input and output are lists of codepoints, because Desmos doesn't support strings.
Red, 39 bytes
func[s][foreach c s[prin c * 16 % 255]]
Based on other similar answers. Thanks to @dingledooper and @UnrelatedString (I think) for discovering it.
K (ngn/k), 10 bytes
`c$16/|16\
16\   treating the string input as integers (corresponding to the ASCII codes), convert each to its base-16 digits
|     reverse the digits
16/   convert back from the base-16 representation
`c$   convert back to a character/string format and (implicitly) return
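The same decompose/reverse/recompose pipeline, written out in Python (the helper name is mine):

```python
def swap_nibbles(c):
    # 16\ : split the code point into its two base-16 digits
    hi, lo = divmod(c, 16)
    # | : reverse the digits; 16/ : recompose from base 16
    return lo * 16 + hi

assert [chr(swap_nibbles(ord(ch))) for ch in 'bcd'] == ['&', '6', 'F']
```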
J-uby, 26 bytes
~:unpack&'H*'|~:pack&'h*'
Packing and unpacking turns out to be the same length as the optimal Ruby solution.
J-uby, 37 bytes
:bytes|:*&(:*&257|~:>>&4)|~:pack&'c*'
J-uby, 52 bytes
:bytes|:*&(~:to_s&16|:reverse|~:to_i&16)|~:pack&'c*'
StackCell (u32), 20
'.[@:'ÿ'␂+*'␐x/'ÿ&;] (Due to an interpreter bug, which I have since fixed, programs were required to contain valid UTF-8, so the 255 bytes had to be written as #FF to run this, losing two bytes; however, by the language's specification this is a well-formed program, so I have not counted the two lost bytes as part of my score. Let me know if I should.)
Explanation:
'.[@:'ÿ'␂+*'␐x/'ÿ&;]
'. Push a non-null byte (46) to the stack
[ ] Loop until EOF
@: Input a byte (0xXY) and duplicate it
'ÿ'␂+ Push the bytes 255 and 2 to the stack, and add them together
* Multiply the inputted character by 0x101 (-> 0xXYXY)
'␐x Push the byte 16 to the stack and swap it below the multiplied character
/ Divide the character by 16 (-> 0xXYX)
'ÿ& Mask the character with 0xFF (-> 0xYX)
; Print the swapped character
Factor, 29 23 bytes
[ [ 4 8 bitroll ] map ]
Roll each code point 4 bits to the left, wrapping around after 8 bits.
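Rotating an 8-bit value left by 4 is the same nibble swap; since Python integers are unbounded, a sketch has to spell the rotate out with shifts and a mask:

```python
def rotl8_4(c):
    # Rotate an 8-bit value left by 4 bits: the high nibble
    # wraps around to the bottom.
    return ((c << 4) | (c >> 4)) & 0xFF

# Agrees with swapping the two hex digits for every byte value:
for c in range(256):
    hi, lo = divmod(c, 16)
    assert rotl8_4(c) == lo * 16 + hi
```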
Java 8, 116 61 57 bytes
s->s.chars().forEach(c->System.out.printf("%c",c*16%255))
-55 bytes by porting @UnrelatedString's Python answer, so make sure to upvote him/her as well!
-4 bytes thanks to @dingledooper
Input as String (which is mandatory according to the challenge rules, overriding the default I/O rules), but output goes directly to STDOUT.
Explanation:
s-> // Method with String parameter and no return-type
s.chars().forEach(c->// Loop over the input-characters as integers:
System.out.printf( // Print:
"%c", // Converting a codepoint-integer to character:
c*16 // The integer multiplied by 16
%255)) // Modulo-255
With default I/O rules this could have been 21 bytes, by having the I/O as a stream of codepoint integers: s->s.map(c->c*16%255) - Try it online.
Original 116 bytes answer:
s->s.chars().forEach(c->{var t="".format("%02x",c).split("");System.out.print((char)Long.parseLong(t[1]+t[0],16));})
Again input as a String and output directly to STDOUT.
Explanation:
s-> // Method with String parameter and no return-type
s.chars().forEach(c->{// Loop over the input-characters as integers:
var t="".format("%02x",c)
// Convert the integer to a hexadecimal string of two hex-digits
.split(""); // Convert it to a String-array
System.out.print( // Print:
(char) // Cast a long to character:
Long.parseLong( // Convert a string to a long:
t[1]+t[0], // The hex-digits in reversed order
16));}) // Converted from base-16 to base-10
Some notes:
- It loops over the array as integers with for(int c:s), so we can use c directly in the "".format(...,c), instead of looping as characters with for(var c:s), which would require an explicit cast to integer: (int)c.
- Although Long.valueOf is 2 bytes shorter, it returns a Long object, which requires a cast to long before the cast to char ((char)(long)Long.valueOf(...)). So instead we use Long.parseLong, which already returns a primitive long.
C (gcc), 70 66 60 bytes
-7 bytes thanks to @ceilingcat
main(t,v)char**v;{for(++v;t=**v;++*v)putchar(t>>4|t%16<<4);}
Rust, 49 bytes
fn f(s:&mut[u8]){for c in s{*c=c.rotate_left(4)}}
Rust doesn't have a string type for plain-ASCII strings, only UTF-8 with enforcement. An SO answer recommends &[u8] slices for working with ASCII bytes. I realize this is bending the rules, so I did also find sufficient incantations to get Rust to let me do this to the bytes of an actual str primitive type (a slice of which is like a C char* + length). IDK if there's any shorter way to write this, or a way to use compiler options instead of the unsafe{}.
Rust using str strings, 72 bytes
fn h(s:&mut str){unsafe{for c in s.as_bytes_mut(){*c=c.rotate_left(4)}}}
These could apparently be smaller as closures, but I'm just taking baby steps towards learning some Rust. Suggestions welcome.
Expanding ASCII codes to hex strings and then swapping pairs is equivalent to rotating the original integer by 4 bits, swapping its nibbles. Rust is fun for that because it has, as language built-ins, most of the common things CPU instructions can do to integers, instead of needing voodoo that a compiler has to pattern-match back into a rotate or popcount for portable code to run efficiently. The u8 docs include all these operations.
These compile to x86-64 asm that rotates the bytes in an array, as you can see on Godbolt. (Note the rol byte ptr [rcx], 4 in the cleanup loop if the SIMD code isn't easy to follow. (2x shifts and vpternlogd as a bit-blend.) Unfortunately the -C opt-level=1 asm is rather hard to follow, and the opt-level=2 code vectorizes, so I couldn't get just a simple scalar loop to look at more easily. But I'm just using range-for stuff so I don't have to worry about loop bounds.)
Unlike many languages, Rust does not do implicit promotion to wider types for operators like *, or even implicit conversion of integer types on assignment. The *16%255 hack does not save space over .rotate_left(4). I don't know if both sets of () are truly necessary around the as i32 and so on, but I'm pretty sure the as something and as u8 are necessary.
fn g(s:&mut[u8]){for c in s{*c=((*c as i32)*16%255) as u8}}
JavaScript (ES6), 113 107 77 bytes
s=>String.fromCharCode(...[...s].map(c=>c.charCodeAt()).map(c=>c%16*16|c>>4))
77 bytes if we use Unrelated String's method.
107: Slightly better.
s=>String.fromCharCode(...[...s].map(c=>parseInt([...c.charCodeAt().toString(16)].reverse().join(''), 16)))
Original 113 byte answer:
s=>[...s].map(c=>String.fromCharCode(parseInt([...c.charCodeAt().toString(16)].reverse().join(''), 16))).join('')
This is my first time ever posting here. Obviously not the best solution but I wanted to compete.
C (clang), 36 \$\cdots\$ 34 29 bytes
f(*s){for(;*s++=*s*16%255;);}
Saved a byte thanks to ovs!!!
Saved a byte thanks to Juan Ignacio Díaz!!!
Saved 5 bytes thanks to dingledooper!!!
Inputs a pointer to a wide string.
Returns the reverse hex cipher through the input pointer.
Python 3, 55 54 bytes
lambda x:''.join(chr(c%16*16|c>>4)for c in map(ord,x))
Can be shorter if bytestrings are permitted:
Python 3, 38 bytes
lambda x:bytes(c%16*16|c>>4for c in x)
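Both lambdas can be sanity-checked against the challenge's test cases:

```python
f = lambda x: ''.join(chr(c % 16 * 16 | c >> 4) for c in map(ord, x))
g = lambda x: bytes(c % 16 * 16 | c >> 4 for c in x)  # bytestring variant

assert f('debug') == 'FV&Wv'
assert f('7W66V77') == 'success'
assert g(b'234') == b'#3C'
```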
Ruby, 26 bytes
Accepts input from STDIN.
$<.bytes{|c|putc c*257>>4}
Alternative 26-byte solutions:
$<.bytes{|c|putc c*16%255}
$<.bytes{|c|putc c*16.065} # was discovered by @Arnauld
Ruby, 28 bytes
->s{s.unpack('H*').pack'h*'}
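Ruby's 'H*' template reads each byte high nibble first, while 'h*' writes low nibble first, so unpacking with one and packing with the other swaps every byte's nibbles. The same idea in Python, re-pairing the hex digits by hand (the helper name is mine):

```python
def unpack_repack(s):
    # bytes.hex() emits two digits per byte, high nibble first (Ruby's 'H*').
    h = s.encode().hex()
    # Rebuilding each byte with its digit pair swapped mimics Ruby's 'h*'.
    swapped = ''.join(h[i + 1] + h[i] for i in range(0, len(h), 2))
    return bytes.fromhex(swapped).decode('latin-1')

assert unpack_repack('bcd') == '&6F'
```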
BQN, 17 bytes
16(@+⟜⌊÷˜+⊣×|)-⟜@
-⟜@ # subtract null character (@) to get ASCII codepoints
16( ) # now, with this as right arg and 16 as left arg:
| # right arg (codepoints) modulo left arg (16)
× # multiplied by
⊣ # left arg (16)
+ # plus
÷˜ # right arg (codepoints) divided by left arg (16)
⟜⌊ # now use the floor of this
@+ # to add to null character (@) to get ASCII characters
x86 32-bit machine code, 7 bytes
C0 02 04 42 E2 FA C3
Following the fastcall calling convention, this takes the length and address of a string in ECX and EDX, respectively, and modifies the string in place.
In assembly:
f: rol BYTE PTR [edx], 4 # Rotate the character left by 4 bits, swapping its nybbles.
inc edx # Add 1 to EDX, advancing the pointer.
loop f # Subtract 1 from ECX and jump back if it is nonzero.
ret # Return.
Burlesque, 13 bytes
m{**b6<-b6L[}
m{ # Map
** # Codepoint
b6 # To hex
<- # Reverse
b6 # From hex
L[ # To char
}
Vyxal s, 5 bytes
CHRHC
Look ma, no unicode, and it's a palindrome!
C     # To charcodes
 H    # To hexadecimal versions of charcodes
  R   # Reverse each
   H  # Back from hexadecimal
    C # Back to characters
      # (s flag transforms output into string)
R, 50 bytes
function(s,r=utf8ToInt(s))intToUtf8(16*r%%16+r/16)
Much more boring than my previous approach, but 7 bytes shorter.
Previous approach:
R, 57 bytes
function(s)intToUtf8(t(matrix(0:255,16))[utf8ToInt(s)+1])
MathGolf, 6 bytes
Æ$¢x¢$
I guess it could be a palindrome by adding a trailing no-op Æ. ;)
Explanation:
Æ # Loop over the characters of the (implicit) input-string,
# using five characters as inner code-block:
$ # Convert the character to a codepoint-integer
¢ # Convert the integer to a hexadecimal string
x # Reverse it
¢ # Convert it from a hexadecimal string to an integer
$ # Convert it from a codepoint-integer to a character
# (after the loop, implicitly output the entire stack joined together)
05AB1E, 6 bytes
ÇhíHçJ
Try it online or verify all test cases.
Explanation:
Ç # Convert the (implicit) input-string to a list of codepoint integers
h # Convert each integer to a hexadecimal string
í # Reverse each string in the list
H # Convert each string from hexadecimal to a base-10 integer
ç # Convert each codepoint-integer to a character
J # Join them together to a string, since a string I/O that overwrites the
# default I/O ruling is mandatory for this challenge
# (after which it is output implicitly as result)
GeoGebra, 107 bytes
s=""
InputBox(s)
UnicodeToText(Zip(FromBase(Sum(Reverse(Split(ToBase(a,16),{""}))),16),a,TextToUnicode(s)))
To enter this code into GeoGebra, copy/paste it one line at a time, pressing enter after each line. Pasting all of it at once won't work.
Input goes in the Input Box. It was a bit of a challenge to get this working, as GeoGebra has a pretty limited set of tools for working with strings.
Explanation
Zip(...,a,TextToUnicode(s)): For every integer element a in the list of char codes of each character in the input string s:
FromBase(Sum(Reverse(Split(ToBase(a,16),{""}))),16)
ToBase(a,16) Convert a to a base 16 string
Split( ,{""}) Split the string into a list of characters
Reverse( ) Reverse the list
Sum( ) Concatenate the characters into a string
FromBase( ,16) Convert the base 16 string back to base 10
UnicodeToText(...): Convert the list of char codes back to a string
JavaScript (Node.js), 33 32 bytes
Saved 1 byte thanks to @dingledooper
s=>Buffer(s).map(n=>n*257>>4)+''
How?
When map'ing over a Buffer, the updated values are implicitly truncated to bytes, so we can just do n * 257 >> 4 without worrying about the upper nibble.
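To see why this works: 257·c = 256·c + c, so c * 257 >> 4 equals 16·c + ⌊c/16⌋, and truncating to a byte leaves (c mod 16)·16 + ⌊c/16⌋, the nibble swap. A Python check over all byte values:

```python
# The Buffer write keeps only the low 8 bits, modeled by & 0xFF here.
for c in range(256):
    hi, lo = divmod(c, 16)
    assert (c * 257 >> 4) & 0xFF == lo * 16 + hi
```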
Retina 0.8.2, 33 bytes
T`#-'4-7E-GVWgvuet\dTscSCrbRB2`Ro
Try it online! Link includes test cases. Explanation: o substitutes for the transliteration string, and R reverses it, so each listed character gets mapped to the one opposite. Character ranges are used twice to reduce the byte count and once to avoid quoting the E, but as d maps to F, I can't use character ranges for both d and E.
Charcoal, 12 bytes
⭆S℅↨¹⁶⮌↨℅ι¹⁶
Try it online! Link is to verbose version of code. Explanation:
S Input string
⭆ Map over characters and join
ι Current character
℅ ASCII code
↨ Convert to base
¹⁶ Literal integer `16`
⮌ Reversed
↨ Convert from base
¹⁶ Literal integer `16`
℅ ASCII Character
Implicitly print
If both parameters to Base are integers then the second is always the base but if one parameter is an array then the other is the base. This allows the two 16s to be naturally separated in the code thus avoiding an explicit separator.
APL (dzaima/APL), 14 13 bytes
−1 thanks to Unrelated String
Anonymous tacit prefix function.
⊖⍢(0 16⊤⎕UCS)
⊖ flip…
⍢(…) while argument is represented as…
0 16⊤… two-digit hexadecimal representation of…
⎕UCS Universal Character Set code points
Jelly, 7 bytes
⁴ɓObUḅỌ
Not a palindrome, but still sounds kind of goofy.
⁴ɓ      Begin a dyadic chain with 16 as the base argument:
  O       character codes,
   b      convert to base 16,
    U     reverse each,
     ḅ    convert from base 16,
      Ọ   and convert from character codes.
Husk, 11 bytes
m(cB16↔B16c
m(          map over the input:
  c           characters of
   B16        base-16 values of
      ↔       reverse of
       B16    base-16 representation of
          c   character codes of the input.