
><> (--value x), 42 bytes

f0@:@@:@>(?v~~n;
        ^  >$:*$r1+@:@@:@

Try it online!

Computes \$\log_2 \log_{15} x\$; the input number x is given through the --value argument.

This creates the number 15 and squares it until it is larger than x. Then the number of times to square is printed.
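The squaring loop is easy to sketch in Python (a hypothetical port of the logic, not the fish code itself):

```python
def score(x):
    """Square 15 until the result exceeds x; the count of squarings
    is roughly log2(log15(x)), the program's output."""
    v, n = 15, 0
    while v <= x:
        v *= v       # 15 -> 225 -> 50625 -> ...
        n += 1
    return n
```

For example, `score(10**9)` is 3, since 15^8 is the first square in the chain to exceed a billion.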

JavaScript, 100 bytes, \$BB^{-1}(x) \le f(x) \le 100x\$

This program is simultaneously the slowest program and the fastest program for this question. It is the slowest in the sense that for any strictly increasing computable function F, there exist arbitrarily large values of n such that F(P(n)) < n, where P is the function this program computes. It is also the fastest in the sense that for some values of n, P(n) > n, so in a sense P fails to be slower than the identity function! P is not monotonic, but P(n) still approaches infinity as n approaches infinity.

Also, for some small values of n, P(n) might not give a result. This is okay since we only care about the end behavior of the program.

Strategy

It works by:

  1. enumerating all of the Bitwise Cyclic Tag programs by ID number;
  2. running each program for X steps; if the BCT program halts, and its halting time is between X/4 and X/3, we return the associated ID number;
  3. otherwise, continuing to the next program.
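For reference, a standard BCT interpreter can be sketched in Python (this is the conventional append-on-the-right formulation; the golfed program mirrors it with least-significant-digit arithmetic):

```python
def run_bct(program, data, max_steps):
    """Run a Bitwise Cyclic Tag program (a bit string) on a data
    bit string for at most max_steps commands.  Returns the number
    of commands executed before the data string emptied (a halt),
    or None if it never emptied."""
    p = 0                                  # cyclic program pointer
    for step in range(max_steps):
        if not data:
            return step                    # halted: data is empty
        cmd = program[p % len(program)]
        p += 1
        if cmd == '0':
            data = data[1:]                # 0: delete leftmost data bit
        else:
            bit = program[p % len(program)]
            p += 1                         # 1 x: consume the operand too
            if data[0] == '1':
                data += bit                # append x only if leftmost bit is 1
    return None                            # did not halt within max_steps
```

For example, the program `'0'` erases a data string of length 3 in exactly 3 steps, while `'10'` on data `'1'` grows forever.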

Thus, we are essentially calculating a Kolmogorov complexity for a number in terms of halting times. This means that "random" numbers, such as \$6.661337\ldots\times 10^{69420}\$, could have a larger output than much larger numbers.

The program

x=>eval("for(q=i=A=0;q<x&&A^0?1:q/x^3&&(i++,q=A='1');A=+L?A&1?L+A:A:A/10)i+='',L=i[q++%i.length];i")

Here is the ungolfed analysis version, using a while loop:

x=>                                         //function signature
    eval("                                  //golfing eval trick
        q=0                                 //index for program string
        i=0                                 //what BCT we're on
        A=0                                 //data string for BCT
        while(                              //initialize while loop
            q<x                             //q<x - if we haven't exceeded our limit yet
                &&
            A^0                             //and we are still running
            ?1:                             //we continue the while loop
                q/x^3                       //otherwise we check 3<=q/x<4
                  &&                        //normally q==x, so only way for this to be zero is if we halted the BCT on the empty data
                                            //If that expression is 0, we halt everything
                (i++,                       //otherwise we increment our BCT ID. This also un-stringify the i.
                q=A='1')                    //we also reset data string and program pointer
        ) {
            i+=''                           //we stringify i again
            L=i[q++%i.length]               //we set L to be the digit q in the program pointer is looking at in i. We also increment the pointer q.

            A=                              //we manipulate our data string
                +L?                         //check whether L is nonzero
                    A&1?                    //if so, check whether the last digit of A is odd
                        L+A                 //prepend L if A is odd
                        :A                  //do nothing if A is even
                    :A/10                   //L was 0: remove the last digit of A
        };
        i                                   //implicit return our BCT id
    ")

To implement BCT, we used this table of data:

  - 0 - delete the least significant digit
  - Odd #'s - prepend an odd number if the least significant digit is odd
  - Even #'s - prepend an even number if the least significant digit is odd

Thus, 0 corresponds to 0, odd numbers correspond to 11, and even numbers correspond to 10 in BCT.

When dividing by 10, we do have some residue decimal digits. Those are insignificant; we always check A&1 for even/oddness, and we check A^0 to see if A is zero/no active bits or nonzero.

Haskell, 100 bytes, score = \$f_{\varepsilon_0}^{-1}(n)\$

_#[]=0
k#(0:a)=k#a+1
k#(a:b)=(k+1)#(([1..k]>>fst(span(>=a)b)++[a-1])++b)
f n=[k|k<-[0..],1#[k]>n]!!0

How it works

This computes the inverse of a very fast growing function related to Beklemishev's worm game. Its growth rate is comparable to \$f_{\varepsilon_0}\$, where \$f_\alpha\$ is the fast-growing hierarchy and \$\varepsilon_0\$ is the first epsilon number.
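A direct Python port of the golfed Haskell (hypothetical, for experimentation; only tiny inputs terminate in reasonable time):

```python
def worm(k, w):
    """One worm battle, ported from the golfed Haskell '#':
    returns how many zero heads get removed before the worm dies."""
    steps = 0
    while w:
        a, *b = w
        if a == 0:
            w = b                           # a zero head simply drops off
            steps += 1
        else:
            i = 0                           # leading run of b with entries >= a
            while i < len(b) and b[i] >= a:
                i += 1
            w = (b[:i] + [a - 1]) * k + b   # k copies of the run (with a-1) in front
            k += 1
    return steps

def f(n):
    """The answer's f: least k such that worm(1, [k]) exceeds n."""
    k = 0
    while worm(1, [k]) <= n:
        k += 1
    return k
```

Already `worm(1, [3])` takes an astronomically long time, which is exactly why the inverse grows so slowly.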

For comparison with other answers, note that

Python, 100 bytes, \$999 \uparrow^{99999} \text{Score} = \text{Input}\$, using up-arrow notation

f=lambda b,i,n:n/b if i==0 else(0if n<=1else 1+f(b,i,f(b,i-1,n)));print(f(999,99999,int(input(""))))

This is based on extremely iterated logarithms, since most people know that logarithms alone grow pretty slowly.

This program defines a three-argument lambda function f which takes a base b, an iteration count i and an input n. If i == 0, it just returns the input divided by the base. With i == 1, you may recognise the output as \$\text{max}(0, \lfloor \text{log}_b(n)\rfloor)\$. Then, with i == 2, this is equal to \$\text{log*}(n) = \text{max}(0, \lfloor \text{slog}_b(n)\rfloor)\$, where \$\text{slog}\$ is the super logarithm (iteration instead of exponentiation). Then, this program gets user input and returns \$\text{max}(0, \lfloor ^{99999}\text{log}_{999}(n)\rfloor)\$. Here, \$^m\text{log}\$ is logarithm using \$\uparrow^m\$, so \$^2\text{log}\$ is \$\text{slog}\$.
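A hypothetical Python 3 port of the recursion makes the levels visible:

```python
def f(b, i, n):
    """Level-i iterated logarithm: i=0 is n/b, i=1 is ~floor(log_b n),
    i=2 is ~slog_b n, and so on (Python 3 port of the answer)."""
    if i == 0:
        return n / b
    return 0 if n <= 1 else 1 + f(b, i, f(b, i - 1, n))
```

For instance, `f(2, 1, 8)` gives 3 (an ordinary log), while `f(2, 2, 16)` gives 3 because 2↑↑3 = 16.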

Binary Lambda Calculus, 237 bits (under 30 bytes)

-- compute periods of the final row of Laver tables, as in
-- section 3.4 of https://arxiv.org/pdf/1810.00548.pdf
let
  -- Let mx = 2^n - 1. We consider a dual version of the shelf, determined
  -- by a |> mx = a-1 (mod 2^n), which satisfies
  --   0 |> b  = b
  --   a |> 0  = 0
  --   a |> mx = a-1
  --   a |> b  = (a |> b+1) |> a-1
  -- Note that the latter two equations iterate f_a = \x. x |> (a-1).
  -- In fact, for a > 0, we implement a |> b = f_a^{mx-b} (a-1) as f_a^{mx*b} 0,
  -- exploiting the fact that a |> mx = a-1 = 0 |> a-1 = f_a 0
  -- and (modulo 2^n) 1+mx-b = -b = -1 * b = mx*b

  0 = \f\x. x;
  1 = \x. x;
  2 = \f\x. f (f x);
  succ = \n\f\x. f (n f x);
  pred = \n\f\x. n (\g\h. h (g f)) (\h. x) (\x. x);
  laver = \mx.
    -- swap argument order to simplify iterated laver
    let laver = \b\a. a (\_. mx (b (laver (pred a))) 0) b
    in laver;
  dblp1 = \n\f\x. n f (n f (f x)); -- map n to 2*n+1
  find0 = \mx\i.laver mx i mx (\_.find0 mx (succ i)) i;
  period = \n.let mx = pred (n 2) in find0 mx 1
in
  period

compiles down to the 237 bits

010000010101000110100000000101010101000110100000000101100001011111110011110010111110111100111111111101100000101101011000010101111101111011100000011100101111101101010011100110000001110011101000100000000101011110000001100111011110001100010

This function grows to infinity assuming the large cardinal axiom I3, which I believe is what Peter Taylor was hinting at. It grows way slower than any sort of inverse Ackermann.
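For experimentation, the classical (non-dual) Laver table and the period of its first row can be sketched in Python; the BLC program uses a dual shelf and the final row, but the period structure has the same flavor:

```python
from functools import lru_cache

def laver_period(n):
    """Period of the first row of the Laver table A_n on {1..2**n},
    built from a*1 = a+1 (mod 2**n) and the left self-distributive
    consequence a*(b+1) = (a*b)*(a+1)."""
    N = 2 ** n

    @lru_cache(maxsize=None)
    def star(a, b):
        if b == 1:
            return a % N + 1          # a*1 = a+1, wrapping N back to 1
        return star(star(a, b - 1), a % N + 1)

    row = [star(1, b) for b in range(1, N + 1)]
    for k in range(n + 1):            # the period is always a power of 2
        p = 2 ** k
        if row == row[:p] * (N // p):
            return p
```

The first few periods are 1, 1, 2, 4, ...; under I3 this sequence tends to infinity, but unimaginably slowly.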

Ruby, 100 bytes, \$\text{score}^{-1} = f_{\omega^\omega+1}(n^2)\$

Basically borrowed from my Largest Number Printable, here's my program:

->k{n=0;n+=1 until(H=->z,a=[0]*z{b,*c=a;z.times{z+=b ?H[z,b==1?c:[b>1?b-1:z]*z+c]:z};z};H[n*n]>k);n}

Try it online

Basically computes the inverse of \$f_{\omega^\omega+1}(n^2)\$ in the fast growing hierarchy. The first few values are

x[0] = 1
x[1] = 1
x[2] = 1
x[3] = 1
x[4] = 2

And then it continues to output 2 for a very long time. Even x[G] = 2, where G is Graham's number.

Haskell, 100 bytes

f 0 a b=a^b
f c a b=foldr(f$c-1)a$[0..b]>>[a]
i=length.show
0#x=i x
y#x=i$(y-1)#x
g=(f(f 9 9 9)9 9#)

Try it online!

This solution does not take the inverse of a quickly growing function; instead it takes a rather slowly growing function, in this case length.show, and applies it a bunch of times.

First we define a function f. f is a bastard version of Knuth's uparrow notation that grows slightly faster (slightly is a bit of an understatement, but the numbers we are dealing with are so large that in the grand scheme of things...). We define the base case of f 0 a b to be a^b or a to the power of b. We then define the general case to be (f$c-1) applied to b+2 instances of a. If we were defining a Knuth uparrow notation like construct we would apply it to b instances of a, but b+2 is actually golfier and has the advantage of being faster growing.

We then define the operator #. a#b is defined to be length.show applied to b a times. Every application of length.show is an approximately equal to log10, which is not a very swiftly growing function.
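A hypothetical Python analogue of # shows how tame each step is:

```python
def i(x):
    """Analogue of Haskell's length.show: count of decimal digits,
    roughly floor(log10 x) + 1."""
    return len(str(x))

def hash_op(y, x):
    """y # x from the answer: i applied y+1 times to x
    (the Haskell base case 0 # x is one application)."""
    for _ in range(y + 1):
        x = i(x)
    return x
```

For example, a googol collapses to 101 digits after one application and to a single digit after three.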

We then go about defining our function g, which takes an integer and applies length.show to it a bunch of times. To be specific, it applies length.show to the input f(f 9 9 9)9 9 times. Before we get into how large this is, let's look at f 9 9 9. f 9 9 9 is greater than 9↑↑↑↑↑↑↑↑↑9 (nine arrows), by a massive margin. I believe it is somewhere between 9↑↑↑↑↑↑↑↑↑9 (nine arrows) and 9↑↑↑↑↑↑↑↑↑↑9 (ten arrows). Now this is an unimaginably large number, far too big to be stored on any computer in existence (in binary notation). We then take that and put it as the first argument of our f, which means our value is larger than 9↑↑↑↑↑↑...↑↑↑↑↑↑9 with f 9 9 9 arrows in between. I'm not going to describe this number, because it is so large I don't think I would be able to do it justice.

Each length.show is approximately equal to taking the log base 10 of the integer. This means that most numbers will return 1 when g is applied to them. The smallest number to return something other than 1 is 10↑↑(f(f 9 9 9)9 9), which returns 2. Let's think about that for a moment. As abominably large as the number we defined earlier is, the smallest number that returns 2 is 10 to its own power that many times. That's 1 followed by 10↑↑(f(f 9 9 9)9 9 - 1) zeros.

For the general case of n the smallest input outputting any given n must be (10↑(n-1))↑↑(f(f 9 9 9)9 9).

Note that this program requires massive amounts of time and memory for even small n (more than there is in the universe, many times over). If you want to test this, I suggest replacing f(f 9 9 9)9 9 with a much smaller number; try 1 or 2 if you want to ever get any output other than 1.

MATL, 42 bytes

iXI:`2*.]X{oXH1H/16L+XKxI:`Yl.]K+XKXdXzXGx

Try it online!

This program is based on the harmonic series, using the Euler–Mascheroni constant. As I was reading @LuisMendo's documentation on his MATL language (with capital letters, so it looks important) I noticed this constant. The slowly growing function is

\$H_k = \ln k + \gamma + \varepsilon_k\$

where \$\varepsilon_k \sim 1/(2k)\$.

I tested up to 10000 iterations (in Matlab, since it is too large for TIO) and it scores below 10, so it is very slow.
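The partial harmonic sums can be checked with a few lines of Python (a sketch, not the MATL program):

```python
import math

def H(n):
    """Partial harmonic sum 1 + 1/2 + ... + 1/n, which grows like
    ln(n) + gamma, with gamma ~ 0.5772 (Euler-Mascheroni)."""
    return sum(1 / k for k in range(1, n + 1))

# H(10000) stays just below 10, matching the 10000-iteration experiment
```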


Explanations:

 iXI      % ask user input (number of iterations)

:`2*.]    % do...while loop, multiply by 2

X{        % convert numeric array into cell array

o         % convert to double precision array 

XH1H/     % copy to clipboard H and divide 1 by it: now we have an array of 1/2^k

16L       % Euler–Mascheroni constant 

+         % addition (element-wise, singleton expansion)

XKxI:`    % save, clear the stack, do...while loop again

  Yl      % logarithm 

  .]      % break, ends the loop

K+XK      % paste from clipboard K, sum all

Xd        % trim: keep the diagonal of the matrix 

Xz        % remove all zeros

XG        % plot (yes it plots on-line too!)

x         % clear the stack
          % (implicit) display

Empirical proof: \$(\ln k) + 1\$ in red always stays above \$\ln k + \gamma + \varepsilon_k\$ in blue.


The program for (ln k) + 1 was made in

Matlab, 47 18 14 bytes

n=input('')
A=(1:n)
for k=1:n
A(k)=log(k)+1
end

Interesting to note that the elapsed time for n=100 is 0.208693s on my laptop, but only 0.121945s with d=rand(1,n);A=d*0; and even less, 0.112147s with A=zeros(1,n). If zeros is a waste of space, it saves speed! But I am diverging from the topic (probably very slowly).

Edit: thanks to Stewie for helping reduce this Matlab expression to, simply:

 @(n)log(1:n)+1

Javascript (ES6), 94 bytes

(p=j=>i=>h=>g=>f=>x=>x<2?0:1+p(p(j))(j(i))(i(h))(h(g))(g(f))(f(x)))(_=x=>x)(_)(_)(_)(Math.log)

Explanation:

Id refers to x => x in the following.

Let's first take a look at:

p = f => x => x < 2 ? 0 : 1 + p(p(f))(f(x))

p(Math.log) is approximately equal to log*(x).

p(p(Math.log)) is approximately equal to log**(x) (number of times you can take log* until the value is at most 1).

p(p(p(Math.log))) is approximately equal to log***(x).

The inverse Ackermann function alpha(x) is approximately equal to the minimum number of times you need to compose p until the value is at most 1.
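The base building block, plain log*, can be sketched in Python (a hypothetical helper; the answer's p actually escalates its function argument at each step, so it grows a bit slower still):

```python
import math

def log_star(x):
    """Iterated-logarithm count: how many times ln can be applied
    before the value drops to at most 1."""
    n = 0
    while x > 1:
        x = math.log(x)
        n += 1
    return n
```

For example, `log_star(65536)` is 3: ln 65536 ≈ 11.1, ln of that ≈ 2.4, and one more application lands below 1.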

If we then use:

p = g => f => x => x < 2 ? 0 : 1 + p(p(g))(g(f))(f(x))

then we can write alpha = p(Id)(Math.log).

That's pretty boring, however, so let's increase the number of levels:

p = h => g => f => x => x < 2 ? 0 : 1 + p(p(h))(h(g))(g(f))(f(x))

This is like how we constructed alpha(x), except instead of doing log**...**(x), we now do alpha**...**(x).

Why stop here though?

p = i => h => g => f => x => x < 2 ? 0 : 1 + p(p(i))(i(h))(h(g))(g(f))(f(x))

If the previous function is f(x)~alpha**...**(x), this one is now ~ f**...**(x). We do one more level of this to get our final solution.

Brachylog, 100 bytes

llllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllll

Try it online!

This is probably nowhere near the slowness of some other fancy answers, but I couldn't believe no one had tried this simple and beautiful approach.

Simply, we compute the length of the input number, then the length of this result, then the length of this other result... 100 times in total.

This grows as fast as log(log(log(...log(x)...))), with 100 base-10 logs.

If you input your number as a string, this will run extremely fast on any input you could try, but don't expect to ever see a result higher than 1 :D

Clojure, 91 bytes

(def f #(apply +(for[n(range %)](/(loop[r 1 v n](if(< v 1)r(recur(* r v)(Math/log v))))))))

Kind of calculates the sum 1/(n * log(n) * log(log(n)) * ...), which I found from here. But the function ended up 101 bytes long so I had to drop the explicit number of iterations, and instead iterate as long as the number is greater than one. Example outputs for inputs of 10^i:

0 1
1 3.3851305685279143
2 3.9960532565317575
3 4.232195089969394
4 4.370995106860574
5 4.466762285601703
6 4.53872567524327
7 4.595525574477128
8 4.640390570825608
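These values can be reproduced with a hypothetical Python port of the loop:

```python
import math

def f(x):
    """Sum over n < x of 1 / (n * ln n * ln ln n * ...), multiplying
    factors until the running value drops below 1 (port of the
    Clojure loop/recur)."""
    total = 0.0
    for n in range(x):
        r, v = 1.0, n
        while v >= 1:
            r *= v
            v = math.log(v)
        total += 1 / r
    return total
```

`f(10)` reproduces the 3.3851305685279143 in the table above.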

I assume this modified series still diverges, but I do not know how to prove it.

The third series actually requires a googolplex of terms before the partial sums exceed 10.

Pure Evil: Eval

a=lambda x,y:(y<0)*x or eval("a("*9**9**9+"x**.1"+",y-1)"*9**9**9)
print a(input(),9**9**9**9**9)//1

The statement inside the eval creates a string of length \$7\cdot 9^{9^9} + 5 \approx 7\times10^{10^{8.57}}\$ which consists of nothing but more calls to the lambda function, each of which will construct a string of similar length, on and on until eventually y becomes 0. Ostensibly this has the same complexity as the Eschew method below, but rather than relying on if-and-or control logic, it just smashes giant strings together (and the net result is getting more stacks... probably?).
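Shrinking the repeat count from 9**9**9 down to 2 makes the string-smashing visible:

```python
# the eval string is just "a(" repeated, the seed, then ",y-1)" repeated
body = "a(" * 2 + "x**.1" + ",y-1)" * 2
print(body)  # a(a(x**.1,y-1),y-1)
```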

The largest y value I can supply and compute without Python throwing an error is 2 which is already sufficient to reduce an input of max-float into returning 1.

A string of length 7,625,597,484,987 is just too big: OverflowError: cannot fit 'long' into an index-sized integer.

I should stop.

Eschew Math.log: Going to the (10th-)root (of the problem), Score: function effectively indistinguishable from y=1.

Importing the math library is restricting the byte count. Let's do away with that and replace the log(x) function with something roughly equivalent: x**.1, which costs approximately the same number of characters but doesn't require the import. Both functions have sublinear output with respect to input, but x^0.1 grows even more slowly. However, we don't care a whole lot; we only care that it has the same base growth pattern with respect to large numbers while consuming a comparable number of characters (e.g. x**.9 is the same number of characters but grows more quickly, so there is some value that would exhibit the exact same growth).

Now, what to do with 16 characters. How about...extending our lambda function to have Ackermann Sequence properties? This answer for large numbers inspired this solution.

a=lambda x,y,z:(z<0)*x or y and a(x**.1,z**z,z-1)or a(x**.1,y-1,z)
print a(input(),9,9**9**9**99)//1

The z**z portion here prevents me from running this function with anywhere close to sane inputs for y and z; the largest values I can use are 9 and 3, for which I get back the value 1.0, even for the largest float Python supports. (Note: while 1.0 is numerically greater than 6.77538853089e-05, increased recursion levels move the output of this function closer to 1, while remaining greater than 1, whereas the previous function moved values closer to 0 while remaining greater than 0; even moderate recursion on this function therefore results in so many operations that the floating point number loses all significant bits.)

Re-configuring the original lambda call to have recursion values of 0 and 2...

>>>1.7976931348623157e+308
1.0000000071

If the comparison is made to "offset from 0" instead of "offset from 1" this function returns 7.1e-9, which is definitely smaller than 6.7e-05.

The actual program's base recursion (the z value) is \$9^{9^{9^{99}}} \approx 10^{10^{10^{94.5}}}\$ levels deep; as soon as y exhausts itself, it gets reset with z**z (which is why an initial value of 9 is sufficient), so I don't even know how to correctly compute the total number of recursions that occur: I've reached the end of my mathematical knowledge. Similarly I don't know whether moving one of the **n exponentiations from the initial input to the secondary z**z would improve the number of recursions or not (ditto the reverse).

Let's go even slower with even more recursion

import math
a=lambda x,y:(y<0)*x or a(a(a(math.log(x+1),y-1),y-1),y-1)
print a(input(),9**9**9e9)//1

New output value (at 9 recursion depth):

>>>1.7976931348623157e+308
6.77538853089e-05

Recursion depth has exploded to the point at which this result is literally meaningless except in comparison to the earlier results using the same input values:

Lambda Inception, score: ???

I heard you like lambdas, so...

from math import*
a=lambda m,x,y:y<0and x or m(m,m(m,log(x+1),y-1),y-1)
print int(a(a,input(),1e99))

I can't even run this, I stack overflow even with a mere 99 layers of recursion.

The old method (below) returns (skipping the conversion to an integer):

>>>1.7976931348623157e+308
0.0909072713593

The new method returns, using only 9 layers of recursion (rather than the full googol of them):

>>>1.7976931348623157e+308
0.00196323936205

I think this works out to be of similar complexity to the Ackermann sequence, only small instead of big.

Also thanks to ETHproductions for a 3-byte savings in spaces I didn't realize could be removed.

Old answer:

The integer truncation of the function log(i+1) iterated 20 25 times (Python) using lambda'd lambdas.

PyRulez's answer can be compressed by introducing a second lambda and stacking it:

from math import *
x=lambda i:log(i+1)
y=lambda i:x(x(x(x(x(i)))))
print int(y(y(y(y(y(input()))))))

99 100 characters used.

This produces an iteration of 20 25, over the original 12. Additionally it saves 2 characters by using int() instead of floor() which allowed for an additional x() stack. If the spaces after the lambda can be removed (I cannot check at the moment) then a 5th y() can be added. Possible!

If there's a way to skip the from math import by using a fully qualified name (eg. x=lambda i: math.log(i+1))), that would save even more characters and allow for another stack of x() but I don't know if Python supports such things (I suspect not). Done!

This is essentially the same trick used in xkcd's blog post on large numbers, however the overhead in declaring lambdas precludes a third stack:

from math import *
x=lambda i:log(i+1)
y=lambda i:x(x(x(i)))
z=lambda i:y(y(y(i)))
print int(z(z(z(input()))))

This is the smallest recursion possible with 3 lambdas that exceeds the computed stack height of 2 lambdas (reducing any lambda to two calls drops the stack height to 18, below that of the 2-lambda version), but unfortunately requires 110 characters.

The floor of the function log(i+1) iterated 14 times (Python)

import math
x=lambda i: math.log(i+1)
print int(x(x(x(x(x(x(x(x(x(x(x(x(x(x(input())))))))))))))))

I don't expect this to do very well, but I figured it's a good start.


JavaScript (ES6), Inverse Ackermann Function*, 97 bytes

*if I did it right

A=(m,n)=>m?A(m-1,n?A(m,n-1):1):n+1
a=(m,n=m,i=1)=>{while(A(i,m/n|0)<=Math.log2(n))i++;return i-1}

Function A is the Ackermann function. Function a is supposed to be the Inverse Ackermann function. If I implemented it correctly, Wikipedia says that it will not hit 5 until m equals 2^2^2^2^16. I get a StackOverflow around 1000.
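A hypothetical Python port of both functions (here A recurses shallowly, because floor(m/n) is just 1 when n starts at m):

```python
import math

def A(m, n):
    """The Ackermann function, mirroring function A in the answer."""
    if m == 0:
        return n + 1
    if n == 0:
        return A(m - 1, 1)
    return A(m - 1, A(m, n - 1))

def a(m):
    """Port of the answer's inverse: grow i while A(i, floor(m/n))
    stays within log2(n), with n preinitialized to m."""
    n, i = m, 1
    while A(i, m // n) <= math.log2(n):
        i += 1
    return i - 1
```

For example, `A(3, 3)` is 61, and `a(1000)` returns 2 since A(3, 1) = 13 already exceeds log2(1000) ≈ 9.97.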

Usage:

console.log(a(1000))

Explanations:

Ackermann Function

A=(m,n)=>                           Function A with parameters m and n
         m?                   :n+1  If m == 0, return n + 1; else,
           A(m-1,n?        :1)       If n == 0, return A(m-1,1); else,
                   A(m,n-1)          return A(m-1,A(m,n-1))

Inverse Ackermann Function

a=(m,n=m,i=1)=>{                                                Function a with parameter m, with n preinitialized to m and i preinitialized to 1
                while(A(i,m/n|0)<=Math.log2(n))                 While the result of A(i, floor(m/n)) is less than log₂ n,
                                               i++;             Increment i
                                                   return i-1}  Return i-1

Mathematica, 99 bytes

(assuming ± takes 1 byte)

0±x_=1±(x-1);y_±0=y+1;x_±y_:=(y-1)±x±(x-1);(i=0;NestWhile[(++i;#±#±#±#±#±#±#±#)&,1,#<k&/.k->#];i)&

The first 3 commands define x±y to evaluate Ackermann(y, x).

The result of the function is the number of times f(#)=#±#±#±#±#±#±#±# needs to be applied to 1 before the value reaches the value of the parameter. Since f(#)=#±#±#±#±#±#±#±# (that is, f(#)=Ackermann[Ackermann[Ackermann[Ackermann[Ackermann[Ackermann[Ackermann[#, #], #], #], #], #], #], #]) grows very fast, the function grows very slowly.

APL, Apply log(n + 1), e^9^9...^9 times, where the length of the chain is e^9^9...^9 of length of chain minus 1 times, and so on.

⌊((⍟1+⊢)⍣((*⍣(*⍣(*⍣(*⍣(*⍣(*⍣(*⍣(*⍣(*⍣(*⍣(*⍣(*⍣(*⍣(*⍣(*⍣(*⍣(*⍣(*⍣(*⍣(*⍣(*⍣⊢)))))))))))))))))))))9))⊢

Python 3, 100 bytes

The floor of the function log(i+1) iterated 99999999999999999999999999999999999 times.

One can use exponents to make the above number even larger...

from math import *
s=input()
exec("s=log(s+1);"*99999999999999999999999999999999999)
print(floor(s))

Try it online!