Bytes | Lang                    | Time           | Link
072   | Bash+parallel+coreutils | 250819T121039Z | Toby Spe
215   | Go                      | 240329T141908Z | bigyihsu
026   | Uiua                    | 240329T124418Z | noodle p
081   | Javascript              | 240329T061834Z | l4m2
048   | Julia p 9               | 210827T103240Z | MarcMush
088   | PowerShell Core         | 210824T214400Z | Julian
234   | Rust                    | 210825T062730Z | Ritwin
nan   | C90 OpenMP              | 160618T010338Z | dj0wns
070   | Matlab                  | 160619T130821Z | Sanchise
021   | Dyalog APL              | 160616T203844Z | marinus
357   | c++                     | 170414T074817Z | jdt
131   | C#                      | 160617T153822Z | AXMIM
457   | Common Lisp Lispworks   | 160718T045533Z | sadfaf
nan   | Clojure                 | 160617T195155Z | Chris F
101   | Perl                    | 160620T144224Z | primo
166   | Common Lisp SBCL        | 160621T181019Z | Jason
116   | Ruby with parallel gem  | 160616T143417Z | Gosha U.
127   | C                       | 160617T082117Z | nneonneo
071   | Perl 6                  | 160618T222401Z | Hotkeys
164   | Scratch                 | 160618T210907Z | Scimonst
132   | Python 2                | 160618T120955Z | moooeeee
172   | Python 2                | 160616T093324Z | user4594
246   | Haskell                 | 160617T132734Z | Koterpil
130   | Python 2                | 160617T195254Z | nneonneo
678   | R + Snowfall            | 160617T153648Z | JDL
168   | Elixir                  | 160617T152359Z | Candy Gu
nan   | Javascript ES6          | 160617T021924Z | Ismael M
143   | Groovy                  | 160616T162309Z | manatwor
nan   | C with pthreads         | 160617T000348Z | bodqhroh
313   | Java                    | 160616T123236Z | user9023
105   | Javascript ES6          | 160617T025737Z | Andrew
nan   | Javascript ES6          | 160616T170932Z | Ismael M
nan   | JavaScript ES6          | 160616T203425Z | bodqhroh
109   | Mathematica             | 160616T205742Z | LegionMa
148   | JavaScript ES6          | 160616T194153Z | Neil
085   | Bash + GNU utilities    | 160616T172225Z | Digital
092   | Ruby                    | 160616T183219Z | histocra
093   | Bash                    | 160616T085328Z | Julie Pe
189   | Go                      | 160616T150425Z | Rob
144   | PowerShell v4           | 160616T143310Z | AdmBorkB
247   | Rust                    | 160616T141739Z | raggy

Bash+parallel+coreutils, 72 bytes

parallel -j7 time -f%e+ sleep -- 8.{3..9} 2>&1|\time -f%e dc -e0 -f- -ep

The total time is written to the standard output stream, and the elapsed time to the standard error stream.
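The same split can be sketched in Python (an illustrative, hedged translation, not the answer itself): per-thread elapsed times are summed onto standard output while the wall-clock time goes to standard error, with the sleeps shortened to 0.1 s so the sketch runs quickly.

```python
import sys
import threading
import time

waits = []  # per-thread elapsed times (list.append is thread-safe in CPython)

def task():
    s = time.time()
    time.sleep(0.1)  # short stand-in for the answer's 8.x-second sleeps
    waits.append(time.time() - s)

start = time.time()
threads = [threading.Thread(target=task) for _ in range(7)]
for t in threads:
    t.start()
for t in threads:
    t.join()
wall = time.time() - start

print(sum(waits))              # total slept time -> standard output
print(wall, file=sys.stderr)   # elapsed wall time -> standard error
```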

Try it online!

Go, 215 205 bytes

import."time"
type D=Duration
func f()(e,t D){N:=Now
s,c:=N(),make(chan D)
for range 9{go func(C chan D){s:=N();Sleep(7*Second);C<-N().Sub(s)}(c)}
for range 9{select{case a:=<-c:t+=a}}
return N().Sub(s),t}

Attempt This Online!

Spawns 9 threads of 7 seconds each. Returns in the ballpark of 7.000295677s 1m3.002016702s.

Ungolfed Explanation

import."time"
func f()(exec,total Duration){
    start:=Now()                  // start time
    duration:=7*Second            // duration per thread
    c:=make(chan Duration)        // channel to get time elapsed from
    for i:=0;i<9;i++{             // repeat 9 times...
        go func(C chan Duration){ // spawn a thread that...
            s:=Now()              // gets its start time
            Sleep(duration)       // sleeps for the duration
            C<-Now().Sub(s)       // returns the time slept
        }(c)                      // actually run the thread
    }
    for i:=0;i<9;i++{             // for each thread...
        select {                  // wait for...
        case amt:=<-c:            // something to come in on the channel
            total+=amt            // add it to the total slept time
        }
    }
    return Now().Sub(start),total // return elapsed time, and slept time
}

Attempt This Online!

Uiua, 26 bytes

⊃/+/↥wait≡spawn⍜now&sl↯9 7

Spawns 9 threads, each waits 7 seconds.

Explanation:

⊃/+/↥wait≡spawn⍜now&sl↯9 7
                      ↯9 7  # ‎⁡Create a list of 9 7s.
         ≡spawn             # ‎⁢For each of these, spawn a thread which
               ⍜now         # ‎⁣  returns the time taken to
                   &sl      # ‎⁤  sleep 7 seconds.
     wait                   # ‎⁢⁡Wait for all threads to finish.
⊃/+/↥                       # ‎⁢⁢Push the maximum and the sum of this list.

Created with the help of Luminespire.
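A rough Python rendering of the same shape, for comparison (a hedged sketch: a thread pool stands in for ≡spawn, and the 7-second sleeps are scaled down to 0.1 s):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def sleeper(d):
    s = time.perf_counter()
    time.sleep(d)                    # scaled-down stand-in for `&sl 7`
    return time.perf_counter() - s   # like ⍜now: how long the sleep took

with ThreadPoolExecutor(max_workers=9) as pool:
    waits = list(pool.map(sleeper, [0.1] * 9))  # ↯9 7: nine durations, one per thread

print(max(waits), sum(waits))        # ⊃/+/↥: the maximum (≈ elapsed) and the sum
```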

Javascript, 81 bytes

for(M=(N=Date.now)(T=Y=i=0);i<350;)setTimeout("T+=k=N()-M,--i||alert([k,T])",++i)

Based on Ismael Miguel's answer.

Julia -p 9, 48 bytes

@time show(sum(pmap(_->@elapsed(sleep(7)),1:9)))

Try it online!

PowerShell Core, 88 bytes

$u=date
1..9|% -Pa{$d=date
sleep 7
(date)-$d|% T*ls*}-Th 9|measure -su
(date)-$u|% T*ls*

Explained

$u=date                              # Execution start time
1..9|% -Pa{$d=date                   # Starts 9 threads
sleep 7                              # Sleeps for 7 seconds
(date)-$d|% T*ls*}-Th 9|measure -su  # Gets the (T)ota(ls)econds for the current thread and sums them
(date)-$u|% T*ls*                    # Gets the script time in seconds

Sample output

Shows a total wait time of 63.1528189 seconds and a total run time of 7.1531403 seconds.

Rust, 237 234 bytes

This is basically the same as @raggy's answer, just improved a bit.

use std::thread::*;fn main(){let n=std::time::Instant::now;let i=n();let mut t=i-i;for x in(0..8).map(|_|spawn(move||{let i=n();sleep_ms(9000);i.elapsed()})).collect::<Vec<_>>(){t+=x.join().unwrap();}print!("{:?}{:?}",t,i.elapsed());}

Ungolfed:

use std::thread::*;
fn main() {
    let n = std::time::Instant::now;
    let i = n();
    let mut t = i-i;
    for x in (0..8).map(|_| spawn(move || {
            let i = n();
            sleep_ms(9000);
            i.elapsed()
        })).collect::<Vec<_>>() {
            t += x.join().unwrap();
    }
    print!("{:?}{:?}", t, i.elapsed());
}

Improvements:

use std::thread::*;

at the beginning saves 7 bytes.

Not creating a new variable for a Vec of threads saves 5 bytes.

Removing the space in for x in (0..8) saves 1 byte.

The .collect::<Vec<_>>() hurts to look at though, but I can't think of a way to remove it because iterators are lazy in rust (so it won't even start the threads if we simply remove that part).

C90 (OpenMP), 131 Bytes (+ 17 for env variable) = 148 Bytes

#include <omp.h>
#define o omp_get_wtime()
n[4];main(t){t=o;
#pragma omp parallel
while(o-9<t);times(n);printf("%d,%f",n[0],o-t);}

Example Output:

7091,9.000014

Try it online!

Notes:

7091 is in clock ticks (100 per second), so the threads accumulated about 70 seconds of CPU time.

This could be much shorter if I figured out a way to get a timer to work other than omp_get_wtime(), because then I could remove the include statement as well.

Run with OMP_NUM_THREADS=9

Matlab, 75 70 bytes

tic;parpool(9);b=1:9;parfor q=b
tic;pause(7);b(q)=toc;end
[sum(b);toc]

5 bytes saved: as it turns out, tic and toc are local to each worker process, so they did not need to be assigned to a variable.

Quick explanation: parfor creates a parallel for-loop, distributed across the pool of workers. tic and toc measure time elapsed (and are in my opinion one of the best named functions in MATLAB). The last line (an array with the total time slept and the real time elapsed) is outputted since it's not terminated with a semicolon.

Note however that this creates a whopping 9 full-fledged MATLAB processes, so chances are that this particular program will not finish within the allotted 10 seconds on your machine. However, a MATLAB installation with no toolboxes other than the Parallel Computing Toolbox, on a high-end system with an SSD, may just be able to finish within 10 seconds. If required, you can tweak the parameters to have fewer processes sleeping longer.

Dyalog APL, 65 27 23 21 bytes

(⌈/,+/)⎕TSYNC⎕DL&¨9/7

I.e.:

      (⌈/,+/)⎕TSYNC⎕DL&¨9/7
7.022 63.162

Explanation:

9/7        replicate: a list of nine 7s
⎕DL&¨      for each, spawn a thread that delays that many seconds; ⎕DL returns the actual delay
⎕TSYNC     wait for all the threads and collect their results
(⌈/,+/)    the maximum (≈ elapsed time) catenated with the sum (total time slept)

Try it online!

c++, 332 358 357 bytes

Thanks Adám!

#include <iostream>
#include <chrono>
#include <thread>
#define n(x)auto x=chrono::steady_clock::now();
using namespace std;double t=0;void f(){n(s)this_thread::sleep_for(chrono::seconds(9));n(e)t+=(e-s).count()/1e9;}int main(){int i;n(s)thread*a[7];for(i=0;i<7;i++)a[i]=new thread(f);for(i=0;i<7;i++)a[i]->join();n(e)cout<<t<<","<<(e-s).count()/1e9<<"\n";}

Try it online

C#, 131 bytes

The following starts 9 threads that each wait 6667 milliseconds.
The program runs in 6.73 seconds and produces the following output: 00:00:06.7116711|604140408
where "604140408" is the number of ticks.
There are 10,000 ticks in a millisecond, so this gives ~60.414 seconds. The total execution time and the wait time don't have the same output format for golfing reasons.

using t=DateTime;var s=t.Now;Task.WhenAll(new t[9].Select(y=>Task.Delay(6667))).Wait();Debug.Write(t.Now-s+"|"+(t.Now-s).Ticks*9);

Here is a fiddle that exceeds the fiddle's execution-time limit. Lower "6667" so that it fits within that limit if you want to run it.
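The tick arithmetic can be checked directly (a quick sketch; 10,000 ticks per millisecond is .NET's TimeSpan definition):

```python
ticks = 604140408          # tick count from the sample output above
TICKS_PER_MS = 10_000      # .NET TimeSpan: 10,000 ticks per millisecond
seconds = ticks / TICKS_PER_MS / 1000
print(seconds)             # ~60.414 seconds of total wait time
```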

Common Lisp (Lispworks), 457 bytes

(defun f(n)(labels((h(n b v)(mp:process-run-function nil nil #'(lambda(b v)(progn(let((s(get-internal-real-time)))(sleep 5)(setf(svref v n)(-(get-internal-real-time)s)))(mp:barrier-wait b :pass-through t)))b v)))(let((s(get-internal-real-time))(e 0)(q 0)(v(make-sequence 'vector n :initial-element 0))(b(mp:make-barrier(1+ n))))(dotimes(i n)(h i b v))(mp:barrier-wait b)(setf e(-(get-internal-real-time)s))(dotimes(p n)(setf q(+ q(svref v p))))(list e q))))

ungolfed:

    (defun f (n-thread)
      (labels ((my-process (process-name n barrier vec)
                 (mp:process-run-function
                  process-name
                  nil
                  #'(lambda (barrier vec)
                      (progn
                        (let ((start-time (get-internal-real-time)))
                          (sleep 5)
                          (setf (svref vec n)
                                (- (get-internal-real-time) start-time)))
                        (mp:barrier-wait barrier :pass-through t)))
                  barrier
                  vec)))

        (let ((total-start-time (get-internal-real-time))
              (total-time 0)
              (sum-per-process-time 0)
              (vector (make-sequence 'vector n-thread :initial-element 0))
              (barrier (mp:make-barrier (1+ n-thread))))
          (dotimes (i n-thread)
            (my-process
             (concatenate 'string "process-" (write-to-string i))
             i
             barrier
             vector))
          (mp:barrier-wait barrier)
          (setf total-time (- (get-internal-real-time) total-start-time))
          (dotimes (p n-thread)
            (setf sum-per-process-time
                  (+ sum-per-process-time (svref vector p))))
          (list total-time sum-per-process-time))))

Usage:

CL-USER 1 > (f 14)
(5028 70280)

Clojure, 135 120 111 109 bytes

(let[t #(System/nanoTime)s(t)f #(-(t)%)][(apply +(pmap #(let[s(t)](Thread/sleep 7e3)%(f s))(range 9)))(f s)])

Formatted version with named variables:

(let [time #(System/nanoTime)
      start (time)
      fmt #(- (time) %)]
  [(apply +
           (pmap #(let [thread-start (time)]
                   (Thread/sleep 7e3)
                   %
                   (fmt thread-start)) (range 9)))
   (fmt start)])

output (in nanoseconds):

[62999772966 7001137032]

Changed format. Thanks Adám, I might have missed that format specification in the question when I read it.

Changed to nanoTime for golfing abilities.

Thanks cliffroot, I totally forgot about scientific notation and can't believe I didn't see apply. I think I used that in something I was golfing yesterday but never posted. You saved me 2 bytes.

Perl, 101 bytes

use Time::HiRes<time sleep>;pipe*1=\time,0;
print time-$1,eval<1>if open-print{fork&fork&fork}-sleep 9

Forks 7 child processes, each of which wait 9 seconds.

Sample Output:

perl wait-one-minute.pl
9.00925707817078-63.001741

Common Lisp (SBCL), 166 bytes

(do((m #1=(get-internal-real-time))(o(list 0)))((>(car o)60000)`(,(car o),(- #1#m)))(sb-thread:make-thread(lambda(&aux(s #1#))(sleep 1)(atomic-incf(car o)(- #1#s)))))

This just spawns threads that sleep and then atomically increment the total time taken, with an outer loop that spins, waiting for the total to exceed 60000 ticks (i.e. 60 s on SBCL). The counter is stored in a list due to limitations on the types of places atomic-incf can modify. This may run out of space before terminating on faster machines.

Ungolfed:

(do ((outer-start (get-internal-real-time))
       (total-inner (list 0)))
      ((> (car total-inner) 60000)
       `(,(car total-inner)
      ,(- (get-internal-real-time) outer-start)))
    (sb-thread:make-thread
     (lambda (&aux(start (get-internal-real-time)))
       (sleep 1)
       (atomic-incf (car total-inner) (- (get-internal-real-time) start)))))
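A rough Python rendering of the same spin-until-done strategy (hedged: a lock stands in for atomic-incf, the counter is boxed in a list just like the Lisp version, and the times are scaled down so the sketch finishes quickly):

```python
import threading
import time

total = [0.0]                 # counter boxed in a list, as in the Lisp version
lock = threading.Lock()       # stands in for atomic-incf
threads = []

def sleeper():
    s = time.time()
    time.sleep(0.05)          # scaled-down stand-in for (sleep 1)
    with lock:
        total[0] += time.time() - s

start = time.time()
while total[0] < 0.3:         # outer loop spins until the total passes the threshold
    t = threading.Thread(target=sleeper)
    t.start()
    threads.append(t)
    time.sleep(0.01)          # throttle; the Lisp version spawns as fast as it can
for t in threads:
    t.join()
print(total[0], time.time() - start)
```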

Ruby (with parallel gem), 123 116 bytes

require'parallel'
n=->{Time.now}
t=n[]
q=0
Parallel.each(1..10,:in_threads=>10){z=n[];sleep 6;q+=n[]-z}
puts n[]-t,q

Edit: Added the "Time.now" reference from the Ruby answer by histocrat.

C, 127 bytes (spins CPU)

This solution spins the CPU instead of sleeping, and counts time using the times POSIX function (which measures CPU time consumed by the parent process and in all waited-for children).

It forks off 7 processes which spin for 9 seconds apiece, and prints out the final times in C clocks (on most systems, 100 clock ticks = 1 second).

t;v[4];main(){fork(fork(fork(t=time(0))));while(time(0)<=t+9);wait(0);wait(0);wait(0)>0&&(times(v),printf("%d,%d",v[0],v[2]));}

Sample output:

906,6347

meaning 9.06 seconds real time and 63.47 seconds total CPU time.

For best results, compile with -std=c90 -m32 (force 32-bit code on a 64-bit machine).
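The same accounting can be sketched in Python with os.fork and os.times (POSIX-only, and hedged: three children spinning 0.2 s each instead of seven spinning for 9 s):

```python
import os
import time

SPIN = 0.2  # CPU-seconds each child burns (the answer spins for ~9 wall-seconds)

start = time.time()
for _ in range(3):
    if os.fork() == 0:
        # child: busy-wait until it has consumed SPIN seconds of CPU, then exit
        base = time.process_time()
        while time.process_time() - base < SPIN:
            pass
        os._exit(0)

for _ in range(3):
    os.wait()   # reaping a child adds its CPU time to the parent's os.times()

u, s, child_u, child_s, _ = os.times()
child_cpu = child_u + child_s
print(time.time() - start, child_cpu)   # real time vs. summed child CPU time
```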

Perl 6, 72 71 bytes

There might be a shorter way to do this

say sum await map {start {sleep 7;now -ENTER now}},^9;say now -INIT now

this outputs

63.00660729694
7.0064013

Scratch - 164 bytes (16 blocks)

when gf clicked
set[t v]to[
repeat(9
  create clone of[s v
end
wait until<(t)>[60
say(join(join(t)[ ])(timer
when I start as a clone
wait(8)secs
change[t v]by(timer

Visual script

See it in action here.

Uses a variable called 't' and a sprite called 's'. The sprite creates clones of itself, each of which waits 8 seconds, and increments a variable clocking the entire wait time. At the end it says the total execution time and the total wait time (for example, 65.488 8.302).

Python 2, 132 bytes

Uses a process pool to spawn 9 processes and let each one sleep for 7 seconds.

import time as t,multiprocessing as m
def f(x):d=s();t.sleep(x);return s()-d
s=t.time
a=s()
print sum(m.Pool(9).map(f,[7]*9)),s()-a

Prints total accumulated sleeptime first, then the actual runtime:

$ python test.py
63.0631158352 7.04391384125

Python 2, 172 bytes

import threading as H,time as T
m=T.time
z=H.Thread
s=m()
r=[]
def f():n=m();T.sleep(9);f.t+=m()-n
f.t=0
exec"r+=[z(None,f)];r[-1].start();"*8
map(z.join,r)
print m()-s,f.t

This requires an OS with time precision greater than 1 second to work properly (in other words, any modern OS). 8 threads are created which sleep for 9 seconds each, resulting in a realtime runtime of ~9 seconds, and a parallel runtime of ~72 seconds.

Though the official documentation says that the Thread constructor should be called with keyword arguments, I throw caution to the wind and use positional arguments anyway. The first argument (group) must be None, and the second argument is the target function.

nneonneo pointed out in the comments that attribute access (e.g. f.t) is shorter than list index access (e.g. t[0]). Unfortunately, in most cases, the few bytes gained from doing this would be lost by needing to create an object that allows user-defined attributes to be created at runtime. Luckily, functions support user-defined attributes at runtime, so I exploit this by saving the total time in the t attribute of f.

Try it online

Thanks to DenkerAffe for -5 bytes with the exec trick.

Thanks to kundor for -7 bytes by pointing out that the thread argument is unnecessary.

Thanks to nneonneo for -7 bytes from miscellaneous improvements.
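Both tricks (positional Thread arguments and a function attribute as the accumulator) read clearly in a small Python 3 sketch; hedged: like the answer itself, the unlocked += is not strictly thread-safe, and the sleeps are scaled down.

```python
import threading
import time

def f():
    n = time.time()
    time.sleep(0.05)
    f.t += time.time() - n    # functions accept arbitrary attributes at runtime

f.t = 0                       # the accumulator lives on the function object itself
# positional Thread arguments: (group, target) -- group must be None
threads = [threading.Thread(None, f) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(f.t)                    # roughly 4 * 0.05 seconds of accumulated sleep
```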

Haskell, 278 271 262 246 bytes

import Control.Concurrent.Chan
import Data.Time
import GHC.Conc
t=getCurrentTime
b!a=b=<<flip diffUTCTime<$>t<*>(a>>t)
w=threadDelay$5^10
0#_=t
i#a=a>>(i-1)#a
main=print!do r<-newChan;9#(forkIO$writeChan r!w);getChanContents r>>=print.sum.take 9

! measures the time taken by action a (second argument) and applies b (first argument) to the result.

w is the sleep function.

main is measured itself, and result printed (print!...).

# is replicateM, repeating the given action N times (and returning t because golfing).

Inside the measured part, 9 threads (replicate 9 $ forkIO ...) sleep for 5^10 microseconds (9.765625 seconds) and post the result (writeChan) to a pipe created by the main thread (newChan), which sums the 9 results up and prints the total (getChanContents >>= print . sum . take 9).

Output:

87.938546708s
9.772032144s
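The Chan plumbing maps closely onto Python's queue.Queue, if that helps make the golf readable (a hedged sketch with 0.1-second sleeps standing in for the 5^10-microsecond ones):

```python
import queue
import threading
import time

q = queue.Queue()             # plays the role of the Chan

def worker():
    s = time.time()
    time.sleep(0.1)           # scaled-down sleep
    q.put(time.time() - s)    # writeChan: post this thread's elapsed time

start = time.time()
for _ in range(9):
    threading.Thread(target=worker).start()

total = sum(q.get() for _ in range(9))   # take 9 results, like `take 9` on the Chan
print(total, time.time() - start)
```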

Python 2, 130 bytes

import thread as H,time as T
m=T.clock;T.z=m()
def f(k):T.sleep(k);T.z+=m()
exec"H.start_new_thread(f,(7,));"*9
f(8);print m(),T.z

This is a derivation of Mego's answer, but it's sufficiently different that I thought it should be a separate answer. It is tested to work on Windows.

Basically, it forks off 9 threads, which sleep for 7 seconds while the parent sleeps for 8. Then it prints out the times. Sample output:

8.00059192923 71.0259046024

On Windows, time.clock measures wall time since the first call.
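For anyone porting this to Python 3: time.clock was removed in Python 3.8, and the closest modern wall-clock analogue is time.perf_counter. A minimal sketch:

```python
import time

start = time.perf_counter()   # monotonic, high-resolution; never goes backwards
time.sleep(0.1)
elapsed = time.perf_counter() - start
print(elapsed)                # roughly 0.1 seconds
```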

R + Snowfall, 67 UTF-8 bytes

library(snowfall)
sfInit(T,8)
sfSapply(1:8,function(j)Sys.sleep(8))

Elixir, 168 bytes

import Task;import Enum;IO.puts elem(:timer.tc(fn->IO.puts(map(map(1..16,fn _->async(fn->:timer.tc(fn->:timer.sleep(4000)end)end)end),&(elem(await(&1),0)))|>sum)end),0)

Sample run:

$ elixir thing.exs
64012846
4007547

The output is the total time waited followed by the time the program has run for, in microseconds.

The program spawns 16 Tasks and awaits each of them by mapping over them, then finds the sum of their elapsed times. It uses Erlang's timer module for measuring time.

Javascript (ES6), 108 92 bytes

I'm making a new answer since this uses a slightly different approach.

It generates a massive amount of setTimeouts, which are almost all executed with 4ms between them.

Each interval is 610 milliseconds, over a total of 99 intervals.

M=(N=Date.now)(T=Y=0),eval('setTimeout("T+=N()-M,--i||alert([N()-M,T])",610);'.repeat(i=99))

It usually runs within 610ms, for a total execution time of around 60.5 seconds.

This was tested on Google Chrome version 51.0.2704.84 m, on windows 8.1 x64.
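The arm-many-timers idea translates to Python's threading.Timer (a hedged sketch, scaled to nine 0.1-second timers; a lock guards the shared totals, a job the original leaves to the single-threaded JS event loop):

```python
import threading
import time

start = time.time()
total = [0.0]
remaining = [9]
done = threading.Event()
lock = threading.Lock()

def fire():
    with lock:
        total[0] += time.time() - start   # each timer adds its own elapsed time
        remaining[0] -= 1
        if remaining[0] == 0:
            done.set()                    # the last timer reports completion

for _ in range(9):
    threading.Timer(0.1, fire).start()    # nine timers armed almost at once
done.wait()
print(time.time() - start, total[0])      # wall ~0.1 s, total ~0.9 s
```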


Old version (108 bytes):

P=performance,M=P.now(T=Y=0),eval('setTimeout("T+=P.now()-M,--i||alert([P.now()-M,T])",610);'.repeat(i=99))

Groovy, 158 143 characters

d={new Date().getTime()}
s=d(j=0)
8.times{Thread.start{b=d(m=1000)
sleep 8*m
synchronized(j){j+=d()-b}}}addShutdownHook{print([(d()-s)/m,j/m])}

Sample run:

bash-4.3$ groovy wait1minute.groovy
[8.031, 64.055]

C (with pthreads), 339 336 335 bytes

#include<stdio.h>
#include<sys/time.h>
#include<pthread.h>
#define d double
d s=0;int i;pthread_t p[14];d t(){struct timeval a;gettimeofday(&a,NULL);return a.tv_sec+a.tv_usec/1e6;}
h(){d b=t();sleep(5);s+=t()-b;}
main(){d g=t();for(i=14;i-->0;)pthread_create(&p[i],0,&h,0);for(i=14;i-->0;)pthread_join(p[i],0);printf("%f %f",t()-g,s);}

Java, 358 343 337 316 313 bytes

import static java.lang.System.*;class t extends Thread{public void run(){long s=nanoTime();try{sleep(999);}catch(Exception e){}t+=nanoTime()-s;}static long t,i,x;public static void main(String[]a)throws Exception{x=nanoTime();for(;++i<99;)new t().start();sleep(9000);out.println((nanoTime()-x)/1e9+" "+t/1e9);}}

and ungolfed

import static java.lang.System.*;

class t extends Thread {
    public void run() {
        long s = nanoTime();
        try {
            sleep(999);
        } catch (Exception e) {
        }
        t += nanoTime() - s;
    }

    static long t,i,x;

    public static void main(String[] a) throws Exception {
        x = nanoTime();
        for (; ++i < 99;)
            new t().start();
        sleep(9000);
        out.println((nanoTime() - x) / 1e9 + " " + t / 1e9);
    }
}



Please don't try this at home, as this solution is not thread safe.

Edit:

I took @A Boschman's and @Adám's suggestions; now my program requires less than 10 seconds to run, and it's shorter by 15 bytes.

Javascript (ES6), 105 bytes

((t,c,d)=>{i=t();while(c--)setTimeout((c,s)=>{d+=t()-s;if(!c)alert([t()-i,d])},8e3,c,t())})(Date.now,8,0)

Updated version, 106 bytes. Borrowed from @Ismael Miguel, who had the great idea to lower the sleep time and raise the number of intervals.

((t,c,d)=>{i=t();while(c--)setTimeout((c,s)=>{d+=t()-s;if(!c)alert([t()-i,d])},610,c,t())})(Date.now,99,0)

Javascript Ungolfed, 167 bytes

(function(t, c, d){
	i = t();
	while(c--){
		setTimeout(function(c, s){
			d += t() - s;
			if (!c) alert([t() - i, d])
		}, 8e3, c, t())
	}
})(Date.now, 8, 0)

Javascript (ES6), 212 203 145 bytes

This code creates 10 images with a time interval of exactly 6 seconds each, upon loading.

The execution time goes a tiny bit above it (due to overhead).

This code overwrites everything in the document!

P=performance,M=P.now(T=Y=0),document.body.innerHTML='<img src=# onerror=setTimeout(`T+=P.now()-M,--i||alert([P.now()-M,T])`,6e3) >'.repeat(i=10)

This assumes that you use a single-byte encoding for the backticks, which is required so that the JavaScript engine doesn't trip over them.


Alternatively, if you don't want to spend 6 seconds waiting, here's a 1-byte-longer solution that finishes in less than a second:

P=performance,M=P.now(T=Y=0),document.body.innerHTML='<img src=# onerror=setTimeout(`T+=P.now()-M,--i||alert([P.now()-M,T])`,600) >'.repeat(i=100)

The difference is that this code waits 600 ms across 100 images. This will add a massive amount of overhead.


Old version (203 bytes):

This code creates 10 iframes with a time interval of exactly 6 seconds each, instead of creating 10 images.

for(P=performance,M=P.now(T=Y=i=0),D=document,X=_=>{T+=_,--i||alert([P.now()-M,T])};i<10;i++)I=D.createElement`iframe`,I.src='javascript:setTimeout(_=>top.X(performance.now()),6e3)',D.body.appendChild(I)


Original version (212 bytes):

P=performance,M=P.now(T=Y=0),D=document,X=_=>{T+=_,Y++>8&&alert([P.now()-M,T])},[...''+1e9].map(_=>{I=D.createElement`iframe`,I.src='javascript:setTimeout(_=>top.X(performance.now()),6e3)',D.body.appendChild(I)})

JavaScript (ES6, using WebWorkers), 233 215 bytes

c=s=0;d=new Date();for(i=14;i-->0;)(new Worker(URL.createObjectURL(new Blob(['a=new Date();setTimeout(()=>postMessage(new Date()-a),5e3)'])))).onmessage=m=>{s+=m.data;if(++c>13)console.log((new Date()-d)/1e3,s/1e3)}

UPD: replaced the way a worker is executed from a string with a more compact and cross-browser approach with respect to cross-origin policies. It won't work in Safari, if it still has the webkitURL object instead of URL, nor in IE.

Mathematica, 109 bytes

a=AbsoluteTiming;LaunchKernels@7;Plus@@@a@ParallelTable[#&@@a@Pause@9,{7},Method->"EvaluationsPerKernel"->1]&

Anonymous function. Requires a license with 7+ sub-kernels to run. Takes 9 seconds realtime and 63 seconds kernel-time, not accounting for overhead. Make sure to only run the preceding statements once (so it doesn't try to re-launch kernels). Testing:

In[1]:= a=AbsoluteTiming;LaunchKernels@7;func=Plus@@@a@ParallelTable[#&@@a@Pause
@9,{7},Method->"EvaluationsPerKernel"->1]&;

In[2]:= func[]

Out[2]= {9.01498, 63.0068}

In[3]:= func[]

Out[3]= {9.01167, 63.0047}

In[4]:= func[]

Out[4]= {9.00587, 63.0051}

JavaScript (ES6), 148 bytes

with(performance)Promise.all([...Array(9)].map(_=>new Promise(r=>setTimeout(_=>r(t+=now()),7e3,t-=now())),t=0,n=now())).then(_=>alert([now()-n,t]));

Promises to wait 9 times for 7 seconds for a total of 63 seconds (actually 63.43 when I try), but only actually takes 7.05 seconds of real time when I try.
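The Promise.all pattern has a direct asyncio analogue (a hedged sketch: asyncio.gather plays the role of Promise.all, with the 7-second sleeps scaled to 0.1 s):

```python
import asyncio
import time

async def nap(d):
    s = time.perf_counter()
    await asyncio.sleep(d)
    return time.perf_counter() - s     # this coroutine's own elapsed time

async def main():
    start = time.perf_counter()
    waits = await asyncio.gather(*(nap(0.1) for _ in range(9)))  # like Promise.all
    return sum(waits), time.perf_counter() - start

total, wall = asyncio.run(main())
print(wall, total)                     # wall ~0.1 s, total ~0.9 s
```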

Bash + GNU utilities, 85

\time -f%e bash -c 'for i in {1..8};{ \time -aoj -f%e sleep 8&};wait'
paste -sd+ j|bc

Forces the use of the time executable instead of the shell builtin by prefixing with a \.

Appends to a file j, which must be empty or non-existent at the start.

Ruby, 92

n=->{Time.now}
t=n[]
a=0
(0..9).map{Thread.new{b=n[];sleep 6;a+=n[]-b}}.map &:join
p n[]-t,a

Bash 196 117 114 93 bytes

Updated to support better time precision by integrating suggestions from @manatwork and @Digital Trauma as well as a few other space optimizations:

d()(date +$1%s.%N;)
b=`d`
for i in {1..8};{ (d -;sleep 8;d +)>>j&}
wait
bc<<<`d`-$b
bc<<<`<j`

Note that this assumes the j file is absent at the beginning.

Go - 189 bytes

Thanks @cat!

package main
import(."fmt";."time");var m,t=60001,make(chan int,m);func main(){s:=Now();for i:=0;i<m;i++{go func(){Sleep(Millisecond);t<-0}()};c:=0;for i:=0;i<m;i++{c++};Print(Since(s),c)}

Outputs (ms): 160.9939ms,60001 (160ms to wait 60.001 seconds)

PowerShell v4, 144 bytes

$d=date;gjb|rjb
1..20|%{sajb{$x=date;sleep 3;((date)-$x).Ticks/1e7}>$null}
while(gjb -s "Running"){}(gjb|rcjb)-join'+'|iex
((date)-$d).Ticks/1e7

Sets $d equal to Get-Date, and clears out any existing job histories with Get-Job | Remove-Job. We then loop 1..20|%{...} and on each iteration execute Start-Job, passing it the script block {$x=date;sleep 3;((date)-$x).Ticks/1e7} (meaning each job will execute that script block). We redirect that output to >$null in order to suppress the feedback (i.e., job name, status, etc.) that gets returned.

The script block sets $x to Get-Date, then Start-Sleep for 3 seconds, then takes a new Get-Date reading, subtracts $x, gets the .Ticks, and divides by 1e7 to get the seconds (with precision).

Back in the main thread, so long as any job is still -Status "Running", we spin inside an empty while loop. Once that's done, we Get-Job to pull up objects for all the existing jobs, pipe those to Receive-Job which will pull up the equivalent of STDOUT (i.e., what they output), -join the results together with +, and pipe it to iex (Invoke-Expression and similar to eval). This will output the resultant sleep time plus overhead.

The final line is similar, in that it gets a new date, subtracts the original date stamp $d, gets the .Ticks, and divides by 1e7 to output the total execution time.
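The start-jobs/collect-outputs workflow corresponds to futures in other runtimes; a hedged Python sketch with concurrent.futures, scaled down from twenty 3-second jobs to twenty 0.1-second ones:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def job():
    x = time.time()
    time.sleep(0.1)                   # stand-in for `sleep 3`
    return time.time() - x            # each job reports its own elapsed seconds

start = time.time()
with ThreadPoolExecutor(max_workers=20) as pool:
    futures = [pool.submit(job) for _ in range(20)]      # Start-Job, twenty times
    total = sum(f.result() for f in futures)             # Receive-Job, summed
wall = time.time() - start
print(total, wall)
```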


NB

OK, so this is a little bendy of the rules. Apparently on first execution, PowerShell needs to load a bunch of .NET assemblies from disk for the various thread operations as they're not loaded with the default shell profile. Subsequent executions, because the assemblies are already in memory, work fine. If you leave the shell window idle long enough, you'll get PowerShell's built-in garbage collection to come along and unload all those assemblies, causing the next execution to take a long time as it re-loads them. I'm not sure of a way around this.

You can see this in the execution times in the below runs. I started a fresh shell, navigated to my golfing directory, and executed the script. The first run was horrendous, but the second (executed immediately) worked fine. I then left the shell idle for a few minutes to let garbage collection come by, and then that run is again lengthy, but subsequent runs again work fine.

Example runs

Windows PowerShell
Copyright (C) 2014 Microsoft Corporation. All rights reserved.

PS H:\> c:

PS C:\> cd C:\Tools\Scripts\golfing

PS C:\Tools\Scripts\golfing> .\wait-a-minute.ps1
63.232359
67.8403415

PS C:\Tools\Scripts\golfing> .\wait-a-minute.ps1
61.0809705
8.8991164

PS C:\Tools\Scripts\golfing> .\wait-a-minute.ps1
62.5791712
67.3228933

PS C:\Tools\Scripts\golfing> .\wait-a-minute.ps1
61.1303589
8.5939405

PS C:\Tools\Scripts\golfing> .\wait-a-minute.ps1
61.3210352
8.6386886

PS C:\Tools\Scripts\golfing>

Rust, 257 247 bytes

I use the same times as Mego's Python answer.

Really the only slightly clever bit is using i-i to get a Duration of 0 seconds.

fn main(){let n=std::time::Instant::now;let i=n();let h:Vec<_>=(0..8).map(|_|std::thread::spawn(move||{let i=n();std::thread::sleep_ms(9000);i.elapsed()})).collect();let mut t=i-i;for x in h{t+=x.join().unwrap();}print!("{:?}{:?}",t,i.elapsed());}

Prints:

Duration { secs: 71, nanos: 995877193 }Duration { secs: 9, nanos: 774491 }

Ungolfed:

fn main(){
    let n = std::time::Instant::now;
    let i = n();
    let h :Vec<_> =
        (0..8).map(|_|
            std::thread::spawn(
                move||{
                    let i = n();
                    std::thread::sleep_ms(9000);
                    i.elapsed()
                }
            )
        ).collect();
    let mut t=i-i;
    for x in h{
        t+=x.join().unwrap();
    }
    print!("{:?}{:?}",t,i.elapsed());
}

Edit: a good old for loop is a bit shorter.