Ported Python parallel computing sample to F#

I've ported the parallel computing sample from the Python documentation to F#. Comparing the two should give you a feel for the differences in coding style.

GIL

I have previously tried multithreading in Python.

I tried parallel computation there, but it didn't speed things up much. The following points came up in those articles.

Python uses the Global Interpreter Lock (GIL) as a mechanism to guarantee the consistency of program execution.
Apparently, Python threads are provided to handle blocking I/O around system calls; speeding up computation was never the goal in the first place.
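
Threads do pay off when the work is dominated by blocking waits. As a minimal sketch (my illustration, not from the article or the linked posts), time.sleep stands in for a blocking system call such as a network request; four one-second waits overlap and finish in roughly one second instead of four.

import concurrent.futures
import time

def blocking_call(i):
    time.sleep(1)  # stand-in for blocking I/O such as a network request
    return i

start = time.perf_counter()
with concurrent.futures.ThreadPoolExecutor() as executor:
    results = list(executor.map(blocking_call, range(4)))
print(results, round(time.perf_counter() - start, 2))  # roughly 1 second, not 4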

The documentation explains it as follows.

(Global interpreter lock) The mechanism used by the CPython interpreter to ensure that only one thread executes Python bytecode at a time. This simplifies the CPython implementation by making the object model (including critical built-in types such as dict) implicitly safe against concurrent access. Locking the entire interpreter makes it easier to make the interpreter multi-threaded, at the expense of much of the parallelism afforded by multi-processor machines.

However, some extension modules, either standard or third-party, are designed to release the GIL when doing computationally intensive work such as compression and hashing. In addition, the GIL is always released when performing I/O.
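
As a minimal sketch (my own, not from the article): hashlib releases the GIL while hashing large buffers, so a thread pool can give a real speedup even though the hashing itself is CPU-bound.

import concurrent.futures
import hashlib
import os
import time

# Eight 32 MiB buffers of random data to hash.
DATA = [os.urandom(32 * 1024 * 1024) for _ in range(8)]

def digest(buf):
    return hashlib.sha256(buf).hexdigest()

start = time.perf_counter()
serial = list(map(digest, DATA))
print('serial :', round(time.perf_counter() - start, 2))

start = time.perf_counter()
with concurrent.futures.ThreadPoolExecutor() as executor:
    threaded = list(executor.map(digest, DATA))
print('threads:', round(time.perf_counter() - start, 2))

assert serial == threaded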

In the past, "free-threaded" interpreters (which lock shared data at a much finer granularity) have been developed, but they were not successful because performance suffered in the common single-processor case. It is believed that overcoming this performance problem would make the implementation much more complicated and therefore costlier to maintain.

ProcessPoolExecutor

Parallel computation in Python is therefore done with multiple processes rather than multiple threads. ProcessPoolExecutor takes care of managing the worker processes and dispatching function calls to them. By default it creates as many worker processes as there are logical processors and distributes the work among them.
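
As a small aside (my sketch, not part of the original sample), the default worker count is os.cpu_count(), and max_workers lets you override it.

import concurrent.futures
import os

if __name__ == '__main__':
    # Passing max_workers explicitly; omitting it gives the same value by default.
    with concurrent.futures.ProcessPoolExecutor(max_workers=os.cpu_count()) as executor:
        print(list(executor.map(pow, [2, 3, 5], [10, 10, 10])))  # [1024, 59049, 9765625]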

The script below modifies the ProcessPoolExecutor sample from the documentation so that the elapsed time can be checked while switching between serial and parallel execution. You can also compare against multithreading simply by swapping in ThreadPoolExecutor.

parallel-sample.py


import concurrent.futures
import math

PRIMES = [
    112272535095293,
    112582705942171,
    112272535095293,
    115280095190773,
    115797848077099,
    1099726899285419]

def is_prime(n):
    if n < 2:
        return False
    if n == 2:
        return True
    if n % 2 == 0:
        return False

    sqrt_n = int(math.floor(math.sqrt(n)))
    for i in range(3, sqrt_n + 1, 2):
        if n % i == 0:
            return False
    return True

def main_parallel(cls):
    with cls() as executor:
        for number, prime in zip(PRIMES, executor.map(is_prime, PRIMES)):
            print('%d is prime: %s' % (number, prime))

def main():
    for number, prime in zip(PRIMES, map(is_prime, PRIMES)):
        print('%d is prime: %s' % (number, prime))

if __name__ == '__main__':
    import sys
    if sys.argv[-1] == "--process":
        main_parallel(concurrent.futures.ProcessPoolExecutor)
    elif sys.argv[-1] == "--thread":
        main_parallel(concurrent.futures.ThreadPoolExecutor)
    else:
        main()

Execution result


$ time python parallel-sample.py
112272535095293 is prime: True
112582705942171 is prime: True
112272535095293 is prime: True
115280095190773 is prime: True
115797848077099 is prime: True
1099726899285419 is prime: False

real    0m2.745s
user    0m2.703s
sys     0m0.031s

$ time python parallel-sample.py --process
112272535095293 is prime: True
112582705942171 is prime: True
112272535095293 is prime: True
115280095190773 is prime: True
115797848077099 is prime: True
1099726899285419 is prime: False

real    0m0.983s
user    0m3.984s
sys     0m0.172s

$ time python parallel-sample.py --thread
112272535095293 is prime: True
112582705942171 is prime: True
112272535095293 is prime: True
115280095190773 is prime: True
115797848077099 is prime: True
1099726899285419 is prime: False

real    0m5.527s
user    0m5.422s
sys     0m0.109s

The multi-process version gets faster, while the multi-threaded version actually gets slower.

F#

Now let's port it to F#. Multithreading poses no such problem on the .NET Framework, so we will omit a multi-process implementation.

parallel-sample.fsx


let PRIMES = [
    112272535095293L
    112582705942171L
    112272535095293L
    115280095190773L
    115797848077099L
    1099726899285419L]

let is_prime n =
    if n < 2L then
        false
    elif n = 2L then
        true
    elif n % 2L = 0L then
        false
    else
    let sqrt_n = int64 (floor (sqrt (float n)))
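    // The seq expression below is lazy: Seq.head takes only the first yielded value,
    // so the loop stops at the first divisor it finds, just like the Python for-loop.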
    seq {
        for i in {3L .. 2L .. sqrt_n + 1L} do
            if n % i = 0L then
                yield false
        yield true }
    |> Seq.head

let is_prime_async n = async { return is_prime n }

let main_parallel() =
    PRIMES
    |> Seq.map is_prime_async
    |> Async.Parallel
    |> Async.RunSynchronously
    |> Seq.zip PRIMES
    |> Seq.iter (fun (number, prime) -> printfn "%d is prime: %b" number prime)

let main() =
    PRIMES
    |> Seq.map is_prime
    |> Seq.zip PRIMES
    |> Seq.iter (fun (number, prime) -> printfn "%d is prime: %b" number prime)

match Array.last (System.Environment.GetCommandLineArgs()) with
| "--thread" -> main_parallel()
| _          -> main()

Execution result


$ time mono parallel-sample.exe
112272535095293 is prime: true
112582705942171 is prime: true
112272535095293 is prime: true
115280095190773 is prime: true
115797848077099 is prime: true
1099726899285419 is prime: false

real    0m0.662s
user    0m0.625s
sys     0m0.031s

$ time mono parallel-sample.exe --thread
112272535095293 is prime: true
112582705942171 is prime: true
112272535095293 is prime: true
115280095190773 is prime: true
115797848077099 is prime: true
1099726899285419 is prime: false

real    0m0.308s
user    0m0.813s
sys     0m0.078s

The workload needs to be a bit heavier for the effect to stand out clearly, but the speed does improve.

Process flow

It's easy to see what's happening with async and Async by looking at the types.

Make the function asynchronous


let is_prime n = ...        // int64 -> bool
let is_prime_async n = ...  // int64 -> Async<bool>

Apply it to the list of arguments


    PRIMES
    |> Seq.map is_prime_async  // seq<Async<bool>>

Combine the multiple asynchronous computations into one


    |> Async.Parallel          // Async<bool array>

Run the asynchronous computation and get the results back as an array


    |> Async.RunSynchronously  // bool array

The parallel calculations run on the thread pool, so even if you hand it a large number of computations at once, they are scheduled appropriately.
