comp.lang.ada
From: Niklas Holsti <niklas.holsti@tidorum.invalid>
Subject: Re: Broadcast / iterate to all Connection objects via Simple Components?
Date: Sun, 19 Feb 2023 16:37:51 +0200
Message-ID: <k5eqhvF5lhmU1@mid.individual.net>
In-Reply-To: <aabea618-3be7-4257-aad7-5fde634c3090n@googlegroups.com>

On 2023-02-19 3:27, A.J. wrote:


> Creating a task in Ada, at least on linux, ends up creating a
> pthread,

With the current GNAT compiler, yes. But not necessarily with all Ada 
compilers, even on Linux.
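
For illustration, a minimal sketch (the unit and names are my own
invention): with current GNAT on Linux, each of the two task objects
below becomes one pthread, but another compiler could map them
differently.

   with Ada.Text_IO;

   procedure Task_Demo is

      task type Worker;

      task body Worker is
      begin
         Ada.Text_IO.Put_Line ("hello from a task");
      end Worker;

      W1, W2 : Worker;  --  two tasks; under GNAT/Linux, two pthreads

   begin
      null;  --  Task_Demo completes only after W1 and W2 terminate
   end Task_Demo;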


> coroutines are managed by the go runtime (I believe in user space) 
> and have much less overhead to create or manage, since it's not
> creating a specific thread.

Some Ada compilers may implement Ada tasks entirely within the
run-time system, in user space, with minimal or no kernel/OS
interaction.


> Ada 202x supports the "parallel" block[3] though I understand no 
> runtime has utilized it yet-- would that end up being a coroutine or
>  is it meant for something else?
> 
> [3] http://www.ada-auth.org/standards/22rm/html/RM-5-6-1.html

As I understand it, the parallel execution constructs (parallel blocks 
and parallel loops) in Ada 2022 are meant to parallelize computations 
using multiple cores -- that is, real parallelism, not just concurrency.

The Ada 2022 RM describes each parallel computation in such a parallel 
construct as its own thread of control, but all operating within the 
same task, and all meant to be /independent/ of each other. For example, 
a computation on a vector that divides the vector into non-overlapping 
chunks and allocates one core to each chunk.
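
In sketch form, such a loop could look like this (using the RM 5.5
syntax; as you note, current run-times may not actually parallelize
it yet):

   procedure Scale_Demo is
      type Vector is array (Positive range <>) of Float;
      V : Vector (1 .. 1_000) := (others => 1.0);
   begin
      parallel
      for I in V'Range loop
         V (I) := V (I) * 2.0;  --  each iteration touches only V (I)
      end loop;
   end Scale_Demo;

The implementation is free to divide the index range into chunks and
run each chunk on its own core.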

Within a parallel construct (in any of the parallel threads) it is a
bounded error to invoke an operation that is potentially blocking. So
the independent computations are not expected to suspend themselves,
and thus they are not coroutines.
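
For example, a delay statement is potentially blocking, so a fragment
like this would be a bounded error:

   parallel
   for I in 1 .. 10 loop
      delay 1.0;  --  potentially blocking: a bounded error here
   end loop;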

The parallelism in parallel blocks and parallel loops is a "fork-join" 
parallelism. In other words, when the block or loop is entered all the 
parallel threads are created, and all those threads are destroyed when 
the block or loop is exited.

So they are independent threads running "in" the same task, as Dmitry 
wants, but they are not scheduled by that task in any sense. The task 
"splits" into these separate threads, and only these, until the end of 
the parallel construct.
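
A sketch of that fork-join shape, using the parallel-block syntax of
RM 5.6.1 (the names are mine):

   procedure Block_Demo is

      function Sum_Range (Lo, Hi : Natural) return Natural is
         S : Natural := 0;
      begin
         for I in Lo .. Hi loop
            S := S + I;
         end loop;
         return S;
      end Sum_Range;

      Left_Sum, Right_Sum : Natural := 0;

   begin
      parallel do
         Left_Sum := Sum_Range (1, 500);       --  one thread
      and
         Right_Sum := Sum_Range (501, 1_000);  --  another thread
      end do;   --  join: both threads are complete here
      pragma Assert (Left_Sum + Right_Sum = 500_500);
   end Block_Demo;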

Moreover, there are rules and checks on data-flow between the 
independent computations, meant to exclude data races. So it is not 
intended that the parallel computations (within the same parallel 
construct) should form pipes or have other inter-computation data flows.
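
When results do have to be combined across the threads, the intended
tool is a reduction expression rather than a shared accumulator; a
sketch using the 'Parallel_Reduce attribute of RM 4.5.10 (again, the
names are mine):

   procedure Reduce_Demo is
      type Vector is array (Positive range <>) of Integer;
      V     : constant Vector (1 .. 1_000) := (others => 1);
      Total : Integer;
   begin
      Total := V'Parallel_Reduce ("+", 0);  --  race-free summation
      pragma Assert (Total = 1_000);
   end Reduce_Demo;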


Thread overview: 22+ messages
2023-02-07 20:29 Broadcast / iterate to all Connection objects via Simple Components? A.J.
2023-02-08  8:55 ` Dmitry A. Kazakov
2023-02-08  9:55 ` Jeffrey R.Carter
2023-02-13  7:28   ` Emmanuel Briot
2023-02-13  8:30     ` Dmitry A. Kazakov
2023-02-13  8:44       ` Emmanuel Briot
2023-02-13 10:55         ` Dmitry A. Kazakov
2023-02-13 11:07           ` Emmanuel Briot
2023-02-13 11:57             ` Dmitry A. Kazakov
2023-02-13 13:22               ` Niklas Holsti
2023-02-13 15:10                 ` Dmitry A. Kazakov
2023-02-13 16:26                   ` Niklas Holsti
2023-02-13 19:48                     ` Dmitry A. Kazakov
2023-02-15  9:54                       ` Niklas Holsti
2023-02-15 10:57                         ` Dmitry A. Kazakov
2023-02-15 18:37                           ` Niklas Holsti
2023-02-19  1:27                             ` A.J.
2023-02-19  8:29                               ` Dmitry A. Kazakov
2023-02-19 14:37                               ` Niklas Holsti [this message]
2023-02-13 15:43                 ` J-P. Rosen
2023-02-13 16:40             ` Jeremy Grosser <jeremy@synack.me>
2023-02-13 20:33 ` Daniel Norte de Moraes