From: Niklas Holsti
Newsgroups: comp.lang.ada
Subject: Re: Broadcast / iterate to all Connection objects via Simple Components?
Date: Sun, 19 Feb 2023 16:37:51 +0200

On 2023-02-19 3:27, A.J. wrote:
> Creating a task in Ada, at least on linux, ends up creating a
> pthread,

With the current GNAT compiler, yes. But not necessarily with all Ada
compilers, even on Linux.

> coroutines are managed by the go runtime (I believe in user space)
> and have much less overhead to create or manage, since it's not
> creating a specific thread.

Some Ada compilers may have run-times that implement Ada tasks
entirely within the run-time, with minimal or no kernel/OS
interaction.

> Ada 202x supports the "parallel" block[3] though I understand no
> runtime has utilized it yet-- would that end up being a coroutine or
> is it meant for something else?
>
> [3] http://www.ada-auth.org/standards/22rm/html/RM-5-6-1.html

As I understand it, the parallel execution constructs (parallel
blocks and parallel loops) in Ada 2022 are meant to parallelize
computations across multiple cores -- that is, real parallelism, not
just concurrency. The Ada 2022 RM describes each parallel computation
in such a construct as its own thread of control, but all operating
within the same task, and all meant to be /independent/ of each
other. For example, a computation on a vector can divide the vector
into non-overlapping chunks and allocate one core to each chunk.

Within a parallel construct (in any of the parallel threads) it is a
bounded error to invoke an operation that is potentially blocking. So
the independent computations are not expected to suspend themselves,
which means they are not co-routines.

The parallelism in parallel blocks and parallel loops is "fork-join"
parallelism: when the block or loop is entered, all the parallel
threads are created, and all of them are destroyed when the block or
loop is exited. So they are independent threads running "in" the same
task, as Dmitry wants, but they are not scheduled by that task in any
sense. The task "splits" into these separate threads, and only these,
until the end of the parallel construct.

Moreover, there are rules and checks on data flow between the
independent computations, meant to exclude data races. So it is not
intended that the parallel computations (within the same parallel
construct) should form pipes or have other inter-computation data
flows.
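For concreteness, here is a minimal sketch (in Ada 2022 syntax, and
necessarily untested, since no run-time implements these constructs
yet) of a chunked parallel loop summing a vector. The names Vector,
Partial and Num_Chunks are mine, for illustration only:

   declare
      Num_Chunks : constant := 8;
      type Vector is array (Positive range <>) of Float;
      V       : constant Vector (1 .. 1_000) := (others => 1.0);
      Partial : array (1 .. Num_Chunks) of Float := (others => 0.0);
      Total   : Float := 0.0;
   begin
      --  Fork: one thread per chunk of the iteration range.
      parallel (Chunk in Partial'Range)
      for I in V'Range loop
         --  Each thread updates only its own accumulator, so there
         --  is no data flow between the parallel threads.
         Partial (Chunk) := Partial (Chunk) + V (I);
      end loop;
      --  Join: all parallel threads have ended here; the task
      --  continues alone and combines the partial results.
      for P of Partial loop
         Total := Total + P;
      end loop;
   end;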
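A parallel block is the same fork-join idea with a fixed number of
threads, one per "and"-separated arm. Again a sketch only, with
hypothetical functions Compute_A and Compute_B:

   declare
      A_Result, B_Result : Integer := 0;
   begin
      parallel do
         A_Result := Compute_A;   --  hypothetical
      and
         B_Result := Compute_B;   --  hypothetical
      end do;
      --  Join: both threads have completed before the task goes on.
   end;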