From: "Dmitry A. Kazakov"
Newsgroups: comp.lang.ada
Subject: Re: How to get Ada to "cross the chasm"?
Date: Tue, 8 May 2018 09:34:54 +0200

On 07/05/2018 22:29, Paul Rubin wrote:
> "Dmitry A. Kazakov" writes:
>> This too, because when the last reference count is zeroed it is only
>> one task that is still aware of the object, therefore its finalization
>> can be done without locking.
>
> The big object contains many references to other objects, and those
> objects' refcounts don't necessarily become zero when the big object's
> last reference goes away.

If a referenced object's count is greater than one, it is not going to
be finalized. So the point stands: no locking is ever required upon
finalization.

>>> So you still get arbitrarily long pauses.
>>
>> No, because I know the structures I deal with and can avoid that
>> choosing proper design.
>
> In other words you are manually managing memory, which is what
> refcounting is supposed to get you out of.

For big objects, yes, because the default methods fail. For large
objects there is usually additional knowledge about their allocation
order which is not available to the compiler, e.g. a LIFO order. That
knowledge allows much simpler and more effective memory management than
trying to sort out forward references, backward references, strong
references and weak references.

> If you spent even one minute figuring out that proper design, that's a
> minute that a GC-using programmer could have spent doing something
> productive.

Not at all. This is the old debate of up-front analysis and design vs.
"spinal-cord programming". Ada was designed for people who do not
consider time invested in software design wasted.

>> Millisecond is very long for many applications,
>
> In such a system you better not use a refcount scheme either, without a
> WCET analysis.
>
> I'd add as well: most refcount systems I know of don't move objects
> around in memory, so you get memory fragmentation and high cache miss
> rates. GC systems typically compact the heap when the GC runs, with
> heap sizes chosen so that "minor" collections put the most recently
> allocated (and in practice most frequently used) part of the heap into
> the processor's L2 cache.

You get fragmentation from excessive use of pointers and from not
caring about memory management in the first place.
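
To make the LIFO point concrete, here is a minimal sketch, not from the
original exchange, of mark/release management over a fixed buffer; the
names Arena, Mark, Allocate and Release are invented for the example.
When objects are known to die in reverse order of allocation, releasing
a whole burst of them is one assignment, with no per-object bookkeeping
and no fragmentation:

with System;
with System.Storage_Elements;  use System.Storage_Elements;

package Arena is

   type Mark is private;

   function Current return Mark;
   --  Remember the current top of the arena

   function Allocate (Size : Storage_Count) return System.Address;
   --  Carve Size storage elements off the top
   --  (no overflow check in this sketch)

   procedure Release (To : Mark);
   --  Drop everything allocated since To, in O(1)

private
   type Mark is new Storage_Offset;
end Arena;

package body Arena is

   Pool : Storage_Array (1 .. 64 * 1024);   -- backing store
   Top  : Storage_Offset := Pool'First;     -- next free element

   function Current return Mark is (Mark (Top));

   function Allocate (Size : Storage_Count) return System.Address is
      Result : constant Storage_Offset := Top;
   begin
      Top := Top + Size;
      return Pool (Result)'Address;
   end Allocate;

   procedure Release (To : Mark) is
   begin
      Top := Storage_Offset (To);   -- everything above To is gone
   end Release;

end Arena;

Typical use: take Saved : constant Arena.Mark := Arena.Current; allocate
the temporaries of a processing step; Arena.Release (Saved); at the end.
Production code would wrap this into a storage pool and add bounds
checks, but the principle stays the same.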
I do not want objects moving in memory, that is for sure. It is a huge
distributed performance hit, because all pointers must be updated in
all tasks and in all container objects. It wastes resources by
definition: moving objects solves no actual problem-space issue, it is
purely an artifact of a faulty implementation.

>> 1. It is three machine instructions load; increment-and-swap; branch
>> if zero, when done in a lock-free manner.
>
> What processor do you mean, that has an increment-and-swap instruction?
> I don't think the x86 has this. You would use LOCK CMPXCHG which is
> quite slow.

On Intel it could be fetch-and-add (LOCK XADD). Anything a modern
processor offers is an order of magnitude faster than any GC
implementation; even taking and releasing a spin lock, should the
processor have no atomic read-modify-write instructions at all, is far
faster. And the impact is further diminished once the penalties for
dereferencing managed pointers while locking the target against being
moved are taken into account.

>> Why? Determinism means same output/behavior for same input.
>> Concurrency introduces non-determinism only in presence of race
>> conditions.
>
> Concurrency means that the program responds to input from multiple
> independent sources whose timing is outside of the program's control.

No, that is irrelevant. Determinism is a property of the system, not of
its inputs. Consider it a black box: you feed it inputs and get
outputs. How many little threads are inside the box does not matter.

[...]

> So when multiprocessors and critical timing are involved, determinism in
> real-world programming simply doesn't exist. Any concurrency scheme or
> WCET analysis must take this into account.

That is a fallacious argument. The entire physical world is
non-deterministic, yet it is no problem to build a deterministic system
out of stochastically misbehaving atoms and molecules.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
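
P.S. A rough sketch tying the two points together: the decrement is a
couple of atomic instructions, and the task that drops the last
reference finalizes the object without taking any lock. It assumes
GNAT's System.Atomic_Counters, where the counter starts at one,
Increment/Decrement map onto the processor's atomic increment/decrement
(a lock-prefixed instruction on x86), and Decrement reports whether the
count reached zero. The package Handles and the types Object,
Object_Ptr and Handle are invented for the example:

with Ada.Finalization;
with Ada.Unchecked_Deallocation;
with System.Atomic_Counters;

package Handles is

   type Object is limited record
      Count : System.Atomic_Counters.Atomic_Counter;
      --  Starts at one: the creator holds the first reference
      --  ... payload ...
   end record;
   type Object_Ptr is access Object;

   type Handle is new Ada.Finalization.Controlled with record
      Ptr : Object_Ptr;
   end record;

   overriding procedure Adjust   (Ref : in out Handle);
   overriding procedure Finalize (Ref : in out Handle);

end Handles;

package body Handles is

   procedure Free is
      new Ada.Unchecked_Deallocation (Object, Object_Ptr);

   overriding procedure Adjust (Ref : in out Handle) is
   begin
      if Ref.Ptr /= null then
         System.Atomic_Counters.Increment (Ref.Ptr.Count);
      end if;
   end Adjust;

   overriding procedure Finalize (Ref : in out Handle) is
   begin
      if Ref.Ptr /= null then
         if System.Atomic_Counters.Decrement (Ref.Ptr.Count) then
            --  This task held the last reference; no other task can
            --  reach the object anymore, so it is finalized and
            --  freed without taking any lock.
            Free (Ref.Ptr);
         end if;
         Ref.Ptr := null;
      end if;
   end Finalize;

end Handles;

Copying a Handle calls Adjust (one atomic increment); leaving its scope
calls Finalize (one atomic decrement plus a test and branch), which is
where the three-instruction figure above comes from.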