From: "Dmitry A. Kazakov"
Newsgroups: comp.lang.ada
Subject: Re: How to get Ada to ?cross the chasm??
Date: Wed, 9 May 2018 10:25:02 +0200

On 09/05/2018 07:02, Paul Rubin wrote:
> "Dmitry A. Kazakov" writes:
>> If referenced object counts are > 1 they are not going to be
>> finalized. So the point stands, no locking is ever required upon
>> finalization.
>
> I'm still perplexed by this. You have to decrement all those refcounts.
> While this is happening, other threads may also be messing with them.

Yes, but you cannot increment a count without already holding a
reference (and thus a count) on the object. Thus the count is already
> 1 when another task tries to increment it. Therefore my decrement
will never result in 0.
> You need locks (at least in the form of atomic instructions which work
> by hardware locks, i.e. that are much slower than normal instructions)
> to prevent data races.

That is right, but the only penalty you pay is probably spilling the
cache, while other methods would require full locking, possibly context
switches, and necessarily priority inversion and coarser granularity.

> When you decrement the counts, some of them might reach zero so their
> objects need freeing (and traversal). That too can be arbitrarily
> complicated.

Yes, if you have a long chain of references which all go to zero. I
would argue that this is bad design from the start:

1. As Niklas said, deep nesting alone is a problem.

2. If all references go to zero coherently, then they need not be there
and should be merged into a single holder.

Ada's protected objects are quite handy for designing elaborate locks.
I used to have designs where publisher/subscriber services, enumeration,
locking and reference counting were all handled by one protected type.

>> For large objects there is usually additional knowledge about their
>> allocation order which is not available to the compiler
>
> Meh, maybe, though it's unclear whether this will help enough to care
> about in practice, so it would have to be justified by concrete evidence
> on a case by case basis.

Yes, that is a point too. There is no universal solution to the problem,
only a toolbox of means to advance it. GC does not fit into my toolbox.

>> Not at all. This is an old discussion about up-front analysis and
>> design vs. "spinal-cord programming". Ada was designed for people who
>> do not consider investing their time in software design useless.
>
> But in this case it sounds like you're burning the effort on solving a
> problem that someone else already solved.
> Like if your application has a matrix and you need its inverse, you can
> call a general-purpose matrix solver from a math library, or you can
> write a special one that uses some property of your application's
> matrix.

My argument is that GC is never a solution, but an attempt to sweep
unsolved problems under the carpet.

>> I don't want objects moving in the memory, that is for sure. It is a
>> huge distributed performance hit
>
> There's enough experience with these GCs that a claim of a significant
> performance hit is only credible if it's backed by profile data showing
> the GC is taking too much time for that app. The usual advice for Java
> is to configure the GC so it's using around 10% of the CPU cycles
> (assuming you have enough memory). Even if a non-defragmenting scheme
> uses 0% of the cycles, you're likely to lose more than 10% to cache
> misses that a compacting scheme prevents.

I doubt that such measurements, useless as they are for having no
predictive force, are even methodically correct. It is incredibly
difficult to estimate the time spent by the clients of GC. What is
measured is probably just the GC task, with all the distributed overhead
in the clients and the overhead of the overall design left uncounted.
The question of the effect on time granularity is not answered at all.

>> No, that is irrelevant. Determinism is a property of the system and
>> not of its inputs. Consider it a black box. You feed the inputs and
>> get the outputs. How many little threads are in the box does not
>> matter.
>
> The system includes the program and its input sources.

That is the system in the loop. It is useless to talk about its
properties, because there is no means to observe them: all inputs and
outputs are already consumed.

> My usual picture of a concurrent system is a network server connected
> to 1000s of clients over the internet. So the internet and its random
> delays are part of the system. It can't be seen as deterministic in
> any useful way.
It cannot be seen in any useful way at all. A useful way would be a
client's view of the system and its parts.

BTW, when you read a true random number generator's output, that is
deterministic even if the values are random. And conversely, when you
transport a sequence of natural numbers over a channel with random
delays, the sequence is still deterministic.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de