From: Paul Rubin
Newsgroups: comp.lang.ada
Subject: Re: How to get Ada to "cross the chasm"?
Date: Wed, 09 May 2018 14:33:58 -0700
Message-ID: <87d0y4zf7d.fsf@nightsong.com>

"Dmitry A. Kazakov" writes:
>> You need locks (at least in the form of atomic instructions
> That is right, but the only penalty you get is probably spilling the
> cache

Oh ok, yeah, if you meant atomic increments the whole time, that clears
up some confusion.  The cache spill cost (around 10 cycles, as posted
elsewhere) is much smaller than software locks, though still much larger
than ordinary arithmetic.  As mentioned earlier, the cost of atomic
refcount operations has proved too expensive for Python, though Python
uses refcounts far more extensively than an Ada application likely
would.  The Python tests of atomic refcount operations were done some
years ago so it might be worth trying them again with present-day
hardware, hmm.

> Yes, if you have a long chain of references which all go to zero.  I
> would argue that this is bad design from the start.

I don't see the problem with a long chain of references per se: linked
lists are another ancient data structure and there's nothing wrong with
having a long one or managing its storage automatically.  That was done
all the time in Lisp.  Obviously anything that potentially required
traversing the list would have a WCET problem.
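To make the WCET concern concrete, here's a quick C++ sketch (untested,
just something I typed up, not code from anywhere real) of a
reference-counted list where dropping the head frees the whole chain in
one go:

    #include <memory>
    #include <iostream>

    // Each node holds a strong (counted) reference to the next node.
    struct Node {
      int value = 0;
      std::shared_ptr<Node> next;
    };

    int main() {
      const int n = 100000;
      std::shared_ptr<Node> head;
      for (int i = 0; i < n; ++i) {
        auto node = std::make_shared<Node>();
        node->value = i;
        node->next = head;
        head = node;
      }

      // One statement, O(n) work: resetting the head drops the first
      // node's strong count to zero; destroying that node drops the
      // second node's count to zero, and so on down the chain.  A long
      // enough chain can even overflow the stack with the naive
      // recursive teardown.
      head.reset();
      std::cout << "chain freed\n";
    }

Nothing there is wrong exactly, but the pause is proportional to the
chain length, which is the kind of thing a WCET analysis has to account
for.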
> It is incredibly difficult to estimate the time spent by the clients
> of GC.  What is measured is probably the GC task with all distributed
> overhead in the clients and the overhead of the overall design
> uncounted.

Hmm, this is an interesting claim that I won't say is wrong, but I
haven't heard it before and am surprised and skeptical.  I've always
just run the profiler and believed it when it said the program was
spending X% of its time in GC.  Profiling usually works by sampling the
CPU program counter to identify the busy parts of the code, so I don't
know where the other overhead would come from.  There's overhead
inherent in using a given data structure (say a search tree instead of
a hash table), but I wouldn't ascribe that to the GC.  The GC just
makes it easier to choose that type of structure.

As for weak pointers, it looks like C++ std::shared_ptr/weak_ptr
actually involves two objects in the heap: the controlled object
itself, and a separate bookkeeping structure (what C++ calls the
control block).  That structure contains a pointer to the controlled
object, plus the count of strong references and the count of weak
references.  When the strong refcount goes to zero, the controlled
object is freed, but the bookkeeping structure must be kept until both
counts go to zero (so that you can tell whether a weak reference is
still alive).  If Ada weak pointers work the same way, this sounds like
yet more storage management, plus there is still the overhead of
adjusting refcounts all the time.  So I'm still skeptical that this can
beat a GC approach, where pointers are just ordinary machine words that
can be copied freely, their types are tracked statically at compile
time, and no atomic operations are needed except while the GC is
running.  I wonder if there's a reasonable way to benchmark a
comparison.  Fwiw I don't see anything in Ada95 Distilled about how to
manage weak references or adjust refcounts automatically, but maybe
GNAT has libraries for that.
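Here's a little sketch of the lifetime rule I mean (again untested, and
I use plain new rather than std::make_shared so the object and the
control block really are two separate heap allocations; make_shared
would fuse them into one):

    #include <memory>
    #include <iostream>

    int main() {
      // Two heap allocations: the int, and the control block holding
      // the strong and weak reference counts.
      std::shared_ptr<int> strong(new int(42));
      std::weak_ptr<int> weak = strong;         // bumps only the weak count

      std::cout << strong.use_count() << "\n";  // 1 strong reference

      strong.reset();  // strong count hits zero: the int is destroyed,
                       // but the control block survives so that 'weak'
                       // can still tell whether the object is alive.

      if (auto locked = weak.lock())            // empty: object is gone
        std::cout << *locked << "\n";
      else
        std::cout << "object gone, control block still allocated\n";

      return 0;
    }  // 'weak' is destroyed here; the weak count hits zero and the
       // control block is finally freed.

So every live weak reference keeps the control block (though not the
object) allocated, which is the extra storage management I was
referring to.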