From: Paul Rubin
Newsgroups: comp.lang.ada
Subject: Re: How to get Ada to "cross the chasm"?
Date: Sun, 06 May 2018 18:49:23 -0700

Niklas Holsti writes:

> and most certainly I want to hear about languages that solve some
> problems better than Ada currently does. For example, the
> discussions about Rust memory safety, and memory management in
> general.

Someone asked earlier what has changed in the PL world since the
1980s, and it seems to me the ideas going into Rust's memory safety
might be an example.  That is, they look like an outgrowth of
Girard's linear logic from the late 1980s and its 1990s
interpretation as a logic of resources.  I'm not well versed in this,
though, so maybe I'm wrong.  Anyway, it's an area where Ada could
catch up.
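To make the "logic of resources" reading concrete, here is a tiny
Rust sketch (my own illustration, not anything from the thread;
strictly, Rust's ownership is affine, "use at most once", rather than
truly linear, "use exactly once").  An owned, non-Copy value behaves
like a resource: using it consumes it, and a second use is rejected
at compile time.

    // A handle treated as a resource: the type is not Copy, so
    // each value can be consumed at most once.
    struct Handle {
        name: String,
    }

    // Taking the handle by value consumes it.
    fn close(h: Handle) {
        println!("closing {}", h.name);
    } // h is dropped here; the resource is gone

    fn main() {
        let h = Handle { name: String::from("log.txt") };
        close(h);    // fine: the resource is used once
        // close(h); // rejected: error[E0382], use of moved value `h`
    }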
> As I see it, when Ada was originally designed to be a wide-spectrum
> language the design was successful in the sense that it covered
> most application areas of the main-stream languages in use at that
> time; only "weird" languages like Lisp and Snobol (as they were
> seen then) were out of scope for the aims of Ada. Today, the
> spectrum of languages is wider, and, in particular, languages that
> are enabled by garbage collection are in common use.

Sure, yet today's interpreted, dynamic languages (Python, Ruby, JS)
are bodged-up, sugared versions of Lisp.  So what was different in
the 1980s?  Mainly, imho, that CPU cycles and memory were scarce
resources then but are abundant now.  So Lisp was used mostly in
academia, where it was feasible to use a big, expensive computer as
the quickest way to solve your problem, even if a smaller one would
have sufficed with more effort.  In his 1972 Turing Award lecture,
E. W. Dijkstra famously said: "With a few very basic principles at
its foundation, it [LISP] has shown a remarkable stability.  Besides
that, LISP has been the carrier for a considerable number of in a
sense our most sophisticated computer applications.  LISP has
jokingly been described as “the most intelligent way to misuse a
computer”.  I think that description a great compliment because it
transmits the full flavour of liberation: it has assisted a number of
our most gifted fellow humans in thinking previously impossible
thoughts."

These days, though, machine resources are plentiful and we can use
those methods freely, so we do, outside the areas where they don't
work.

> "Movement in any direction takes constant time and requires the
> allocation of two new heap objects."  Compare that cost (two heap
> allocations) with traversing a pointer...

Typically, with a serious GC, allocating new heap objects is
basically free: you bump a pointer in the free memory region (toy
sketch at the end of this post).  Collecting them takes time that you
can measure with profiling, but that tends not to be a large fraction
of total execution time.  I've seen studies indicating that GC bloats
a program's memory consumption by 3x-5x but doesn't have that big an
effect on runtime.  That's reasonably borne out by the well-known
benchmarks of C vs. Java vs. OCaml etc.  Haskell is slower (unless
you write in a contorted style), mostly because of lazy evaluation, I
think.  But ISTM that idiomatic Haskell runs around 4x slower than
idiomatic C, which is perfectly sufficient most of the time.
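To make "basically free" concrete, here is a toy bump-allocator
sketch in Rust (my own illustration; the names and sizes are made up,
and real collectors' fast paths also handle alignment and object
headers, but the essential cost really is this small): allocation on
the fast path is one add and one compare.

    // Toy bump allocation: the fast path is an add and a compare.
    // A real GC reclaims space elsewhere (e.g. by evacuating live
    // objects) and then resets `next` over the freed region.
    struct BumpArena {
        buf: Vec<u8>,
        next: usize, // offset of the next free byte
    }

    impl BumpArena {
        fn new(capacity: usize) -> Self {
            BumpArena { buf: vec![0; capacity], next: 0 }
        }

        // Hand out `size` bytes, or None when the arena is full
        // (where a real GC would trigger a collection instead).
        fn alloc(&mut self, size: usize) -> Option<usize> {
            if self.next + size > self.buf.len() {
                return None; // out of space: time to collect
            }
            let off = self.next;
            self.next += size; // "bump" the free pointer
            Some(off)
        }
    }

    fn main() {
        let mut arena = BumpArena::new(1024);
        // the "two new heap objects" from the quoted example:
        let a = arena.alloc(16).unwrap();
        let b = arena.alloc(16).unwrap();
        println!("allocated at offsets {} and {}", a, b);
    }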