From mboxrd@z Thu Jan  1 00:00:00 1970
Path: eternal-september.org!reader01.eternal-september.org!reader02.eternal-september.org!.POSTED!not-for-mail
From: Paul Rubin
Newsgroups: comp.lang.ada
Subject: Re: How to get Ada to “cross the chasm”?
Date: Wed, 18 Apr 2018 22:57:03 -0700
Organization: A noiseless patient Spider
Message-ID: <87h8o7lowg.fsf@nightsong.com>
References: <1c73f159-eae4-4ae7-a348-03964b007197@googlegroups.com>
 <878t9nemrl.fsf@nightsong.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/25.3 (gnu/linux)
Xref: reader02.eternal-september.org comp.lang.ada:51626
List-Id: 

"Dan'l Miller" writes:
> I keep seeing hints in the Rust community that they are aware of the
> existence of Ada but have never written a program in Ada because of
> perceived insurmountable time-expenditure/odds/learning-curve.

Heh, nice.  They found it easier to invent a whole new language than
read an Ada book ;-).

> this book at the URL below is by far the most scathing critique of
> C++2011 and C++2014 (and by inference C++2017)...

I like that book and didn't see it as a scathing critique, but I came
to it as a naive reader.
> Scott Meyers _Effective C++_ series books from the 1990s had an
> over-arching message effectively of: ‘just stay away from these
> troubled topics in C++ and you'll be just fine.’

That's still the over-arching message of the current C++ world:

https://isocpp.org/blog/2015/09/bjarne-stroustrup-announces-cpp-core-guidelines

> * Even though the book is ink-on-paper, the reader can quite clearly
> hear the author's open-mouth drop-jawed gasps

Heehee, I must have missed those.  I'm sure they're there, but I
wasn't looking for them.

> but the reader of _Effective Modern C++_ is easily left with the
> feeling of: “oh really?!, certainly some other programming language
> could tame the beast of these features quite differently

Don't forget the requirement of not breaking legacy C++ code.  Doing
better by getting rid of that requirement was part of the idea of
Rust.

> Coq-esque and Zed-notation-esque systems of logic need to be the
> future of Ada.

You probably understand this stuff better than I do, so I don't mean
to be telling you stuff you already know.  I'm writing this partly to
check my own understanding.

Coq is based on constructive type theory (CTT), which is more usually
associated with functional programming than imperative programming.
It can be used to prove theorems about imperative programs, but also,
Coq itself can be seen as an ML dialect with super-precise types.  For
example, the CompCert verified C compiler (compcert.inria.fr) is
written in Coq.

Z notation is used to specify the behaviour of a program, which is
then verified with external tools (like SPARK) using state-transformer
logic (Hoare triples, separation logic, etc.).  State-transformer
logic is in practice more "powerful" than CTT, but much harder to use.
CTT usually only tries to prove the denotational correctness of a
program, not its resource bounds or even its totality.
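For concreteness, here is the shape of the state-transformer style I
mean: a Hoare triple asserts that if precondition P holds before
command C runs, then postcondition Q holds afterwards.  This is a
generic textbook example (a swap through a temporary), not taken from
SPARK or any particular tool:

```latex
% Partial-correctness Hoare triple: precondition P, command C,
% postcondition Q.
\{\, P \,\}\; C \;\{\, Q \,\}

% Example: swapping x and y through a temporary t.
\{\, x = a \land y = b \,\}\;
  t := x;\ x := y;\ y := t
\;\{\, x = b \land y = a \,\}
```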
So CompCert's proofs only aim to guarantee that CompCert will never
generate wrong code, not that the compiler will never "diverge" (fail
to produce an output, e.g. by running out of memory and crashing).
Divergence in a compiler is annoying but not disastrous the way wrong
code would be.  If CompCert runs out of memory, you can refactor your
program or compile it on a bigger computer.

I remember being surprised to learn that a lot of SPARK is about
proving that your programs never throw exceptions, crash, or
experience arithmetic overflow in machine ints.  In functional
programming we normally don't think about that too much.

Obviously in critical realtime systems you do care about totality: the
program MUST give a right answer, not merely refrain from giving wrong
answers.  But since even a perfectly correct program can give a wrong
or divergent answer if the power fails or cosmic rays flip memory bits
in the hardware, critical systems MUST also have ECC, redundant CPUs
and power, etc. etc.  If you don't have that hardware redundancy, then
Ada (let alone Coq) might very well be overkill on the software side.

So we're back to the idea that critical realtime systems are a niche
that necessarily requires extreme approaches in hardware as well as
software, while non-realtime systems (even correctness-critical ones
like compilers) don't need totality assurance beyond what one gets
from normal software testing.  That means they can use GC, which
really does make life a lot simpler.

Btw, here are some cheap dev boards with safety-oriented CPUs
(intended for medical and automotive applications):

http://www.ti.com/tools-software/launchpads/launchpads.html#hercules_mcu

They use CPUs with dual processor cores running in lockstep, with ECC
on the internal flash and RAM (I thought I remembered something about
the internal data paths having it too).  If you're not using a CPU
like that, and you're not short of RAM or CPU cycles, then what does
using Ada really buy you?
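To make the overflow point above concrete, here is a dynamic sketch in
Python of the property SPARK proves statically at compile time: that
arithmetic on machine ints stays in range.  The names here (add_i32,
I32_MIN, I32_MAX) are my own illustration, not part of any SPARK or
Ada API; SPARK would reject such code before it ever runs, rather than
raising at run time:

```python
# Dynamic sketch of overflow-freedom for 32-bit signed ints.
# Python's ints are unbounded, so the sum itself is exact and we
# can check whether it would fit in a machine int.

I32_MIN, I32_MAX = -2**31, 2**31 - 1

def add_i32(a: int, b: int) -> int:
    """Add two 32-bit signed ints, raising instead of wrapping."""
    assert I32_MIN <= a <= I32_MAX and I32_MIN <= b <= I32_MAX
    result = a + b
    if not I32_MIN <= result <= I32_MAX:
        raise OverflowError(f"{a} + {b} overflows 32-bit signed range")
    return result

print(add_i32(2**30, 2**30 - 1))   # prints 2147483647 (= I32_MAX)
```

SPARK's value is that the "raise" branch is proven unreachable for all
inputs the contracts allow, so the check never needs to exist at run
time.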
> Low power utilization and augmented-reality commonplace in consumer
> devices begets a prohibition on luxurious interpreted-language and
> automatic-garbage-collection techniques.

Meh, how much of that code is really computation intensive?  An
augmented-reality system likely has a GPU-like processor doing the
heavy computation, while the surrounding stuff (UI etc.) can well be
written in Python.  Here's a little embedded board (ARM Cortex M0,
256KB flash, 32KB RAM) that runs Python nicely:

https://www.adafruit.com/product/3500

> Even IoT devices are having processors and operating systems and
> amounts of DRAM that would put some minicomputers of that era to
> shame;

Yep, it's great.  That means we can use garbage collected, interpreted
languages on them now.  ;)

> It seems the only way for this to be true for Ada is for Ada202X to
> have a CoQ-esque or Zed-notation-esque logic-deducing theorem prover
> in it.

This is much harder to use than we might hope.  It's not all that
automatic, and the more automated ones (as mentioned above) tend to
rely on garbage collection among other things.

> I am hoping that #4 is not the model for Ada: “Hey, look kids, there
> goes a Model Ada down the road...."

Heehee, as an occasional Forth user (not for anything serious, but
it's fun) I sympathize with this.