From: Paul Rubin
Newsgroups: comp.lang.ada
Subject: Re: How to get Ada to “cross the chasm”?
Date: Fri, 20 Apr 2018 22:38:32 -0700
Message-ID: <87bmed2k6f.fsf@nightsong.com>

Shark8 writes:

>> 1. Of course it's a good thing, but is it CRITICAL?
> How is it not?

As I'm accustomed to the term, "critical" means, approximately, that the system can kill someone (in a real, practical way, not just a theoretical one) if the condition is not met. It's a critical condition of pacemaker firmware that the pacemaker never stops running (because that means your heart stops). I hope that makes sense.
Condition not met => dead person as an obvious result, not merely as a concocted scenario. Ok?

Nowadays we can take a step back and say it's a critical condition of a compiler (if you use it to compile firmware for pacemakers) that the compiler never generate wrong code. Bob Dewar didn't agree with this and I wanted to ask him about it, but I never got the chance, so I'm going with it. The idea is just that wrong code can make the pacemaker misbehave and kill someone. Deploying it in 1000s of pacemakers magnifies the risk.

Now you're going even further and saying that the compiler is like a pacemaker, so that if it unexpectedly stops running, the result is also a dead person. I'm surprised by this claim and think it is wrong, though as I said I'm open to persuasion. You type "make" to compile your program, the compiler runs for a little while, then whoops, it prints an error message and quits instead of generating code. Where is the dead person? I see an inconvenienced programmer, but that's a lot less serious. A compiler is not like a pacemaker.

> And, transitively, the criticality of software and hardware that
> would be produced is part of the criticality of the compiler.

If the hardware has a 1e-4 chance of failing on a given day and the software has a 1e-6 chance, my first question is what you are going to do about the hardware, since it is 100x more likely to be the cause of failure. Until you've made the hardware much more reliable, the 1e-6 chance of software failure is in practice no different from zero.

> You have to keep rebuilding until it's *RIGHT*. (It's not hard
> real-time; but what happens to your SW company [I'll assume a SW-dev
> company because this is C.L.A.] if you never have any runnable
> executable?

This is a compiler that's been tested and works, at least by the standards of good testing processes. If there's still a problem, you open a ticket with the compiler vendor and they work with you to deal with the issue.
At worst you might have to refactor your program somewhat.

> It'll die because it's not producing SW.

I don't see a dead person there. A dead company just means some financial losses to some investors somewhere, and some programmers available for new jobs (not always nice, but it beats being dead). The investors in turn took a calculated risk, and they write it off. If they want to minimize their risks, they can buy treasury notes. Companies die all the time, for all sorts of reasons, usually mismanagement. A compiler repeatedly getting stuck is such an implausible cause that it's not worth thinking about unless the much more likely possibilities are taken care of first.

> Errors are fine, your compiler saying "Hey, I can only handle 128
> parenthesis nestings" or "I can't represent 2**65" or "Hey, quit
> feeding me crap!" -- not that those would be the actual error-text.

Now you're saying the same thing as me: we can live with a compiler that can fail (and print an error) on certain valid inputs.

> But that's qualitatively different than being able to say: this code
> that is output is provably correct in translation from the input.

Right, you can only assert some property about the output if you actually got output.

> Every possible program WITHIN THE LIMITATIONS OF THE COMPUTER *AND*
> provably correct translation.

That's what you get with CompCert, more or less. The idea of SPARK is to also guarantee that you will never reach the limitations of the computer. It's for situations where signalling an error because you hit a limit is not allowed. That is hard, much harder than a CompCert-level proof.

> The key is in the intermediate representation

Nah, you still have the same problem. Those deeply nested expressions may have produced IR that makes your register allocator blow its stack.

> (1) I reject the idea that /CRASHING/ is at all in the same category
> as a controlled exit.

OK, terminology.
It's a user-mode program, so crashing just means it stops unexpectedly for whatever reason. If there's a memory leak, at some point the memory allocator prints an error message and the program stops. Or if an Ada program signals a constraint error, some runtime exception handler does something similar. There's not actually an uncontrolled exit like when some godawful C program dereferences an invalid pointer. We're gentlemen here, after all ;-).

I've been calling both of the above situations crashes, but if you want different terminology like "controlled exit", then fine, call it that. Actually I think the best mathematical term for this is "divergence", and it's more neutral than either "crash" or "controlled exit", so is that ok with you? If your program says 2+2=5, then that's a wrong answer; but if it prints a diagnostic and performs a controlled exit, we call it "divergence" rather than "wrong answer", because you never actually got an answer. We certainly can't call it a right answer!

SPARK aims for something different, which is to prove totality for a real machine. Totality means the absence of divergence. Totality for a compiler means that for any valid input, the compiler MUST produce output that matches the input. Any other result, including any controlled exit other than "compilation is complete", is not allowed.

CompCert, if I understand it right, has a proof of totality on an abstract machine. You then simulate the abstract machine on a real machine (pretty efficiently, using OCaml). The abstract machine has unlimited memory, so at some point the simulation might conceivably hit the real computer's limits and experience a "controlled exit", i.e. diverge. If you are saying divergences of that sort can be ok, then fine, we're on the same page: we agree that a compiler doesn't need provable totality. There are other programs that DO need totality proofs, and that's what SPARK is for.

> (2) Why should there be a VC for maximum memory?
Because you haven't proven totality if you haven't proven that the program won't exhaust memory. Do you actually use SPARK? I haven't done anything with it yet, but I've looked at Barnes' book about it, and it puts a LOT of emphasis on issues like this.

> Why are you so hung up on memory exhaustion?

Because it's a way for programs to diverge. If you want to prove that a program never diverges, you have to prove that it doesn't exhaust memory. I'm treating it as important because SPARK treats it as important. That's why SPARK (iirc) doesn't allow recursion or dynamic memory allocation. Actually, if you also don't allow indirect function calls, you may already have a syntactic proof of bounded memory, so you might not need a special VC for it. Recursion is everywhere in most compilers, by the way (lots of tree structure). So if you're not allowed to use it, your style is already cramped pretty badly.

> I believe there needs to be a bigger push towards correctness, and Ada
> probably needs a couple of good "killer apps" to really "cross the
> chasm", and a proven compiler/IDE would hit the mark on both of those.

For the reasons we've discussed, I don't think provable totality is a crucial or even achievable property of a compiler. It can be an important property of an operating system, so maybe an OS would be a better "chasm crosser" than a compiler.
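To make the recursion/divergence point concrete, here is a minimal sketch (in Python, not Ada or SPARK; all names are made up for illustration, not taken from any real compiler). A recursive front end's stack depth tracks the input's nesting depth, so on deeply nested input it either returns an answer or diverges with a controlled exit — a diagnostic, never a wrong answer:

```python
import sys

def parse(s, i=0):
    """Toy recursive-descent parser: measures nesting depth of '(' ... ')'.
    Each level of input nesting consumes one stack frame, as in a real
    recursive compiler front end walking a tree."""
    if i < len(s) and s[i] == '(':
        depth, j = parse(s, i + 1)
        assert s[j] == ')'           # input assumed well-formed here
        return depth + 1, j + 1
    return 0, i

def compile_like(src):
    """Either produce an 'answer' or diverge with a controlled exit."""
    try:
        depth, _ = parse(src)
        return depth
    except RecursionError:
        # Controlled exit / divergence: we hit the machine's limit,
        # printed (conceptually) a diagnostic, and produced no answer.
        return None

sys.setrecursionlimit(1000)
print(compile_like("((()))"))                  # a valid answer: 3
print(compile_like("(" * 5000 + ")" * 5000))   # divergence: None
```

The first call succeeds; the second exceeds the interpreter's stack limit and takes the controlled-exit path. Banning recursion (as SPARK-style restrictions do) removes that failure mode at the cost of the natural tree-walking style.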