From: "Randy Brukardt"
Newsgroups: comp.lang.ada
Subject: Re: Possible Ada deficiency?
Date: Wed, 12 Jan 2005 13:57:40 -0600

"Robert A Duff" wrote in message
news:wcc4qhn4q26.fsf@shell01.TheWorld.com...
> "Randy Brukardt" writes:
> > Sure, but if the extra rebuild time isn't significant, who cares?
>
> It's still significant today.
> I won't be completely satisfied until my rebuild time (after changing
> one or several files) is less than 0.2 second (because 0.2 second is
> unnoticeable at the human interaction level).

OK, but I was talking about significant compared to the "normal" build
time. I don't think that sort of build time will ever be practical for
Ada. My goal is the next "step" up in human response, which is somewhere
between 10 and 30 seconds. As long as the build is done before I finish
taking a drink, it's quick enough. (You're going to pause anyway, 'cause
you can't work continuously for hours...)

> > I certainly wasn't the only one to share generics by default.
>
> I have no problem with sharing by default. But I wonder, if Ichbiah's
> team had viewed that as the philosophy, why didn't they define the
> semantics that way? (Analogy with procedure calls: the RM does not
> define them in terms of copying the called procedure to the call site,
> and then expect pragma Not_Inline to turn that off!)

I don't know; I presume they were confused as to what they were
defining (that happens when it hasn't really been done before).

> And why did they forbid recursive instantiations?

Probably to allow alternate implementations; they were not sure which
model made sense, so they allowed them all.

...

> > There are many rules of Ada 95 that make sharing impractical (not
> > impossible, just impractical).
>
> I've heard the same comment from some folks at Rational. It surprised
> me at the time, because the design team really tried to make generic
> body sharing *easier* in Ada 95. Honest.

Certainly; the effects of "aliased" were more in the "unintended
consequences" category.

> One thing that makes sharing hard (in Ada 83) is the semantics of
> exceptions declared in generics. Tucker pushed quite hard to change
> that rule, but it was seen as too incompatible, so we gave up on that
> idea.
>
> But don't we get a little credit for tightening up the generic
> contract model? ;-)

Sure.
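(As an aside, the exception rule mentioned above can be illustrated with
a minimal Ada sketch; the names Stacks, Overflow, and Push are invented
for illustration, and the units are shown together for brevity rather
than as separate compilations:)

```ada
--  Sketch: why exceptions declared in generics complicate body sharing.
--  Each instantiation declares a DISTINCT exception, so one shared body
--  cannot simply "raise Overflow" as a static reference; it must raise
--  the particular instance's exception, forcing the shared code to
--  carry per-instance data.

generic
   type Item is private;
package Stacks is
   Overflow : exception;          --  distinct in every instance
   procedure Push (X : Item);
end Stacks;

package body Stacks is
   procedure Push (X : Item) is
   begin
      raise Overflow;             --  must name *this* instance's exception
   end Push;
end Stacks;

with Stacks;
package Int_Stacks  is new Stacks (Integer);
with Stacks;
package Char_Stacks is new Stacks (Character);
--  Int_Stacks.Overflow and Char_Stacks.Overflow are different
--  exceptions: a handler for one does not catch the other, even though
--  both are raised from the same (shared) body.
```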
But "assume-the-best" means lots of additional stuff to pass to the
generic, and not all of it at the head. So it makes sharing harder than
simply having it be illegal, as appeared to be the case in Ada 83.

> > > I don't understand that, either. Are you saying that the run-time
> > > overhead of the always-share model is now acceptable?
> >
> > No, I was thinking of the time/space requirements for building such
> > a compiler "properly". We didn't really implement my design because
> > our hosts didn't have enough memory to store the intermediate code
> > for the entire program, and a disk-based solution would be way too
> > slow. It would be practical on modern machines (other than that I
> > can't predict the performance of the optimization pass, which would
> > be critical to making the scheme work. We'd have to build it to
> > see).
> >
> > Note that with the design that I had in mind, the run-time overhead
> > of "always share" (as you put it) would be greatly reduced by
> > partial evaluation and inlining optimizations -- done at link time
> > so that all of the instantiations and calls are known. Plus, pragma
> > Inline could be used to reduce the overhead to nearly nil on
> > critical stuff. The problem isn't the run-time overhead so much as
> > it is supporting the optimizations necessary to mitigate it.
>
> If you're doing that sort of inlining, I wouldn't call it "always
> share" anymore. I'd call it "sometimes share" or "partial sharing".
> The whole point would be to reduce or eliminate what I called "the
> run-time overhead of the always-share model".

I find that view confused. Inlining and partial evaluation are
optimizations, applied after the semantics of the program has been
encoded. After all, automated inlining and partial evaluation would be
useful in many cases that have nothing to do with generics. OTOH,
"always share" is a basic part of the semantics, and it is the only
thing that the front-end knows. The front-end never does any inlining.
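(To make the distinction concrete, here is a minimal sketch; Swap,
Elem, and Demo are invented names, the units are shown together rather
than as separate compilations, and whether pragma Inline actually
causes specialization is entirely up to the implementation:)

```ada
--  Sketch: under an "always share" model, both instantiations below
--  call the ONE compiled copy of Swap's body, with the type's size and
--  operations passed implicitly.  pragma Inline merely *recommends*
--  call-site expansion; a link-time optimizer could then partially
--  evaluate away the per-instance indirection on critical paths --
--  an optimization layered on top of, not part of, the semantics.

generic
   type Elem is private;
procedure Swap (A, B : in out Elem);
pragma Inline (Swap);

procedure Swap (A, B : in out Elem) is
   T : constant Elem := A;
begin
   A := B;
   B := T;
end Swap;

with Swap;
procedure Demo is
   procedure Swap_Int  is new Swap (Integer);
   procedure Swap_Char is new Swap (Character);
   I : Integer   := 1;
   J : Integer   := 2;
   C : Character := 'a';
   D : Character := 'b';
begin
   Swap_Int (I, J);    --  semantically: calls to the shared body;
   Swap_Char (C, D);   --  an optimizer may inline and specialize them
end Demo;
```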
I suppose that from the point of view of a user, the effect is
essentially the same. But from the point of view of the processor
design, the optimizer and the front-end know almost nothing about each
other, and things that happen in one have no bearing on what happens in
the other. (In our case, that distinction is quite strictly enforced.
I've fixed the optimizer independently of the front-end on many
occasions, even giving customers the fixed back-end to use with an
older front-end.)

As I've said before, this is all in theory; we never built the
link-time optimizer, these days because of tiny little problems (like
the fact that our debugger format assumes one source file per generated
code file, and that intermediate-code label numbers would have to be
resequenced, etc.). So perhaps it wouldn't work well in practice.

Randy.