From: "Randy Brukardt"
Newsgroups: comp.lang.ada
Subject: Re: generating and compiling a very large file
Date: Mon, 4 Jun 2018 16:56:09 -0500
Organization: JSA Research & Innovation

"Shark8" wrote in message
news:e1aaa443-00b7-4217-8fe8-dfc9098e247b@googlegroups.com...
On Sunday, June 3, 2018 at 1:14:40 PM UTC-6, Stephen Leake wrote:
>> I'm working on a parser generator. One of the files generated is very
>> large;
>
>The idea of a large aggregate is good,...

I can't speak to GNAT specifically, but in the case of Janus/Ada, the
best solution would be to use a number of medium-size aggregates. The
usual performance problem in Janus/Ada is the optimizer; very large
aggregates get really slow as the code that constructs them gets
rearranged by the optimizer. OTOH, lots of tiny subprogram calls can
also get rearranged by the optimizer. So I'd suggest (a) turning off
optimization to see if that alone is the culprit, and (b) making fewer
calls, each building a medium-size aggregate - one per state, perhaps?
(A toy sketch of that idea is appended at the end of this message.)

BTW, what Janus/Ada does for its own LALR parse tables is have a
program stream out a binary representation of them to a file
("Janus1.Ovl"); the compiler then streams that back in at start-up.
(It's a pure binary read/write - these days, I'd use Stream_IO to do
it, avoiding the stream attributes, as they can easily drop to
component-by-component I/O. A rough sketch of that is appended below
as well.) A text file would be quite a bit slower, since Text_IO
requires a lot more processing than pure binary I/O.

Randy.
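
Here's a toy sketch of the per-state-aggregate idea in (b); the types
and values below are made up purely for illustration, not anyone's
real parse tables:

package Example_Tables is
   type Action is (Shift, Reduce, Error);
   type Terminal is range 0 .. 3;
   type State_Row is array (Terminal) of Action;
   type State_Id is range 0 .. 2;
   type Action_Table is array (State_Id) of State_Row;

   --  One medium-size constant per state ...
   State_0 : constant State_Row := (Shift, Error, Error, Reduce);
   State_1 : constant State_Row := (Error, Shift, Shift, Error);
   State_2 : constant State_Row := (Reduce, Reduce, Error, Shift);

   --  ... assembled into the full table, instead of one giant
   --  aggregate or thousands of tiny per-entry subprogram calls.
   Table : constant Action_Table :=
     (0 => State_0, 1 => State_1, 2 => State_2);
end Example_Tables;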
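
And a rough sketch of the pure-binary Stream_IO idea, assuming a flat
array of scalar entries (not the real Janus/Ada table layout, and the
file name is arbitrary); the point is the single block Write, rather
than letting a stream attribute decompose the table component by
component:

with Ada.Streams;           use Ada.Streams;
with Ada.Streams.Stream_IO; use Ada.Streams.Stream_IO;

procedure Dump_Table is
   type Entry_Count is range 1 .. 10_000;
   type Table_Type is array (Entry_Count) of Integer;

   Table : Table_Type := (others => 0);  --  filled in by the generator

   --  Overlay the table with a Stream_Element_Array so one Write call
   --  moves the whole object at once.
   Buffer : Stream_Element_Array (1 .. Table'Size / Stream_Element'Size);
   for Buffer'Address use Table'Address;
   pragma Import (Ada, Buffer);  --  just a view of Table; no initialization

   File : File_Type;
begin
   Create (File, Out_File, "parse_table.bin");
   Write (File, Buffer);  --  one bulk binary write
   Close (File);
end Dump_Table;

The compiler side would do the mirror image: Open the file, declare
the same overlay, and call Read once to load the whole table.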