From: "Randy Brukardt"
Newsgroups: comp.lang.ada
Subject: Re: String Buffer
Date: Thu, 2 Dec 2021 23:25:52 -0600
Organization: JSA Research & Innovation

"Jeffrey R. Carter" wrote in message news:sobcg4$tc4$1@dont-email.me...
> On 2021-12-02 21:15, Dmitry A. Kazakov wrote:
>>
>> It is true only to a certain degree. When comparing algorithms it is
>> valid to claim inefficiency without any context if computational
>> complexity sufficiently differs.
>>
>> For example, an O(N) algorithm is unquestionably inefficient comparing to
>> O(log N) one. I leave marginal cases of very small N.
>
> This is false. If you have the O(N) algorithm and it meets your
> requirements, then using it is more efficient than implementing the O(log
> N) algorithm. If you have both and the O(N) algorithm meets your
> requirements, the effort to get your data into a form where you can apply
> the O(log N) algorithm may still outweigh the time difference.
> As for "very small N", I have worked on systems where linear search was
> efficient for sequences of 100,000 items.

And it isn't unusual that you *are* working mostly with small N. Case in
point: the Janus/Ada parser table lookup routine. At one point, I tried to
speed it up by implementing a binary search. (The table is logically a
two-dimensional array with the action as the element, but the majority of
the elements are error markers [which carry no information]. The usual
storage scheme omits those and stores pairs of terminal/action. That's many
times smaller for a language like Ada.)

The binary search seemed to *slow down* parsing! I spent a week building
instrumented versions, and determined that the average number of pairs was
about 8, and that the binary search (which necessarily was much more
complex than a linear search) was slower until the number of pairs reached
about 20. So it only helped the largest states.

I ended up doing two other things instead: coding the lookup routine in
assembler, which got the lookup loop down to about 5 instructions (it might
be possible for a compiler to generate that loop with fancy optimizations,
but I've never seen one that could, and Janus/Ada certainly isn't able),
and then restructuring the parse table to optimize lookups (terminals don't
appear with uniform frequency in code). The net result is that the average
lookup takes 1.6 iterations of a 5-instruction loop; the binary version
(which executed a lot more code) took somewhere around 3.5 iterations on
average.

The point is, simpler can often be better (especially as it is more likely
to work the first time). And if it proves necessary to improve things,
nothing really substitutes for instrumenting the actual application.

                                     Randy.