From: Paul Rubin
Newsgroups: comp.lang.ada
Subject: Re: How to get Ada to "cross the chasm"?
Date: Thu, 10 May 2018 14:57:23 -0700
Organization: A noiseless patient Spider
Message-ID: <8736yz18e4.fsf@nightsong.com>
References: <1c73f159-eae4-4ae7-a348-03964b007197@googlegroups.com> <87po2la2qt.fsf@nightsong.com> <87in8buttb.fsf@jacob-sparre.dk> <87wowqpowu.fsf@nightsong.com> <16406268-83df-4564-8855-9bd0fe9caac0@googlegroups.com> <87o9i2pkcr.fsf@nightsong.com> <87in88m43h.fsf@nightsong.com> <87efiuope8.fsf@nightsong.com> <87lgd1heva.fsf@nightsong.com> <87zi1gz3kl.fsf@nightsong.com> <878t8x7k1j.fsf@nightsong.com> <87fu342q1o.fsf@nightsong.com> <87mux9645j.fsf@nightsong.com>
Mime-Version: 1.0
Content-Type: text/plain
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/25.3 (gnu/linux)
Xref: reader02.eternal-september.org comp.lang.ada:52216

Niklas Holsti writes:
> I had in mind the vendor-specific SW tools for creating UIs to
> databases and other on-line systems, before the standard windowing
> systems and before the web.

Oh ok, yeah, those existed on mainframes and were probably written in
assembler and COBOL. LISP was still ivory-tower in that era, I think.

> Looks nice, and why not.
> There are certainly applications for microcontrollers where it is
> worth-while to use larger HW models to gain programming convenience
> and margins.

Yes, even in the hardware world, there are claims that 8-bit MCUs are
essentially obsolete and you might as well use a 32-bitter (e.g. a
Cortex-M0) no matter how simple your application is. I wonder what
the most energy-efficient commodity MCUs you can get are. Current ARM
stuff is apparently quite a bit more efficient than the familiar
(though older) PIC and AVR stuff, and even than the TI MSP430, which
made energy efficiency a big selling point. TI now seems to be moving
towards ARM even in that product line, the MSP432 being Cortex-M4
based.

> in my applications the bigger chunks of data usually end up being
> sent from one thread to another

I'm not sure if this suffices, but I know that the GHC GC stops all
the threads while it is running, so it can rearrange the heaps at
that time. Between GC runs, you can copy stuff freely. In Erlang,
all inter-process (and therefore inter-heap) communication is by
copying, except for some large pointer-free objects that are
reference counted, and the overhead doesn't seem too bad. Here, an
Erlang guy claims that the Curiosity Mars Rover, while programmed in
C, uses an Erlang-like approach to achieve reliability:

http://jlouisramblings.blogspot.com/2012/08/getting-25-megalines-of-code-to-behave.html

> (I found a list of Forth applications in space at
> http://web.archive.org/web/20101024223709/http://forth.gsfc.nasa.gov/.
> Almost all from the USA, and most using the Harris Forth-oriented
> processor.)

The main attraction of the Harris processor for space applications
was that the chip itself was radiation hardened. I might ask some
Forth people whether much Forth stuff was launched into space on
conventional processors. I know it was used on the ground a lot for
stuff like telescope control.
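The Erlang "share nothing, copy on send" discipline can be sketched
in Java; everything here (the Mailbox class and its method names) is
invented for illustration, since Erlang does this copying inside its
VM rather than in user code:

```java
import java.util.Arrays;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch of Erlang-style message passing between Java threads: every
// message is defensively copied on send, so sender and receiver never
// share mutable state -- their "heaps" stay independent.
public class Mailbox {
    private final BlockingQueue<int[]> queue = new ArrayBlockingQueue<>(16);

    // Copy the sender's array; later mutation by the sender cannot be
    // observed by the receiver.
    public void send(int[] message) throws InterruptedException {
        queue.put(Arrays.copyOf(message, message.length));
    }

    public int[] receive() throws InterruptedException {
        return queue.take();
    }

    public static void main(String[] args) throws InterruptedException {
        Mailbox box = new Mailbox();
        int[] data = {1, 2, 3};
        box.send(data);
        data[0] = 99;               // sender keeps mutating its own copy
        int[] got = box.receive();
        System.out.println(got[0]); // prints 1: receiver saw the copy
    }
}
```

The copy costs a little on every send, but in exchange no thread can
ever see another thread's half-finished mutation, which is the
reliability property the Erlang approach is after.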
> can one afford to buy, launch, and supply energy to a 4x more
> powerful processor, just to have the luxury of a functional or GC
> programming language?

Under the 90-10 rule, the idea is that you'd accept the 4x slowdown
to more easily write the 90% of the code that uses only 10% of the
execution cycles, while still writing the other 10% the "hard" way.
So the total execution time changes from 0.1+0.9 to 0.4+0.9, a 1.3x
slowdown, nowhere near as bad as 4x. There's probably a 1.3x typical
slowdown from C to Ada too, but I'd consider that worth it for Ada's
higher safety. Also, the 4x figure was for Haskell, and I think it
comes mostly from laziness rather than GC (you have to use lifted
types everywhere, etc.). With OCaml I believe the typical slowdown
is much less than 2x.

> Indeed, on small processors with weak instruction sets and short
> registers, using an interpreter can be very advantageous, decreasing
> code size with minor impact on speed. Even the Apollo spacecraft had
> part of their guidance SW implemented in that way.

Memory was probably more of a constraint than electrical power, or
the interpreter may have just been more convenient. I remember
reading that the astronauts' drinking water on Apollo came as a
by-product of the hydrogen fuel cells that produced the electricity.
To produce enough water that way, the amount of power must also have
been pretty high.

> The CPU load in my current application is about 40%. Slowing down
> the code by factor of 4x would be impossible with this HW.

1.3x would be ok though ;).

> group within the European Space Agency who were promoting Java for
> on-board SW (I'm not sure if this was some real-time Java with
> limited GC, or normal Java).

You can program Java with minimal use of the GC by just avoiding
calling "new". Lots of Javacard processors have no GC at all. You
call "new" to create a few objects when the applet starts, and keep
using them through the entire run.
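That allocate-once style looks roughly like this in plain Java; the
class, field names, and checksum task are made up for illustration
and are not the real Javacard API:

```java
// Sketch of the "allocate at startup, never call new afterwards"
// style used on GC-less Javacard processors.  Every object the
// program will ever need is created once, at construction time.
public class NoGcApplet {
    // Preallocated working storage, reused for the entire run.
    private final byte[] scratch = new byte[256];
    private final int[] counters = new int[4];

    // Steady-state processing only mutates preallocated storage, so
    // no garbage is created and no collector is needed.
    public int process(byte[] request, int length) {
        int checksum = 0;
        for (int i = 0; i < length; i++) {
            scratch[i] = request[i];   // reuse the same buffer each call
            checksum += scratch[i] & 0xFF;
        }
        counters[0]++;                 // count requests in place
        return checksum;
    }

    public static void main(String[] args) {
        NoGcApplet applet = new NoGcApplet();
        byte[] req = {1, 2, 3};
        System.out.println(applet.process(req, 3)); // prints 6
    }
}
```

The discipline is purely a matter of coding style -- nothing stops
you from calling "new" in the processing path, but as long as you
don't, the GC (if there even is one) never has work to do.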
You can even program Lisp the same way, using data mutation functions
like setq (set! in Scheme) heavily, and realtime Scheme programming
is sometimes done that way. Gerry Sussman (Scheme co-inventor)
programmed the mirror support system for a big telescope in Scheme;
it had to respond to mirror vibration at, I think, some kHz, so he
must have done something to avoid GC pauses. It's harder in Haskell,
where mutation is more of a dirty word.
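The same mutate-in-place idea, applied to a periodic control loop,
might be sketched as follows (in Java for consistency with the
examples above); the filter and its constants are invented and have
nothing to do with Sussman's actual telescope system:

```java
// Sketch of the realtime mutate-in-place style: all state lives in
// fields that are overwritten destructively each sample (the setq /
// set! analogue), so the inner loop allocates nothing and cannot
// trigger a GC pause mid-cycle.
public class MirrorLoop {
    private double state = 0.0;   // filter state, updated in place

    // One control step: exponential smoothing of a sensor reading,
    // done entirely by mutating existing storage.
    public double step(double sample) {
        state = 0.9 * state + 0.1 * sample;
        return state;
    }

    public static void main(String[] args) {
        MirrorLoop loop = new MirrorLoop();
        double out = 0.0;
        for (int i = 0; i < 1000; i++) {
            out = loop.step(1.0);     // steady input: state converges to 1.0
        }
        System.out.println(out > 0.99); // prints true
    }
}
```

The allocating alternative -- consing up a fresh state value every
sample -- is what eventually forces a collection, which is exactly
the pause a kHz-rate control loop cannot afford.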