comp.lang.ada
* Re: Query about monitor (passive) task optimization
@ 1993-07-30 17:51 Michael Feldman
  0 siblings, 0 replies; 30+ messages in thread
From: Michael Feldman @ 1993-07-30 17:51 UTC (permalink / raw)


In article <CAzCvo.34p@ddciiny.UUCP> jls@ddciiny.UUCP (Jonathan Schilling) writes:
>
>Several of DDC-I's compiler products now have automatic compiler recognition
>of monitor tasks, meaning there is no pragma, compiler switch, or any other
>user action required -- the compiler (with the runtime system) just does the 
>optimization whenever it recognizes an eligible task.

Congratulations! I mean in no way to diminish this accomplishment, but I
would point out that those of us who have been teaching tasking for 10
years, and LIKE the tasking model, have always taught that a good compiler
can, and should, seek out and do these optimizations.

That DDC-I is the first in 10 years to do it is, IMHO, a commentary on
the monumental risk-averseness and lack of ingenuity in the Ada compiler
business. This industry has preferred quantity (how many validation
certificates do YOU have?) to quality (how well are we REALLY exploiting
the possibilities of the language?). 'Nuff said.
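[For concreteness, here is a minimal sketch of the kind of task such a recognizer presumably looks for: a task whose body is nothing but a service loop of accept statements over local data. The names and the exact eligibility criteria are illustrative, not DDC-I's actual rules.]

```ada
-- A classic monitor-style ("passive") task: no entry calls of its own,
-- no delay statements, just a loop serving its entries.  A compiler
-- can implement this as a mutex-protected data structure instead of
-- a full thread of control with its own stack and scheduling state.
task Counter is
   entry Increment;
   entry Read (Value : out Natural);
end Counter;

task body Counter is
   Count : Natural := 0;
begin
   loop
      select
         accept Increment do
            Count := Count + 1;
         end Increment;
      or
         accept Read (Value : out Natural) do
            Value := Count;
         end Read;
      or
         terminate;
      end select;
   end loop;
end Counter;
```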
>
>We're in the process of writing a paper describing this approach, and its
>pros and cons relative to the pragma approach, and we'd like to include 
>references to any related work in this area.  So my query is, does anyone
>know of any other Ada compilers that use the automatic recognition approach?
>
Not that I've ever heard of. Let me know if you find one, please?

Oh - I suppose the universities might have helped industry do some of
these things, just like in other areas of computing. Unfortunately we
never got sources to play with. Maybe with GNAT...

Mike Feldman


* Re: Query about monitor (passive) task optimization
@ 1993-07-31  3:27 Robert Dewar
  0 siblings, 0 replies; 30+ messages in thread
From: Robert Dewar @ 1993-07-31  3:27 UTC (permalink / raw)


Mike assumes that automatic recognition of passive tasks is a good thing.
He is apparently unaware that this is by no means obvious, and indeed most
of the Ada folks in realtime areas that I have talked to do not AT ALL like
the idea of such automatic recognition, and much prefer explicit control
over thread structure.

Many of us felt in the Ada 9X process that the needs for efficient 
tasking would better be met by formalizing and structuring the opportunities
for this kind of task optimization, but it was quite clear that the realtime
community much prefers the explicit approach as exemplified by the protected
type feature of 9X.
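[For readers who have not followed 9X: the protected type makes the passive structure explicit in the source. The same counter as a protected object would look roughly like this -- draft 9X syntax, subject to change:]

```ada
-- Ada 9X protected object: mutual exclusion is part of the construct
-- itself, so no task (thread) need exist at all.  This is the
-- "explicit approach" the realtime community preferred.
protected Counter is
   procedure Increment;
   function Read return Natural;
private
   Count : Natural := 0;
end Counter;

protected body Counter is
   procedure Increment is
   begin
      Count := Count + 1;
   end Increment;

   function Read return Natural is
   begin
      return Count;
   end Read;
end Counter;
```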

I also know that Alsys found in deciding how to proceed that people in 
general much preferred the kind of explicit pragma approach that Verdix
uses to the kind of automatic recognition that DDC does.

So, sure it's nice to see different vendors doing different things and taking
different approaches, but Mike's "finally! at last! someone doing something
reasonable for tasking!" approach is plainly inappropriate.


* Re: Query about monitor (passive) task optimization
@ 1993-08-01  3:25 Michael Feldman
  0 siblings, 0 replies; 30+ messages in thread
From: Michael Feldman @ 1993-08-01  3:25 UTC (permalink / raw)


In article <23corr$a8g@schonberg.cs.nyu.edu> dewar@cs.nyu.edu (Robert Dewar) writes:
>Mike assumes that automatic recognition of passive tasks is a good thing.
>He is apparently unaware that this is by no means obvious, and indeed most
>of the Ada folks in realtime areas that I have talked to do not AT ALL like
>the idea of such automatic recognition, and much prefer explicit control
>over thread structure.
Well, obviously we are in disagreement here. Somewhere I got the idea that
the purpose of high-level languages was PRECISELY to buffer the programmer
from these little implementation details, letting us learn to have faith in
the compiler. As compilers mature they get better and better at the various
optimizations, which (should) become more and more transparent to the
programmer, who simply ends up delighted at the performance but doesn't
know or care why.

Yes, there may be some realtime guys who object to losing ANY 
control. These are the same folks who use shared variables, and indeed
probably use a lot of global data because parameter passing is, at least
in their minds, "inefficient". I can't dispute their perceived need to
do these things; I'll stipulate that they've gotta do what they've gotta do.

What about the rest of the world? Realtime ain't all there is. Tasking is
not just a realtime structure, it's also an ABSTRACTION mechanism. Somehow
in trying to keep the realtimers happy, the rest of us got lost in the dust.

Tasking need not be nearly as "inefficient" as it is sometimes claimed
to be. I assert that we (usually) could not care less about the
order of evaluation of expressions (and if we do care we use parentheses,
etc., to second-guess the compiler). There are lots of things we usually
leave to the optimizer to sort out for us. Do we need a pragma to get
a short loop unrolled? A constant calculation pulled out of a loop?
No, that's motherhood.

I claim that tasking is (rather, should have been) in a similar category.
It's an abstraction mechanism; we use it to express concurrent algorithms,
or rather to express algorithms in a concurrent manner. That's what we
were told in the early days of tasking; some of us actually still believe
it. It's the business of the compiler writer to get everything done for
us in the most optimal way. Over time, the compilers get better and
better, and we get happier and happier because the better and better
gets more and more transparent, requiring less micro-managing from us.

If certain kinds of tasks can be implemented more efficiently, without
my having to second-guess with pragmas, so much the better. I hope
that Robert hasn't been around the real-time Establishment so long that 
he's forgotten the _rest_ of what tasking is supposed to be about.
>
>Many of us felt in the Ada 9X process that the needs for efficient 
>tasking would better be met by formalizing and structuring the opportunities
>for this kind of task optimization, but it was quite clear that the realtime
>community much prefers the explicit approach as exemplified by the protected
>type feature of 9X.

I quite agree that the 9X process has listened to the realtimers. That
is not inappropriate, but I think it may have been carried to the extreme.
Protected types are basically taking us back to monitors. Maybe they
should have been there all along. But that's only half the story. I'm afraid
that, being able to satisfy their R/T customers with protected types, the
vendor folks will let themselves off the hook from really trying to
optimize tasking for the rest of us. 

I don't deny that it's important to listen to the customer. But listening
ONLY to the (current) customer deprives the compiler folks of the great
thrill of getting out front and leading, instead of following the
(government) customer all the time. Robert, it's all part of the same
mentality.
>
>I also know that Alsys found in deciding how to proceed that people in 
>general much preferred the kind of explicit pragma approach that Verdix
>uses to the kind of automatic recognition that DDC does.

I'm people. Nobody asked me. They asked their realtime customers.  They
asked mostly the guys who want to control every bit and every microsecond,
and neglected those of us who think we are paying compiler writers
to get REALLY GOOD (over time) at optimizing stuff without bothering us
with details like pragmas. I'd be less prone to kvetch about the obscene
prices if I thought our money was buying that kind of technical leadership.
>
>So, sure it's nice to see different vendors doing different things and taking
>different approaches, but Mike's "finally! at last! someone doing something
>reasonable for tasking!" approach is plainly inappropriate.
>
What the heck, I won't split hairs with you. Good people can disagree,
and we are plainly in disagreement on this. 

Mike Feldman


* Re: Query about monitor (passive) task optimization
@ 1993-08-02  1:57 Jonathan Schilling
  0 siblings, 0 replies; 30+ messages in thread
From: Jonathan Schilling @ 1993-08-02  1:57 UTC (permalink / raw)


In article <23corr$a8g@schonberg.cs.nyu.edu> dewar@cs.nyu.edu (Robert Dewar) writes:
>Mike assumes that automatic recognition of passive tasks is a good thing.
>He is apparently unaware that this is by no means obvious, and indeed most
>of the Ada folks in realtime areas that I have talked to do not AT ALL like
>the idea of such automatic recognition, and much prefer explicit control
>over thread structure.

I've heard of these objections before, but I don't fully understand them. 
Assuming the optimization is transparent to the programmer, and does not 
in any way change the semantics of the program, what "control" is being
lost?  From within the program, using only standard Ada, one wouldn't
even be able to detect whether the optimization had happened or not 
(this might be doable with CIFO-type interfaces, depending on the
runtime system implementation).  The only difference is that the program
runs faster.

-- 
Jonathan Schilling
DDC-I, Inc.
uunet!ddciiny!jls


* Re: Query about monitor (passive) task optimization
@ 1993-08-02  3:30 Michael Feldman
  0 siblings, 0 replies; 30+ messages in thread
From: Michael Feldman @ 1993-08-02  3:30 UTC (permalink / raw)


In article <CB403J.74M@ddciiny.UUCP> jls@ddciiny.UUCP (Jonathan Schilling) writes:
>In article <23corr$a8g@schonberg.cs.nyu.edu> dewar@cs.nyu.edu (Robert Dewar) writes:
>>Mike assumes that automatic recognition of passive tasks is a good thing.
>>He is apparently unaware that this is by no means obvious, and indeed most
>>of the Ada folks in realtime areas that I have talked to do not AT ALL like
>>the idea of such automatic recognition, and much prefer explicit control
>>over thread structure.
>
>I've heard of these objections before, but I don't fully understand them. 
>Assuming the optimization is transparent to the programmer, and does not 
>in any way change the semantics of the program, what "control" is being
>lost?  From within the program, using only standard Ada, one wouldn't
>even be able to detect whether the optimization had happened or not 
>(this might be doable with CIFO-type interfaces, depending on the
>runtime system implementation).  The only difference is that the program
>runs faster.
>
Exactly. That's why I applauded DDC-I for doing it.

The reason for the objection is that people do not trust their compilers,
especially in the realtime business. As I said in my big long post yesterday,
this gives me a sense of deja vu. Back in the 70's the efficiency fanatics
hated procedure calls, especially with parameters. Some realtime guys still 
do, and - for their projects - they may have no choice. What has rankled
is that the Ada compiler folks have not gotten out in front and led their
customers instead of following them. If they had, maybe there would be
more customers outside the defense business.

Ada is a parallelism language, too, not just a realtime one. Where are the
code generators for the parallel machines? The Ada houses are, for the
most part, opting out of that market, preferring - as usual - the
comfortable realtime crowd in the defense industry.

Ada is, in its way, a very innovative language, especially in the tasking
area. No other major language has anything comparable. One of my biggest
disappointments, as a teacher and fan of concurrency, is that the Ada
houses have systematically neglected really exploiting these and other
innovative language constructs in Ada, preferring the tried and true.

Another example: are there any vendors out there who've tried to go after
a piece of the Fortran-oriented market by mapping its multidimensional
constrained arrays in column-major form? The LRM allows any old storage
mapping. Teachers like me explain this seeming gaffe by pointing out that
a compiler oriented at the Fortran crowd _could_ use column-major
storage to interface with old Fortran codes, while one for some
nonlinear architecture _could_ map the arrays nonlinearly (in a
tree structure as used, I think, on the old Univac architecture),
and a "classical" code generator could use the old row-major mapping.
Or indeed, with implementation-dependent pragmas, they could do all 3.
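[The mapping choice above is just a different linearization of the same index space. As an illustrative sketch (the dimensions are made up, and the LRM mandates neither formula):]

```ada
Rows : constant := 10;
Cols : constant := 20;
type Matrix is array (1 .. Rows, 1 .. Cols) of Float;
-- Row-major ("classical") element offset of M(I, J):
--    (I - 1) * Cols + (J - 1)
-- Column-major (Fortran-compatible) element offset of M(I, J):
--    (J - 1) * Rows + (I - 1)
-- A Fortran-oriented code generator could pick the second mapping and
-- hand Matrix objects straight to old Fortran routines; a "classical"
-- one picks the first.  Both are legal implementations of the LRM.
```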

To my knowledge, everyone is sticking to the tried-and-true row major.
Yes, I know Ada9X is providing a pragma to cover this sort of
storage mapping. Did anybody do it without waiting for 9X? Not
to my knowledge. I'd _love_ to be wrong here...

If these things are so obvious to mere academics, what is keeping the
vendors from aggressively going after these markets with some real
innovation and leadership in Ada compilers? In my view, nothing but
the "beltway bandit" mentality is holding them back.

Verdix could do this instead of keeping their compiler folks busy by
giving them a C++ system to write...

Sure it costs money to get out in front. I have no business degree, but
somebody told me long ago that a good business person knows you have to
spend money to make money...

Mike Feldman


* Re: Query about monitor (passive) task optimization
@ 1993-08-02  6:41 Bjorn Kallberg
  0 siblings, 0 replies; 30+ messages in thread
From: Bjorn Kallberg @ 1993-08-02  6:41 UTC (permalink / raw)


In article <CB403J.74M@ddciiny.UUCP> jls@ddciiny.UUCP (Jonathan
Schilling) writes:

>I've heard of these objections before, but I don't fully understand them. 
>Assuming the optimization is transparent to the programmer, and does not 
>in any way change the semantics of the program, what "control" is being
>lost?  

I thought a real-time program was a program where timing is an essential
part of the semantics. Thus, the semantics is changed.

>From within the program, using only standard Ada, one wouldn't
>even be able to detect whether the optimization had happened or not 
>(this might be doable with CIFO-type interfaces, depending on the
>runtime system implementation).  The only difference is that the program
>runs faster.

Or suddenly slower, when you change the program in some little way so that
the compiler can no longer apply the optimization it previously did.
And you do not get a warning or error, which you would if you had explicitly
told the compiler that this must be a passive task.

By the way, are not the automatic type conversions made by some programming
languages, like PL/I, quite wonderful? You don't have to write a lot of
silly and unnecessary stuff. 

>
>-- 
>Jonathan Schilling
>DDC-I, Inc.
>uunet!ddciiny!jls

Björn Källberg
bjkae@celsiustech.se


* Re: Query about monitor (passive) task optimization
@ 1993-08-02 13:19 Arthur Evans
  0 siblings, 0 replies; 30+ messages in thread
From: Arthur Evans @ 1993-08-02 13:19 UTC (permalink / raw)


jls@ddciiny.UUCP (Jonathan Schilling) quotes Robert Dewar:
>>Mike [Feldman] assumes that automatic recognition of passive tasks is
>>a good thing.  He is apparently unaware that this is by no means
>>obvious, and indeed most of the Ada folks in realtime areas that I
>>have talked to do not AT ALL like the idea of such automatic
>>recognition, and much prefer explicit control over thread structure.

Schilling replies:
>I've heard of these objections before, but I don't fully understand them. 
>Assuming the optimization is transparent to the programmer, and does not 
>in any way change the semantics of the program, what "control" is being
>lost?  From within the program, using only standard Ada, one wouldn't
>even be able to detect whether the optimization had happened or not 
>(this might be doable with CIFO-type interfaces, depending on the
>runtime system implementation).  The only difference is that the program
>runs faster.

mfeldman@seas.gwu.edu (Michael Feldman) replies:
> Exactly. That's why I applauded DDC-I for doing it.

> The reason for the objection is that people do not trust their
> compilers, especially in the realtime business.
> [MORE, DELETED]

That's not the reason that was cited by the real-time folks in Ada 9X
design discussions.

Consider a real time application with tight time constraints.  The
programmer, who is familiar with the requirements of the vendor's
compiler, codes the task with care so that the compiler will be able to
perform clever optimizations.  The code meets its requirements.

Later, in maintenance, another programmer makes a seemingly trivial
modification to the source code.  An effect, unfortunately, is that the
conditions for performing the optimization are no longer met.  The code
is still semantically correct, but the application fails in ways
difficult to diagnose because a timing constraint is sometimes missed.

The advantage of the pragma approach is that it precludes this
possibility.  Use of the pragma says, in effect, "I want the compiler to
perform this optimization, and I promise to avoid certain features that
preclude its use."  If a change in maintenance causes the promise not to
be kept, the compiler can provide a clear diagnostic.
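[Concretely, the promise might be written along these lines. The name PASSIVE follows Verdix usage mentioned elsewhere in this thread, but the exact spelling, placement, and eligibility conditions are vendor-specific -- this is a sketch, not any particular vendor's syntax:]

```ada
task Event_Log is
   entry Put (C : in Character);
end Event_Log;
pragma PASSIVE (Event_Log);
-- Vendor-specific pragma: "I promise the body of Event_Log stays in
-- the eligible monitor form."  If a later maintenance change breaks
-- the promise (say, by adding a delay statement or an entry call to
-- the body), the compiler can reject the unit with a clear
-- diagnostic rather than silently reverting to a full task.
```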

Should Ada cater to the hard real-time folks as above, or to the general
community that Mike speaks for?  Well, yes, it should -- to both.  But
keep in mind that Ada's initial sponsor had in mind embedded
applications, and that such applications usually have hard real-time
requirements.

Now, having said all that, I can agree with Mike's comment.  Here it is
10 years after standardization of Ada-83.  It may not be surprising that
vendors first took the pragma approach; it's sad that after all this
time so few have implemented the optimizations.  The right approach, I
would say, is to implement the optimizations _and_ provide a pragma that
makes the promise that the program will meet the requirements for the
optimization.

Art Evans
----------------------------------------------
Arthur Evans, Jr, PhD           Ada Consultant
461 Fairview Road
Pittsburgh PA  15238-1933
412-963-0839
ae@sei.cmu.edu


* Re: Query about monitor (passive) task optimization
@ 1993-08-02 14:35 Jonathan Schilling
  0 siblings, 0 replies; 30+ messages in thread
From: Jonathan Schilling @ 1993-08-02 14:35 UTC (permalink / raw)


In article <1993Aug2.064113.938@celsiustech.se> bjkae@celsiustech.se (Bjorn Kallberg) writes:
>In article <CB403J.74M@ddciiny.UUCP> jls@ddciiny.UUCP (Jonathan
>Schilling) writes:
>
>>I've heard of these objections before, but I don't fully understand them. 
>>Assuming the optimization is transparent to the programmer, and does not 
>>in any way change the semantics of the program, what "control" is being
>>lost?  
>
>I thought a Real Time program was a program, where timing was an essential
>part of the semantics. Thus, semantics is changed.
>
>>From within the program, using only standard Ada, one wouldn't
>>even be able to detect whether the optimization had happened or not 
>>(this might be doable with CIFO-type interfaces, depending on the
>>runtime system implementation).  The only difference is that the program
>>runs faster.
>
>Or suddenly slower, when you change the program in some little way, so
>the compiler can not apply the optimization any longer, that it previously did.
>And you do not get a warning or error, which you will if you have explicitly
>told the compiler, that this must be a passive task.

These are valid points, but they extend to virtually every language feature
in Ada, not just synchronization tasks.  For instance, changing one of the
bounds of an array type declaration from a static to a dynamic value could 
well result in *much* more code being generated for object declarations and
operations of that type.  Yet most compilers that I know of will not issue 
a warning when this happens, nor have a pragma to explicitly "control" it.
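[A sketch of the bounds example above (the helper `Read_Size` is hypothetical, and how much extra code a dynamic bound really costs varies by compiler):]

```ada
-- Static bounds: size known at compile time; objects can be laid out
-- with no run-time descriptor at all.
type Static_Vec is array (1 .. 100) of Integer;

-- Dynamic bounds: one "little" change...
N : Integer := Read_Size;                -- hypothetical run-time value
type Dynamic_Vec is array (1 .. N) of Integer;
-- ...and objects of Dynamic_Vec may now drag in dope-vector handling
-- and dynamic stack or heap allocation, with no warning from the
-- compiler and no pragma to "control" it.
```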

The essence of compiler optimization in Ada is recognizing simple instances
of potentially complex constructs, and generating only the code appropriate
for the simple case.  This goes on all the time in Ada, with array types,
record types, discriminants, slices, type conversions, tasks, representation
clauses, and so on.  While compiler user documentation may attempt to outline
what construct usages the compiler will see as "simple", the coverage is
rarely complete.  Users often stumble across minor source changes that
produce significant differences in object efficiency, for reasons that are
understandable to the compiler writers but less so to anyone else.

Unless someone wants to change Ada, or create another real-time language,
such that the language specification itself imposes execution-time
efficiency requirements on every construct in the language (has anyone 
ever tried this?), I'm not convinced of the usefulness of giving special
treatment to the monitor task optimization.


-- 
Jonathan Schilling
DDC-I, Inc.
uunet!ddciiny!jls


* Re: Query about monitor (passive) task optimization
@ 1993-08-02 17:47 Michael Feldman
  0 siblings, 0 replies; 30+ messages in thread
From: Michael Feldman @ 1993-08-02 17:47 UTC (permalink / raw)


In article <1993Aug2.064113.938@celsiustech.se> bjkae@celsiustech.se (Bjorn Kallberg) writes:
>
>I thought a Real Time program was a program, where timing was an essential
>part of the semantics. Thus, semantics is changed.

Certainly. And I argued that there are a lot of uses of tasking beyond
realtime, which have gotten short shrift from the Ada companies overall.
>
>>From within the program, using only standard Ada, one wouldn't
>>even be able to detect whether the optimization had happened or not 
>>(this might be doable with CIFO-type interfaces, depending on the
>>runtime system implementation).  The only difference is that the program
>>runs faster.
>
>Or suddenly slower, when you change the program in some little way, so
>the compiler can not apply the optimization any longer, that it previously did.
>And you do not get a warning or error, which you will if you have explicitly
>told the compiler, that this must be a passive task.
>
How does this differ from _any_ optimization? Do you want compiler-dependent
pragmas for every teeny little optimization? I doubt it.

>By the way, are not the automatic type conversions made by some programming
>languages, like PL1, quite wonderful? You don't have to write a lot of
>silly and unneccesary stuff. 

Different issue. These implicit conversions, which caused everyone so much
heartache, were part of the language design, not an implementer choice.
The passive-task thing is an IMPLEMENTATION-DEPENDENT PRAGMA. Or, in the
case of DDC-I, they just do it.

My objection was to some Ada users feeling that they need to control every
bit and microsecond of tasking. Why don't they insist on pragmas to
control every expression evaluation? I suspect it's because programmers
understand expressions and do not understand concurrency. And when they
don't understand something, they get antsy if they can't control it.

What Ada was supposed to be about was a common language for which the
validation process guaranteed most behavior. This was supposed to lead
to a very active and innovative compiler industry that would find lots
of neat ways of exploiting the language features, and lots of users
who would learn the features and trust their compilers.

So did it happen that way?

Mike Feldman


* Re: Query about monitor (passive) task optimization
@ 1993-08-02 18:13 Michael Feldman
  0 siblings, 0 replies; 30+ messages in thread
From: Michael Feldman @ 1993-08-02 18:13 UTC (permalink / raw)


In article <1993Aug2.091924.19310@sei.cmu.edu> ae@sei.cmu.edu (Arthur Evans) writes:
>
>That's not the reason that was cited by the real-time folks in Ada 9X
>design discussions.
>
>Consider a real time application with tight time constraints.  The
>programmer, who is familiar with the requirements of the vendor's
>compiler, codes the task with care so that the compiler will be able to
>perform clever optimizations.  The code meets its requirements.
>
>Later, in maintenance, another programmer makes a seemingly trivial
>modification to the source code.  An effect, unfortunately, is that the
>conditions for performing the optimization are no longer met.  The code
>is still semantically correct, but the application fails in ways
>difficult to diagnose because a timing constraint is sometimes missed.

I would assert that it is really not too hard to understand what a
passive task is. Turning a passive task into something else is NOT
a "seemingly trivial change", except maybe to a programmer who is
clueless about what concurrency's all about. (Maybe that is part of
the problem: clueless programmers?)
>
>The advantage of the pragma approach is that it precludes this
>possibility.  Use of the pragma says, in effect, "I want the compiler to
>perform this optimization, and I promise to avoid certain features that
>preclude its use."  If a change in maintenance causes the promise not to
>be kept, the complier can provide a clear diagnostic.

If 1815-A had required this pragma, I would have no objection. (Well,
I'd rather it be automatic, but a pragma is a decent mechanism for
optimization, used all over Ada for this sort of thing.) But under the
circumstances, it's implementation-dependent. So much for a common
language with reasonably portable program behavior.
>
>Should Ada cater to the hard real-time folks as above, or to the general
>community that Mike speaks for?  Well, yes, it should -- to both.  But
>keep in mind that Ada's initial sponsor had in mind embedded
>applications, and that such applications usually have hard real-time
>requirements.

All the more reason for passive tasks to be identified as such in the
LRM, maybe by pragma. Fine. Or all the major vendors could agree on
a de facto standard pragma PASSIVE, with the same qualifying conditions.
Fine. But what we've got now is a sort of feature war, which is not what
Ada was supposed to be about.

It is instructive to read the Rationale again on this subject, and maybe
the LRM. There tasking is discussed as a _language_ feature, an abstraction
mechanism. Two assumptions are implicit: (1) users would take the trouble
to understand the feature, and (2) compiler writers would (learn how to)
implement it with increasing efficiency and sophistication in each new
release. That's not what we got, is it?
>
>Now, having said all that, I can agree with Mike's comment.  Here it is
>10 years after standardization of Ada-83.  It may not be surprising that
>vendors first took the pragma approach; it's sad that after all this
>time so few have implemented the optimizations.  The right approach, I
>would say, is to implement the optimizations _and_ provide a pragma that
>makes the promise that the program will meet the requirements for the
>optimization.
>
Well, OK. I suppose we need a "pragma NO_SIDE_EFFECTS" for each arithmetic
expression we want the compiler to optimize. :-)

Seriously - my point is that Ada compiler houses have been IMHO market
trend-followers, instead of trend-setters. These guys are (were supposed to 
be) the Best and the Brightest: young, well-educated, enthusiastic, and
working with a rich language that really exercised their minds. From the
private mail I get from current and former techies in these companies,
I'm getting the picture that there are a lot of frustrated techies
out there, stifled by myopic management looking mainly at the next
contract or the next quarter instead of the next decade. It all comes
down to short-term return-on-investment, doesn't it?

Sigh...

Mike


* Re: Query about monitor (passive) task optimization
@ 1993-08-02 18:17 Michael Feldman
  0 siblings, 0 replies; 30+ messages in thread
From: Michael Feldman @ 1993-08-02 18:17 UTC (permalink / raw)


In article <CB4z6w.863@ddciiny.UUCP> jls@ddciiny.UUCP (Jonathan Schilling) writes:

[stuff deleted]
>
>These are valid points, but they extend to virtually every language feature
>in Ada, not just synchronization tasks.  For instance, changing one of the
>bounds of an array type declaration from a static to a dynamic value could 
>well result in *much* more code being generated for object declarations and
>operations of that type.  Yet most compilers that I know of will not issue 
>a warning when this happens, nor have a pragma to explicitly "control" it.

Once your customers read this, you may indeed get a bunch of requests to
implement it. After all, it's another way to second-guess the compiler. :-)
>
>The essence of compiler optimization in Ada is recognizing simple instances
>of potentially complex constructs, and generating only the code appropriate
>for the simple case.  This goes on all the time in Ada, with array types,
>record types, discriminants, slices, type conversions, tasks, representation
>clauses, and so on.  While compiler user documentation may attempt to outline
>what construct usages the compiler will see as "simple", the coverage is
>rarely complete.  Users often stumble across minor source changes that
>produce significant differences in object efficiency, for reasons that are
>understandable to the compiler writers but less so to anyone else.

And this will ALWAYS be true of high-level languages and optimization.
If you've gotta control every bit, write it in assembler. If you're
writing it in a HLL, these things will bite you now and then. Yep.
That's life.
>
>Unless someone wants to change Ada, or create another real-time language,
>such that the language specification itself imposes execution-time
>efficiency requirements on every construct in the language (has anyone 
>ever tried this?), I'm not convinced of the usefulness of giving special
>treatment to the monitor task optimization.

Exactly!

Mike Feldman


* Re: Query about monitor (passive) task optimization
@ 1993-08-03 10:11 Bjorn Kallberg
  0 siblings, 0 replies; 30+ messages in thread
From: Bjorn Kallberg @ 1993-08-03 10:11 UTC (permalink / raw)


In article <1993Aug2.174739.16569@seas.gwu.edu> mfeldman@seas.gwu.edu (Michael Feldman) writes:

>How does this differ from _any_ optimization? Do you want compiler-dependent
>pragmas for every teeny little optimization? I doubt it.

It is a matter of scale. Changing from a passive task to a full task
implementation may well increase the time by a factor of ten, and perhaps
by a factor of 100 in a simple case with no guards etc.
A full rendezvous is one of the more expensive constructs in the language.

On the other hand, I would be very glad to have advice from the compiler:
"This is a passive task; if you allow me, I will optimize it for you."


>>By the way, are not the automatic type conversions made by some programming
>>languages, like PL1, quite wonderful? You don't have to write a lot of
>>silly and unneccesary stuff. 
>
>Different issue. These implicit conversions, which caused everyone so much
>heartache, were part of the language design, not an implementer choice.
>The passive-task thing is an IMPLEMENTATION-DEPENDENT PRAGMA. Or, in the
>case of DDC-I, they just do it.

Now you are talking about standardisation, i.e. similarities between
different compilers. For a person using just one compiler, it is similar
enough: to trust the compiler without explicit advice, or not to trust
the compiler. 

>
>My objection was to some Ada users feeling that they need to control every
>bit and microsecond of tasking. Why don't they insist on pragmas to
>control every expression evaluation? I suspect it's because programmers
>understand expressions and do not understand concurrency. And when they
>
One of the difficulties we have found using Ada for large projects is that
it is so easy to write code where the source code is very compact and
"elegant", but the generated executable is painfully large and
inefficient. This can naturally be overcome, but it needs an explicit
effort.


>
>What Ada was supposed to be about was a common language for which the
>validation process guaranteed most behavior. This was supposed to lead
>to a very active and innovative compiler industry that would find lots
>of neat ways of exploiting the language features, and lots of users
>who would learn the features and trust their compilers.
>
>So did it happen that way?
>
No, and as the compilers are less than optimal, it is still more important
to know what actually happens. 

>Mike Feldman

Björn Källberg


* Re: Query about monitor (passive) task optimization
@ 1993-08-03 15:26 Jonathan Schilling
  0 siblings, 0 replies; 30+ messages in thread
From: Jonathan Schilling @ 1993-08-03 15:26 UTC (permalink / raw)


In article <1993Aug2.181750.19343@seas.gwu.edu> mfeldman@seas.gwu.edu (Michael Feldman) writes:
>
> [discussion of compiler optimization and performance traps in Ada]
>
>And this will ALWAYS be true of high-level languages and optimization.
>If you've gotta control every bit, write it in assembler. If you're
>writing it in a HLL, these things will bite you now and then. Yep.
>That's life.

Yes.  It should be pointed out that Ada is in no way unique in this area.
C++ has a number of tricky areas relating to real-time performance (that
incidentally programmers coming from Ada will be better prepared to deal
with than those coming from C).  Even in C the user has to worry about
whether the standard library functions are reentrant, whether they have
hidden external references that aren't appropriate for real-time, whether
the malloc algorithm behaves reasonably, etc.  [The July 1993 issue of
"Embedded Systems Programming" has two long articles on these subjects.]
Then if the programmer is using a real-time kernel product to provide
concurrency support, there are the performance characteristics of that
product to learn, and occasionally get caught by.

-- 
Jonathan Schilling
DDC-I, Inc.
uunet!ddciiny!jls


* Re: Query about monitor (passive) task optimization
@ 1993-08-03 20:01 Michael Feldman
  0 siblings, 0 replies; 30+ messages in thread
From: Michael Feldman @ 1993-08-03 20:01 UTC (permalink / raw)


In article <1993Aug3.101131.15013@celsiustech.se> bjkae@celsiustech.se (Bjorn Kallberg) writes:
>In article <1993Aug2.174739.16569@seas.gwu.edu> mfeldman@seas.gwu.edu (Michael Feldman) writes:
>
>>How does this differ from _any_ optimization? Do you want compiler-dependent
>>pragmas for every teeny little optimization? I doubt it.
>
>It is a matter of scale. Changing from a passive task to a full task
>implementation may increase the time by a factor of ten, and perhaps by
>a factor of 100 in a simple case with no guards etc.
>A full rendezvous is one of the more expensive constructs in the language.
>
Yes, it is. That is EXACTLY why the compiler writers should be exerting
more effort to detect special cases and use heuristics to cut it down.
Instead, what they are doing is asking US to specify, in compiler-
dependent detail, the special cases for them. That is a cop-out.

>On the other hand, I would be very glad to have advice from the compiler:
>"This is a passive task; if you allow me I will optimize it for you".
>
I have no problem with such an advisory. 

Indeed, another advisory would be even better: "this appears to be a hybrid 
task that is difficult to optimize. Consider doing blah-blah restructuring
to make it more efficient; if you do, it'll run 10 times faster." 

I know, in general this is an expert-system problem. So? There's a LOT of human 
domain expertise to draw on. Are the vendors building this kind of smarts into 
the front-ends of their systems? If they did, it would propagate to all
their zillion validations, because these big families have common
front-ends. One would think they'd see money in that.

(They could even fund some university projects to dope out all the
issues and suggest the heuristics and rules. Students are cheap.
A programmer, loaded up with overhead, costs $100k a year or more.
Students are a whole lot cheaper. Somehow the Ada companies forgot
that. Remember the days when TeleSoft employed UCSD students?)
>
>>>By the way, are not the automatic type conversions made by some programming
>>>languages, like PL/I, quite wonderful? You don't have to write a lot of
>>>silly and unnecessary stuff. 
>>
>>Different issue. These implicit conversions, which caused everyone so much
>>heartache, were part of the language design, not an implementer choice.
>>The passive-task thing is an IMPLEMENTATION-DEPENDENT PRAGMA. Or, in the
>>case of DDC-I, they just do it.
>
>Now you are talking about standardisation, i.e. similarities between
>different compilers. For a person using just one compiler, it is similar
>enough. To trust the compiler without explicit advice, or not to trust
>the compiler.

It's still a different issue. If anything, it was much better defined
in PL/I. The compiler always knew what it was doing, and what it was doing
conformed (assuming the compiler was not buggy) to well-defined rules in
the PL/I "LRM". (I know, I taught that junk for years!) In Ada's case,
the LRM leaves a lot PURPOSELY unspecified, hoping that programmers will
get used to the idea of an abstract structure. In fact, in the early days
of Ada compilers the vendors WOULD NOT EVEN TELL YOU IF YOU ASKED. I
remember once, some years ago, asking the lead techie at an Ada vendor
to give me a list of the context-switch points in their tasking runtime.
His answer? "That's proprietary information." That's the truth! Sheesh.
>
>>
>>My objection was to some Ada users feeling that they need to control every
>>bit and microsecond of tasking. Why don't they insist on pragmas to
>>control every expression evaluation? I suspect it's because programmers
>>understand expressions and do not understand concurrency. And when they
>>
>One of the difficulties we have found using Ada for large projects is that
>it is so easy to write code where the source code is very compact and
>"elegant", but the generated executable is painfully large and
>inefficient. This can naturally be overcome, but it needs an explicit
>effort.

That is precisely what I meant. Either there was something subtle you
weren't understanding, or the compiler was not doing what you paid it to do.
I mean no offense, but I said a day or so ago that a lot of this is
due to dumb programmers or dumb compilers or both. As a teacher, I
can help to smarten up the people. I can't help to smarten up the compilers,
though, unless the vendors really want to. And I doubt that they do.
>
>
>>
>>What Ada was supposed to be about was a common language for which the
>>validation process guaranteed most behavior. This was supposed to lead
>>to a very active and innovative compiler industry that would find lots
>>of neat ways of exploiting the language features, and lots of users
>>who would learn the features and trust their compilers.
>>
>>So did it happen that way?
>>
>No, and as the compilers are less than optimal, it is still more important
>to know what actually happens. 

Naturally. I always find myself working at two levels. The first could be
described as "here's how it ought to be"; the second could be described
as "you gotta do what you gotta do". With a _given_ compiler on a _given_
project, you often end up with the second. So do my teaching and
consulting clients. That should not prevent us from pestering the
vendors to do a better job in earning that $25k per seat.

Cheers -

Mike Feldman


* Re: Query about monitor (passive) task optimization
@ 1993-08-03 20:10 Michael Feldman
  0 siblings, 0 replies; 30+ messages in thread
From: Michael Feldman @ 1993-08-03 20:10 UTC (permalink / raw)


In article <CB6w8I.9xt@ddciiny.UUCP> jls@ddciiny.UUCP (Jonathan Schilling) writes:
>In article <1993Aug2.181750.19343@seas.gwu.edu> mfeldman@seas.gwu.edu (Michael Feldman) writes:
>>
>> [discussion of compiler optimization and performance traps in Ada]
>>
>>And this will ALWAYS be true of high-level languages and optimization.
>>If you've gotta control every bit, write it in assembler. If you're
>>writing it in a HLL, these things will bite you now and then. Yep.
>>That's life.
>
>Yes.  It should be pointed out that Ada is in no way unique in this area.
>C++ has a number of tricky areas relating to real-time performance (that
>incidentally programmers coming from Ada will be better prepared to deal
>with than those coming from C).  Even in C the user has to worry about
>whether the standard library functions are reentrant, whether they have
>hidden external references that aren't appropriate for real-time, whether
>the malloc algorithm behaves reasonably, etc.  [The July 1993 issue of
>"Embedded Systems Programming" has two long articles on these subjects.]
>Then if the programmer is using a real-time kernel product to provide
>concurrency support, there are the performance characteristics of that
>product to learn, and occasionally get caught by.
>
You betcha. People who try to use concurrency without understanding it
are in for BIG trouble. The nice thing about Ada is that at least the
concurrency constructs are up front, in the language, where (IMHO)
they belong, not buried in what amounts to just a subprogram library. 
(POSIX or whatever) That was, you may recall, the original idea. People
were actually supposed to USE tasks, not end-run them as many are doing
now. Understanding tasks, and USING them, was supposed to spur the
vendors into working _really_ hard to optimize the hell out of them.

So what do we have now? POSIX/Ada bindings. Packages. Subroutine calls.
I'm not dumping on the good work done in developing the POSIX/Ada
bindings; I'm saying that such things should not really be necessary
if the compilers were doing what we pay them to do - give us good,
correct code that really exploits the hardware and the power of the language.

THAT, my friends, is where Ada could win BIG. If only...

Mike Feldman


* Re: Query about monitor (passive) task optimization
@ 1993-08-09  4:41 Robert Dewar
  0 siblings, 0 replies; 30+ messages in thread
From: Robert Dewar @ 1993-08-09  4:41 UTC (permalink / raw)


First of all, Mike says that "we are in disagreement". If we means Robert and
Mike, then he is reading my original message carelessly, since nowhere in that
message did I express my opinion on the matter -- I merely reported the
empirical fact that a substantial number of real-time people object to
automatic recognition of passive tasks.

Well, disagreeing with that would mean you did not believe this was the case,
but the rest of Mike's message makes it clear that he does recognize this
(although he doesn't like it).

Now, what do I think about this? Well, I don't really see a huge market for
optimized tasking that is not under control of the user. Mike says he wants
this facility, but I have trouble believing this, because I doubt he has
applications where this kind of optimization is important. Instead I would
guess he is speaking not for himself, but for a constituency that he imagines
exists. This constituency is not the hard real-time folks [Mike, it isn't just
some psychological glitch that makes these people want to control things; they
have to have close control if they are to ensure that deadlines are met etc.]

Does this constituency in fact exist? Well I doubt it is significant. Most
people either don't care very much about efficiency of tasking or they care
very much. If you care very much, then you just can't afford to leave it to
some unknown algorithm in the compiler for passive tasking optimization, 
resulting in a situation where you have no idea whether a task you write
represents a real thread of control with its attendant overhead or not.

Mike speaks glibly of improving compiler technology, but the trouble with
optimizations of this kind is that the only technology that is in sight is
the ability to recognize certain special cases. Either these cases are
extremely easy to describe and program or they are not. If they are, then
you at least want compiler support to make sure that you do not stray outside
the recognized idioms, and you might as well have a pragma for this purpose.
Otherwise you get a situation where a small maintenance change breaks the
optimization pattern, and suddenly there is a big non-linear response in the
performance of the program. If on the other hand the description of what cases
can be optimized is complex (as is likely to be the case, because the compiler
typically will be operating at that stage on a tree that has been subjected
to a series of hard to specify and describe transformations), then you are
in really deep water, since no one can tell what is going on.

A similar sort of situation arises in transformational programming in general.
Should a compiler try major transformations on its own (e.g. replacing a
simple high-level statement of a sort with a complex algorithm such as
heap sort, or doing recursion rearrangement to reduce space usage to linear
instead of exponential)?  Such transformations are of course possible
automatically in a limited set of circumstances, but you tend to get
a very difficult-to-use system if you try to do things automatically.

In short, I think it is probably a bad idea to put too much effort into this
kind of optimization. Believe me, there are *plenty* of opportunities for
more straightforward optimization that have by no means been fully exploited.
As a result I think it would be a bad idea for vendors to put much effort
in this area.

One more thing. As we well know, Mike is on the lookout for cheap compilers
for educational use, which is fair enough, but he cannot at the same time
expect to influence the commercial vendors in terms of what optimizations
they might offer. If indeed the constituency that he imagines exists, it
would presumably have resources to encourage the vendors to move in this
direction. The fact that vendors have not heard people clamoring for this
kind of support I think legitimately indicates that this constituency does
not in fact exist. 

The facile belief that compilers can do amazing optimizations is often
heard. Indeed I think the original Ada design was in some cases badly
influenced by this belief. (Why, for instance, is there no proper control
over storage management in Ada 83?  At least part of the answer is that
some people thought that compilers would be able to do an amazing job
of implementing garbage collectors and special storage management
optimizations.)  Well, ten years ago I would have said this was an unrealistic
expectation. Ten years later my opinion is not changed one iota, since
there has been essentially no advance in compiler optimization technology
(of a relevant kind) in this time.

	Robert


* Re: Query about monitor (passive) task optimization
@ 1993-08-09  4:47 Robert Dewar
  0 siblings, 0 replies; 30+ messages in thread
From: Robert Dewar @ 1993-08-09  4:47 UTC (permalink / raw)


Jonathan Schilling asks why people do not want automatic task optimization.
After all, he asks, it only makes programs run faster.

Well the point is that in realtime programs, running faster is not typically
a requirement. The requirement is that all tasks complete their assigned
tasks on schedule. General optimizations that may or may not speed up some
tasks, depending on what the compiler can figure out, are not much help to
a programmer who must demonstrate that the program he writes will meet its
deadlines.

Imagine a worst case. Suppose a compiler used some probabilistic algorithm
to estimate how it should optimize tasks. This is not unreasonable. The
compiler has incomplete information about the program being compiled, so
game theory would suggest that an optimal strategy would be a mixed
strategy involving probabilistic choices. Well, such a compiler would be
unusable in real-time contexts -- every time you compiled, you might find
that different tasks were optimized and the program would change its
execution characteristics and perhaps correctness.

On the other hand, this compiler would, on average, execute Ada programs
faster than a compiler which did not do such probabilistic optimization.

The question is: are there applications which use tasking, and are basically
interested in overall throughput, rather than simply that tasks
meet their deadlines?  Well, certainly one can construct examples, but I
suspect that in real life such examples are few and far between.

Anyone care to report experience with applications of this type (i.e. ones
where the issue is not meeting deadlines, but just generally fast execution)?
Remember they must also be applications where tasking performance is
in fact important to overall performance.

		Robert


* Re: Query about monitor (passive) task optimization
@ 1993-08-09  4:57 Robert Dewar
  0 siblings, 0 replies; 30+ messages in thread
From: Robert Dewar @ 1993-08-09  4:57 UTC (permalink / raw)


Mike certainly seems to be in ranting and raving mode again ("Beltway
bandit mentality") etc.

I think his comments betray a real lack of familiarity with the reality of
production compilers. Elaborate optimizations are almost always a mistake.
They don't pay off in real life nearly as much as people hope, and they
tend to make compilers very expensive to produce, very expensive to maintain
and unreliable.

The maintenance issue in particular is important. A typical situation is that
a complex optimization is installed. It works on all the test programs, and
works well for a while, then a customer discovers a bug where the optimizer
screws up [I don't suppose anyone reading this newsgroup has *ever* had
such an experience :-) ]

The maintenance required is a complete reanalysis of the optimization
algorithms to fix the problems arising presumably from some bad analysis
in the first place. In practice maintenance people are not capable of this,
and the easy maintenance fix is simply to disconnect the optimization. Fran
Allen, of IBM, once said in a talk I heard (I don't know if this is in print)
that nearly half of the optimizations in the IBM optimizing PL/I compiler
were disconnected during the maintenance process.

Certainly one might criticize the Ada vendors and second guess how they
had spent their money (Mike you might want to talk to one of the many
investors who lost their money and tell them that you are irritated that
Ada compiler companies have never invested money up front to try to make
money!) However, it is clear to me that if Mike directed the spending of
such money he would manage to blow it much faster.

You want to know where I would spend money: bindings, bindings, and then
if any was left over, bindings.


	Robert


* Re: Query about monitor (passive) task optimization
@ 1993-08-09 11:02 Richard Kenner
  0 siblings, 0 replies; 30+ messages in thread
From: Richard Kenner @ 1993-08-09 11:02 UTC (permalink / raw)


In article <244lfv$6o@schonberg.cs.nyu.edu> dewar@cs.nyu.edu (Robert Dewar) writes:
>Fran Allen, of IBM, once said in a talk I heard (I don't know if this is
>in print) that nearly half of the optimizations in the IBM optimizing PL/I
>compiler were disconnected during the maintenance process.

I've heard the same thing, but I can say that it's not universally
true.  In the few years I've been involved in GCC maintenance, I can
only think of a very few examples of optimizations that have had to be
turned off.  The only major one that comes to mind is that we used to
fold the C expression
		(&foo != 0)
as always being true until someone pointed out "pragma weak" in
System V.
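[To make that anecdote concrete, here is a hypothetical C sketch, using GCC's attribute spelling rather than System V's `#pragma weak`; the name `foo` is purely illustrative. With a weak, undefined symbol the address really can be zero at run time, so the folding is unsafe:]

```c
/* A weak declaration: if no definition of foo is linked in, the
 * symbol resolves to address 0 on System V / ELF platforms, so the
 * compiler may NOT fold (&foo != 0) to a constant "true". */
extern int foo __attribute__((weak));

/* Must be evaluated at run time, not folded at compile time. */
int foo_present(void) {
    return &foo != 0;
}
```

When this is linked without any definition of `foo`, `foo_present()` returns 0; with a definition present it returns 1, which is exactly the case the old folding assumed was the only one.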

Sometimes we disable some special case of an optimization until we can
fix it, but we usually do so within a few weeks.  The last such
incident was that RMS had to disable some shift simplification on July
26 in an obscure case because some general routine wasn't working
properly, but I fixed that function and reenabled the optimization
yesterday.


* Re: Query about monitor (passive) task optimization
@ 1993-08-09 12:14 Robert Dewar
  0 siblings, 0 replies; 30+ messages in thread
From: Robert Dewar @ 1993-08-09 12:14 UTC (permalink / raw)


Regarding Richard's notes on GCC: one point here is that GCC really does not
do any complex global optimizations of the type attempted in the optimizing
PL/I compiler, at least as far as I understand its structure. Perhaps the
experience that Richard reports is actually a confirmation that such
optimizations are the ones that tend to cause trouble. GCC generates extremely
good local code, and its success in competing against what on paper would
seem to be much more ambitious attempts at optimization just goes to show that
concentrating on good local code gets you most of the way without needing
very complex optimizations.

The kind of optimizations we are talking about here: recognizing entire
tasks as obeying a set of restrictions (typically involving intermodule
link time analysis if the bodies are separately compiled), and then performing
(again at link time in the general case) extensive transformations on the
program tree are definitely not the kind of thing that GCC goes in for (GCC
does not even attempt intermodule register allocation, a la MIPS, let alone
more complex intermodule transformations).

Incidentally, if you think that compiling the spec and body of a task in the
same unit is an acceptable restriction for activating this kind of
optimization, then I would react that I would *much* prefer a pragma in the
spec to this kind of severely destructuring restriction.


* Re: Query about monitor (passive) task optimization
@ 1993-08-09 14:48 Jonathan Schilling
  0 siblings, 0 replies; 30+ messages in thread
From: Jonathan Schilling @ 1993-08-09 14:48 UTC (permalink / raw)


In article <244ktf$61@schonberg.cs.nyu.edu> dewar@cs.nyu.edu (Robert Dewar) writes:
>Jonathan Schilling asks why people do not want automatic task optimization.
>After all, he asks, it only makes programs run faster.
>
>Well the point is that in realtime programs, running faster is not typically
>a requirement. The requirement is that all tasks complete their assigned
>tasks on schedule. General optimizations that may or may not speed up some
>tasks, depending on what the compiler can figure out, are not much help to
>a programmer who must demonstrate that the program he writes will meet its
>deadlines.
 
I concede your point for the general case, where tasks map to some
tangible application function (an airplane's navigation, an engine's
operation, etc.).  But in the particular case of monitor (passive)
tasks, the tasks are somewhat artificial, in that they exist only
to synchronize access to some resource.  They don't really map to
any application function, and probably don't have any assigned
schedule.  It seems to me that you would always want such tasks
to run as fast as possible.

>Imagine a worst case. Suppose a compiler used some probabilistic algorithm
>to estimate how it should optimize tasks. This is not unreasonable. The
>compiler has incomplete information about the program being compiled, so
>game theory would suggest that an optimal strategy would be a mixed
>strategy involving probabilistic choices. Well, such a compiler would be
>unusable in real-time contexts -- every time you compiled, you might find
>that different tasks were optimized and the program would change its
>execution characteristics and perhaps correctness.
>
>On the other hand, this compiler would, on average, execute Ada programs
>faster than a compiler which did not do such probabilistic optimization.

I certainly agree that any compiler that did this should tie the 
optimization to a pragma PROBABILISTIC, and then to a pragma
ARE_YOU_SURE_YOU_REALLY_WANT_TO_DO_THIS.

Obviously, any optimizations that involve tradeoffs (whether time vs.
space, task vs. task, instruction scheduling vs. debugging clarity) 
should be under some kind of user control.  But most Ada optimizations, 
tasking or non-tasking, don't involve tradeoffs; they involve recognizing
simple instances of generally complex constructs.  It's those that I'm in 
favor of compilers doing automatically.

-- 
Jonathan Schilling
DDC-I, Inc.
uunet!ddciiny!jls


* Re: Query about monitor (passive) task optimization
@ 1993-08-09 21:15 Robert I. Eachus
  0 siblings, 0 replies; 30+ messages in thread
From: Robert I. Eachus @ 1993-08-09 21:15 UTC (permalink / raw)


In article <244ktf$61@schonberg.cs.nyu.edu> dewar@cs.nyu.edu (Robert Dewar) writes:

  > The question is: are there applications which use tasking, and are
  > basically interested in overall throughput, rather than
  > simply that tasks meet their deadlines?  Well, certainly one can
  > construct examples, but I suspect that in real life such examples
  > are few and far between.

    Actually, I do know of one major exception, but it only reinforces
Robert Dewar's case.  At MITRE (and at some government contractors)
there is a major need for simulating systems to determine that the
real-time requirements are met.  Since the simulation uses a simulated
clock, and the simulations often run much slower than real-time,
anything which speeds the simulation up would be useful.

    Since the people doing the simulations are exactly those who care
the most for PREDICTABLE behavior in the real system, don't expect us
to ask for automatic reorderings.
--

					Robert I. Eachus

with Standard_Disclaimer;
use  Standard_Disclaimer;
function Message (Text: in Clever_Ideas) return Better_Ideas is...


* Re: Query about monitor (passive) task optimization
@ 1993-08-10  1:38 cis.ohio-state.edu!math.ohio-state.edu!darwin.sura.net!seas.gwu.edu!mfeld
  0 siblings, 0 replies; 30+ messages in thread
From: cis.ohio-state.edu!math.ohio-state.edu!darwin.sura.net!seas.gwu.edu!mfeld @ 1993-08-10  1:38 UTC (permalink / raw)


In article <244khh$58@schonberg.cs.nyu.edu> dewar@cs.nyu.edu (Robert Dewar) writes:

[stuff deleted]
>
>In short, I think it is probably a bad idea to put too much effort into this
>kind of optimization. Believe me, there are *plenty* of opportunities for
>more straightforward optimization that have by no means been fully exploited.
>As a result I think it would be a bad idea for vendors to put much effort
>in this area.

OK, you convinced me.
>
>One more thing. As we well know, Mike is on the lookout for cheap compilers
>for educational use, which is fair enough, but he cannot at the same time
>expect to influence the commercial vendors in terms of what optimizations
>they might offer. If indeed the constituency that he imagines exists, it
>would presumably have resources to encourage the vendors to move in this
>direction. The fact that vendors have not heard people clamoring for this
>kind of support I think legitimately indicates that this constuency does
>not in fact exist. 

Well, I certainly have two goals, and the constituencies are in fact
different ones. There are three things I'd like to see:

(1) cheap and fast commercial compilers to teach large classes:
    These are in reasonably good supply now that the vendors
    have heard our pleas and recognized the virtues of partnership.
    Who cares how fast the executables are - it's how fast they
    compile "hello world" that counts. This is the WATFIV genre.

(2) compilers that come with source code, to encourage the kind of
    experimentation with new features and optimization that, at
    least in part, made the C craze happen. Researchers and their grad
    students got their mitts on C compiler sources, which led to all
    those wonderful dialects and extended languages, and caused these folks
    to have LOTS of FUN. It's no fun hacking a compiler for which no
    source files are available. Ada/Ed is a start; GNAT is what we need
    and will get. My compiler class will study GNAT as an artifact.
    Maybe we'll even hack on it; who knows? QED.

(3) Commercial Ada compilers that can compete with and beat the best of 
    those for the other languages. We are getting there for some kinds of
    algorithms. Robert has not persuaded me that Ada compilers are
    exploiting the language the very best they can. He says there
    seems to be no constituency for such things. That's because he
    seems to be looking only in the Ada community. He keeps looking
    at the choir; I keep trying to make the congregation bigger.

    Does he truly believe that real-live support for the functionality
    of ADAR wouldn't serve as a wedge to get Ada into more MIS shops?

    Does he truly believe that real-live support for a good Fortran
    interface wouldn't serve as a wedge to get Ada into more engineering
    shops?

    I would like to see some vendor come forward and say "we have done
    a real, live, market study of engineering shops and we are convinced
    that those guys just wanna stay with Fortran instead of interfacing
    their old codes to new Ada." If my engineering school is any example,
    the way they are getting out of Fortran is _re-writing_ all those old
    codes - in C. They have NO IDEA there's an alternative.

I have no problem whatsoever in thinking there is room in the world for
all three kinds of compilers. Indeed, the early ramblings on Ada
predicted that these would exist by now, because the standard would
eliminate feature wars and therefore compilers would compete on these
other factors, on a level playing field. It didn't happen. That doesn't
mean it _couldn't_ happen. I can still lobby for it, and will.
>
>The facile belief that compilers can do amazing optimizations is often
>heard. Indeed I think the original Ada design was in some cases badly
>influenced by this belief. (Why, for instance, is there no proper control
>over storage management in Ada 83?  At least part of the answer is that
>some people thought that compilers would be able to do an amazing job
>of implementing garbage collectors and special storage management
>optimizations.)  Well, ten years ago I would have said this was an unrealistic
>expectation. Ten years later my opinion is not changed one iota, since
>there has been essentially no advance in compiler optimization technology
>(of a relevant kind) in this time.
A lot of this kind of stuff happens in the universities, as witness the
people who have built whole careers on tweaking Lisp garbage collectors.
Real funded research on Ada issues has just about dried up - just try to
get an NSF grant to work on Ada. Besides, research on optimizations can
only happen if compiler sources are available. Let's see what happens in
a few years after we've all had our chances to hack on GNAT.

Mike Feldman


* Re: Query about monitor (passive) task optimization
@ 1993-08-10  1:47 cis.ohio-state.edu!math.ohio-state.edu!darwin.sura.net!seas.gwu.edu!mfeld
  0 siblings, 0 replies; 30+ messages in thread
From: cis.ohio-state.edu!math.ohio-state.edu!darwin.sura.net!seas.gwu.edu!mfeld @ 1993-08-10  1:47 UTC (permalink / raw)


In article <244lfv$6o@schonberg.cs.nyu.edu> dewar@cs.nyu.edu (Robert Dewar) writes:
>Mike certainly seems to be in ranting and raving mode again ("Beltway
>bandit mentality") etc.

Thanks for the compliment.
>
[stuff deleted, mostly good]

>Certainly one might criticize the Ada vendors and second guess how they
>had spent their money (Mike you might want to talk to one of the many
>investors who lost their money and tell them that you are irritated that
>Ada compiler companies have never invested money up front to try to make
>money!) However, it is clear to me that if Mike directed the spending of
>such money he would manage to blow it much faster.

If you are offering to introduce me to some of these nameless, faceless
investors, I'm ready when you are. 

I'd like to hear their side of the story, especially whether any of them,
or the recipients of their money, focused on Ada early-on as dual-use.
I _do_ hear a lot of vendors _now_ saying "y'know, we never really focused
on marketing; we never thought Ada would be interesting beyond defense."
Come now, Robert, you've seen this myopia too. Yes, I've called it -
for effect - the "Beltway Bandit" mentality. Fiery language, sure. But
actually it's true. They shot straight for the government market and
mostly ignored others till quite recently. Is this not true?
>
>You want to know where I would spend money: bindings, bindings, and then
>if any was left over, bindings.
>
Well, OK, I can't quarrel with that. Where are all these bindings?
(Well, _you_ don't have the money...I guess nobody else thought of it.)

Mike

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Query about monitor (passive) task optimization
@ 1993-08-10 13:44 Robert Dewar
  0 siblings, 0 replies; 30+ messages in thread
From: Robert Dewar @ 1993-08-10 13:44 UTC (permalink / raw)


Grumble, grumble! Mike, don't over-generalize what I say. I have not said that
I thought that there was in general no constituency for trying to improve Ada
compilers. Instead I just expressed opinions on specific areas: automatic
passivization of tasks (I now think this is probably an idea that does not
fly effectively in the Ada context), and specialized Fortran interface
stuff (I think that's quite worthwhile, but I would not hold my breath
for the Fortran crowd to come beat down our walls -- rather I think of it
as a way of letting Ada users take better advantage of all that Fortran
stuff out there!) As you know, my emphasis is on bindings. If I want to
write an enterprise wide application using an OS/2 client-server network,
would I choose Ada? At the moment I have to say no because I do not have
an effective PM binding with any of the commercial compilers [the binding
supplied with the Alsys compiler is welcome, but is fairly thin, and probably
suitable only for limited mucking around].

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Query about monitor (passive) task optimization
@ 1993-08-10 13:51 Robert Dewar
  0 siblings, 0 replies; 30+ messages in thread
From: Robert Dewar @ 1993-08-10 13:51 UTC (permalink / raw)


Mike, one point that I keep making. It is absolutely NOT fair to paint all
Ada vendors as entirely defence oriented. In fact Alsys was founded by JDI
on the principle of specifically going after the non-military market. While
this succeeded fairly well in France, it was of limited success in the US.
I think part of the problem is that even though there were some successes
(you have the list!) compiler companies for any compiler for any language
have a heck of a hard time making money. It is true that Alsys has turned
to the defence industry more in recent years -- you go where the money is
in any business. It may also be true that Alsys did not make the right
decisions, who knows? But it is truly unfair to include Alsys in your
blanket criticism of Ada companies' attitudes.

I suspect the criticism is also inappropriate for several other Ada vendors,
but I am not close enough to speak for them so I will let them speak for
themselves [just an example: Rational's involvement with the new government
information system in Singapore seems to me a clear counter-example and I
think there are probably many others].

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Query about monitor (passive) task optimization
@ 1993-08-10 23:14 Michael Feldman
  0 siblings, 0 replies; 30+ messages in thread
From: Michael Feldman @ 1993-08-10 23:14 UTC (permalink / raw)


In article <2488np$2dj@schonberg.cs.nyu.edu> dewar@cs.nyu.edu (Robert Dewar) writes:
>Grumble, grumble! Mike, don't over-generalize what I say. I have not said that
>I thought that there was in general no constituency for trying to improve Ada
>compilers. Instead I just expressed opinions on specific areas: automatic
>passivization of tasks (I now think this is probably an idea that does not
>fly effectively in the Ada context), and specialized Fortran interface
>stuff (I think that's quite worthwhile, but I would not hold my breath
>for the Fortran crowd to come beat down our walls -- rather I think of it
>as a way of letting Ada users take better advantage of all that Fortran
>stuff out there!) 

Well, whichever way you want it. You and I are close enough in age to 
have gone through the PL/I fiasco. Undoubtedly we differ on the main causes
of PL/I's failure to catch on in the scientific area (PL/I is moderately
successful in the IBM world, in IS applications). I spent a number of
years around (would-be) PL/I folks in both areas. The objections at the 
time, among the folks I knew, fell in two areas:

- inefficient compilers. PL/I-F was a dog; it's amazing it worked at all,
  and it could hardly have been otherwise given IBM's state of knowledge and
  desperate hiring - Brooks has a lot to say on this in "The Mythical
  Man-Month". But compilers improve over time.

- IBM's perceived contempt for science and engineering in jumping to
  row-major array mapping. This gave the Fortranners the excuse they
  needed to resist PL/I. It's easy for guys like you and me to say,
  with hackerish snottiness, "just flip the subscripts, dummy." 
  But an excuse is an excuse, and it worked, didn't it?

So what's this got to do with Ada?

PL/I was a creature of the mid-60's. Ada became a reality nearly 20 years
later. In writing my data structures book (circa 1984) I first became
aware of the _wonderful_ Ada habit of _not_ micro-managing storage reps.

Indeed, I think it was _you_ I asked whether I had read the LRM correctly
that no multidimensional storage mapping is predefined. Indeed the
founding fathers and mothers of Ada were keen enough to see that it
was a Good Idea to leave this to the implementers; one of the reasons
mentioned was that _some_ compilers could support column-major arrays
and therefore hook up nicely to all that Fortran (IMSL, say), without
making _all_ compilers do it. 

You are probably right that _now_ the Fortranners might not beat our doors 
down. Maybe they would not have in, say, 1985 either, but NASA and all 
those guys in the National Labs would have found it a whole lot easier to 
buy into Ada if there were compilers that did right by Fortran libraries. 
What is so painful is that it would not have been hard or expensive to 
do - compilers can flip subscripts quite easily, can't they? This isn't
the brain surgery that, say, detecting passive tasks would be.

As it is, my good colleagues in engineering tell me to let them know when
there are some good, moderately-priced, finite-element packages available 
in Ada. At this rate, hell will freeze over first.

>As you know, my emphasis is on bindings. If I want to
>write an enterprise wide application using an OS/2 client-server network,
>would I choose Ada? At the moment I have to say no because I do not have
>an effective PM binding with any of the commercial compilers [the binding
>supplied with the Alsys compiler is welcome, but is fairly thin, and probably
>suitable only for limited mucking around].

Oh, I quite agree. Some _real_ _good_ bindings and UIMS tools would be
great. 'Course, as you say, the vendors have no $$ to develop these
things the right way. Try Meridian's Windows binding. Usable but REAL
thin. You can learn to write Ada/Windows apps just by transliterating
the C code in a good Windows book. If I have to do _that_, why would
I choose to use Ada? Only someone who had no choice would bother.

How 'bout some bindings that do _good_ Ada?

Oh - I've got another idea for a binding. How 'bout IMSL?

Mike Feldman

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Query about monitor (passive) task optimization
@ 1993-08-10 23:49 Michael Feldman
  0 siblings, 0 replies; 30+ messages in thread
From: Michael Feldman @ 1993-08-10 23:49 UTC (permalink / raw)


In article <248946$2f4@schonberg.cs.nyu.edu> dewar@cs.nyu.edu (Robert Dewar) writes:
>Mike, one point that I keep making. It is absolutely NOT fair to paint all
>Ada vendors as entirely defence oriented. In fact Alsys was founded by JDI
>on the principle of specifically going after the non-military market. While
>this succeeded fairly well in France, it was of limited success in the US.

Since you brought it up, I must respond. It may look purely self-serving
to say so, but Alsys missed out on a chance to hook students and teachers,
in the years before 1991 when, of all the Ada vendors, Alsys was BY FAR the
most contemptuous of dealing with guys like us. You may have been too
close to Alsys to see how they were treating the rest of us. Dangerous
as it is to argue post-hoc-ergo-propter-hoc, I can't help thinking that
there may have been some relationship.

>I think part of the problem is that even though there were some successes
>(you have the list!) compiler companies for any compiler for any language
>have a heck of a hard time making money. It is true that Alsys has turned
>to the defence industry more in recent years -- you go where the money is
>in any business. It may also be true that Alsys did not make the right
>decisions, who knows? but it is truly unfair to include Alsys in your
>blanket criticism of Ada companies' attitudes.

Alsys is indeed very successful abroad in things like air traffic control.
And don't get me wrong - obviously I have no problem with any company selling
to defense. It's when they get focused _only_ on defense that I worry.
(Not just compiler companies, either - I knew something strange was
happening in this country when Singer stopped selling sewing machines and
Westinghouse stopped selling lightbulbs.)

As for tarring them all with the same brush - I have worked very hard to
keep my criticisms general and my compliments specific. I walk a fine
line, because I do _not_ want to single out individual companies or
individual people for abuse. My own integrity is at stake on this. What 
I flame at is a _culture_ in the Ada business. Some companies are maybe
a little better here, a little worse there. But no Ada company that I could
observe, in this country, with the possible exception of RR in the early
days, set out to make its name a household word (as it were). Alsys
included. Meridian is as close as one gets. At least they advertised
in the Ada issue of CACM...

Surely the Software Through Pictures company is not that much larger or 
richer than the Ada shops, yet IDE finds the bucks to put color full-pagers 
frequently in IEEE Software. Name recognition counts in this stuff.
(Oh, BTW - you'd almost never guess from the IDE ads that their product
talks to Ada as well as C++...)

Ada houses are, now, starting to change. I don't want to take more credit
than is due me (or some of my colleagues who hit these issues in other 
forums), and arguing back from effect to cause is dangerous, but I think
we have had some effect. Undoubtedly the major cause is the shrinking
defense budgets.

Whatever the cause, the change - ESPECIALLY at Alsys - is _most_ welcome.
>
>I suspect the criticism is also inappropriate for several other Ada vendors,
>but I am not close enough to speak for them so I will let them speak for
>themselves [just an example: Rational's involvement with the new government
>information system in Singapore seems to me a clear counter-example and I
>think there are probably many others].

I have not heard about the Singapore system, and it's not mentioned in
Rational's Summer 93 newsletter. Nor was it on the list Rational sent
me a few months ago. When/where was this announced?

Rational is also showing a new face, and a most welcome one at that.

Mike Feldman

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Query about monitor (passive) task optimization
@ 1993-08-11  2:34 agate!usenet.ins.cwru.edu!magnus.acs.ohio-state.edu!math.ohio-state.edu!u
  0 siblings, 0 replies; 30+ messages in thread
From: agate!usenet.ins.cwru.edu!magnus.acs.ohio-state.edu!math.ohio-state.edu!u @ 1993-08-11  2:34 UTC (permalink / raw)


In article <1993Aug3.201003.448@seas.gwu.edu> mfeldman@seas.gwu.edu (Michael Feldman) writes:

>So what do we have now? POSIX/Ada bindings. Packages. Subroutine calls.
>I'm not dumping on the good work done in developing the POSIX/Ada
>bindings; I'm saying that such things should not really be necessary,
>if the compilers were doing what we pay for them to do - give us good, 
>correct code that really exploits the hardware and the power of the language.

Mike, we appreciate the fact that you continue to be a supporter of the
quality of work done on the POSIX Ada bindings.  I am afraid the statement
quoted above may mislead some readers as to the intent of the POSIX Ada
bindings.  Our intent was not to provide alternate methods to accomplish
the same things as are available in the language, but to provide a
portable method of access to operations that either have no analogue
in the language or cannot be accessed portably from a correctly
written Ada program.

For example, Ada provides no way to run a program written in a
different language.  Yes, you can call procedures written in other
languages (although not portably), but there is no mechanism to execute
another program.  This is not a deficiency in the Ada language, but a
perfect opportunity for a secondary standard to fill a gap.  Other areas
filled by the POSIX Ada standard are specification of file names, a
definition of the mapping of Ada files onto OS files (to aid in
inter-language cooperation), and portable signal handling, to name a few.

In all of these cases, it is not a lack of quality implementations, but
simply a lack of a common definition for how the implementors should
express the concepts.

Throughout the deliberations on the standard, it was debated whether a
binding interface to a feature should be provided, or whether access to
particular OS functions should be left up to the compiler.  If the concept could
be presented clearly in Ada, a binding to the feature was not produced.

Similar deliberations are underway right now on the POSIX Ada Realtime
standard, with the debate being obviously most heated on how much of the
POSIX threads interface to reveal and how much to hide behind the tasking
model, and how much of the mapping between tasks and threads to define.
Clearly this work is being heavily influenced by 9X.  Your help on these
decisions would be greatly appreciated.

Jim

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Query about monitor (passive) task optimization
@ 1993-08-13  7:32 agate!howland.reston.ans.net!darwin.sura.net!sgiblab!munnari.oz.au!goanna
  0 siblings, 0 replies; 30+ messages in thread
From: agate!howland.reston.ans.net!darwin.sura.net!sgiblab!munnari.oz.au!goanna @ 1993-08-13  7:32 UTC (permalink / raw)


In article <1993Aug10.231441.4042@seas.gwu.edu>, mfeldman@seas.gwu.edu (Michael Feldman) writes:
> - IBM's perceived contempt for science and engineering in jumping to
>   row-major array mapping. This gave the Fortranners the excuse they
>   needed to resist PL/I. It's easy for guys like you and me to say,
>   with hackerish snottiness, "just flip the subscripts, dummy." 
>   But an excuse is an excuse, and it worked, didn't it?

For what it's worth, "iSUB defining" can be used to get column-major order in PL/I.
With a little help from the preprocessor, it can even be hidden.
The design of PL/I did receive a lot of input from SHARE.

> >As you know, my emphasis is on bindings. If I want to
> >write an enterprise wide application using an OS/2 client-server network,
> >would I choose Ada? At the moment I have to say no because I do not have
> >an effective PM binding with any of the commercial compilers [the binding
> >supplied with the Alsys compiler is welcome, but is fairly thin, and probably
> >suitable only for limited mucking around].
 
> Oh, I quite agree. Some _real_ _good_ bindings and UIMS tools would be
> great.

I was looking at Presentation Manager recently, and every couple of pages
I was thinking "if only there was a thick binding for Ada for this stuff,
it would be _so_ much easier to use."  Would designing a PM thick binding
make a good project in a software engineering course?  I'd like to see it
happen.  If I wasn't swamped with N dozen other things I'd like to MAKE it
happen.
-- 
Richard A. O'Keefe; ok@goanna.cs.rmit.oz.au; RMIT, Melbourne, Australia.

^ permalink raw reply	[flat|nested] 30+ messages in thread

end of thread, other threads:[~1993-08-13  7:32 UTC | newest]

Thread overview: 30+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
1993-08-09  4:41 Query about monitor (passive) task optimization Robert Dewar
  -- strict thread matches above, loose matches on Subject: below --
1993-08-13  7:32 agate!howland.reston.ans.net!darwin.sura.net!sgiblab!munnari.oz.au!goanna
1993-08-11  2:34 agate!usenet.ins.cwru.edu!magnus.acs.ohio-state.edu!math.ohio-state.edu!u
1993-08-10 23:49 Michael Feldman
1993-08-10 23:14 Michael Feldman
1993-08-10 13:51 Robert Dewar
1993-08-10 13:44 Robert Dewar
1993-08-10  1:47 cis.ohio-state.edu!math.ohio-state.edu!darwin.sura.net!seas.gwu.edu!mfeld
1993-08-10  1:38 cis.ohio-state.edu!math.ohio-state.edu!darwin.sura.net!seas.gwu.edu!mfeld
1993-08-09 21:15 Robert I. Eachus
1993-08-09 14:48 Jonathan Schilling
1993-08-09 12:14 Robert Dewar
1993-08-09 11:02 Richard Kenner
1993-08-09  4:57 Robert Dewar
1993-08-09  4:47 Robert Dewar
1993-08-03 20:10 Michael Feldman
1993-08-03 20:01 Michael Feldman
1993-08-03 15:26 Jonathan Schilling
1993-08-03 10:11 Bjorn Kallberg
1993-08-02 18:17 Michael Feldman
1993-08-02 18:13 Michael Feldman
1993-08-02 17:47 Michael Feldman
1993-08-02 14:35 Jonathan Schilling
1993-08-02 13:19  Arthur Evans
1993-08-02  6:41 Bjorn Kallberg
1993-08-02  3:30 Michael Feldman
1993-08-02  1:57 Jonathan Schilling
1993-08-01  3:25 Michael Feldman
1993-07-31  3:27 Robert Dewar
1993-07-30 17:51 Michael Feldman
