comp.lang.ada
* Ada.Execution_Time
@ 2010-12-12  4:19 BrianG
  2010-12-12  5:27 ` Ada.Execution_Time Jeffrey Carter
  2010-12-12 16:56 ` Ada.Execution_Time Jeffrey Carter
  0 siblings, 2 replies; 124+ messages in thread
From: BrianG @ 2010-12-12  4:19 UTC (permalink / raw)


Has anyone actually used Ada.Execution_Time?  How is it supposed to be used?

I tried to use it for two (what I thought would be) simple uses: 
display the execution time of one task, and sum the execution times of 
a group of related tasks.

In both cases, I don't see anything in that package (or in 
Ada.Real_Time, which appears to be needed to use it) that provides any 
straightforward way to use the value reported.

For display, you can't use CPU_Time directly, since it's private.  You 
can Split it, but that gives you a Seconds_Count and a Time_Span, and 
Time_Span is also private.  So the best you can do is Split it, and then 
convert the parts to a type that can be used (like Duration).

For summing, there is "+", but only between CPU_Time and Time_Span, so 
you can't add two CPU_Times.  Perhaps you can use Split, sum the 
seconds, and then use "+" to add the fractions to the next Clock (before 
repeating Split/add/"+" with it, then you need to figure out what to do 
with the last fractional second), but that seems an odd intended use.

The best I could come up with was to create my own function, like this 
(using the same definition for T as in Clock), which can be used for both:

function Task_CPU_Time (T : ...) return Duration is
    Sec      : Ada.Real_Time.Seconds_Count;
    Fraction : Ada.Real_Time.Time_Span;
begin
    Ada.Execution_Time.Split(Ada.Execution_Time.Clock(T), Sec, Fraction);
    return To_Duration(Ada.Real_Time.Seconds(Integer(Sec)))
         + To_Duration(Fraction);
end Task_CPU_Time;

Wouldn't it make sense to put something like this into that package? 
Then, at least, there'd be something that's directly available to use - 
and you wouldn't need another package.  (I'm not sure about the 
definitions of CPU_Time and Duration, and whether the conversions would 
be guaranteed to work.)
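For the summing case, here's a sketch of where I ended up, wrapped in a 
package so the array type has somewhere to live (the package name and 
the Task_Id_Array type are my own inventions, not anything from the RM, 
and I haven't worried about Duration overflowing for very long-running 
tasks):

```ada
with Ada.Execution_Time;
with Ada.Real_Time;
with Ada.Task_Identification;

package Task_Timing is
   type Task_Id_Array is
     array (Positive range <>) of Ada.Task_Identification.Task_Id;

   --  Sum the execution times of a group of tasks, converting each
   --  CPU_Time to Duration, since "+" on two CPU_Times doesn't exist.
   function Group_CPU_Time (Tasks : Task_Id_Array) return Duration;
end Task_Timing;

package body Task_Timing is
   function Group_CPU_Time (Tasks : Task_Id_Array) return Duration is
      Total    : Duration := 0.0;
      Sec      : Ada.Real_Time.Seconds_Count;
      Fraction : Ada.Real_Time.Time_Span;
   begin
      for I in Tasks'Range loop
         Ada.Execution_Time.Split
           (Ada.Execution_Time.Clock (Tasks (I)), Sec, Fraction);
         Total := Total + Duration (Sec)
                        + Ada.Real_Time.To_Duration (Fraction);
      end loop;
      return Total;
   end Group_CPU_Time;
end Task_Timing;
```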

--BrianG



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-12  4:19 Ada.Execution_Time BrianG
@ 2010-12-12  5:27 ` Jeffrey Carter
  2010-12-12 16:56 ` Ada.Execution_Time Jeffrey Carter
  1 sibling, 0 replies; 124+ messages in thread
From: Jeffrey Carter @ 2010-12-12  5:27 UTC (permalink / raw)


On 12/11/2010 09:19 PM, BrianG wrote:
>
> function Task_CPU_Time (T : ...) return Duration is
> Sec : Ada.Real_Time.Seconds_Count;
> Fraction : Ada.Real_Time.Time_Span;
> begin
> Ada.Execution_Time.Split(Ada.Execution_Time.Clock(T), Sec, Fraction);
> return To_Duration(Ada.Real_Time.Seconds(Integer(Sec)))
> + To_Duration(Fraction);
> end Task_CPU_Time;

I think you're over complicating things.

function To_Duration (Time : Ada.Execution_Time.CPU_Time) return Duration is
    Seconds  : Ada.Real_Time.Seconds_Count;
    Fraction : Ada.Real_Time.Time_Span;
begin -- To_Duration
    Ada.Execution_Time.Split (Time, Seconds, Fraction);

    return Duration (Seconds) + Ada.Real_Time.To_Duration (Fraction);
end To_Duration;
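For display, that then feeds Fixed_IO directly; a sketch, assuming the 
To_Duration function above is visible (the procedure name and the 
formatting are arbitrary):

```ada
with Ada.Execution_Time;
with Ada.Text_IO;

--  Print the calling task's execution time via To_Duration (above).
procedure Show_CPU_Time is
   package Duration_IO is new Ada.Text_IO.Fixed_IO (Duration);
begin
   Duration_IO.Put
     (To_Duration (Ada.Execution_Time.Clock), Aft => 6, Exp => 0);
   Ada.Text_IO.New_Line;
end Show_CPU_Time;
```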

-- 
Jeff Carter
"This school was here before you came,
and it'll be here before you go."
Horse Feathers
48



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-12  4:19 Ada.Execution_Time BrianG
  2010-12-12  5:27 ` Ada.Execution_Time Jeffrey Carter
@ 2010-12-12 16:56 ` Jeffrey Carter
  2010-12-12 21:59   ` Ada.Execution_Time BrianG
  1 sibling, 1 reply; 124+ messages in thread
From: Jeffrey Carter @ 2010-12-12 16:56 UTC (permalink / raw)


On 12/11/2010 09:19 PM, BrianG wrote:
>
> function Task_CPU_Time (T : ...) return Duration is
> Sec : Ada.Real_Time.Seconds_Count;
> Fraction : Ada.Real_Time.Time_Span;
> begin
> Ada.Execution_Time.Split(Ada.Execution_Time.Clock(T), Sec, Fraction);
> return To_Duration(Ada.Real_Time.Seconds(Integer(Sec)))
> + To_Duration(Fraction);
> end Task_CPU_Time;

I think you're over complicating things.

function To_Duration (Time : Ada.Execution_Time.CPU_Time) return Duration is
    Seconds  : Ada.Real_Time.Seconds_Count;
    Fraction : Ada.Real_Time.Time_Span;
begin -- To_Duration
    Ada.Execution_Time.Split (Time, Seconds, Fraction);

    return Duration (Seconds) + Ada.Real_Time.To_Duration (Fraction);
end To_Duration;

-- 
Jeff Carter
"This school was here before you came,
and it'll be here before you go."
Horse Feathers
48



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-12 16:56 ` Ada.Execution_Time Jeffrey Carter
@ 2010-12-12 21:59   ` BrianG
  2010-12-12 22:08     ` Ada.Execution_Time BrianG
  2010-12-13  9:28     ` Ada.Execution_Time Georg Bauhaus
  0 siblings, 2 replies; 124+ messages in thread
From: BrianG @ 2010-12-12 21:59 UTC (permalink / raw)


Jeffrey Carter wrote:
> On 12/11/2010 09:19 PM, BrianG wrote:
>>
...
> I think you're over complicating things.
> 
> function To_Duration (Time : Ada.Execution_Time.CPU_Time) return 
> Duration is
>    Seconds  : Ada.Real_Time.Seconds_Count;
>    Fraction : Ada.Real_Time.Time_Span;
> begin -- To_Duration
>    Ada.Execution_Time.Split (Time, Seconds, Fraction);
> 
>    return Duration (Seconds) + Ada.Real_Time.To_Duration (Fraction);
> end To_Duration;
> 
That's what I get for evolving thru several iterations before coming up 
with a function.  (BTW, where is To_Duration defined for integer types? 
  The only one in the Index is for Time_Span.)

But my question still remains:  What's the intended use of 
Ada.Execution_Time?  Is there an intended use where its content 
(CPU_Time, Seconds_Count and Time_Span, "+", "<", etc.) is useful?

--BrianG



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-12 21:59   ` Ada.Execution_Time BrianG
@ 2010-12-12 22:08     ` BrianG
  2010-12-13  9:28     ` Ada.Execution_Time Georg Bauhaus
  1 sibling, 0 replies; 124+ messages in thread
From: BrianG @ 2010-12-12 22:08 UTC (permalink / raw)


BrianG wrote:
> Jeffrey Carter wrote:
>> On 12/11/2010 09:19 PM, BrianG wrote:
> ...
>                   (BTW, where is To_Duration defined for integer types? 
>  The only one in the Index is for Time_Span.)
OOPS, forget that part.  I misread your code.



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-12 21:59   ` Ada.Execution_Time BrianG
  2010-12-12 22:08     ` Ada.Execution_Time BrianG
@ 2010-12-13  9:28     ` Georg Bauhaus
  2010-12-13 22:25       ` Ada.Execution_Time Randy Brukardt
  2010-12-15  0:16       ` Ada.Execution_Time BrianG
  1 sibling, 2 replies; 124+ messages in thread
From: Georg Bauhaus @ 2010-12-13  9:28 UTC (permalink / raw)


On 12/12/10 10:59 PM, BrianG wrote:


> But my question still remains: What's the intended use of Ada.Execution_Time? Is there an intended use where its content (CPU_Time, Seconds_Count and Time_Span, "+", "<", etc.) is useful?

I think that your original posting mentions a use that is quite
consistent with what the rationale says: each task has its own time.
Point-in-time objects can be split into values suitable for
arithmetic, using Time_Span objects.  Then, from the result of the
arithmetic, you produce an object suitable for printing, as desired.
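A sketch of the direction the rationale points in, i.e. monitoring a
task against a CPU budget rather than displaying anything (the 10 ms
figure and the procedure name are made up):

```ada
with Ada.Execution_Time;
with Ada.Real_Time;  use Ada.Real_Time;

--  The "+" and ">" operators of Ada.Execution_Time are aimed at this
--  kind of monitoring arithmetic, not at display.
procedure Check_Budget is
   use type Ada.Execution_Time.CPU_Time;
   Budget : constant Ada.Execution_Time.CPU_Time :=
     Ada.Execution_Time.Clock + Milliseconds (10);
begin
   --  ... perform the work being monitored ...
   if Ada.Execution_Time.Clock > Budget then
      raise Program_Error with "CPU budget exceeded";
   end if;
end Check_Budget;
```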


While this seems like a bit much to write,
it makes things explicit, like Ada forces one to be
in many cases.   That's how I explain the series of
steps to myself.

Isn't it just like "null;" being required to express
the null statement?  It seems to me to be a logical
consequence of requiring that intents must be stated
explicitly.

<rant>
Some found the explicit null statement to be unusual,
bothersome, and confusing in the presence of a pragma.
Thus it was dropped by the language designers.

The little learning it took, the few words of explanation,
explicitness of intent dropped in favor of a special case in Ada
2012 which lets one use a pragma in place of a null statement.
(And re-introduce "null;" once rewriting / removing debug stuff
/ etc is taking place.)

Let's hope we can buy support tools in the future to
help us ensure the effects of language special casing can
be bridled per project.
</rant>



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-13  9:28     ` Ada.Execution_Time Georg Bauhaus
@ 2010-12-13 22:25       ` Randy Brukardt
  2010-12-13 22:42         ` Ada.Execution_Time J-P. Rosen
                           ` (2 more replies)
  2010-12-15  0:16       ` Ada.Execution_Time BrianG
  1 sibling, 3 replies; 124+ messages in thread
From: Randy Brukardt @ 2010-12-13 22:25 UTC (permalink / raw)


"Georg Bauhaus" <rm-host.bauhaus@maps.futureapps.de> wrote in message 
news:4d05e737$0$6980$9b4e6d93@newsspool4.arcor-online.net...
...
> <rant>
> Some found the explicit null statement to be unusual,
> bothersome, and confusing in the presence of a pragma.
> Thus it was dropped by the language designers.

The logic is that you need a "null;" statement when there is nothing in some 
list of statements. A pragma (or label) is not "nothing", so the requirement 
for "null;" is illogical in those cases.

> The little learning it took, the few words of explanation,
> explicitness of intent dropped in favor of a special case in Ada
> 2012 which lets one use a pragma in place of a null statement.

Yes. This is primarily an issue for a pragma Assert.

> (And re-introduce "null;" once rewriting / removing debug stuff
> / etc is taking place.)

True, but that is always the case when removing debug stuff. The change here 
has no real effect on that. Most of mine is some sort of logging:

       if Something then
            <Lot of code>
       else
            Log_Item ("Something is False");
       end if;

If I remove the logging, I have to add a "null;" or remove the "else".
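That is, once the logging goes, the fragment has to become something
like this (a sketch reusing the placeholder from above):

```ada
if Something then
     <Lot of code>
else
     null;   --  was: Log_Item ("Something is False");
end if;
```

or else lose the "else" branch entirely.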

The situation for a pragma in Ada 2012 is no different:

       if Something then
            <Lot of code>
       else
            pragma Assert (not Something_Else);
       end if;

And it seems less likely that you would change this than the first form, and 
neither seems that likely.

(Note that this was not a change I cared about much in either direction. The 
use of pragmas for executable things is bad language design IMHO, and in any 
case I simply don't use them for that sort of purpose, because they are much 
too limited to be of much use in a complex system.)

> Let's hope we can buy support tools in the future to
> help us ensure the effects of language special casing can
> be bridled per project.

I'd suggest you stick with Ada 2005. Ada 2012 is all about easier ways to 
write Ada code: not just these tweaks, but also conditional expressions, 
expression functions, iterator syntax, indexing of containers, the reference 
aspect (giving automatic dereferencing) are all "syntax sugar". That is, 
they're all about making it easier to write (and in most cases, read) Ada 
code in a style that is closer to the problem rather than the solution. (One 
could also put all of the contract stuff into this category, as you can 
write preconditions, postconditions, invariants, and predicates using pragma 
Assert -- it's just a lot more reliable for the compiler to decide where 
they need to go.)

                                               Randy.





^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-13 22:25       ` Ada.Execution_Time Randy Brukardt
@ 2010-12-13 22:42         ` J-P. Rosen
  2010-12-14  3:31         ` Ada.Execution_Time Jeffrey Carter
  2010-12-14  8:17         ` Ada.Execution_Time Vinzent Hoefler
  2 siblings, 0 replies; 124+ messages in thread
From: J-P. Rosen @ 2010-12-13 22:42 UTC (permalink / raw)


Le 13/12/2010 23:25, Randy Brukardt a écrit :
>> The little learning it took, the few words of explanation,
>> explicitness of intent dropped in favor of a special case in Ada
>> 2012 which lets one use a pragma in place of a null statement.
> 
> Yes. This is primarily an issue for a pragma Assert.
> 
[...]
> (Note that this was not a change I cared about much in either direction. The 
> use of pragmas for executable things is bad language design IMHO, and in any 
And to think that "assert" was a statement in (preliminary) Ada 1980...

-- 
---------------------------------------------------------
           J-P. Rosen (rosen@adalog.fr)
Adalog a déménagé / Adalog has moved:
2 rue du Docteur Lombard, 92441 Issy-les-Moulineaux CEDEX
Tel: +33 1 45 29 21 52, Fax: +33 1 45 29 25 00



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-13 22:25       ` Ada.Execution_Time Randy Brukardt
  2010-12-13 22:42         ` Ada.Execution_Time J-P. Rosen
@ 2010-12-14  3:31         ` Jeffrey Carter
  2010-12-14 15:42           ` Ada.Execution_Time Robert A Duff
  2010-12-14  8:17         ` Ada.Execution_Time Vinzent Hoefler
  2 siblings, 1 reply; 124+ messages in thread
From: Jeffrey Carter @ 2010-12-14  3:31 UTC (permalink / raw)


On 12/13/2010 03:25 PM, Randy Brukardt wrote:
>
> The logic is that you need a "null;" statement when there is nothing in some
> list of statements. A pragma (or label) is not "nothing", so the requirement
> for "null;" is illogical in those cases.

The logic I recall from watching videos of Ichbiah, Barnes, and Firth presenting 
Ada (80) at the Ada Launch was that a null statement indicates that the sequence 
of statements (SOS) was intentionally null; it was contrasted to the single 
semicolon used for the null statement in some other languages, which is easily 
missed when reading, easily accidentally deleted when editing, and generally 
considered a Bad Thing. Another justification is that it prevents the reader 
from wondering if something was accidentally deleted.

As such, a pragma might be "something" and as such not require a null statement, 
but I would disagree about a label. A label by itself would make me wonder what 
happened to the statement it labels.

Given the argument that the null statement is needed when there is no other SOS, 
that SOS refers to executable statements, and that neither a pragma nor a label 
are considered such, I would guess this is contrary to the intention of the 
original language designers.

I so like the idea that something explicit is required when a region 
deliberately contains nothing that I'd like to see "null;" as a declaration that 
is required when a declarative region contains nothing else.

-- 
Jeff Carter
"He didn't get that nose from playing ping-pong."
Never Give a Sucker an Even Break
110



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-13 22:25       ` Ada.Execution_Time Randy Brukardt
  2010-12-13 22:42         ` Ada.Execution_Time J-P. Rosen
  2010-12-14  3:31         ` Ada.Execution_Time Jeffrey Carter
@ 2010-12-14  8:17         ` Vinzent Hoefler
  2010-12-14 15:51           ` Ada.Execution_Time Adam Beneschan
                             ` (2 more replies)
  2 siblings, 3 replies; 124+ messages in thread
From: Vinzent Hoefler @ 2010-12-14  8:17 UTC (permalink / raw)


Randy Brukardt wrote:

> "Georg Bauhaus" <rm-host.bauhaus@maps.futureapps.de> wrote in message
> news:4d05e737$0$6980$9b4e6d93@newsspool4.arcor-online.net...
> ...
>> <rant>
>> Some found the explicit null statement to be unusual,
>> bothersome, and confusing in the presence of a pragma.
>> Thus it was dropped by the language designers.
>
> The logic is that you need a "null;" statement when there is nothing in some
> list of statements. A pragma (or label) is not "nothing", so the requirement
> for "null;" is illogical in those cases.

I believe, back in the old days, there was a requirement that the presence or
absence of a pragma shall have no effect on the legality of the program, wasn't
there?

Well, even if it just was that it "shall have no effect on a legal program", I
still wonder why it is so necessary to introduce the possibility to turn an
illegal program (without the null statement) into a legal one merely by adding
some random pragma where a "sequence of statements" was expected. A pragma is
/not/ a statement.

I agree with Georg here: this is an unnecessary change with no apparent use.
It supports none of the three pillars of the Ada language: "safety",
"readability", or "maintainability".


Vinzent.

-- 
Beaten by the odds since 1974.



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-14  3:31         ` Ada.Execution_Time Jeffrey Carter
@ 2010-12-14 15:42           ` Robert A Duff
  2010-12-14 16:17             ` Ada.Execution_Time Jeffrey Carter
  2010-12-14 19:10             ` Ada.Execution_Time Warren
  0 siblings, 2 replies; 124+ messages in thread
From: Robert A Duff @ 2010-12-14 15:42 UTC (permalink / raw)


Jeffrey Carter <spam.jrcarter.not@spam.not.acm.org> writes:

> The logic I recall from watching videos of Ichbiah, Barnes, and Firth
> presenting Ada (80) at the Ada Launch was that a null statement
> indicates that the sequence of statements (SOS) was intentionally null;
> it was contrasted to the single semicolon used for the null statement in
> some other languages, which is easily missed when reading, easily
> accidentally deleted when editing, and generally considered a Bad
> Thing.

It's kind of a "belt and suspenders" solution.

In C, you can say:

    for (<stuff-with-side-effects>)
        ;

and it's indeed easy to miss the empty statement,
especially if the ";" is on the same line as the "for",
and/or the following code is mis-indented.

Ada solves this two ways -- you have to write "end loop;"
and you also have to write "null;".  The "end loop;"
already solves the problem.

Similar issue with dangling "else" -- Ada doesn't have them
because of "end if".  (Amazingly, in 2010, people continue
to design programming languages with the dangling "else"
problem.  No excuse for it!)

> As such, a pragma might be "something" and as such not require a null
> statement, but I would disagree about a label. A label by itself would
> make me wonder what happened to the statement it labels.

Conceptually, a label does not label a statement -- it labels
a place in the code.  The Ada syntax rules are confused in
this regard.  I mean, when you say "goto L;" you don't mean
to execute the statement labeled <<L>> (and then come back here),
you mean to jump to the place marked <<L>>, and continue
on from there.

So if you want to jump to the end of a statement list, e.g.

    ...loop
        ...
        if ... then
            ...
            goto Continue;
        end if;
        ...
        <<Continue>>
    end loop;

it's just noise to put "null;" after <<Continue>>.

It's no big deal, of course, since gotos are rare.

> Given the argument that the null statement is needed when there is no
> other SOS, that SOS refers to executable statements, and that neither a
> pragma nor a label are considered such,...

Well, a pragma Assert is a lot like a statement.
I find it really annoying to have to write "null;"
before or after some Asserts.  Pure noise, IMHO.
(Again, no big deal.)

>... I would guess this is contrary
> to the intention of the original language designers.

Almost everything that changed in Ada 95, 2005, and 2012 is
contrary to the original intent.  Indeed, during the
Ada 9X project, Jean Ichbiah was quite angry that
we were scribbling graffiti all over his near-perfect
work of art.  So be it.

> I so like the idea that something explicit is required when a region
> deliberately contains nothing that I'd like to see "null;" as a
> declaration that is required when a declarative region contains nothing
> else.

I know.  I've seen your code with "-- null;" in empty declarative
parts.  I'm sure you realize that in this case, yours is a minority
opinion.

It's another belt and suspenders thing.  If you forgot to
declare anything in the declarative part, you'll likely
get errors when you refer to those missing declarations.
Unless, of course, you forgot the code as well.  When
we see:

    procedure P is
    begin
        null;
    end P;

how do we know the programmer didn't REALLY mean:

    procedure P is
        Message: constant String := "Hello, world.";
    begin
        Put_Line (Message);
    end P;

?

Should we write:

    procedure P is
        Message: constant String := "Hello, world.";
    begin
        Put_Line (Message);
        null;
    end P;

to indicate that we really did NOT want to do anything after
the Put_Line?

;-)

- Bob



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-14  8:17         ` Ada.Execution_Time Vinzent Hoefler
@ 2010-12-14 15:51           ` Adam Beneschan
  2010-12-14 15:53           ` Ada.Execution_Time Robert A Duff
  2010-12-14 19:43           ` Ada.Execution_Time anon
  2 siblings, 0 replies; 124+ messages in thread
From: Adam Beneschan @ 2010-12-14 15:51 UTC (permalink / raw)


On Dec 14, 12:17 am, "Vinzent Hoefler"
<0439279208b62c95f1880bf0f8776...@t-domaingrabbing.de> wrote:

> > The logic is that you need a "null;" statement when there is nothing in some
> > list of statements. A pragma (or label) is not "nothing", so the requirement
> > for "null;" is illogical in those cases.
>
> I believe, back in the old days, there was a requirement that the presence or
> absence of a pragma shall have no effect on the legality of the program, wasn't
> there?

RM83 2.8(8): "An implementation is not allowed to define pragmas whose
presence or absence influences the legality of the text outside such
pragmas."  But note that this applied only to *implementation-defined*
pragmas; language-defined pragmas could influence legality (in
particular, the INTERFACE pragma could make an illegal program, i.e.
one in which a subprogram declaration didn't have a corresponding
body, legal).  I don't think this was intended to be a statement about
the *syntax* rules, since the syntax rules are defined by the language
and can't be changed by the implementation, although I suppose that
this rule could have been a reflection of an unstated principle that
was used when the syntax rules were designed.
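For instance (an Ada 83 style sketch; the subprogram name is made up),
INTERFACE let a declaration stand without an Ada body:

```ada
--  Without the pragma, this declaration would be illegal for lack of
--  a corresponding body; the pragma supplies the foreign binding.
function C_Getpid return Integer;
pragma INTERFACE (C, C_Getpid);
```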

The current version of this rule is in 2.8(16-19) and is only
Implementation Advice.

                                    -- Adam



>
> Well, even if it just was that it "shall have no effect on a legal program", I
> still wonder why it is so necessary to introduce the possibility to turn an
> illegal program (without the null statement) into a legal one merely by adding
> some random pragma where a "sequence of statements" was expected. A pragma is
> /not/ a statement.
>
> I agree with Georg here: this is an unnecessary change with no apparent use.
> It supports none of the three pillars of the Ada language: "safety",
> "readability", or "maintainability".
>
> Vinzent.
>
> --
> Beaten by the odds since 1974.




^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-14  8:17         ` Ada.Execution_Time Vinzent Hoefler
  2010-12-14 15:51           ` Ada.Execution_Time Adam Beneschan
@ 2010-12-14 15:53           ` Robert A Duff
  2010-12-14 17:17             ` Ada.Execution_Time Dmitry A. Kazakov
  2010-12-15 22:52             ` Ada.Execution_Time Keith Thompson
  2010-12-14 19:43           ` Ada.Execution_Time anon
  2 siblings, 2 replies; 124+ messages in thread
From: Robert A Duff @ 2010-12-14 15:53 UTC (permalink / raw)


"Vinzent Hoefler" <0439279208b62c95f1880bf0f8776eeb@t-domaingrabbing.de>
writes:

> I believe, back in the old days, there was a requirement that the presence or
> absence of a pragma shall have no effect on the legality of the program, wasn't
> there?

Yes, something like that.  It applied only to implementation-defined
pragmas, which smells fishy right there.

Anyway, it was a pretty silly rule.  So you can erase all the pragmas,
and your program is still legal.  But the program now does something
different (i.e. wrong) at run time.  How is this beneficial?

Try erasing all the pragmas Abort_Defer from your program!
Or for a language-defined one, try erasing all the pragmas
Elaborate_All.  Either way, your (still-legal) program
will be completely broken.

> Well, even if it just was that it "shall have no effect on a legal program", I
> still wonder why it is so necessary to introduce the possibility to turn an
> illegal program (without the null statement) into a legal one merely by adding
> some random pragma where a "sequence of statements" was expected.

People don't add "random" pragmas.  They add useful ones.

>...A pragma is
> /not/ a statement.

True, but a sequence_of_statements can contain pragmas.  Huh.
A pragma can act as a statement, but can't BE a statement.

> I agree with Georg here: this is an unnecessary change with no apparent use.
> It supports none of the three pillars of the Ada language: "safety",
> "readability", or "maintainability".

It certainly supports readability.  I find this:

    if Debug_Mode then
        pragma Assert(Is_Good(X));
    end if;

slightly more readable than:

    if Debug_Mode then
        null;
        pragma Assert(Is_Good(X));
    end if;

- Bob



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-14 15:42           ` Ada.Execution_Time Robert A Duff
@ 2010-12-14 16:17             ` Jeffrey Carter
  2010-12-14 19:10             ` Ada.Execution_Time Warren
  1 sibling, 0 replies; 124+ messages in thread
From: Jeffrey Carter @ 2010-12-14 16:17 UTC (permalink / raw)


On 12/14/2010 08:42 AM, Robert A Duff wrote:
>
> Almost everything that changed in Ada 95, 2005, and 2012 is
> contrary to the original intent.  Indeed, during the
> Ada 9X project, Jean Ichbiah was quite angry that
> we were scribbling graffiti all over his near-perfect
> work of art.  So be it.

I know. Good thing he didn't see some of the current changes.

> I know.  I've seen your code with "-- null;" in empty declarative
> parts.  I'm sure you realize that in this case, yours is a minority
> opinion.

Of course.

-- 
Jeff Carter
"From this day on, the official language of San Marcos will be Swedish."
Bananas
28



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-14 15:53           ` Ada.Execution_Time Robert A Duff
@ 2010-12-14 17:17             ` Dmitry A. Kazakov
  2010-12-14 17:45               ` Ada.Execution_Time Robert A Duff
  2010-12-15 22:52             ` Ada.Execution_Time Keith Thompson
  1 sibling, 1 reply; 124+ messages in thread
From: Dmitry A. Kazakov @ 2010-12-14 17:17 UTC (permalink / raw)


On Tue, 14 Dec 2010 10:53:40 -0500, Robert A Duff wrote:

> "Vinzent Hoefler" <0439279208b62c95f1880bf0f8776eeb@t-domaingrabbing.de>
> writes:
> 
>> I believe, back in the old days, there was a requirement that the presence or
>> absence of a pragma shall have no effect on the legality of the program, wasn't
>> there?
> 
> Or for a language-defined one, try erasing all the pragmas
> Elaborate_All.  Either way, your (still-legal) program
> will be completely broken.

Because Elaborate_All should never become a pragma. Two wrongs don't make
one right.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-14 17:17             ` Ada.Execution_Time Dmitry A. Kazakov
@ 2010-12-14 17:45               ` Robert A Duff
  2010-12-14 18:23                 ` Ada.Execution_Time Adam Beneschan
  0 siblings, 1 reply; 124+ messages in thread
From: Robert A Duff @ 2010-12-14 17:45 UTC (permalink / raw)


"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:

> Because Elaborate_All should never become a pragma. Two wrongs don't make
> one right.

Elaborate_All is a pragma because Elaborate is a pragma.  ;-)

- Bob



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-14 17:45               ` Ada.Execution_Time Robert A Duff
@ 2010-12-14 18:23                 ` Adam Beneschan
  2010-12-14 21:02                   ` Ada.Execution_Time Randy Brukardt
  0 siblings, 1 reply; 124+ messages in thread
From: Adam Beneschan @ 2010-12-14 18:23 UTC (permalink / raw)


On Dec 14, 9:45 am, Robert A Duff <bobd...@shell01.TheWorld.com>
wrote:
> "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de> writes:
>
> > Because Elaborate_All should never become a pragma. Two wrongs don't make
> > one right.
>
> Elaborate_All is a pragma because Elaborate is a pragma.  ;-)

And I guess Elaborate shouldn't have been a pragma, either, since it
affects the semantics; you can write a program that is not guaranteed
to work correctly unless you use this pragma.

When I was trying to look into whether such things really fit the
definition, I ran into trouble finding a clear non-language-specific
definition of what a "pragma" should be.  The Ada RM's have very clear
definitions.  In RM83: "A pragma is used to convey information to the
compiler".  That's crystal clear, except that in my view the entire
text of the program conveys information to the compiler, so it's not
really clear what distinguishes a "pragma" from any other line in the
source.  Ada 95 changed this to "A pragma is a compiler directive".
Which of course clears everything up.  Of course, my twisted mind
thinks an assignment statement is a compiler directive, since it
directs the compiler to generate code that assigns something to
something else, so I'm not sure that this is a useful definition.

Even in Ada 83, there were at least three different flavors of
pragmas.  Some (LIST, PAGE) had no effect on the operation of the
resulting code.  Some (OPTIMIZE, INLINE, PACK) could affect the
compiler's choice of what kind of code to generate, but the code would
produce the same results (unless the code explicitly did something to
create a dependency on the compiler's choice, such as relying on 'SIZE
of a record that may or may not be packed).  And others (ELABORATE,
PRIORITY, SHARED, INTERFACE) definitely affected the results---the
program's behavior would potentially be different (or, in the case of
INTERFACE, be illegal) if the pragma were missing.  I'm having trouble
figuring out a common thread that ties all these kinds of pragmas into
one unified concept---except, perhaps, that they are things that the
language designers found it PRAGMAtic to shove into the "pragma"
statement instead of inventing new syntax.  :) :) :)

And personally, I'm fine with that.  We can argue and discuss and
strive endlessly to come up with a flawless language; but during the
time it takes to perfect the language (and then for implementors to
implement it), everyone else is stuck using C++ and Java while they're
waiting for us, which cannot be good for the world.  So a bit of
artistic inelegance is, to me, a small price to pay.  I guess that's
because I'm too much of a ... well ... a pragma-tist?

                                   -- Adam



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-14 15:42           ` Ada.Execution_Time Robert A Duff
  2010-12-14 16:17             ` Ada.Execution_Time Jeffrey Carter
@ 2010-12-14 19:10             ` Warren
  2010-12-14 20:36               ` Ada.Execution_Time Dmitry A. Kazakov
  1 sibling, 1 reply; 124+ messages in thread
From: Warren @ 2010-12-14 19:10 UTC (permalink / raw)


Robert A Duff expounded in
news:wccpqt4cqdr.fsf@shell01.TheWorld.com: 

> Jeffrey Carter <spam.jrcarter.not@spam.not.acm.org> writes:
..
>> Given the argument that the null statement is needed when
>> there is no other SOS, that SOS refers to executable
>> statements, and that neither a pragma nor a label are
>> considered such,... 
> 
> Well, a pragma Assert is a lot like a statement.
> I find it really annoying to have to write "null;"
> before or after some Asserts.  Pure noise, IMHO.
> (Again, no big deal.)

Personally I think an Assert "statement" (non pragma) could be 
added. Then the assertion _is_ a "statement". I would further 
suggest that _that_ would be active unless explicitly defeated 
by compile option(s) or by pragma <grin>.

Warren



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-14  8:17         ` Ada.Execution_Time Vinzent Hoefler
  2010-12-14 15:51           ` Ada.Execution_Time Adam Beneschan
  2010-12-14 15:53           ` Ada.Execution_Time Robert A Duff
@ 2010-12-14 19:43           ` anon
  2010-12-14 20:09             ` Ada.Execution_Time Adam Beneschan
  2 siblings, 1 reply; 124+ messages in thread
From: anon @ 2010-12-14 19:43 UTC (permalink / raw)


In <op.vno2nwchlzeukk@jellix.jlfencey.com>, "Vinzent Hoefler" <0439279208b62c95f1880bf0f8776eeb@t-domaingrabbing.de> writes:
>Randy Brukardt wrote:
>
>
>I believe, back in the old days, there was a requirement that the presence or
>absence of a pragma shall have no effect on the legality of the program, wasn't
>it?

Actually, it still does: Ada 83 through 2012 state, in Chapter 2,

2.8 Pragmas

In Ada 83 it states:

    A pragma that is not language-defined has no effect if  its  identifier  is
    not  recognized  by  the  (current)  implementation.  Furthermore, a pragma
    (whether language-defined or implementation-defined) has no effect  if  its
    placement  or  its  arguments  do not correspond to what is allowed for the
    pragma.  The region of text over which a pragma has an  effect  depends  on
    the pragma. 

    Note: 

    It  is  recommended  (but not required) that implementations issue warnings
    for pragmas that are not recognized and therefore ignored. 


In Ada 95 through 2012 it states:


                         Implementation Requirements

13    The implementation shall give a warning message for an unrecognized
pragma name.

                         Implementation Permissions

15    An implementation may ignore an unrecognized pragma even if it violates
some of the Syntax Rules, if detecting the syntax error is too complex.

                            Implementation Advice

16    Normally, implementation-defined pragmas should have no semantic effect
for error-free programs; that is, if the implementation-defined pragmas are
removed from a working program, the program should still be legal, and should
still have the same semantics.




In Ada 83, an unrecognized pragma was syntactically checked and skipped,
with an optional simple warning that the compiler would skip that pragma.
But in Ada 95 through 2012 it is an open question what the implementation
will do.  That rather undermines the Ada concept of "predictability", which
is a shame for all who control the design of Ada.





^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-14 19:43           ` Ada.Execution_Time anon
@ 2010-12-14 20:09             ` Adam Beneschan
  0 siblings, 0 replies; 124+ messages in thread
From: Adam Beneschan @ 2010-12-14 20:09 UTC (permalink / raw)


On Dec 14, 11:43 am, a...@att.net wrote:

> In Ada 95 .. 2012 states:
>
>                          Implementation Requirements
>
> 13    The implementation shall give a warning message for an unrecognized
> pragma name.
>
>                          Implementation Permissions
>
> 15    An implementation may ignore an unrecognized pragma even if it violates
> some of the Syntax Rules, if detecting the syntax error is too complex.
>
>                             Implementation Advice
>
> 16    Normally, implementation-defined pragmas should have no semantic effect
> for error-free programs; that is, if the implementation-defined pragmas are
> removed from a working program, the program should still be legal, and should
> still have the same semantics.
>
> In Ada 83, an unrecognized pragma was syntactically checked and skipped,
> with an optional simple warning that the compiler would skip that pragma.
> But in Ada 95 through 2012 it is an open question what the implementation
> will do.  That rather undermines the Ada concept of "predictability", which
> is a shame for all who control the design of Ada.

They did add "pragma Restrictions(No_Implementation_Pragmas)".  So
those users who want that predictability back can have it.
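
For readers following along, such a restriction is typically supplied as a
configuration pragma; a minimal sketch (the gnat.adc file name is
GNAT-specific and assumed here, other compilers differ):

```ada
--  Configuration pragma: reject implementation-defined pragmas
--  project-wide.  With GNAT this would live in gnat.adc.
pragma Restrictions (No_Implementation_Pragmas);
```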

                             -- Adam





^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-14 19:10             ` Ada.Execution_Time Warren
@ 2010-12-14 20:36               ` Dmitry A. Kazakov
  2010-12-14 20:48                 ` Ada.Execution_Time Jeffrey Carter
  0 siblings, 1 reply; 124+ messages in thread
From: Dmitry A. Kazakov @ 2010-12-14 20:36 UTC (permalink / raw)


On Tue, 14 Dec 2010 19:10:44 +0000 (UTC), Warren wrote:

> Robert A Duff expounded in
> news:wccpqt4cqdr.fsf@shell01.TheWorld.com: 
> 
>> Jeffrey Carter <spam.jrcarter.not@spam.not.acm.org> writes:
> ..
>>> Given the argument that the null statement is needed when
>>> there is no other SOS, that SOS refers to executable
>>> statements, and that neither a pragma nor a label are
>>> considered such,... 
>> 
>> Well, a pragma Assert is a lot like a statement.
>> I find it really annoying to have to write "null;"
>> before or after some Asserts.  Pure noise, IMHO.
>> (Again, no big deal.)
> 
> Personally I think an Assert "statement" (non pragma) could be 
> added. Then the assertion _is_ a "statement".

Once I suggested:

   raise <exception> when <condition>;

> I would further 
> suggest that _that_ would be active unless explicitly defeated 
> by compile option(s) or by pragma <grin>.

However implemented, the idea is bad.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-14 20:36               ` Ada.Execution_Time Dmitry A. Kazakov
@ 2010-12-14 20:48                 ` Jeffrey Carter
  0 siblings, 0 replies; 124+ messages in thread
From: Jeffrey Carter @ 2010-12-14 20:48 UTC (permalink / raw)


On 12/14/2010 01:36 PM, Dmitry A. Kazakov wrote:
>
> Once I suggested:
>
>     raise <exception> when <condition>;

Yes, I suggested that, too. Also

return [<expression>] [when <condition>];

You could also argue for

goto <label> [when <condition>];

but I'd rather make goto as unattractive to use as possible.
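
For comparison, here is what those proposed short forms would abbreviate in
standard Ada as it stands; the condition and names are illustrative:

```ada
--  raise <exception> when <condition>;  would stand for:
if Count > Max then
   raise Constraint_Error;
end if;

--  return [<expression>] when <condition>;  would stand for:
if Done then
   return Result;
end if;
```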

-- 
Jeff Carter
"From this day on, the official language of San Marcos will be Swedish."
Bananas
28



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-14 18:23                 ` Ada.Execution_Time Adam Beneschan
@ 2010-12-14 21:02                   ` Randy Brukardt
  0 siblings, 0 replies; 124+ messages in thread
From: Randy Brukardt @ 2010-12-14 21:02 UTC (permalink / raw)


"Adam Beneschan" <adam@irvine.com> wrote in message 
news:10872143-12a2-4d25-bb08-e236b15d2c18@o14g2000prn.googlegroups.com...
...
>Even in Ada 83, there were at least three different flavors of
>pragmas.  Some (LIST, PAGE) had no effect on the operation of the
>resulting code.  Some (OPTIMIZE, INLINE, PACK) could affect the
>compiler's choice of what kind of code to generate, but the code would
>produce the same results (unless the code explicitly did something to
>create a dependency on the compiler's choice, such as relying on 'SIZE
>of a record that may or may not be packed).  And others (ELABORATE,
>PRIORITY, SHARED, INTERFACE) definitely affected the results---the
>program's behavior would potentially be different (or, in the case of
>INTERFACE, be illegal) if the pragma were missing.  I'm having trouble
>figuring out a common thread that ties all these kinds of pragmas into
>one unified concept---except, perhaps, that they are things that the
>language designers found it PRAGMAtic to shove into the "pragma"
>statement instead of inventing new syntax.  :) :) :)

I think you've got it. None of these things (in the last category) ought to 
have been pragmas in the first place.

Note that pragmas are one of the few ways that implementers have to 
represent implementation-defined information, so in practice, we have lots 
of things that ought to never have been pragmas.

At least Ada 2012 has finally come to grips with this, in that the aspect 
clause will be able to be used rather than almost all of the existing 
pragmas. (But not Elaborate, as the syntax doesn't work in a context clause, 
and no one has the energy to invent some other syntax solely for that 
purpose.) Note, however, that there will still be uses for the old pragmas 
(if you want to hide the aspects in the private part, for instance). But 
they should be used much less often.

                                      Randy.





^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-13  9:28     ` Ada.Execution_Time Georg Bauhaus
  2010-12-13 22:25       ` Ada.Execution_Time Randy Brukardt
@ 2010-12-15  0:16       ` BrianG
  2010-12-15 19:17         ` Ada.Execution_Time jpwoodruff
                           ` (3 more replies)
  1 sibling, 4 replies; 124+ messages in thread
From: BrianG @ 2010-12-15  0:16 UTC (permalink / raw)


Georg Bauhaus wrote:
> On 12/12/10 10:59 PM, BrianG wrote:
> 
> 
>> But my question still remains: What's the intended use of 
>> Ada.Execution_Time? Is there an intended use where its content 
>> (CPU_Time, Seconds_Count and Time_Span, "+", "<", etc.) is useful?
> 
> I think that your original posting mentions a use that is quite
> consistent with what the rationale says: each task has its own time.
> Points in time objects can be split into values suitable for
> arithmetic, using Time_Span objects.  Then, from the result of
> arithmetic, produce an object suitable for print, as desired.
> 
> 
> While this seems like having to write a bit much,
> it makes things explicit, like Ada forces one to be
> in many cases.   That' how I explain the series of
> steps to myself.
> 
> Isn't it just like "null;" being required to express
> the null statement?  It seems to me to be a logical
> consequence of requiring that intents must be stated
> explicitly.
> 
I have no problem with verbosity or explicitness, and that's not what I 
was asking about.

My problem is that what is provided in the package in question does not 
provide any "values suitable for arithmetic" or provide "an object 
suitable for print" (unless all you care about is the number of whole 
seconds with no information about the (required) fraction, which seems 
rather limiting).  Time_Span is a private type, defined in another 
package.  If all I want is CPU_Time (in some form), why do I need 
Ada.Real_Time?  Also, why are "+" and "-" provided as they are defined? 
  (And why Time_Span?  I thought that was the difference between two 
times, not the fractional part of time.)

Given the rest of this thread, I would guess my answer is "No, no one 
actually uses Ada.Execution_Time".

--BrianG



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-15  0:16       ` Ada.Execution_Time BrianG
@ 2010-12-15 19:17         ` jpwoodruff
  2010-12-15 21:42           ` Ada.Execution_Time Pascal Obry
  2010-12-15 21:40         ` Ada.Execution_Time Simon Wright
                           ` (2 subsequent siblings)
  3 siblings, 1 reply; 124+ messages in thread
From: jpwoodruff @ 2010-12-15 19:17 UTC (permalink / raw)


BrianG's discussion spurred my interest, so now I am able to
contradict his

On Dec 14, 5:16 pm, BrianG <briang...@gmail.com> wrote:
>
> Given the rest of this thread, I would guess my answer is "No, no one
> actually uses Ada.Execution_Time".
>

Let me describe my experiment, which ends in a disappointing
observation about Ada.Execution_Time.

For some years I've had a package that defines a "Counter" object that
resembles a stop-watch.  Made curious by BrianG's question, I
re-implemented the abstraction over Ada.Execution_Time.

Unfortunately, introducing Ada.Real_Time can cause an otherwise
successful program to raise STORAGE_ERROR :EXCEPTION_STACK_OVERFLOW.
This happens even if the program formerly ran only in an environment
task.


with Ada.Real_Time ;   -- This context leads to failure
Procedure Stack_Splat is
   Too_Big : array (1..1_000_000) of Float ;
begin
   null ;
end Stack_Splat ;


I haven't found the documentation that explains my observation, but
it's pretty clear that Ada.Real_Time in the context implies a
substantially different run-time memory strategy.  I suppose there are
compile options to affect this; that can be an exercise for a later day.

Here is the package CPU, which is potentially useful in programs that
use stack gently.

-- PACKAGE FOR CPU TIME CALCULATIONS.
-----------------------------------------------------------------------
--
--              Author:         John P Woodruff
--                              jpwoodruff@gmail.com
--

-- This package owes a historic debt to the similarly named service
-- Created 19-SEP-1986 by Mats Weber.  This specification is largely
-- defined by that work.

--  15-apr-2004: However, Weber's package was implemented by unix calls
--  (alternatively VMS calls).  JPW has adapted this package
--  specification twice: first to use the Ada.Calendar.Clock
--  functions, then later (when I discovered deMontmollin's WIN-PAQ)
--  to use windows32 low-level calls. Now only the thinnest of Mats
--  Weber's traces can be seen.

--  December 2010: the ultimate version is possible now that Ada2005
--  gives us Ada.Execution_Time.  The object CPU_Counter times a task
--  (presently limited to the current task), and a Report about a
--  CPU_Counter might make interesting reading. Unhappily, when this
--  package is introduced, gnat allocates a smaller stack for the
--  environment task than in the absence of tasking.  Therefore
--  instrumented programs may blow the stack in cases where
--  uninstrumented programs do not.

with Ada.Execution_Time ;

package CPU is

   -- Type of a CPU time counter.  Each object of this type is an
   --  independent stop-watch that times the task in which it is
   --  declared.  Possible enhancement: bind a CPU_Counter to a
   --  Task_ID different from Current_Task.

   type CPU_Counter is limited private;

   ------------------------------------------------------------------
   -- The operations for a counter are Start, Stop and Clear.
   -- A counter that is Stopped retains the time already accrued,
   -- until it is Cleared.

   --  A stopped counter can be started and will add to the time
   --  already accrued - just as though it had run continuously.
   --  It is not necessary to stop a counter in order to read
   --  its value.

   procedure Start_Counter (The_Counter : in out CPU_Counter);
   procedure Stop_Counter  (The_Counter : in out CPU_Counter);
   procedure Clear_Counter (The_Counter : in out CPU_Counter);

   ------------------------------------------------------------------
   -- There are two groups of reporting functions:

   --  CPU_Time returns the time used while the counter has
   --  been running.


   function CPU_Time (Of_Counter : CPU_Counter) return Duration ;

   -- Process_Lifespan returns the total CPU time since the process
   --  started.  (Does not rely on any CPU_counter having been
   --  started.)

   function Process_Lifespan  return Duration ;


   Counter_Not_Started : exception;

   --------------------------------------------------------------
   --  The other reporting functions produce printable reports for
   --  a counter, or for the process as a whole (the procedure
   --  writes to standard output). Reports do not affect the
   --  counter.  Use a prefix string to label the output according
   --  to the activity being timed.

   procedure Report_Clock (Watch      : in CPU_Counter;
                           Prefix     : in String := "") ;

   function  Report_Clock (Watch      : in CPU_Counter) return String ;


   procedure Report_Process_Lifespan ;

   function  Report_Process_Lifespan return String ;

private

   type CPU_Counter is
      record
         Identity       : Natural := 0 ;
         Accrued_Time   : Ada.Execution_Time.CPU_Time
                        := Ada.Execution_Time.CPU_Time_First ;
         Running        : Boolean := False;
         Start_CPU      : Ada.Execution_Time.Cpu_Time
                        := Ada.Execution_Time.Clock ;
      end record;

end CPU ;

---------------------------------
-- Creation : 19-SEP-1986 by Mats Weber.
-- Revision : 16-Jul-1992 by Mats Weber, enhanced portability by adding
--                                       separate package System_Interface.
-- JPW 23 Sep 00 use ada.calendar.clock (because it works on Windows)
-- jpw 14apr04 *found* the alternative.  Montmollin's win-paq product
--     defines win32_timing: the operating system function.
-- jpw 14dec10  reimplement and substantially simplify using Ada.Execution_Time


with Ada.Text_Io;
with Ada.Real_Time ;

package body CPU is
   ----------------
   use type Ada.Execution_Time.Cpu_Time ;

   Next_Counter_Identity : Natural := 0 ;

   function To_Duration (Time : Ada.Execution_Time.CPU_Time) return Duration is
      -- thanks to Jeff Carter comp.lang.ada 11dec10
      Seconds  : Ada.Real_Time.Seconds_Count;
      Fraction : Ada.Real_Time.Time_Span;
   begin     -- To_Duration
      Ada.Execution_Time.Split (Time, Seconds, Fraction);
      return Duration (Seconds) + Ada.Real_Time.To_Duration (Fraction);
   end To_Duration;


   procedure Start_Counter (The_Counter : in out CPU_Counter) is
   begin
      if The_Counter.Identity > 0 then
         --  This is a restart:  identity and accrued times remain
         if not The_Counter.Running then
            The_Counter.Start_CPU := Ada.Execution_Time.Clock ;
         end if ;
         The_Counter.Running   := True ;
      else   -- this clock has never started before
         Next_Counter_Identity       := Next_Counter_Identity + 1;
         The_Counter.Identity        := Next_Counter_Identity ;
         The_Counter.Running         := True ;
         The_Counter.Start_CPU       := Ada.Execution_Time.Clock ;
      end if ;
   end Start_Counter;


   procedure Stop_Counter  (The_Counter : in out CPU_Counter)is
      Now : Ada.Execution_Time.Cpu_Time
          := Ada.Execution_Time.Clock ;
   begin
      if  The_Counter.Identity > 0 and The_Counter.Running then
         -- accrue time observed up to now
         The_Counter.Accrued_Time := The_Counter.Accrued_Time +
           (Now - The_Counter.Start_CPU) ;
         The_Counter.Running := False ;
      end if ;
   end Stop_Counter ;


   procedure Clear_Counter  (The_Counter : in out CPU_Counter) is
      -- the counter becomes "as new" ready to start.
   begin
      The_Counter.Running := False ;
      The_Counter.Accrued_Time := Ada.Execution_Time.CPU_Time_First ;
   end Clear_Counter ;


   function CPU_Time (Of_Counter  : CPU_Counter) return Duration is
      Now : Ada.Execution_Time.Cpu_Time
          := Ada.Execution_Time.Clock ;
   begin
      if Of_Counter.Identity <= 0 then
         raise Counter_Not_Started ;
      end if;
      if not Of_Counter.Running then
         return To_Duration (Of_Counter.Accrued_Time) ;
      else
         return To_Duration (Of_Counter.Accrued_Time +
                             (Now - Of_Counter.Start_CPU)) ;
      end if ;
   end CPU_Time;

   function Process_Lifespan return Duration is
   begin
      return To_Duration (Ada.Execution_Time.Clock) ;
   end Process_Lifespan ;

   function Report_Duration (D : in Duration) return String is
   begin
      if D < 1.0 then
         declare
            Millisec : String := Duration'Image (1_000.0 * D);
         begin
            return  MilliSec(MilliSec'First .. MilliSec'Last-6) & " msec" ;
         end ;
      elsif D < 60.0 then
         declare Sec : String :=  Duration'Image (D);
         begin
            return Sec(Sec'First .. Sec'Last-6)  & " sec" ;  -- fewer significant figs
         end ;
      else
         declare
            Minutes : Integer := Integer(D) / 60 ;
            Seconds : Duration := D - Duration(Minutes) * 60.0 ;
            Sec : String := Duration'Image (Seconds) ;
         begin
            return Integer'Image (Minutes) & " min " &
              Sec(Sec'First .. Sec'Last-6) & " sec" ;
         end ;
      end if ;
   end Report_Duration ;


   procedure Report_Clock (Watch      : in CPU_Counter;
                           Prefix     : in String := "") is
      use Ada.Text_IO ;
   begin
      Put (Prefix & Report_Clock (Watch)) ;
      New_Line ;
   end Report_Clock;


   function Report_Clock (Watch      : in CPU_Counter) return String is
   begin
      return
        " <" & Integer'Image(Watch.Identity) & "> " &
        Report_Duration (CPU_Time (Watch)) ;
   end Report_Clock ;


   procedure Report_Process_Lifespan is
      use Ada.Text_IO ;
   begin
      Put (Report_Process_Lifespan) ;
      New_Line ;
   end Report_Process_Lifespan ;


   function  Report_Process_Lifespan return String is
      use Ada.Text_IO ;
   begin
      return "Process Lifespan: " & Report_Duration (Process_Lifespan) ;
   end Report_Process_Lifespan ;

end CPU ;
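
A minimal usage sketch of the package above (all CPU.* names come from the
spec; the busy loop is an illustrative workload, and an optimizing compiler
may need -O0 to keep it from being removed):

```ada
with CPU;
procedure Demo is
   Watch : CPU.CPU_Counter;
   Sum   : Long_Float := 0.0;
begin
   CPU.Start_Counter (Watch);
   for I in 1 .. 10_000_000 loop
      Sum := Sum + Long_Float (I);   --  some work to time
   end loop;
   CPU.Stop_Counter (Watch);
   CPU.Report_Clock (Watch, Prefix => "busy loop: ");
   CPU.Report_Process_Lifespan;
end Demo;
```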



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-15  0:16       ` Ada.Execution_Time BrianG
  2010-12-15 19:17         ` Ada.Execution_Time jpwoodruff
@ 2010-12-15 21:40         ` Simon Wright
  2010-12-15 23:40           ` Ada.Execution_Time BrianG
  2010-12-15 22:05         ` Ada.Execution_Time Randy Brukardt
  2010-12-17  8:59         ` Ada.Execution_Time anon
  3 siblings, 1 reply; 124+ messages in thread
From: Simon Wright @ 2010-12-15 21:40 UTC (permalink / raw)


BrianG <briang000@gmail.com> writes:

> Given the rest of this thread, I would guess my answer is "No, no one
> actually uses Ada.Execution_Time".

Certainly not if they're using Mac OS X:

   gcc -c cpu.adb
   Execution_Time is not supported in this configuration
   compilation abandoned



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-15 19:17         ` Ada.Execution_Time jpwoodruff
@ 2010-12-15 21:42           ` Pascal Obry
  2010-12-16  3:54             ` Ada.Execution_Time jpwoodruff
  0 siblings, 1 reply; 124+ messages in thread
From: Pascal Obry @ 2010-12-15 21:42 UTC (permalink / raw)
  To: jpwoodruff

Le 15/12/2010 20:17, jpwoodruff a écrit :
> Unfortunately introduction of Ada.Real_Time can  cause an otherwise
> successful program to raise STORAGE_ERROR :EXCEPTION_STACK_OVERFLOW.
> This happens even if the program formerly ran only in an environment
> task.
> 
> 
> with Ada.Real_Time ;   -- This context leads to failure
> Procedure Stack_Splat is
>    Too_Big : array (1..1_000_000) of Float ;
> begin
>    null ;
> end Stack_Splat ;

Probably because the stack size is smaller in the context of the tasking
runtime. Just increase the stack (see the corresponding linker option) for
the environment task. Nothing really blocking, or did I miss your point?
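
For GNAT specifically, a sketch of the knobs involved; switch availability
and the sizes shown are platform-dependent, so treat these as illustrative:

```shell
# Windows (PE linker): raise the environment task's stack at link time.
gnatmake stack_splat.adb -largs -Wl,--stack=0x1000000

# Linux: the environment task runs on the process stack; raise the OS limit.
ulimit -s unlimited && ./stack_splat
```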

Pascal.

-- 

--|------------------------------------------------------
--| Pascal Obry                           Team-Ada Member
--| 45, rue Gabriel Peri - 78114 Magny Les Hameaux FRANCE
--|------------------------------------------------------
--|    http://www.obry.net  -  http://v2p.fr.eu.org
--| "The best way to travel is by means of imagination"
--|
--| gpg --keyserver keys.gnupg.net --recv-key F949BD3B




^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-15  0:16       ` Ada.Execution_Time BrianG
  2010-12-15 19:17         ` Ada.Execution_Time jpwoodruff
  2010-12-15 21:40         ` Ada.Execution_Time Simon Wright
@ 2010-12-15 22:05         ` Randy Brukardt
  2010-12-16  1:14           ` Ada.Execution_Time BrianG
  2010-12-16  8:45           ` Ada.Execution_Time Dmitry A. Kazakov
  2010-12-17  8:59         ` Ada.Execution_Time anon
  3 siblings, 2 replies; 124+ messages in thread
From: Randy Brukardt @ 2010-12-15 22:05 UTC (permalink / raw)


"BrianG" <briang000@gmail.com> wrote in message 
news:ie91co$cko$1@news.eternal-september.org...
...
> My problem is that what is provided in the package in question does not 
> provide any "values suitable for arithmetic" or provide "an object 
> suitable for print" (unless all you care about is the number of whole 
> seconds with no information about the (required) fraction, which seems 
> rather limiting).

Having missed your original question, I'm confused as to where you are 
finding the quoted text above. I don't see anything like that in the 
Standard. Since it is not in the standard, there is no reason to expect 
those statements to be true. (Even the standard is wrong occasionally, 
other materials are wrong a whole lot more often.)

>  Time_Span is a private type, defined in another package.  If all I want 
> is CPU_Time (in some form), why do I need Ada.Real_Time?  Also, why are 
> "+" and "-" provided as they are defined? (And why Time_Span?  I thought 
> that was the difference between two times, not the fractional part of 
> time.)

I think you are missing the point of CPU_Time. It is an abstract 
representation of some underlying counter. There is no requirement that this 
counter have any particular value -- in particular it is not necessarily 
zero when a task is created. So the only operations that are meaningful on a 
value of type CPU_Time are comparisons and differences. Arguably, CPU_Time 
is misnamed, because it is *not* some sort of time type.

The package uses Ada.Real_Time because no one wanted to invent a new kind of 
time. The only alternative would have been to use Calendar, which does not 
have to be as accurate. (Of course, the real accuracy depends on the 
underlying target; CPU_Time has to be fairly inaccurate on Windows simply 
because the underlying counters are not very accurate, at least in the 
default configuration.)

My guess is that no one thought about the fact that Time_Span is only an 
alias for Duration; it's definitely something that I didn't know until you 
complained. (I know I've confused Time and Time_Span before, must have done 
that here, too). So there probably was no good reason that Time_Span was 
used instead of Duration in the package. But that seems to indicate a flaw 
in Ada.Real_Time, not one for execution time.

In any case, the presumption is that interesting CPU_Time differences are 
relatively short, so that Time_Span is sufficient (as it will hold at least 
one day).
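
The comparison-and-difference usage described above looks roughly like this
(a sketch; it assumes the target supports Ada.Execution_Time at all):

```ada
with Ada.Execution_Time;
with Ada.Real_Time;
with Ada.Text_IO;
procedure Measure is
   use type Ada.Execution_Time.CPU_Time;
   Start, Finish : Ada.Execution_Time.CPU_Time;
   Spent         : Ada.Real_Time.Time_Span;
begin
   Start := Ada.Execution_Time.Clock;
   --  ... workload to be measured ...
   Finish := Ada.Execution_Time.Clock;
   --  "-" (CPU_Time, CPU_Time) return Time_Span, from RM D.14:
   Spent := Finish - Start;
   Ada.Text_IO.Put_Line
     (Duration'Image (Ada.Real_Time.To_Duration (Spent)));
end Measure;
```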

> Given the rest of this thread, I would guess my answer is "No, no one 
> actually uses Ada.Execution_Time".

Can't answer that. I intended to use it to replace some hacked debugging 
code, but I've never gotten around to actually implementing it (I did do a 
design, but there is of course a difference...).

                                  Randy.





^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-14 15:53           ` Ada.Execution_Time Robert A Duff
  2010-12-14 17:17             ` Ada.Execution_Time Dmitry A. Kazakov
@ 2010-12-15 22:52             ` Keith Thompson
  2010-12-15 23:14               ` Ada.Execution_Time Adam Beneschan
  1 sibling, 1 reply; 124+ messages in thread
From: Keith Thompson @ 2010-12-15 22:52 UTC (permalink / raw)


Robert A Duff <bobduff@shell01.TheWorld.com> writes:
> "Vinzent Hoefler" <0439279208b62c95f1880bf0f8776eeb@t-domaingrabbing.de>
> writes:
[...]
>> I agree with Georg here, this is an unnecessary change with no apparent use,
>> it doesn't support any of the three pillars of the Ada language "safety",
>> "readability", or "maintainability".
>
> It certainly supports readability.  I find this:
>
>     if Debug_Mode then
>         pragma Assert(Is_Good(X));
>     end if;
>
> slightly more readable than:
>
>     if Debug_Mode then
>         null;
>         pragma Assert(Is_Good(X));
>     end if;

So, um, why is Assert a pragma rather than a statement?

   if Debug_Mode then
      assert Is_Good(X);
   end if;

As somebody pointed out, it was defined that way in Ada 80.

Or am I opening a huge can of worms by asking that question?

-- 
Keith Thompson (The_Other_Keith) kst-u@mib.org  <http://www.ghoti.net/~kst>
Nokia
"We must do something.  This is something.  Therefore, we must do this."
    -- Antony Jay and Jonathan Lynn, "Yes Minister"



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-15 22:52             ` Ada.Execution_Time Keith Thompson
@ 2010-12-15 23:14               ` Adam Beneschan
  2010-12-17  0:44                 ` Ada.Execution_Time Randy Brukardt
  0 siblings, 1 reply; 124+ messages in thread
From: Adam Beneschan @ 2010-12-15 23:14 UTC (permalink / raw)


On Dec 15, 2:52 pm, Keith Thompson <ks...@mib.org> wrote:
> Robert A Duff <bobd...@shell01.TheWorld.com> writes:
>
>
>
>
>
> > "Vinzent Hoefler" <0439279208b62c95f1880bf0f8776...@t-domaingrabbing.de>
> > writes:
> [...]
> >> I agree with Georg here, this is an unnecessary change with no apparent use,
> >> it doesn't support any of the three pillars of the Ada language "safety",
> >> "readability", or "maintainability".
>
> > It certainly supports readability.  I find this:
>
> >     if Debug_Mode then
> >         pragma Assert(Is_Good(X));
> >     end if;
>
> > slightly more readable than:
>
> >     if Debug_Mode then
> >         null;
> >         pragma Assert(Is_Good(X));
> >     end if;
>
> So, um, why is Assert a pragma rather than a statement?
>
>    if Debug_Mode then
>       assert Is_Good(X);
>    end if;
>
> As somebody pointed out, it was defined that way in Ada 80.
>
> Or am I opening a huge can of worms by asking that question?

Somebody on the ARG might have a more authoritative answer.  My
reading of AI95-286 is that a number of Ada compilers had already
implemented the Assert pragma and there was a lot of code using it.
Of course, those compilers couldn't have added "assert" as a statement
on their own, but adding an implementation-defined pragma is OK.

I'm guessing that there was probably code out there that used Assert
as a procedure, so adding this as a reserved word would have caused
problems.

                              -- Adam



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-15 21:40         ` Ada.Execution_Time Simon Wright
@ 2010-12-15 23:40           ` BrianG
  0 siblings, 0 replies; 124+ messages in thread
From: BrianG @ 2010-12-15 23:40 UTC (permalink / raw)


Simon Wright wrote:
> BrianG <briang000@gmail.com> writes:
> 
>> Given the rest of this thread, I would guess my answer is "No, no one
>> actually uses Ada.Execution_Time".
> 
> Certainly not if they're using Mac OS X:
> 
>    gcc -c cpu.adb
>    Execution_Time is not supported in this configuration
>    compilation abandoned
I get the same with the version of Linux I'm currently on (Ubuntu 9.04 I 
think); I had assumed it's because the version of gcc/gnat is rather old 
- 4.3.3.  (The later Ubuntu versions have problems with my eeepc - and 
introduce stupid interface changes with no easy way to revert.)

--Bg



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-15 22:05         ` Ada.Execution_Time Randy Brukardt
@ 2010-12-16  1:14           ` BrianG
  2010-12-16  5:46             ` Ada.Execution_Time Jeffrey Carter
                               ` (3 more replies)
  2010-12-16  8:45           ` Ada.Execution_Time Dmitry A. Kazakov
  1 sibling, 4 replies; 124+ messages in thread
From: BrianG @ 2010-12-16  1:14 UTC (permalink / raw)


Randy Brukardt wrote:
> "BrianG" <briang000@gmail.com> wrote in message 
> news:ie91co$cko$1@news.eternal-september.org...
> ...
>> My problem is that what is provided in the package in question does not 
>> provide any "values suitable for arithmetic" or provide "an object 
>> suitable for print" (unless all you care about is the number of whole 
>> seconds with no information about the (required) fraction, which seems 
>> rather limiting).
> 
> Having missed your original question, I'm confused as to where you are 
> finding the quoted text above. I don't see anything like that in the 
> Standard. Since it is not in the standard, there is no reason to expect 
> those statements to be true. (Even the standard is wrong occasionally, 
> other materials are wrong a whole lot more often.)
> 
The quoted text was from the post I responded to.  It was Georg's 
attempt to explain the package.  I agree that they are not in the RM; my 
original question was what is the intended purpose of the package - the 
content doesn't seem useful for any use I can think of.

>>  Time_Span is a private type, defined in another package.  If all I want 
>> is CPU_Time (in some form), why do I need Ada.Real_Time?  Also, why are 
>> "+" and "-" provided as they are defined? (And why Time_Span?  I thought 
>> that was the difference between two times, not the fractional part of 
>> time.)
> 
> I think you are missing the point of CPU_Time. It is an abstract 
> representation of some underlying counter. There is no requirement that this 
> counter have any particular value -- in particular it is not necessarily 
> zero when a task is created. So the only operations that are meaningful on a 
> value of type CPU_Time are comparisons and differences. Arguably, CPU_Time 
> is misnamed, because it is *not* some sort of time type.
Then the package is misnamed too - How is "Execution_Time" not a time? 
Wouldn't tying it explicitly to Real_Time imply some relation to "real 
time" (whether that makes sense or not)?  Using Duration could help 
that, since it's implementation-defined.

One of my problems is that difference (and sum) isn't provided between 
CPU_Time's, only with a Time_Span.  But you can only convert a portion 
of a CPU_Time to Time_Span.  When is that useful (as opposed to 
Splitting both CPU_Times)?

A function "-" (L, R : CPU_Time) return Time_Span (or better, Duration) 
would be required for what you describe above (actually, if what you say 
is true, then that and Clock are all that's required).

> 
> The package uses Ada.Real_Time because no one wanted to invent a new kind of 
> time. The only alternative would have been to use Calendar, which does not 
> have to be as accurate. (Of course, the real accuracy depends on the 
> underlying target; CPU_Time has to be fairly inaccurate on Windows simply 
> because the underlying counters are not very accurate, at least in the 
> default configuration.)
(I think that is inherent in anything of this type, but I'd think it's 
hard to specify that in the RM:)
> 
> My guess is that no one thought about the fact that Time_Span is only an 
> alias for Duration; it's definitely something that I didn't know until you 
> complained. (I know I've confused Time and Time_Span before, must have done 
> that here, too). So there probably was no good reason that Time_Span was 
> used instead of Duration in the package. But that seems to indicate a flaw 
> in Ada.Real_Time, not one for execution time.
I wasn't aware that it was an alias.  I had assumed it was there in case 
Duration didn't have the range or precision required for Time_Span (or 
something like that).

The other part of my problem is that I can only convert to another 
private type (for part of the value).  It seems to me equivalent to 
defining Sequential_IO and Direct_IO (etc) without File_Type - requiring 
the use of Text_IO any time you want to Open, Close, etc a file.  :-)

> 
> In any case, the presumption is that interesting CPU_Time differences are 
> relatively short, so that Time_Span is sufficient (as it will hold at least 
> one day).
But that is not provided - that would require a "-" between two 
CPU_Time's returning a Time_Span.  Unless all CPU_Time's are always less 
than a second, you can't get there easily.

> 
>> Given the rest of this thread, I would guess my answer is "No, no one 
>> actually uses Ada.Execution_Time".
> 
> Can't answer that. I intended to use it to replace some hacked debugging 
> code, but I've never gotten around to actually implementing it (I did do a 
> design, but there is of course a difference...).
> 
>                                   Randy.
> 



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-15 21:42           ` Ada.Execution_Time Pascal Obry
@ 2010-12-16  3:54             ` jpwoodruff
  2010-12-17  7:11               ` Ada.Execution_Time Stephen Leake
  0 siblings, 1 reply; 124+ messages in thread
From: jpwoodruff @ 2010-12-16  3:54 UTC (permalink / raw)


I've learned that Ada.Execution_Time isn't widely implemented, so there
isn't much point in pursuing a portable abstraction.  I'm thinking I
might as well revert to Gautier's Windows implementation.  The draft I
posted y'day is probably no more portable than that one.


On Dec 15, 2:42 pm, Pascal Obry <pas...@obry.net> wrote:

>
> Probably because the stack size is smaller in the context of tasking
> runtime. Just increase the stack (see corresponding linker option) for
> the environment task. Nothing really blocking, or did I miss your point?
>

That is clearly the case.

Still, I'm non-plussed that I can't write a service - CPU.Counter in
the example - that can hide its implementation from the host program.

It occurs to me that the designers of the D.14 specification for
Ada.Execution_Time did not consider the prospect of measuring a single
environment task's performance.  Otherwise the package might be
factored so that function Clock did not presume multiple tasks.

Here's a flippant suggestion: maybe there should be a pragma to set
stack size.  I'd bury such a pragma inside package CPU so that
Ada.Execution_Time doesn't get linked into too small an executable.

If I could do that, my user doesn't get a stack splat from an
instrument that worked correctly while running smaller tests.

John



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-16  1:14           ` Ada.Execution_Time BrianG
@ 2010-12-16  5:46             ` Jeffrey Carter
  2010-12-16 16:13               ` Ada.Execution_Time BrianG
  2010-12-16 11:37             ` Ada.Execution_Time Simon Wright
                               ` (2 subsequent siblings)
  3 siblings, 1 reply; 124+ messages in thread
From: Jeffrey Carter @ 2010-12-16  5:46 UTC (permalink / raw)


On 12/15/2010 06:14 PM, BrianG wrote:
>
> One of my problems is that difference (and sum) isn't provided between
> CPU_Time's, only with a Time_Span. But you can only convert a portion of a
> CPU_Time to Time_Span. When is that useful (as opposed to Splitting both
> CPU_Times)?
>
> A function "-" (L, R : CPU_Time) return Time_Span (or better, Duration) would be
> required for what you describe above (actually, if what you say is true, then
> that and Clock are all that's required).

> But that is not provided - that would require a "-" between two CPU_Time's
> returning a Time_Span. Unless all CPU_Time's are always less than a second, you
> can't get there easily.

 From ARM D.14:

function "-"  (Left : CPU_Time; Right : CPU_Time)  return Time_Span;
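
With that "-" available, a task can measure and print its own CPU usage
as a Duration.  A minimal sketch (assuming an implementation that
actually supports D.14; the procedure name and layout are mine):

```ada
with Ada.Execution_Time;
with Ada.Real_Time;
with Ada.Text_IO;

procedure Show_CPU_Delta is
   use type Ada.Execution_Time.CPU_Time;  --  makes D.14's "-" visible
   Before, After : Ada.Execution_Time.CPU_Time;
   Used          : Ada.Real_Time.Time_Span;
begin
   Before := Ada.Execution_Time.Clock;  --  current task's CPU time
   --  ... do the work being measured ...
   After := Ada.Execution_Time.Clock;
   Used  := After - Before;   --  CPU_Time - CPU_Time => Time_Span
   Ada.Text_IO.Put_Line
     (Duration'Image (Ada.Real_Time.To_Duration (Used))
      & " seconds of CPU");
end Show_CPU_Delta;
```

No Split is needed here: the D.14 subtraction plus
Ada.Real_Time.To_Duration already yields a printable value.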

-- 
Jeff Carter
"Clear? Why, a 4-yr-old child could understand this
report. Run out and find me a 4-yr-old child. I can't
make head or tail out of it."
Duck Soup
94



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-15 22:05         ` Ada.Execution_Time Randy Brukardt
  2010-12-16  1:14           ` Ada.Execution_Time BrianG
@ 2010-12-16  8:45           ` Dmitry A. Kazakov
  2010-12-16 16:49             ` Ada.Execution_Time BrianG
  1 sibling, 1 reply; 124+ messages in thread
From: Dmitry A. Kazakov @ 2010-12-16  8:45 UTC (permalink / raw)


On Wed, 15 Dec 2010 16:05:16 -0600, Randy Brukardt wrote:

> I think you are missing the point of CPU_Time. It is an abstract 
> representation of some underlying counter. There is no requirement that this 
> counter have any particular value -- in particular it is not necessarily 
> zero when a task is created. So the only operations that are meaningful on a 
> value of type CPU_Time are comparisons and differences. Arguably, CPU_Time 
> is misnamed, because it is *not* some sort of time type.

Any computer time is a representation of some counter. I think the point is
CPU_Time is not a real time, i.e. a time (actually the process driving the
corresponding counter) related to what people used to call "time" in the
external world. CPU_Time is what is usually called "simulation time." One
could use Duration or Time_Span in place of CPU_Time, but the concern is
that on some architectures, with multiple time sources, this would
introduce an additional inaccuracy. Another argument against it is that
there could be no fair translation from the CPU usage counter to
Duration/Time_Span (which is the case for Windows).

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-16  1:14           ` Ada.Execution_Time BrianG
  2010-12-16  5:46             ` Ada.Execution_Time Jeffrey Carter
@ 2010-12-16 11:37             ` Simon Wright
  2010-12-16 17:24               ` Ada.Execution_Time BrianG
  2010-12-17  0:35               ` New AdaIC site (was: Ada.Execution_Time) Randy Brukardt
  2010-12-16 13:08             ` Ada.Execution_Time Peter C. Chapin
  2010-12-16 18:17             ` Ada.Execution_Time Jeffrey Carter
  3 siblings, 2 replies; 124+ messages in thread
From: Simon Wright @ 2010-12-16 11:37 UTC (permalink / raw)


BrianG <briang000@gmail.com> writes:

>> Arguably, CPU_Time is misnamed, because it is *not*
>> some sort of time type.
> Then the package is misnamed too - How is "Execution_Time" not a time?

A 'time type'
[http://www.adaic.com/resources/add_content/standards/05rm/html/RM-9-6.html]
(6) can be used as the argument for a delay statement. Wouldn't make a
lot of sense for an execution time! (well, perhaps one could think of
some obscure use ...)

I see the standards have moved, time to update my bookmarks!



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-16  1:14           ` Ada.Execution_Time BrianG
  2010-12-16  5:46             ` Ada.Execution_Time Jeffrey Carter
  2010-12-16 11:37             ` Ada.Execution_Time Simon Wright
@ 2010-12-16 13:08             ` Peter C. Chapin
  2010-12-16 17:32               ` Ada.Execution_Time BrianG
  2010-12-16 18:17             ` Ada.Execution_Time Jeffrey Carter
  3 siblings, 1 reply; 124+ messages in thread
From: Peter C. Chapin @ 2010-12-16 13:08 UTC (permalink / raw)


On 2010-12-15 20:14, BrianG wrote:

> Then the package is misnamed too - How is "Execution_Time" not a time?
> Wouldn't tying it explicitly to Real_Time imply some relation to "real
> time" (whether that makes sense or not)?  Using Duration could help
> that, since it's implementation-defined.

To me "execution time" sounds like a measure of how long a program has
run (in some sense). In other words it sounds like some kind of time
interval. "The execution time of this process was 10.102 seconds."

However, people often use "time" to refer to some sort of absolute
clock. "What time is it? It is now 8:07am on 2010-12-16." The basic
confusion is that the term "time" is extremely ambiguous in ordinary
usage. Not only is it used both for time intervals and absolute time
values, but there are several different kinds of time one might talk about.

Peter



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-16  5:46             ` Ada.Execution_Time Jeffrey Carter
@ 2010-12-16 16:13               ` BrianG
  0 siblings, 0 replies; 124+ messages in thread
From: BrianG @ 2010-12-16 16:13 UTC (permalink / raw)


Jeffrey Carter wrote:
>  From ARM D.14:
> 
> function "-"  (Left : CPU_Time; Right : CPU_Time)  return Time_Span;
> 

Must be the "draft" I'm still using.  Time to find GNAT's adainclude on 
that computer.



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-16  8:45           ` Ada.Execution_Time Dmitry A. Kazakov
@ 2010-12-16 16:49             ` BrianG
  2010-12-16 17:52               ` Ada.Execution_Time Dmitry A. Kazakov
  0 siblings, 1 reply; 124+ messages in thread
From: BrianG @ 2010-12-16 16:49 UTC (permalink / raw)


Dmitry A. Kazakov wrote:
> On Wed, 15 Dec 2010 16:05:16 -0600, Randy Brukardt wrote:
> 
>> I think you are missing the point of CPU_Time. It is an abstract 
>> representation of some underlying counter. There is no requirement that this 
>> counter have any particular value -- in particular it is not necessarily 
>> zero when a task is created. So the only operations that are meaningful on a 
>> value of type CPU_Time are comparisons and differences. Arguably, CPU_Time 
>> is misnamed, because it is *not* some sort of time type.
> 
> Any computer time is a representation of some counter. I think the point is
> CPU_Time is not a real time, i.e. a time (actually the process driving the
> corresponding counter) related to what people used to call "time" in the
> external world. CPU_Time is what is usually called "simulation time." One
> could use Duration or Time_Span in place of CPU_Time, but the concern is
> that on some architectures, with multiple time sources, this would
> introduce an additional inaccuracy. Another argument against it is that
> there could be no fair translation from the CPU usage counter to
> Duration/Time_Span (which is the case for Windows).
> 
Isn't any "time" on a computer just a "simulation time"?  Yes, some 
times may be intended to emulate clock-on-the-wall time, but that 
doesn't mean they're a very good emulation (ever measure the accuracy of 
a PC that's not synched to something?  You can get a watch free in a box 
of cereal that's orders of magnitude better).  That's why we have 
Calendar, Real_Time, and CPU_Time - they're meant to be different 
things, but they are all "time" in some sense.  CPU_Time is obviously an 
approximation, dependent on the RTS, OS, task scheduler, etc.

What's so particularly bad about Windows (aside from the normal Windows 
things)?  Granted, I'm only doing simple prototyping (for non-Windows 
eventual use), but it seems a "fair" approximation.  When I added a 
1-second loop doing nonsense work (to get any measured value), it reads 
about 1 second, within about 5% (which is at least as good as I would 
have expected, given the 'normal' jitter on Delay).

--BrianG



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-16 11:37             ` Ada.Execution_Time Simon Wright
@ 2010-12-16 17:24               ` BrianG
  2010-12-16 17:45                 ` Ada.Execution_Time Adam Beneschan
  2010-12-17  0:35               ` New AdaIC site (was: Ada.Execution_Time) Randy Brukardt
  1 sibling, 1 reply; 124+ messages in thread
From: BrianG @ 2010-12-16 17:24 UTC (permalink / raw)


Simon Wright wrote:
> BrianG <briang000@gmail.com> writes:
> 
>>> Arguably, CPU_Time is misnamed, because it is *not*
>>> some sort of time type.
>> Then the package is misnamed too - How is "Execution_Time" not a time?
> 
> A 'time type'
> [http://www.adaic.com/resources/add_content/standards/05rm/html/RM-9-6.html]
> (6) can be used as the argument for a delay statement. Wouldn't make a
> lot of sense for an execution time! (well, perhaps one could think of
> some obscure use ...)
(Actually for a delay_until.)  That paragraph seems to contradict the 
previous one, which says "any nonlimited type".  Shouldn't (6) define 
Real_Time.Time, since it's not Calendar.Time and isn't 
implementation-defined?  At least now Randy's comments make sense - I 
hadn't realized there was a language-defined concept of special 
"time types" with special uses.  (I hadn't realized the standard 
defines a use for certain private types that is not explicitly evident 
in the code - are there any other "magic" uses like this?)

> 
> I see the standards have moved, time to update my bookmarks!



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-16 13:08             ` Ada.Execution_Time Peter C. Chapin
@ 2010-12-16 17:32               ` BrianG
  0 siblings, 0 replies; 124+ messages in thread
From: BrianG @ 2010-12-16 17:32 UTC (permalink / raw)


Peter C. Chapin wrote:
> On 2010-12-15 20:14, BrianG wrote:
> 
>> Then the package is misnamed too - How is "Execution_Time" not a time?
>> Wouldn't tying it explicitly to Real_Time imply some relation to "real
>> time" (whether that makes sense or not)?  Using Duration could help
>> that, since it's implementation-defined.
> 
> To me "execution time" sounds like a measure of how long a program has
> run (in some sense). In other words it sounds like some kind of time
> interval. "The execution time of this process was 10.102 seconds."
"CPU_Time" makes it clearer - it's not the execution time of the 
program, but the amount of CPU it has used.  If there's time-sharing, it 
can be less than the elapsed time used.  Or if there are multiple cores, 
it may be greater (although that may be unlikely at the task level).
> 
> However, people often use "time" to refer to some sort of absolute
> clock. "What time is it? It is now 8:07am on 2010-12-16." The basic
> confusion is that the term "time" is extremely ambiguous in ordinary
> usage. Not only is it used both for time intervals and absolute time
> values, but there are several different kinds of time one might talk about.
> 
> Peter



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-16 17:24               ` Ada.Execution_Time BrianG
@ 2010-12-16 17:45                 ` Adam Beneschan
  2010-12-16 21:13                   ` Ada.Execution_Time Jeffrey Carter
  0 siblings, 1 reply; 124+ messages in thread
From: Adam Beneschan @ 2010-12-16 17:45 UTC (permalink / raw)


On Dec 16, 9:24 am, BrianG <briang...@gmail.com> wrote:
> Simon Wright wrote:
> > BrianG <briang...@gmail.com> writes:
>
> >>> Arguably, CPU_Time is misnamed, because it is *not*
> >>> some sort of time type.
> >> Then the package is misnamed too - How is "Execution_Time" not a time?
>
> > A 'time type'
> > [http://www.adaic.com/resources/add_content/standards/05rm/html/RM-9-6...]
> > (6) can be used as the argument for a delay statement. Wouldn't make a
> > lot of sense for an execution time! (well, perhaps one could think of
> > some obscure use ...)
>
> (Actually for a delay_until.)  That paragraph seems to contradict the
> previous one which says "any nonlimited type".

No, not really.  The rule that says "any nonlimited type" is a Name
Resolution Rule.  Those rules control how the language resolves
possibly ambiguous statements.  It's important to realize that Name
Resolution Rules are not legality rules, and it's possible for
something to be illegal and still satisfy the Name Resolution Rules,
which means that a "possible interpretation" can still cause an
ambiguity even if it's illegal.  Example:

   type Int is new Integer;
   function Overloaded (N : Integer) return Integer;
   function Overloaded (N : Integer) return Character;

   X : Int := Int (Overloaded (5));

This last call to Overloaded is ambiguous (and therefore illegal) even
though one definition of Overloaded returns a Character which cannot
legally be converted to Int.  The type conversion from a Character-
returning function to Int still satisfies the Name Resolution Rules
(4.6(6)).  Moral: Don't look at Name Resolution Rules if you're trying
to figure out whether something is legal.  (Other than when trying to
figure out whether something is unambiguous.)

> Shouldn't (6) define
> Real_Time.Time since it's not Calendar.Time and isn't
> implementation-defined?  

It should probably say "language-defined or implementation-defined".
D.8(18) does say that Real_Time.Time is one of those "time types".

                                 -- Adam



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-16 16:49             ` Ada.Execution_Time BrianG
@ 2010-12-16 17:52               ` Dmitry A. Kazakov
  2010-12-17  8:49                 ` Ada.Execution_Time Niklas Holsti
  0 siblings, 1 reply; 124+ messages in thread
From: Dmitry A. Kazakov @ 2010-12-16 17:52 UTC (permalink / raw)


On Thu, 16 Dec 2010 11:49:13 -0500, BrianG wrote:

> Dmitry A. Kazakov wrote:
>> On Wed, 15 Dec 2010 16:05:16 -0600, Randy Brukardt wrote:
>> 
>>> I think you are missing the point of CPU_Time. It is an abstract 
>>> representation of some underlying counter. There is no requirement that this 
>>> counter have any particular value -- in particular it is not necessarily 
>>> zero when a task is created. So the only operations that are meaningful on a 
>>> value of type CPU_Time are comparisons and differences. Arguably, CPU_Time 
>>> is misnamed, because it is *not* some sort of time type.
>> 
>> Any computer time is a representation of some counter. I think the point is
>> CPU_Time is not a real time, i.e. a time (actually the process driving the
>> corresponding counter) related to what people used to call "time" in the
>> external world. CPU_Time is what is usually called "simulation time." One
>> could use Duration or Time_Span in place of CPU_Time, but the concern is
>> that on some architectures, with multiple time sources, this would
>> introduce an additional inaccuracy. Another argument against it is that
>> there could be no fair translation from the CPU usage counter to
>> Duration/Time_Span (which is the case for Windows).
>> 
> Isn't any "time" related to a computer nothing but a "simulation time"? 

No, Ada.Calendar.Time and Ada.Real_Time.Time are derived from quartz
generators, which are physical devices. CPU_Time is derived from the time
the task owned a processor. This is a task time, the time in a simulated
Universe where nothing but the task exists. This Universe is not real, so
its time is not.

Or to put it another way: when Time has a value T, then under certain
conditions this has some meaning invariant to the program and the task
being run. For Ada.Real_Time.Time it is only the time differences T2 - T1
that have this meaning. CPU_Time has no physical meaning: 2s of it might be
2.5s of real time, or 1 year of real time.

> Yes, some times may be intended to emulate clock-on-the-wall-time, but 
> that doesn't mean they're a very good emulation (ever measure the 
> accuracy of a PC that's not synched to something?

But the intent was to emulate the real time, whatever accuracy the result
might have.

> CPU_Time is 
> obviously an approximation, dependent on the RTS, OS, task scheduler, etc.

An approximation of what?

> What's so particularly bad about Windows (aside from the normal Windows 
> things)?

Windows counts full quanta. It means that if the task (thread) enters a
non-busy wait, e.g. for I/O or for another event, *before* it has spent
its quantum, the quantum is not counted (if I remember correctly). In
effect, you could theoretically have 0 CPU time with 99% processor load.
Using the task manager, you might frequently observe the effect of this:
moderate CPU load, but everything is frozen.

(I haven't checked this behavior since Windows Server 2003; maybe they
fixed it in Vista, 7, etc.)
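
One way to observe how coarse a platform's counter is: run a busy loop
and print wall-clock time next to CPU time.  A sketch (the procedure
name and loop bound are arbitrary; the "-" on CPU_Time assumes an
implementation matching ARM D.14):

```ada
with Ada.Execution_Time;
with Ada.Real_Time;
with Ada.Text_IO;

procedure Compare_Clocks is
   use type Ada.Real_Time.Time;            --  Time - Time => Time_Span
   use type Ada.Execution_Time.CPU_Time;   --  CPU_Time - CPU_Time => Time_Span
   RT0 : constant Ada.Real_Time.Time          := Ada.Real_Time.Clock;
   CT0 : constant Ada.Execution_Time.CPU_Time := Ada.Execution_Time.Clock;
   X   : Long_Float := 1.0;
begin
   for I in 1 .. 50_000_000 loop
      X := X + 1.0 / Long_Float (I);       --  busy work, never waits
   end loop;
   Ada.Text_IO.Put_Line
     ("real:" & Duration'Image
        (Ada.Real_Time.To_Duration (Ada.Real_Time.Clock - RT0))
      & "  cpu:" & Duration'Image
        (Ada.Real_Time.To_Duration (Ada.Execution_Time.Clock - CT0)));
end Compare_Clocks;
```

On an OS that only charges whole quanta, the two figures can diverge
noticeably even for a loop that never blocks.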

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-16  1:14           ` Ada.Execution_Time BrianG
                               ` (2 preceding siblings ...)
  2010-12-16 13:08             ` Ada.Execution_Time Peter C. Chapin
@ 2010-12-16 18:17             ` Jeffrey Carter
  3 siblings, 0 replies; 124+ messages in thread
From: Jeffrey Carter @ 2010-12-16 18:17 UTC (permalink / raw)


On 12/15/2010 06:14 PM, BrianG wrote:
>
> One of my problems is that difference (and sum) isn't provided between
> CPU_Time's, only with a Time_Span. But you can only convert a portion of a
> CPU_Time to Time_Span. When is that useful (as opposed to Splitting both
> CPU_Times)?
>
> A function "-" (L, R : CPU_Time) return Time_Span (or better, Duration) would be
> required for what you describe above (actually, if what you say is true, then
> that and Clock are all that's required).

> But that is not provided - that would require a "-" between two CPU_Time's
> returning a Time_Span. Unless all CPU_Time's are always less than a second, you
> can't get there easily.

 From ARM D.14:

function "-"  (Left : CPU_Time; Right : CPU_Time)  return Time_Span;

-- 
Jeff Carter
"Clear? Why, a 4-yr-old child could understand this
report. Run out and find me a 4-yr-old child. I can't
make head or tail out of it."
Duck Soup
94



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-16 17:45                 ` Ada.Execution_Time Adam Beneschan
@ 2010-12-16 21:13                   ` Jeffrey Carter
  0 siblings, 0 replies; 124+ messages in thread
From: Jeffrey Carter @ 2010-12-16 21:13 UTC (permalink / raw)


On 12/16/2010 10:45 AM, Adam Beneschan wrote:
>
> It should probably say "language-defined or implementation-defined".
> D.8(18) does say that Real_Time.Time is one of those "time types".

That's a horrible way to phrase it. I would say, "some other time type".

-- 
Jeff Carter
"Have you gone berserk? Can't you see that that man is a ni?"
Blazing Saddles
38



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: New AdaIC site (was: Ada.Execution_Time)
  2010-12-16 11:37             ` Ada.Execution_Time Simon Wright
  2010-12-16 17:24               ` Ada.Execution_Time BrianG
@ 2010-12-17  0:35               ` Randy Brukardt
  1 sibling, 0 replies; 124+ messages in thread
From: Randy Brukardt @ 2010-12-17  0:35 UTC (permalink / raw)


"Simon Wright" <simon@pushface.org> wrote in message 
news:m24oaerlsi.fsf@pushface.org...
...
> I see the standards have moved, time to update my bookmarks!

I'd wait a day or two while we get the glitches out of these new sites. The 
domains are pointing at a mix of old and new servers at the moment...

                                    Randy.





^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-15 23:14               ` Ada.Execution_Time Adam Beneschan
@ 2010-12-17  0:44                 ` Randy Brukardt
  2010-12-17 17:54                   ` Ada.Execution_Time Warren
  2010-12-20 21:28                   ` Ada.Execution_Time Keith Thompson
  0 siblings, 2 replies; 124+ messages in thread
From: Randy Brukardt @ 2010-12-17  0:44 UTC (permalink / raw)


"Adam Beneschan" <adam@irvine.com> wrote in message 
news:dfcf048b-bb6e-4993-b62a-9147bad3a6ff@j32g2000prh.googlegroups.com...
On Dec 15, 2:52 pm, Keith Thompson <ks...@mib.org> wrote:
...
>> So, um, why is Assert a pragma rather than a statement?
>>
>> if Debug_Mode then
>> assert Is_Good(X);
>> end if;
...
>Somebody on the ARG might have a more authoritative answer.  My
>reading of AI95-286 is that a number of Ada compilers had already
>implemented the Assert pragma and there was a lot of code using it.
>Of course, those compilers couldn't have added "assert" as a statement
>on their own, but adding an implementation-defined pragma is OK.

That's one reason. The other is that you can't put a statement into a 
declarative part (well, you can, but you need to use a helper generic and a 
helper procedure, along with an instantiation, which is insane -- although 
it is not that unusual to see that done in a program). A lot of asserts fit 
most naturally into the declarative part (precondition ones, for instance, 
although those will be better defined separately in Ada 2012).

                                                  Randy.





^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-16  3:54             ` Ada.Execution_Time jpwoodruff
@ 2010-12-17  7:11               ` Stephen Leake
  0 siblings, 0 replies; 124+ messages in thread
From: Stephen Leake @ 2010-12-17  7:11 UTC (permalink / raw)


jpwoodruff <jpwoodruff@gmail.com> writes:

> Here's a flippant suggestion: maybe there should be a pragma to set
> stack size.  

If it set a _minimum_ stack size, that might be useful. It doesn't know
what else I have in my task, so it can't set the
_actual_ stack size!
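
For library-level tasks (though not the environment task, which still
needs the linker option Pascal mentioned), Ada does already let a
package request a stack size with pragma Storage_Size.  A sketch along
the lines of John's CPU.Counter; the 512 KiB figure and the entries are
illustrative guesses:

```ada
package CPU is
   task Counter is
      --  Request a larger stack for this task only; the environment
      --  task is unaffected.  Note this sets the size outright rather
      --  than a minimum, which is exactly the objection above.
      pragma Storage_Size (512 * 1024);
      entry Start;
      entry Report (Used : out Duration);
   end Counter;
end CPU;
```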

-- 
-- Stephe



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-16 17:52               ` Ada.Execution_Time Dmitry A. Kazakov
@ 2010-12-17  8:49                 ` Niklas Holsti
  2010-12-17  9:32                   ` Ada.Execution_Time Dmitry A. Kazakov
  0 siblings, 1 reply; 124+ messages in thread
From: Niklas Holsti @ 2010-12-17  8:49 UTC (permalink / raw)



>>> On Wed, 15 Dec 2010 16:05:16 -0600, Randy Brukardt wrote:
>>>
>>>> I think you are missing the point of CPU_Time. It is an abstract 
>>>> representation of some underlying counter. There is no requirement that this 
>>>> counter have any particular value -- in particular it is not necessarily 
>>>> zero when a task is created.

Are you sure, Randy? RM D.14 13/2 says "For each task, the execution 
time value is set to zero at the creation of the task."

>>>> So the only operations that are meaningful on a 
>>>> value of type CPU_Time are comparisons and differences. Arguably, CPU_Time 
>>>> is misnamed, because it is *not* some sort of time type.

Additions of CPU_Time values should also be meaningful. As I understand 
it, Ada.Execution_Time is meant for use in task scheduling, where it is 
essential that, when several tasks start at the same time on one 
processor, the sum of their CPU_Time values at any later instant is 
close to the real elapsed time (an Ada.Real_Time.Time_Span) - assuming 
that CPU_Time starts at zero for each task; see above.

Dmitry A. Kazakov wrote:
> CPU_Time has no physical meaning. 2s might be 2.5s
> real time or 1 year real time.

CPU_Time values have physical meaning after being summed over all tasks. 
The sum should be the real time, as closely as possible.
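
Under that reading (each task's count starting at zero, per D.14 13/2),
summing a group's execution times can be sketched as below.  The
Group_Times package and its Id_List type are hypothetical helpers, not
part of the standard:

```ada
with Ada.Execution_Time;
with Ada.Real_Time;
with Ada.Task_Identification;

package Group_Times is
   type Id_List is
     array (Positive range <>) of Ada.Task_Identification.Task_Id;
   function Total_CPU (Group : Id_List) return Duration;
end Group_Times;

package body Group_Times is
   function Total_CPU (Group : Id_List) return Duration is
      Secs  : Ada.Execution_Time.Seconds_Count;
      Frac  : Ada.Real_Time.Time_Span;
      Total : Duration := 0.0;
   begin
      for I in Group'Range loop
         --  Split each task's CPU_Time into whole seconds + fraction,
         --  then accumulate both parts as a Duration.
         Ada.Execution_Time.Split
           (Ada.Execution_Time.Clock (Group (I)), Secs, Frac);
         Total := Total
           + Duration (Secs)
           + Ada.Real_Time.To_Duration (Frac);
      end loop;
      return Total;
   end Total_CPU;
end Group_Times;
```

On a single processor the result should track real elapsed time, to
within the accuracy of the underlying counters.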

-- 
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
       .      @       .



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-15  0:16       ` Ada.Execution_Time BrianG
                           ` (2 preceding siblings ...)
  2010-12-15 22:05         ` Ada.Execution_Time Randy Brukardt
@ 2010-12-17  8:59         ` anon
  2010-12-19  3:07           ` Ada.Execution_Time BrianG
  3 siblings, 1 reply; 124+ messages in thread
From: anon @ 2010-12-17  8:59 UTC (permalink / raw)


In <ie91co$cko$1@news.eternal-september.org>, BrianG <briang000@gmail.com> writes:
>Georg Bauhaus wrote:
>> On 12/12/10 10:59 PM, BrianG wrote:
>> 
>> 
>>> But my question still remains: What's the intended use of 
>>> Ada.Execution_Time? Is there an intended use where its content 
>>> (CPU_Time, Seconds_Count and Time_Span, "+", "<", etc.) is useful?
>> 
>> I think that your original posting mentions a use that is quite
>> consistent with what the rationale says: each task has its own time.
>> Points in time objects can be split into values suitable for
>> arithmetic, using Time_Span objects.  Then, from the result of
>> arithmetic, produce an object suitable for print, as desired.
>> 
>> 
>> While this seems like having to write a bit much,
>> it makes things explicit, like Ada forces one to be
>> in many cases.   That' how I explain the series of
>> steps to myself.
>> 
>> Isn't it just like "null;" being required to express
>> the null statement?  It seems to me to be a logical
>> consequence of requiring that intents must be stated
>> explicitly.
>> 
>I have no problem with verbosity or explicitness, and that's not what I 
>was asking about.
>
>My problem is that what is provided in the package in question does not 
>provide any "values suitable for arithmetic" or provide "an object 
>suitable for print" (unless all you care about is the number of whole 
>seconds with no information about the (required) fraction, which seems 
>rather limiting).  Time_Span is a private type, defined in another 
>package.  If all I want is CPU_Time (in some form), why do I need 
>Ada.Real_Time?  Also, why are "+" and "-" provided as they are defined? 
>  (And why Time_Span?  I thought that was the difference between two 
>times, not the fractional part of time.)
>
>Given the rest of this thread, I would guess my answer is "No, no one 
>actually uses Ada.Execution_Time".
>
>--BrianG

Ada.Execution_Time is used for performance and reliability monitoring of the 
CPU resource, aka an MCP (Tron). The control program can monitor the CPU 
usage of each task and decide which task needs to give up the CPU for the 
next task. 

On a shared server system, it can monitor which web site is overusing the 
CPU and shut that web site down temporarily or permanently. One example: 
too many Java servlets on a web site.

For Ada 2012, it will most likely be used in the Ada runtime to balance 
the load of an Ada partition across multiple cores, i.e. as a job scheduler 
for multiple tasks on multiple CPUs.

For the average Ada programmer, it's another Ada package that most will 
never use, because they will just use Ada.Real_Time.  The only problem 
is that Ada.Real_Time reports an accumulation of times. A short list of 
these times includes virtual-memory swapping, I/O processing, any 
CPU-handled interrupts, the time the task spends executing, and the time 
the task sleeps while other tasks are executing. In some cases the 
Ada.Execution_Time package can replace Ada.Real_Time by altering only 
the with/use statements.

Some programmers might use this package to try to improve the performance 
of an algorithm.

And a few might use it for debugging, e.g. to prevent tasks from 
running away with the CPU resources -- for example, to stop this type of 
condition from occurring at run time:

  with x86 ;  -- hypothetical package defining an x86 instruction subset
  use  x86 ;

  task body Run is
  begin
     Disable_Interrupts ;
     loop          -- Endless loop,
        null ;     -- which optimizes to a single jump instruction.
     end loop ;
  end Run ;


When optimized, this can shut down a CPU or the whole computer system, 
requiring either a non-maskable reset or a full power-off cold restart, 
with no chance to save critical data or close files.

Also, historically this kind of package would be used to calculate the CPU 
usage charges for a customer.




^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-17  8:49                 ` Ada.Execution_Time Niklas Holsti
@ 2010-12-17  9:32                   ` Dmitry A. Kazakov
  2010-12-17 11:50                     ` Ada.Execution_Time Niklas Holsti
  0 siblings, 1 reply; 124+ messages in thread
From: Dmitry A. Kazakov @ 2010-12-17  9:32 UTC (permalink / raw)


On Fri, 17 Dec 2010 10:49:26 +0200, Niklas Holsti wrote:

> Dmitry A. Kazakov wrote:
>> CPU_Time has no physical meaning. 2s might be 2.5s
>> real time or 1 year real time.
> 
> CPU_Time values have physical meaning after being summed over all tasks. 
> The sum should be the real time, as closely as possible.

1. Not tasks, but threads + kernel services + kernel drivers + CPU
frequency slowdowns => a) wrong; b) out of Ada scope => cannot be mandated

2. Not so anyway under many OSes => again, cannot be mandated

3. The intended purpose of CPU_Time has nothing to do with this constraint.
Nobody is interested in knowing if the actual sum is close or not to the
real time duration. It is a simulation time, which *could* be projected to
the real time in order to estimate potential CPU load. And the result
depends on the premises made.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-17  9:32                   ` Ada.Execution_Time Dmitry A. Kazakov
@ 2010-12-17 11:50                     ` Niklas Holsti
  2010-12-17 13:10                       ` Ada.Execution_Time Dmitry A. Kazakov
  0 siblings, 1 reply; 124+ messages in thread
From: Niklas Holsti @ 2010-12-17 11:50 UTC (permalink / raw)


Dmitry A. Kazakov wrote:
> On Fri, 17 Dec 2010 10:49:26 +0200, Niklas Holsti wrote:
> 
>> Dmitry A. Kazakov wrote:
>>> CPU_Time has no physical meaning. 2s might be 2.5s
>>> real time or 1 year real time.
>> CPU_Time values have physical meaning after being summed over all tasks. 
>> The sum should be the real time, as closely as possible.
> 
> 1. Not tasks, but threads + kernel services + kernel drivers + CPU
> frequency slowdowns => a) wrong; b) out of Ada scope => cannot be mandated

I believe we are talking about the intended meaning of 
Ada.Execution_Time.CPU_Time, not about how far it can be precisely 
"mandated" (standardized).

Appendix D is about real-time systems, and I believe it is aimed in 
particular at systems built with Ada tasks and the Ada RTS. In such 
systems there may or may not be CPU time -- "overhead" -- that is not 
included in the CPU_Time of any task. See the last sentence in RM D.14 
11/2: "It is implementation defined which task, if any, is charged the 
execution time that is consumed by interrupt handlers and run-time 
services on behalf of the system". In most systems there will be some 
such non-task overhead, but in a "pure Ada" system it should be small 
relative to the total CPU_Time of the tasks.

By "CPU frequency slowdowns" I assume you mean a system that varies the 
CPU clock frequency, for example to reduce energy consumption when load 
is low. This does not necessarily conflict with Ada.Execution_Time and the 
physical meaning of CPU_Time, although it may make implementation 
harder. One implementation could be to drive the CPU-time counter by a 
fixed clock (a timer clock), not by the CPU clock.

> 2. Not so anyway under many OSes => again, cannot be mandated

Whether or not all OSes support the concepts of Ada.Execution_Time is 
irrelevant to a discussion of the intended meaning of CPU_Time.

> 3. The intended purpose of CPU_Time has nothing to do with this constraint.
> Nobody is interested in knowing if the actual sum is close or not to the
> real time duration.

Real-time task scheduling and schedulability analysis is *all* about 
adding up task execution times (CPU_Time values, in principle) and 
comparing the sums to real-time deadlines (durations). I do believe 
there are some people, here and there, who are interested in such things...

In practice, since tasks in real-time Ada systems are usually created 
once at system start and are thereafter repeatedly activated (triggered) 
for each job (each deadline), the total CPU_Time of a task is less 
relevant for scheduling decisions than is the increase in CPU_Time since 
the last activation of the task. Using the services of 
Ada.Execution_Time, that increment is represented as a Time_Span. From 
this point of view, it is understandable that Ada.Execution_Time does 
not provide an addition operation "+" (Left, Right : CPU_Time) return 
CPU_Time.
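That usage pattern -- measuring the CPU_Time increment per activation as a
Time_Span -- can be sketched as follows. All names are illustrative; only
operations actually declared in RM D.14 and D.8 are used.

```ada
--  Sketch: per-activation CPU cost using only the operations that
--  RM D.14 declares.  Names are illustrative.

with Ada.Execution_Time;
with Ada.Real_Time;

procedure Per_Job_Cost is
   use type Ada.Execution_Time.CPU_Time;

   Before, Now : Ada.Execution_Time.CPU_Time;
   Cost        : Ada.Real_Time.Time_Span;
begin
   Before := Ada.Execution_Time.Clock;  -- CPU time at activation

   --  ... do one job's worth of work here ...

   Now  := Ada.Execution_Time.Clock;
   Cost := Now - Before;  -- "-" (CPU_Time, CPU_Time) return Time_Span

   --  Cost can now be compared against a per-job CPU budget, e.g.
   --  "if Cost > Ada.Real_Time.Milliseconds (10) then ..." (which needs
   --  "use type Ada.Real_Time.Time_Span" for the ">" operator).
end Per_Job_Cost;
```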

> It is a simulation time, which *could* be projected to
> the real time in order to estimate potential CPU load.

"Simulation", "projection"... convey no meaning to me.

-- 
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
       .      @       .



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-17 11:50                     ` Ada.Execution_Time Niklas Holsti
@ 2010-12-17 13:10                       ` Dmitry A. Kazakov
  2010-12-18 21:20                         ` Ada.Execution_Time Niklas Holsti
  0 siblings, 1 reply; 124+ messages in thread
From: Dmitry A. Kazakov @ 2010-12-17 13:10 UTC (permalink / raw)


On Fri, 17 Dec 2010 13:50:26 +0200, Niklas Holsti wrote:

> Dmitry A. Kazakov wrote:
>> On Fri, 17 Dec 2010 10:49:26 +0200, Niklas Holsti wrote:
>> 
>>> Dmitry A. Kazakov wrote:
>>>> CPU_Time has no physical meaning. 2s might be 2.5s
>>>> real time or 1 year real time.
>>> CPU_Time values have physical meaning after being summed over all tasks. 
>>> The sum should be the real time, as closely as possible.
>> 
>> 1. Not tasks, but threads + kernel services + kernel drivers + CPU
>> frequency slowdowns => a) wrong; b) out of Ada scope => cannot be mandated
> 
> I believe we are talking about the intended meaning of 
> Ada.Execution_Time.CPU_Time, not about how far it can be precisely 
> "mandated" (standardized).
> 
> Appendix D is about real-time systems, and I believe it is aimed in 
> particular at systems built with Ada tasks and the Ada RTS. In such 
> systems there may or may not be CPU time -- "overhead" -- that is not 
> included in the CPU_Time of any task. See the last sentence in RM D.14 
> 11/2: "It is implementation defined which task, if any, is charged the 
> execution time that is consumed by interrupt handlers and run-time 
> services on behalf of the system". In most systems there will be some 
> such non-task overhead, but in a "pure Ada" system it should be small 
> relative to the total CPU_Time of the tasks.

Yes, this is what I meant. CPU_Time does not have the meaning:

"CPU_Time values have physical meaning after being summed over all tasks. 
The sum should be the real time, as closely as possible."

Anyway, even if the sum of components has a physical meaning, that does not
imply that the components have it.

> By "CPU frequency slowdowns" I assume you mean a system that varies the 
> CPU clock frequency, for example to reduce energy consumption when load 
> is low. This does not necessarily conflict with Ada.Execution_Time and the 
> physical meaning of CPU_Time, although it may make implementation 
> harder. One implementation could be to drive the CPU-time counter by a 
> fixed clock (a timer clock), not by the CPU clock.

I am not a language lawyer, but I bet that an implementation of
Ada.Execution_Time.Split that ignores any CPU frequency changes when
summing up processor ticks consumed by the task would be legal.

>> 2. Not so anyway under many OSes => again, cannot be mandated
> 
> Whether or not all OSes support the concepts of Ada.Execution_Time is 
> irrelevant to a discussion of the intended meaning of CPU_Time.

ARM usually does not intend what would be impossible to implement.

> "Simulation", "projection"... convey no meaning to me.

http://en.wikipedia.org/wiki/Discrete_event_simulation

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-17  0:44                 ` Ada.Execution_Time Randy Brukardt
@ 2010-12-17 17:54                   ` Warren
  2010-12-20 21:28                   ` Ada.Execution_Time Keith Thompson
  1 sibling, 0 replies; 124+ messages in thread
From: Warren @ 2010-12-17 17:54 UTC (permalink / raw)


Randy Brukardt expounded in news:ieebpl$rt1$1@munin.nbi.dk:

> "Adam Beneschan" <adam@irvine.com> wrote in message 
> news:dfcf048b-bb6e-4993-b62a-9147bad3a6ff@j32g2000prh.google
> groups.com... On Dec 15, 2:52 pm, Keith Thompson
> <ks...@mib.org> wrote: ...
>>> So, um, why is Assert a pragma rather than a statement?
>>>
>>> if Debug_Mode then
>>> assert Is_Good(X);
>>> end if;
> ...
>>Somebody on the ARG might have a more authoritative answer.
>> My reading of AI95-286 is that a number of Ada compilers
>>had already implemented the Assert pragma and there was a
>>lot of code using it. Of course, those compilers couldn't
>>have added "assert" as a statement on their own, but adding
>>an implementation-defined pragma is OK. 
> 
> That's one reason. The other is that you can't put a
> statement into a declarative part (well, you can, but you
> need to use a helper generic and a helper procedure, along
> with an instantiation, which is insane -- although it is
> not that unusual to see that done in a program). A lot of
> asserts fit most naturally into the declarative part
> (precondition ones, for instance, although those will be
> better defined separately in Ada 2012). 
> 
>                                                   Randy.

I've never even thought to try that. I'll have to remember 
that.

Warren



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-17 13:10                       ` Ada.Execution_Time Dmitry A. Kazakov
@ 2010-12-18 21:20                         ` Niklas Holsti
  2010-12-19  9:57                           ` Ada.Execution_Time Dmitry A. Kazakov
  0 siblings, 1 reply; 124+ messages in thread
From: Niklas Holsti @ 2010-12-18 21:20 UTC (permalink / raw)


Dmitry A. Kazakov wrote:
> On Fri, 17 Dec 2010 13:50:26 +0200, Niklas Holsti wrote:
> 
>> Dmitry A. Kazakov wrote:
>>> On Fri, 17 Dec 2010 10:49:26 +0200, Niklas Holsti wrote:
>>>
>>>> Dmitry A. Kazakov wrote:
>>>>> CPU_Time has no physical meaning. 2s might be 2.5s
>>>>> real time or 1 year real time.
>>>> CPU_Time values have physical meaning after being summed over all tasks. 
>>>> The sum should be the real time, as closely as possible.
>>> 1. Not tasks, but threads + kernel services + kernel drivers + CPU
>>> frequency slowdowns => a) wrong; b) out of Ada scope => cannot be mandated
>> I believe we are talking about the intended meaning of 
>> Ada.Execution_Time.CPU_Time, not about how far it can be precisely 
>> "mandated" (standardized).
>>
>> Appendix D is about real-time systems, and I believe it is aimed in 
>> particular at systems built with Ada tasks and the Ada RTS. In such 
>> systems there may or may not be CPU time -- "overhead" -- that is not 
>> included in the CPU_Time of any task. See the last sentence in RM D.14 
>> 11/2: "It is implementation defined which task, if any, is charged the 
>> execution time that is consumed by interrupt handlers and run-time 
>> services on behalf of the system". In most systems there will be some 
>> such non-task overhead, but in a "pure Ada" system it should be small 
>> relative to the total CPU_Time of the tasks.
> 
> Yes, this is what I meant. CPU_Time does not have the meaning:
> 
> "CPU_Time values have physical meaning after being summed over all tasks. 
> The sum should be the real time, as closely as possible."

I said "as closely as possible", and I don't expect to find many systems 
in which they are exactly equal. But I still think that this (ideally 
exact, in practice approximate) relationship reflects the intended 
physical meaning of Ada.Execution_Time.CPU_Time: the elapsed real time 
(Time_Span) is divided into task execution times (CPU_Time) through task 
scheduling.

> Anyway, even if the sum of components has a physical meaning that does not
> imply that the components have it.

If you have only one task, the sum is identical to the term, so their 
physical meanings are the same. Generalize for the case of many tasks.

Next, we can argue if quarks have physical meaning, or if only hadrons 
do... :-)

The concept and measurement of "the execution time of a task" does 
become problematic in complex processors that have hardware 
multi-threading and can run several tasks in more or less parallel 
fashion, without completely isolating the tasks from each other. 
Schedulability analysis in such systems is difficult since the 
"execution time" of a task depends on which other tasks are running at 
the same time.

>> By "CPU frequency slowdowns" I assume you mean a system that varies the 
>> CPU clock frequency, for example to reduce energy consumption when load 
>>> is low. This does not necessarily conflict with Ada.Execution_Time and the 
>> physical meaning of CPU_Time, although it may make implementation 
>> harder. One implementation could be to drive the CPU-time counter by a 
>> fixed clock (a timer clock), not by the CPU clock.
> 
> I am not a language lawyer, but I bet that an implementation of
> Ada.Execution_Time.Split that ignores any CPU frequency changes when
> summing up processor ticks consumed by the task would be legal.

Whether or not such an implementation is formally legal, that would 
require very perverse interpretations of the text in RM D.14.  You would 
have to argue that a system with a lowered CPU clock frequency, running 
a single task with no interrupts, is only "executing the task" for a 
small part of each clock cycle, and the rest of each clock cycle is 
spent on some kind of system overhead. I don't think that is what the RM 
authors intended.

You may be right that the RM has no formal requirement that would 
prevent such an implementation. (In fact, some variable-frequency 
scheduling methods may prefer to measure task "execution times" in units 
of processor ticks, not in real-time units like seconds.) But the 
implementation could not implement the function "-" (Left, Right : 
CPU_Time) return Time_Span to give a meaningful result, with the normal 
meaning of Time_Span, since the result would be the same Time_Span for a 
high CPU frequency and for a low one.

>>> 2. Not so anyway under many OSes => again, cannot be mandated
>> Whether or not all OSes support the concepts of Ada.Execution_Time is 
>> irrelevant to a discussion of the intended meaning of CPU_Time.
> 
> ARM usually does not intend what would be impossible to implement.

Not all OSes are designed for real-time systems. As I understand it, the 
ARM sensibly intends Annex D to be implemented in real-time OSes or in 
bare-Ada-RTS systems, not under Windows.

Even under Windows, as I understand earlier posts in this thread, 
problems arise only if task interruptions, suspensions, and preemptions 
are so frequent that the "quant" truncation is a significant part of the 
typical uninterrupted execution time. Moreover, an Ada RTS running on 
Windows could of course use another clock or timer to measure execution 
time, if the Windows functionality is unsuitable.

-- 
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
       .      @       .



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-17  8:59         ` Ada.Execution_Time anon
@ 2010-12-19  3:07           ` BrianG
  2010-12-19  4:01             ` Ada.Execution_Time Vinzent Hoefler
  2010-12-19 22:54             ` Ada.Execution_Time anon
  0 siblings, 2 replies; 124+ messages in thread
From: BrianG @ 2010-12-19  3:07 UTC (permalink / raw)


anon@att.net wrote:
> In <ie91co$cko$1@news.eternal-september.org>, BrianG <briang000@gmail.com> writes:
>> Georg Bauhaus wrote:
>>> On 12/12/10 10:59 PM, BrianG wrote:
...
[Lots of meaningless comments deleted.]
> For the average Ada programmer, its another Ada package that most will 
> never use because they will just use Ada.Real_Time.  
[etc.]
>  In some cases the Ada.Execution_Time
> package can replace the Ada.Real_Time with only altering the with/use 
> statements.
If you mean that they both define a Clock and a Split, maybe.  If you mean
any program that actually does anything, that's not possible.  That was my
original comment:  Execution_Time does not provide any types/operations
useful, without also 'with'ing Real_Time.
> 
[etc.]
(Don't know why I bother w/ this msg.)



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-19  3:07           ` Ada.Execution_Time BrianG
@ 2010-12-19  4:01             ` Vinzent Hoefler
  2010-12-19 11:00               ` Ada.Execution_Time Niklas Holsti
                                 ` (2 more replies)
  2010-12-19 22:54             ` Ada.Execution_Time anon
  1 sibling, 3 replies; 124+ messages in thread
From: Vinzent Hoefler @ 2010-12-19  4:01 UTC (permalink / raw)


BrianG wrote:

> If you mean that they both define a Clock and a Split, maybe.  If you mean
> any program that actually does anything, that's not possible.  That was my
> original comment:  Execution_Time does not provide any types/operations
> useful, without also 'with'ing Real_Time.

Yes, but so what? The intention of Ada.Execution_Time wasn't to provide the
user with means to instrument the software and to Text_IO some mostly
meaningless values (any decent profiler can do that for you), but rather a
way to implement user-defined schedulers based on actual CPU usage. You may
want to take a look at the child packages Timers and Group_Budget to see the
intended usage.
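For the group case, a hedged sketch of Ada.Execution_Time.Group_Budgets
(RM D.14.2) might look like the following. It assumes a runtime that
implements D.14.2; the task names, the 50 ms budget and the
abort-on-exhaustion policy are all invented for illustration.

```ada
--  Sketch only: assumes a runtime implementing RM D.14.2
--  (Ada.Execution_Time.Group_Budgets).  Names and policy illustrative.

with Ada.Real_Time;
with Ada.Task_Identification;
with Ada.Execution_Time.Group_Budgets;

procedure Group_Budget_Demo is

   task A;
   task B;

   task body A is begin loop null; end loop; end A;
   task body B is begin loop null; end loop; end B;

   protected Handler is
      pragma Interrupt_Priority;  -- must be >= Min_Handler_Ceiling
      procedure Exhausted
        (GB : in out Ada.Execution_Time.Group_Budgets.Group_Budget);
   end Handler;

   protected body Handler is
      procedure Exhausted
        (GB : in out Ada.Execution_Time.Group_Budgets.Group_Budget)
      is
         Members : constant Ada.Execution_Time.Group_Budgets.Task_Array :=
           Ada.Execution_Time.Group_Budgets.Members (GB);
      begin
         --  The group used up its shared budget: abort every member.
         for I in Members'Range loop
            Ada.Task_Identification.Abort_Task (Members (I));
         end loop;
      end Exhausted;
   end Handler;

   Budget : Ada.Execution_Time.Group_Budgets.Group_Budget;

begin
   Ada.Execution_Time.Group_Budgets.Add_Task (Budget, A'Identity);
   Ada.Execution_Time.Group_Budgets.Add_Task (Budget, B'Identity);
   Ada.Execution_Time.Group_Budgets.Set_Handler
     (Budget, Handler.Exhausted'Access);
   --  Give the two tasks 50 ms of CPU time between them.
   Ada.Execution_Time.Group_Budgets.Replenish
     (Budget, Ada.Real_Time.Milliseconds (50));
end Group_Budget_Demo;
```

Note that the budget is shared: the group's CPU consumption is summed across
the member tasks, which is essentially the "sum of execution times of a group
of related tasks" use case from the start of this thread.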

And well, if you're ranting about CPU_Time, Real_Time.Time_Span is not much
better. It's a pain in the ass to convert an Ada.Real_Time.Time_Span to
another type to interface with OS-specific time types (like time_t) if you're
opting for speed, portability and accuracy.

BTW, has anyone any better ideas for converting a Time_Span into a record
containing seconds and nanoseconds than this:

    function To_Interval (TS : in Ada.Real_Time.Time_Span)
                          return ASAAC_Types.TimeInterval is
       Nanos_Per_Sec : constant                         := 1_000_000_000.0;
       One_Second    : constant Ada.Real_Time.Time_Span :=
                         Ada.Real_Time.Milliseconds (1000);
       Max_Interval  : constant Ada.Real_Time.Time_Span :=
                         Integer (ASAAC_Types.Second'Last) * One_Second;
       Seconds       : ASAAC_Types.Second;
       Nano_Seconds  : ASAAC_Types.Nanosec;
    begin
       declare
          Sub_Seconds : Ada.Real_Time.Time_Span;
       begin
          if TS >= Max_Interval then
             Seconds      := ASAAC_Types.Second'Last;
             Nano_Seconds := ASAAC_Types.Nanosec'Last;
          elsif TS < Ada.Real_Time.Time_Span_Zero then
             Seconds      := ASAAC_Types.Second'First;
             Nano_Seconds := ASAAC_Types.Nanosec'First;
          else
             Seconds      := ASAAC_Types.Second (TS / One_Second);
             Sub_Seconds  := TS - (Integer (Seconds) * One_Second);
             Nano_Seconds :=
               ASAAC_Types.Nanosec
                 (Nanos_Per_Sec * Ada.Real_Time.To_Duration (Sub_Seconds));
          end if;
       end;

       return
         ASAAC_Types.TimeInterval'(Sec  => Seconds,
                                   NSec => Nano_Seconds);
    end To_Interval;

The solution I came up with here generally works, but suffers from some
potential overflow problems and doesn't look very efficient to me (although
that's a minor problem given the context it's usually used in).


Vinzent.

-- 
You know, we're sitting on four million pounds of fuel, one nuclear weapon,
and a thing that has 270,000 moving parts built by the lowest bidder.
Makes you feel good, doesn't it?
   --  Rockhound, "Armageddon"



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-18 21:20                         ` Ada.Execution_Time Niklas Holsti
@ 2010-12-19  9:57                           ` Dmitry A. Kazakov
  2010-12-25 11:31                             ` Ada.Execution_Time Niklas Holsti
  0 siblings, 1 reply; 124+ messages in thread
From: Dmitry A. Kazakov @ 2010-12-19  9:57 UTC (permalink / raw)


On Sat, 18 Dec 2010 23:20:20 +0200, Niklas Holsti wrote:

> Dmitry A. Kazakov wrote:
>> On Fri, 17 Dec 2010 13:50:26 +0200, Niklas Holsti wrote:
>> 
> Next, we can argue if quarks have physical meaning, or if only hadrons 
> do... :-)

I thought about it! (:-)) But the example of a "half of a car" might be
better.

> The concept and measurement of "the execution time of a task" does 
> become problematic in complex processors that have hardware 
> multi-threading and can run several tasks in more or less parallel 
> fashion, without completely isolating the tasks from each other. 

No, the concept is just fine; it is the interpretation of the measured
values in the way you wanted that causes problems. That is the core of my
point. The measure is not real time.

>>> By "CPU frequency slowdowns" I assume you mean a system that varies the 
>>> CPU clock frequency, for example to reduce energy consumption when load 
>>> is low. This does not necessarily conflict with Ada.Execution_Time and the 
>>> physical meaning of CPU_Time, although it may make implementation 
>>> harder. One implementation could be to drive the CPU-time counter by a 
>>> fixed clock (a timer clock), not by the CPU clock.
>> 
>> I am not a language lawyer, but I bet that an implementation of
>> Ada.Execution_Time.Split that ignores any CPU frequency changes when
>> summing up processor ticks consumed by the task would be legal.
> 
> Whether or not such an implementation is formally legal, that would 
> require very perverse interpretations of the text in RM D.14.

RM D.14 defines the constant CPU_Tick, whose physical equivalent (if we
tried to enforce your interpretation) is not constant for many CPU/OS
combinations. On such a platform the implementation would be as perverse as
RM D.14 itself. But the perversion exists only because of the interpretation.

> (In fact, some variable-frequency 
> scheduling methods may prefer to measure task "execution times" in units 
> of processor ticks, not in real-time units like seconds.)

Exactly. As a simulation time RM D.14 is perfectly OK. It can be used for
CPU load estimations, while the "real time" implementation could not. BTW,
even for measurements people usually have in mind (e.g. comparing resources
consumed by tasks), simulation time would be more fair. The problem is with
I/O, because I/O is a "real" thing.

> But the 
> implementation could not implement the function "-" (Left, Right : 
> CPU_Time) return Time_Span to give a meaningful result, with the normal 
> meaning of Time_Span, since the result would be the same Time_Span for a 
> high CPU frequency and for a low one.

The result is not meaningful as real time, but it is as simulation time.

> Moreover, an Ada RTS running on 
> Windows could of course use another clock or timer to measure execution 
> time, if the Windows functionality is unsuitable.

I read one study of a Java RTS; unfortunately I lost the link. They faced
this problem. In order to measure the real (statistically unbiased etc.) CPU
time, they implemented a Windows driver or service (I don't remember whether
they also had some hardware), which monitored, frequently enough, which
thread was occupying the processor. An Ada RTS could use a similar technique.
However, even this was not enough; one would really have to hook into the OS
scheduler to get fair real CPU time.

BTW, I never checked Linux or VxWorks for that. Does anybody know whether it
would be possible to implement RM D.14 in this interpretation under these
OSes?

The requirement is that the OS scheduler accumulate the differences between
the RT clock readings whenever the task loses and gains the processor. From
what I know about VxWorks, I doubt it much.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-19  4:01             ` Ada.Execution_Time Vinzent Hoefler
@ 2010-12-19 11:00               ` Niklas Holsti
  2010-12-21  0:37                 ` Ada.Execution_Time Randy Brukardt
  2010-12-19 12:27               ` Ada.Execution_Time Dmitry A. Kazakov
  2010-12-21  0:32               ` Ada.Execution_Time Randy Brukardt
  2 siblings, 1 reply; 124+ messages in thread
From: Niklas Holsti @ 2010-12-19 11:00 UTC (permalink / raw)


Vinzent Hoefler wrote:
> BrianG wrote:
> 
>> If you mean that they both define a Clock and a Split, maybe.  If you
>> mean any program that actually does anything, that's not possible.
>> That was my original comment:  Execution_Time does not provide any
>> types/operations useful, without also 'with'ing Real_Time.
> 
> Yes, but so what? The intention of Ada.Execution_Time wasn't to provide the
> user with means to instrument the software and to Text_IO some mostly
> meaningless values (any decent profiler can do that for you), but rather a
> way to implement user-defined schedulers based on actual CPU usage.

That is also my understanding of the intention. Moreover, since task 
scheduling for real-time systems unavoidably deals both with execution 
times and with real times, I think it is natural that both 
Ada.Execution_Time and Ada.Real_Time are required.

> And well, if you're ranting about CPU_Time, Real_Time.Time_Span is not much
> better. It's a pain in the ass to convert an Ada.Real.Time_Span to another
> type to interface with OS-specific time types (like time_t) if you're 
> opting for speed, portability and accuracy.

If your target type is OS-specific, it seems harsh to require full 
portability of the conversion.

> BTW, has anyone any better ideas to convert TimeSpan into a record 
> containing seconds and nanoseconds than this:

I may not have better ideas, but I do have some comments on your code.

>    function To_Interval (TS : in Ada.Real_Time.Time_Span)
>                          return ASAAC_Types.TimeInterval is

The following are constants independent of the parameters:

>       Nanos_Per_Sec : constant                         := 1_000_000_000.0;
>       One_Second    : constant Ada.Real_Time.Time_Span :=
>                         Ada.Real_Time.Milliseconds (1000);

(Why not ... := Ada.Real_Time.Seconds (1) ?)

>       Max_Interval  : constant Ada.Real_Time.Time_Span :=
>                         Integer (ASAAC_Types.Second'Last) * One_Second;

... so I would move the above declarations into the surrounding package, 
at least for One_Second and Max_Interval. Of course a compiler might do 
that optimization in the code, too. (By the way, Max_Interval is a bit 
less than the largest value of TimeInterval, since the above expression 
has no NSec part.)

>       Seconds       : ASAAC_Types.Second;
>       Nano_Seconds  : ASAAC_Types.Nanosec;
>    begin
>       declare
>          Sub_Seconds : Ada.Real_Time.Time_Span;
>       begin

The following tests for ranges seem unavoidable in any conversion 
between types defined by different sources. I don't see how 
Ada.Real_Time can be blamed for this.  Of course I cannot judge if the 
result (saturation at 'Last or 'First) is right for your application. As 
you say, there are potential overflow problems, already in the 
computation of Max_Interval above.

>          if TS >= Max_Interval then
>             Seconds      := ASAAC_Types.Second'Last;
>             Nano_Seconds := ASAAC_Types.Nanosec'Last;

An alternative approach to the over-range condition TS >= Max_Interval 
is to make the definition of the application-specific type 
ASAAC_Types.Second depend on the actual range of Ada.Real_Time.Time_Span 
so that over-range becomes impossible. Unfortunately I don't see how 
this could be done portably by static expressions in the declaration of 
ASAAC_Types.Second, so it would have to be a subtype declaration with an 
upper bound of To_Duration(Time_Span_Last)-1.0. This raises 
Constraint_Error at elaboration if the base type is too narrow.

>          elsif TS < Ada.Real_Time.Time_Span_Zero then
>             Seconds      := ASAAC_Types.Second'First;
>             Nano_Seconds := ASAAC_Types.Nanosec'First;

The above under-range test seems to be forced by the fact that 
ASAAC_Types.TimeInterval is unable to represent negative time intervals, 
while Ada.Real_Time.Time_Span can do that. This problem is hardly a 
shortcoming in Ada.Real_Time.

>          else
>             Seconds      := ASAAC_Types.Second (TS / One_Second);
>             Sub_Seconds  := TS - (Integer (Seconds) * One_Second);
>             Nano_Seconds :=
>               ASAAC_Types.Nanosec
>                 (Nanos_Per_Sec * Ada.Real_Time.To_Duration (Sub_Seconds));

An alternative method converts the whole TS to Duration and then 
extracts the seconds and nanoseconds:

    TS_Dur : Duration;

    TS_Dur := To_Duration (TS);
    Seconds := ASAAC_Types.Second (TS_Dur - 0.5);
    Nano_Seconds := ASAAC_Types.Nanosec (
       Nanos_Per_Sec * (TS_Dur - Duration (Seconds)));

This, too, risks overflow in the multiplication, since the guaranteed 
range of Duration only extends to 86_400. Moreover, using Duration may 
lose precision (see below).

>          end if;
>       end;
> 
>       return
>         ASAAC_Types.TimeInterval'(Sec  => Seconds,
>                                   NSec => Nano_Seconds);
>    end To_Interval;
> 
> The solution I came up with here generally works, but suffers some 
> potential overflow problems

I think they are unavoidable unless you take care to make the range of 
the application-defined types (ASAAC_Types) depend on the range of the 
implementations of the standard types and also do the multiplication in 
some type with sufficient range, that you define.

> and doesn't look very efficient to me (although that's a minor
> problem given the context it's usually used in).

Apart from the definition of the constants (which can be moved out of 
the function), and the range checks (which depend on the application 
types in ASAAC_Types), the real conversion consists of a division, a 
subtraction, two multiplications and one call of To_Duration. This does 
not seem excessive to me, considering the nature of that target type. 
The alternative method that starts by converting all of TS to Duration 
avoids the division.

Still, this example suggests that Ada.Real_Time perhaps should provide a 
Split operation that divides a Time_Span into an integer number of 
Seconds and a sub-second Duration.
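Until then, such a Split can be written in user code; a minimal sketch (the subprogram name is mine, and it assumes TS is non-negative):

```ada
with Ada.Real_Time; use Ada.Real_Time;

procedure Split_Span
  (TS      : in  Time_Span;
   Seconds : out Natural;
   Sub_Sec : out Duration)
is
   One_Second : constant Time_Span := Ada.Real_Time.Seconds (1);
   Whole      : Integer;
begin
   --  "/" (Time_Span, Time_Span) returns Integer (RM D.8)
   Whole   := TS / One_Second;
   Seconds := Natural (Whole);
   --  Only the sub-second remainder goes through To_Duration, so
   --  any loss of Duration precision is confined to that part
   Sub_Sec := To_Duration (TS - Whole * One_Second);
end Split_Span;
```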

A problem that you don't mention is that the use of Duration may cause 
loss of precision. Duration'Small may be as large as 20 milliseconds (RM 
9.6(27)), although at most 100 microseconds are advised (RM 9.6(30)), 
while the Time_Span resolution must be 20 microseconds or better (RM 
D.8(30)). Perhaps Annex D should require better Duration resolution?
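The actual values on a given implementation are easy to inspect; a small probe (the output is of course compiler- and target-specific):

```ada
with Ada.Text_IO;   use Ada.Text_IO;
with Ada.Real_Time; use Ada.Real_Time;

procedure Probe_Resolutions is
begin
   --  RM 9.6 allows Duration'Small up to 20 ms, advises <= 100 us
   Put_Line ("Duration'Small =" & Duration'Image (Duration'Small));
   --  RM D.8 requires Time_Span_Unit <= 20 us
   Put_Line ("Time_Span_Unit ="
             & Duration'Image (To_Duration (Time_Span_Unit)));
   --  May raise Constraint_Error if Time_Span_Last > Duration'Last
   Put_Line ("Time_Span_Last ="
             & Duration'Image (To_Duration (Time_Span_Last)));
end Probe_Resolutions;
```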

Loss of precision could be avoided by doing the multiplication in 
Time_Span instead of in Duration:

    Nano_Seconds := ASAAC_Types.Nanosec (
       To_Duration (Nanos_Per_Sec * Sub_Seconds));

but the overflow risk is perhaps larger, since Time_Span_Last may not be 
larger than 3600 (RM D.8(31)).

I have met with similar tricky problems in conversions between types of 
different origins in other contexts, too. I don't think that these 
problems mean that Ada.Real_Time is defective.

-- 
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
       .      @       .



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-19  4:01             ` Ada.Execution_Time Vinzent Hoefler
  2010-12-19 11:00               ` Ada.Execution_Time Niklas Holsti
@ 2010-12-19 12:27               ` Dmitry A. Kazakov
  2010-12-21  0:32               ` Ada.Execution_Time Randy Brukardt
  2 siblings, 0 replies; 124+ messages in thread
From: Dmitry A. Kazakov @ 2010-12-19 12:27 UTC (permalink / raw)


On Sun, 19 Dec 2010 05:01:22 +0100, Vinzent Hoefler wrote:

> And well, if you're ranting about CPU_Time, Real_Time.Time_Span is not much
> better. It's a pain in the ass to convert an Ada.Real_Time.Time_Span to another
> type to interface with OS-specific time types (like time_t) if you're opting
> for speed, portability and accuracy.

Well, speaking from my experience, the OS-specific time types are hardly
usable because of the OS services working with these types. The problem is
not conversion, but the implementations, which *do* use these [broken]
services. They might have catastrophic accuracy. For example, under
VxWorks we had to replace the AdaCore implementation of Ada.Real_Time with
our own implementation. This stuff is inherently non-portable.

But the interface of Ada.Real_Time is portable, so I see absolutely no
point in replacing it with something OS-specific. It won't get either
portability or accuracy.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-19  3:07           ` Ada.Execution_Time BrianG
  2010-12-19  4:01             ` Ada.Execution_Time Vinzent Hoefler
@ 2010-12-19 22:54             ` anon
  2010-12-20  3:14               ` Ada.Execution_Time BrianG
  1 sibling, 1 reply; 124+ messages in thread
From: anon @ 2010-12-19 22:54 UTC (permalink / raw)


In <iejsu9$lct$1@news.eternal-september.org>, BrianG <briang000@gmail.com> writes:
>anon@att.net wrote:
>> In <ie91co$cko$1@news.eternal-september.org>, BrianG <briang000@gmail.com> writes:
>>> Georg Bauhaus wrote:
>>>> On 12/12/10 10:59 PM, BrianG wrote:
>....
>[Lots of meaningless comments deleted.]
>> For the average Ada programmer, its another Ada package that most will 
>> never use because they will just use Ada.Real_Time.  
>[etc.]
>>  In some cases the Ada.Execution_Time
>> package can replace the Ada.Real_Time with only altering the with/use 
>> statements.
>If you mean that they both define a Clock and a Split, maybe.  If you mean
>any program that actually does anything, that's not possible.  That was my
>original comment:  Execution_Time does not provide any types/operations
>useful, without also 'with'ing Real_Time.
>> 
>[etc.]
>(Don't know why I bother w/ this msg.)


Kind of funny that you're cutting down "top", which can be found on most 
Linux boxes, and which allows anyone with the CPU columns turned on to 
see both the Real_Time and CPU_Time.

An example: just use your favorite music/video player to play a 2-minute 
song.  Real_Time will show the 2 minutes for the song, while CPU_Time 
will show only 1 to 3 seconds.

And I know one professor, now stationed in Poland, who enjoyed giving 
his students the assignment of decreasing the amount of CPU execution 
time an algorithm used, which normally meant rewriting the algorithm. 
His students just loved him for that!


Note: Not sure of the name for the Windows version of Linux's top.


As for pragmas that may affect the CPU execution time:
three pragmas that have been disabled in GNAT are:

    pragma Optimize     :  GNAT uses the gcc command-line option
                           -O(0,1,2,3), which does affect the 
                           compiler's translation.

    pragma System_Name  :  GNAT uses the default, or the gcc command 
    pragma Storage_Unit :  line, to determine the target. The target 
                           processor and its normal data size can 
                           affect CPU time.



And for web servers:

Check the "Terms" or "Agreement" of any major web host.  In the 
document you will see a paragraph stating that a web site using in 
excess of 10 to 25% of the server's resources will be shut down.

Examples: 

    From Host: http://www.micfo.com/agreement

    6 SERVER RESOURCE USAGE


        The Client agrees to utilize "micfo's" Server Resources as set 
        out in clauses 6.2.1, 6.2.2:

    6.2.1
        Shared Hosting; 7% of the CPU in any given twenty two (22) 
        Business Days.
    6.2.2
        Reseller Hosting; 10% of the CPU in any given twenty two 
        (22) Business Days.


        Also: from Host: http://www.hostgator.com/tos/tos.php

        User may not: 

        1) Use 25% or more of system resources for longer than 90 
           seconds. There are numerous activities that could cause 
           such problems; these include: CGI scripts, FTP, PHP, 
           HTTP, etc.

       12) Only use https protocol when necessary; encrypting and 
           decrypting communications is noticeably more CPU-intensive 
           than unencrypted communications.


How do you think these or any other web hosting companies could measure 
total system resource usage without measuring the CPU execution time of 
a given user or application?  In Ada, such routines can now use the 
package called "Ada.Execution_Time".





^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-19 22:54             ` Ada.Execution_Time anon
@ 2010-12-20  3:14               ` BrianG
  2010-12-22 14:30                 ` Ada.Execution_Time anon
  0 siblings, 1 reply; 124+ messages in thread
From: BrianG @ 2010-12-20  3:14 UTC (permalink / raw)


anon@att.net wrote:
> In <iejsu9$lct$1@news.eternal-september.org>, BrianG <briang000@gmail.com> writes:
>> anon@att.net wrote:
>>> In <ie91co$cko$1@news.eternal-september.org>, BrianG <briang000@gmail.com> writes:
>>>> Georg Bauhaus wrote:
>>>>> On 12/12/10 10:59 PM, BrianG wrote:
>> ....
> 
> Kind of funny that you're cutting down "top", which can be found on most 
> Linux boxes, and which allows anyone with the CPU columns turned on to 
> see both the Real_Time and CPU_Time.
Please provide any comment I made for/against "top".  Since my original 
question was based on Windows and I've stated that my (current) Linux 
doesn't support this package, that seems unlikely (even if I didn't 
trust my memory).
> 
...
> Note: Not sure of the name for the Windows version of Linux's top.
There is (that I know of) no real equiv to "top" - as in a command-line 
program.  The equiv to GNOME's "System Monitor" (for example - a gui 
program) would be Task Manager (and I made no comment about that either).
> 
...
> 
> How do you think these or any other web hosting companies could measure 
> total system resource usage without measuring the CPU execution time of 
> a given user or application?  In Ada, such routines can now use the 
> package called "Ada.Execution_Time".
> 
I have no problem with what Execution_Time does (as evidenced by the 
fact that I asked a question about its use) - it measures exactly what I 
want, an estimate of the CPU time used by a task.  My problem is with 
the way it is defined - it provides, by itself, no "value" that a using 
program can make use of to print or calculate (i.e. you also need 
Real_Time for that, which is silly - and I disagree that that is 
necessarily required in any case - my program didn't need it otherwise).
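(To make that concrete: summing two CPU_Times forces a detour through 
Split and Real_Time's operators - a sketch, with the function name my 
own, and which can overflow for very large totals:)

```ada
with Ada.Execution_Time;
with Ada.Real_Time;

function Total_CPU
  (A, B : Ada.Execution_Time.CPU_Time) return Duration
is
   use type Ada.Real_Time.Time_Span;
   SA, SB : Ada.Real_Time.Seconds_Count;
   FA, FB : Ada.Real_Time.Time_Span;
begin
   Ada.Execution_Time.Split (A, SA, FA);
   Ada.Execution_Time.Split (B, SB, FB);
   --  Both halves of each CPU_Time must be recombined by hand;
   --  the Integer conversion and the final "+" can overflow
   return Ada.Real_Time.To_Duration
            (Ada.Real_Time.Seconds (Integer (SA + SB)) + FA + FB);
end Total_CPU;
```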
> 
--BrianG



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-17  0:44                 ` Ada.Execution_Time Randy Brukardt
  2010-12-17 17:54                   ` Ada.Execution_Time Warren
@ 2010-12-20 21:28                   ` Keith Thompson
  2010-12-21  3:23                     ` Ada.Execution_Time Robert A Duff
  1 sibling, 1 reply; 124+ messages in thread
From: Keith Thompson @ 2010-12-20 21:28 UTC (permalink / raw)


"Randy Brukardt" <randy@rrsoftware.com> writes:
> "Adam Beneschan" <adam@irvine.com> wrote in message 
> news:dfcf048b-bb6e-4993-b62a-9147bad3a6ff@j32g2000prh.googlegroups.com...
> On Dec 15, 2:52 pm, Keith Thompson <ks...@mib.org> wrote:
> ...
>>> So, um, why is Assert a pragma rather than a statement?
>>>
>>> if Debug_Mode then
>>> assert Is_Good(X);
>>> end if;
> ...
>>Somebody on the ARG might have a more authoritative answer.  My
>>reading of AI95-286 is that a number of Ada compilers had already
>>implemented the Assert pragma and there was a lot of code using it.
>>Of course, those compilers couldn't have added "assert" as a statement
>>on their own, but adding an implementation-defined pragma is OK.
>
> That's one reason. The other is that you can't put a statement into a 
> declarative part (well, you can, but you need to use a helper generic and a 
> helper procedure, along with an instantiation, which is insane -- although 
> it is not that unusual to see that done in a program). A lot of asserts fit 
> most naturally into the declarative part (precondition ones, for instance, 
> although those will be better defined separately in Ada 2012).
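(For reference, the helper-generic workaround mentioned above looks 
roughly like this - a sketch with invented names:)

```ada
generic
   Condition : Boolean;
package Checked is
   --  Instantiating this package evaluates Condition and elaborates
   --  the body, so the check runs right where the instantiation
   --  appears - which may be a declarative part.
end Checked;

package body Checked is
begin
   if not Condition then
      raise Program_Error;  --  assertion failed
   end if;
end Checked;

--  Usage, in a declarative part:
--
--     procedure P is
--        X : Integer := Compute;
--        package Check_X is new Checked (X > 0);
--     begin
--        ...
```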

So add an assert operator that always yields True:

    declare
        Dummy: constant Boolean := assert some_expression; -- assert operator
    begin
        assert some_other_expression; -- assert statement
    end;

Though the use of "Assert" as an identifier in existing code is
certainly an issue.

-- 
Keith Thompson (The_Other_Keith) kst-u@mib.org  <http://www.ghoti.net/~kst>
Nokia
"We must do something.  This is something.  Therefore, we must do this."
    -- Antony Jay and Jonathan Lynn, "Yes Minister"



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-19  4:01             ` Ada.Execution_Time Vinzent Hoefler
  2010-12-19 11:00               ` Ada.Execution_Time Niklas Holsti
  2010-12-19 12:27               ` Ada.Execution_Time Dmitry A. Kazakov
@ 2010-12-21  0:32               ` Randy Brukardt
  2 siblings, 0 replies; 124+ messages in thread
From: Randy Brukardt @ 2010-12-21  0:32 UTC (permalink / raw)


"Vinzent Hoefler" <0439279208b62c95f1880bf0f8776eeb@t-domaingrabbing.de> 
wrote in message news:op.vnxz4kmplzeukk@jellix.jlfencey.com...
...
> Yes, but so what? The intention of Ada.Execution_Time wasn't to provide 
> the
> user with means to instrument the software and to Text_IO some mostly
> meaningless values (any decent profiler can do that for you), but rather a
> way to implement user-defined schedulers based on actual CPU usage. You 
> may
> want to take a look at the child packages Timers and Group_Budget to see 
> the
> intended usage.

Probably, but I wanted to use Ada.Execution_Time to *write* a "decent 
profiler" for our Ada programs. Since Janus/Ada doesn't use the system 
tasking facilities (mostly for historical reasons), existing profilers don't 
do a good job if the program includes any tasks. I've used various hacks 
based on Windows facilities to do part of the job, but Ada.Execution_Time 
would provide better information (especially for tasks).

Similarly, anyone that wanted portable profiling information probably would 
prefer to use Ada.Execution_Time rather than to invent something new (and 
necessarily not as portable). I found this sort of usage at least as 
compelling as the real-time usages.

                                       Randy.





^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-19 11:00               ` Ada.Execution_Time Niklas Holsti
@ 2010-12-21  0:37                 ` Randy Brukardt
  2010-12-21  1:20                   ` Ada.Execution_Time Jeffrey Carter
  0 siblings, 1 reply; 124+ messages in thread
From: Randy Brukardt @ 2010-12-21  0:37 UTC (permalink / raw)


"Niklas Holsti" <niklas.holsti@tidorum.invalid> wrote in message 
news:8n66ucFnavU1@mid.individual.net...
...
> A problem that you don't mention is that the use of Duration may cause 
> loss of precision. Duration'Small may be as large as 20 milliseconds (RM 
> 9.6(27)), although at most 100 microseconds are advised (RM 9.6(30)), 
> while the Time_Span resolution must be 20 microseconds or better (RM 
> D.8(30)). Perhaps Annex D should require better Duration resolution?

The rules for Duration were chosen so that it would not require more than a 
32-bit type. Not all embedded processors are set up to handle 64-bit numbers 
efficiently...

(As time moves on, this is less of a consideration than it used to be, but 
it still seems like a possible problem. Time_Span itself doesn't suffer from 
the problem since as a private type it can be represented as a record with 
several components. Duration is a visible fixed point type.) 





^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-21  0:37                 ` Ada.Execution_Time Randy Brukardt
@ 2010-12-21  1:20                   ` Jeffrey Carter
  0 siblings, 0 replies; 124+ messages in thread
From: Jeffrey Carter @ 2010-12-21  1:20 UTC (permalink / raw)


On 12/20/2010 05:37 PM, Randy Brukardt wrote:
>
> The rules for Duration were chosen so that it would not require more than a
> 32-bit type. Not all embedded processors are set up to handle 64-bit numbers
> efficiently...

Not much of a difference on an 8-bit processor, surely?

-- 
Jeff Carter
"Crucifixion's a doddle."
Monty Python's Life of Brian
82



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-20 21:28                   ` Ada.Execution_Time Keith Thompson
@ 2010-12-21  3:23                     ` Robert A Duff
  2010-12-21  8:04                       ` Ada.Execution_Time Dmitry A. Kazakov
  0 siblings, 1 reply; 124+ messages in thread
From: Robert A Duff @ 2010-12-21  3:23 UTC (permalink / raw)


Keith Thompson <kst-u@mib.org> writes:

> So add an assert operator that always yields True:
>
>     declare
>         Dummy: constant Boolean := assert some_expression; -- assert operator
>     begin
>         assert some_other_expression; -- assert statement
>     end;

If asserts had their own syntax, we could allow them wherever
we like -- "assert blah;" could be both a statement and
a declarative_item.

I really don't like having to declare dummy booleans.
It gets even more annoying when you have several.
What are you going to call them?  Dummy_1, Dummy_2,
and Dummy_3?  And then after some maintenance,
Dummy_1, Dummy_2_and_a_half, and Dummy_3?
Seems like an awful lot of noise -- assertions should
be easy!  (Of course, you might remember me complaining
that the declare/begin/end is just noise, too.)

Anyway, the strong syntactic separation between declarations
and statements makes no sense in a language where declarations
are executable code.  I think it's just wrong-headed thinking
inherited from Pascal.

> Though the use of "Assert" as an identifier in existing code is
> certainly an issue.

I've certainly written procedures called Assert that do
what you might expect.  This was quite common before
pragma Assert existed.

- Bob



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-21  3:23                     ` Ada.Execution_Time Robert A Duff
@ 2010-12-21  8:04                       ` Dmitry A. Kazakov
  2010-12-21 17:19                         ` Ada.Execution_Time Robert A Duff
  0 siblings, 1 reply; 124+ messages in thread
From: Dmitry A. Kazakov @ 2010-12-21  8:04 UTC (permalink / raw)


On Mon, 20 Dec 2010 22:23:12 -0500, Robert A Duff wrote:

> Anyway, the strong syntactic separation between declarations
> and statements makes no sense in a language where declarations
> are executable code.  I think it's just wrong-headed thinking
> inherited from Pascal.

It still does have sense in a language with lexical scopes. You need some
syntactically recognizable point where all things of the same scope become
usable.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-21  8:04                       ` Ada.Execution_Time Dmitry A. Kazakov
@ 2010-12-21 17:19                         ` Robert A Duff
  2010-12-21 17:43                           ` Ada.Execution_Time Dmitry A. Kazakov
  0 siblings, 1 reply; 124+ messages in thread
From: Robert A Duff @ 2010-12-21 17:19 UTC (permalink / raw)


"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:

> On Mon, 20 Dec 2010 22:23:12 -0500, Robert A Duff wrote:
>
>> Anyway, the strong syntactic separation between declarations
>> and statements makes no sense in a language where declarations
>> are executable code.  I think it's just wrong-headed thinking
>> inherited from Pascal.
>
> It still does have sense in a language with lexical scopes. You need some
> syntactically recognizable point where all things of the same scope become
> usable.

Yes, you need such a "syntactically recognizable point".
But that doesn't require any separation of declarations
from statements.  If My_Assert is just a regular
user-defined procedure, then I see nothing wrong with:

    procedure P is
        X : T1 := ...;
        My_Assert(Is_Good(X)); -- Not Ada!
        Y : T2 := ...;
        My_Assert(not Is_Evil(Y));
        ...

without sprinkling "declares" and "begins" all over.

- Bob



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-21 17:19                         ` Ada.Execution_Time Robert A Duff
@ 2010-12-21 17:43                           ` Dmitry A. Kazakov
  0 siblings, 0 replies; 124+ messages in thread
From: Dmitry A. Kazakov @ 2010-12-21 17:43 UTC (permalink / raw)


On Tue, 21 Dec 2010 12:19:47 -0500, Robert A Duff wrote:

> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:
> 
>> On Mon, 20 Dec 2010 22:23:12 -0500, Robert A Duff wrote:
>>
>>> Anyway, the strong syntactic separation between declarations
>>> and statements makes no sense in a language where declarations
>>> are executable code.  I think it's just wrong-headed thinking
>>> inherited from Pascal.
>>
>> It still does have sense in a language with lexical scopes. You need some
>> syntactically recognizable point where all things of the same scope become
>> usable.
> 
> Yes, you need such a "syntactically recognizable point".
> But that doesn't require any separation of declarations
> from statements.  If My_Assert is just a regular
> user-defined procedure, then I see nothing wrong with:
> 
>     procedure P is
>         X : T1 := ...;
>         My_Assert(Is_Good(X)); -- Not Ada!
>         Y : T2 := ...;
>         My_Assert(not Is_Evil(Y));
>         ...
> 
> without sprinkling "declares" and "begins" all over.

There are too many issues which are wrong here.

1. The exceptions from My_Assert cannot be handled in P.

2. Checking an instance of T1 is not bound to the type. It is done upon
some arbitrary usage of T1 in an arbitrary procedure P.

3. Assuming that checking is really bound to the procedure P, then I want
to be sure that checking is not premature, that P has elaborated all stuff
belonging there, *before* I am starting to check things.

4. Ada *cannot* handle errors upon initialization and finalization. We have
to fix that first before even considering checks in such contexts.

etc.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-20  3:14               ` Ada.Execution_Time BrianG
@ 2010-12-22 14:30                 ` anon
  2010-12-22 20:09                   ` Ada.Execution_Time BrianG
  0 siblings, 1 reply; 124+ messages in thread
From: anon @ 2010-12-22 14:30 UTC (permalink / raw)


In <iemhm8$4up$1@news.eternal-september.org>, BrianG <briang000@gmail.com> writes:
>anon@att.net wrote:
>> In <iejsu9$lct$1@news.eternal-september.org>, BrianG <briang000@gmail.com> writes:
>>> anon@att.net wrote:
>>>> In <ie91co$cko$1@news.eternal-september.org>, BrianG <briang000@gmail.com> writes:
>>>>> Georg Bauhaus wrote:
>>>>>> On 12/12/10 10:59 PM, BrianG wrote:
>>> ....
>> 
>> Kind of funny that you're cutting down "top", which can be found on most 
>> Linux boxes, and which allows anyone with the CPU columns turned on to 
>> see both the Real_Time and CPU_Time.
>Please provide any comment I made for/against "top".  Since my original 
>question was based on Windows and I've stated that my (current) Linux 
>doesn't support this package, that seems unlikely (even if I didn't 
>trust my memory).
>> 
>....
>> Note: Not sure of the name for the Windows version of Linux's top.
>There is (that I know of) no real equiv to "top" - as in a command-line 
>program.  The equiv to GNOME's "System Monitor" (for example - a gui 
>program) would be Task Manager (and I made no comment about that either).
>> 
>....
>> 
>> How do you think these or any other web hosting companies could measure 
>> total system resource usage without measuring the CPU execution time of 
>> a given user or application?  In Ada, such routines can now use the 
>> package called "Ada.Execution_Time".
>> 
>I have no problem with what Execution_Time does (as evidenced by the 
>fact that I asked a question about its use) - it measures exactly what I 
>want, an estimate of the CPU time used by a task.  My problem is with 
>the way it is defined - it provides, by itself, no "value" of that that 
>a using program can make use of to print or calculate (i.e. you also 
>need Real_Time for that, which is silly - and I disagree that that is 
>necessarily required in any case - my program didn't need it otherwise).
>> 
>--BrianG
There have been many third-party versions of this package over the years 
and most of them included a Linux version. The Windows version is specific 
to GNAT and, just like GNAT itself, it is not complete either.

For Execution_Time there are three packages. GNAT has:
  Ada.Execution_Time                    for both Linux-MaRTE and Windows
  Ada.Execution_Time.Timers             only for Linux-MaRTE
  Ada.Execution_Time.Group_Budgets      Unimplemented Yet



When using the Linux-MaRTE version, there are two packages:
  Ada.Execution_Time 
  Ada.Execution_Time.Timers 

Of course, in this example you could instead use (note: these are fully implemented):
  Ada.Real_Time
  Ada.Real_Time.Timing_Events


An algorithm comparison program might look like:

with Ada.Execution_Time ;
with Ada.Execution_Time.Timers ;

procedure Compare_Algorithm is

    task type Algorithm_1 is
      use Ada.Execution_Time.Timers ;
    begin
      begin
        Set_Handler ( ... ) ; -- start timer
        Algorithm_1 ;
        Time_Remaining ( ... ) ; -- sample timer
      end ;
      Cancel_Handler ( ... ) ; -- release timer
    end ;    

    task type Algorithm_2 is
      use Ada.Execution_Time.Timers ;
    begin
      begin
        Set_Handler ( ... ) ; -- start timer
        Algorithm_2 ;
        Time_Remaining ( ... ) ; -- sample timer
      end ;
      Cancel_Handler ( ... ) ; -- release timer
    end ;    

  use Ada.Execution_Time ;

begin
  -- Start tasks
  -- wait until all tasks finish
  -- compare times using Execution_Time
  -- Print summary of comparison
end ;





^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-22 14:30                 ` Ada.Execution_Time anon
@ 2010-12-22 20:09                   ` BrianG
  0 siblings, 0 replies; 124+ messages in thread
From: BrianG @ 2010-12-22 20:09 UTC (permalink / raw)


anon@att.net wrote:
> In <iemhm8$4up$1@news.eternal-september.org>, BrianG <briang000@gmail.com> writes:
>> anon@att.net wrote:
>> I have no problem with what Execution_Time does (as evidenced by the 
>> fact that I asked a question about its use) - it measures exactly what I 
>> want, an estimate of the CPU time used by a task.  My problem is with 
>> the way it is defined - it provides, by itself, no "value" of that that 
>> a using program can make use of to print or calculate (i.e. you also 
>> need Real_Time for that, which is silly - and I disagree that that is 
>> necessarily required in any case - my program didn't need it otherwise).
>> --BrianG
> There have been many third-party versions of this package over the years 
> and most of them included a Linux version. The Windows version is specific 
> to GNAT and, just like GNAT itself, it is not complete either.
My comments have been about the RM-defined package and its lack of 
usability (if that wasn't already clear).  What may or may not be 
provided by implementation-specific packages is irrelevant.

> 
> For Execution_Time there are three package. GNAT has 
>   Ada.Execution_Time                    for both Linux-MaRTE and Windows
>   Ada.Execution_Time.Timers             only for Linux-MaRTE
>   Ada.Execution_Time.Group_Budgets      Unimplemented Yet
> 
When you say "GNAT" here you should probably specify what version you 
mean.  I haven't looked in this area, but I was under the impression 
that GNAT has implemented all of Ada'05 for quite a while now (in Pro; 
maybe it has not yet all reached GCC or GPL releases?).

> 
> 
> In using the Linux-MaRTE version ( 2 packages ) 
>   Ada.Execution_Time 
>   Ada.Execution_Time.Timers 
See below.
> 
> Of course in this example you could use ( Note: they are fully implemented )
>   Ada.Real_Time
>   Ada.Real_Time.Timing_Events
So, instead of using functionality already provided (with a slightly 
kludgy issue), I should implement it entirely myself?  I still don't see 
how this could get me CPU_Time (even a "simulation" value).

> 
> 
> An algorithm comparison program might look like:
> 
> with Ada.Execution_Time ;
> with Ada.Execution_Time.Timers ;
Given the below program, please add some of the missing details to show 
how this can be useful without also "with Ada.Real_Time".  Neither 
Execution_Time nor Execution_Time.Timers provides any value that can be 
used directly.

> 
> procedure Compare_Algorithm is
> 
>     task type Algorithm_1 is
>       use Ada.Execution_Time.Timers ;
>     begin
>       begin
>         Set_Handler ( ... ) ; -- start timer
>         Algorithm_1 ;
>         Time_Remaining ( ... ) ; -- sample timer
>       end ;
>       Cancel_Handler ( ... ) ; -- release timer
>     end ;    
> 
>     task type Algorithm_2 is
>       use Ada.Execution_Time.Timers ;
>     begin
>       begin
>         Set_Handler ( ... ) ; -- start timer
>         Algorithm_2 ;
>         Time_Remaining ( ... ) ; -- sample timer
>       end ;
>       Cancel_Handler ( ... ) ; -- release timer
>     end ;    
> 
>   use Ada.Execution_Time ;
> 
> begin
>   -- Start tasks
>   -- wait until all tasks finish
>   -- compare times using Execution_Time
Provide details here (line above and line below) - without relying on 
CPU_Time or Time_Span, which are both private.
>   -- Print summary of comparison
> end ;
> 
> 
--BrianG



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-19  9:57                           ` Ada.Execution_Time Dmitry A. Kazakov
@ 2010-12-25 11:31                             ` Niklas Holsti
  2010-12-26 10:25                               ` Ada.Execution_Time Dmitry A. Kazakov
  2010-12-27 22:11                               ` Ada.Execution_Time Randy Brukardt
  0 siblings, 2 replies; 124+ messages in thread
From: Niklas Holsti @ 2010-12-25 11:31 UTC (permalink / raw)


Dmitry A. Kazakov wrote:
> On Sat, 18 Dec 2010 23:20:20 +0200, Niklas Holsti wrote:
...
>> The concept and measurement of "the execution time of a task" does
>> become problematic in complex processors that have hardware 
>> multi-threading and can run several tasks in more or less parallel
>> fashion, without completely isolating the tasks from each other.
> 
> No, the concept is just fine,

Fine for what? For schedulability analysis, fine for on-line scheduling,
... ?

However, this is a side issue, since we are (or at least I am)
discussing what the RM intends with Ada.Execution_Time, which must be
read in the context of D.2.1, which assumes that there is a clearly
defined set of "processors" and each processor executes exactly one
task at a time.

> it is the interpretation of the measured values in the way you
> wanted, which causes problems. That is the core of my point. The
> measure is not real time.

I still disagree, if we are talking about the intent of the RM. You have
not given any arguments, based on the RM text, to support your position.

>>> I am not a language lawyer, but I bet that an implementation of 
>>> Ada.Execution_Time.Split that ignores any CPU frequency changes
>>> when summing up processor ticks consumed by the task would be
>>> legal.
>> Whether or not such an implementation is formally legal, that would
>> require very perverse interpretations of the text in RM D.14.
> 
> RM D.14 defines CPU_Tick constant, of which physical equivalent (if
> we tried to enforce your interpretation) is not constant for many
> CPU/OS combinations.

The behaviour of some CPU/OS is irrelevant to the intent of the RM. As 
already said, an Ada implementation on such CPU/OS could use its own 
mechanisms for execution-time measurements.

> On such a platform the implementation would be as perverse as RM D.14
> is. But the perversion is only because of the interpretation.

Bah. I think that when RM D.14 says "time", it really means time. You
think it means something else, perhaps a CPU cycle count. I think the 
burden of proof is on you.

It seems evident to me that the text in D.14 must be interpreted using
the concepts in D.2.1, "The Task Dispatching Model", which clearly
specifies real-time points when a processor starts to execute a task and
stops executing a task. To me, and I believe to most readers of the RM,
the execution time of a task is the sum of these time slices, thus a
physical, real time.

>> (In fact, some variable-frequency scheduling methods may prefer to
>> measure task "execution times" in units of processor ticks, not in
>> real-time units like seconds.)
> 
> Exactly. As a simulation time RM D.14 is perfectly OK.

I put my comment, above, in parentheses because it is a side issue.

And you still have not defined what you mean by "simulation time", and
how you come there from the RM text.

> It can be used for CPU load estimations,

How, if it has "no physical meaning", as you claim?

> while the "real time" implementation could not.

Why not? That is the way ordinary schedulability analysis works. And
again, it is irrelevant that some CPU/OS combinations do not provide a
good way to measure the physical execution time of tasks.

> BTW, even for measurements people usually have in mind (e.g.
> comparing resources consumed by tasks), simulation time would be more
> fair.

What is "simulation time"?

> The problem is with I/O, because I/O is a "real" thing.

I assume by "I/O" you mean time consumed in waiting for some external
event. I agree that such I/O is a (solvable) problem in schedulability 
analysis, but it is not relevant for understanding the intent of RM D.14.

-- 
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
       .      @       .



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-25 11:31                             ` Ada.Execution_Time Niklas Holsti
@ 2010-12-26 10:25                               ` Dmitry A. Kazakov
  2010-12-27 12:44                                 ` Ada.Execution_Time Niklas Holsti
  2010-12-27 17:24                                 ` Ada.Execution_Time Robert A Duff
  2010-12-27 22:11                               ` Ada.Execution_Time Randy Brukardt
  1 sibling, 2 replies; 124+ messages in thread
From: Dmitry A. Kazakov @ 2010-12-26 10:25 UTC (permalink / raw)


On Sat, 25 Dec 2010 13:31:27 +0200, Niklas Holsti wrote:

> Dmitry A. Kazakov wrote:
>> On Sat, 18 Dec 2010 23:20:20 +0200, Niklas Holsti wrote:
> ...
>>> The concept and measurement of "the execution time of a task" does
>>> become problematic in complex processors that have hardware 
>>> multi-threading and can run several tasks in more or less parallel
>>> fashion, without completely isolating the tasks from each other.
>> 
>> No, the concept is just fine,
> 
> Fine for what? For schedulability analysis, fine for on-line scheduling,
> ... ?

Applicability of a concept is not what makes it wrong or right.

> However, this is a side issue, since we are (or at least I am)
> discussing what the RM intends with Ada.Execution_Time, which must be
> read in the context of D.2.1, which assumes that there is a clearly
> defined set of "processors" and each processor executes exactly one
> task at a time.

Why? Scheduling does not need Ada.Execution_Time; it is the
Ada.Execution_Time implementation that needs some input from the
scheduler.

How do you explain that CPU_Time, a thing about time sharing, appears in
the real-time systems annex D?

> You have
> not given any arguments, based on the RM text, to support your position.

I am not a language lawyer to interpret the RM texts. My argument was to
common sense.

>>>> I am not a language lawyer, but I bet that an implementation of 
>>>> Ada.Execution_Time.Split that ignores any CPU frequency changes
>>>> when summing up processor ticks consumed by the task would be
>>>> legal.
>>> Whether or not such an implementation is formally legal, that would
>>> require very perverse interpretations of the text in RM D.14.
>> 
>> RM D.14 defines the CPU_Tick constant, whose physical equivalent (if
>> we tried to enforce your interpretation) is not constant for many
>> CPU/OS combinations.
> 
> The behaviour of some CPU/OS is irrelevant to the intent of the RM.

Nope, one of the killer arguments ARG people deploy to reject most
reasonable AI's is: too difficult to implement on some obscure platform for
which Ada never existed and never will. (:-))

> As 
> already said, an Ada implementation on such CPU/OS could use its own 
> mechanisms for execution-time measurements.

Could or must? Does GNAT do this?

>> On such a platform the implementation would be as perverse as RM D.14
>> is. But the perversion is only because of the interpretation.
> 
> Bah. I think that when RM D.14 says "time", it really means time. You
> think it means something else, perhaps a CPU cycle count. I think the 
> burden of proof is on you.
> 
> It seems evident to me that the text in D.14 must be interpreted using
> the concepts in D.2.1, "The Task Dispatching Model", which clearly
> specifies real-time points when a processor starts to execute a task and
> stops executing a task. To me, and I believe to most readers of the RM,
> the execution time of a task is the sum of these time slices, thus a
> physical, real time.

If that was the intent, then I really do not understand why CPU_Time was
introduced in addition to Ada.Real_Time.Time / Time_Span.

> And you still have not defined what you mean by "simulation time", and
> how you come there from the RM text.

Simulation time is a model of the real time a physical process might
exhibit under certain conditions.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de




* Re: Ada.Execution_Time
  2010-12-26 10:25                               ` Ada.Execution_Time Dmitry A. Kazakov
@ 2010-12-27 12:44                                 ` Niklas Holsti
  2010-12-27 15:28                                   ` Ada.Execution_Time Dmitry A. Kazakov
  2010-12-27 17:24                                 ` Ada.Execution_Time Robert A Duff
  1 sibling, 1 reply; 124+ messages in thread
From: Niklas Holsti @ 2010-12-27 12:44 UTC (permalink / raw)


Dmitry A. Kazakov wrote:
> On Sat, 25 Dec 2010 13:31:27 +0200, Niklas Holsti wrote:
> 
>> Dmitry A. Kazakov wrote:
>>> On Sat, 18 Dec 2010 23:20:20 +0200, Niklas Holsti wrote:
>> ...
>>>> The concept and measurement of "the execution time of a task" does
>>>> become problematic in complex processors that have hardware 
>>>> multi-threading and can run several tasks in more or less parallel
>>>> fashion, without completely isolating the tasks from each other.
>>> No, the concept is just fine,
>> Fine for what? For schedulability analysis, fine for on-line scheduling,
>> ... ?
> 
> Applicability of a concept is not what makes it wrong or right.

I think it does. This concept, "the execution time of a task", stands 
for a number. A number is useless if it has no application in a 
calculation. I don't know of any other meaning of "wrongness" for a 
numerical concept.

>> However, this is a side issue, since we are (or at least I am)
>> discussing what the RM intends with Ada.Execution_Time, which must be
>> read in the context of D.2.1, which assumes that there is a clearly
>> defined set of "processors" and each processor executes exactly one
>> task at a time.
> 
> Why?

Because RM D.14 uses terms defined in RM D.2.1, for example "executing".

> Scheduling does not need Ada.Execution_Time,

The standard schedulers defined in the RM do not need 
Ada.Execution_Time but, as remarked earlier in this thread, one of the 
purposes of Ada.Execution_Time is to support the implementation of 
non-standard scheduling algorithms that may make on-line scheduling 
decisions that depend on the actual execution times of tasks. For 
example, scheduling based on "slack time".
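
For concreteness, here is a minimal sketch (my own; the budget value
and the yield policy are assumptions, not part of any standard
scheduler) of such an on-line decision, where a task polls its own CPU
clock and yields once it has consumed its budget:

```ada
--  Illustrative only: Budget and the "delay 0.0" yield policy are
--  assumptions of mine.
with Ada.Execution_Time; use Ada.Execution_Time;
with Ada.Real_Time;      use Ada.Real_Time;

procedure Budgeted_Step is
   Budget : constant Time_Span := Milliseconds (5);  --  assumed budget
   Start  : constant CPU_Time  := Clock;  --  calling task's CPU clock
begin
   --  ... perform one chunk of the task's work here ...
   if Clock - Start > Budget then
      delay 0.0;  --  yield the processor to other ready tasks
   end if;
end Budgeted_Step;
```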

> it is the Ada.Execution_Time implementation that needs some
> input from the scheduler.

We must be precise about our terms, here. Using terms as defined in 
http://en.wikipedia.org/wiki/Task_scheduling, Ada.Execution_Time needs 
input from the task *dispatcher* -- the part of the kernel that suspends 
and resumes tasks. It does not need input from the *scheduler*, which is 
the kernel part that selects the next task to be executed from among the 
ready tasks.

The term "dispatching" is defined differently in RM D.2.1 to mean the 
same as "scheduling" in the Wikipedia entry.

> How do you explain that CPU_Time, a thing about time sharing, appears in
> the real-time systems annex D?

You don't think that execution time is important for real-time systems?

In my view, CPU_Time is a measure of "real time", so its place in annex 
D is natural. In your view, CPU_Time is not "real time", which should 
make *you* surprised that it appears in annex D.

>> You have
>> not given any arguments, based on the RM text, to support your position.
> 
> I am not a language lawyer to interpret the RM texts. My argument was to
> common sense.

To me it seems that your argument is based on the difficulty (in your 
opinion) of implementing Ada.Execution_Time in some OSes such as MS 
Windows, if the RM word "time" is taken to mean real time.

It is common sense that some OSes are not designed (or not well 
designed) for real-time systems. Even a good real-time OS may not 
support all real-time methodologies, for example scheduling algorithms 
that depend on actual execution times.

>>>>> I am not a language lawyer, but I bet that an implementation of 
>>>>> Ada.Execution_Time.Split that ignores any CPU frequency changes
>>>>> when summing up processor ticks consumed by the task would be
>>>>> legal.
>>>> Whether or not such an implementation is formally legal, that would
>>>> require very perverse interpretations of the text in RM D.14.
>>> RM D.14 defines the CPU_Tick constant, whose physical equivalent (if
>>> we tried to enforce your interpretation) is not constant for many
>>> CPU/OS combinations.
>> The behaviour of some CPU/OS is irrelevant to the intent of the RM.
> 
> Nope, one of the killer arguments ARG people deploy to reject most
> reasonable AI's is: too difficult to implement on some obscure platform for
> which Ada never existed and never will. (:-))

Apparently such arguments, if any were made in this case, were not valid 
enough to prevent the addition of Ada.Execution_Time to the RM.

Is your point that Ada.Execution_Time was accepted only because the ARG 
decided that the word "time" in RM D.14 should not be understood to mean 
real time? I doubt that very much... Surely such an unusual meaning of 
"time" should have been explained in the RM.

>> As already said, an Ada implementation on such CPU/OS could
>> use its own mechanisms for execution-time measurements.
> 
> Could or must? Does GNAT this?

I don't much care, it is irrelevant for understanding what the RM means. 
Perhaps the next version of MS Windows will have better support for 
measuring real task execution times; would that change the intent of the 
RM? Of course not.

>> It seems evident to me that the text in D.14 must be interpreted using
>> the concepts in D.2.1, "The Task Dispatching Model", which clearly
>> specifies real-time points when a processor starts to execute a task and
>> stops executing a task. To me, and I believe to most readers of the RM,
>> the execution time of a task is the sum of these time slices, thus a
>> physical, real time.
> 
> If that was the intent, then I really do not understand why CPU_Time was
> introduced in addition to Ada.Real_Time.Time / Time_Span.

Because (as I understand it) different processors/OSes have different 
mechanisms for measuring execution times and real times, and the 
mechanism most convenient for CPU_Time may use a different numerical 
type (range, scale, and precision) than the mechanisms and types used 
for Ada.Real_Time.Time, Time_Span, and Duration. This is evident in the 
different minimum ranges and precisions defined in the RM for these 
types. Randy remarked on this earlier in this thread.

>> And you still have not defined what you mean by "simulation time", and
>> how you come there from the RM text.
> 
>> Simulation time is a model of the real time a physical process might
>> exhibit under certain conditions.

Thank you. But I still do not see how your definition could be applied 
in this context, so we are back at the start of the post... :-)

-- 
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
       .      @       .




* Re: Ada.Execution_Time
  2010-12-27 12:44                                 ` Ada.Execution_Time Niklas Holsti
@ 2010-12-27 15:28                                   ` Dmitry A. Kazakov
  2010-12-27 20:11                                     ` Ada.Execution_Time Niklas Holsti
  0 siblings, 1 reply; 124+ messages in thread
From: Dmitry A. Kazakov @ 2010-12-27 15:28 UTC (permalink / raw)


On Mon, 27 Dec 2010 14:44:53 +0200, Niklas Holsti wrote:

> Dmitry A. Kazakov wrote:
>> On Sat, 25 Dec 2010 13:31:27 +0200, Niklas Holsti wrote:
>> 
>>> Dmitry A. Kazakov wrote:
>>>> On Sat, 18 Dec 2010 23:20:20 +0200, Niklas Holsti wrote:
>>> ...
>>>>> The concept and measurement of "the execution time of a task" does
>>>>> become problematic in complex processors that have hardware 
>>>>> multi-threading and can run several tasks in more or less parallel
>>>>> fashion, without completely isolating the tasks from each other.
>>>> No, the concept is just fine,
>>> Fine for what? For schedulability analysis, fine for on-line scheduling,
>>> ... ?
>> 
>> Applicability of a concept is not what makes it wrong or right.
> 
> I think it does. This concept, "the execution time of a task", stands 
> for a number. A number is useless if it has no application in a 
> calculation.

Technically, CPU_Time is not a number in any sense. It is not a numeric Ada
type and it is not a model of a mathematical number (not even additive).

Anyway, the execution time can be used in calculations independently of
whether and how you could apply the results of such calculations.

You can add the height of your house to the distance to the Moon; the
interpretation is up to you.

>>> However, this is a side issue, since we are (or at least I am)
>>> discussing what the RM intends with Ada.Execution_Time, which must be
>>> read in the context of D.2.1, which assumes that there is a clearly
>>> defined set of "processors" and each processor executes exactly one
>>> task at a time.
>> 
>> Why?
> 
> Because RM D.14 uses terms defined in RM D.2.1, for example "executing".

These terms stay valid for non-real-time systems.

>> Scheduling does not need Ada.Execution_Time,
> 
> The standard schedulers defined in the RM do no not need 
> Ada.Execution_Time but, as remarked earlier in this thread, one of the 
> purposes of Ada.Execution_Time is to support the implementation of 
> non-standard scheduling algorithms that may make on-line scheduling 
> decisions that depend on the actual execution times of tasks. For 
> example, scheduling based on "slack time".

Even if somebody liked to undergo such an adventure, he could also use
Elementary_Functions in the scheduler. That would not make the nature of
Elementary_Functions any different.

>> it is the Ada.Execution_Time implementation that needs some
>> input from the scheduler.
> 
> We must be precise about our terms, here. Using terms as defined in 
> http://en.wikipedia.org/wiki/Task_scheduling, Ada.Execution_Time needs 
> input from the task *dispatcher* -- the part of the kernel that suspends 
> and resumes tasks.

Let's call it the dispatcher. Time sharing needs some measure of
consumed CPU time. My points stand:

1. Time sharing has little to do with real-time systems.

2. It would be extremely ill-advised to use Ada.Execution_Time instead of
direct measures for an implementation of time sharing algorithms.

>> How do you explain that CPU_Time, a thing about time sharing, appears in
>> the real-time systems annex D?
> 
> You don't think that execution time is important for real-time systems?

Real-time systems work with real time; real-time intervals (durations)
are of much less interest. Execution time is of no interest, because a
real-time system does not care about balancing the CPU load.

> In my view, CPU_Time is a measure of "real time", so its place in annex 
> D is natural. In your view, CPU_Time is not "real time", which should 
> make *you* surprised that it appears in annex D.

It does not surprise me, because there is no "time-sharing systems"
annex, or better one named "it is not what you think" or "if you think
you need this, you are wrong." There are some other Ada features we
could move there. (:-))

>>> You have
>>> not given any arguments, based on the RM text, to support your position.
>> 
>> I am not a language lawyer to interpret the RM texts. My argument was to
>> common sense.
> 
> To me it seems that your argument is based on the difficulty (in your 
> opinion) of implementing Ada.Execution_Time in some OSes such as MS 
> Windows, if the RM word "time" is taken to mean real time.
> 
> It is common sense that some OSes are not designed (or not well 
> designed) for real-time systems. Even a good real-time OS may not 
> support all real-time methodologies, for example scheduling algorithms 
> that depend on actual execution times.

I disagree with almost everything here. To start with, comparing real-time
clock services of Windows and of VxWorks, we would notice that Windows is
far superior in both accuracy and precision. Yet Windows is a half-baked
time-sharing OS, while VxWorks is one of the leading real-time OSes. Why is
it so? Because real-time applications do not need clock much. They are
real-time because their sources of time are *real*. These are hardware
interrupts, while timer interrupts are of a much lesser interest. I don't
care how much processor time my control loop takes, so long as it
manages to write the outputs when the actuators expect them. Measuring
the CPU time would bring me nothing. It is useless before the run,
because it is not a proof that the deadlines will be met. It is useless
at run-time because there are easier and safer ways to detect faults.

>>>>>> I am not a language lawyer, but I bet that an implementation of 
>>>>>> Ada.Execution_Time.Split that ignores any CPU frequency changes
>>>>>> when summing up processor ticks consumed by the task would be
>>>>>> legal.
>>>>> Whether or not such an implementation is formally legal, that would
>>>>> require very perverse interpretations of the text in RM D.14.
>>>> RM D.14 defines the CPU_Tick constant, whose physical equivalent (if
>>>> we tried to enforce your interpretation) is not constant for many
>>>> CPU/OS combinations.
>>> The behaviour of some CPU/OS is irrelevant to the intent of the RM.
>> 
>> Nope, one of the killer arguments ARG people deploy to reject most
>> reasonable AI's is: too difficult to implement on some obscure platform for
>> which Ada never existed and never will. (:-))
> 
> Apparently such arguments, if any were made in this case, were not valid 
> enough to prevent the addition of Ada.Execution_Time to the RM.

That is because the ARG didn't intend to reject it! Somebody wanted it
no matter what (like interfaces, limited results, asserts, and now, I am
afraid, if-operators). The rest was minimizing the damage...

> Is your point that Ada.Execution_Time was accepted only because the ARG 
> decided that the word "time" in RM D.14 should not be understood to mean 
> real time? I doubt that very much... Surely such an unusual meaning of 
> "time" should have been explained in the RM.

It is explained by its name: "execution time." Execution means not real,
unreal time (:-)).

>>> As already said, an Ada implementation on such CPU/OS could
>>> use its own mechanisms for execution-time measurements.
>> 
>> Could or must? Does GNAT do this?
> 
> I don't much care, it is irrelevant for understanding what the RM means. 
> Perhaps the next version of MS Windows will have better support for 
> measuring real task execution times; would that change the intent of the 
> RM? Of course not.

You suggested that Ada implementations would/could attempt to be
consistent with your interpretation of CPU_Time. But it seems that at
least one of the leading Ada vendors does not care. Is it laziness on
their side, or maybe you just expected too much?

>>> It seems evident to me that the text in D.14 must be interpreted using
>>> the concepts in D.2.1, "The Task Dispatching Model", which clearly
>>> specifies real-time points when a processor starts to execute a task and
>>> stops executing a task. To me, and I believe to most readers of the RM,
>>> the execution time of a task is the sum of these time slices, thus a
>>> physical, real time.
>> 
>> If that was the intent, then I really do not understand why CPU_Time was
>> introduced in addition to Ada.Real_Time.Time / Time_Span.
> 
> Because (as I understand it) different processors/OSes have different 
> mechanisms for measuring execution times and real times, and the 
> mechanism most convenient for CPU_Time may use a different numerical 
> type (range, scale, and precision) than the mechanisms and types used 
> for Ada.Real_Time.Time, Time_Span, and Duration.

I see no single reason why this could happen. Obviously, if talking about a
real-time system as you insist, the only possible choice for CPU_Time is
Time_Span, because to be consistent with the interpretation you propose it
must be derived from Ada.Real_Time clock.

My point is that the RM intentionally leaves it up to the implementation
to choose a CPU_Time source independent of Ada.Real_Time.Clock. This is
why different ranges and precisions come into consideration.

>>> And you still have not defined what you mean by "simulation time", and
>>> how you come there from the RM text.
>> 
>> Simulation time is a model of the real time a physical process might
>> exhibit under certain conditions.
> 
> Thank you. But I still do not see how your definition could be applied 
> in this context, so we are back at the start of the post... :-)

Under some conditions (e.g. no task switching) an execution time interval
could be numerically equal to a real time interval.

But in my view the execution time is not even a simulation time of some
ideal (real) clock. It is a simulation time of some lax recurrent
process, e.g. scheduling activity, whose frequency is not even
considered constant. It can be any garbage, and it likely is.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de




* Re: Ada.Execution_Time
  2010-12-26 10:25                               ` Ada.Execution_Time Dmitry A. Kazakov
  2010-12-27 12:44                                 ` Ada.Execution_Time Niklas Holsti
@ 2010-12-27 17:24                                 ` Robert A Duff
  2010-12-27 22:02                                   ` Ada.Execution_Time Randy Brukardt
  1 sibling, 1 reply; 124+ messages in thread
From: Robert A Duff @ 2010-12-27 17:24 UTC (permalink / raw)


"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:

> Nope, one of the killer arguments ARG people deploy to reject most
> reasonable AI's is: too difficult to implement on some obscure platform for
> which Ada never existed and never will. (:-))

The ARG and others have been guilty of that sort of argument in the
past, although I think "most reasonable AI's" is an exaggeration.
I think that line of reasoning is wrong -- I think it's just fine to have
things like Ada.Directories, even though many embedded systems don't
have directories.  It means that there's some standardization across
systems that DO have directories.  Those that don't can either
provide some minimal/useless implementation, or else appeal
to RM-1.1.3(6).

I think today's ARG is less inclined to follow that wrong line
of reasoning.

(I don't much like the design of Ada.Directories, and I think
you (Dmitry) agree with me about that, but it's beside the
point.)

- Bob




* Re: Ada.Execution_Time
  2010-12-27 15:28                                   ` Ada.Execution_Time Dmitry A. Kazakov
@ 2010-12-27 20:11                                     ` Niklas Holsti
  2010-12-27 21:34                                       ` Ada.Execution_Time Simon Wright
  2010-12-27 21:53                                       ` Ada.Execution_Time Dmitry A. Kazakov
  0 siblings, 2 replies; 124+ messages in thread
From: Niklas Holsti @ 2010-12-27 20:11 UTC (permalink / raw)


Dmitry A. Kazakov wrote:
> On Mon, 27 Dec 2010 14:44:53 +0200, Niklas Holsti wrote:
> 
>> Dmitry A. Kazakov wrote:
>>> On Sat, 25 Dec 2010 13:31:27 +0200, Niklas Holsti wrote:
>>>
>>>> Dmitry A. Kazakov wrote:
>>>>> On Sat, 18 Dec 2010 23:20:20 +0200, Niklas Holsti wrote:
>>>> ...
>>>>>> The concept and measurement of "the execution time of a task" does
>>>>>> become problematic in complex processors that have hardware 
>>>>>> multi-threading and can run several tasks in more or less parallel
>>>>>> fashion, without completely isolating the tasks from each other.
>>>>> No, the concept is just fine,
>>>> Fine for what? For schedulability analysis, fine for on-line scheduling,
>>>> ... ?
>>> Applicability of a concept is not what makes it wrong or right.
>> I think it does. This concept, "the execution time of a task", stands 
>> for a number. A number is useless if it has no application in a 
>> calculation.
> 
> Technically, CPU_Time is not a number in any sense. It is not a numeric Ada
> type and it is not a model of a mathematical number (not even additive).

RM D.14(12/2): "The type CPU_Time represents the execution time of a 
task. The set of values of this type corresponds one-to-one with an 
implementation-defined range of mathematical integers". Thus, a number.

However, the sub-thread above was not about CPU_Time in 
Ada.Execution_Time, but about the general concept "the execution time of 
a task". (Serves me right for introducing side issues, although my 
intentions were good, I think.)

> Anyway, the execution time can be used in calculations independently of
> whether and how you could apply the results of such calculations.
> 
> You can add the height of your house to the distance to the Moon; the
> interpretation is up to you.

You are being absurd.

Dmitry, your arguments are becoming so weird that I am starting to think 
that you are just trolling or goading me.

>>> it is the Ada.Execution_Time implementation, which needs some
>>> input from the scheduler.
>> We must be precise about our terms, here. Using terms as defined in 
>> http://en.wikipedia.org/wiki/Task_scheduling, Ada.Execution_Time needs 
>> input from the task *dispatcher* -- the part of the kernel that suspends 
>> and resumes tasks.
> 
> Let's call it the dispatcher. Time sharing needs some measure of
> consumed CPU time. My points stand:
> 
> 1. Time sharing has little to do with real-time systems.

What do you mean by "time sharing"? The classical mainframe system used 
interactively by many terminals? What on earth does that have to do with 
our discussion? Such a system of course must have concurrent tasks or 
processes in some form, but so what?

If by "time sharing algorithms" (below in your point 2) you mean what is 
usually called "task scheduling algorithms", where several tasks 
time-share the same processor by a task-switching (dispatching) 
mechanism, your point 1 is bizarre. Priority-scheduled task switching is 
the canonical architecture for real-time systems.

> 2. It would be extremely ill-advised to use Ada.Execution_Time instead of
> direct measures for an implementation of time sharing algorithms.

If by "direct measures" you mean the use of some external measuring 
device such as an oscilloscope or logic analyzer, such measures are 
available only externally, to the developers, not within the Ada program 
itself. The whole point of Ada.Execution_Time is that it is available to 
the Ada program itself, enabling run-time decisions based on the actual 
execution times of tasks.

> Real-time systems work with real time; real-time intervals (durations)
> are of much less interest. Execution time is of no interest, because a
> real-time system does not care about balancing the CPU load.

I am made speechless (or should I say "typing-less"). If that is your 
view, there is no point in continuing this discussion because we do not 
agree on what a real-time program is.

>>>> You have
>>>> not given any arguments, based on the RM text, to support your position.
>>> I am not a language lawyer to interpret the RM texts. My argument was to
>>> common sense.
>> To me it seems that your argument is based on the difficulty (in your 
>> opinion) of implementing Ada.Execution_Time in some OSes such as MS 
>> Windows, if the RM word "time" is taken to mean real time.
>>
>> It is common sense that some OSes are not designed (or not well 
>> designed) for real-time systems. Even a good real-time OS may not 
>> support all real-time methodologies, for example scheduling algorithms 
>> that depend on actual execution times.
> 
> I disagree with almost everything here. To start with, comparing real-time
> clock services of Windows and of VxWorks, we would notice that Windows is
> far superior in both accuracy and precision.

So what? Real-time systems need determinism. The clock only has to be 
accurate enough.

If your tasks suffer arbitrary millisecond-scale suspensions or 
dispatching delays (as is rumored for Windows) a microsecond-level clock 
accuracy is no help.

> Yet Windows is a half-baked
> time-sharing OS, while VxWorks is one of the leading real-time OSes. Why is
> it so? Because real-time applications do not need clock much. They are
> real-time because their sources of time are *real*. These are hardware
> interrupts, while timer interrupts are of a much lesser interest.

Both are important. Many control systems are driven by timers that 
trigger periodic tasks. In my experience (admittedly limited), it is 
rare for sensors to generate periodic input streams on their own; they 
must usually be sampled by periodic reads. You are right, however, that 
some systems, such as automobile engine control units, have external 
triggers, such as shaft-rotation interrupts.

Anyway, your point has to do with the "time" that *activates* tasks, not 
with the measurement of task-specific execution times. So this is 
irrelevant.

> I don't
> care how much processor time my control loop takes, so long as it manages to
> write the outputs when the actuators expect them.

You should care, if the processor must also have time for some other 
tasks of lower priority, which are preempted by the control-loop task.

> Measuring the CPU time
> would bring me nothing. It is useless before the run, because it is not a
> proof that the deadlines will be met.

In some cases (simple code or extensive tests, deterministic processor) 
CPU-time measurements can be used to prove that deadlines are met. But 
of course static analysis of the worst-case execution time is better.

> It is useless at run-time because there
> are easier and safer ways to detect faults.

If you mean "deadline missed" or "task overrun" faults, you are right 
that there are other detection methods. Still, Ada.Execution_Time may 
help to *anticipate*, and thus mitigate, such faults.

For example, assume that the computation in a control algorithm consists 
of two consecutive stages where the first stage processes the inputs 
into a state model and the second stage computes the control outputs 
from the state model. Using Ada.Execution_Time or 
Ada.Execution_Time.Timers the program could detect an unexpectedly high 
CPU usage in the first stage, and fall back to a simpler, faster 
algorithm in the second stage, to ensure that some control outputs are 
computed before the deadline.
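
In Ada that fallback could look roughly like this (the stage
procedures and the budget are placeholders of mine, not from any real
system):

```ada
--  Sketch of a two-stage control cycle with an execution-time guard.
with Ada.Execution_Time; use Ada.Execution_Time;
with Ada.Real_Time;      use Ada.Real_Time;

procedure Control_Cycle is
   Stage_1_Budget : constant Time_Span := Milliseconds (2);  --  assumed

   procedure Update_State_Model is null;  --  stage 1 placeholder
   procedure Full_Output_Law    is null;  --  normal stage 2 placeholder
   procedure Simple_Output_Law  is null;  --  cheaper fallback placeholder

   Before : constant CPU_Time := Clock;
begin
   Update_State_Model;  --  stage 1: process inputs into the state model
   if Clock - Before > Stage_1_Budget then
      Simple_Output_Law;  --  stage 1 used too much CPU: fast algorithm
   else
      Full_Output_Law;    --  budget left: full algorithm
   end if;
end Control_Cycle;
```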

But you are again ignoring other run-time uses of execution-time 
measurements, such as advanced scheduling algorithms.

>> Is your point that Ada.Execution_Time was accepted only because the ARG 
>> decided that the word "time" in RM D.14 should not be understood to mean 
>> real time? I doubt that very much... Surely such an unusual meaning of 
>> "time" should have been explained in the RM.
> 
> It is explained by its name: "execution time." Execution means not real,
> unreal time (:-)).

Nonsense. I spend some part of my time asleep, some time awake. Both 
"sleeping time" and "awake time" are (pieces of) real time. A task 
spends some of its time being executed, some of its time not being 
executed (waiting or ready).

>>> If that was the intent, then I really do not understand why CPU_Time was
>>> introduced in addition to Ada.Real_Time.Time / Time_Span.
>> Because (as I understand it) different processors/OSes have different 
>> mechanisms for measuring execution times and real times, and the 
>> mechanism most convenient for CPU_Time may use a different numerical 
>> type (range, scale, and precision) than the mechanisms and types used 
>> for Ada.Real_Time.Time, Time_Span, and Duration.
> 
> I see no single reason why this could happen. Obviously, if talking about a
> real-time system as you insist, the only possible choice for CPU_Time is
> Time_Span, because to be consistent with the interpretation you propose it
> must be derived from Ada.Real_Time clock.

I have only said that the sum of the CPU_Times of all tasks executing on 
the same processor should be close to the real elapsed time, since the 
CPU's time is shared between the tasks. This does not mean that 
Ada.Execution_Time.CPU_Time and Ada.Real_Time.Time must have a common 
time source, only that both time sources must approximate physical, real 
time.
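
The comparison I have in mind could be coded roughly like this (a 
sketch: the Task_List type and the set of tasks are assumed; the 
operations used are those of RM D.14 and D.8):

```ada
with Ada.Execution_Time;       use Ada.Execution_Time;
with Ada.Real_Time;            use Ada.Real_Time;
with Ada.Task_Identification;  use Ada.Task_Identification;

--  Hypothetical container for the tasks running on one processor:
type Task_List is array (Positive range <>) of Task_Id;

--  Convert a CPU_Time to Duration via Split, since CPU_Time itself
--  is private and cannot be displayed or summed directly.
function CPU_To_Duration (T : CPU_Time) return Duration is
   SC : Seconds_Count;
   TS : Time_Span;
begin
   Split (T, SC, TS);
   return Duration (SC) + Ada.Real_Time.To_Duration (TS);
end CPU_To_Duration;

--  Sum of the tasks' CPU times; this should be close to the elapsed
--  real time if the processor was busy only with these tasks.
function Summed_CPU_Time (Tasks : Task_List) return Duration is
   Sum : Duration := 0.0;
begin
   for I in Tasks'Range loop
      Sum := Sum + CPU_To_Duration (Clock (Tasks (I)));
   end loop;
   return Sum;
end Summed_CPU_Time;
```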

> My point is that the RM intentionally leaves it up to the implementation to
> choose a CPU_Time source independent of Ada.Real_Time.Clock. This is why
> different ranges and precisions come into consideration.

I agree. But in both cases the intent is to approximate physical, real 
time, not some "simulation time" where one "simulation second" could be 
one year of real time.

> Under some conditions (e.g. no task switching) an execution time interval
> could be numerically equal to a real time interval.

Yes! Therefore, under these conditions, CPU_Time (when converted to a 
Time_Span or Duration) does have a physical meaning. So we agree. At last.

And under the task dispatching model in RM D.2.1, these conditions can 
be extended to task switching scenarios with the result that the sum of 
the CPU_Times of the tasks (for one processor) will be numerically close 
to the elapsed real time interval.

> But in my view the execution time is not even a simulation time of some
> ideal (real) clock. It is a simulation time of some lax recurrent process,
> e.g. scheduling activity, of which frequency is not even considered
> constant.

You may well have this view, but I don't see that your view has anything 
to do with Ada.Execution_Time as defined in the Ada RM.

-- 
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
       .      @       .



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-27 20:11                                     ` Ada.Execution_Time Niklas Holsti
@ 2010-12-27 21:34                                       ` Simon Wright
  2010-12-28 10:01                                         ` Ada.Execution_Time Niklas Holsti
  2010-12-27 21:53                                       ` Ada.Execution_Time Dmitry A. Kazakov
  1 sibling, 1 reply; 124+ messages in thread
From: Simon Wright @ 2010-12-27 21:34 UTC (permalink / raw)


Niklas Holsti <niklas.holsti@tidorum.invalid> writes:

> Nonsense. I spend some part of my time asleep, some time awake. Both
> "sleeping time" and "awake time" are (pieces of) real time. A task
> spends some of its time being executed, some of its time not being
> executed (waiting or ready).

And, just to be clear, CPU_Time corresponds to the "awake time"?

I thought I understood pretty much what was intended in the execution
time annex, even if it didn't seem to have much relevance to my work,
but this discussion has managed to confuse me thoroughly.

A minor aside -- as a user, I find the use of Time_Span here and in
Ada.Real_Time very annoying. It's perfectly clear that what's meant is
Duration.




* Re: Ada.Execution_Time
  2010-12-27 20:11                                     ` Ada.Execution_Time Niklas Holsti
  2010-12-27 21:34                                       ` Ada.Execution_Time Simon Wright
@ 2010-12-27 21:53                                       ` Dmitry A. Kazakov
  2010-12-28 14:14                                         ` Ada.Execution_Time Simon Wright
  2010-12-28 14:46                                         ` Ada.Execution_Time Niklas Holsti
  1 sibling, 2 replies; 124+ messages in thread
From: Dmitry A. Kazakov @ 2010-12-27 21:53 UTC (permalink / raw)


On Mon, 27 Dec 2010 22:11:18 +0200, Niklas Holsti wrote:

> Dmitry A. Kazakov wrote:
>> Technically, CPU_Time is not a number in any sense. It is not a numeric Ada
>> type and it is not a model of a mathematical number (not even additive).
> 
> RM D.14(12/2): "The type CPU_Time represents the execution time of a 
> task. The set of values of this type corresponds one-to-one with an 
> implementation-defined range of mathematical integers". Thus, a number.

This is true for any value of any type, due to finiteness. I think
D.14(12/2) refers to the ability to implement Split. But that is irrelevant:
CPU_Time is not declared numeric, and it does not have "+".
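
Indeed, to "add" two CPU_Time values one has to go through Split and 
Time_Of; a sketch, using only the RM D.14 operations:

```ada
with Ada.Execution_Time;  use Ada.Execution_Time;
with Ada.Real_Time;       use Ada.Real_Time;

--  There is no "+" (CPU_Time, CPU_Time) in RM D.14, so the sum must
--  be assembled from the Split components of each operand.
function Add (A, B : CPU_Time) return CPU_Time is
   SA, SB : Seconds_Count;
   TA, TB : Time_Span;
begin
   Split (A, SA, TA);
   Split (B, SB, TB);
   return Time_Of (SA + SB, TA + TB);
end Add;
```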

>>>> it is the Ada.Execution_Time implementation, which needs some
>>>> input from the scheduler.
>>> We must be precise about our terms, here. Using terms as defined in 
>>> http://en.wikipedia.org/wiki/Task_scheduling, Ada.Execution_Time needs 
>>> input from the task *dispatcher* -- the part of the kernel that suspends 
>>> and resumes tasks.
>> 
>> Let's call it dispatcher. Time sharing needs some measure of consumed CPU
>> time. My points stand:
>> 
>> 1. Time sharing has little to do with real-time systems.
> 
> What do you mean by "time sharing"? The classical mainframe system used 
> interactively by many terminals? What on earth does that have to do with 
> our discussion? Such a system of course must have concurrent tasks or 
> processes in some form, but so what?

Because this is the only case where execution time might be at all relevant
to the algorithm of task switching.

> If by "time sharing algorithms" (below in your point 2) you mean what is 
> usually called "task scheduling algorithms", where several tasks 
> time-share the same processor by a task-switching (dispatching) 
> mechanism, your point 1 is bizarre. Priority-scheduled task switching is 
> the canonical architecture for real-time systems.

, which architecture makes execution time irrelevant to switching
decisions.

>> 2. It would be extremely ill-advised to use Ada.Execution_Time instead of
>> direct measures for an implementation of time sharing algorithms.
> 
> If by "direct measures" you mean the use of some external measuring 
> device such as an oscilloscope or logic analyzer, such measures are 
> available only externally, to the developers, not within the Ada program 
> itself.

You forgot one external device, called the real-time clock.

> The whole point of Ada.Execution_Time is that it is available to 
> the Ada program itself, enabling run-time decisions based on the actual 
> execution times of tasks.

, which would be ill-advised to do.

>>>>> You have
>>>>> not given any arguments, based on the RM text, to support your position.
>>>> I am not a language lawyer to interpret the RM texts. My argument was to
>>>> common sense.
>>> To me it seems that your argument is based on the difficulty (in your 
>>> opinion) of implementing Ada.Execution_Time in some OSes such as MS 
>>> Windows, if the RM word "time" is taken to mean real time.
>>>
>>> It is common sense that some OSes are not designed (or not well 
>>> designed) for real-time systems. Even a good real-time OS may not 
>>> support all real-time methodologies, for example scheduling algorithms 
>>> that depend on actual execution times.
>> 
>> I disagree with almost everything here. To start with, comparing real-time
>> clock services of Windows and of VxWorks, we would notice that Windows is
>> far superior in both accuracy and precision.
> 
> So what? Real-time systems need determinism. The clock only has to be 
> accurate enough.

Sorry, are you saying that lesser accuracy of the real-time clock is a way
to achieve determinism?
 
> If your tasks suffer arbitrary millisecond-scale suspensions or 
> dispatching delays (as is rumored for Windows) a microsecond-level clock 
> accuracy is no help.

And conversely, the catastrophic accuracy of the VxWorks real-time clock
service does not hinder its usability for real-time applications. Which is
my point. You don't need a good real-time clock in many real-time
applications, and you never need execution time there.

> Anyway, your point has to do with the "time" that *activates* tasks, not 
> with the measurement of task-specific execution times.

Exactly

> So this is irrelevant.

No, it is the execution time which is irrelevant for real-time systems,
because of the way tasks are activated there.

>> I don't
>> care how much processor time my control loop takes so long as it manages to
>> write the outputs when the actuators expect them.
> 
> You should care, if the processor must also have time for some other 
> tasks of lower priority, which are preempted by the control-loop task.

Why? It is straightforward: the task of higher priority level owns the
processor.

>> Measuring the CPU time
>> would bring me nothing. It is useless before the run, because it is not a
>> proof that the deadlines will be met.
> 
> In some cases (simple code or extensive tests, deterministic processor) 
> CPU-time measurements can be used to prove that deadlines are met.

It is difficult to imagine. That is done either statically or else by
running tests. Execution-time measurement has the drawbacks of both
approaches and the advantages of neither.

> For example, assume that the computation in a control algorithm consists 
> of two consecutive stages where the first stage processes the inputs 
> into a state model and the second stage computes the control outputs 
> from the state model. Using Ada.Execution_Time or 
> Ada.Execution_Time.Timers the program could detect an unexpectedly high 
> CPU usage in the first stage, and fall back to a simpler, faster 
> algorithm in the second stage, to ensure that some control outputs are 
> computed before the deadline.

No, in our systems we use a different scheme. The "fall back" values are
always evaluated first. They must become ready at the end of each cycle. A
finer estimate is evaluated in the background and used when ready. Actually,
under your scheme the finer estimate would always "fail", because it is
guaranteed to be too complex for one cycle; it takes at least 10-100 cycles
to compute. So your scheme would not work. In general, real-time systems are
usually designed for the worst-case scenario, because when something
unanticipated does happen you may have no time to do anything else.
 
>>> Is your point that Ada.Execution_Time was accepted only because the ARG 
>>> decided that the word "time" in RM D.14 should not be understood to mean 
>>> real time? I doubt that very much... Surely such an unusual meaning of 
>>> "time" should have been explained in the RM.
>> 
>> It is explained by its name: "execution time." Execution means not real,
>> unreal time (:-)).
> 
> Nonsense. I spend some part of my time asleep, some time awake. Both 
> "sleeping time" and "awake time" are (pieces of) real time. A task 
> spends some of its time being executed, some of its time not being 
> executed (waiting or ready).

A very good example. Now consider your perception of time. Does it
correspond to real time? No, it does not. The time spent asleep can feel
anywhere from very short to very long. This felt time is an analogue of the
task execution time. You had better not use this subjective time to decide
when to have your next meal; that could end in obesity.

>>>> If that was the intent, then I really do not understand why CPU_Time was
>>>> introduced in addition to Ada.Real_Time.Time / Time_Span.
>>> Because (as I understand it) different processors/OSes have different 
>>> mechanisms for measuring execution times and real times, and the 
>>> mechanism most convenient for CPU_Time may use a different numerical 
>>> type (range, scale, and precision) than the mechanisms and types used 
>>> for Ada.Real_Time.Time, Time_Span, and Duration.
>> 
>> I see no single reason why this could happen. Obviously, if talking about a
>> real-time system as you insist, the only possible choice for CPU_Time is
>> Time_Span, because to be consistent with the interpretation you propose it
>> must be derived from Ada.Real_Time clock.
> 
> I have only said that the sum of the CPU_Times of all tasks executing on 
> the same processor should be close to the real elapsed time, since the 
> CPU's time is shared between the tasks. This does not mean that 
> Ada.Execution_Time.CPU_Time and Ada.Real_Time.Time must have a common 
> time source, only that both time sources must approximate physical, real 
> time.

What is the reason to use different sources?

>> My point is that the RM intentionally leaves it up to the implementation to
>> choose a CPU_Time source independent of Ada.Real_Time.Clock. This is why
>> different ranges and precisions come into consideration.
> 
> I agree. But in both cases the intent is to approximate physical, real 
> time, not some "simulation time" where one "simulation second" could be 
> one year of real time.

Certainly the latter. Consider a system with n processors. The
execution-time second will be 1/n of the real-time second. With shared
memory it will be f*1/n, where f is some unknown factor. That works in both
directions: on a single-processor board with memory connected over some bus
system, it could be f<1, because some external devices might block the CPU
(and thus your task) while accessing the memory. Note that the error is
systematic. It is not an approximation of real time.

>> Under some conditions (e.g. no task switching) an execution time interval
>> could be numerically equal to a real time interval.
> 
> Yes! Therefore, under these conditions, CPU_Time (when converted to a 
> Time_Span or Duration) does have a physical meaning. So we agree. At last.

Since these conditions are never met, the model you have in mind is
inadequate (wrong).

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de




* Re: Ada.Execution_Time
  2010-12-27 17:24                                 ` Ada.Execution_Time Robert A Duff
@ 2010-12-27 22:02                                   ` Randy Brukardt
  2010-12-27 22:43                                     ` Ada.Execution_Time Robert A Duff
  0 siblings, 1 reply; 124+ messages in thread
From: Randy Brukardt @ 2010-12-27 22:02 UTC (permalink / raw)


"Robert A Duff" <bobduff@shell01.TheWorld.com> wrote in message 
news:wcc39pj86y4.fsf@shell01.TheWorld.com...
> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:
>
>> Nope, one of the killer arguments ARG people deploy to reject most
>> reasonable AI's is: too difficult to implement on some obscure platform 
>> for
>> which Ada never existed and never will. (:-))
>
> The ARG and others have been guilty of that sort of argument in the
> past, although I think "most reasonable AI's" is an exaggeration.
> I think that line of reasoning is wrong -- I think it's just fine to have
> things like Ada.Directories, even though many embedded systems don't
> have directories.  It means that there's some standardization across
> systems that DO have directories.  Those that don't can either
> provide some minimal/useless implementation, or else appeal
> to RM-1.1.3(6).

They're supposed to provide a useless implementation that raises Use_Error 
for most of the operations. There are a pair of Notes to that effect 
(A.16(129-130)). I don't think features should ever be designed so that 
implementations have to appeal to 1.1.3(6) - my preference would be that 
that paragraph not exist, with the language itself sufficiently flexible 
where it matters. (Otherwise, implementations could leave out anything that 
they want and appeal to 1.1.3(6). I think that both interfaces and 
coextensions are "impractical" to implement for the benefit gained, so does 
that mean I can ignore them and still have a complete Ada compiler??)

                                 Randy.






* Re: Ada.Execution_Time
  2010-12-25 11:31                             ` Ada.Execution_Time Niklas Holsti
  2010-12-26 10:25                               ` Ada.Execution_Time Dmitry A. Kazakov
@ 2010-12-27 22:11                               ` Randy Brukardt
  2010-12-29 12:48                                 ` Ada.Execution_Time Niklas Holsti
  1 sibling, 1 reply; 124+ messages in thread
From: Randy Brukardt @ 2010-12-27 22:11 UTC (permalink / raw)


"Niklas Holsti" <niklas.holsti@tidorum.invalid> wrote in message 
news:8nm30fF7r9U1@mid.individual.net...
> Dmitry A. Kazakov wrote:
>> On Sat, 18 Dec 2010 23:20:20 +0200, Niklas Holsti wrote:
...
>> On such a platform the implementation would be as perverse as RM D.14
>> is. But the perversion is only because of the interpretation.
>
> Bah. I think that when RM D.14 says "time", it really means time. You
> think it means something else, perhaps a CPU cycle count. I think the 
> burden of proof is on you.
>
> It seems evident to me that the text in D.14 must be interpreted using
> the concepts in D.2.1, "The Task Dispatching Model", which clearly
> specifies real-time points when a processor starts to execute a task and
> stops executing a task. To me, and I believe to most readers of the RM,
> the execution time of a task is the sum of these time slices, thus a
> physical, real time.

For the record, I agree more with Dmitry than Niklas here. At least the 
interpretation *I* had when this package was proposed was that it had only a 
slight relationship to real-time. My understanding was that it was intended 
to provide a window into whatever facilities the underlying system had for 
execution "time" counting. That had no defined relationship with what Ada 
calls "time". As such, I think the name "execution time" is misleading (and 
I recall some discussions about that in the ARG), but no one had a better 
name that made any sense at all.

In particular, there is no requirement in the RM or anywhere else that these 
"times" sum to any particular answer. I don't quite see how there could be, 
unless you were going to require a tailored Ada target system (which is 
definitely not going to be a requirement).

Perhaps the proposers (from the IRTAW meetings) had something else in mind, 
but if so, they communicated it very poorly.

                             Randy.






* Re: Ada.Execution_Time
  2010-12-27 22:02                                   ` Ada.Execution_Time Randy Brukardt
@ 2010-12-27 22:43                                     ` Robert A Duff
  0 siblings, 0 replies; 124+ messages in thread
From: Robert A Duff @ 2010-12-27 22:43 UTC (permalink / raw)


"Randy Brukardt" <randy@rrsoftware.com> writes:

> They're supposed to provide a useless implementation that raises Use_Error 
> for most of the operations. There are a pair of Notes to that effect 
> (A.16(129-130)).

OK, good enough.

>... I don't think features should ever be designed so that 
> implementations have to appeal to 1.1.3(6) - my preference would be that 
> that paragraph not exist with the language itself sufficiently flexible 
> where it matters.

But something like 1.1.3(6) has to exist in every language definition,
at least implicitly.  Computers are finite machines, so there will
always be things that are "impossible or impractical", and it is
impossible for any language designer to predict what that means
in all cases.

>...(Otherwise, implementations could leave out anything that 
> they want and appeal to 1.1.3(6).

Well, not really.  That para says "given the execution environment", not
"given the compiler-writer's whim, or laziness, or lack of interest".
One could claim that "if X = 0..." is impractical to implement, but of
course people would laugh at that claim.

>... I think that both interfaces and 
> coextensions are "impractical" to implement for the benefit gained, so does 
> that mean I can ignore them and still have a complete Ada compiler??)

I definitely disagree about interfaces.  I might agree about
coextensions, depending on my mood on any particular day of the week.
But yeah, you can implement what you like, and claim it's an Ada
compiler.  Whether people buy it is a question that lies outside any ISO
standard.

Standards are optional!  The Ada standard doesn't require anybody to do
anything.  Still, there's a community that can decide, informally, that
so-and-so compiler is an implementation of Ada 2005 (despite a few minor
bugs), and such-and-such compiler is not.

- Bob




* Re: Ada.Execution_Time
  2010-12-27 21:34                                       ` Ada.Execution_Time Simon Wright
@ 2010-12-28 10:01                                         ` Niklas Holsti
  2010-12-28 14:17                                           ` Ada.Execution_Time Simon Wright
  0 siblings, 1 reply; 124+ messages in thread
From: Niklas Holsti @ 2010-12-28 10:01 UTC (permalink / raw)


Simon Wright wrote:
> Niklas Holsti <niklas.holsti@tidorum.invalid> writes:
> 
>> Nonsense. I spend some part of my time asleep, some time awake. Both
>> "sleeping time" and "awake time" are (pieces of) real time. A task
>> spends some of its time being executed, some of its time not being
>> executed (waiting or ready).
> 
> And, just to be clear, CPU_Time corresponds to the "awake time"?

Perhaps that is the natural choice, but I did not mean the choice to be 
significant. Dmitry was saying that "execution time" is not "time", just 
because it uses the prefix or qualification "execution". By that 
reasoning, "awake time" would not be "time", "chicken soup" would not be 
"soup", etc.

> I thought I understood pretty much what was intended in the execution
> time annex, even if it didn't seem to have much relevance to my work,
> but this discussion has managed to confuse me thoroughly.

Randy's last post in this thread, in which he agrees with Dmitry, has 
the same effect on me. I hope that further discussion with Randy will 
converge to something.

Did your earlier understanding resemble Dmitry's, or mine? Or neither?

> A minor aside -- as a user, I find the use of Time_Span here and in
> Ada.Real_Time very annoying. It's perfectly clear that what's meant is
> Duration.

I think Time_Span and Duration are different representations of the same 
physical thing, a span of time that can be physically measured in 
seconds. The reasons for having two (possibly) different representations 
(two types) have been discussed before: different requirements on range 
and precision. Still, the differences are important only for processors 
that are very small and weak, in today's scale, so perhaps this 
distinction is no longer needed and the types could be merged.

-- 
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
       .      @       .




* Re: Ada.Execution_Time
  2010-12-27 21:53                                       ` Ada.Execution_Time Dmitry A. Kazakov
@ 2010-12-28 14:14                                         ` Simon Wright
  2010-12-28 15:08                                           ` Ada.Execution_Time Dmitry A. Kazakov
  2010-12-28 14:46                                         ` Ada.Execution_Time Niklas Holsti
  1 sibling, 1 reply; 124+ messages in thread
From: Simon Wright @ 2010-12-28 14:14 UTC (permalink / raw)


"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:

> And conversely, the catastrophic accuracy of the VxWorks real-time
> clock service does not hinder its usability for real-time application.

Catastrophic?

The Radstone PPC7A cards (to take one example) have two facilities: (a)
the PowerPC decrementer, run off a crystal with some not-too-good quoted
accuracy (50 ppm, I think), and (b) a "real time clock".

The RTC would be much better termed a time-of-day clock, since what it
provides is the date and time to 1 second precision. It also needs
battery backup, not easy to justify on naval systems (partly because it
adversely affects the shelf life of the boards, partly because navies
don't like noxious chemicals in their equipment).

We never used the RTC.

The decrementer is the facility used by VxWorks, and hence by GNAT under
VxWorks, to support time; both Ada.Calendar and Ada.Real_Time (we are
still at Ada 95 so I have no idea about Ada.Execution_Time). We run with
clock interrupts at 1 ms and (so far as we can tell from using bus
analysers) the interrupts behave perfectly reliably.

For a higher-resolution view of time we've extended Ada.Calendar, using
the PowerPC's mftb (Move From Time Base) instruction to measure sub-tick
intervals (down to 40 ns).





* Re: Ada.Execution_Time
  2010-12-28 10:01                                         ` Ada.Execution_Time Niklas Holsti
@ 2010-12-28 14:17                                           ` Simon Wright
  0 siblings, 0 replies; 124+ messages in thread
From: Simon Wright @ 2010-12-28 14:17 UTC (permalink / raw)


Niklas Holsti <niklas.holsti@tidorum.invalid> writes:

> Randy's last post in this thread, in which he agrees with Dmitry, has
> the same effect on me. I hope that further discussion with Randy will
> converge to something.
>
> Did your earlier understanding resemble Dmitry's, or mine? Or neither?

Yours, I think.

It seems strange to have CPU_Time bearing only a tenuous relationship
to what we might normally call "time", and then to have the difference
between two CPU_Times turn out to be a Time_Span!




* Re: Ada.Execution_Time
  2010-12-27 21:53                                       ` Ada.Execution_Time Dmitry A. Kazakov
  2010-12-28 14:14                                         ` Ada.Execution_Time Simon Wright
@ 2010-12-28 14:46                                         ` Niklas Holsti
  2010-12-28 15:42                                           ` Ada.Execution_Time Dmitry A. Kazakov
  1 sibling, 1 reply; 124+ messages in thread
From: Niklas Holsti @ 2010-12-28 14:46 UTC (permalink / raw)


Dmitry, from now on I am going to respond only to those of your comments 
that to me seem to make some kind of sense and have not been discussed 
before. From my omissions you may deduce my opinion of the remainder.

Dmitry A. Kazakov wrote:
>>> 2. It would be extremely ill-advised to use Ada.Execution_Time instead of
>>> direct measures for an implementation of time sharing algorithms.
Niklas Holsti replied:
>> If by "direct measures" you mean the use of some external measuring 
>> device such as an oscilloscope or logic analyzer, such measures are 
>> available only externally, to the developers, not within the Ada program 
>> itself.
Dmitry:
> You forgot one external device called real time clock.

And you seem to forget that Ada.Execution_Time may be implemented by 
reading a real-time clock. As has been said before.

Dmitry:
>>> I don't
>>> care how much processor time my control loop takes so long it manages to
>>> write the outputs when the actuators expect them.
Niklas:
>> You should care, if the processor must also have time for some other 
>> tasks of lower priority, which are preempted by the control-loop task.
Dmitry:
> Why? It is straightforward: the task of higher priority level owns the
> processor.

Do you mean that your system has only one task with real-time deadlines, 
and no CPU time has to be left for lower-priority tasks, and no CPU time 
is taken by higher-priority tasks? Then scheduling is trivial for your 
system and your system is a poor example for a discussion about scheduling.

Niklas:
>> For example, assume that the computation in a control algorithm consists 
>> of two consecutive stages where the first stage processes the inputs 
>> into a state model and the second stage computes the control outputs 
>> from the state model. Using Ada.Execution_Time or 
>> Ada.Execution_Time.Timers the program could detect an unexpectedly high 
>> CPU usage in the first stage, and fall back to a simpler, faster 
>> algorithm in the second stage, to ensure that some control outputs are 
>> computed before the deadline.
Dmitry:
> No, in our systems we use a different schema.

So what? I said nothing (and know nothing) about your system, any 
resemblance is coincidental. And there can be several valid schemas.

> The "fall back" values are
> always evaluated first. They must become ready at the end of each cycle. A
> finer estimation is evaluated in background and used when ready.

So your problem is different (the coarse values are the rule). Different 
problem, different solution.

> In general, real-time systems are
> usually designed for the worst case scenario,

Yes, but this is often criticized as inefficient, and there are 
scheduling methods that make good use of the difference (slack) between 
actual and worst-case execution times. These methods need something like 
Ada.Execution_Time. As has been said before.
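
As a sketch of such slack use (the WCET budget, the Cycle_Start sample 
taken at the start of the cycle, and Optional_Work are all invented for 
illustration; the clock operations are from RM D.14):

```ada
with Ada.Execution_Time;  use Ada.Execution_Time;
with Ada.Real_Time;       use Ada.Real_Time;

procedure Use_Slack (Cycle_Start : CPU_Time) is
   --  Assumed worst-case execution-time budget for one cycle.
   WCET  : constant Time_Span := Milliseconds (10);
   Used  : constant Time_Span := Clock - Cycle_Start;
   Slack : constant Time_Span := WCET - Used;
begin
   if Slack > Time_Span_Zero then
      Optional_Work;  --  hypothetical background computation
   end if;
end Use_Slack;
```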

> Consider a system with n-processors. The execution
> time second will be 1/n of the real time second.

No. If you have n workers digging a ditch, you must pay each of them the 
same amount of money each hour as if you had one worker. So the "digging 
hour" is still one hour, although the total amount of work that can be 
done in one hour is n digging-hours. You are confusing the total amount 
of work with the amount of work per worker.

With n processors the system can do n seconds worth of execution in one 
real-time second. But each processor still executes for one second. And 
as I understand the Ada task dispatching/scheduling model, one task 
cannot execute at the same time on more than one processor, so one task 
cannot accumulate more than one second of execution time in one second 
of real time.

(At the risk of introducing another side issue, I note that an 
automatically parallelizing compiler might make a task use several 
processors in parallel, at least for some of the time. But I don't think 
that RM D.2.1 considers this possibility.)

> With shared memory it will
> be f*1/n, where f is some unknown factor.

I agree that variable memory access times, and other dynamic timing that 
may depend on the number of processors and on how they share resources, 
is a complicating factor in the analysis of CPU loads and 
schedulability. Indeed this is why I fear that the concept of "the 
execution time of task" is becoming fearsomely context-dependent and 
therefore problematic -- something that you disagreed with.

For the definition of Ada.Execution_Time, memory access latency is 
relevant only if the Ada RTS suspends tasks that are waiting for memory 
access, so that they are "not executing" until the memory access is 
completed. Most RTOSes do not suspend tasks in that way, I believe.

-- 
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
       .      @       .




* Re: Ada.Execution_Time
  2010-12-28 14:14                                         ` Ada.Execution_Time Simon Wright
@ 2010-12-28 15:08                                           ` Dmitry A. Kazakov
  2010-12-28 16:18                                             ` Ada.Execution_Time Simon Wright
  2010-12-31  0:40                                             ` Ada.Execution_Time BrianG
  0 siblings, 2 replies; 124+ messages in thread
From: Dmitry A. Kazakov @ 2010-12-28 15:08 UTC (permalink / raw)


On Tue, 28 Dec 2010 14:14:57 +0000, Simon Wright wrote:

> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:
> 
>> And conversely, the catastrophic accuracy of the VxWorks real-time
>> clock service does not hinder its usability for real-time application.
> 
> Catastrophic?
>
> The Radstone PPC7A cards (to take one example) have two facilities: (a)
> the PowerPC decrementer, run off a crystal with some not-too-good quoted
> accuracy (50 ppm, I think), and (b) a "real time clock".
> 
> The RTC would be much better termed a time-of-day clock, since what it
> provides is the date and time to 1 second precision. It also needs
> battery backup, not easy to justify on naval systems (partly because it
> adversely affects the shelf life of the boards, partly because navies
> don't like noxious chemicals in their equipment).
> 
> We never used the RTC.
>
> The decrementer is the facility used by VxWorks, and hence by GNAT under
> VxWorks, to support time; both Ada.Calendar and Ada.Real_Time (we are
> still at Ada 95 so I have no idea about Ada.Execution_Time). We run with
> clock interrupts at 1 ms and (so far as we can tell from using bus
> analysers) the interrupts behave perfectly reliably.

Yes, this thing. In our case it was VxWorks 6.x on a Pentium. (The PPC 
we used prior to it had poor performance.) The problem was that 
Ada.Real_Time.Clock had the accuracy of the clock interrupts, i.e. 1 ms, 
which is by all accounts catastrophic for a 1.7 GHz processor. Tasks can 
be switched back and forth several times between two clock changes.
 
> For a higher-resolution view of time we've extended Ada.Calendar, using
> the PowerPC's mftb (Move From Time Base) instruction to measure sub-tick
> intervals (down to 40 ns).

So did we in our case. VxWorks has means to access the Pentium's high 
resolution counter. We took Ada.Real_Time and replaced the Clock 
function with one that converted the counter reading to a time using 
its frequency.
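As an illustration of that conversion (not the actual VxWorks or GNAT 
code; the 1 GHz tick rate is an assumed value for the sketch, on real 
hardware it would come from the BSP or be calibrated), turning a 
free-running counter into a time looks like this:

```c
#include <assert.h>
#include <stdint.h>

/* Assumed tick frequency of the free-running counter, for
   illustration only; a real system reads this from the BSP or
   calibrates it against a known time base. */
#define COUNTER_HZ 1000000000ULL   /* 1 GHz */

/* Convert a raw counter value into whole seconds plus nanoseconds.
   Splitting off the seconds first avoids overflow when the counter
   has been running for a long time. */
static void ticks_to_time(uint64_t ticks, uint64_t *sec, uint64_t *nsec)
{
    *sec  = ticks / COUNTER_HZ;
    *nsec = (ticks % COUNTER_HZ) * 1000000000ULL / COUNTER_HZ;
}
```

With the assumed 1 GHz counter, 1_500_000_000 ticks come out as 1 s 
plus 500_000_000 ns.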

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de




* Re: Ada.Execution_Time
  2010-12-28 14:46                                         ` Ada.Execution_Time Niklas Holsti
@ 2010-12-28 15:42                                           ` Dmitry A. Kazakov
  2010-12-28 16:27                                             ` Ada.Execution_Time (see below)
  0 siblings, 1 reply; 124+ messages in thread
From: Dmitry A. Kazakov @ 2010-12-28 15:42 UTC (permalink / raw)


On Tue, 28 Dec 2010 16:46:20 +0200, Niklas Holsti wrote:

> And you seem to forget that Ada.Execution_Time may be implemented by 
> reading a real-time clock. As has been said before.

Only if you have control over the OS, or else can hook into task 
switches. I think it is doable under VxWorks, but I doubt that AdaCore 
would do this.

> Dmitry:
>>>> I don't
>>>> care how much processor time my control loop takes so long it manages to
>>>> write the outputs when the actuators expect them.
> Niklas:
>>> You should care, if the processor must also have time for some other 
>>> tasks of lower priority, which are preempted by the control-loop task.
> Dmitry:
>> Why? It is straightforward: the task of higher priority level owns the
>> processor.
> 
> Do you mean that your system has only one task with real-time deadlines, 
> and no CPU time has to be left for lower-priority tasks, and no CPU time 
> is taken by higher-priority tasks? Then scheduling is trivial for your 
> system and your system is a poor example for a discussion about scheduling.

Right, a real-time system is usually a bunch of tasks activated 
according to their priority levels. This is why I doubt that 
Ada.Execution_Time is useful there.

> Niklas:
>>> For example, assume that the computation in a control algorithm consists 
>>> of two consecutive stages where the first stage processes the inputs 
>>> into a state model and the second stage computes the control outputs 
>>> from the state model. Using Ada.Execution_Time or 
>>> Ada.Execution_Time.Timers the program could detect an unexpectedly high 
>>> CPU usage in the first stage, and fall back to a simpler, faster 
>>> algorithm in the second stage, to ensure that some control outputs are 
>>> computed before the deadline.
> Dmitry:
>> No, in our systems we use a different schema.
> 
> So what? I said nothing (and know nothing) about your system, any 
> resemblance is coincidental. And there can be several valid schemas.

No, the point was rather that your schema is not typical for a real-time
system.

>> Consider a system with n-processors. The execution
>> time second will be 1/n of the real time second.
> 
> No. If you have n workers digging a ditch, you must pay each of them the 
> same amount of money each hour as if you had one worker. So the "digging 
> hour" is still one hour, although the total amount of work that can be 
> done in one hour is n digging-hours. You are confusing the total amount 
> of work with the amount of work per worker.

The ditch takes 10 digging hours; this is a virtual time, which can be 
1 real hour if I have 10 workers, or 26 hours if I have only one 
(26 = 24 + 2: 8 hours worked the first day, then 2 more the next 
morning). With one worker it can even be 26 + 48 if he starts on a 
Friday, or even more if he takes a leave for child rearing (:-)).

The sum of working hours is a measure of work. It is not a measure of time.

   Time = Work / Power

You can use it to estimate the real time required to complete the work ...
or just re-read The Mythical Man-Month... (:-))

> With n processors the system can do n seconds worth of execution in one 
> real-time second. But each processor still executes for one second. And 
> as I understand the Ada task dispatching/scheduling model, one task 
> cannot execute at the same time on more than one processor, so one task 
> cannot accumulate more than one second of execution time in one second 
> of real time.

Yes.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de




* Re: Ada.Execution_Time
  2010-12-28 15:08                                           ` Ada.Execution_Time Dmitry A. Kazakov
@ 2010-12-28 16:18                                             ` Simon Wright
  2010-12-28 16:34                                               ` Ada.Execution_Time Dmitry A. Kazakov
  2010-12-31  0:40                                             ` Ada.Execution_Time BrianG
  1 sibling, 1 reply; 124+ messages in thread
From: Simon Wright @ 2010-12-28 16:18 UTC (permalink / raw)


"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:

> Yes, this thing. In our case it was Pentium VxWorks 6.x. (The PPC we
> used prior to it had poor performance) The problem was that
> Ada.Real_Time.Clock had the accuracy of the clock interrupts,
> i.e. 1ms, which is by all accounts catastrophic for a 1.7GHz
> processor. You can switch some tasks forth and back between two clock
> changes.

Our experience was that where there are timing constraints to be met, or
cyclic timing behaviours to implement, a millisecond is OK.

We did consider running the VxWorks tick at 100 us but this was quite
unnecessary!




* Re: Ada.Execution_Time
  2010-12-28 15:42                                           ` Ada.Execution_Time Dmitry A. Kazakov
@ 2010-12-28 16:27                                             ` (see below)
  2010-12-28 16:55                                               ` Ada.Execution_Time Dmitry A. Kazakov
  0 siblings, 1 reply; 124+ messages in thread
From: (see below) @ 2010-12-28 16:27 UTC (permalink / raw)


At least part of this discussion is motivated by the abysmal facilities for
the measurement of elapsed time on some (all?) modern architectures.

My current Ada project is the emulation of the KDF9, a computer introduced
50 years ago. It had a hardware clock register that was incremented by 1
every 32 logic clock cycles and could be read by a single instruction taking
4 logic clock cycles (the CPU ran on a 1MHz logic clock).

Using this feature, the OS could keep track of the CPU time used by a
process to within 32 logic clock cycles per time slice (typically better
than 1 part in 1_000). Summing many such slices gives a total with much
better relative error than that of the individual slices, of course.

Dmitry reports a modern computer with a timer having a resolution that 
is thousands or millions of times worse than the CPU's logic clock.

Why has this aspect of computer architecture degenerated so much, I wonder?
And why have software people not made more of a push for improvements?

-- 
Bill Findlay
with blueyonder.co.uk;
use  surname & forename;






* Re: Ada.Execution_Time
  2010-12-28 16:18                                             ` Ada.Execution_Time Simon Wright
@ 2010-12-28 16:34                                               ` Dmitry A. Kazakov
  0 siblings, 0 replies; 124+ messages in thread
From: Dmitry A. Kazakov @ 2010-12-28 16:34 UTC (permalink / raw)


On Tue, 28 Dec 2010 16:18:11 +0000, Simon Wright wrote:

> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:
> 
>> Yes, this thing. In our case it was Pentium VxWorks 6.x. (The PPC we
>> used prior to it had poor performance) The problem was that
>> Ada.Real_Time.Clock had the accuracy of the clock interrupts,
>> i.e. 1ms, which is by all accounts catastrophic for a 1.7GHz
>> processor. You can switch some tasks forth and back between two clock
>> changes.
> 
> Our experience was that where there are timing constraints to be met, or
> cyclic timing behaviours to implement, a milliscond is OK.
> 
> We did consider running the VxWorks tick at 100 us but this was quite
> unnecessary!

We actually have it set at 100 us, I believe.

But we need the high-accuracy clock not for switching tasks; it is for 
time stamping and frequency measurements.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de




* Re: Ada.Execution_Time
  2010-12-28 16:27                                             ` Ada.Execution_Time (see below)
@ 2010-12-28 16:55                                               ` Dmitry A. Kazakov
  2010-12-28 19:41                                                 ` Ada.Execution_Time (see below)
  0 siblings, 1 reply; 124+ messages in thread
From: Dmitry A. Kazakov @ 2010-12-28 16:55 UTC (permalink / raw)


On Tue, 28 Dec 2010 16:27:18 +0000, (see below) wrote:

> Dmitri reports a modern computer with a timer having a resolution that is
> thousands or millions of times worse than the CPU's logic clock.

You got me wrong: the timer resolution is OK; it is the system service 
which does not use it properly. In the case of VxWorks the system time 
is incremented from the timer interrupts, e.g. by 1 ms. You could set 
the interrupts to every 1 us, but then you would spend all processor 
time handling interrupts. It is an OS architecture problem: the system 
time should have been taken from the real-time counter.

> Why has this aspect of computer architecture degenerated so much, I wonder?
> And why have software people not made more of a push for improvements?

The computer architecture did not degenerate. A modern processor and 
motherboard have multiple (3-4) time sources. Some of them have a very 
high resolution and are reliable, e.g. they keep counting in sleep mode 
etc. It is usually the standard OS services that are to blame for not 
using these clocks. In practically any OS there is a backdoor to get at 
a decent real-time clock. Then the journey begins: you need to 
synchronize readings from that clock (usually a 64-bit counter) with 
the system time (of miserable accuracy) in order to get a decent UTC 
stamp. This is doable using some statistical method, depending on your 
needs (monotonic or not, etc.). A shame that the OS does not do this.
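The synchronization step described here can be sketched as a linear fit 
between counter readings and system-time readings. This is only an 
illustration of the idea (two samples, no statistics, invented numbers); 
a production service would filter many samples, as noted above:

```c
#include <assert.h>
#include <stdint.h>

/* One calibration sample: a raw counter reading paired with the
   system (UTC) time, in nanoseconds, observed at "the same" moment. */
typedef struct {
    uint64_t ticks;
    double   utc_ns;
} sample_t;

/* Fit utc = offset + rate * ticks through two samples, then use the
   fit to turn a later counter reading into a UTC stamp. */
static double ticks_to_utc_ns(sample_t a, sample_t b, uint64_t ticks)
{
    double rate = (b.utc_ns - a.utc_ns) / (double)(b.ticks - a.ticks);
    return a.utc_ns + rate * (double)(ticks - a.ticks);
}
```

Whether the result stays monotonic when the fit is refreshed with new 
samples is exactly the "statistical method" question.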

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de




* Re: Ada.Execution_Time
  2010-12-28 16:55                                               ` Ada.Execution_Time Dmitry A. Kazakov
@ 2010-12-28 19:41                                                 ` (see below)
  2010-12-28 20:03                                                   ` Ada.Execution_Time Dmitry A. Kazakov
  0 siblings, 1 reply; 124+ messages in thread
From: (see below) @ 2010-12-28 19:41 UTC (permalink / raw)


On 28/12/2010 16:55, in article 1oq6oggi7rtzj.4u4yyq6m8r74$.dlg@40tude.net,
"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote:

> On Tue, 28 Dec 2010 16:27:18 +0000, (see below) wrote:
> 
>> Dmitri reports a modern computer with a timer having a resolution that is
>> thousands or millions of times worse than the CPU's logic clock.
> 
> You get me wrong, the timer resolution is OK, it is the system service
> which does not use it properly. In the case of VxWorks the system time is
> incremented from the timer interrupts, e.g. by 1ms. You can set interrupts
> to each 1um spending all processor time handling interrupts. It is an OS
> architecture problem. System time should have been taken from the real time
> counter.

Surely the interrupt rate does not matter. The KDF9 clock interrupted once
every 2^20 us, but could be read to the nearest 32 us. Can the clock you
speak of not be interrogated between interrupts?

> 
>> Why has this aspect of computer architecture degenerated so much, I wonder?
>> And why have software people not made more of a push for improvements?
> 
> The computer architecture did not degenerate. [...] It is
> usually the standard OS services to blame for not using these clocks.

I guess the second part of my question stands. 8-)

-- 
Bill Findlay
with blueyonder.co.uk;
use  surname & forename;






* Re: Ada.Execution_Time
  2010-12-28 19:41                                                 ` Ada.Execution_Time (see below)
@ 2010-12-28 20:03                                                   ` Dmitry A. Kazakov
  2010-12-28 22:39                                                     ` Ada.Execution_Time Simon Wright
  0 siblings, 1 reply; 124+ messages in thread
From: Dmitry A. Kazakov @ 2010-12-28 20:03 UTC (permalink / raw)


On Tue, 28 Dec 2010 19:41:40 +0000, (see below) wrote:

> On 28/12/2010 16:55, in article 1oq6oggi7rtzj.4u4yyq6m8r74$.dlg@40tude.net,
> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote:
> 
>> On Tue, 28 Dec 2010 16:27:18 +0000, (see below) wrote:
>> 
>>> Dmitri reports a modern computer with a timer having a resolution that is
>>> thousands or millions of times worse than the CPU's logic clock.
>> 
>> You get me wrong, the timer resolution is OK, it is the system service
>> which does not use it properly. In the case of VxWorks the system time is
>> incremented from the timer interrupts, e.g. by 1ms. You can set interrupts
>> to each 1um spending all processor time handling interrupts. It is an OS
>> architecture problem. System time should have been taken from the real time
>> counter.
> 
> Surely the interrupt rate does not matter. The KDF9 clock interrupted once
> every 2^20 us, but could be read to the nearest 32 us. Can the clock you
> speak of not be interrogated between interrupts?

Yes, the TSC can be read at any time and, if I remember correctly, each 
reading gives a new value. The PPC RT clock is also reliable.

There can be certain issues on multi-core processors: you should take 
care that the task synchronizing the counter with UTC does not jump 
from core to core. Alternatively, you should synchronize the clocks of 
the individual cores.

>>> Why has this aspect of computer architecture degenerated so much, I wonder?
>>> And why have software people not made more of a push for improvements?
>> 
>> The computer architecture did not degenerate. [...] It is
>> usually the standard OS services to blame for not using these clocks.
> 
> I guess the second part of my question stands. 8-)

The OSes did not become degenerate, they always were. (:-))

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de




* Re: Ada.Execution_Time
  2010-12-28 20:03                                                   ` Ada.Execution_Time Dmitry A. Kazakov
@ 2010-12-28 22:39                                                     ` Simon Wright
  2010-12-29  9:07                                                       ` Ada.Execution_Time Dmitry A. Kazakov
  0 siblings, 1 reply; 124+ messages in thread
From: Simon Wright @ 2010-12-28 22:39 UTC (permalink / raw)


"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:

> Yes, the TSC can be read any time, and, if I correctly remember, each
> reading will give a new value. PPC RT clock is also reliable.
>
> There could be certain issues for multi-core processors, you should
> take care that the task synchronizing the counter with UTC would not
> jump from core to core. Alternatively you should synchronize clocks of
> individual cores.

Using the TSC on a MacBook Pro gives very unreliable results (I rather
think the core you're using goes to sleep and you may or may not wake
up using the same core!). However, the system clock (Ada.Calendar) is
precise to a microsecond.




* Re: Ada.Execution_Time
  2010-12-28 22:39                                                     ` Ada.Execution_Time Simon Wright
@ 2010-12-29  9:07                                                       ` Dmitry A. Kazakov
  0 siblings, 0 replies; 124+ messages in thread
From: Dmitry A. Kazakov @ 2010-12-29  9:07 UTC (permalink / raw)


On Tue, 28 Dec 2010 22:39:31 +0000, Simon Wright wrote:

> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:
> 
>> Yes, the TSC can be read any time, and, if I correctly remember, each
>> reading will give a new value. PPC RT clock is also reliable.
>>
>> There could be certain issues for multi-core processors, you should
>> take care that the task synchronizing the counter with UTC would not
>> jump from core to core. Alternatively you should synchronize clocks of
>> individual cores.
> 
> Using the TSC on a MacBook Pro gives very unreliable results (I rather
> think the core you're using goes to sleep and you may or may not wake
> up using the same core!).

Can it be because the processor changes the TSC frequency when it goes 
into sleep mode? I thought Intel had fixed that bug.

I cannot think of a good time service for a multi-core with 
unsynchronized TSCs. Intel should simply fix that mess.

> However, the system clock (Ada.Calendar) is precise to a microsecond.

It likely uses a programmable timer. The TSC gives a fraction of a 
nanosecond.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de




* Re: Ada.Execution_Time
  2010-12-27 22:11                               ` Ada.Execution_Time Randy Brukardt
@ 2010-12-29 12:48                                 ` Niklas Holsti
  2010-12-29 14:30                                   ` Ada.Execution_Time Dmitry A. Kazakov
  2010-12-30  5:06                                   ` Ada.Execution_Time Randy Brukardt
  0 siblings, 2 replies; 124+ messages in thread
From: Niklas Holsti @ 2010-12-29 12:48 UTC (permalink / raw)


Randy, I'm glad that you are participating in this thread. My duologue 
with Dmitry is becoming repetitive and our views entrenched.

We have been discussing several things, although the focus is on the 
intended meaning and properties of Ada.Execution_Time. As I am not an 
ARG member I have based my understanding on the (A)RM text. If the text 
does not reflect the intent of the ARG, I will be glad to know it, but 
perhaps the ARG should then consider resolving the conflict by 
confirming or changing the text.

Randy Brukardt wrote:
> "Niklas Holsti" <niklas.holsti@tidorum.invalid> wrote in message 
> news:8nm30fF7r9U1@mid.individual.net...
>> Dmitry A. Kazakov wrote:
>>> On Sat, 18 Dec 2010 23:20:20 +0200, Niklas Holsti wrote:
> ...
>>> On a such platform the implementation would be as perverse as RM D.14
>>> is. But the perversion is only because of the interpretation.
>> Bah. I think that when RM D.14 says "time", it really means time. You
>> think it means something else, perhaps a CPU cycle count. I think the 
>> burden of proof is on you.
>>
>> It seems evident to me that the text in D.14 must be interpreted using
>> the concepts in D.2.1, "The Task Dispatching Model", which clearly
>> specifies real-time points when a processor starts to execute a task and
>> stops executing a task. To me, and I believe to most readers of the RM,
>> the execution time of a task is the sum of these time slices, thus a
>> physical, real time.
> 
> For the record, I agree more with Dmitry than Niklas here. At least the 
> interpretation *I* had when this package was proposed was that it had only a 
> slight relationship to real-time.

Oh. What would that slight relationship be? Or was it left unspecified?

> My understanding was that it was intended 
> to provide a window into whatever facilities the underlying system had for 
> execution "time" counting.

Of course, as long as those facilities are good enough for the RM 
requirements and for the users; otherwise, the implementor might improve 
on the underlying system as required. The same holds for Ada.Real_Time. 
If the underlying system is a bare-board Ada RTS, the Ada 95 form of the 
RTS probably had to be extended to support Ada.Execution_Time.

I'm sure that the proposers of the package Ada.Execution_Time expected 
the implementation to use the facilities of the underlying system. But I 
am also confident that they had in mind some specific uses of the 
package and that these uses require that the values provided by 
Ada.Execution_Time have certain properties that can reasonably be 
expected of "execution time", whether or not these properties are 
expressly written as requirements in the RM.

Examples of these uses are given in the paper by A. Burns and A.J. 
Wellings, "Programming Execution-Time Servers in Ada 2005," pp.47-56, 
27th IEEE International Real-Time Systems Symposium (RTSS'06), 2006. 
http://doi.ieeecomputersociety.org/10.1109/RTSS.2006.39.

You put "time" in quotes, Randy. Don't you agree that there *is* a 
valid, physical concept of "the execution time of a task" that can be 
measured in units of physical time, seconds say? At least for processors 
that only execute one task at a time, and whether or not the system 
provides facilities for measuring this time?

I think that the concept exists and that it matches the description in 
RM D.14 (11/2), using the background from D.2.1, whether or not the ARG 
intended this match.

If you agree that such "execution time" values are conceptually well 
defined, do you not think that the "execution time counting facilities" 
of real-time OSes are meant to measure these values, to some practical 
level of accuracy?

If so, then even if Ada.Execution_Time is intended as only a window into 
these facilities, it is still intended to provide measures of the 
physical execution time of tasks, to some practical level of accuracy.

> That had no defined relationship with what Ada calls "time".
> As such, I think the name "execution time" is misleading (and 
> I recall some discussions about that in the ARG), but no one had a better 
> name that made any sense at all.

Do you remember if these discussions concerned the name of the package, 
the name of the type CPU_Time, or the very concept "execution time"? If 
the question was of the terms "time" versus "duration", I think 
"duration" would have been more consistent with earlier Ada usage, but 
"execution time" is more common outside Ada, for example in the acronym 
WCET for Worst-Case Execution Time.

The fact that Ada.Execution_Time provides a subtraction operator for 
CPU_Time that yields Time_Span, which can be further converted to 
Duration, leads the RM reader to assume some relationship, at least that 
spans of real time and spans of execution time can be measured in the 
same physical units (seconds).

It has already been said, and not only by me, that Ada.Execution_Time is 
intended (among other things, perhaps) to be used for implementing task 
scheduling algorithms that depend on the accumulated execution time of 
the tasks. This is supported by the Burns and Wellings paper referenced 
above. In such algorithms I believe it is essential that the execution 
times are physical times because they are used in formulae that relate 
(sums of) execution-time spans to spans of real time.

Dmitry has agreed with some of my statements on this point, for example:

- A task cannot accumulate execution time at a higher rate than real 
time. For example, in one real-time second the CPU_Time of a task cannot 
increase by more than one second.

- If only one task is executing on a processor, the execution time of 
that task increases (or "could increase") at the same rate as real time.

Do you agree that we can expect these statements to be true? (On the 
second point, system overhead should of course be taken into account, on 
which more below.)

> In particular, there is no requirement in the RM or anywhere else that these 
> "times" sum to any particular answer.

I agree that the RM has no such explicit requirement. I made this claim 
to counter Dmitry's assertion that CPU_Time has no physical meaning, and 
of course I accept that the sum will usually be less than real elapsed 
time because the processor spends some time on non-task activities.

The last sentence of RM D.14 (11/2) says "It is implementation defined 
which task, if any, is charged the execution time that is consumed by 
interrupt handlers and run-time services on behalf of the system". This 
sentence strongly suggests to me that the author of this paragraph had 
in mind that the total available execution time (span) equals the real 
time (span), that some of this total is charged to the tasks, but that 
some of the time spent in interrupt handlers etc. need not be charged to 
tasks.

The question is how much meaning should be read into ordinary words like 
"time" when used in the RM without a formal definition.

If the RM were to say that L is the length of a piece of string S, 
measured in meters, and that some parts of S are colored red, some blue, 
and some parts may not be colored at all, surely we could conclude that 
the sum of the lengths in meters of the red, blue, and uncolored parts 
equals L? And that the sum of the lengths of the red and blue parts is 
at most L? And that, since we like colorful things, we hope that the 
length of the uncolored part is small?

I think the case of summing task execution time spans is analogous.

> I don't quite see how there could be, 
> unless you were going to require a tailored Ada target system (which is 
> definitely not going to be a requirement).

I don't want such a requirement. The acceptable overhead (fraction of 
execution time not charged to tasks) depends on the application.

Moreover, on a multi-process system (an Ada program running under 
Windows or Linux, for example) some of the CPU time is spent on other 
processes, all of which would be "overhead" from the point of view of 
the Ada program. I don't think that the authors of D.14 had such 
systems in mind.

> Perhaps the proposers (from the IRTAW meetings) had something else in mind, 
> but if so, they communicated it very poorly.

Do you remember who they were? Are the IRTAW minutes or proposals 
accessible on the web?

-- 
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
       .      @       .




* Re: Ada.Execution_Time
  2010-12-29 12:48                                 ` Ada.Execution_Time Niklas Holsti
@ 2010-12-29 14:30                                   ` Dmitry A. Kazakov
  2010-12-29 16:19                                     ` Ada.Execution_Time (see below)
                                                       ` (2 more replies)
  2010-12-30  5:06                                   ` Ada.Execution_Time Randy Brukardt
  1 sibling, 3 replies; 124+ messages in thread
From: Dmitry A. Kazakov @ 2010-12-29 14:30 UTC (permalink / raw)


On Wed, 29 Dec 2010 14:48:20 +0200, Niklas Holsti wrote:

> Dmitry has agreed with some of my statements on this point, for example:
> 
> - A task cannot accumulate execution time at a higher rate than real 
> time. For example, in one real-time second the CPU_Time of a task cannot 
> increase by more than one second.

Hold on, that is only true if a certain model of CPU_Time measurement 
is used. There are many potential models. The one we discussed was 
model A:

Model A. Get an RTC reading upon activation. Each time CPU_Time is 
requested by Clock, get another RTC reading, take the difference, and 
add the accumulator to the result. Upon task deactivation, take the 
difference and update the accumulator.

This is a very strong model. Weaker models:

Model A.1. Get RTC upon activation and deactivation. Update the accumulator
upon deactivation. When the task is active CPU_Time does not change.

Model B. Use a time source different from the RTC. This is what Windows 
actually does.

Model B.1. Like A.1, CPU_Time freezes when the task is active.

Model C. Asynchronous task monitoring process

...

Note that in any of these models the counter readings are rounded. 
Windows rounds toward zero, which is why you never get more load than 
100%. But it is conceivable that some systems round away from zero or 
to the nearest bound. So the statement holds only with model A (maybe 
C) plus a corresponding rounding.
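Model A is small enough to sketch. The RTC readings are passed in 
explicitly so the accounting is visible; this illustrates the model as 
stated, not any particular RTS implementation:

```c
#include <assert.h>
#include <stdint.h>

/* Per-task execution-time accounting, Model A: slices are measured
   from RTC readings at activation and deactivation; a Clock reading
   adds the currently running slice. */
typedef struct {
    uint64_t accumulated;   /* sum of finished slices, in RTC ticks */
    uint64_t activated_at;  /* RTC at the last activation           */
    int      active;
} task_time_t;

static void on_activate(task_time_t *t, uint64_t rtc)
{
    t->activated_at = rtc;
    t->active = 1;
}

static void on_deactivate(task_time_t *t, uint64_t rtc)
{
    t->accumulated += rtc - t->activated_at;
    t->active = 0;
}

/* The CPU_Time "Clock".  Model A.1 would return just t->accumulated,
   i.e. the value would freeze while the task is running. */
static uint64_t cpu_time(const task_time_t *t, uint64_t rtc)
{
    return t->accumulated + (t->active ? rtc - t->activated_at : 0);
}
```

A task that runs from tick 100 to 150 and again from tick 200 onward 
shows 80 ticks at a Clock reading taken at tick 230 under Model A, but 
only 50 under Model A.1.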

> - If only one task is executing on a processor, the execution time of 
> that task increases (or "could increase") at the same rate as real time.

This may also be wrong if a model-B scheme is used. In particular, task 
switching may be (and I think is) driven by the programmable timer 
interrupts, while the real-time clock may be driven by the TSC. Since 
these two are physically different, unsynchronized time sources, the 
effect can be anything; one should expect a systematic error that 
accumulates with time.
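The size of that systematic error is easy to bound: two sources whose 
rates differ by some parts-per-million figure drift apart linearly with 
elapsed time. A trivial sketch (the 50 ppm value is the crystal 
tolerance Simon quoted earlier in the thread, reused here only as an 
example):

```c
#include <assert.h>

/* Accumulated drift between two time sources whose rates differ by
   `ppm` parts per million, after `elapsed_s` seconds of real time. */
static double drift_seconds(double elapsed_s, double ppm)
{
    return elapsed_s * ppm / 1e6;
}
```

After one hour at 50 ppm the two sources disagree by 0.18 s, many 
orders of magnitude above the resolution of a TSC.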

> The question is how much meaning should be read into ordinary words like 
> "time" when used in the RM without a formal definition.

Time as a physical concept is not absolute. There is no *the* real 
time, but many real times and even more unreal ones. I don't think the 
RM can go into this, not only because it would not be Ada's business, 
but because it would otherwise have to use some reference time. Ada 
does not have one; it intentionally refused to have one when 
Ada.Real_Time.Time was introduced. The same arguments that were used 
then apply now. CPU_Time is a third time, by default absolutely 
independent of the other two.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de




* Re: Ada.Execution_Time
  2010-12-29 14:30                                   ` Ada.Execution_Time Dmitry A. Kazakov
@ 2010-12-29 16:19                                     ` (see below)
  2010-12-29 16:51                                       ` Ada.Execution_Time Dmitry A. Kazakov
  2010-12-29 20:32                                     ` Ada.Execution_Time Niklas Holsti
  2010-12-30 19:23                                     ` Ada.Execution_Time Niklas Holsti
  2 siblings, 1 reply; 124+ messages in thread
From: (see below) @ 2010-12-29 16:19 UTC (permalink / raw)


On 29/12/2010 14:30, in article aooml6t0ezs4.4srxtfm9z00r.dlg@40tude.net,
"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote:

> On Wed, 29 Dec 2010 14:48:20 +0200, Niklas Holsti wrote:
> 
>> Dmitry has agreed with some of my statements on this point, for example:
>> 
>> - A task cannot accumulate execution time at a higher rate than real
>> time. For example, in one real-time second the CPU_Time of a task cannot
>> increase by more than one second.
> 
> Hold on, that is only true if a certain model of CPU_Time measurement is used.
> There are many potential models. The one we discussed was the model A:
> 
> Model A. Get an RTC reading upon activation. Each time CPU_Time is
> requested by Clock get another RTC reading, build the difference, add the
> accumulator to the result. Upon task deactivation, get the difference and
> update the accumulator.
> 
> This is a very strong model. Weaker models:
> 
> Model A.1. Get RTC upon activation and deactivation. Update the accumulator
> upon deactivation. When the task is active CPU_Time does not change.
> 
> Model B. Use a time source different from RTC. This is what Windows
> actually does.
> 
> Model B.1. Like A.1, CPU_Time freezes when the task is active.
> 
> Model C. Asynchronous task monitoring process
> 
> ...
> Note that in either model the counter readings are rounded. ...
> 
>> - If only one task is executing on a processor, the execution time of
>> that task increases (or "could increase") at the same rate as real time.
> 
> This also may be wrong if a B model is used. ...
> 
>> The question is how much meaning should be read into ordinary words like
>> "time" when used in the RM without a formal definition.
> 
> Time as a physical concept is not absolute. There is no *the* real time,
> but many real times and even more unreal ones.  ...

I hope we can agree that Ada is defined "sensibly".

From this I deduce that the intent for CPU_Time is that it be a useful
approximation to the sum of the durations (small "d") of the intervals of
local inertial-frame physical time in which the task is in the running
state.

It seems to me that the only grey area is the degree of approximation that
is acceptable for the result to be "useful".

Dmitry raises some devil's-advocate issues around that. Some of them might
be considered dismissed by the assumption of a sensible definition. Others
might not be so clear. Perhaps the ARG considers these to be issues of
implementation quality rather than semantics.

-- 
Bill Findlay
with blueyonder.co.uk;
use  surname & forename;






* Re: Ada.Execution_Time
  2010-12-29 16:19                                     ` Ada.Execution_Time (see below)
@ 2010-12-29 16:51                                       ` Dmitry A. Kazakov
  2010-12-29 19:57                                         ` Ada.Execution_Time (see below)
  0 siblings, 1 reply; 124+ messages in thread
From: Dmitry A. Kazakov @ 2010-12-29 16:51 UTC (permalink / raw)


On Wed, 29 Dec 2010 16:19:13 +0000, (see below) wrote:

> From this I deduce that the intent for CPU_Time is that it be a useful
> approximation to the sum of the durations (small "d") of the intervals of
> local inertial-frame physical time in which the task is in the running
> state.

There are more pragmatic considerations than relativity theory. The
duration you refer to above is according to which clock?

- Real time CPU counter
- Programmable timer
- BIOS clock
- OS system clock
- Ada.Real_Time.Clock
- Ada.Calendar.Clock
- an NTP server from the given list ...
  ...

> It seems to me that the only grey area is the degree of approximation that
> is acceptable for the result to be "useful".

That is the next can of worms to open. Once you have decided which clock to
take, you would have to define the sum of *which* durations according to
this clock you are going to approximate. That would be far more difficult,
and IMO impossible. What is the "CPU" in the presence of many cores? What
does it mean for a task to be "active", keeping in mind the cases where it
could get blocked without losing the "CPU" (as defined above)?

In my humble opinion, the ARG defined Ada.Execution_Time in the most
*reasonable* way, in particular, allowing it to deliver whatever garbage
the underlying OS service spits out.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de




* Re: Ada.Execution_Time
  2010-12-29 16:51                                       ` Ada.Execution_Time Dmitry A. Kazakov
@ 2010-12-29 19:57                                         ` (see below)
  2010-12-29 21:20                                           ` Ada.Execution_Time Dmitry A. Kazakov
  0 siblings, 1 reply; 124+ messages in thread
From: (see below) @ 2010-12-29 19:57 UTC (permalink / raw)


On 29/12/2010 16:51, in article jw9ocxiajasa.142oku2z0e6rx$.dlg@40tude.net,
"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote:

> On Wed, 29 Dec 2010 16:19:13 +0000, (see below) wrote:
> 
>> From this I deduce that the intent for CPU_Time is that it be a useful
>> approximation to the sum of the durations (small "d") of the intervals of
>> local inertial-frame physical time in which the task is in the running
>> state.
> 
> There exist more pragmatic considerations than the relativity theory. The
> duration you refer above is according to which clock?

I referred to time dilation to preempt your bringing it up. 8-)

> - Real time CPU counter
> - Programmable timer
> - BIOS clock
> - OS system clock
> - Ada.Real_Time.Clock
> - Ada.Calendar.Clock
> - an NTP server from the given list ...
>   ...

Quite so, but these issues, and rounding, and so on, are all subsumed under
"useful approximation".  They are implementation dependent in the best of
circumstances, and so need to be specified by the implementer.

>> It seems to me that the only grey area is the degree of approximation that
>> is acceptable for the result to be "useful".
> 
> That is the next can of worms to open. Once you decided which clock you
> take, you would have to define the sum of *which* durations according to
> this clock you are going to approximate. This would be way more difficult
> and IMO impossible. What is the "CPU" in the presence of many cores? What does
> it mean for the task to be "active" keeping in mind the cases it could get
> blocked without losing the "CPU" (defined above)?

I think you are creating unnecessary difficulties. Note that I said
nothing about CPUs, only about process states. The elapsed time between
dispatching a task/process and pre-empting or blocking it is a well defined
physical quantity. It has nothing to do with cores, and I've no idea what
"blocked without losing the CPU" means. In my dictionary that is simply
self-contradictory. But if it does mean something in some implementation,
all that is necessary is to document the approximation it gives rise to.

> In my humble opinion,

!-)

> ARG defined Ada.Execution_Time in the most
> *reasonable* way, in particular, allowing it to deliver whatever garbage
> the underlying OS service spits out.

I agree. What else could they do?
And if the implementation documents that, where is the harm?

-- 
Bill Findlay
with blueyonder.co.uk;
use  surname & forename;







* Re: Ada.Execution_Time
  2010-12-29 14:30                                   ` Ada.Execution_Time Dmitry A. Kazakov
  2010-12-29 16:19                                     ` Ada.Execution_Time (see below)
@ 2010-12-29 20:32                                     ` Niklas Holsti
  2010-12-29 21:21                                       ` Ada.Execution_Time Dmitry A. Kazakov
  2010-12-30 19:23                                     ` Ada.Execution_Time Niklas Holsti
  2 siblings, 1 reply; 124+ messages in thread
From: Niklas Holsti @ 2010-12-29 20:32 UTC (permalink / raw)


Dmitry A. Kazakov wrote:
> On Wed, 29 Dec 2010 14:48:20 +0200, Niklas Holsti wrote:
> 
>> Dmitry has agreed with some of my statements on this point, for example:
>>
>> - A task cannot accumulate execution time at a higher rate than real 
>> time. For example, in one real-time second the CPU_Time of a task cannot 
>> increase by more than one second.
> 
> Hold on, that is only true if a certain model of CPU_Time measurement is used.

In my view, it is true within the accuracy of the execution-time 
measurement method.

> There are many potential models. The one we discussed was the model A:
> 
> Model A. Get an RTC reading upon activation. Each time CPU_Time is
> requested by Clock get another RTC reading, build the difference, add the
> accumulator to the result. Upon task deactivation, get the difference and
> update the accumulator.

OK. This is, I think, the most natural model, perhaps with some 
processor performance counter or CPU-clock-cycle counter replacing the RTC.

> This is a very strong model. Weaker models:
> 
> Model A.1. Get RTC upon activation and deactivation. Update the accumulator
> upon deactivation. When the task is active CPU_Time does not change.

That model is not permitted, because the value of 
Ada.Execution_Time.Clock must change at every "CPU tick". The duration 
of a CPU tick is Ada.Execution_Time.CPU_Tick, which is at most one 
millisecond (RM D.14(20/2)).

It is true that CPU_Tick is only the "average length" of the 
constant-Clock intervals, but the implementation is also required to 
document an upper bound, which also forbids your model A.1.

(I think that this "average" definition caters for implementations where 
the execution time counter is incremented by an RTC interrupt handler 
that may suffer some timing jitter.)

> Model B. Use a time source different from RTC. This is what Windows
> actually does.

I don't think that this is a different "model". No time source provides 
ideal, exact real time. If the time source for Ada.Execution_Time.Clock 
differs much from real time, the accuracy of the implementation is poor 
to that extent. It does not surprise me that this happens on Windows.

I admit that the RM does not specify any required accuracy for 
Ada.Execution_Time.Clock. The accuracy required for 
execution-time-dependent scheduling algorithms is generally low, I believe.

Analogously, there are no accuracy requirements on Ada.Real_Time.Clock.

> Model B.1. Like A.1, CPU_Time freezes when the task is active.

Forbidden like A.1 above.

> Model C. Asynchronous task monitoring process

That sounds weird. Please clarify.

> Note that in either model the counter readings are rounded. Windows rounds
> toward zero, which is why you never get more load than 100%. But it is
> conceivable, even expectable, that some systems would round away from zero
> or to the nearest bound. So the statement holds only if you have A (maybe
> C) + a corresponding rounding.

So it holds within the accuracy of the measurement method, which often 
involves some sampling or rounding error. In my view.

>> - If only one task is executing on a processor, the execution time of 
>> that task increases (or "could increase") at the same rate as real time.
> 
> This also may be wrong if a B model is used. In particular, task switching
> may be (and I think is) driven by the programmable timer interrupts. The
> real-time clock may be driven by the TSC. Since these two are physically
> different, unsynchronized time sources, the effect can be anything. A
> systematic error that accumulates over time is to be expected.

Again, it holds within the accuracy of the measurement method and the 
time source, which is all that one can expect.

The points I made were meant to show how CPU_Time is related, in 
principle, to real time. I entirely accept that in practice the 
relationships will be affected by measurement inaccuracies.

For the task scheduling methods that depend on actual execution times, I 
believe that long-term drifts or accumulations of errors in CPU_Time are 
unimportant. The execution time (span) measurements need to be 
reasonably accurate only over time spans similar to the period of the 
longest-period task. The overhead (execution time not charged to tasks) 
will probably be much larger, both in mean value and in variability, 
than the time-source errors.

As discussed earlier, the time source for Ada.Real_Time.Clock that 
determines when time-driven tasks are activated may need higher fidelity 
to real time.

>> The question is how much meaning should be read into ordinary words like 
>> "time" when used in the RM without a formal definition.
> 
> Time as a physical concept is not absolute. There is no *the* real time,
> but many real times and even more unreal ones.

When RM D.14(11/2) defines "the execution time of a given task" as "the 
time spent by the system executing that task", the only reasonable 
reading of the second "time" is as the common-sense physical time, as 
measured by your wrist-watch or by some more precise clock.

Let's not go into relativity and quantum mechanics for this.

-- 
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
       .      @       .




* Re: Ada.Execution_Time
  2010-12-29 19:57                                         ` Ada.Execution_Time (see below)
@ 2010-12-29 21:20                                           ` Dmitry A. Kazakov
  2010-12-30  5:13                                             ` Ada.Execution_Time Randy Brukardt
  2010-12-30 13:37                                             ` Ada.Execution_Time Niklas Holsti
  0 siblings, 2 replies; 124+ messages in thread
From: Dmitry A. Kazakov @ 2010-12-29 21:20 UTC (permalink / raw)


On Wed, 29 Dec 2010 19:57:19 +0000, (see below) wrote:

> They are implementation dependent in the best of
> circumstances, and so need to be specified by the implementer.

But Niklas seems to want more than merely documentation.

> I've no idea what "blocked without losing the CPU" means.

That is when you access something over the system bus from the task and
have to wait for the bus to become free.

There is also kernel time spent on OS bookkeeping and on I/O initiated by
other tasks. It is impossible to tell what is done on a task's behalf and
what is not. RM D.14(11/2) leaves everything to the implementation.

>> ARG defined Ada.Execution_Time in the most
>> *reasonable* way, in particular, allowing it to deliver whatever garbage
>> the underlying OS service spits out.
> 
> I agree. What else could they do?
> And if the implementation documents that, where is the harm?

To me no harm.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de




* Re: Ada.Execution_Time
  2010-12-29 20:32                                     ` Ada.Execution_Time Niklas Holsti
@ 2010-12-29 21:21                                       ` Dmitry A. Kazakov
  2010-12-30 13:34                                         ` Ada.Execution_Time Niklas Holsti
  0 siblings, 1 reply; 124+ messages in thread
From: Dmitry A. Kazakov @ 2010-12-29 21:21 UTC (permalink / raw)


On Wed, 29 Dec 2010 22:32:30 +0200, Niklas Holsti wrote:

> Dmitry A. Kazakov wrote:
>> 
>> Model C. Asynchronous task monitoring process
> 
> That sounds weird. Please clarify.

For example, in the kernel you have a timer interrupt every n us. Within
the handler you get the current TCB and increment the CPU usage counter
there by 1. The CPU_Time returned by Clock is then Counter * n us. This is
quite a lightweight scheme, which can be used for small and real-time
systems. The overhead is constant.

> Again, it holds within the accuracy of the measurement method and the 
> time source, which is all that one can expect.

The error is not bounded. Only its rate is bounded, e.g. x seconds per
second of measurement.

>> Time as a physical concept is not absolute. There is no *the* real time,
>> but many real times and even more unreal ones.
> 
> When RM D.14(11/2) defines "the execution time of a given task" as "the 
> time spent by the system executing that task", the only reasonable 
> reading of the second "time" is as the common-sense physical time, as 
> measured by your wrist-watch or by some more precise clock.

Which physical experiment could prove or disprove that a given
implementation is in agreement with this definition? Execution time is not
observable.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de




* Re: Ada.Execution_Time
  2010-12-29 12:48                                 ` Ada.Execution_Time Niklas Holsti
  2010-12-29 14:30                                   ` Ada.Execution_Time Dmitry A. Kazakov
@ 2010-12-30  5:06                                   ` Randy Brukardt
  2010-12-30 23:49                                     ` Ada.Execution_Time Niklas Holsti
  1 sibling, 1 reply; 124+ messages in thread
From: Randy Brukardt @ 2010-12-30  5:06 UTC (permalink / raw)


"Niklas Holsti" <niklas.holsti@tidorum.invalid> wrote in message 
news:8o0p0lF94rU1@mid.individual.net...
> Randy, I'm glad that you are participating in this thread. My duologue 
> with Dmitry is becoming repetitive and our views entrenched.
>
> We have been discussing several things, although the focus is on the 
> intended meaning and properties of Ada.Execution_Time. As I am not an ARG 
> member I have based my understanding on the (A)RM text. If the text does 
> not reflect the intent of the ARG, I will be glad to know it, but perhaps 
> the ARG should then consider resolving the conflict by confirming or 
> changing the text.

Perhaps, but that presumes there is something wrong with the text. A lot is 
purposely left unspecified in the ARM; what causes problems is when people 
start reading stuff that is not there.

...

>> My understanding was that it was intended to provide a window into 
>> whatever facilities the underlying system had for execution "time" 
>> counting.
>
> Of course, as long as those facilities are good enough for the RM 
> requirements and for the users; otherwise, the implementor might improve 
> on the underlying system as required. The same holds for Ada.Real_Time. If 
> the underlying system is a bare-board Ada RTS, the Ada 95 form of the RTS 
> probably had to be extended to support Ada.Execution_Time.
>
> I'm sure that the proposers of the package Ada.Execution_Time expected the 
> implementation to use the facilities of the underlying system. But I am 
> also confident that they had in mind some specific uses of the package and 
> that these uses require that the values provided by Ada.Execution_Time 
> have certain properties that can reasonably be expected of "execution 
> time", whether or not these properties are expressly written as 
> requirements in the RM.

Probably, but quality of implementation is rarely specified in the Ada 
Standard. When it is, it generally is in the form of Implementation Advice 
(as opposed to hard requirements). The expectation is that implementers are 
going to provide the best implementation that they can -- implementers don't 
purposely build crippled or useless implementations. Moreover, that is 
*more* likely when the Standard is overspecified, simply because of the need 
to provide something that meets the standard.

> Examples of these uses are given in the paper by A. Burns and A.J. 
> Wellings, "Programming Execution-Time Servers in Ada 2005," pp.47-56, 27th 
> IEEE International Real-Time Systems Symposium (RTSS'06), 2006. 
> http://doi.ieeecomputersociety.org/10.1109/RTSS.2006.39.
>
> You put "time" in quotes, Randy. Don't you agree that there *is* a valid, 
> physical concept of "the execution time of a task" that can be measured in 
> units of physical time, seconds say? At least for processors that only 
> execute one task at a time, and whether or not the system provides 
> facilities for measuring this time?

I'm honestly not sure. The problem is that while such a concept might 
logically exist, as a practical matter it cannot be measured outside of the 
most controlled circumstances. Thus, that might make sense in a bare-board 
Ada implementation, but not in any implementation running on top of any OS 
or kernel. As such, whether the concept exists is more of an "angels on the 
head of a pin" question than anything of practical importance.

> I think that the concept exists and that it matches the description in RM 
> D.14 (11/2), using the background from D.2.1, whether or not the ARG 
> intended this match.
>
> If you agree that such "execution time" values are conceptually well 
> defined, do you not think that the "execution time counting facilities" of 
> real-time OSes are meant to measure these values, to some practical level 
> of accuracy?
>
> If so, then even if Ada.Execution_Time is intended as only a window into 
> these facilities, it is still intended to provide measures of the physical 
> execution time of tasks, to some practical level of accuracy.

The problem is that that "practical level of accuracy" isn't realistic. 
Moreover, I've always viewed such facilities as "profiling" ones -- it's the 
relative magnitudes of the values that matter, not the absolute values. In 
that case, the scale of the values is not particularly relevant.

Specifically, I mean that what is important is which task is taking a lot of 
CPU. In that case, it simply is the task that has a large "execution time" 
(whatever that means) compared to the others. Typically, that's more than 
100 times the usage of the other tasks, so the units involved are hardly 
relevant.

>> That had no defined relationship with what Ada calls "time".
>> As usch, I think the name "execution time" is misleading, (and I recall 
>> some discussions about that in the ARG), but no one had a better name 
>> that made any sense at all.
>
> Do you remember if these discussions concerned the name of the package, 
> the name of the type CPU_Time, or the very concept "execution time"? If 
> the question was of the terms "time" versus "duration", I think "duration" 
> would have been more consistent with earlier Ada usage, but "execution 
> time" is more common outside Ada, for example in the acronym WCET for 
> Worst-Case Execution Time.
>
> The fact that Ada.Execution_Time provides a subtraction operator for 
> CPU_Time that yields Time_Span, which can be further converted to 
> Duration, leads the RM reader to assume some relationship, at least that 
> spans of real time and spans of execution time can be measured in the same 
> physical units (seconds).
>
> It has already been said, and not only by me, that Ada.Execution_Time is 
> intended (among other things, perhaps) to be used for implementing task 
> scheduling algorithms that depend on the accumulated execution time of the 
> tasks. This is supported by the Burns and Wellings paper referenced above. 
> In such algorithms I believe it is essential that the execution times are 
> physical times because they are used in formulae that relate (sums of) 
> execution-time spans to spans of real time.

That would be a wrong interpretation of the algorithms, I think. (Either that, 
or the algorithms themselves are heavily flawed!). The important property is 
that all of the execution times have a reasonably proportional relationship 
to the actual time spent executing each task (that hypothetical concept); 
the absolute values shouldn't matter much (just as the exact priority values 
are mostly irrelevant to scheduling decisions). Moreover, when the values 
are close, one would hope that the algorithms don't change behavior much.

The values would be trouble if they bore no relationship at all to the 
"actual time spent executing the task", but it's hard to imagine any 
real-world facility in which that was the case.

...
...
>> In particular, there is no requirement in the RM or anywhere else that 
>> these "times" sum to any particular answer.
>
> I agree that the RM has no such explicit requirement. I made this claim to 
> counter Dmitry's assertion that CPU_Time has no physical meaning, and of 
> course I accept that the sum will usually be less than real elapsed time 
> because the processor spends some time on non-task activities.
>
> The last sentence of RM D.14 (11/2) says "It is implementation defined 
> which task, if any, is charged the execution time that is consumed by 
> interrupt handlers and run-time services on behalf of the system". This 
> sentence strongly suggests to me that the author of this paragraph had in 
> mind that the total available execution time (span) equals the real time 
> (span), that some of this total is charged to the tasks, but that some of 
> the time spent in interrupt handlers etc. need not be charged to tasks.
>
> The question is how much meaning should be read into ordinary words like 
> "time" when used in the RM without a formal definition.
>
> If the RM were to say that L is the length of a piece of string S, 
> measured in meters, and that some parts of S are colored red, some blue, 
> and some parts may not be colored at all, surely we could conclude that 
> the sum of the lengths in meters of the red, blue, and uncolored parts 
> equals L? And that the sum of the lengths of the red and blue parts is at 
> most L? And that, since we like colorful things, we hope that the length 
> of the uncolored part is small?
>
> I think the case of summing task execution time spans is analogous.

I think you are inventing things. There is no such requirement in the 
standard, and that's good: I've never seen a real system in which this has 
been true.

Even the various profilers I wrote for MS-DOS (the closest system to a bare 
machine that will ever be in wide use) never had this property. I used to 
think that it was some sort of bug in my methods, but even using completely 
different ways of measuring time (counting ticks at subprogram heads vs. 
statistical probes -- I tried both) the effects still showed up. I've pretty 
much concluded that it is simply part of the nature of computer time -- much 
like floating point, it is an incomplete abstraction of the "real" time, and 
expecting too much out of it is going to lead immediately to disappointment.

>> I don't quite see how there could be, unless you were going to require a 
>> tailored Ada target system (which is definitely not going to be a 
>> requirement).
>
> I don't want such a requirement. The acceptable overhead (fraction of 
> execution time not charged to tasks) depends on the application.
>
> Moreover, on a multi-process system (an Ada program running under Windows 
> or Linux, for example) some of the CPU time is spent on other processes, 
> all of which would be "overhead" from the point of view of the Ada 
> program. I don't think that the authors of D.14 had such systems in mind.

I disagree, in the sense that the ARG as a whole certainly considered the 
use of this facility in all environments. (I find that it would be very 
valuable for profiling on Windows, for instance, even if the results only 
have a weak relationship to reality).

It's possible that the people who proposed it originally were thinking as 
you are, but the ARG modified those proposals quite a bit; the result is 
definitely a team effort and not the work of any particular individual.

                               Randy.






* Re: Ada.Execution_Time
  2010-12-29 21:20                                           ` Ada.Execution_Time Dmitry A. Kazakov
@ 2010-12-30  5:13                                             ` Randy Brukardt
  2010-12-30 13:37                                             ` Ada.Execution_Time Niklas Holsti
  1 sibling, 0 replies; 124+ messages in thread
From: Randy Brukardt @ 2010-12-30  5:13 UTC (permalink / raw)


"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message 
news:xdzib16pw12u$.v5dybjo9t1sb.dlg@40tude.net...
> On Wed, 29 Dec 2010 19:57:19 +0000, (see below) wrote:
>...
>>> ARG defined Ada.Execution_Time in the most
>>> *reasonable* way, in particular, allowing it to deliver whatever garbage
>>> the underlying OS service spits out.
>>
>> I agree. What else could they do?
>> And if the implementation documents that, where is the harm?
>
> To me no harm.

Exactly. The issue is if you think that there is something more than that; 
for whatever reason Niklas seems to think there are requirements intended 
beyond that, and that simply isn't true. It's clear that he'd never be happy 
with a Windows implementation of Ada.Execution_Time -- too bad, because it 
still can be useful.

                              Randy.








* Re: Ada.Execution_Time
  2010-12-29 21:21                                       ` Ada.Execution_Time Dmitry A. Kazakov
@ 2010-12-30 13:34                                         ` Niklas Holsti
  0 siblings, 0 replies; 124+ messages in thread
From: Niklas Holsti @ 2010-12-30 13:34 UTC (permalink / raw)


Dmitry A. Kazakov wrote:
> On Wed, 29 Dec 2010 22:32:30 +0200, Niklas Holsti wrote:
> 
>> Dmitry A. Kazakov wrote:
>>> Model C. Asynchronous task monitoring process
>> That sounds weird. Please clarify.
> 
> For example, in the kernel you have a timer interrupt every n us. Within
> the handler you get the current TCB and increment the CPU usage counter
> there by 1. The CPU_Time returned by Clock is then Counter * n us. This is
> quite a lightweight scheme, which can be used for small and real-time
> systems. The overhead is constant.

Yes, that is one possible implementation, and not a bad one, although 
the CPU_Tick will probably be relatively large, and may have some 
jitter. The definition of Ada.Execution_Time.CPU_Tick as the *average* 
constant-Clock duration would come into play.

In principle this implementation is not much different from the simple, 
hardware-driven, directly readable counter of nanoseconds or CPU clock 
cycles. The only difference is that here the "counter" is driven by a 
timer-generated interrupt, not by a hardware clock generator, and it is 
easier to make task-specific counters.

>> Again, it holds within the accuracy of the measurement method and the 
>> time source, which is all that one can expect.
> 
> The error is not bound. Only its deviation is bound, e.g. x seconds per
> second of measurement.

Yes, as for different unsynchronized clocks in general. I don't think 
this is a problem for the intended uses of Ada.Execution_Time, as I 
understand them.

-- 
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
       .      @       .




* Re: Ada.Execution_Time
  2010-12-29 21:20                                           ` Ada.Execution_Time Dmitry A. Kazakov
  2010-12-30  5:13                                             ` Ada.Execution_Time Randy Brukardt
@ 2010-12-30 13:37                                             ` Niklas Holsti
  1 sibling, 0 replies; 124+ messages in thread
From: Niklas Holsti @ 2010-12-30 13:37 UTC (permalink / raw)


Dmitry A. Kazakov wrote:
> On Wed, 29 Dec 2010 19:57:19 +0000, (see below) wrote:
> 
>> They are implementation dependent in the best of
>> circumstances, and so need to be specified by the implementer.
> 
> But Niklas seems to want more than merely documentation.

I'll answer that in a soon-to-appear answer to Randy.

-- 
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
       .      @       .




* Re: Ada.Execution_Time
  2010-12-29 14:30                                   ` Ada.Execution_Time Dmitry A. Kazakov
  2010-12-29 16:19                                     ` Ada.Execution_Time (see below)
  2010-12-29 20:32                                     ` Ada.Execution_Time Niklas Holsti
@ 2010-12-30 19:23                                     ` Niklas Holsti
  2 siblings, 0 replies; 124+ messages in thread
From: Niklas Holsti @ 2010-12-30 19:23 UTC (permalink / raw)


Dmitry A. Kazakov wrote:
> On Wed, 29 Dec 2010 14:48:20 +0200, Niklas Holsti wrote:
> 
>> Dmitry has agreed with some of my statements on this point, for example:
>>
>> - A task cannot accumulate execution time at a higher rate than real 
>> time. For example, in one real-time second the CPU_Time of a task cannot 
>> increase by more than one second.
> 
> Hold on, that is only true if a certain model of CPU_Time measurement is used.
> There are many potential models. The one we discussed was the model A:
> 
> Model A. Get an RTC reading upon activation. Each time CPU_Time is
> requested by Clock get another RTC reading, build the difference, add the
> accumulator to the result. Upon task deactivation, get the difference and
> update the accumulator.

[ cut ]

> Note that in either model the counter readings are rounded. Windows rounds
> toward zero, which is why you never get more load than 100%. But it is
> thinkable and expectable that some systems would round away from zero or to
> the nearest bound.

The RTEMS CPU Usage Statistics function in fact rounds up: "RTEMS keeps 
track of how many clock ticks have occurred which [should be "while"] 
the task being switched out has been executing. If the task has been 
running less than 1 clock tick, then for the purposes of the statistics, 
it is assumed to have executed 1 clock tick. This results in some 
inaccuracy but the alternative is for the task to have appeared to 
execute 0 clock ticks." (Quoted from 
http://www.rtems.org/onlinedocs/releases/rtemsdocs-4.9.4/share/rtems/pdf/c_user.pdf, 
page 285).

I think that rounding up may indeed be better (safer) for real-time 
scheduling and monitoring purposes. But of course this is just changing 
the direction of the inaccuracy, the principle stands.

-- 
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
       .      @       .



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-30  5:06                                   ` Ada.Execution_Time Randy Brukardt
@ 2010-12-30 23:49                                     ` Niklas Holsti
  2010-12-31 23:34                                       ` Ada.Execution_Time Randy Brukardt
  0 siblings, 1 reply; 124+ messages in thread
From: Niklas Holsti @ 2010-12-30 23:49 UTC (permalink / raw)


Before I answer Randy's points, below, I will try to summarise my 
position in this discussion. It seems that my convoluted dialog with 
Dmitry has not made it clear. I'm sorry that this makes for a longish post.

I am not proposing or asking for new requirements on Ada.Execution_Time 
in RM D.14. I accept that the accuracy and other qualities of the 
implementation are (mostly) not specified in the RM, so Randy is right 
that the implementors can (mostly) just provide an interface to whatever 
services exist, and Dmitry is also (mostly) right when he says that 
Ada.Execution_Time can deliver garbage results and still satisfy the RM 
requirements.

However, I think we all hope that, as Randy said, implementers are going 
to provide the best implementation that they can. Speaking of a "best" 
implementation implies that there is some ideal solution, perhaps not 
practically realizable, against which the quality of an implementation 
can be judged. Moreover, since the RM defines the language and is the 
basis on which implementors work, I would expect that the RM tries to 
describe this ideal solution, perhaps only through informal definitions, 
rationale, implementation advice, or annotations. This is what I have 
called the "intent of Ada.Execution_Time" and it includes such questions 
as the intended meaning of a CPU_Time value and what is meant by 
"execution time of a task".

My background for this discussion and for understanding 
Ada.Execution_Time is real-time programming and analysis, and in 
particular the various forms of schedulability analysis. In this domain 
the theory and practice depend crucially on the execution times of 
tasks, usually on the worst-case execution time (WCET) but sometimes on 
the whole range or distribution of execution times. Moreover, the theory 
and practice assume that "the execution time of a task" has a physical 
meaning and a strong relationship to real time.

For example, it is assumed (usually implicitly) that when a task is 
executing uninterrupted on a processor, the execution time of the task 
and the real time increase at the same rate -- this is more or less the 
definition of "execution time". Another (implicitly) assumed property is 
that if a processor first runs task A for X seconds of execution time, 
then switches to task B for Y seconds of execution time, the elapsed 
real time equals X + Y plus some "overhead" time for the task switch. 
(As a side comment, I admit that some of these assumptions are becoming 
dubious for complex processors where tasks can have strong interactions, 
for example through the cache.)
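This assumed property can be made concrete with a small sketch (mine, not from the thread; the procedure name and the busy-work loop are invented for illustration) that compares the growth of the two standard clocks across a stretch of uninterrupted computation:

```ada
with Ada.Text_IO;
with Ada.Real_Time;      use Ada.Real_Time;
with Ada.Execution_Time; use Ada.Execution_Time;

procedure Compare_Clocks is
   RT0  : constant Time     := Ada.Real_Time.Clock;
   CPU0 : constant CPU_Time := Ada.Execution_Time.Clock;
   X    : Long_Float := 1.0;
   pragma Volatile (X);  --  keep the busy work from being optimised away
begin
   for I in 1 .. 10_000_000 loop  --  busy work, no blocking or delays
      X := X + 1.0 / Long_Float (I);
   end loop;
   declare
      --  RM D.14 defines "-" (CPU_Time, CPU_Time) return Time_Span,
      --  so the elapsed spans of both clocks are directly comparable.
      RT_Spent  : constant Time_Span := Ada.Real_Time.Clock - RT0;
      CPU_Spent : constant Time_Span := Ada.Execution_Time.Clock - CPU0;
   begin
      --  If the task ran uninterrupted, the two spans should be close;
      --  CPU_Spent should never exceed RT_Spent beyond measurement error.
      Ada.Text_IO.Put_Line
        ("Real: " & Duration'Image (To_Duration (RT_Spent)) &
         "  CPU: " & Duration'Image (To_Duration (CPU_Spent)));
   end;
end Compare_Clocks;
```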

I have assumed, and still mostly believe, that this concept of 
"execution time of a task" is the ideal or intent behind 
Ada.Execution_Time, and that approaching this ideal is an implementer's 
goal.

My participation in this thread started with my objection to Dmitry's 
assertion that "CPU_Time has no physical meaning". I may have 
misunderstood Dmitry's thought, leading to a comedy of 
misunderstandings. Perhaps Dmitry only meant that in the absence of any 
accuracy requirements, CPU_Time may not have a useful physical meaning 
in a poor implementation of Ada.Execution_Time. I accept that, but I 
think that a good implementation should try to give CPU_Time the 
physical meaning that "execution time" has in the theory and practice of 
real-time systems, as closely as is practical and desired by the users 
of the implementation.

My comments in this thread therefore show the properties that I think a 
good implementation of Ada.Execution_Time should have, and are not 
proposed as new requirements. At most they could appear as additions to 
the rationale, advice, or annotations for RM D.14. I measure the 
"goodness" of an implementation as "usefulness for implementing 
real-time programs". Others, perhaps Randy or Dmitry, may have other 
goodness measures.

This thread started with a question about how Ada.Execution_Time is meant 
to be used. I think it is useful to discuss the properties and uses that 
can be expected of a good implementation, even if the RM also allows 
poor implementations.

Randy Brukardt wrote:
> "Niklas Holsti" <niklas.holsti@tidorum.invalid> wrote in message 
> news:8o0p0lF94rU1@mid.individual.net...
>> I'm sure that the proposers of the package Ada.Execution_Time expected the 
>> implementation to use the facilities of the underlying system. But I am 
>> also confident that they had in mind some specific uses of the package and 
>> that these uses require that the values provided by Ada.Execution_Time 
>> have certain properties that can reasonably be expected of "execution 
>> time", whether or not these properties are expressly written as 
>> requirements in the RM.
> 
> Probably, but quality of implementation is rarely specified in the Ada 
> Standard. When it is, it generally is in the form of Implementation Advice 
> (as opposed to hard requirements). The expectation is that implementers are 
> going to provide the best implementation that they can -- implementers don't 
> purposely build crippled or useless implementations. Moreover, that is 
> *more* likely when the Standard is overspecified, simply because of the need 
> to provide something that meets the standard.

I agree.

>> You put "time" in quotes, Randy. Don't you agree that there *is* a valid, 
>> physical concept of "the execution time of a task" that can be measured in 
>> units of physical time, seconds say? At least for processors that only 
>> execute one task at a time, and whether or not the system provides 
>> facilities for measuring this time?
> 
> I'm honestly not sure. The problem is that while such a concept might 
> logically exist, as a practical matter it cannot be measured outside of the 
> most controlled circumstances.

I don't think that measurement is so difficult or that the circumstances 
must be so very controlled.

Let's assume a single-processor system and a measurement mechanism like 
the "Model A" that Dmitry described. That is, we have a HW counter that 
is driven by a fixed frequency. For simplicity and accuracy, let's 
assume that the counter is driven by the CPU clock so that the counter 
changes are synchronized with instruction executions. I think such 
counters are not uncommon in current computers used in real-time 
systems, although they may be provided by the board and not the 
processor. We implement Ada.Execution_Time by making the task-switching 
routines and the Clock function in Ada.Execution_Time read the value of 
the counter to keep track of how much the counter increases while the 
processor is running a given task. The accumulated increase is stored in 
the TCB of the task when the task is not running.
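In outline, that accounting can be sketched as follows (all names here are hypothetical; a real runtime would do this inside its dispatcher, and Read_Counter stands for whatever free-running hardware counter the board provides):

```ada
package Model_A_Accounting is
   type Ticks is mod 2**64;            --  modular, so wraparound subtracts correctly
   function Read_Counter return Ticks; --  assumed free-running HW counter

   type Task_Account is record         --  the per-task state kept in the TCB
      Accumulated : Ticks := 0;        --  ticks consumed while previously running
      Started_At  : Ticks := 0;        --  counter value at the last dispatch
   end record;

   procedure Switch_In  (T : in out Task_Account);
   procedure Switch_Out (T : in out Task_Account);
   function  Consumed   (T : Task_Account) return Ticks;  --  basis for Clock
end Model_A_Accounting;

package body Model_A_Accounting is
   procedure Switch_In (T : in out Task_Account) is
   begin
      T.Started_At := Read_Counter;
   end Switch_In;

   procedure Switch_Out (T : in out Task_Account) is
   begin
      T.Accumulated := T.Accumulated + (Read_Counter - T.Started_At);
   end Switch_Out;

   function Consumed (T : Task_Account) return Ticks is
   begin
      --  For the running task, include the current, unfinished interval.
      return T.Accumulated + (Read_Counter - T.Started_At);
   end Consumed;

   function Read_Counter return Ticks is separate;  --  board-specific
end Model_A_Accounting;
```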

Randy, why do you think that this mechanism does not measure the 
execution time of the tasks, or a good approximation of the ideal? There 
are of course nice questions about exactly when in the task-switching 
code the counter is read and stored, and what to do with interrupts, but 
I think these are details that can be dismissed as in RM D.14 (11/2) by 
considering them implementation defined.

It is true that execution times measured in that way are not exactly 
repeatable on today's processors, even when the task follows exactly the 
same execution path in each measurement, because of external 
perturbations (such as memory access delays due to DRAM refresh) and 
inter-task interferences (such as cache evictions due to interrupts or 
preempting tasks). But this non-repeatability is present in the ideal 
concept, too, and I don't expect Ada.Execution_Time.Clock to give 
exactly repeatable results. It should show the execution time in the 
current execution of the task.

> Thus, that might make sense in a bare-board 
> Ada implementation, but not in any implementation running on top of any OS 
> or kernel.

The task concept is not so Ada-specific. For example, I would call RTEMS 
a "kernel", and it has a general (not Ada-specific) service for 
measuring per-task execution times, implemented much as Dmitry's Model A 
except that the "RTC counter" may be the RTEMS interrupt-driven "clock 
tick" counter, not a HW counter.

And what is an OS but a kernel with a large set of services, and usually 
running several unrelated applications / processes, not just several 
tasks in one application? Task-specific execution times for an OS-based 
application are probably less repeatable, and less useful, but that does 
not detract from the principle.

> As such, whether the concept exists is more of an "angels on the 
> head of a pin" question than anything of practical importance.

It is highly important for all practical uses of real-time scheduling 
theories and algorithms, since it is their basic assumption. Of course, 
some real-time programmers are of the opinion that real-time scheduling 
theories are of no practical importance. I am not one of them :-)

>> If so, then even if Ada.Execution_Time is intended as only a window into 
>> these facilities, it is still intended to provide measures of the physical 
>> execution time of tasks, to some practical level of accuracy.
> 
> The problem is that that "practical level of accuracy" isn't realistic. 

I disagree. I think execution-time measurements using Dmitry's Model A 
are practical and can be sufficiently accurate, especially if they use a 
hardware counter or RTC.

> Moreover, I've always viewed such facilities as "profiling" ones -- it's the 
> relative magnitudes of the values that matter, not the absolute values. In 
> that case, the scale of the values is not particularly relevant.

When profiling is used merely to identify the CPU hogs or bottlenecks, 
with the aim of speeding up a program, I agree that the scale is not 
relevant. Profiling is often used in this way for non-real-time systems. 
It can be done also for real-time systems, but would not help in the 
analysis of schedulability if the time-scale is arbitrary.

If profiling is used to measure task execution times for use in 
schedulability analysis or even crude CPU load computations, the scale 
must be real time, because the reference values (deadlines, load 
measurement intervals) are expressed in real time.

>> It has already been said, and not only by me, that Ada.Execution_Time is 
>> intended (among other things, perhaps) to be used for implementing task 
>> scheduling algorithms that depend on the accumulated execution time of the 
>> tasks. This is supported by the Burns and Wellings paper referenced above. 
>> In such algorithms I believe it is essential that the execution times are 
>> physical times because they are used in formulae that relate (sums of) 
>> execution-time spans to spans of real time.
> 
> That would be a wrong interpretation of the algorithms, I think. (Either that, 
> or the algorithms themselves are heavily flawed!). The important property is 
> that all of the execution times have a reasonably proportional relationship 
> to the actual time spent executing each task (that hypothetical concept); 
> the absolute values shouldn't matter much

I think you are wrong. The algorithms presented in the Burns and 
Wellings paper that I referred to implement "execution-time servers" 
which, as I understand them, are meant to limit the fraction of the CPU 
power that is given to certain groups of tasks and work as follows in 
outline. The CPU fraction is defined by an execution-time budget, say B 
seconds, that the tasks in the group jointly consume. When the budget is 
exhausted, the tasks in the group are either suspended or demoted to a 
background priority. The budget is periodically replenished (increased 
up to B seconds) every R seconds, with B < R for a one-processor system. 
The goal is thus to let this task group use at most B/R of the CPU time. 
In other words, that the CPU load fraction from this task group should 
be no more than B/R.
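Incidentally, Ada 2005 already standardises exactly this budget mechanism as Ada.Execution_Time.Group_Budgets (RM D.14.2). A rough sketch of the replenishment loop (the B and R values are assumed for illustration; adding member tasks and the exhaustion handler are only indicated in comments):

```ada
with Ada.Real_Time;                    use Ada.Real_Time;
with Ada.Execution_Time.Group_Budgets; use Ada.Execution_Time.Group_Budgets;

procedure Server_Sketch is
   B : constant Time_Span := Milliseconds (20);   --  budget per period (assumed)
   R : constant Time_Span := Milliseconds (100);  --  replenishment period (assumed)

   The_Group : Group_Budget;
   --  Member tasks would be added with Add_Task (The_Group, Some_Task_Id),
   --  and a Group_Budget_Handler installed with Set_Handler to suspend or
   --  demote the group's tasks when the budget is exhausted.

   Next : Time := Clock;
begin
   loop
      Replenish (The_Group, B);  --  the group may consume at most B per R,
      Next := Next + R;          --  so its CPU load fraction is at most B/R
      delay until Next;
   end loop;
end Server_Sketch;
```

Note that the algorithm only works as intended if B (an execution-time span) and R (a real-time span) are measured on commensurate scales, which is the point at issue.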

The B part is measured in execution time (CPU_Time differences as 
Time_Spans) and the R part in real time. If execution time is not scaled 
properly, the load fraction B/R is wrong in proportion, and the CPU time 
(1-B/R) left for other, perhaps more critical tasks could be too small, 
causing deadline misses and failures. It is essential that execution 
time (B) is commensurate with real time (R). The examples in the paper 
also show this assumption.

As usual for academic papers in real-time scheduling, Burns and Wellings 
make no explicit assumptions about the meaning of execution time. They 
do mention measurement inaccuracies, but a systematic difference in 
scale is not considered.

> (just as the exact priority values 
> are mostly irrelevant to scheduling decisions).

The case of priority values is entirely different. I don't know of any 
real-time scheduling methods or analyses in which the quantitative 
values of priorities are important; only their relative order is 
important. In contrast, all real-time scheduling methods and analyses 
that I know of depend on the quantitative, metric, values of task 
execution times. Perhaps some heuristic scheduling methods like 
"shortest task first" are exceptions to this rule, but I don't think 
they are suitable for real-time systems.

> Moreover, when the values 
> are close, one would hope that the algorithms don't change behavior much.

Yes, I don't think the on-line execution-time dependent scheduling 
algorithms need very accurate measures of execution time. But if the 
time-scale is wrong by a factor of 2, say, I'm sure the algorithms will 
not perform as expected.

Burns and Wellings say that one of the "servers" they describe may have 
problems with cumulative drift due to measurement errors -- similar to 
round-off or truncation errors -- and they propose methods to correct 
that. But they do not discuss scale errors, which should lead to a much 
larger drift.

> The values would be trouble if they bore no relationship at all to the 
> "actual time spent executing the task", but it's hard to imagine any 
> real-world facility in which that was the case.

I agree, but Dmitry claimed the opposite ("CPU_Time has no physical 
meaning") to which I objected. And indeed it seems that under some 
conditions the execution times measured with the Windows "quants" 
mechanism are zero, which would certainly inconvenience scheduling 
algorithms.

>> The question is how much meaning should be read into ordinary words like 
>> "time" when used in the RM without a formal definition.
>>
>> If the RM were to say that L is the length of a piece of string S, 
>> measured in meters, and that some parts of S are colored red, some blue, 
>> and some parts may not be colored at all, surely we could conclude that 
>> the sum of the lengths in meters of the red, blue, and uncolored parts 
>> equals L? And that the sum of the lengths of the red and blue parts is at 
>> most L? And that, since we like colorful things, we hope that the length 
>> of the uncolored part is small?
>>
>> I think the case of summing task execution time spans is analogous.
> 
> I think you are inventing things.

Possibly so. RM D.14 (11/2) tries to define "the execution time of a 
task" by using an English phrase "... is the time spent by the system 
...". I am trying to understand what this definition might mean, as a 
description of the ideal or intent in the minds of the authors, who 
certainly intended the definition to have *some* meaning for the reader.

I strongly suspect that the proposers of D.14 meant "execution time" as 
understood in the real-time scheduling theory domain, and that they felt 
it unnecessary to define it or its properties more formally, partly out 
of habit, as those properties are implicitly assumed in academic papers, 
partly because any definition would have had to include messy text about 
measurement errors, and they did not want to specify accuracy requirements.

> There is no such requirement in the standard, and that's good:

I agree and I don't want to add such a requirement.

> I've never seen a real system in which this has been true.

I don't think it will be exactly true in a real system. I continue to 
think that it will be approximately true in a good implementation of 
Ada.Execution_Time (modulo the provision on interrupt handlers etc), and 
that this is necessary if Ada.Execution_Time is to be used as Burns and 
Wellings propose.

> Even the various profilers I wrote for MS-DOS (the closest system to a bare 
> machine that will ever be in wide use) never had this property.

I can't comment on that now, but I would be interested to hear what you 
tried, and what happened.

>> Moreover, on multi-process systems (an Ada program running under Windows 
>> or Linux, for example) some of the CPU time is spent on other processes, 
>> all of which would be "overhead" from the point of view of the Ada 
>> program. I don't think that the authors of D.14 had such systems in mind.
> 
> I disagree, in the sense that the ARG as a whole certainly considered the 
> use of this facility in all environments.

Aha. Well, as I said above on the question of Ada vs kernel vs OS, the 
package should be implementable under an OS like Windows or Linux, too. 
I don't think D.14 makes any attempt to exclude that, and why should it?

> (I find that it would be very 
> valuable for profiling on Windows, for instance, even if the results only 
> have a weak relationship to reality).

Yes, if all you want are the relative execution-time consumptions of the 
tasks in order to find the CPU hogs. And the quant-truncation problem 
may distort even the relative execution times considerably.

Randy wrote in another post:
 > It's clear that he [Niklas] would never be happy
 > with a Windows implementation of Ada.Execution_Time -- too bad,
 > because it still can be useful.

I probably would not be happy to use Windows at all for a real-time 
system, so my happiness with Ada.Execution_Time is perhaps moot.

But it is all a question of what accuracy is required and can be 
provided. If I understand the description of Windows quants correctly, 
an implementation of Ada.Execution_Time under Windows may have tolerable 
accuracy if the average span of non-interrupted, non-preempted, and 
non-suspended execution of a task is much larger than a quant, as the 
truncation of partially used quants then causes a relatively small error.
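To put a hypothetical number on that (the quantum length and run length below are assumed for illustration, not taken from Windows documentation): truncating the partially used last quantum loses at most one quantum per dispatch, so the worst-case relative error is about one quantum divided by the mean uninterrupted run.

```ada
procedure Quant_Error_Sketch is
   Quantum  : constant Duration := 0.015;  --  assumed quantum length, 15 ms
   Mean_Run : constant Duration := 0.150;  --  assumed mean uninterrupted run, 150 ms
   --  At most one whole quantum is truncated per dispatch, so the
   --  worst-case relative error is roughly Quantum / Mean_Run,
   --  here about 10%; it shrinks as runs grow relative to the quantum.
   Worst_Error : constant Float := Float (Quantum) / Float (Mean_Run);
begin
   null;
end Quant_Error_Sketch;
```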

I don't know if the scheduling methods of Windows would let one make 
good use of the measured execution times. I have no experience of Windows 
programming on that level, unfortunately.

-- 
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
       .      @       .



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-28 15:08                                           ` Ada.Execution_Time Dmitry A. Kazakov
  2010-12-28 16:18                                             ` Ada.Execution_Time Simon Wright
@ 2010-12-31  0:40                                             ` BrianG
  2010-12-31  9:09                                               ` Ada.Execution_Time Dmitry A. Kazakov
  1 sibling, 1 reply; 124+ messages in thread
From: BrianG @ 2010-12-31  0:40 UTC (permalink / raw)


Dmitry A. Kazakov wrote:
> On Tue, 28 Dec 2010 14:14:57 +0000, Simon Wright wrote:
> 
>> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:
>>
>>> And conversely, the catastrophic accuracy of the VxWorks real-time
>>> clock service does not hinder its usability for real-time application.
>> Catastrophic?
>>
...
> 
> Yes, this thing. In our case it was Pentium VxWorks 6.x. (The PPC we used
> prior to it had poor performance) The problem was that Ada.Real_Time.Clock
> had the accuracy of the clock interrupts, i.e. 1ms, which is by all
> accounts catastrophic for a 1.7GHz processor. You can switch some tasks
back and forth between two clock changes.
>  

So, we're talking about GNAT's use of VxWorks' features as
"catastrophic"?  That's not how I read the original statement.

(We use GNAT on VxWorks, but since we don't (yet) really use tasks, and 
have our own "delay" equivalent to match external hardware, I don't see 
its performance - but since I'm used to GNAT on DOS/Windows, I don't 
have very high expectations.  It sounds like I should keep that opinion.)

--BrianG



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-31  0:40                                             ` Ada.Execution_Time BrianG
@ 2010-12-31  9:09                                               ` Dmitry A. Kazakov
  0 siblings, 0 replies; 124+ messages in thread
From: Dmitry A. Kazakov @ 2010-12-31  9:09 UTC (permalink / raw)


On Thu, 30 Dec 2010 19:40:08 -0500, BrianG wrote:

> Dmitry A. Kazakov wrote:
>> On Tue, 28 Dec 2010 14:14:57 +0000, Simon Wright wrote:
>> 
>>> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:
>>>
>>>> And conversely, the catastrophic accuracy of the VxWorks real-time
>>>> clock service does not hinder its usability for real-time application.
>>> Catastrophic?
>> 
>> Yes, this thing. In our case it was Pentium VxWorks 6.x. (The PPC we used
>> prior to it had poor performance) The problem was that Ada.Real_Time.Clock
>> had the accuracy of the clock interrupts, i.e. 1ms, which is by all
>> accounts catastrophic for a 1.7GHz processor. You can switch some tasks
>> forth and back between two clock changes.  
> 
> So, we're talking about GNAT's use of VxWorks' features as
> "catastrophic"?  That's not how I read the original statement.

No, GNAT just uses the standard OS clock. It is the OS design flaw. They
should have used the CPU's real time clock, or provide a configurable
clock, so that you could choose its source. Why should AdaCore clean up
Wind River's mess?

Happy New Year,

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-30 23:49                                     ` Ada.Execution_Time Niklas Holsti
@ 2010-12-31 23:34                                       ` Randy Brukardt
  2011-01-01 13:52                                         ` Ada.Execution_Time Niklas Holsti
  2011-01-01 15:54                                         ` Ada.Execution_Time Simon Wright
  0 siblings, 2 replies; 124+ messages in thread
From: Randy Brukardt @ 2010-12-31 23:34 UTC (permalink / raw)


I think we're actually in agreement on most points. The main difference is 
that I contend that the theoretical "execution time" is, in actual practice, 
unmeasurable. ("Unmeasurable" is a bit strong, but I mean that you can only 
get a gross estimation of it.)

You have argued that there exist cases where it is possible to do better (a 
hardware clock, a single processor, a kernel that doesn't get in the way of 
using the hardware clock, no contention with other devices on the bus, 
etc.), and I wouldn't disagree. The problem is that those cases aren't 
realistic, particularly if you are talking about a language-defined package. 
(Even if there is a hardware clock available on a particular target, an 
off-the-shelf compiler isn't going to be able to use it. It can only use the 
facilities of the kernel or OS.)

This shows up when you talk about the inability to reproduce the results. In 
practice, I think you'll find that the values vary a lot from run to run; 
the net effect is that the error is a large percentage of the results for 
many values.

Your discussion of real-time scheduling using this still seems to me to be 
more in the realm of academic exercise than something practical. I'm sure 
that it works in very limited circumstances, but those circumstances are 
getting less and less likely by the year. Techniques that assume a strong 
correspondence between these values and real time are simply fantasy -- I 
would hope no one depends on these for anything of importance. (With the 
possible exception of an all-Ada kernel, but even then you would be 
*writing* Ada.Execution_Time, not using it.) A weak correspondence (for 
choosing between otherwise similar tasks, for one instance) might make 
sense, but that is about it.

                       Randy.


"Niklas Holsti" <niklas.holsti@tidorum.invalid> wrote in message 
news:8o4k3tFko2U1@mid.individual.net...
> Before I answer Randy's points, below, I will try to summarise my position 
> in this discussion. It seems that my convoluted dialog with Dmitry has not 
> made it clear. I'm sorry that this makes for a longish post.
>
> I am not proposing or asking for new requirements on Ada.Execution_Time in 
> RM D.14. I accept that the accuracy and other qualities of the 
> implementation are (mostly) not specified in the RM, so Randy is right 
> that the implementors can (mostly) just provide an interface to whatever 
> services exist, and Dmitry is also (mostly) right when he says that 
> Ada.Execution_Time can deliver garbage results and still satisfy the RM 
> requirements.
>
> However, I think we all hope that, as Randy said, implementers are going 
> to provide the best implementation that they can. Speaking of a "best" 
> implementation implies that there is some ideal solution, perhaps not 
> practically realizable, against which the quality of an implementation can 
> be judged. Moreover, since the RM defines the language and is the basis on 
> which implementors work, I would expect that the RM tries to describe this 
> ideal solution, perhaps only through informal definitions, rationale, 
> implementation advice, or annotations. This is what I have called the 
> "intent of Ada.Execution_Time" and it includes such questions as the 
> intended meaning of a CPU_Time value and what is meant by "execution time 
> of a task".
>
> My background for this discussion and for understanding Ada.Execution_Time 
> is real-time programming and analysis, and in particular the various forms 
> of schedulability analysis. In this domain the theory and practice depend 
> crucially on the execution times of tasks, usually on the worst-case 
> execution time (WCET) but sometimes on the whole range or distribution of 
> execution times. Moreover, the theory and practice assume that "the 
> execution time of a task" has a physical meaning and a strong relationship 
> to real time.
>
> For example, it is assumed (usually implicitly) that when a task is 
> executing uninterrupted on a processor, the execution time of the task and 
> the real time increase at the same rate -- this is more or less the 
> definition of "execution time". Another (implicitly) assumed property is 
> that if a processor first runs task A for X seconds of execution time, 
> then switches to task B for Y seconds of execution time, the elapsed real 
> time equals X + Y plus some "overhead" time for the task switch. (As a 
> side comment, I admit that some of these assumptions are becoming dubious 
> for complex processors where tasks can have strong interactions, for 
> example through the cache.)
>
> I have assumed, and still mostly believe, that this concept of "execution 
> time of a task" is the ideal or intent behind Ada.Execution_Time, and that 
> approaching this ideal is an implementer's goal.
>
> My participation in this thread started with my objection to Dmitry's 
> assertion that "CPU_Time has no physical meaning". I may have 
> misunderstood Dmitry's thought, leading to a comedy of misunderstandings. 
> Perhaps Dmitry only meant that in the absence of any accuracy 
> requirements, CPU_Time may not have a useful physical meaning in a poor 
> implementation of Ada.Execution_Time. I accept that, but I think that a 
> good implementation should try to give CPU_Time the physical meaning that 
> "execution time" has in the theory and practice of real-time systems, as 
> closely as is practical and desired by the users of the implementation.
>
> My comments in this thread therefore show the properties that I think a 
> good implementation of Ada.Execution_Time should have, and are not 
> proposed as new requirements. At most they could appear as additions to 
> the rationale, advice, or annotations for RM D.14. I measure the 
> "goodness" of an implementation as "usefulness for implementing real-time 
> programs". Others, perhaps Randy or Dmitry, may have other goodness 
> measures.
>
> This thread started by a question about how Ada.Execution_Time is meant to 
> be used. I think it is useful to discuss the properties and uses that can 
> be expected of a good implementation, even if the RM also allows poor 
> implementations.
>
> Randy Brukardt wrote:
>> "Niklas Holsti" <niklas.holsti@tidorum.invalid> wrote in message 
>> news:8o0p0lF94rU1@mid.individual.net...
>>> I'm sure that the proposers of the package Ada.Execution_Time expected 
>>> the implementation to use the facilities of the underlying system. But I 
>>> am also confident that they had in mind some specific uses of the 
>>> package and that these uses require that the values provided by 
>>> Ada.Execution_Time have certain properties that can reasonably be 
>>> expected of "execution time", whether or not these properties are 
>>> expressly written as requirements in the RM.
>>
>> Probably, but quality of implementation is rarely specified in the Ada 
>> Standard. When it is, it generally is in the form of Implementation 
>> Advice (as opposed to hard requirements). The expectation is that 
>> implementers are going to provide the best implementation that they 
>> can -- implementers don't purposely build crippled or useless 
>> implementations. Moreover, that is *more* likely when the Standard is 
>> overspecified, simply because of the need to provide something that meets 
>> the standard.
>
> I agree.
>
>>> You put "time" in quotes, Randy. Don't you agree that there *is* a 
>>> valid, physical concept of "the execution time of a task" that can be 
>>> measured in units of physical time, seconds say? At least for processors 
>>> that only execute one task at a time, and whether or not the system 
>>> provides facilities for measuring this time?
>>
>> I'm honestly not sure. The problem is that while such a concept might 
>> logically exist, as a practical matter it cannot be measured outside of 
>> the most controlled circumstances.
>
> I don't think that measurement is so difficult or that the circumstances 
> must be so very controlled.
>
> Let's assume a single-processor system and a measurement mechanism like 
> the "Model A" that Dmitry described. That is, we have a HW counter that is 
> driven by a fixed frequency. For simplicity and accuracy, let's assume 
> that the counter is driven by the CPU clock so that the counter changes 
> are synchronized with instruction executions. I think such counters are 
> not uncommon in current computers used in real-time systems, although they 
> may be provided by the board and not the processor. We implement 
> Ada.Execution_Time by making the task-switching routines and the Clock 
> function in Ada.Execution_Time read the value of the counter to keep track 
> of how much the counter increases while the processor is running a given 
> task. The accumulated increase is stored in the TCB of the task when the 
> task is not running.
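>
> In outline, with invented names (Read_Cycle_Counter and the TCB fields 
> are illustrative assumptions, not any particular kernel's API), the 
> accounting could look like this:
>
> ```ada
> --  Sketch of per-task execution-time accounting at context switches.
> --  Read_Cycle_Counter and the TCB layout are assumptions for
> --  illustration, not any real kernel's interface.
> type Cycle_Count is mod 2 ** 64;
>
> function Read_Cycle_Counter return Cycle_Count;
> --  Provided by the BSP; driven by the CPU clock.
>
> type TCB is record
>    Accumulated : Cycle_Count := 0;  --  cycles used so far
>    Resumed_At  : Cycle_Count := 0;  --  counter value when dispatched
> end record;
>
> procedure Dispatch (T : in out TCB) is
> begin
>    T.Resumed_At := Read_Cycle_Counter;
> end Dispatch;
>
> procedure Preempt (T : in out TCB) is
> begin
>    --  Modular "-" handles counter wrap-around.
>    T.Accumulated := T.Accumulated + (Read_Cycle_Counter - T.Resumed_At);
> end Preempt;
> ```
>
> Ada.Execution_Time.Clock for the running task is then Accumulated plus 
> the counter increase since Resumed_At, scaled by the CPU clock frequency.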
>
> Randy, why do you think that this mechanism does not measure the execution 
> time of the tasks, or a good approximation of the ideal? There are of 
> course nice questions about exactly when in the task-switching code the 
> counter is read and stored, and what to do with interrupts, but I think 
> these are details that can be dismissed as in RM D.14 (11/2) by 
> considering them implementation defined.
>
> It is true that execution times measured in that way are not exactly 
> repeatable on today's processors, even when the task follows exactly the 
> same execution path in each measurement, because of external perturbations 
> (such as memory access delays due to DRAM refresh) and inter-task 
> interferences (such as cache evictions due to interrupts or preempting 
> tasks). But this non-repeatability is present in the ideal concept, too, 
> and I don't expect Ada.Execution_Time.Clock to give exactly repeatable 
> results. It should show the execution time in the current execution of the 
> task.
>
>> Thus, that might make sense in a bare-board Ada implementation, but not 
>> in any implementation running on top of any OS or kernel.
>
> The task concept is not so Ada-specific. For example, I would call RTEMS a 
> "kernel", and it has a general (not Ada-specific) service for measuring 
> per-task execution times, implemented much as Dmitry's Model A except that 
> the "RTC counter" may be the RTEMS interrupt-driven "clock tick" counter, 
> not a HW counter.
>
> And what is an OS but a kernel with a large set of services, and usually 
> running several unrelated applications / processes, not just several tasks 
> in one application? Task-specific execution times for an OS-based 
> application are probably less repeatable, and less useful, but that does 
> not detract from the principle.
>
>> As such, whether the concept exists is more of an "angels on the head of 
>> a pin" question than anything of practical importance.
>
> It is highly important for all practical uses of real-time scheduling 
> theories and algorithms, since it is their basic assumption. Of course, 
> some real-time programmers are of the opinion that real-time scheduling 
> theories are of no practical importance. I am not one of them :-)
>
>>> If so, then even if Ada.Execution_Time is intended as only a window into 
>>> these facilities, it is still intended to provide measures of the 
>>> physical execution time of tasks, to some practical level of accuracy.
>>
>> The problem is that that "practical level of accuracy" isn't realistic.
>
> I disagree. I think execution-time measurements using Dmitry's Model A are 
> practical and can be sufficiently accurate, especially if they use a 
> hardware counter or RTC.
>
>> Moreover, I've always viewed such facilities as "profiling" ones -- it's 
>> the relative magnitudes of the values that matter, not the absolute 
>> values. In that case, the scale of the values is not particularly 
>> relevant.
>
> When profiling is used merely to identify the CPU hogs or bottlenecks, 
> with the aim of speeding up a program, I agree that the scale is not 
> relevant. Profiling is often used in this way for non-real-time systems. 
> It can be done also for real-time systems, but would not help in the 
> analysis of schedulability if the time-scale is arbitrary.
>
> If profiling is used to measure task execution times for use in 
> schedulability analysis or even crude CPU load computations, the scale 
> must be real time, because the reference values (deadlines, load 
> measurement intervals) are expressed in real time.
>
>>> It has already been said, and not only by me, that Ada.Execution_Time is 
>>> intended (among other things, perhaps) to be used for implementing task 
>>> scheduling algorithms that depend on the accumulated execution time of 
>>> the tasks. This is supported by the Burns and Wellings paper referenced 
>>> above. In such algorithms I believe it is essential that the execution 
>>> times are physical times because they are used in formulae that relate 
>>> (sums of) execution-time spans to spans of real time.
>>
>> That would be a wrong interpretation of the algorithms, I think. (Either 
>> that, or the algorithms themselves are heavily flawed!). The important 
>> property is that all of the execution times have a reasonably 
>> proportional relationship to the actual time spent executing each task 
>> (that hypothetical concept); the absolute values shouldn't matter much.
>
> I think you are wrong. The algorithms presented in the Burns and Wellings 
> paper that I referred to implement "execution-time servers" which, as I 
> understand them, are meant to limit the fraction of the CPU power that is 
> given to certain groups of tasks and work as follows in outline. The CPU 
> fraction is defined by an execution-time budget, say B seconds, that the 
> tasks in the group jointly consume. When the budget is exhausted, the 
> tasks in the group are either suspended or demoted to a background 
> priority. The budget is periodically replenished (increased up to B 
> seconds) every R seconds, with B < R for a one-processor system. The goal 
> is thus to let this task group use at most B/R of the CPU time. In other 
> words, that the CPU load fraction from this task group should be no more 
> than B/R.
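>
> Ada 2005 in fact provides direct support for this pattern in 
> Ada.Execution_Time.Group_Budgets (RM D.14.2). A sketch, with the 
> signatures quoted from memory (check them against D.14.2 before use):
>
> ```ada
> --  Sketch of a B/R execution-time server on top of RM D.14.2.
> --  Signatures are from memory; verify against the RM before use.
> with Ada.Real_Time;                    use Ada.Real_Time;
> with Ada.Execution_Time.Group_Budgets; use Ada.Execution_Time.Group_Budgets;
>
> procedure Server_Sketch is
>    B  : constant Time_Span := Milliseconds (20);   --  budget
>    R  : constant Time_Span := Milliseconds (100);  --  replenishment period
>    GB : Group_Budget;
>    --  Add_Task (GB, T) is called for each task T in the group, and a
>    --  Group_Budget_Handler attached with Set_Handler demotes or
>    --  suspends the group when the budget is exhausted.
>
>    task Replenisher;
>    task body Replenisher is
>       Next : Time := Clock;
>    begin
>       loop
>          Replenish (GB, B);  --  top the budget back up to B
>          Next := Next + R;
>          delay until Next;   --  real-time period R
>       end loop;
>    end Replenisher;
> begin
>    null;
> end Server_Sketch;
> ```
>
> The group then receives at most about B of CPU time in every window of 
> R of real time, that is, a load fraction of B/R = 20 %. This works as 
> intended only if the execution-time scale (B) and the real-time scale 
> (R) are commensurate, which is exactly the point at issue.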
>
> The B part is measured in execution time (CPU_Time differences as 
> Time_Spans) and the R part in real time. If execution time is not scaled 
> properly, the load fraction B/R is wrong in proportion, and the CPU time 
> (1-B/R) left for other, perhaps more critical tasks could be too small, 
> causing deadline misses and failures. It is essential that execution time 
> (B) is commensurate with real time (R). The examples in the paper also 
> show this assumption.
>
> As usual for academic papers in real-time scheduling, Burns and Wellings 
> make no explicit assumptions about the meaning of execution time. They do 
> mention measurement inaccuracies, but a systematic difference in scale is 
> not considered.
>
>> (just as the exact priority values are mostly irrelevant to scheduling 
>> decisions).
>
> The case of priority values is entirely different. I don't know of any 
> real-time scheduling methods or analyses in which the quantitative values 
> of priorities are important; only their relative order is important. In 
> contrast, all real-time scheduling methods and analyses that I know of 
> depend on the quantitative, metric, values of task execution times. 
> Perhaps some heuristic scheduling methods like "shortest task first" are 
> exceptions to this rule, but I don't think they are suitable for real-time 
> systems.
>
>> Moreover, when the values are close, one would hope that the algorithms 
>> don't change behavior much.
>
> Yes, I don't think the on-line execution-time dependent scheduling 
> algorithms need very accurate measures of execution time. But if the 
> time-scale is wrong by a factor of 2, say, I'm sure the algorithms will 
> not perform as expected.
>
> Burns and Wellings say that one of the "servers" they describe may have 
> problems with cumulative drift due to measurement errors -- similar to 
> round-off or truncation errors -- and they propose methods to correct 
> that. But they do not discuss scale errors, which should lead to a much 
> larger drift.
>
>> The values would be trouble if they bore no relationship at all to the 
>> "actual time spent executing the task", but it's hard to imagine any 
>> real-world facility in which that was the case.
>
> I agree, but Dmitry claimed the opposite ("CPU_Time has no physical 
> meaning") to which I objected. And indeed it seems that under some 
> conditions the execution times measured with the Windows "quants" 
> mechanism are zero, which would certainly inconvenience scheduling 
> algorithms.
>
>>> The question is how much meaning should be read into ordinary words like 
>>> "time" when used in the RM without a formal definition.
>>>
>>> If the RM were to say that L is the length of a piece of string S, 
>>> measured in meters, and that some parts of S are colored red, some blue, 
>>> and some parts may not be colored at all, surely we could conclude that 
>>> the sum of the lengths in meters of the red, blue, and uncolored parts 
>>> equals L? And that the sum of the lengths of the red and blue parts is 
>>> at most L? And that, since we like colorful things, we hope that the 
>>> length of the uncolored part is small?
>>>
>>> I think the case of summing task execution time spans is analogous.
>>
>> I think you are inventing things.
>
> Possibly so. RM D.14 (11/2) tries to define "the execution time of a task" 
> by using an English phrase "... is the time spent by the system ...". I am 
> trying to understand what this definition might mean, as a description of 
> the ideal or intent in the minds of the authors, who certainly intended 
> the definition to have *some* meaning for the reader.
>
> I strongly suspect that the proposers of D.14 meant "execution time" as 
> understood in the real-time scheduling theory domain, and that they felt 
> it unnecessary to define it or its properties more formally, partly out of 
> habit, as those properties are implicitly assumed in academic papers, 
> partly because any definition would have had to include messy text about 
> measurement errors, and they did not want to specify accuracy 
> requirements.
>
>> There is no such requirement in the standard, and that's good:
>
> I agree and I don't want to add such a requirement.
>
>> I've never seen a real system in which this has been true.
>
> I don't think it will be exactly true in a real system. I continue to 
> think that it will be approximately true in a good implementation of 
> Ada.Execution_Time (modulo the provision on interrupt handlers etc), and 
> that this is necessary if Ada.Execution_Time is to be used as Burns and 
> Wellings propose.
>
>> Even the various profilers I wrote for MS-DOS (the closest system to a 
>> bare machine that will ever be in wide use) never had this property.
>
> I can't comment on that now, but I would be interested to hear what you 
> tried, and what happened.
>
>>> Moreover, on a multi-process systems (an Ada program running under 
>>> Windows or Linux, for example) some of the CPU time is spent on other 
>>> processes, all of which would be "overhead" from the point of view of 
>>> the Ada program. I don't think that the authors of D.14 had such systems 
>>> in mind.
>>
>> I disagree, in the sense that the ARG as a whole certainly considered the 
>> use of this facility in all environments.
>
> Aha. Well, as I said above on the question of Ada vs kernel vs OS, the 
> package should be implementable under and OS like Windows or Linux, too. I 
> don't think D.14 makes any attempt to exclude that, and why should it.
>
>> (I find that it would be very valuable for profiling on Windows, for 
>> instance, even if the results only have a weak relationship to reality).
>
> Yes, if all you want are the relative execution-time consumptions of the 
> tasks in order to find the CPU hogs. And the quant-truncation problem may 
> distort even the relative execution times considerably.
>
> Randy wrote in another post:
> > It's clear that he [Niklas] would never be happy
> > with a Windows implementation of Ada.Execution_Time -- too bad,
> > because it still can be useful.
>
> I probably would not be happy to use Windows at all for a real-time 
> system, so my happiness with Ada.Execution_Time is perhaps moot.
>
> But it is all a question of what accuracy is required and can be provided. 
> If I understand the description of Windows quants correctly, an 
> implementation of Ada.Execution_Time under Windows may have tolerable 
> accuracy if the average span of non-interrupted, non-preempted, and 
> non-suspended execution of a task is much larger than a quant, as the 
> truncation of partially used quants then causes a relatively small error 
> (with a quant of, say, 15 ms and average run spans of 300 ms, the 
> under-measurement is at most about 5 %).
>
> I don't know if the scheduling methods of Windows would let one make good 
> use of the measured execution times. I have no experience of Windows 
> programming on that level, unfortunately.
>
> -- 
> Niklas Holsti
> Tidorum Ltd
> niklas holsti tidorum fi
>       .      @       . 





^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2010-12-31 23:34                                       ` Ada.Execution_Time Randy Brukardt
@ 2011-01-01 13:52                                         ` Niklas Holsti
  2011-01-01 14:42                                           ` Ada.Execution_Time Simon Wright
  2011-01-03 21:27                                           ` Ada.Execution_Time Randy Brukardt
  2011-01-01 15:54                                         ` Ada.Execution_Time Simon Wright
  1 sibling, 2 replies; 124+ messages in thread
From: Niklas Holsti @ 2011-01-01 13:52 UTC (permalink / raw)


Randy Brukardt wrote:
> I think we're actually in agreement on most points.

Good, and I agree. Nevertheless I again make a lot of comments, below, 
because I think your view, if true, means that Ada.Execution_Time is 
useless for real-time systems.

> The main difference is 
> that I contend that the theoretical "execution time" is, in actual practice, 
> unmeasurable. ("Unmeasurable" is a bit strong, but I mean that you can only 
> get a gross estimation of it.)

You are right, we disagree on this.

> You have argued that there exist cases where it is possible to do better (a 
> hardware clock, a single processor, a kernel that doesn't get in the way of 
> using the hardware clock,

Yes.

> no contention with other devices on the bus, etc.),

I may have been fuzzy on that. Such real but variable or unpredictable 
delays or speed-ups are, in my view, just normal parts of the execution 
time of the affected task, and do not harm the ideal concept of "the 
execution time of a task" nor the usefulness of Ada.Execution_Time and 
its child packages. They only mean that the measured execution time of 
one particular execution of a task is less predictive of the execution 
times of future executions of that task. More on this below.

> and I wouldn't disagree. The problem is that those cases aren't 
> realistic, particularly if you are talking about a language-defined package. 

I am not proposing new general requirements on timing accuracy for the RM.

> (Even if there is a hardware clock available on a particular target, an 
> off-the-shelf compiler isn't going to be able to use it. It can only use the 
> facilities of the kernel or OS.)

For critical, embedded, hard-real-time systems I think it is not 
uncommon to use dedicated real-time computers with kernels such as 
VxWorks or RTEMS or bare-board Ada run-time systems. I have worked on 
such computers in the past, and I see such computers being advertised 
and used today.

In such systems, the kernel or Ada RTS is usually customised by a "board 
support package" (BSP) that, among other things, handles the interfaces 
to clocks and timers on the target computer (the "board"). Such systems 
can provide high-accuracy timers and mechanisms for execution-time 
monitoring without having to change the compiler; it should be enough to 
change the implementation of Ada.Execution_Time. In effect, 
Ada.Execution_Time would be a part of the BSP, or depend on types and 
services defined in the BSP.

The question is then if the compiler/system vendors will take the 
trouble and cost to customise Ada.Execution_Time for a particular 
board/computer, or if they will just use the general, lowest-denominator 
but portable service provided by the kernel. Dmitry indicates that GNAT 
on VxWorks takes the latter, easy way out. That's a matter of cost vs 
demand; if the users want it, the vendors can provide it.

For example, the "CPU Usage Statistics" section of the "RTEMS C User's 
Guide" says: "RTEMS versions newer than the 4.7 release series, support 
the ability to obtain timestamps with nanosecond granularity if the BSP 
provides support. It is a desirable enhancement to change the way the 
usage data is gathered to take advantage of this recently added 
capability. Please consider sponsoring the core RTEMS development team 
to add this capability." Thus, in March 2010 the RTEMS developers wanted 
and could implement accurate execution-time measurement, but no customer 
had yet paid for its implementation.

> This shows up when you talk about the inability to reproduce the results. In 
> practice, I think you'll find that the values vary a lot from run to run; 
> the net effect is that the error is a large percentage of the results for 
> many values.

Again, I may not have been clear on my view of the reproducibility 
and variability of execution-time measurements. The variability in the 
measured execution time of a task has three components or sources:

1. Variable execution paths due to data-dependent conditional 
control-flow (if-then-else, case, loop). In other words, on different 
executions of the task, different sequences of instructions are 
executed, leading to different execution times.

2. Variable execution times of individual instructions, for example due to 
variable cache contents and variable bus contention.

3. Variable measurement errors, for example truncations or roundings in 
the count of CPU_Time clock cycles, variable amount of interrupt 
handling included in the task execution time, etc.

Components 1 and 2 are big problems for worst-case analysis but in my 
view are not problems for Ada.Execution_Time. In fact, I think that one 
of the intended uses of Ada.Execution_Time is to help systems make good 
use of this execution-time variability by letting the system do other 
useful things when some task happens to execute quicker than its 
worst-case execution time and leaves some slack in the schedule.

Only component 3 is an "error" in the measured value. This component, 
and also constant measurement errors if any, are problems for users of 
Ada.Execution_Time.

> Your discussion of real-time scheduling using this still seems to me to be 
> more in the realm of academic exercise than something practical. I'm sure 
> that it works in very limited circumstances,

I think these circumstances have some correlation with the domain for 
which Ada is or was intended: reliable, embedded, possibly real-time 
systems. Well, this is a limited niche.

> but those circumstances are 
> getting less and less likely by the year. Techniques that assume a strong 
> correspondence between these values and real-time are simply fantasy -- I 
> would hope no one depend on these for anything of importance.

If this is true, Ada.Execution_Time is useless for real-time systems. 
Since it was added to the RM for 2005, and is still in the draft for RM 
2012, I suppose the ARG majority does not share your view.

Thanks for a frank discussion, and best wishes for 2011!

-- 
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
       .      @       .




* Re: Ada.Execution_Time
  2011-01-01 13:52                                         ` Ada.Execution_Time Niklas Holsti
@ 2011-01-01 14:42                                           ` Simon Wright
  2011-01-01 16:01                                             ` Ada.Execution_Time Simon Wright
  2011-01-03 21:27                                           ` Ada.Execution_Time Randy Brukardt
  1 sibling, 1 reply; 124+ messages in thread
From: Simon Wright @ 2011-01-01 14:42 UTC (permalink / raw)


Niklas Holsti <niklas.holsti@tidorum.invalid> writes:

> The question is then if the compiler/system vendors will take the
> trouble and cost to customise Ada.Execution_Time for a particular
> board/computer, or if they will just use the general,
> lowest-denominator but portable service provided by the kernel. Dmitry
> indicates that GNAT on VxWorks takes the latter, easy way out.

The latest supported GNAT on VxWorks (5.5) actually doesn't implement
Ada.Execution_Time at all (well, the source of the package spec is
there, adorned with "pragma Unimplemented_Unit", just like the FSF 4.5.0
sources and the GNAT GPL sources -- unless you're running on Cygwin or
MaRTE).




* Re: Ada.Execution_Time
  2010-12-31 23:34                                       ` Ada.Execution_Time Randy Brukardt
  2011-01-01 13:52                                         ` Ada.Execution_Time Niklas Holsti
@ 2011-01-01 15:54                                         ` Simon Wright
  2011-01-03 21:33                                           ` Ada.Execution_Time Randy Brukardt
  1 sibling, 1 reply; 124+ messages in thread
From: Simon Wright @ 2011-01-01 15:54 UTC (permalink / raw)


"Randy Brukardt" <randy@rrsoftware.com> writes:

> Your discussion of real-time scheduling using this still seems to me
> to be more in the realm of academic exercise than something
> practical. I'm sure that it works in very limited circumstances, but
> those circumstances are getting less and less likely by the year.

Sounds as if you don't agree with the !problem section of AI-00307?

The first paragraph says that measurement/estimation is important, and
that measurement is difficult [actually, I don't see this; _estimation_
is hard, sure, what with pipelining, caches etc, but _measurement_?]

The second paragraph says that in a hard real time system you ought to
be able to monitor execution times in order to tell whether things have
gone wrong. Hence Ada.Execution_Time.Timers.

The third paragraph talks about fancy scheduling algorithms. This is the
point at which I too start to wonder whether things are getting a bit
academic; but I have no practical experience of the sort of real-time
systems under discussion. We merely have to produce the required output
within 2 ms of a program interrupt; but the world won't end if we miss
occasionally (not that we ever do), because of the rest of the system
design, which has to cope with data loss over noisy radio channels.

The last paragraphs call up the Real-Time extensions to POSIX (IEEE
1003.1d, after a lot of googling) as an indication of the intention.




* Re: Ada.Execution_Time
  2011-01-01 14:42                                           ` Ada.Execution_Time Simon Wright
@ 2011-01-01 16:01                                             ` Simon Wright
  2011-01-01 19:18                                               ` Ada.Execution_Time Niklas Holsti
  0 siblings, 1 reply; 124+ messages in thread
From: Simon Wright @ 2011-01-01 16:01 UTC (permalink / raw)


Simon Wright <simon@pushface.org> writes:

> Niklas Holsti <niklas.holsti@tidorum.invalid> writes:
>
>> The question is then if the compiler/system vendors will take the
>> trouble and cost to customise Ada.Execution_Time for a particular
>> board/computer, or if they will just use the general,
>> lowest-denominator but portable service provided by the kernel. Dmitry
>> indicates that GNAT on VxWorks takes the latter, easy way out.
>
> The latest supported GNAT on VxWorks (5.5) actually doesn't implement
> Ada.Execution_Time at all (well, the source of the package spec is
> there, adorned with "pragma Unimplemented_Unit", just like the FSF
> 4.5.0 sources and the GNAT GPL sources -- unless you're running on
> Cygwin or MaRTE).

Actually, Dmitry's complaint was that time (either from
Ada.Calendar.Clock, or Ada.Real_Time.Clock, I forget) wasn't as precise
as it can easily be on modern hardware, being incremented at each timer
interrupt; so if your timer ticks every millisecond, that's your
granularity.

This could easily be changed (at any rate for Real_Time), but doesn't I
believe necessarily affect what's to be expected from delay/delay until,
or Execution_Time come to that.
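
For what it's worth, a crude way to see the effective granularity from 
the Ada side is to spin until the clock reading changes (this measures 
the reported step, subject to call overhead, not the underlying tick 
source):

```ada
--  Rough probe of Ada.Real_Time.Clock granularity: busy-wait until
--  the reading changes and print the observed step.
with Ada.Real_Time; use Ada.Real_Time;
with Ada.Text_IO;

procedure Clock_Step is
   T0 : constant Time := Clock;
   T1 : Time := Clock;
begin
   while T1 = T0 loop
      T1 := Clock;
   end loop;
   Ada.Text_IO.Put_Line
     ("Observed step:" & Duration'Image (To_Duration (T1 - T0)) & " s");
end Clock_Step;
```

On a tick-driven implementation this prints roughly the tick period (a 
millisecond, say); where the clock is backed by a cycle counter it 
prints something far smaller.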




* Re: Ada.Execution_Time
  2011-01-01 16:01                                             ` Ada.Execution_Time Simon Wright
@ 2011-01-01 19:18                                               ` Niklas Holsti
  0 siblings, 0 replies; 124+ messages in thread
From: Niklas Holsti @ 2011-01-01 19:18 UTC (permalink / raw)


Simon Wright wrote:
> Simon Wright <simon@pushface.org> writes:
> 
>> Niklas Holsti <niklas.holsti@tidorum.invalid> writes:
>>
>>> The question is then if the compiler/system vendors will take the
>>> trouble and cost to customise Ada.Execution_Time for a particular
>>> board/computer, or if they will just use the general,
>>> lowest-denominator but portable service provided by the kernel. Dmitry
>>> indicates that GNAT on VxWorks takes the latter, easy way out.
>> The latest supported GNAT on VxWorks (5.5) actually doesn't implement
>> Ada.Execution_Time at all (well, the source of the package spec is
>> there, adorned with "pragma Unimplemented_Unit", just like the FSF
>> 4.5.0 sources and the GNAT GPL sources -- unless you're running on
>> Cygwin or MaRTE).
> 
> Actually, Dmitry's complaint was that time (either from
> Ada.Calendar.Clock, or Ada.Real_Time.Clock, I forget) wasn't as precise
> as it can easily be on modern hardware, being incremented at each timer
> interrupt; so if your timer ticks every millisecond, that's your
> granularity.

Thanks for correcting me, Simon, I mis-remembered or misunderstood Dmitry.

-- 
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
       .      @       .




* Re: Ada.Execution_Time
  2011-01-01 13:52                                         ` Ada.Execution_Time Niklas Holsti
  2011-01-01 14:42                                           ` Ada.Execution_Time Simon Wright
@ 2011-01-03 21:27                                           ` Randy Brukardt
  2011-01-06 22:55                                             ` Ada.Execution_Time Niklas Holsti
  1 sibling, 1 reply; 124+ messages in thread
From: Randy Brukardt @ 2011-01-03 21:27 UTC (permalink / raw)


"Niklas Holsti" <niklas.holsti@tidorum.invalid> wrote in message 
news:8o8ptdF35oU1@mid.individual.net...
> Randy Brukardt wrote:
>> I think we're actually in agreement on most points.
>
> Good, and I agree. Nevertheless I again make a lot of comments, below, 
> because I think your view, if true, means that Ada.Execution_Time is 
> useless for real-time systems.

Much like "unmeasurable", "useless" is a little bit strong, but this is 
definitely true in general.

That is, if you can afford megabucks to have a compiler runtime tailored to 
your target hardware (and compiler vendors love such customers), then you 
probably could find a use for it in a hard real-time system.

But the typical off-the-shelf implementation is not going to be useful for 
anything beyond gross guidance. That's probably enough for soft real-time 
systems anyway; and the facilities are useful for profiling and the like 
even without any strong connection to reality.

But it doesn't make sense to assume tight matches unless you have a very 
controlled environment. And if you have that environment, a language-defined 
package doesn't buy you anything over roll-your-own. So I would agree that 
the existence of Ada.Execution_Time really doesn't buy anything for a hard 
real-time system.

I suppose there are others that disagree (starting with Alan Burns). I think 
they're wrong.

                                   Randy.





>> The main difference is that I contend that the theoretical "execution 
>> time" is, in actual practice, unmeasurable. ("Unmeasurable" is a bit 
>> strong, but I mean that you can only get a gross estimation of it.)
>
> You are right, we disagree on this.
>
>> You have argued that there exist cases where it is possible to do better 
>> (a hardware clock, a single processor, a kernel that doesn't get in the 
>> way of using the hardware clock,
>
> Yes.
>
>> no contention with other devices on the bus, etc.),
>
> I may have been fuzzy on that. Such real but variable or unpredictable 
> delays or speed-ups are, in my view, just normal parts of the execution 
> time of the affected task, and do not harm the ideal concept of "the 
> execution time of a task" nor the usefulness of Ada.Execution_Time and its 
> child packages. They only mean that the measured execution time of one 
> particular execution of a task is less predictive of the execution times 
> of future executions of that task. More on this below.
>
>> and I wouldn't disagree. The problem is that those cases aren't 
>> realistic, particularly if you are talking about a language-defined 
>> package.
>
> I am not proposing new general requirements on timing accuracy for the RM.
>
>> (Even if there is a hardware clock available on a particular target, an 
>> off-the-shelf compiler isn't going to be able to use it. It can only use 
>> the facilities of the kernel or OS.)
>
> For critical, embedded, hard-real-time systems I think it is not uncommon 
> to use dedicated real-time computers with kernels such as VxWorks or RTEMS 
> or bare-board Ada run-time systems. I have worked on such computers in the 
> past, and I see such computers being advertised and used today.
>
> In such systems, the kernel or Ada RTS is usually customised by a "board 
> support package" (BSP) that, among other things, handles the interfaces to 
> clocks and timers on the target computer (the "board"). Such systems can 
> provide high-accuracy timers and mechanisms for execution-time monitoring 
> without having to change the compiler; it should be enough to change the 
> implementation of Ada.Execution_Time. In effect, Ada.Execution_Time would 
> be a part of the BSP, or depend on types and services defined in the BSP.
>
> The question is then if the compiler/system vendors will take the trouble 
> and cost to customise Ada.Execution_Time for a particular board/computer, 
> or if they will just use the general, lowest-denominator but portable 
> service provided by the kernel. Dmitry indicates that GNAT on VxWorks 
> takes the latter, easy way out. That's a matter of cost vs demand; if the 
> users want it, the vendors can provide it.
>
> For example, the "CPU Usage Statistics" section of the "RTEMS C User's 
> Guide" says: "RTEMS versions newer than the 4.7 release series, support 
> the ability to obtain timestamps with nanosecond granularity if the BSP 
> provides support. It is a desirable enhancement to change the way the 
> usage data is gathered to take advantage of this recently added 
> capability. Please consider sponsoring the core RTEMS development team to 
> add this capability." Thus, in March 2010 the RTEMS developers wanted and 
> could implement accurate execution-time measurement, but no customer had 
> yet paid for its implementation.
>
>> This shows up when you talk about the inability to reproduce the results. 
>> In practice, I think you'll find that the values vary a lot from run to 
>> run; the net effect is that the error is a large percentage of the 
>> results for many values.
>
> Again, I may not have been clear on my view of the reproducibility 
> and variability of execution-time measurements. The variability in the 
> measured execution time of a task has three components or sources:
>
> 1. Variable execution paths due to data-dependent conditional control-flow 
> (if-then-else, case, loop). In other words, on different executions of the 
> task, different sequences of instructions are executed, leading to 
> different execution times.
>
> 2. Variable execution times of individual instructions, for example due to 
> variable cache contents and variable bus contention.
>
> 3. Variable measurement errors, for example truncations or roundings in 
> the count of CPU_Time clock cycles, variable amount of interrupt handling 
> included in the task execution time, etc.
>
> Components 1 and 2 are big problems for worst-case analysis but in my view 
> are not problems for Ada.Execution_Time. In fact, I think that one of the 
> intended uses of Ada.Execution_Time is to help systems make good use of 
> this execution-time variability by letting the system do other useful 
> things when some task happens to execute quicker than its worst-case 
> execution time and leaves some slack in the schedule.
>
> Only component 3 is an "error" in the measured value. This component, and 
> also constant measurement errors if any, are problems for users of 
> Ada.Execution_Time.
>
>> Your discussion of real-time scheduling using this still seems to me to 
>> be more in the realm of academic exercise than something practical. I'm 
>> sure that it works in very limited circumstances,
>
> I think these circumstances have some correlation with the domain for 
> which Ada is or was intended: reliable, embedded, possibly real-time 
> systems. Well, this is a limited niche.
>
>> but those circumstances are getting less and less likely by the year. 
>> Techniques that assume a strong correspondence between these values and 
> real-time are simply fantasy -- I would hope no one depends on these for 
>> anything of importance.
>
> If this is true, Ada.Execution_Time is useless for real-time systems. 
> Since it was added to the RM for 2005, and is still in the draft for RM 
> 2012, I suppose the ARG majority does not share your view.
>
> Thanks for a frank discussion, and best wishes for 2011!
>
> -- 
> Niklas Holsti
> Tidorum Ltd
> niklas holsti tidorum fi
>       .      @       . 





^ permalink raw reply	[flat|nested] 124+ messages in thread

* Re: Ada.Execution_Time
  2011-01-01 15:54                                         ` Ada.Execution_Time Simon Wright
@ 2011-01-03 21:33                                           ` Randy Brukardt
  2011-01-05 15:55                                             ` Ada.Execution_Time Brad Moore
  0 siblings, 1 reply; 124+ messages in thread
From: Randy Brukardt @ 2011-01-03 21:33 UTC (permalink / raw)


"Simon Wright" <simon@pushface.org> wrote in message 
news:m2zkrk62ms.fsf@pushface.org...
> "Randy Brukardt" <randy@rrsoftware.com> writes:
>
>> Your discussion of real-time scheduling using this still seems to me
>> to be more in the realm of academic exercise than something
>> practical. I'm sure that it works in very limited circumstances, but
>> those circumstances are getting less and less likely by the year.
>
> Sounds as if you don't agree with the !problem section of AI-00307?

Pretty much.

Keep in mind that the IRTAW proposals tend to get put into the language 
without much resistance from the majority of the ARG. Most of us don't have 
enough real-time experience to have a strong opinion on the topic. So we 
don't generally oppose the basic reason for a proposal.

We simply spend time on polishing the proposals into acceptable RM language 
(which is usually a large job by itself; the proposals tend to be very 
sloppy). Rarely do we object to the proposal itself, in large part because 
we simply don't have enough energy to think in great detail about every 
proposal.

It's best to think of the Annex D stuff as designed and proposed separately 
by a small subgroup. (There's a similar dynamic for some other areas as 
well, numerics coming to mind.) That small subgroup may have a different 
world-view than the rest of us.

                                   Randy.






* Re: Ada.Execution_Time
  2011-01-03 21:33                                           ` Ada.Execution_Time Randy Brukardt
@ 2011-01-05 15:55                                             ` Brad Moore
  0 siblings, 0 replies; 124+ messages in thread
From: Brad Moore @ 2011-01-05 15:55 UTC (permalink / raw)


On 03/01/2011 2:33 PM, Randy Brukardt wrote:

> if you can afford megabucks to have a compiler runtime tailored to
> your target hardware (and compiler vendors love such customers), then you
> probably could find a use for it in a hard real-time system.
>
> But the typical off-the-shelf implementation is not going to be useful for
> anything beyond gross guidance. That's probably enough for soft real-time
> systems anyway; and the facilities are useful for profiling and the like
> even without any strong connection to reality.
>
> But it doesn't make sense to assume tight matches unless you have a very
> controlled environment. And if you have that environment, a language-defined
> package doesn't buy you anything over roll-your-own. So I would agree that
> the existence of Ada.Execution_Time really doesn't buy anything for a hard
> real-time system.
>
> I suppose there are others that disagree (starting with Alan Burns). I think
> they're wrong.

> Keep in mind that the IRTAW proposals tend to get put into the language
> without much resistance from the majority of the ARG. Most of us don't have
> enough real-time experience to have a strong opinion on the topic. So we
> don't generally oppose the basic reason for a proposal.
>
> We simply spend time on polishing the proposals into acceptable RM language
> (which is usually a large job by itself; the proposals tend to be very
> sloppy). Rarely do we object to the proposal itself, in large part because
> we simply don't have enough energy to think in great detail about every
> proposal.
>
> It's best to think of the Annex D stuff as designed and proposed separately
> by a small subgroup. (There's a similar dynamic for some other areas as
> well, numerics coming to mind.) That small subgroup may have a different
> world-view than the rest of us.

To get a perspective on some of the IRTAW thinking with respect to 
execution time, one might want to have a look at the paper:

"Execution-Time Control For Interrupt Handling", a proposal from the 
most recent IRTAW.

http://www.sigada.org/ada_letters/apr2010/paper3.pdf

The paper expresses an interest in accounting for the time spent
handling interrupts, and tracking it separately rather than charging
that time to the currently running task, as most current
implementations of Ada.Execution_Time do.

The intent is to improve the accuracy of execution-time measurement
and of WCET computation, so that task budgets can be made tighter,
allowing higher CPU utilization.
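For readers who have not seen the budget mechanism the paper builds on,
here is a minimal sketch (not taken from the paper) of enforcing a
per-task CPU budget with Ada.Execution_Time.Timers (RM D.14.1). The
task name "Worker" and the 2 ms budget are made-up examples:

```ada
--  Sketch only: "Worker" and the 2 ms budget are hypothetical.
with Ada.Task_Identification;  use Ada.Task_Identification;
with Ada.Real_Time;            use Ada.Real_Time;
with Ada.Execution_Time.Timers;

procedure Budget_Demo is

   task Worker;

   Worker_Id : aliased constant Task_Id := Worker'Identity;

   --  Handler called by the run-time system when Worker has consumed
   --  its CPU-time budget (not when that much real time has elapsed).
   protected Overrun is
      pragma Priority (Ada.Execution_Time.Timers.Min_Handler_Ceiling);
      procedure Handler (TM : in out Ada.Execution_Time.Timers.Timer);
   end Overrun;

   protected body Overrun is
      procedure Handler (TM : in out Ada.Execution_Time.Timers.Timer) is
      begin
         null;  --  log the overrun, lower Worker's priority, etc.
      end Handler;
   end Overrun;

   Budget : Ada.Execution_Time.Timers.Timer (Worker_Id'Access);

   task body Worker is
      X : Long_Float := 0.0;
   begin
      for I in 1 .. 10_000_000 loop   --  stand-in for real work
         X := X + 1.0 / Long_Float (I);
      end loop;
   end Worker;

begin
   --  Arm the timer: Handler fires once Worker has used 2 ms of CPU time.
   Ada.Execution_Time.Timers.Set_Handler
     (Budget, Milliseconds (2), Overrun.Handler'Access);
end Budget_Demo;
```

Note that the timer counts CPU time charged to Worker, which is exactly
why interrupt-time accounting matters: time stolen by interrupt handlers
inflates the charge against the budget.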


--BradM




* Re: Ada.Execution_Time
  2011-01-03 21:27                                           ` Ada.Execution_Time Randy Brukardt
@ 2011-01-06 22:55                                             ` Niklas Holsti
  2011-01-07  6:25                                               ` Ada.Execution_Time Randy Brukardt
  0 siblings, 1 reply; 124+ messages in thread
From: Niklas Holsti @ 2011-01-06 22:55 UTC (permalink / raw)


Randy Brukardt wrote:
> "Niklas Holsti" <niklas.holsti@tidorum.invalid> wrote in message 
> news:8o8ptdF35oU1@mid.individual.net...
>> Randy Brukardt wrote:
>>> I think we're actually in agreement on most points.
>> Good, and I agree. Nevertheless I again make a lot of comments, below, 
>> because I think your view, if true, means that Ada.Execution_Time is 
>> useless for real-time systems.
> 
> Much like "unmeasurable", "useless" is a little bit strong, but this is 
> definitely true in general.
> 
> That is, if you can afford megabucks to have a compiler runtime tailored to 
> your target hardware (and compiler vendors love such customers),

The cost will depend on the number of users/customers and the extent of 
tailoring needed; I admit I haven't asked for quotes.

At least the definition of a predefined, standard programmer interface 
in the form of Ada.Execution_Time and children should help make the 
tailoring more portable over targets, applications, and customers.
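For reference, the programmer-visible part of that standard interface is
quite small; an abridged sketch of RM D.14 from memory (operator list
shortened -- check the RM for the exact text):

```ada
with Ada.Task_Identification;
with Ada.Real_Time;  use Ada.Real_Time;
package Ada.Execution_Time is
   type CPU_Time is private;
   CPU_Time_First : constant CPU_Time;
   CPU_Time_Last  : constant CPU_Time;
   CPU_Tick       : constant Time_Span;

   function Clock
     (T : Ada.Task_Identification.Task_Id :=
            Ada.Task_Identification.Current_Task) return CPU_Time;

   function "+" (Left : CPU_Time; Right : Time_Span) return CPU_Time;
   function "-" (Left : CPU_Time; Right : Time_Span) return CPU_Time;
   function "-" (Left : CPU_Time; Right : CPU_Time)  return Time_Span;
   --  ... ordering operators "<", "<=", ">", ">=" on CPU_Time ...

   procedure Split (T  : CPU_Time;
                    SC : out Seconds_Count; TS : out Time_Span);
   function Time_Of (SC : Seconds_Count;
                     TS : Time_Span := Time_Span_Zero) return CPU_Time;
private
   ...  --  implementation-defined
end Ada.Execution_Time;
```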

> then you probably could find a use for in a hard real-time system.

I think so.

> But the typical off-the-shelf implementation is not going to be useful for 
> anything beyond gross guidance.

Depends on which shelf you buy from :-)  You are probably right for GNAT 
GPL on Linux, but perhaps there is more hope for RAVEN and the like. 
Although I see that Ravenscar excludes Ada.Execution_Time.

> That's probably enough for soft real-time 
> systems anyway; and the facilities are useful for profiling and the like 
> even without any strong connection to reality.

I don't see how Ada.Execution_Time would be very useful for profiling. 
While it could show you which tasks are CPU hogs, it does not resolve 
the execution-time consumption to subprograms or subprogram parts, which 
I think would be the important information for code redesigns aiming at 
improving speed. Real profilers usually do give you subprogram-level or 
even statement-level information.

Of course, a profiler that collects subprogram-level execution time (per 
call, for example) but does not know about Ada task preemption will 
probably produce wildly wrong results.
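A task-level report of the kind described above is straightforward,
though. A minimal sketch (procedure and type names are hypothetical)
that sums the CPU time of a group of related tasks; because Clock (T)
charges only time actually consumed by T, preemption does not distort
the numbers:

```ada
with Ada.Text_IO;
with Ada.Task_Identification;  use Ada.Task_Identification;
with Ada.Real_Time;            use Ada.Real_Time;
with Ada.Execution_Time;       use Ada.Execution_Time;

procedure Report_Hogs is
   type Task_Group is array (Positive range <>) of Task_Id;

   --  Total CPU time consumed so far by task T, as a Duration.
   function CPU_Seconds (T : Task_Id) return Duration is
      SC : Seconds_Count;
      TS : Time_Span;
   begin
      Split (Clock (T), SC, TS);
      return Duration (SC) + To_Duration (TS);
   end CPU_Seconds;

   --  Sum over a group of related tasks; time consumed by tasks
   --  outside the group does not inflate the result.
   function Group_CPU (G : Task_Group) return Duration is
      Total : Duration := 0.0;
   begin
      for I in G'Range loop
         Total := Total + CPU_Seconds (G (I));
      end loop;
      return Total;
   end Group_CPU;

begin
   Ada.Text_IO.Put_Line
     ("This task has used" &
      Duration'Image (Group_CPU ((1 => Current_Task))) & " s of CPU.");
end Report_Hogs;
```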

> ... unless you have a very 
> controlled environment. And if you have that environment, a
> language-defined package doesn't buy you anything over
> roll-your-own.

I don't agree. I think a clean, standard, language-defined interface 
between the Ada RTS/kernel and the application will help an implementer, 
as I said above. Moreover, the scheduling algorithms published by Burns 
and Wellings and others are written to use Ada.Execution_Time. Rolling 
one's own interface would mean changing these algorithms.

-- 
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
       .      @       .




* Re: Ada.Execution_Time
  2011-01-06 22:55                                             ` Ada.Execution_Time Niklas Holsti
@ 2011-01-07  6:25                                               ` Randy Brukardt
  0 siblings, 0 replies; 124+ messages in thread
From: Randy Brukardt @ 2011-01-07  6:25 UTC (permalink / raw)


"Niklas Holsti" <niklas.holsti@tidorum.invalid> wrote in message 
news:8omvicFgm0U1@mid.individual.net...
...
>> That's probably enough for soft real-time systems anyway; and the 
>> facilities are useful for profiling and the like even without any strong 
>> connection to reality.
>
> I don't see how Ada.Execution_Time would be very useful for profiling. 
> While it could show you which tasks are CPU hogs, it does not resolve the 
> execution-time consumption to subprograms or subprogram parts, which I 
> think would be the important information for code redesigns aiming at 
> improving speed. Real profilers usually do give you subprogram-level or 
> even statement-level information.

I was thinking of "profiling" by hand, essentially by adding profiler calls 
to "interesting" points in the code. I have a number of packages which are 
designed for this purpose.

My experience with subprogram level profiling is that it often changes the 
results too much, as the overhead of profiling can be a lot more than the 
overhead of calling small subprograms (like the classic Getters/Setters). 
Perhaps it is just the way that we do it in Janus/Ada (by adding it to 
Enter_Walkback/Exit_Walkback calls that occur at the entrance and exit of 
every subprogram [unless turned off by compiler switch or pragma], which 
means you don't have to recompile anything other than the main subprogram).

Instruction level profiling (which is used to provide statement profiling) 
is a statistical method, and that has its own problems (it's possible to get 
into a situation where the program and profiler operate in sync, such that 
the profiler never sees the execution of some of the code). It's also 
impractical unless you have a very fast timer interrupt or a long time to 
run (machines have gotten too fast for the one I used to use -- it only gets 
a few hits before the compilations are finished...).

> Of course, a profiler that collects subprogram-level execution time (per 
> call, for example) but does not know about Ada task preemption will 
> probably produce wildly wrong results.

Right, and that is a problem with both my profiling packages and the 
Janus/Ada subprogram profiler. They both use Calendar.Clock, which is 
obviously oblivious to task switching. Ada.Execution_Time would be a major 
improvement in that respect.
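A sketch of what that improvement looks like at a single instrumentation
point (names and the busy-loop "region" are stand-ins, not Randy's actual
packages): the delta between two Clock readings counts only CPU time
charged to the calling task, so preemption no longer pollutes the
measurement.

```ada
with Ada.Text_IO;
with Ada.Real_Time;       use Ada.Real_Time;
with Ada.Execution_Time;  use Ada.Execution_Time;

procedure Profile_Region is
   Before, After : CPU_Time;
   Spent         : Time_Span;
   X             : Long_Float := 0.0;
begin
   Before := Clock;   --  CPU time consumed so far by this task
   for I in 1 .. 1_000_000 loop   --  stand-in for the "interesting" region
      X := X + 1.0 / Long_Float (I);
   end loop;
   After := Clock;
   Spent := After - Before;  --  "-" (CPU_Time, CPU_Time) yields Time_Span
   Ada.Text_IO.Put_Line
     ("Region used" & Duration'Image (To_Duration (Spent)) & " s of CPU.");
end Profile_Region;
```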

                              Randy.

P.S. I think I owe you a compiler update. Did you read on our mailing list 
about the latest beta of Janus/Ada?






end of thread, other threads:[~2011-01-07  6:25 UTC | newest]

Thread overview: 124+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2010-12-12  4:19 Ada.Execution_Time BrianG
2010-12-12  5:27 ` Ada.Execution_Time Jeffrey Carter
2010-12-12 16:56 ` Ada.Execution_Time Jeffrey Carter
2010-12-12 21:59   ` Ada.Execution_Time BrianG
2010-12-12 22:08     ` Ada.Execution_Time BrianG
2010-12-13  9:28     ` Ada.Execution_Time Georg Bauhaus
2010-12-13 22:25       ` Ada.Execution_Time Randy Brukardt
2010-12-13 22:42         ` Ada.Execution_Time J-P. Rosen
2010-12-14  3:31         ` Ada.Execution_Time Jeffrey Carter
2010-12-14 15:42           ` Ada.Execution_Time Robert A Duff
2010-12-14 16:17             ` Ada.Execution_Time Jeffrey Carter
2010-12-14 19:10             ` Ada.Execution_Time Warren
2010-12-14 20:36               ` Ada.Execution_Time Dmitry A. Kazakov
2010-12-14 20:48                 ` Ada.Execution_Time Jeffrey Carter
2010-12-14  8:17         ` Ada.Execution_Time Vinzent Hoefler
2010-12-14 15:51           ` Ada.Execution_Time Adam Beneschan
2010-12-14 15:53           ` Ada.Execution_Time Robert A Duff
2010-12-14 17:17             ` Ada.Execution_Time Dmitry A. Kazakov
2010-12-14 17:45               ` Ada.Execution_Time Robert A Duff
2010-12-14 18:23                 ` Ada.Execution_Time Adam Beneschan
2010-12-14 21:02                   ` Ada.Execution_Time Randy Brukardt
2010-12-15 22:52             ` Ada.Execution_Time Keith Thompson
2010-12-15 23:14               ` Ada.Execution_Time Adam Beneschan
2010-12-17  0:44                 ` Ada.Execution_Time Randy Brukardt
2010-12-17 17:54                   ` Ada.Execution_Time Warren
2010-12-20 21:28                   ` Ada.Execution_Time Keith Thompson
2010-12-21  3:23                     ` Ada.Execution_Time Robert A Duff
2010-12-21  8:04                       ` Ada.Execution_Time Dmitry A. Kazakov
2010-12-21 17:19                         ` Ada.Execution_Time Robert A Duff
2010-12-21 17:43                           ` Ada.Execution_Time Dmitry A. Kazakov
2010-12-14 19:43           ` Ada.Execution_Time anon
2010-12-14 20:09             ` Ada.Execution_Time Adam Beneschan
2010-12-15  0:16       ` Ada.Execution_Time BrianG
2010-12-15 19:17         ` Ada.Execution_Time jpwoodruff
2010-12-15 21:42           ` Ada.Execution_Time Pascal Obry
2010-12-16  3:54             ` Ada.Execution_Time jpwoodruff
2010-12-17  7:11               ` Ada.Execution_Time Stephen Leake
2010-12-15 21:40         ` Ada.Execution_Time Simon Wright
2010-12-15 23:40           ` Ada.Execution_Time BrianG
2010-12-15 22:05         ` Ada.Execution_Time Randy Brukardt
2010-12-16  1:14           ` Ada.Execution_Time BrianG
2010-12-16  5:46             ` Ada.Execution_Time Jeffrey Carter
2010-12-16 16:13               ` Ada.Execution_Time BrianG
2010-12-16 11:37             ` Ada.Execution_Time Simon Wright
2010-12-16 17:24               ` Ada.Execution_Time BrianG
2010-12-16 17:45                 ` Ada.Execution_Time Adam Beneschan
2010-12-16 21:13                   ` Ada.Execution_Time Jeffrey Carter
2010-12-17  0:35               ` New AdaIC site (was: Ada.Execution_Time) Randy Brukardt
2010-12-16 13:08             ` Ada.Execution_Time Peter C. Chapin
2010-12-16 17:32               ` Ada.Execution_Time BrianG
2010-12-16 18:17             ` Ada.Execution_Time Jeffrey Carter
2010-12-16  8:45           ` Ada.Execution_Time Dmitry A. Kazakov
2010-12-16 16:49             ` Ada.Execution_Time BrianG
2010-12-16 17:52               ` Ada.Execution_Time Dmitry A. Kazakov
2010-12-17  8:49                 ` Ada.Execution_Time Niklas Holsti
2010-12-17  9:32                   ` Ada.Execution_Time Dmitry A. Kazakov
2010-12-17 11:50                     ` Ada.Execution_Time Niklas Holsti
2010-12-17 13:10                       ` Ada.Execution_Time Dmitry A. Kazakov
2010-12-18 21:20                         ` Ada.Execution_Time Niklas Holsti
2010-12-19  9:57                           ` Ada.Execution_Time Dmitry A. Kazakov
2010-12-25 11:31                             ` Ada.Execution_Time Niklas Holsti
2010-12-26 10:25                               ` Ada.Execution_Time Dmitry A. Kazakov
2010-12-27 12:44                                 ` Ada.Execution_Time Niklas Holsti
2010-12-27 15:28                                   ` Ada.Execution_Time Dmitry A. Kazakov
2010-12-27 20:11                                     ` Ada.Execution_Time Niklas Holsti
2010-12-27 21:34                                       ` Ada.Execution_Time Simon Wright
2010-12-28 10:01                                         ` Ada.Execution_Time Niklas Holsti
2010-12-28 14:17                                           ` Ada.Execution_Time Simon Wright
2010-12-27 21:53                                       ` Ada.Execution_Time Dmitry A. Kazakov
2010-12-28 14:14                                         ` Ada.Execution_Time Simon Wright
2010-12-28 15:08                                           ` Ada.Execution_Time Dmitry A. Kazakov
2010-12-28 16:18                                             ` Ada.Execution_Time Simon Wright
2010-12-28 16:34                                               ` Ada.Execution_Time Dmitry A. Kazakov
2010-12-31  0:40                                             ` Ada.Execution_Time BrianG
2010-12-31  9:09                                               ` Ada.Execution_Time Dmitry A. Kazakov
2010-12-28 14:46                                         ` Ada.Execution_Time Niklas Holsti
2010-12-28 15:42                                           ` Ada.Execution_Time Dmitry A. Kazakov
2010-12-28 16:27                                             ` Ada.Execution_Time (see below)
2010-12-28 16:55                                               ` Ada.Execution_Time Dmitry A. Kazakov
2010-12-28 19:41                                                 ` Ada.Execution_Time (see below)
2010-12-28 20:03                                                   ` Ada.Execution_Time Dmitry A. Kazakov
2010-12-28 22:39                                                     ` Ada.Execution_Time Simon Wright
2010-12-29  9:07                                                       ` Ada.Execution_Time Dmitry A. Kazakov
2010-12-27 17:24                                 ` Ada.Execution_Time Robert A Duff
2010-12-27 22:02                                   ` Ada.Execution_Time Randy Brukardt
2010-12-27 22:43                                     ` Ada.Execution_Time Robert A Duff
2010-12-27 22:11                               ` Ada.Execution_Time Randy Brukardt
2010-12-29 12:48                                 ` Ada.Execution_Time Niklas Holsti
2010-12-29 14:30                                   ` Ada.Execution_Time Dmitry A. Kazakov
2010-12-29 16:19                                     ` Ada.Execution_Time (see below)
2010-12-29 16:51                                       ` Ada.Execution_Time Dmitry A. Kazakov
2010-12-29 19:57                                         ` Ada.Execution_Time (see below)
2010-12-29 21:20                                           ` Ada.Execution_Time Dmitry A. Kazakov
2010-12-30  5:13                                             ` Ada.Execution_Time Randy Brukardt
2010-12-30 13:37                                             ` Ada.Execution_Time Niklas Holsti
2010-12-29 20:32                                     ` Ada.Execution_Time Niklas Holsti
2010-12-29 21:21                                       ` Ada.Execution_Time Dmitry A. Kazakov
2010-12-30 13:34                                         ` Ada.Execution_Time Niklas Holsti
2010-12-30 19:23                                     ` Ada.Execution_Time Niklas Holsti
2010-12-30  5:06                                   ` Ada.Execution_Time Randy Brukardt
2010-12-30 23:49                                     ` Ada.Execution_Time Niklas Holsti
2010-12-31 23:34                                       ` Ada.Execution_Time Randy Brukardt
2011-01-01 13:52                                         ` Ada.Execution_Time Niklas Holsti
2011-01-01 14:42                                           ` Ada.Execution_Time Simon Wright
2011-01-01 16:01                                             ` Ada.Execution_Time Simon Wright
2011-01-01 19:18                                               ` Ada.Execution_Time Niklas Holsti
2011-01-03 21:27                                           ` Ada.Execution_Time Randy Brukardt
2011-01-06 22:55                                             ` Ada.Execution_Time Niklas Holsti
2011-01-07  6:25                                               ` Ada.Execution_Time Randy Brukardt
2011-01-01 15:54                                         ` Ada.Execution_Time Simon Wright
2011-01-03 21:33                                           ` Ada.Execution_Time Randy Brukardt
2011-01-05 15:55                                             ` Ada.Execution_Time Brad Moore
2010-12-17  8:59         ` Ada.Execution_Time anon
2010-12-19  3:07           ` Ada.Execution_Time BrianG
2010-12-19  4:01             ` Ada.Execution_Time Vinzent Hoefler
2010-12-19 11:00               ` Ada.Execution_Time Niklas Holsti
2010-12-21  0:37                 ` Ada.Execution_Time Randy Brukardt
2010-12-21  1:20                   ` Ada.Execution_Time Jeffrey Carter
2010-12-19 12:27               ` Ada.Execution_Time Dmitry A. Kazakov
2010-12-21  0:32               ` Ada.Execution_Time Randy Brukardt
2010-12-19 22:54             ` Ada.Execution_Time anon
2010-12-20  3:14               ` Ada.Execution_Time BrianG
2010-12-22 14:30                 ` Ada.Execution_Time anon
2010-12-22 20:09                   ` Ada.Execution_Time BrianG

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox