AVEVA™ PI System™ Feedback Portal


Status Completed
Created by Matt Voll
Created on Aug 20, 2022

Trigger Time

Need a way to utilize the trigger time in analysis functions. '*' refers to now or snapshot values, so using timestamp('triggered attribute','*') only works if there is only one triggered attribute configured.
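A minimal sketch of the limitation described, in Asset Analytics expression syntax ('TriggeredAttr' and the second trigger are hypothetical names used only for illustration):

```
// With a single triggered attribute, the pattern from the idea works:
TriggerTS := TimeStamp(TagVal('TriggeredAttr', '*'))

// With two or more triggering attributes configured, there is no
// single 'triggered attribute' whose timestamp reliably identifies
// which input fired the analysis, so this pattern breaks down.
```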
  • ADMIN RESPONSE
    Aug 20, 2022
    I believe we have come to a common understanding in this idea on the meaning of trigger time for real time calculations. If there are additional questions or comments, feel free to update this idea.
  • Guest | Aug 20, 2022
    Have you tried using the ParseTime function? ParseTime("*") should return the trigger time when an analysis triggers.
  • Matt Voll | Aug 20, 2022
    No, I had not tried that. ParseTime is used for strings, and the asterisk is usually written with apostrophes ('*'). Putting those together, ParseTime('*'), gives a Calc Failed. I had not thought until now to try quotes with the asterisk: ParseTime("*") does calculate properly. How do ParseTime("*") and '*' differ? See the attached picture.

    The attached picture also suggests a further confusing aspect of this problem: used outside of any tag retrieval function, '*' means trigger time, but used inside a tag retrieval function, '*' means snapshot. Is this correct? That's very confusing. (Side question: I don't recall any other situation in which the usage of "*" is appropriate.)

    I do believe this would address the issues I had mentioned; however, I would argue that it is far from ideal. The confusion between '*' vs. snapshot vs. trigger time is likely a common misconception, and thus a non-obvious problem. The ParseTime solution is a non-obvious workaround requiring extra variables on MANY analyses: I would need to include ParseTime("*") in nearly every analysis that combines process data (usually not late) with manually entered data (usually late). Non-obvious problem + non-obvious solution + extra overhead on many analyses = :( .
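The variants discussed in this comment, side by side (a sketch based on the behavior reported above, not independently verified; 'SomeAttr' is a hypothetical attribute name):

```
ParseTime('*')        // Calc Failed: ParseTime expects a string, and
                      // '*' in single quotes is a time literal
ParseTime("*")        // evaluates; reported to return the trigger time
TagVal('SomeAttr','*')  // inside a retrieval function, '*' was
                        // described here as meaning the snapshot time
```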
  • Guest | Aug 20, 2022
    Can you please provide some details on what you're trying to do? An example would be very useful.
  • Matt Voll | Aug 20, 2022
    Process data (not late), like a flow rate, used in an analysis with lab data (late), like a concentration. The lab data could be anywhere between 2 hours late and 14 days late. One of the more obvious configurations would be an analysis event-triggered on the lab data. The trigger time could be 2 hours ago (or 14 days ago), but the process data's snapshot would be <10 seconds old.

    The misconception that '*' represents trigger time is the issue, because it leads to a very simple analysis that is also incorrect: just using TagVal('process data','*'). Again, it's confusing that you are suggesting that using TagVal('process data','*') is incorrect but using '*' outside of a tag retrieval function is correct.

    It is also hard to come up with on-the-spot examples, because this is a situation where backfilling/Evaluate behave DIFFERENTLY than normal running over time. I see a lot of 'solutions' given that are essentially 'well, just backfill'. Backfilling does seem to ALWAYS treat '*' as trigger time and not snapshot time, and the Evaluate button does the same thing. The example I had most recently (from Case 00564622) was adjusted so that the analysis is not event-triggered but periodic, relying on automatic recalculation to fire when the late data finally arrives.
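A sketch of the scenario described, with hypothetical attribute names 'LabConc' (late lab data, the only event trigger) and 'FlowRate' (live process data):

```
// Intended: both inputs evaluated at the lab sample's trigger time,
// e.g. 2 hours (or 14 days) in the past.
MassRate := TagVal('FlowRate', '*') * TagVal('LabConc', '*')

// Under the reading that '*' inside TagVal means snapshot,
// 'FlowRate' would instead be read at its near-current snapshot,
// pairing values that are hours or days apart.
```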
  • Guest | Aug 20, 2022
    Let's back up and correct your original statement. This is incorrect with regard to Asset Analytics: "'*' refers to now or snapshot values." For analytics, '*' does NOT refer to Now but rather refers to the trigger time. Analytics do not have a notion of wall clock. Likewise, other PI relative times are relative to that trigger time: 't' refers to midnight of the day of that trigger time, and 'y' refers to yesterday relative to that trigger time. Is this really a critical distinction? YES, particularly with recalculations, which can be for several days ago. Each recalculation's '*' refers to its own trigger time (perhaps 2 days ago) instead of Now. Thus, what you are asking for already exists and is amazingly simple:
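The relative-time rule stated above can be illustrated as follows ('Level' is a hypothetical attribute name):

```
// All PI relative times resolve against the trigger time,
// never against a wall clock:
AtTrigger   := TagVal('Level', '*')   // value at the trigger time
AtMidnight  := TagVal('Level', 't')   // midnight of the trigger day
AtYesterday := TagVal('Level', 'y')   // the day before the trigger day
// During a recalculation for 2 days ago, '*' is that 2-day-old
// trigger time, not now.
```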
  • Guest | Aug 20, 2022
    In response to James Voll, "For analytics, '*' does ...": What version of Analytics are you using? You may send a private message to me with the case number(s) so I may review.
  • Matt Voll | Aug 20, 2022
    "For analytics, '*' does NOT refer to Now but rather refers to the trigger time." Your statement is in direct contradiction to what I was told through a tech support call concerning analyses giving incorrect results (results that WERE corrected if I manually backfilled, thus a difference between how backfilling behaves and how real-time analyses behave). Previously, I had always been under the assumption that '*' represents trigger time. But my analyses were not behaving as they should, and the explanation provided did fit the observations I was seeing.

    I would be happy to revisit the issue; however, I have no real complaints about tech support engineers with OSIsoft, they're great, but it is clear that they have varying levels of experience and different strength areas. Previous experience with tech support calls involving intricate AF Analyses issues like this suggests it would not be worthwhile to call and roll the dice on who I get.

    These issues are hard to identify because they are most evident when backfilling behaves differently than normal real-time analyses. While automatic recalculation addresses a big part of this, it makes things more confusing as well. At this point, it has become common to have 'incorrect' analysis results that are simply fixed by backfilling, but that means testing what the problem is becomes difficult, because hitting Evaluate or backfill on the analysis is not trustworthy.
  • Guest | Aug 20, 2022
    Good discussion thread. Let me provide some context around the design concept of Asset Analytics to help with this discussion.

    First off, in the PI world, * was used to designate the Snapshot. Since the Snapshot typically holds the latest value that has passed Exception, over time * came to be considered synonymous with "now" by users.

    In the world of Asset Analytics, when running in real time (streaming analytics) with event-triggered scheduling, the analyses are typically "triggered" by Snapshot values coming via the PI Data Archive Update Manager. Thus, * = Snapshot = Trigger time in Asset Analytics. It is important to embrace "trigger time" in Asset Analytics because there is a default 5-second wait before the analyses are actually executed. We call this the Execution time, but the time context used in the analyses is always the Trigger time. That means if your analyses are triggered at 12:00 because a new value came from the Update Manager at 12:00, the analyses are executed at 12:00:05, but the values used are the values at 12:00. This default 5-second delay allows for data that may arrive at the PI Data Archive slightly late. The delay is user-configurable, but I would caution everyone to adjust it only if necessary to accomplish specific use cases; please contact tech support if you feel an adjustment is needed, to avoid unintended side effects. Nevertheless, the time context used is always the Trigger time.

    Therefore, if you have lab data that arrives late, but with the desired timestamp, you can execute the analyses with event-triggered scheduling, triggering only on the lab data. For example, if your lab data arrives 2 days late, but with a timestamp that is 2 days old, and it is your ONLY triggering input, then the analyses would execute with the time context of the lab data (2 days old). However, there is a caveat: there is a difference between late-arriving data and out-of-order data.

    Late-arriving data is supported as a trigger in real time, but out-of-order data is ignored as a trigger. Out-of-order data is data that arrives with a timestamp older than the Snapshot value. In the case of out-of-order data, you would either need to recalculate or turn on auto-recalculation for the affected analyses. This is because streaming calculations are triggered by Snapshot values from the Update Manager, and out-of-order data bypasses the Snapshot, so we never see it as a trigger.

    In the case of auto-recalculation or manual recalculation, be aware that the inputs are no longer Snapshot values but rather archive values, since typically the Snapshot values are no longer available. Practically, that means depending on your Compression settings, real-time (streaming) calculations may produce different results than backfill or recalculation.

    Hope this helps. (Hope it didn't confuse things further :-))
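The late-arriving vs. out-of-order distinction above, sketched as a timeline (times are illustrative, not from the case):

```
// Tag snapshot currently at 10:00.
//
// Late arriving:  a value stamped 11:00 arrives at 13:00.
//   11:00 > snapshot (10:00)  -> becomes the new snapshot, flows
//   through Update Manager, triggers the analysis at trigger time 11:00.
//
// Out of order:   a value stamped 09:00 arrives at 13:00.
//   09:00 < snapshot (10:00)  -> bypasses the snapshot, never seen
//   as a trigger; requires auto-recalculation or a manual
//   recalculate/backfill, which reads archive (compressed) values.
```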
  • Matt Voll | Aug 20, 2022
    Everything you state is EXACTLY what my assumptions were prior to my original post on April 10th, where I was provided with information contradicting these assumptions, thus turning my world upside down :). I would love to be correct in my original assumption and incorrect in my thinking over the last 1.5 months.

    I'm not exactly in the practice of keeping incorrect analyses floating around, so the analyses that were causing issues have been adjusted. Instead of event-triggered on the lab data (as you suggest), I switched them to periodic, structured in such a way that all lab data now becomes out of order in relation to the analysis, so late-arriving data causes the analysis to be recalculated. Anyway, I did just now attempt to create a duplicate set of analyses and output tags, switching them back to be event-triggered on the lab data. However, based on my previous observations, I do not trust that backfilling is an accurate representation of what occurs during normal operation, so I will give this a few days to run normally and then compare the two sets of results from these analyses.

    I cannot pinpoint any obvious items that may be part of this, but another caveat to throw out is that originally this system was on AF 2017SP2 and as of last week it is on 2018R2. There were some analytics-related bugs fixed that were causing me/us problems, but again nothing I would think has a direct impact here.
  • Matt Voll | Aug 20, 2022
    Another comment, which may be the original tangent point that led to the discussion during my original case of what '*' actually means in terms of backfilling, late data, out-of-order data, and normal operation: is there a difference between Variable = 'attribute1' and Variable = TagVal('attribute1','*'), or are they the same? Especially in the context of attribute1 being non-late process data and the analysis being triggered on attribute2, late lab data.
  • Guest | Aug 20, 2022
    In response to James Voll, "Everything you state is EXACTLY what my ...": I cannot duplicate your problem. Today I created a new tag on my data archive; my examples focus on the tag "Sinusoid Previous Value". I quickly created an element with it. Keep in mind that, as it takes many seconds or minutes to move from one application to another, the SINUSOID will be changing.

    Next up is a simple analysis: we take the current value of SINUSOID to produce the previous value. There are a couple of points to note: (1) the expression CurrentValue is not needed in the long term; I only used it to see when I click Evaluate. And (2), the timestamp for PreviousValue will be the TriggerTime; that is to say, CurrentValue and PreviousValue have the same timestamp. Obviously, PreviousValue is mapped as an output back to my newly created tag "Sinusoid Previous Value". Next I backfilled for a few days and used SMT to compare SINUSOID against Sinusoid Previous Value (screenshots omitted).

    Note there are a couple of extra values for the Previous version compared to SINUSOID. Understand that this analysis is triggered whenever the SINUSOID snapshot changes; however, the SMT archive viewer is showing me archived values, that is, snapshot values that have passed the compression test. So it makes sense that while I was messing with this and performing backfills, a little after 11:30-ish AM my time, the analysis was still running for incoming snapshot data, but not every SINUSOID snapshot makes it to the archives.

    This falls in line with Steve's previous (and nice) explanation, and on my system demonstrates that '*' refers to trigger time and not a wall-clock Now.
  • Guest | Aug 20, 2022
    In response to James Voll, "Another comment . . . that may be the or...": Good questions, James. I'll need to get into more detail to answer your questions, but I get the feeling that you do want more detail :-).

    There are a few things to be aware of in terms of the way we make data calls under the hood. In PI System Explorer, if you select the "Evaluate" button, the data call under the hood is an InterpolatedValues call based on the client time. Thus, if you have an offset between the client time and the server time, it is possible that when you select "Evaluate" you end up with values that are not the Snapshot. Say you have 5-second data in your PI Data Archive and your client is 2 seconds behind the server time; your interpolated call with the Evaluate button could be interpolated between the last two values.

    Variable1 = 'attribute1' and Variable1 = TagVal('attribute1', '*') should give you the same result. Both should be in the context of the trigger time. So * in your example is the trigger time of attribute2, which is the late-arriving lab data.

    Having said that, you should also consider the concept of * with respect to the client versus PI Analysis Service. When you're in PI System Explorer and you evaluate TagVal('attribute1', '*') or 'attribute1', you get back the InterpolatedValue based on the client time of "now". When you select "Evaluate" in PI System Explorer, you get back two values: one at the last trigger time and one at evaluation time (basically now). The value at the last trigger time is based on the schedule; we added this feature a long time ago because the last trigger time may be far in the past and in many cases is not the same as "now". Meanwhile, when you're running the analyses in real time, the basis for time is PI Analysis Service, so if there are time offsets between PI Analysis Service, the PI Data Archive, and PI System Explorer, you may get inexplicable results.

    Lastly, be aware that we put in special handling of auto-backfilling based on the service start time compared to the last evaluation time. The gory details are in the relevant documentation.
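The equivalence stated above, as a sketch in expression syntax:

```
// Per the explanation above, these two variables should be equal,
// both resolved at the analysis's trigger time:
V1 := 'attribute1'
V2 := TagVal('attribute1', '*')
// Even when the trigger is a different, late-arriving attribute
// (attribute2), '*' is that trigger's timestamp, not wall-clock now.
```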
  • Matt Voll | Aug 20, 2022
    I've spent some time today trying to re-create the issue I originally saw in the tech support case that generated this feedback item. I tried creating AF analyses from scratch using existing input tags into which I was able to add in-order and out-of-order data. I had also previously attempted to duplicate the original elements/analyses/tags from my tech support case and set the analyses back to the way they were before I made changes to them.

    The end result: I was not able to create any scenario that counters your premise that '*' = trigger time. I am happy, because the ramifications of that not being true would be ugly, but I'm frustrated at having no way to explain the behavior I originally saw (as well as the fact that this premise still contradicts what was communicated to me, supposedly from a Product Specialist, via a Tech Support Engineer).

    I will again add the caveat that the original tech support case was on a system running AF 2017 R2 and we have since moved to AF 2018 SP2. I know specifically of a handful of items fixed between these two versions that directly concern automatic recalculation and analyses. I would not think any of those issues are related, but at this point I cannot say for certain whether one of them was causing the problem in some way that was not apparent at the time. I am also still very much keeping a close eye on the usage of automatic recalculation in the new version, as we have never been able to fully 'trust' auto recalc and have had continual problems with it performing as it should. I have two cases open now concerning discrepancies between results from backfilling vs. results from normally running analyses using auto recalc. FYI.