AVEVA™ PI System™ Feedback Portal


Status Declined
Created by Guest
Created on Aug 20, 2022

Allow Rollup analysis to output to an Analysis Data Reference

I would like there to be a way to output the results of a rollup analysis to an Analysis Data Reference, so that I can view a trend without having to write to a PI point. Currently, there is no option to have a rollup analysis that does not write to a PI point other than changing it to a None DR after the fact, and if you do this you cannot see a trend of the attribute (it shows a flat line).
  • ADMIN RESPONSE
    Aug 20, 2022
    All data references, including the Analysis Data Reference, are evaluated on the client side. This means that in a multi-user system, analyses that use many inputs, such as a Rollup, can be very disruptive to the overall stability of the PI System. Trending such Analysis Data Reference attributes compounds the problem. When a Rollup is configured to save its output to a PI Point for history, only the PI Analysis Service performs the rollup, and all users simply access the already-computed results. This scales much better for large multi-user systems. As such, we currently have no plans to implement this idea as written.
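The scaling argument in the response above can be sketched with a toy model (all names and numbers here are invented for illustration; this is not PI AF SDK code). An on-demand rollup forces every viewing client to fetch every input series, while a rollup written to a PI point is computed once and read as a single series:

```python
# Hypothetical sketch contrasting the two evaluation models described above:
# an on-demand rollup fetches every input series per client request, while a
# precomputed rollup is read back as one series.

def on_demand_rollup(input_series, reads):
    """Client-side model: each viewing client pulls all inputs and sums them."""
    columns = []
    for s in input_series:
        reads['count'] += len(s)          # every input value crosses the wire
        columns.append(list(s))
    return [sum(vals) for vals in zip(*columns)]

def precomputed_rollup(stored_output, reads):
    """Server-side model: the analysis service already wrote the result to a
    point; clients read that one series."""
    reads['count'] += len(stored_output)
    return list(stored_output)

# 100 inputs, 1,000 archived values each, 25 simultaneous trend viewers.
inputs = [[1.0] * 1000 for _ in range(100)]
stored = [100.0] * 1000

on_demand_reads = {'count': 0}
for _ in range(25):
    on_demand_rollup(inputs, on_demand_reads)

precomputed_reads = {'count': 0}
for _ in range(25):
    precomputed_rollup(stored, precomputed_reads)

print(on_demand_reads['count'])    # 25 users x 100 inputs x 1000 values = 2,500,000
print(precomputed_reads['count'])  # 25 users x 1000 values = 25,000
```

The two paths return the same trend; the difference is a hundredfold gap in data-access volume, repeated for every trend opened.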
  • Guest · Aug 20, 2022
    Vincent, after clicking "Map" in the Output column to select an output attribute for the rollup, you should be able to select the "No" radio button next to "Save Output History". This should create an Analysis Data Reference attribute. On my installation, the radio buttons are greyed out, but I think that's due to a bug that I need to track down with OSIsoft. I don't think that's expected behavior.
  • Guest · Aug 20, 2022
    Nope! I was wrong. I fixed my issue, but "Save Output History" is still greyed out on Rollups. It looks like these are disabled by design. Chris Manhard, is this another example of OSIsoft protecting us from creating expensive queries, or is there a larger issue here? FYI, I happened upon this thread because I'm trying to do the same thing. :)
  • Guest · Aug 20, 2022
    In response to Zev Arnold, "Nope!  I was wrong.  I fixed my issue, b..." There are many things that make an on-demand rollup taxing on the system - more so than might be apparent. General expectations for time-series attributes are the ability to notify on change, summarize, trend, interpolate, calculate previous values, etc. Meeting those requirements while dealing with a potentially dynamically changing and potentially quite large set of input attributes - including nested rollups - does not make sense when we have the PI Data Archive available to store those results. Is there a particular reason, other than the cost of a tag, that you are avoiding tag creation?
  • Guest · Aug 20, 2022
    In response to Chris Manhard, "There are many things that make an on-de..." Suppose that, for instance:
    * I introduce a new cleansing algorithm on one of my source attributes for the calculation and would like the history of my rollup to reflect the new cleansing.
    * My rollup includes manually entered data from a TLDR which is time-series, but entered erratically. I would like the history of my rollup to reflect the manually entered source data to the best of our knowledge *at the moment of request*.
    * I introduce a new sub-element underneath my rollup that I would like the history of my rollup to reflect, rather than only reflecting it from the time I moved the element onward.
    All of these problems are derivatives of the recalculation dilemma, i.e. Lambda problems (courtesy Nathan Marz). If I want to be confident that my calculation reflects the best data *at the moment of request*, then I have three options:
    1. Compute it on demand
    2. Remember to recalculate whenever anything changes
    3. Engage in some kind of Lambda architecture shenanigans
    Number 2, and I cannot stress this enough, is HIGHLY undesirable. If the PI System is to be a trusted data layer for analytics, then we need to be sure that it accurately reflects the data it models. Trying to track which analyses are dependent on which inputs can grow into a monumental task, particularly when we start looking at solutions at scale such as advanced metering. My current approach is to mix options 1 and 3. Where I can compute on demand in a reasonably timely fashion, I do that. Where the on-demand calculation is too slow, I deploy Lambda shenanigans.
    I see solving this problem as particularly important to keeping AF relevant in the age of Big Data. If AF is not the engine to guarantee fidelity for time-series data, then it has lost a significant competitive advantage to other calculation engines (like Spark, I suppose).
    It would be great to get some feedback from the community on this.
Such as (Lonnie Bowling, Rhys Kirk, Robert Raesemann, Alexander Brodskiy, Eduan Smit, Ashok Krishnan, Wesley Tucker, Wilson Correa, Ian Gore, Akash Naik, luis Trejo)
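The trade-off between option 1 and option 3 above can be sketched in a few lines (all names, values, and structures here are hypothetical; this is only a conceptual model, not PI code). Option 1 re-reads the sources at query time and is therefore always current; a Lambda-style option 3 merges a precomputed batch view with a "speed layer" of corrections that have not yet been folded in:

```python
# Illustrative sketch of the recalculation dilemma: on-demand always reflects
# the latest corrections; the Lambda path must carry pending deltas until the
# next batch recompute.

sources = {"t1": 10.0, "t2": 20.0, "t3": 30.0}   # current best-known values

def on_demand(keys):
    """Option 1: always correct, but re-reads every source per request."""
    return sum(sources[k] for k in keys)

batch_view = {"rollup": 60.0}      # computed before any corrections arrived
speed_layer = {}                   # corrections not yet folded into batch

def correct(key, new_value):
    """A late correction: on-demand sees it immediately; the Lambda path
    records the delta in the speed layer until the next batch run."""
    speed_layer[key] = new_value - sources[key]
    sources[key] = new_value

def lambda_query():
    """Option 3: batch result adjusted by pending corrections."""
    return batch_view["rollup"] + sum(speed_layer.values())

correct("t2", 25.0)                # cleansing revised t2 from 20.0 to 25.0
print(on_demand(["t1", "t2", "t3"]))   # 65.0
print(lambda_query())                  # 65.0 - same answer, different cost profile
```

Both paths agree on the answer; the "shenanigans" are the bookkeeping needed to keep the speed layer complete, which is exactly the dependency-tracking burden that makes option 2 so undesirable.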
  • Guest · Aug 20, 2022
    In response to Zev Arnold, "Suppose that, for instance: * I introduc..." Hi Zev - We are currently working on supporting an automatic recalculation feature for Analytics. When available, this should solve two of the three scenarios that you described above - (1) updated input/source data and (2) late-arriving or out-of-order data. The PI Analysis Service will monitor such updates to analysis inputs and trigger recalculations as required.
    The third use case that you described (introducing a new sub-element) is a bit tricky and different from the other two, in the sense that (1) and (2) will in most cases only require recalculating a limited range of data (depending on the range of updated data and the calculation logic), while (3) requires recalculating the analysis for the entire duration that it has been running. This can be really expensive, and may not even be desirable in all cases. Currently, we are not planning to address automatic recalculation for configuration changes to analyses - i.e., updates to calculation logic or to the list of roll-up inputs - but we would love to hear feedback from the community.
    Regards, Nitin
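The "limited range" point above is the key to why scenarios (1) and (2) are tractable: a late value only invalidates the output values whose calculation window overlaps it. A minimal sketch (all structures hypothetical, not how PI Analysis Service is actually implemented):

```python
# Bounded recalculation: a late-arriving input value triggers a recompute of
# only the affected output range, not the analysis's entire history.

# Hourly rollup outputs keyed by hour index; inputs keyed by (hour, tag).
inputs = {(h, tag): 1.0 for h in range(24) for tag in ("a", "b")}
outputs = {h: 2.0 for h in range(24)}
recomputed = []

def affected_hours(hour):
    """For a simple hourly rollup, only that hour's output is stale. A
    calculation with a lookback window would widen this range."""
    return [hour]

def ingest_late_value(hour, tag, value):
    """Out-of-order data: update the input, then recalculate only the
    output values that depend on that hour."""
    inputs[(hour, tag)] = value
    for h in affected_hours(hour):
        outputs[h] = inputs[(h, "a")] + inputs[(h, "b")]
        recomputed.append(h)

ingest_late_value(5, "a", 3.0)
print(outputs[5])      # 4.0 - corrected
print(recomputed)      # [5]  - one hour recomputed, not all 24
```

Scenario (3), adding a new sub-element, changes the input set itself, so no bounded window exists: every historical output is stale at once, which is why it is treated differently.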
  • Guest · Aug 20, 2022
    The issue as Stephen stated is something that a PI admin must consider. Perhaps make this a configurable feature that is disabled by default. But there are strong use cases for having this feature that would not jeopardize the system, and it feels like a waste to create PI tags. For instance, say you have 50 assets with 2-4 modules each; each module has some attribute that is a PI tag, and you would like an average for each of the 50 assets. Given that the number of modules is not consistent, it takes a good bit of work to create these averages, where a rollup would work perfectly. However, to use the rollup we need to create 50 tags and add some load to the analysis service, when this is a simple average calculation with only 2-4 PI tag inputs. This could easily be calculated client side.
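The use case above amounts to a small per-asset average over a variable number of modules. A minimal sketch (asset names and values invented) of the calculation the commenter argues could run client side:

```python
# Per-asset rollup: average one module-level attribute across however many
# modules each asset happens to have.

assets = {
    "asset01": [4.0, 6.0],             # 2 modules
    "asset02": [3.0, 5.0, 7.0],        # 3 modules
    "asset03": [2.0, 4.0, 6.0, 8.0],   # 4 modules
}

def asset_averages(assets):
    """Average the module attribute values for each asset, regardless of
    how many modules the asset has."""
    return {name: sum(vals) / len(vals) for name, vals in assets.items()}

print(asset_averages(assets))
# {'asset01': 5.0, 'asset02': 5.0, 'asset03': 5.0}
```

With only 2-4 inputs per asset the per-request cost is tiny; the admin's counterargument is that the cost is multiplied by every client viewing the trend, for every point on it.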
  • Guest · Aug 20, 2022
    Thank you for the additional info. All data references, including the Analysis Data Reference, are only evaluated on demand. As stated earlier, if you have, say, 25 simultaneous users all trending this same Analysis Data Reference attribute, there would be data access activity for all the values needed for the Rollup, for the duration of the trend, multiplied by 25 (the number of users). Depending on the data density, this could be a large volume of data, and it gets repeated every time someone creates or opens a trend. With that in mind, do you feel the PI admin should be tasked with managing this? The PI admin would need to estimate data volume and configuration for each user. Thoughts? What would your PI admin be comfortable with?
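The multiplication described above is easy to put numbers on (the figures below are purely illustrative, not from the thread):

```python
# Back-of-envelope model of the data-access volume for an on-demand rollup:
# every client trending the attribute re-fetches all inputs for the full
# trend duration.

users = 25                        # simultaneous viewers of the same trend
inputs = 50                       # attributes feeding the rollup
trend_hours = 8
values_per_input_per_hour = 360   # one recorded value every 10 seconds

values_fetched = users * inputs * trend_hours * values_per_input_per_hour
print(values_fetched)             # 3,600,000 values moved for one shared trend
```

That is the sizing exercise the admin would have to repeat per attribute and per user population, which is the crux of the question posed above.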