Just talking about PI interfaces in a collective... I had written a PowerShell script to compare the current values on the primary against the current values on one secondary. I was trying to alert on the problem where PIBufss was not set up correctly to fan data. If PIBufss was not set up correctly, only the primary would get data, so users would get different answers depending on which collective member they queried.
The script looked at each PointSource and searched its tag list until it found an "active" tag. (All my PointSources are unique, so I don't care about Location1.) If the snapshot timestamps were not within a "reasonable" difference, that PointSource was flagged. There could have been false positives. It didn't really matter whether the data values matched; what mattered was the snapshot timestamps.
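The core of that check can be sketched in Python on sample data. This is not the original PowerShell script; the snapshot dictionaries, PointSource names, and five-minute tolerance below are all illustrative assumptions, and in practice the timestamps would be pulled from each collective member.

```python
from datetime import datetime, timedelta

def find_suspect_pointsources(primary, secondary, tolerance=timedelta(minutes=5)):
    """Flag PointSources whose snapshot timestamps diverge between members.

    primary / secondary: {pointsource: {tag: snapshot_timestamp}}
    Only the first tag found per PointSource is compared, standing in for
    the "active" tag from the original script.
    """
    suspects = []
    for ps, tags in primary.items():
        for tag, primary_ts in tags.items():
            secondary_ts = secondary.get(ps, {}).get(tag)
            if secondary_ts is None:
                continue  # tag missing on the secondary; skip to the next tag
            if abs(primary_ts - secondary_ts) > tolerance:
                suspects.append(ps)  # snapshots too far apart: fanning suspect
            break  # first comparable tag is treated as the active one
    return suspects

# Hypothetical snapshot data for two PointSources:
primary = {
    "OPC1": {"Tag.A": datetime(2024, 1, 1, 12, 0)},
    "MODB": {"Tag.B": datetime(2024, 1, 1, 12, 0)},
}
secondary = {
    "OPC1": {"Tag.A": datetime(2024, 1, 1, 12, 1)},  # within tolerance
    "MODB": {"Tag.B": datetime(2024, 1, 1, 9, 0)},   # hours stale: fanning broken
}
print(find_suspect_pointsources(primary, secondary))  # -> ['MODB']
```

As in the original approach, a large timestamp gap only raises a flag; it can still be a false positive for slow-changing points.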
I stopped reviewing that much after implementing the AF IT Monitor templates from AVEVA, which check for PIBufss problems. They still do not check for setup problems, though, and AFAIK PI Analytics can't compare values from specific collective members.
I mostly resolved that with a check-off procedure when installing new interfaces. The simplest approach is to pull all the interface values from all collective members into PI-SMT -> Data -> Current Values and sort by name or timestamp. Scanning the list usually tells you whether PIBufss fanning is working correctly. PIBufss -cfg is only helpful if you have just one interface on a server.
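The manual scan of Current Values can be automated along these lines. This is a sketch under assumptions, not a PI API call: the member names, tag, and tolerance are made up, and the timestamps stand in for what you would see in PI-SMT.

```python
from datetime import datetime, timedelta

def lagging_members(member_snapshots, tolerance=timedelta(minutes=5)):
    """Given {member: {tag: snapshot_timestamp}}, report members whose
    snapshot timestamps lag the freshest member by more than the tolerance."""
    lagging = {}
    for tag in next(iter(member_snapshots.values())):
        newest = max(tags[tag] for tags in member_snapshots.values())
        for member, tags in member_snapshots.items():
            if newest - tags[tag] > tolerance:
                lagging.setdefault(member, []).append(tag)
    return lagging

# Hypothetical current values pulled from each collective member:
snapshots = {
    "PRIMARY":   {"Sinusoid": datetime(2024, 1, 1, 12, 0)},
    "SECONDARY": {"Sinusoid": datetime(2024, 1, 1, 8, 0)},  # hours behind
}
print(lagging_members(snapshots))  # -> {'SECONDARY': ['Sinusoid']}
```

A non-empty result is the programmatic equivalent of spotting a stale column when you sort the Current Values list by timestamp.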
The biggest problem I have is that every interface on a given node must use the same PI Server naming scheme as PIBufss. Also, the PIBufss wizard sometimes comments out the server names in the 4 .INI files.
So, I'm more interested in being alerted to the problem; I only worry about finding it and then fixing it. Since my default interface setup only sends to the primary, I can re-initialize the secondaries from the primary and the secondaries are in sync again.
No, using a PI to PI interface to synchronize data between members of a collective is the wrong approach. We already have the concept of primary/secondary members in the collective; however, the primary member needs to be the write-only member, with the rest of the nodes read-only (example: need to get point configuration or data? You'd get it from the secondaries, although you can choose to grab it from the primary depending on the time sensitivity of the data pull). However, the primary member should not be static like it currently is. The primary member would need to be dynamic and shift automatically when the primary goes down.
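The dynamic-primary idea can be illustrated with a toy model. This is purely conceptual, not how PI collectives behave today; the class, member names, and promotion-by-priority rule are all assumptions for the sake of the sketch.

```python
class Collective:
    """Toy model of a collective where the primary role shifts automatically:
    writes go only to the current primary, reads can go to any healthy
    member, and the next healthy member is promoted when the primary fails."""

    def __init__(self, members):
        self.members = list(members)  # ordered by promotion priority
        self.down = set()

    @property
    def primary(self):
        # The first healthy member in priority order holds the primary role.
        for m in self.members:
            if m not in self.down:
                return m
        raise RuntimeError("no healthy members")

    def mark_down(self, member):
        self.down.add(member)  # next access to .primary promotes automatically

c = Collective(["NodeA", "NodeB", "NodeC"])
print(c.primary)        # NodeA
c.mark_down("NodeA")
print(c.primary)        # NodeB (automatic promotion, no static primary)
```

Real systems elect a primary by consensus among the members rather than by a fixed priority list, but the observable behavior the poster asks for is the same: the write role moves when the current primary goes down.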
It should never be the responsibility of the data provider to ensure all members of a collective have the data they need. For example, review how MongoDB achieves HA requirements and data integrity across members. Its concept of sharding would likely be a similar approach to what is required here for sectioning data, but in this case by timestamp.
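Sharding by timestamp can be sketched as routing each read or write by where its timestamp falls in a set of range boundaries, in the spirit of MongoDB's ranged sharding. The boundaries and shard names below are invented for illustration.

```python
from datetime import datetime
from bisect import bisect_right

# Assumed split points: each shard owns a contiguous time range.
boundaries = [datetime(2023, 1, 1), datetime(2024, 1, 1)]
shards = ["shard-archive", "shard-2023", "shard-current"]

def route(ts):
    """Return the shard responsible for a given timestamp."""
    return shards[bisect_right(boundaries, ts)]

print(route(datetime(2022, 6, 1)))  # shard-archive
print(route(datetime(2023, 6, 1)))  # shard-2023
print(route(datetime(2024, 6, 1)))  # shard-current
```

The point of the analogy is that the storage layer, not the data provider (the interface), decides where data lands and how it is replicated.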
Syncing data using the PI to PI Interface is not always a viable solution. We need to be able to get a notification when data are missing from one member of a collective. A health point making sure that all collective members have the same information would be helpful.