AVEVA™ PI System™ Feedback Portal


Status: No Status
Created by: Guest
Created on: Nov 14, 2022

Improve Data Cache Performance by parallelizing Data Archive update fetches

Currently, when the Data Cache is signed up for updates on multiple Data Archives, updates from each archive are fetched in series, one after another. When many Data Archives are involved, especially if they are geographically dispersed, this can make the baseline update cycle take a long time. If one or more Data Archives lose connectivity, the resulting slowdown can lead to loss of signups on the other Data Archives.
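
Purely as an illustration of the problem (the Data Cache internals are not public), here is a minimal sketch of a serial update cycle. The archive names, latencies, and the fetch_updates() call are hypothetical stand-ins, not the actual implementation; the point is that the cycle time is the sum of all per-archive response times, so one slow or disconnected archive delays everything queued behind it.

```python
import time

# Hypothetical per-archive latencies in seconds; a real cycle would issue
# network calls to each Data Archive's update queue instead.
ARCHIVE_LATENCY = {"da-houston": 0.2, "da-oslo": 0.4, "da-singapore": 5.0}

def fetch_updates(archive: str) -> str:
    """Stand-in for pulling queued updates from one Data Archive."""
    time.sleep(ARCHIVE_LATENCY[archive])      # simulate the network round trip
    return f"updates from {archive}"

def serial_update_cycle(archives):
    """Current behaviour as described above: archives are polled one after
    another, so the cycle takes the SUM of all response times (~5.6 s here),
    and the slowest archive delays every archive polled after it."""
    start = time.monotonic()
    results = {archive: fetch_updates(archive) for archive in archives}
    print(f"serial cycle took {time.monotonic() - start:.1f}s")
    return results

serial_update_cycle(list(ARCHIVE_LATENCY))
```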

Fetching updates from each Data Archive in parallel would let the Data Cache scale much better for systems with many sites of varying reliability. With this enhancement, the overall update fetch would take roughly as long as the slowest-responding Data Archive, instead of the sum of all Data Archive response times.
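
Again only as a sketch of the requested behaviour, not of how the Data Cache would actually implement it: with each archive fetched on its own worker, the cycle time approaches that of the slowest single archive, and an archive that fails or exceeds a timeout is reported separately instead of costing results from the healthy ones. fetch_updates(), the archive list, and the timeout value are the same hypothetical stand-ins as above.

```python
from concurrent.futures import ThreadPoolExecutor
import time

ARCHIVE_LATENCY = {"da-houston": 0.2, "da-oslo": 0.4, "da-singapore": 5.0}

def fetch_updates(archive: str) -> str:
    """Same stand-in for a per-archive fetch as in the serial sketch."""
    time.sleep(ARCHIVE_LATENCY[archive])
    return f"updates from {archive}"

def parallel_update_cycle(archives, per_archive_timeout: float = 10.0):
    """Requested behaviour: all archives are fetched concurrently, so the
    cycle takes roughly as long as the slowest archive (~5.0 s here) rather
    than the sum; an archive that errors or exceeds the timeout is recorded
    as failed while results from the healthy archives are kept."""
    start = time.monotonic()
    results, failed = {}, []
    with ThreadPoolExecutor(max_workers=len(archives)) as pool:
        futures = {archive: pool.submit(fetch_updates, archive) for archive in archives}
        for archive, future in futures.items():
            try:
                results[archive] = future.result(timeout=per_archive_timeout)
            except Exception:            # timeout or connection failure
                failed.append(archive)   # note the bad archive, keep the rest
    print(f"parallel cycle took {time.monotonic() - start:.1f}s")
    return results, failed

parallel_update_cycle(list(ARCHIVE_LATENCY))
```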
