AVEVA™ PI System™ Feedback Portal

Welcome to our feedback site!


We created this site to hear your enhancement ideas, suggestions, and feedback about AVEVA products and services. All feedback you share here is monitored and reviewed by AVEVA product managers.

To start, take a look at the ideas in the list below and VOTE for your favorite ideas submitted by other users. POST your own idea if it hasn’t been suggested yet. Include COMMENTS and share relevant business case details that will help our product team get more information on the suggestion. Please note that your ideas and comments are visible to all other users.


This page is for feedback specifically about AVEVA PI System. For links to our other feedback portals, please see the RESOURCES tab below.

Status: No status
Categories: Data Archive
Created by: Guest
Created on: Aug 20, 2022

Provide the capability to apply compression to uncompressed data, i.e., recompress using an offline archive utility. See work item 6944OSI8.
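
For context on what such a utility would have to do: AVEVA documents PI compression as a swinging-door test governed by each tag's CompDev. The sketch below is a minimal, corridor-style swinging-door pass over already-archived (timestamp, value) events; the function name and the offline-replay approach are illustrative assumptions, not part of any shipping PI utility.

```python
from typing import List, Tuple

Event = Tuple[float, float]  # (timestamp in seconds, value)


def swinging_door(events: List[Event], comp_dev: float) -> List[Event]:
    """Replay archived events through a corridor-style swinging-door
    test and keep only the events a compressing tag would have stored.

    comp_dev is the deviation half-width in engineering units,
    analogous to a PI tag's CompDev. Assumes timestamps strictly
    increase (duplicates would need a de-dup pass first)."""
    if len(events) <= 2:
        return list(events)

    kept = [events[0]]                  # the first event is always kept
    anchor_t, anchor_v = events[0]      # last kept event
    slope_min, slope_max = float("inf"), float("-inf")
    candidate = events[0]               # newest event not yet kept

    for t, v in events[1:]:
        dt = t - anchor_t
        if dt <= 0:
            continue                    # ignore out-of-order timestamps
        # The line from the anchor to any future kept event must pass
        # within comp_dev of this event, which bounds its slope:
        hi = (v + comp_dev - anchor_v) / dt
        lo = (v - comp_dev - anchor_v) / dt
        if max(slope_max, lo) > min(slope_min, hi):
            # Corridor collapsed: keep the previous event and restart
            # the corridor from it.
            kept.append(candidate)
            anchor_t, anchor_v = candidate
            dt = t - anchor_t
            slope_min = (v + comp_dev - anchor_v) / dt
            slope_max = (v - comp_dev - anchor_v) / dt
        else:
            slope_min = min(slope_min, hi)
            slope_max = max(slope_max, lo)
        candidate = (t, v)

    kept.append(candidate)              # always keep the latest event
    return kept
```

Run over a dense, uncompressed series with comp_dev set to the tag's intended CompDev, this keeps roughly the events that live compression would have kept, which is what recompressing in place would mean.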

  • HemanthKumarKempula | Sep 10, 2024

    This approach would be beneficial for compressing historical data, reducing the storage needed to historize values, speeding up retrieval of large data sets, and resolving slowness issues. It would also be worth considering options like time-weighted average values and interpolated values at set intervals (see the time-weighted average sketch at the end of this thread).

  • jerome.boudon | Mar 25, 2024

    This would be extremely useful for us. It would help reduce the volume of data in our archives and ensure that the data match the actual compression rules for each tag.

  • AlistairTCO | Mar 29, 2023

    This would be massively useful, to the extent that I actually wrote a tool to apply simple lossless compression. We have thousands of tags with no compression at all, and many are collecting at a 1-second scan rate. Even after we have fixed these going forward, we still have several years' worth of archives with this data in them. This data makes retrieval and analysis slow and inefficient.

  • Guest | Aug 20, 2022
    We are finding this to be an issue right now. We are bringing over data from many remote sites, and many have very little compression set. This would be a great idea.
  • Guest | Aug 20, 2022
    It would be interesting to see a cost analysis on this: how much money would it save the user?
  • caffreys_col | Aug 20, 2022
    I'm finding that I'm getting two data values for the same timestamp (from bad compression/exception settings), which causes my PE calcs to error out due to the number of archive reads required for yearly data when I'm only in February! Having tools to remove duplicate values at the same timestamp would be great (see the de-duplication sketch at the end of this thread). It would also help when importing data from other systems with very little compression.
  • Guest | Aug 20, 2022
    Hi, exception/compression settings should not give you two values with the same timestamp. Have you contacted tech support?
  • Christoph Rose | Aug 20, 2022
    This would be very useful. When connecting new data sources, it takes some time to work out useful exception and compression settings. Once we have figured out good values, it would be nice to easily apply them retroactively to all the data stored since the beginning, so that any analyses run on consistently compressed data.
  • Guest | Aug 20, 2022
    This capability would be a great time saver for fixing 'bloated' archives when fast scan classes (1 second) cause the archives to fill four times faster than estimated. PI to PI seems to be our only way to process these down to a reasonable size.
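
Following up on HemanthKumarKempula's point above about time-weighted averages at set intervals: the arithmetic is just weighting each value by how long it was in effect. Below is a minimal sketch, assuming stepped (staircase) interpolation over plain (timestamp, value) pairs; it illustrates the calculation only and is not an AVEVA API.

```python
from typing import List, Tuple

Event = Tuple[float, float]  # (timestamp in seconds, value)


def time_weighted_average(events: List[Event], start: float, end: float) -> float:
    """Time-weighted average over [start, end], treating the signal as
    stepped: each value holds until the next event's timestamp.

    Assumes events are sorted by timestamp and at least one event is
    at or before `start`, so the whole interval is covered."""
    if end <= start:
        raise ValueError("end must be after start")
    total = 0.0
    for i, (t, v) in enumerate(events):
        seg_start = max(t, start)
        seg_end = end if i + 1 == len(events) else min(events[i + 1][0], end)
        if seg_end > seg_start:
            total += v * (seg_end - seg_start)  # weight value by time held
    return total / (end - start)
```

For example, time_weighted_average([(0, 10.0), (60, 20.0)], 0, 120) gives 15.0, since each value holds for half the window.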
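And on caffreys_col's duplicate-timestamp issue: a recompression or repair pass could fold in de-duplication. A minimal sketch, assuming exported (timestamp, value) events and a keep-last policy (the policy choice is an assumption; keep-first would be the same shape):

```python
from typing import Dict, List, Tuple

Event = Tuple[float, float]  # (timestamp in seconds, value)


def drop_duplicate_timestamps(events: List[Event]) -> List[Event]:
    """Collapse events that share a timestamp, keeping the last-written
    value for each, and return the survivors in timestamp order."""
    latest: Dict[float, float] = {}
    for t, v in events:
        latest[t] = v                   # later writes win
    return sorted(latest.items())
```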