Enhanced printout performance and storage efficiency (v.24)

In version 24, we've introduced a new feature that optimizes the loading and storage of printouts.

This is part of our ongoing strategy for database size optimization and monitoring. These efforts are aimed not only at cleaning up outdated information but also at finding ways to reduce the growth rate of the databases.

One of the fastest-growing tables in our clients' databases is the Document Print Images Table, which stores the data of a printout exactly as it was obtained from the data source at the time of printing. It therefore became one of our primary optimization targets. The changes we've implemented not only reduce CPU usage and server time but also decrease database storage requirements, making printouts load significantly faster.

Database storage optimization

Before a printout is generated, the system now analyzes all fields to determine if every table from the data source is necessary. If a table is not used in the printout, it will not be loaded.

This change ensures that unnecessary data is not saved in the database, which helps reduce its growth. 
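The pruning step can be sketched as follows. This is an illustrative in-memory model only, not the product's actual implementation; all identifiers (tables, fields, load_printout_data) are hypothetical:

```python
# Sketch of the optimization described above: collect the tables actually
# referenced by the printout's fields and load only those. All names here
# are illustrative, not the product's real API.

def tables_used_by_printout(fields):
    """Return the set of data-source tables referenced by any printout field."""
    return {field["table"] for field in fields}

def load_printout_data(data_source_tables, fields):
    """Load (and later store) only the tables the printout actually uses."""
    used = tables_used_by_printout(fields)
    return {name: rows for name, rows in data_source_tables.items() if name in used}

# A data source with three tables, of which the printout references two:
tables = {"Sales Orders": ["..."], "Parties": ["..."], "Audit Log": ["..."]}
fields = [{"table": "Sales Orders", "column": "Document No"},
          {"table": "Parties", "column": "Name"}]
loaded = load_printout_data(tables, fields)
# "Audit Log" is neither loaded nor saved with the printout image
```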

Here is a real-life example with an actual printout from one of our customers' databases: a printout image that previously took up 17,868 KB now takes up only 240 KB after the optimization (a reduction of roughly 98.7%).


Loading time optimization

Using less data results in faster printout processing, i.e., less waiting for users while the printout loads and is displayed.

We've measured the loading time of a real client’s printout before and after optimization. We used the built-in Performance Benchmarking mode of the desktop client to measure the improvement.

Result: Before optimization, the printout took 4112 ms to load; after it, the same printout loads in just 81 ms, roughly 50 times faster.
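For readers who want to reproduce such measurements outside the built-in Performance Benchmarking mode, a generic timing helper is enough. This is a sketch; `load_printout` in the usage comment is a hypothetical stand-in for whatever operation you want to time:

```python
import time

def measure_ms(func, *args, **kwargs):
    """Run func and return its result together with the elapsed time in ms."""
    start = time.perf_counter()
    result = func(*args, **kwargs)
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    return result, elapsed_ms

# Usage (load_printout is hypothetical):
# result, ms = measure_ms(load_printout, printout_id)
```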

Here are the details before optimization:

Proc: Load Printout Data (15:08:55.171, 0 ms, Total: 4112 ms)
Data Source: Load Data Async (15:08:55.171, 0 ms, Total: 4112 ms)
Load Data (15:08:55.172, 4 ms, Total: 4110 ms)
Store Transaction (15:08:55.176, 8 ms, Total: 4106 ms)
Get Full Schema (15:08:55.176, 0 ms, Total: 12 ms)
Reference_Path-s (15:08:55.176, 1 ms, Total: 12 ms)
Crm_Sales_Orders (15:08:55.176, 4 ms, Total: 4 ms)
Fill Relations (15:08:55.189, 0 ms, Total: 3772 ms)
Sales Orders (15:08:55.189, 3772 ms, Total: 3772 ms)
Fill Extension Fields (15:08:58.962, 1 ms, Total: 312 ms)
Sales Orders (15:08:58.962, 12 ms, Total: 12 ms)
Enterprise Companies (15:08:58.976, 4 ms, Total: 4 ms)
Parties 1 (15:08:58.981, 1 ms, Total: 1 ms)
Sales Orders 1 (15:08:58.983, 56 ms, Total: 56 ms)
Store Orders (15:08:59.040, 233 ms, Total: 233 ms)
Populate Column Extended Properties (15:08:59.274, 1 ms, Total: 1 ms)
Commit Transaction (15:08:59.282, 1 ms, Total: 1 ms)

Here are the details after optimization:

Proc: Load Printout Data (15:09:33.052, 0 ms, Total: 81 ms)
Data Source: Load Data Async (15:09:33.052, 1 ms, Total: 81 ms)
Load Data (15:09:33.053, 3 ms, Total: 79 ms)
Store Transaction (15:09:33.057, 1 ms, Total: 76 ms)
Get Full Schema (15:09:33.057, 0 ms, Total: 11 ms)
Reference_Path-s (15:09:33.057, 1 ms, Total: 11 ms)
Crm_Sales_Orders@Продажби (15:09:33.057, 3 ms, Total: 3 ms)
Fill Relations (15:09:33.068, 0 ms, Total: 46 ms)
Sales Orders (15:09:33.068, 46 ms, Total: 46 ms)
Fill Extension Fields (15:09:33.115, 0 ms, Total: 16 ms)
Sales Orders (15:09:33.115, 10 ms, Total: 10 ms)
Enterprise Companies (15:09:33.127, 3 ms, Total: 3 ms)
Populate Column Extended Properties (15:09:33.132, 1 ms, Total: 1 ms)
Commit Transaction (15:09:33.140, 1 ms, Total: 1 ms)
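Both traces above share one line format: "Name (timestamp, self ms, Total: total ms)". Assuming that format holds, a small parser makes the before/after totals easy to compare programmatically. This is our own reading of the output, not an official tool:

```python
import re

# Matches lines like: Fill Relations (15:08:55.189, 0 ms, Total: 3772 ms)
LINE = re.compile(
    r"^(?P<name>.+?) \((?P<ts>[\d:.]+), (?P<self>\d+) ms, Total: (?P<total>\d+) ms\)$"
)

def parse_trace(text):
    """Map each step name to its total duration in milliseconds."""
    totals = {}
    for line in text.strip().splitlines():
        match = LINE.match(line.strip())
        if match:
            totals[match.group("name")] = int(match.group("total"))
    return totals

before = parse_trace("Fill Relations (15:08:55.189, 0 ms, Total: 3772 ms)")
after = parse_trace("Fill Relations (15:09:33.068, 0 ms, Total: 46 ms)")
```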

More info, more control

In addition to optimizations, we have introduced new warnings.

If the data source processes an unusually large number of records (over 20,000) or if it takes more than 10 seconds, the system will show an information balloon. The warning will also be logged in the Information Messages table, allowing database administrators to review it later if an investigation is necessary.

This will not only prevent accidental errors during configuration but also make it much easier to identify potentially problematic printouts.
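The warning logic can be sketched like this. The thresholds (20,000 records, 10 seconds) come from the text above, but the function and the log list standing in for the Information Messages table are hypothetical:

```python
# Thresholds taken from the article; everything else is illustrative.
RECORD_LIMIT = 20_000
TIME_LIMIT_MS = 10_000

def check_printout_warnings(record_count, elapsed_ms, log):
    """Collect warnings for unusually heavy data-source runs and log them."""
    warnings = []
    if record_count > RECORD_LIMIT:
        warnings.append(
            f"Data source processed {record_count} records (limit {RECORD_LIMIT})."
        )
    if elapsed_ms > TIME_LIMIT_MS:
        warnings.append(
            f"Data source took {elapsed_ms} ms (limit {TIME_LIMIT_MS} ms)."
        )
    for warning in warnings:
        log.append(warning)  # stands in for the Information Messages table
    return warnings
```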


This is just part of our strategy for handling large datasets and performance optimization for what we call "big databases." We will continue refining our approach based on real-world cases and client needs. We also conduct ongoing monitoring of our server processes and include optimizations in each new version to improve overall system performance.
