

W H I T E P A P E R
© 2017 Persistent Systems Ltd. All rights reserved.
www.persistent.com
—
Secure software assets – decide who will have access to which software: modeling tools should be accessible only to designers, the ETL tool and its jobs only to the ETL team, and so on. It is recommended to install each software package on a dedicated machine or virtual machine, as this makes it easier to track usage and manage privileges, and access can be granted to a dedicated group rather than to individual users.
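As an illustration of granting access by group rather than by individual user, the sketch below maps groups to the software they may use; the group and tool names are hypothetical examples, not from the original text:

```python
# Minimal sketch of group-based software access control.
# Group and tool names below are hypothetical examples.
ACCESS_GROUPS = {
    "designers": {"modeling_tool"},
    "etl_team": {"etl_tool", "etl_scheduler"},
    "report_users": {"bi_dashboard"},
}

def can_access(user_groups, tool):
    """Grant access if any of the user's groups is entitled to the tool."""
    return any(tool in ACCESS_GROUPS.get(g, set()) for g in user_groups)
```

Because privileges attach to groups, adding a user to the ETL team is a single group-membership change rather than a series of per-tool grants, which is what makes tracking and auditing easier.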
—
Monitor system usage, ensure compliance, and identify unusual access patterns or events – such as a large dataset query or failed attempts to access sensitive data without permission – and investigate them; it does no good to simply record the events. Some events to monitor are:
—
User connections, login/logout date/time
—
Failed attempts to connect/reconnect
—
Data access activity, especially ad hoc queries
—
Database transactions or DBA-related activities
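As a sketch of monitoring the events above, the snippet below flags users with repeated failed connection attempts in an audit log; the record format, event names, and threshold are assumptions for illustration:

```python
from collections import Counter

# Hypothetical audit-log records: (user, event) pairs.
def flag_suspicious(records, max_failures=3):
    """Return users whose failed-login count reaches the threshold."""
    failures = Counter(u for u, event in records if event == "login_failed")
    return sorted(u for u, n in failures.items() if n >= max_failures)

events = [
    ("alice", "login_ok"),
    ("bob", "login_failed"),
    ("bob", "login_failed"),
    ("bob", "login_failed"),
    ("carol", "login_failed"),
]
print(flag_suspicious(events))  # → ['bob']
```

The same pattern extends to the other events listed – e.g. counting ad hoc queries per user or flagging unusually large result sets – with investigation as the follow-up step, not just logging.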
—
Every new component or upgrade, user-group change, or system change needs to be examined from a security angle to make sure the system is not compromised at any point. Many organizations require a sign-off from a security manager as part of the deployment process.
7.3 Enhancements to the reference publication
7.3.1 Performance optimization based on our own experience
The first section below lists some performance optimization practices that have worked well in our projects; some of these are summarized in the second section below.
7.3.1.1 Best practices
1. Performance should never be an afterthought. In projects where performance and scalability requirements are clear, make sure that performance is addressed from the beginning in the overall design. If performance requirements are not clear, make sure this situation is corrected.
2. At the same time, don't optimize too soon. There is no contradiction with the above point if performance is measured at each step and optimizations are made based on these measures.
3. Plan for performance testing with real data before going live, both getting the data in (via ETL) and out (via queries from reports and dashboards). Confirm your performance SLAs with real data. Also, preferably perform this testing in an isolated staging area separate from development, with workload characteristics simulated to at least 50% of the peak or average load anticipated in production, and on hardware similar to that of production but at a reduced scale.
4. Leave automated performance tests behind to make sure that performance is measured in every cycle. For products or solutions that are developed continuously, these tests can be part of periodic or regular sprint activity carried out on stable builds. Include canned queries from reports and dashboards, as well as ad-hoc queries. Performance testing of the latter may be challenging as, by definition, ad-hoc means… ad-hoc. You can (i) re-examine the business requirements to come up with your own queries; (ii) observe how initial business user testers query the system; or (iii) look at the system logs for ad-hoc query sessions if the system is in production. A good performance and scalability test suite should contain multi-user sessions and, for each session, think time between queries. The workload should be realistic, matching