Application Management FAQ
Answers to Frequently Asked Questions related to Application Management
- Why Archiving?
- ...We still have plenty of free disk space
- ...We use block level differential backup
- ...Those 1 million small files consume only 100 MB.
- We agree that archiving is really important, but business does not decide...
- Log and Trace File Management
- ...Why should I rotate log files and trace files daily?
- ...My log and trace files are very small. Why should I tar and zip or remove those?
- I am working in Application Support, I am not a DBA...
- ...Why should I understand Database Transaction Logs / Redo- and Archive Log Files?
- ...Why should I care about invalid database objects?
- Other
- I want to check my application for upcoming problems. What should I check?
- Our monitoring system only helps the UNIX and DBA teams, but does not detect application problems.
- What can I do so that the next application will be easier to manage?
- A new application should be connected to my application - should I be concerned?
- The IT and DBA teams are talking about a standby solution for disaster recovery. Does this cause any impact on my application?
- The auditor complained about the missing operations manual. How do I write one?
Why Archiving?
FAQ: We still have plenty of free disk space
Answer:
- Time to Restore: If you ever need to restore, you don't want to extend your downtime by hours restoring data you don't really need! And in this stress situation you don't have time to start investigating which data to restore now and which data to restore later.
Even if backup costs don't matter, unplanned downtime does!
- Backup Volume: It is very likely that your backup process backs up even old, static, unchanged data, unless you have a very specific backup configuration that separates the backup of historic data.
- Backup Costs: How much does your internal IT department or external service provider charge per GB of backup volume?
- Backup Duration and Server Load:
- How long does the backup process run? Is your backup window limited? Even in the case of "online backup" there will be a negative performance impact on your nightly batch jobs if the backup did not finish before!
- What is the CPU and disk load caused by the backup process? Running backup in multiple streams can quickly consume 2 or even more CPUs!
Note: Even if your backup excludes files not changed since the last backup, you should be aware that database file headers are changed at each checkpoint - many times a day - even if the data remain unchanged!
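To get a feeling for how much static data your nightly backup re-reads, you can sum up the flat files that have not been modified for, say, 90 days. This is a sketch assuming GNU find and awk; /app/data is a placeholder path. As explained in the note above, database files will not show up here because their headers change at every checkpoint:
# total size of files not modified within the last 90 days (GNU find assumed)
find /app/data -type f -mtime +90 -printf '%s\n' \
  | awk '{ sum += $1 } END { printf "%.1f GB of static data\n", sum/1024/1024/1024 }'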
FAQ: We use block level differential backup
Answer: Block level differential backup needs to READ all data blocks to detect the changed ones. Of course only changed blocks are sent to the backup tapes, but reading hundreds of gigabytes of unchanged blocks can significantly load and slow down your disk system! And in the case of shared disk systems (storage arrays), this can even impact other applications.
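If you want to verify this read load yourself, watch the disk and CPU statistics on the database server while the differential backup runs. This is a sketch assuming Linux with the sysstat package installed; adapt the tools to your platform:
# extended disk statistics, sampled every 10 seconds during the backup window
iostat -x 10
# snapshot of the top CPU consumers while the backup streams are running
top -b -n 1 | head -20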
Back to top of page
FAQ: Those 1 million small files consume only 100 MB.
Answer:
- That's just a "backup killer" - ask your backup team!
Even if you back up only changed files daily, you will still have a weekly or monthly full backup.
- Reading 1 million small files takes MUCH more time than reading one big file! At a backup speed of 20 MB/second you can back up a 100 MB file in 5 seconds, but reading 1 million files of 100 bytes each will run for an hour. And watch your CPU load during this time - you will see increased system / kernel CPU consumption.
- And to back up a file which contains only 100 or even only 1 byte, the backup system needs to insert a complete record containing file name, date, tape ID etc. into the database of the backup system!
- And don't ask about the restore performance.
- Are you sure that those files consume only 100 MB? Even if the file size is only 100 bytes per file, each file usually consumes a full block in the file system, which is typically 1 KB to 8 KB. On a UNIX-based system use the command
du -k <filename>
to check the real disk usage (see also the sketch after this list).
- You just won't find the file you need unless you know the EXACT file name.
- Commands like "ls -ltr *" or "dir *" are not usable, and if you use a graphical user interface to search for your file, your server might even freeze when you click on that directory tree... do NOT try that on a production system!
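To verify how much disk space such a directory really consumes, and how many files it holds, compare the apparent size with the allocated blocks. This is a sketch assuming GNU coreutils; /app/trace is a placeholder path:
# apparent size (sum of file lengths) vs. real allocated disk space
du -sh --apparent-size /app/trace
du -sh /app/trace
# count the files - expect this to take a while with millions of inodes
find /app/trace -type f | wc -l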
Recommendation:
- tar and compress / zip all files older than <x> days (see the sketch at the end of this answer). If you just compress / zip each single file, you do not solve the problem caused by storing a huge number of files.
- Archive or purge as early as possible
- Move the files to a completely separate filesystem and implement a special backup and restore policy for this filesystem. In case of data loss you might decide to restore this filesystem AFTER all other files have been restored and the system is up and running fine. But you surely need to make and document this decision in advance - in case of an unplanned outage requiring a restore there is no time for discussion!
- Store the files in the database using BLOB or CLOB fields.
- Instead of millions of small files you can efficiently back up a few big files.
- Using a database partitioning concept you can easily and efficiently remove old data instead of deleting millions of small files (inode operations).
Of course the application manager cannot change the application himself, but he can submit this as an enhancement request for future releases.
Because of the high importance of this topic it needs to be addressed already during the requirements-gathering phase of the project. Therefore this is one of the 150 non-functional requirements in our product "Template for Non-Functional Requirements".
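For the first recommendation above (tar and compress old files), here is a minimal sketch, assuming GNU tar and GNU find; the paths and the 30-day threshold are placeholders:
# pack all logfiles older than 30 days into one dated, compressed archive,
# then remove the originals (GNU tar's --remove-files)
find /app/logs -maxdepth 1 -name '*.log' -mtime +30 -print0 \
  | tar --null --files-from=- --remove-files -czf /app/logs/archive/logs_$(date +%Y%m%d).tar.gz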
Back to top of page
FAQ: We agree that archiving is really important, but business does not decide...
Answer:
- Making no decision about archiving and purging can cause non-compliance with laws like the "Data Protection Act", "Datenschutzgesetz (DSG)" etc.!
- Use the "Template Archiving and Purging Requirements" and discuss it with the business departments. This template will help you communicate with business and extract and document the required information and decisions.
Back to top of page
Log and Trace File Management
FAQ: Why should I rotate log files and trace files daily?
Answer:
- Identify problems fast:
- Searching a 100 MB or even multi-GB logfile takes much more time...
- You definitely don't want to wait minutes, see your workstation run out of memory and start to swap, or even get an error message like "File too big - cannot open" when opening the logfile.
- If you are supporting a system via a satellite link, scrolling through huge logfiles is a huge pain, and viewing the file locally requires downloading the complete file.
- If you are on-call using a dial-up connection, scrolling through or downloading a huge logfile will cause some delay!
- If you are on-call abroad using GPRS / UMTS / HSDPA / EDGE mobile connectivity, you don't want to download a 100 MB logfile at a fee of 1.- to 10.- Euro/MB!
- Purging or archiving of old log information: A file which is never switched / rotated shows a new time stamp every day, so the complete file is backed up every day.
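A minimal daily rotation sketch, assuming the application re-opens its logfile on SIGHUP (check your application's actual mechanism; all paths are placeholders):
#!/bin/sh
# rotate the application logfile, then tell the process to re-open it
LOG=/app/logs/server.log
mv "$LOG" "$LOG.$(date +%Y%m%d)"
kill -HUP "$(cat /app/run/server.pid)"
# compress rotated logs after 7 days, purge them after 90 days
find /app/logs -name 'server.log.*' ! -name '*.gz' -mtime +7 -exec gzip {} \;
find /app/logs -name 'server.log.*.gz' -mtime +90 -delete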
Back to top of page
FAQ: My log and trace files are very small. Why should I tar and zip or remove those?
Answer:
- You just won't find the file you need fast enough...
- Sometimes the file name contains the UNIX process ID, e.g. <pid>.trc - after a few weeks or months your trace files will contain information from different processes, which causes confusion when investigating a problem.
- See "What's wrong storing 1 million small files on disk"
I am working in Application Support, I am not a DBA...
There might be a big gap between
- what your DBAs are actually doing based on their internal job description
- and what you assume they are responsible for.
Recommendation:
- The "Template DBA and Application Support - Job Description" lists 60 tasks executed by either the DBA or the Application Support team. Using this template you can easily document to whom each of those 60 tasks is assigned.
- If you want to establish a formal agreement about the tasks executed by the DBA team and the service quality, then we recommend our "Template for an Operations Level Agreement (OLA) with the DBA Team".
Back to top of page
FAQ: Why should I understand Database Transaction Logs / Redo- and Archive Log Files?
Answer: Although the management of transaction logfiles / redo and archive logfiles is the system DBA's job,
- you should know: Each insert / update / delete not only changes the data block containing the data, but additionally creates "journal" information about this change in a separate file. Those files are called "transaction logfiles" or "redo / archive logfiles" and are used to "roll forward" after a crash or restore. This feature allows you to recover up to the last committed transaction even after restoring last night's backup!
- You need to be aware that unusually high insert / update / delete activity will create an unusually high amount of transaction log or archive log files, which can cause that filesystem to run full and freeze your database!
- Therefore you need to test such an unusual job on a full-sized test database ("pre-production environment") and ask your DBA to measure the amount of transaction log or archive log created.
Examples:
- After years you finally got the permission to archive and remove all data older than 6 months. You execute the SQL command
DELETE FROM <name of order_detail_table>
WHERE order_date < sysdate - 186;
- Instead of processing data daily or weekly you run the same job for a whole week or month. (Probably you had to "back out" the last months due to some wrong data.) The same job that runs fine daily will now create 7x or even 30x more transaction log / archive log - and that can become a significant problem.
- After checking your daily or monthly job you detect some wrong data. You back the job out and re-run it. Rerunning the job will not just double your transaction log volume on that day: as the "backout" will update changed data and delete inserted data, your transaction log volume will triple!
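During such a test run on the pre-production environment, a simple watch loop helps you spot the archive log filesystem filling up. This is a sketch; /oracle/archivelogs is a placeholder - ask your DBA for the real archive log destination:
# print the free space of the archive log filesystem every 5 minutes
while true; do df -k /oracle/archivelogs | tail -1; sleep 300; done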
We highly recommend reading our free white paper "Transaction Log Volume / Archive Log Volume"!
Note: A recently added "standby database", updated by applying transaction log files transferred to the remote destination, can cause surprises - because of limited network capacity to the remote data centre, logfiles cannot be transferred fast enough....
Back to top of page
FAQ: Why should I care about invalid database objects?
Answer: We highly recommend that you read our free whitepaper "The Danger of Invalid Database Objects".
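If you want a first quick check yourself, your DBA can run something like the following. This is a sketch assuming an Oracle database and sqlplus access; adapt the query to your DBMS:
# list invalid database objects grouped by owner and type (Oracle assumed)
sqlplus -s / as sysdba <<'EOF'
SELECT owner, object_type, COUNT(*) AS invalid_count
FROM   dba_objects
WHERE  status = 'INVALID'
GROUP  BY owner, object_type;
EOF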
Back to top of page
Other
FAQ: Our monitoring system only helps the UNIX and DBA teams, but does not detect application problems.
Answer: Monitoring frameworks provide good "out of the box" modules for monitoring hardware, operating systems and popular database servers, but special monitoring modules are available only for the most popular applications, typically at additional cost.
But those monitoring frameworks provide "hooks" / APIs to integrate your own application monitoring scripts. In some cases the application vendor offers such scripts to integrate his application into one or more of the leading monitoring frameworks. No matter who writes and integrates such application monitoring scripts, it takes time which needs to be planned, budgeted and allocated.
Our free whitepaper "The Importance of Application Level Monitoring" evaluates this issue and clearly describes the difference between
- Database Monitoring,
- Server Monitoring and
- Application Level Monitoring.
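Such an application-level check can be as simple as a small script which the monitoring framework calls via its "hook" / API and which reports via its exit code. This is a hypothetical sketch - the path, threshold and check logic are assumptions to be adapted to your application:
#!/bin/sh
# hypothetical check: alarm if no interface file was processed recently
RECENT=$(find /app/interface/processed -type f -mmin -15 | wc -l)
if [ "$RECENT" -eq 0 ]; then
  echo "ALARM: no interface files processed in the last 15 minutes"
  exit 2   # a non-zero exit code raises an alarm in most frameworks
fi
echo "OK: $RECENT file(s) processed in the last 15 minutes"
exit 0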
Back to top of page
FAQ: What can I do so that the next application will be easier to manage?
Answer: Communicate YOUR requirements!
Unfortunately the requirements engineering process, even if executed by an experienced team, often focuses only on the functional requirements and neglects the non-functional requirements for manageability, reliability, etc.
Of course it takes significant effort to write down detailed requirements. But you can save a lot of time: just select, from more than 150 non-functional requirements, those which you need!
Our "Template for Non-Functional Requirements" makes it really easy!
Back to top of page
FAQ: A new application should be connected to my application - should I be concerned?
Answer: Instead of just being concerned you should act!
- Make sure that the new application interface introduces only restrictions which are acceptable to you, but no real problems. (Each interface introduces some dependencies, at least for outage planning....)
- Checking the requirements specification of the new interface against our "Checklist for Application Interfaces" will help you identify potential issues.
- Submit the "Documentation Template for Application Interfaces" included in our checklist to the technical writer to ensure that the documentation will be usable and complete.
FAQ: The IT and DBA teams are talking about a standby solution for disaster recovery. Does this cause any impact on my application?
Answer: Very likely! Our document "Technology Selection for Disaster Recovery" focuses on the operational impact of different DR technologies.
Back to top of page
FAQ: The auditor complained about the missing operations manual. How do I write one?
Answer:
- Use our "Template for an IT Operations Manual" - it will save you a lot of time!
- For new projects we recommend that you add the "Template for an IT Operations Manual" to the project requirements. This template clearly indicates what you expect and allows you to reserve sufficient man-days or man-weeks in the project time plan and budget for creating this important deliverable.
Back to top of page