Different databases impose different amounts of storage overhead.
For example, some databases keep transaction logs to permit recovery, some maintain rollback segments that allow a current operation to be rolled back in the event of failure, and so on. These requirements are database dependent and are in addition to the space required to store the Remedy schemas and data. They are not covered here.
This entry covers only the sizing requirements for storing data in the AR System schemas.
1) First determine how many fields there are on the schema.
2) Next, determine the datatype of each field. Do this by looking in the field properties window and recording the type. For character fields, record the field length as well (note: this is not the same as the display length!).
3) Next, locate information on these datatypes in the relevant database manuals. Different databases use different storage methods for different datatypes, so their sizes will be database dependent. Also, each database typically adds a small amount of overhead (a byte or two) per datatype on top of the storage requirements. Record this information.
4) Diary fields and character fields longer than 255 characters are a special case: they carry some additional database overhead. Consult the database manuals to find out what that is.
5) Now, for each field on the schema, add up the bytes required for its storage (don't forget to include overhead!). For Varchar(255) or other variable-length character fields, a reasonable estimate of the average stored length is 2/3 of the declared length. For example, a variable-length field with a maximum length of 60 characters can be assumed to average 2/3 * 60 = 40 characters, so use 40 bytes for this field, plus any associated overhead. If more precision is necessary, log into the database and issue a "SELECT MAX(LENGTH(column))"-style statement against each character column in the schema table (assuming the database supports this sort of operation), then multiply the result by 2/3.
6) For diary fields or fields longer than 255 characters, the stored lengths tend to be much smaller than the specified maximum. Use the same approach as in step 5, but multiply by 1/3. For example, a variable-length field with a maximum length of 10000 characters has an estimated average length of 1/3 * 10000, or about 3.3K, so use 3.3K bytes for this field. Don't forget to include any associated overhead! If more precision is necessary, issue the same "SELECT MAX(LENGTH(column))"-style statement against each such column and multiply by 1/3.
7) Determine whether any indexes are built on fields in the schema. If so, multiply the storage length of each indexed field's datatype by 1.5. For example, if a character field of length 20 has an index built on it, use 20 * 1.5 = 30 as the storage length for that field instead of 20. Note that indexes generally cannot be built on large text fields, so don't include them. If Full Text Search is being used, those indexes are built outside of the database, not in it.
8) Add up all the byte values determined and recorded in the previous steps. This is the base value for how much space each ticket or record requires in the database.
9) Estimate how many tickets a day will be entered into the system. Multiply that number by the value found in step 8.
10) Estimate how many days a year the system will be in production. Use a figure of 200 days for normal businesses (taking into account holidays, weekends, etc.) or 365 if operating a 24×7 production system. Multiply this by the value found in step 9.
11) Multiply the value from step 10 by 1.2 (this adds 20% to account for system "slop" and other miscellaneous overhead). The result is approximately how much disk storage it will take to accommodate this schema's growth over the next year. If there are multiple schemas, this calculation must be repeated for each one.
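The arithmetic in steps 5 through 11 can be sketched as a short script. This is a minimal illustration only: the per-field byte sizes, overhead values, and the example field list below are assumptions for demonstration, and the real figures must come from your database's manuals as described above.

```python
# Rough yearly sizing sketch for one schema, following steps 5-11.
# All byte sizes, overheads, and the example fields are illustrative
# assumptions; take real values from your database documentation.

VARCHAR_FILL = 2 / 3      # step 5: average fill for variable-length fields
LONG_TEXT_FILL = 1 / 3    # step 6: average fill for diary / >255-char fields
INDEX_FACTOR = 1.5        # step 7: extra space for an indexed field
SLOP_FACTOR = 1.2         # step 11: 20% miscellaneous overhead

def field_bytes(datatype, length, overhead=2, indexed=False):
    """Estimated stored bytes for one field value (steps 5-7)."""
    if datatype == "varchar":            # variable-length character field
        size = length * VARCHAR_FILL + overhead
    elif datatype == "longtext":         # diary or >255-character field
        size = length * LONG_TEXT_FILL + overhead
    else:                                # fixed-size types: int, date, ...
        size = length + overhead
    if indexed:
        size *= INDEX_FACTOR
    return size

# Hypothetical schema: (datatype, declared length in chars/bytes, indexed?)
fields = [
    ("int", 4, True),           # e.g. Entry-ID, indexed
    ("varchar", 60, False),     # e.g. short description
    ("longtext", 10000, False), # e.g. diary field
]

per_record = sum(field_bytes(t, n, indexed=i) for t, n, i in fields)  # step 8
tickets_per_day = 100                                                 # step 9
days_per_year = 200                                                   # step 10
yearly_bytes = per_record * tickets_per_day * days_per_year * SLOP_FACTOR  # step 11

print(f"~{per_record:.0f} bytes/record, ~{yearly_bytes / 1e6:.1f} MB/year")
```

With these assumed numbers the script reports roughly 3386 bytes per record and about 81 MB of growth per year; plugging in your own field list and the overhead values from your database manuals gives the figure the steps above describe.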
Credits – Vishal V. Dhainje