Top Activities:

1. Verify the Content Search Performance
1) Disk Speed
Fast local disk will always beat network-attached storage. For clusters you do not really have an option here, but if storage is network-based, there should ideally be a dedicated, fast network connection between the Content Server nodes and the storage.
2) Anti-virus
Active-scanning anti-virus products can severely degrade disk speed. Make sure there is no active scanning against the search directory; ideally, scanners should only scan vault/~temp.
3) Collection maintenance
Anyone with a large collection who still runs Verity should periodically run the Verity maintenance commands to optimize the collection. These commands remove deleted content entries from the index.
Verity recommends running the optimization tune-up after each bulk submit. By default, mkvdk performs occasional housekeeping on the collection, but an explicit optimization helps ensure that search performance stays at its peak.
NOTE 748026.1: How To Clean-up and Optimize the Verity VDK6 Component's Search Collection
4) Collection structure
The Verity collection is much like a database: it needs to be structured correctly to perform well. This usually just involves moving fields that are searched against into their own part file (inside Verity, usually called a data table). This can be done in the interface in 7.5 (Advanced Search Config), but Content Server 7.1 requires the SeparatePartsFile component and manual configuration. Note that this will always require a rebuild. There are TKBs on this as well; look for SeparatePartFiles.
5) Pure load
If the system is busy doing other things, searches will be slower.
6) Index commonly searched fields
Ensure that all fields typically used during a search have a database index on them. In particular, dSecurityGroup should have an index, and if you are using accounts, dDocAccount should have one as well.
7) Security Group and Accounts structure
Search performance is affected by the number of security groups a user has permission to. To return only content that a user is allowed to see, the database WHERE clause includes a list of security groups: either all of the security groups the user has permission to, or all of the security groups the user does not have permission to. Which approach is taken depends on whether the user has permission to more or fewer than 50% of the defined security groups.
For example, if 100 security groups are defined and a user has permission to 10 of them, those 10 security groups are included in the WHERE clause. In contrast, for a user with permission to 90 security groups, the WHERE clause includes the 10 security groups the user does not have permission to.
Therefore, if a user has permission to close to 50% of the security groups, search performance is less efficient. If a user has permission to all or none of the security groups, search performance is more efficient.
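The group-selection logic described above can be sketched as a small helper. This is an illustrative model only; the function name and the exact SQL quoting are assumptions, not Content Server's actual internal query builder:

```python
def security_where_clause(user_groups, all_groups):
    """Build a WHERE fragment over dSecurityGroup, listing whichever
    set is smaller: the groups the user can see (IN) or the groups
    the user cannot see (NOT IN)."""
    permitted = set(user_groups) & set(all_groups)
    denied = set(all_groups) - permitted
    if len(permitted) <= len(denied):
        groups, op = sorted(permitted), "IN"
    else:
        groups, op = sorted(denied), "NOT IN"
    quoted = ", ".join("'%s'" % g for g in groups)
    return "dSecurityGroup %s (%s)" % (op, quoted)
```

With 100 defined groups, a user permitted to 10 of them produces an IN list of 10 names, while a user permitted to 90 produces a NOT IN list of the 10 excluded names; either way the clause stays short unless the user sits near the 50% mark.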
Consider the following performance issues when using accounts in your security model:
Theoretically, you can create an unlimited number of accounts without affecting Content Server performance. A system with over 100,000 pieces of content has only limited administration performance problems at 200 accounts per person; however, there is a significant impact on search performance with over 100 accounts per person. (Note that these are explicit accounts, not accounts implicitly associated with a user through a hierarchical account prefix; a user can have permission to thousands of implicit accounts through a single prefix.)
For performance reasons, do not use more than approximately 15 to 25 security groups if you enable accounts.
Ensure that your security groups and accounts have relatively short names.
2. Index Rebuilding
1) System Hardware
A good place to start a performance evaluation is with system hardware. If system performance is poor, check the following areas to make sure they are adequate for the task:
File system size: If your file system is too small or too slow, search performance suffers. Make certain that you have adequate space for the number of servers your site is using.
Memory size: If memory is too small, swapping can slow the system considerably. Make certain that you have enough memory and that it is not over-committed.
Processor type: Make certain you are using a processor with adequate speed and that it is not being overused.
2) Type of Index
If you have up to 1 million content items, Verity indexing will be fine (for systems running at or below 10g).
If you have over 1 million content items, we recommend database indexing with the Text Indexer filter.
3) Content Items Per Indexer Batch
One way of speeding up your collection building process is as follows:
In Repository Manager, under Collection Rebuild Cycle, press the "Configure" button.
The first item, "Content Items Per Indexer Batch", sets how many items are sent through mkvdk to be indexed at one time. If you set this number high, indexing goes faster, but if one item fails, the entire batch is sent through again. So if it is set to 2000 and a document fails, you have to wait until all 2000 go through again, which takes much longer than if it is set to 25 and an item fails. However, if there are no failures, a high setting is faster.
Try setting this to 1,000 and see if the indexer is quicker. If not, try setting it to 2,000 before the next time you rebuild the index.
The second item, "Content Items per Checkpoint", sets a checkpoint. After the checkpoint is reached, some merging of the collection is done before the next batch is indexed. The problem with setting this high is that if you try to cancel a rebuild or an update cycle, it won't stop until the checkpoint is reached. However, setting it too low will cause indexing to take longer.
If the first item speeds up indexing a little, try setting "Content Items per Checkpoint" to 20,000.
Important:
Content Items Per Indexer Batch is the number of documents indexed at one time. If one of the documents in a batch fails indexing, the entire batch is processed again, which can cause many index retries. If many of your documents fail indexing, you may want to decrease Content Items Per Indexer Batch so that a higher percentage of batches succeed on the first pass, avoiding retries.
If your document indexing failure rate is approximately 1%, a Content Items Per Indexer Batch of 25 is recommended.
If your failure rate is approximately 10%, a setting of 5 is recommended.
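The batch-size trade-off can be put in rough numbers: with a per-document failure probability p and batch size b, a batch contains at least one failure with probability 1 - (1-p)^b, and each failing batch is resubmitted whole. A simple expected-cost sketch (illustrative only; the real indexer's retry behavior may differ):

```python
def expected_docs_processed(total_docs, batch_size, failure_rate):
    """Rough estimate of documents pushed through the indexer, counting
    one full resubmission for every batch that contains a failure.
    Illustrative model only, not the actual indexer algorithm."""
    p_batch_fails = 1.0 - (1.0 - failure_rate) ** batch_size
    # each failing batch is processed roughly twice
    return total_docs * (1.0 + p_batch_fails)
```

At a 1% failure rate, a batch size of 25 reprocesses roughly 22% extra work, while a batch size of 2000 almost certainly fails at least once per batch and nearly doubles the work, which matches the recommendation to shrink the batch as the failure rate rises.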
References:
Selective Refine and Index component: this component will not convert and/or index a file if its filter is met.
Note: the format map replaces the entire list of formats.
4) Checking the Metadata Field Properties
The product name metadata field may not have been properly updated in Configuration Manager. Depending on the type of the 'product name' metadata field, changing the value could be the reason for the lock-up problem. Is the product name metadata field a plain (long) text field, or does it have an option list? If it has an option list, make sure that the new name value is a selection on the corresponding list.
1. Log in to the Content Server instance as an administrator.
2. Choose Administration, then Configuration Manager.
3. In the Configuration Manager window, select the Information Fields tab.
4. Select the product name metadata field from the Field Info list.
5. Click Edit.
6. In the Edit Custom Info Field window, if the Field Type value is Text or Long Text and Enable Option List is deselected, click OK or Cancel (this should not cause the lock-up problem).
   Otherwise, if Enable Option List is selected, make sure that the new product name metadata field value is included as a selection on the corresponding list:
   a. Locate the Use Option List field and click Edit.
   b. Enter the new product name metadata field value in the Option List dialog.
   c. Click OK.
   d. Click OK again (in the Edit Custom Info Field window).
7. Click Update Database Design.
8. Click Rebuild Search Index.
5) Checking the Indexing Automatic Update Cycle
The lock-up problem may be due to the indexer's automatic update cycle. The error message indicates that the indexer is failing because it loses connectivity. Every five minutes, the indexer executes an automatic update cycle and may be grabbing and locking the index file. If so, it can be useful to disable the indexer's automatic update cycle while you run the import.
1. Log in to the Content Server instance as an administrator.
2. Choose Administration, then Repository Manager.
3. In the Repository Manager window, select the Indexer tab.
4. Click the Configure button in the Automatic Update Cycle section of the tab.
5. In the Automatic Update Cycle window, deselect Indexer Auto Updates.
6. Click OK.
Note: Be sure to reactivate the automatic update cycle after completing the import. Otherwise, the server will no longer automatically update the index database, which could adversely impact future search results.
3. Health Monitor (Components)

4. Archiving (Export, Transfer, Import)
1) Source and Target Locations
If the source and target Content Servers are on different machines on a slow network, transferring/publishing from source to target will also be slow. Perform a network trace using a third-party product; some customers have been successful in finding network bottlenecks using Ethereal (now Wireshark).
2) Automatic Update Cycle
To prevent lock-ups during import, it can be useful to disable the Automatic Update Cycle for the duration of the import. Be sure to reactivate it after completing the import; otherwise, the server will no longer automatically update the index database, which could adversely impact future search results.
3) Checking the Metadata Field Properties
The product name metadata field may not have been properly updated in Configuration Manager. Depending on the type of the 'product name' metadata field, changing the value could be the reason for the lock-up problem. Is the product name metadata field a plain (long) text field, or does it have an option list? If it has an option list, make sure that the new name value is a selection on the corresponding list.
1. Log in to the Content Server instance as an administrator.
2. Choose Administration, then Configuration Manager.
3. In the Configuration Manager window, select the Information Fields tab.
4. Select the product name metadata field from the Field Info list.
5. Click Edit.
6. In the Edit Custom Info Field window, if the Field Type value is Text or Long Text and Enable Option List is deselected, click OK or Cancel (this should not cause the lock-up problem).
   Otherwise, if Enable Option List is selected, make sure that the new product name metadata field value is included as a selection on the corresponding list:
   a. Locate the Use Option List field and click Edit.
   b. Enter the new product name metadata field value in the Option List dialog.
   c. Click OK.
   d. Click OK again (in the Edit Custom Info Field window).
7. Click Update Database Design.
8. Click Rebuild Search Index.
4) Checking the Indexing Automatic Update Cycle
The lock-up problem may be due to the indexer's automatic update cycle. The error message indicates that the indexer is failing because it loses connectivity. Every five minutes, the indexer executes an automatic update cycle and may be grabbing and locking the index file. If so, it can be useful to disable the indexer's automatic update cycle while you run the import.
1. Log in to the Content Server instance as an administrator.
2. Choose Administration, then Repository Manager.
3. In the Repository Manager window, select the Indexer tab.
4. Click the Configure button in the Automatic Update Cycle section of the tab.
5. In the Automatic Update Cycle window, deselect Indexer Auto Updates.
6. Click OK.
Note: Be sure to reactivate the automatic update cycle after completing the import. Otherwise, the server will no longer automatically update the index database, which could adversely impact future search results.
5. Java JVM (Memory Heap Size)
1) JVM memory check
The memory section says to use no more than half of your memory. That isn't necessarily true: on a box with 2GB of memory it could make sense to give the JVM 1.5GB. It depends on many things: the OS, whether anything else is running on the box, and so on. A generic model would be: a given OS needs some amount X of memory for itself, and you can give Java up to (TotalRam - other apps - X). There are also per-OS limits on the maximum; for example, you can't really go over about 1.5GB on the 32-bit JVMs this note targets, so even 8GB of memory in the box wouldn't let you go higher.
We recommend against setting -Xms. Also, you can't just set JAVA_OPTIONS; that causes problems on some versions and platforms. For patched 7.5.1, 7.5.2, and 10gR3 there is a better way; see (Note - Setting JAVA_OPTIONS in intradoc.cfg) for details.
Enable verbose JVM garbage-collection logging to determine whether memory is the Java bottleneck. For 1.4 JVMs (7.5.1 and earlier) and 1.5 JVMs (7.5.2 and 10gR3), the key options are "-verbose:gc" and, for even more data, "-XX:+PrintGCDetails".
The default heap size value for the Java Virtual Machine (JVM) may not be adequate. To increase the amount of memory your Content Server Java process will use, add the following configuration variable to the <Stellent-instance>/bin/intradoc.cfg file, depending on how much RAM you have:
JAVA_MAX_HEAP_SIZE=1024 (HEAP_OPTIONS will take on -Xmx1024m)
Once you have added this variable, restart Content Server.
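The generic sizing model above can be written out as a small helper. This is purely illustrative; the OS overhead X and the per-OS JVM cap vary by platform and must be supplied by you:

```python
def max_java_heap_mb(total_ram_mb, os_overhead_mb, other_apps_mb, per_os_cap_mb):
    """Upper bound for the JVM heap: whatever RAM remains after the OS
    and other applications, clamped to the platform's JVM limit
    (roughly 1536 MB on the 32-bit JVMs this note was written against)."""
    available = total_ram_mb - os_overhead_mb - other_apps_mb
    return max(0, min(available, per_os_cap_mb))
```

For example, an 8GB box still caps out at the per-OS limit, which is why adding RAM alone does not let the heap grow past roughly 1.5GB on those JVMs.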
6. Network
1) Network Performance Check
Perform a network trace using a third-party product; some customers have been successful in finding network bottlenecks using Ethereal (now Wireshark).
Suggestion: where applicable, use proxy servers and reverse proxy servers to optimize network traffic and distribute CPU and database loads.
7. Database
1) Database Performance Check
If Folders is installed, have the DBA make sure there is a database index on the DocMeta.xCollectionID column.
Periodically rebuild the database table indexes for the Documents, DocMeta, and Revisions tables. The Content Server is a very generic tool that gets used in many very different ways; depending on the particular data and usage, different actions may be necessary to keep a system running as it should. In our experience, rebuilding database indexes becomes necessary as the data in a table changes, but issues generally only show up once you reach large numbers of rows, say hundreds of thousands. The right frequency depends on many variables: we recommend monitoring query speed once a day and watching for significant degradation, then acting on that data. If performance degrades quickly, you may need to rebuild once a week; otherwise, once a month may be enough.
Acquire the latest database driver for your particular database version and make sure <stellent-instance>\config.cfg points to it.
Oracle Query Optimizer Component: the Oracle Query Optimization component takes advantage of intrinsic knowledge of Content Server table data distribution and index selectivity. Based on this knowledge, a hint rules table is defined and used by the component to analyze each database query and add appropriate hints to achieve better performance. See the Oracle Query Optimization document.
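The daily query-speed monitoring suggested above can be automated with a small script. This is an illustrative sketch only; the representative query you time and the degradation threshold are assumptions you would tune for your system:

```python
import time

def time_query(run_query, repeats=3):
    """Run a representative search query a few times and return the
    fastest wall-clock time in seconds (taking the fastest run filters
    out transient noise such as cold caches)."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        run_query()
        best = min(best, time.perf_counter() - start)
    return best

def degraded(today_s, baseline_s, factor=2.0):
    """Flag significant degradation: today's timing exceeds the recorded
    baseline by the given factor (the 2x threshold is an assumption)."""
    return today_s > baseline_s * factor
```

Run it once a day against a typical search, keep the baseline from a known-good day, and schedule an index rebuild when `degraded` starts returning True.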
2) Allotted Tablespace Exceeded
When the Content Server instance creates its database tablespace, it allocates only 50 extents. As the database grows and is re-indexed, it uses more space (extents), and eventually the 50-extent limit is exceeded. At some point in the transfer, one of your files tried to extend past the 'max extents' limit. In this case, try one or more of the following solutions:
Look for weblayout queries that are excessively large, eliminate them, and retry your transfer.
Check whether a Content Server user lacks the right permission grants (resource and connect) to the Content Server schema. That user must have the temporary tablespace and default tablespace set to the Content Server defaults.
If the system 'max extents' limit is less than the system maximum, increase the number of extents that are available. Refer to your Oracle Database documentation or ask your database administrator for the appropriate Oracle SQL command to increase the tablespace extents.
Optionally, re-create the database using larger initial, next, or percent-to-grow parameters for the tablespaces. In this case, it is advisable to set the initial and next extents to 1MB and the percent-to-grow parameter (PCTINCREASE) to 0% to allow the tables to grow automatically on an as-needed basis.