Wednesday, August 26, 2015

Change the Port for the WebCenter Content 11g Web Interface

1. Access the WebLogic Server Administration Console (normally http://weblogic_server:7001/console) and log in as an administrative user.
2. Expand the Environment option under Domain Structure, and click Servers.
3. Click on the required UCM/WebCenter Content server in the list.
4. If the Listen Port field is shaded/disabled, you need to enable editing of the configuration - this can normally be done by clicking the Lock & Edit button in the top-left of the screen.
5. Update the Listen Port field to whatever port you want it to use, and click Save.
6. This change usually takes effect immediately without a restart, but the console will inform you if a restart is necessary.
7. Update the HttpServerAddress setting in <ecm_domain>/ucm/cs/config/config.cfg so that it reflects the new port and web-viewable links are generated with it (see the example after these steps).
8. Via the Repository Manager admin applet's Indexer tab, perform a full rebuild (not a fast rebuild) of the search collection so that the indexed document URLs pick up the new value.
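
For step 7, the HttpServerAddress entry in config.cfg takes a host name, optionally with a port; a minimal example (the host and port below are placeholders, not the values from this environment):

HttpServerAddress=contentserver.example.com:16201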

Sunday, August 23, 2015

WebCenter Content RIDC Outlines

RIDC Call to Download File for Spring

1. Method to get the file

public Attachment getWCCFile(String dID, String dDocName, String username) throws IdcClientException, IOException {
    // IOUtils is org.apache.commons.io.IOUtils (Apache Commons IO)
    IdcContext idcContext = new IdcContext(username);
    IdcClient<IdcClientConfig, Protocol, Connection> client = getIdcClient();

    // Build the GET_FILE service request
    DataBinder dataBinderReq = client.createBinder();
    dataBinderReq.putLocal("IdcService", "GET_FILE");
    dataBinderReq.putLocal("dID", dID);
    dataBinderReq.putLocal("dDocName", dDocName);
    dataBinderReq.putLocal("allowInterrupt", "1");
    dataBinderReq.putLocal("RevisionSelectionMethod", "LatestReleased");

    // Execute the request and read the file content from the response stream
    ServiceResponse serviceResponse = client.sendRequest(idcContext, dataBinderReq);
    InputStream inputStream = serviceResponse.getResponseStream();
    byte[] bytes = IOUtils.toByteArray(inputStream);

    Attachment attachment = new Attachment(bytes, 0, bytes.length);
    attachment.setContentType(serviceResponse.getHeader("Content-Type"));
    attachment.setFileName(serviceResponse.getHeader("filename"));
    attachment.setContentLength(serviceResponse.getHeader("Content-Length"));

    return attachment;
}
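
The method above relies on a getIdcClient() helper that is not shown. A minimal sketch using the RIDC IdcClientManager, assuming a socket (idc://) connection to the Content Server intradoc port; the connection URL is a placeholder:

// Hypothetical helper assumed by getWCCFile(); adjust the connection URL to your server.
private IdcClient getIdcClient() throws IdcClientException {
    IdcClientManager clientManager = new IdcClientManager();
    // An HTTP connection such as "http://server:16200/cs/idcplg" would also work.
    return clientManager.createClient("idc://contentserver.example.com:4444");
}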

2.  Attachment Class
public class Attachment {
    private byte[] content;
    private int offset;
    private int length;
    private String contentType;
    private String extension;
    private String fileName;
    private String contentLength;
}
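
The other snippets call a constructor and accessors that are not listed above; a minimal sketch of the members the Attachment class is assumed to expose (these belong inside the class body, and the names are inferred from the calling code):

// Inferred members: constructor plus the getters/setters used by getWCCFile() and the controller
public Attachment(byte[] content, int offset, int length) {
    this.content = content;
    this.offset = offset;
    this.length = length;
}
public byte[] getContent() { return content; }
public String getContentType() { return contentType; }
public void setContentType(String contentType) { this.contentType = contentType; }
public void setFileName(String fileName) { this.fileName = fileName; }
public void setContentLength(String contentLength) { this.contentLength = contentLength; }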

3. Spring Controller

@RequestMapping(value = "/getDocument", method = RequestMethod.POST)
public ResponseEntity<byte[]> getDocument(@ModelAttribute("metaDataForm") MetaData metaData) {
    logger.info(" In WCCController.getDocument() ");
    try {
        Attachment attachment = new RIDCHelper(ridcUrl, ridcPort, ucmPort)
                .getWCCFile(metaData.getdID(), metaData.getdDocName(), metaData.getdDocAuthor());
        return getResponse(metaData.getdDocName(), attachment);
    } catch (IdcClientException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    } catch (MimeTypeException e) {
        e.printStackTrace();
    }
    return null;
}

private ResponseEntity<byte[]> getResponse(String dDocName, Attachment attachment) throws MimeTypeException {
    logger.info(" In WCCController.getResponse() ");
    HttpHeaders headers = new HttpHeaders();
    headers.setContentType(MediaType.parseMediaType(attachment.getContentType()));

    // Derive a file extension from the MIME type (Apache Tika MimeTypes)
    Random rand = new Random(System.currentTimeMillis());
    MimeTypes allTypes = MimeTypes.getDefaultMimeTypes();
    MimeType extMime = allTypes.forName(attachment.getContentType());
    String ext = extMime.getExtension();

    String randomFileName = "/" + dDocName + "_" + Math.abs(rand.nextLong()) + ext;
    headers.setContentDispositionFormData(randomFileName, randomFileName);
    headers.setCacheControl("must-revalidate, post-check=0, pre-check=0");
    return new ResponseEntity<byte[]>(attachment.getContent(), headers, HttpStatus.OK);
}
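
The controller binds a MetaData form bean that is not shown; a minimal sketch of the properties it is assumed to expose (names inferred from the calls above):

public class MetaData {
    private String dID;
    private String dDocName;
    private String dDocAuthor;

    public String getdID() { return dID; }
    public void setdID(String dID) { this.dID = dID; }
    public String getdDocName() { return dDocName; }
    public void setdDocName(String dDocName) { this.dDocName = dDocName; }
    public String getdDocAuthor() { return dDocAuthor; }
    public void setdDocAuthor(String dDocAuthor) { this.dDocAuthor = dDocAuthor; }
}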

4. Data save and retrieval through a JDBC utility (hypothetical SQL for the named statements is sketched after these snippets).

Save:-

public void save(TransactionAction entity) throws Exception {
    logger.info(" In TransactionActionDAOImpl.save(TransactionAction) ");
    Object[] statementParams = new Object[] { entity.getActionid(), entity.getName(), entity.getTitle(),
            entity.getTitlear(), entity.getActiontype(), entity.getIsRequired() };
    DataWorker dataWorker = new DataWorker(addNew, null, statementParams);
    try {
        session.executeDataWorkers(dataWorker);
    } catch (SQLException e) {
        logger.error(" In TransactionActionDAOImpl.save(TransactionAction), Exception : " + e.getMessage());
        throw e;
    }
}

Update:-
public void update(TransactionAction entity) throws Exception {
    logger.info(" In TransactionActionDAOImpl.update(TransactionAction) ");
    Object[] statementParams = new Object[] { entity.getTitle().toUpperCase().replaceAll("\\s", ""),
            entity.getTitle(), entity.getTitlear(), entity.getActiontype(), entity.getIsRequired(),
            entity.getActionid() };
    DataWorker dataWorker = new DataWorker(update, null, statementParams);
    try {
        session.executeDataWorkers(dataWorker);
    } catch (SQLException e) {
        logger.error(" In TransactionActionDAOImpl.update(TransactionAction), Exception : " + e.getMessage());
        throw e;
    }
}

FindById:-

public TransactionAction findById(BigDecimal id) {
    logger.info(" In TransactionActionDAOImpl.findById(BigDecimal), Action Id: " + id);
    TransactionAction action = null;
    try {
        action = getSession().getEntity(findById, new TransactionAction(), id, Constants.ACTIVE);
    } catch (SQLException e) {
        logger.error(" TransactionActionDAOImpl.findById(BigDecimal), Action Id: " + id + ", Exception : " + e.getMessage());
        e.printStackTrace();
    }
    return action;
}
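
The addNew, update, and findById statements referenced by the DAO are not shown. Purely hypothetical parameterized SQL matching the parameter order in the snippets might look like the following (table and column names are invented for illustration and must be replaced with the real schema):

// Hypothetical SQL constants; adjust table/column names to the actual schema
private static final String addNew =
    "INSERT INTO TRANSACTION_ACTION (ACTIONID, NAME, TITLE, TITLEAR, ACTIONTYPE, ISREQUIRED) VALUES (?, ?, ?, ?, ?, ?)";
private static final String update =
    "UPDATE TRANSACTION_ACTION SET NAME = ?, TITLE = ?, TITLEAR = ?, ACTIONTYPE = ?, ISREQUIRED = ? WHERE ACTIONID = ?";
private static final String findById =
    "SELECT * FROM TRANSACTION_ACTION WHERE ACTIONID = ? AND STATUS = ?";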


Thursday, August 20, 2015

Oracle WebCenter Content Tracking Report Outlines

Install Content tracker in UCM

Data collection: Gathering content access information and writing the information to event log files.
Data reduction: Processing the information from data collection and merging it into a database table.
Data Engine Control Center: The interface that provides access to the user-controlled functions of the Data Engine. It has the following tabs or sub-screens:
Collection: Used to enable data collection.
Reduction: Used to stop and start data reduction (merging data into database tables).
Schedule: Used to enable automatic data reduction.
Snapshot: Used to enable activity metrics. The term snapshot also denotes an information set representing the world at a particular time.
Services: Used to add, configure, and edit service calls to be logged. It is also used to define the specific event details logged for a given service.


Note:- C:\Oracle\Middleware\user_projects\domains\webcenter\ucm\cs\data\components\ContentTracker\config.cfg (SctTrackContentAccessOnly=false, so that Content Tracker logs the configured service calls rather than only content access events)

1. Enable the Content Tracker and Content Tracker Reports components.

2. We will use three (3) files, including:
 i - Oracle\Middleware\Oracle_ECM1\ucm\idc\components\ContentTrackerReports\resources\contenttrackerreports_query.htm

3. Add this line at the end of Oracle\Middleware\Oracle_ECM1\ucm\idc\components\ContentTrackerReports\resources\contenttrackerreports_query.htm:
<tr> <td>qCustomusers</td> <td>select dname from users</td> </tr>

4. Add these lines at the end of file Oracle\Middleware\Oracle_ECM1\ucm\idc\components\ContentTrackerReports\resources\contenttrackerreports_template_resource.htm
<@dynamichtml qCustomusers_vars@>
<$reportWidth = "100%"$> <$title = "<i>Users</i>"$> <$reportTitle="Users"$> <$column1Width="35%"$> <$column0Drill="qSctrDocsSeenByUser_Drill"$>
<@end@>


5. Then restart the Content Server.


Wednesday, August 19, 2015

Oracle Distributed Document Capture Configuration

1. Install Oracle Distributed Document Capture

2. Install ODAC or the Oracle Client for Windows (download: http://www.oracle.com/technetwork/database/windows/downloads/index-090165.html)
Reference: https://tensix.com/2012/06/setting-up-an-oracle-odbc-driver-and-data-source/

3. Create an ODBC connection in Windows for the Oracle database.

4. Start Oracle Distributed Capture

5. Select the ODBC connection you created in step 3, or select a Microsoft Access database for a quick start.

6. Create File Cabinets [ODC > Admin > File Cabinets > click the Sun icon to add a new one]

7. Add required Index fields

8. Create a commit profile for the created cabinet. [<<Cabinet Name>> > Commit Profile]
i. Change the commit profile name.
ii. Select the commit driver and configure it to map the fields.
WebCenter Server URL : http://server:16200/cs/idcplg

iii. Map the fields, and in the Options tab configure the settings for the document name.

9. Manage Scan Profile (scanner settings icon) and add a new scan profile.
i. Manage the general settings, select a cabinet for the scan profile, and add a batch prefix.
ii. Manage the image source and select file / scanner.
iii. Manage the other required settings and save the profile.

10. Manage Index Profile (folder settings icon) and create a new index profile with the New button.
i. Add a name and select the file cabinet.
ii. Add fields in "Fields".
iii. Manage each field's properties.
iv. Save the index profile.

11. Update zonal OCR (Zone Editor), if your document uses it, by clicking the pen icon.
i. Select the index profile.
ii. Open the document by clicking the folder icon.
iii. Select the index field and select the zone in the document; zone selection is enabled by the blue box icon.
iv. Select the zone in the document, click the OCR field icon (paper with an A), and save the selection.

12. To start contribution, press the scanner button, select the scan profile, and click the scan button (scanner icon on the Batch Scanning window); the document will be scanned (or the files selected) and a batch will be created.

13. To index the documents, click batch indexing (yellow folder icon) and select the index profile; all batches will be indexed.

14. Select the latest indexed document, open it, and fill in the required meta information.

15. Commit the document by clicking the PC icon on the toolbar.

16. Verify the checked-in document in WebCenter Content.

For more references, search Google. :)

Complete Uninstall Document Capture

1.  Use the Add/Remove Programs feature in the Control Panel of Windows to remove the software.

2.  Delete any Capture folders under Program Files (32bit), Program Files x86 (64bit) and Program Data.

3.  Make sure that you have a good and verifiable backup of the registry of the system.  Remove the Captovation registry key under: "HKEY_LOCAL_MACHINE\SOFTWARE\Captovation" (for 32-bit systems) or "HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432NODE\Captovation" (for 64-bit systems).

Monday, August 17, 2015

Oracle WebCenter Content Troubleshooting Tasks

Top Activities:

1.     Verify the content search performance

1. Disk Speed:
Fast local disk will always beat network attached storage. For clusters, you really don't have an option here, but if it is network based, that should ideally be a dedicated, fast network connection between the CS nodes and the storage.
2. Anti-virus:
Active scanning virus products can make disk speeds horrible, make sure there is no active scanning against the search directory. Ideally they should only scan vault/~temp 
3. Collection maintenance:
Everyone with a large sized collection who still runs Verity should run the Verity maintenance commands to optimize the collection periodically. These commands remove deleted content entries from the index.

Verity recommends running the optimization tune-up after each bulk submit. By default mkvdk does housekeeping of the collection once in a while, but performing an explicit optimization can ensure that search performance is at its peak at any time.

NOTE 748026.1 How To Clean-up and Optimize the Verity VDK6 Component's Search Collection
4. Collection structure:
The Verity collection is much like a database, it needs to be structured correctly to perform well. This usually just involves moving fields that are being searched against to their own part file (inside Verity, usually called a data table). This can be done in the interface in 7.5 (Advanced Search Config) but Content Server 7.1 requires the SeparatePartsFile component and manual configuration. Note that this will always require a rebuild. There are TKBs on this as well. Look for SeparatePartFiles.
5. Pure load:
If the system is busy doing other things, that will slow searches
6. Index commonly searched fields (see the example after this list):
Ensure that all fields typically used during a search have an index on them. In particular, dSecurityGroup should have an index and if you are using accounts, then dDocAccount should have one as well.
7. Security Group and Accounts structure:
Search performance is affected by the number of security groups a user has permission to. To return only content that a user has permission to see, the database WHERE clause includes a list of security groups. The WHERE clause either includes all of the security groups the user has permission to, or it includes all of the security groups the user does not have permission to. Which approach is taken depends on whether the user has permission to more than 50% or fewer than 50% of the defined security groups.

For example, if 100 security groups are defined, and a user has permission to 10 security groups, the 10 security groups will be included in the WHERE clause. In contrast, for a user with permission to 90 security groups, the WHERE clause includes the 10 security groups the user does not have permission to.

Therefore, if a user has permission to almost 50% of the security groups, the search performance is less efficient. If a user has permission to all or none of the security groups, the search performance is more efficient.

Consider the following performance issues when using accounts in your security model:

Theoretically, you can create an unlimited number of accounts without affecting content server performance. A system with over 100,000 pieces of content has only limited administration performance problems at 200 accounts per person; however, there is significant impact on search performance with over 100 accounts per person.

(Note that these are explicit accounts, not accounts that are implicitly associated with a user through a hierarchical account prefix. A user can have permission to thousands of implicit accounts through a single prefix.)

For performance reasons, do not use more than approximately 15 to 25 security groups if you enable accounts.

Ensure that your security groups and accounts have relatively short names.
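
As an illustration of item 6, database indexes on the security columns could be created along the following lines (index names are examples, and in a default schema these columns live on the Revisions table; verify against your own schema before running anything):

CREATE INDEX dSecurityGroup_idx ON Revisions (dSecurityGroup);
CREATE INDEX dDocAccount_idx ON Revisions (dDocAccount);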

2.     Index Rebuilding

1. System Hardware:
A good place to start performance evaluation is with system hardware. If system performance is poor, check the following areas to make sure they are adequate for the task:

File system size: If your file system is inadequate or too slow, search performance can be slowed. Make certain that you have adequate space for the number of servers your site is using.

Memory size: If your memory size is too small, swap times can slow the system considerably. Make certain that you have enough memory and that it is not over-used or under-used.

Processor type: Make certain you are using a processor with adequate speed and that it isn't being overused.
2. Type of Index:
If you have up to 1 million content items, then Verity indexing will be fine (for systems running at or below v10g)
If you have over 1 million content items, then we recommend database indexing with Text Indexer filter.
3. Content Items Per Indexer Batch:
A way of speeding up your collection building process could be as follows:

In Repository Manager, under Collection Rebuild Cycle, press the "Configure" button.

The first item, "Content Items Per Indexer Batch", tells how many items to send through mkvdk to get indexed at one time. If you set this number high, indexing goes faster, but if one item fails, we send the entire batch through again. So if you have it set to 2000 and a document fails, you'd have to wait until all 2000 go through again. This would take a lot longer than if you have it set to 25 and an item fails. However, if there are no failures, then having this set to high number would go faster.

Try setting this to 1,000 and see if indexer is quicker. If not, try setting this to 2,000 before the next time you rebuild the index.

The second item, "Content Items per Checkpoint", sets a checkpoint. After the checkpoint is reached, some merging of the collection is done before the next batch is indexed. The problem with setting this high is that if you try to cancel a rebuild or an update cycle, it won't stop until the checkpoint is reached. However, setting it too low will cause the indexing to take longer.

If the first item speeds up indexing a little, try setting "Content Items per Checkpoint" to 20,000.

Important:
Content Items Per Indexer Batch is the number of documents that are indexed at one time. If one of these documents in a batch fails indexing, then the entire   batch will be processed again. This can cause many index retries. If you have a lot of documents that fail indexing, you may want to decrease the Content Items Per Indexer Batch so that there is a higher percentage of batches being processed the first time, thus avoiding retries.

If your document indexing failure rate = approximately 1%, then setting Content Items Per Indexer Batch to 25 is recommended.

If your document indexing failure rate = approximately 10%, then setting Content Items Per Indexer Batch to 5 is recommended. 

References:
Selective Refine and Index component: This component will not convert and/or index the file if the filter is met.

Note: Format map replaces the entire list of formats.
4. Checking the Metadata Field Properties:
The product name metadata field may not have been properly updated in Configuration Manager. Depending on the type of metadata field that the 'product name' is, changing the value could be the reason for the lock-up problem. Is the product name metadata field a (long) text field only or also an option list? If it is an option list, make sure that the new name value is a selection on the corresponding list.
Log in to the Content Server instance as an administrator.
Choose Administration, then Configuration Manager.
In the Configuration Manager window, select the Information Fields tab.
Select the product name metadata field from the Field Info list.
Click Edit.
In the Edit Custom Info Field window, if the Field Type value is Text or Long Text and Enable Option List is deselected, click OK or Cancel (this should not cause the lock-up problem).
Otherwise,
If Enable Option List is selected, then make sure that the new product name metadata field value is included as a selection on the corresponding list:
Locate the Use Option List field and click Edit.
Enter the new product name metadata field value in the Option List dialog.
Click OK.
Click OK again (on the Edit Custom Info Field window).
Click Update Database Design.
Click Rebuild Search Index.
5. Checking the Indexing Automatic Update Cycle:
The lock-up problem may be due to the indexer's automatic update cycle. The error message indicates that the indexer is failing because it loses connectivity. Every five minutes, the indexer executes an automatic update cycle and could somehow be grabbing the index file and locking it. If so, it might be useful to disable the indexer's automatic update cycle while you run the import.
Log in to the Content Server instance as an administrator.
Choose Administration, then Repository Manager.
In the Repository Manager window, select the Indexer tab.
Click the Configure button in the Automatic Update Cycle section of the tab.
In the Automatic Update Cycle window, deselect Indexer Auto Updates.
Click OK.
Note: Be sure to reactivate the automatic update cycle after completing the import. Otherwise, the server will no longer automatically update the index database, which could adversely impact future search results.

3.     Health Monitor (Components)

1. NumConnections:
There seems to be a general support trend of increasing NumConnections to troubleshoot the Content Server error:

System Error: There are no connections available from pool for provider 'SystemDatabase'

Many times this results in a false "solution" for customers. Common numbers are 35 and 50.
This approach may not be the best overall resolution. We recommend troubleshooting to truly get to the heart of the problem.

Note: in some cases this is customer site/instance specific, although over 50 should typically not be necessary.

Causes of no connections being available from pool for provider 'SystemDatabase':

Long running database queries
Some process not releasing database connections (could be custom code)

Troubleshooting recommendations:

Check the Content Server System Audit Information page for long running queries and long connection active time. These may be caused by Archive replication, Content Tracker, Site Studio, etc

Ex.

Long execution starting at 5/17/07 4:36 PM for 134 (secs)
Long connection active time at 5/17/07 4:36 PM for 134 (secs)


In Content Server's System Audit Information page, turn on Verbose systemdatabase and requestaudit. Then capture the Server Output before restarting Content Server.

Look for unique thread number (Open and Close) connections.
If this is a UNIX system, get the Process ID (PID) from the etc/pid file, then run "kill -3 <PID>" (SIGQUIT) to produce a thread dump showing what is being done inside the JVM.
If this is a Windows system, start Content Server from Command Prompt by running <stellent>\bin\idcserver. Get the server to a slow state and press CTRL-BREAK.
If this is a clustered Content Server, check both nodes for database query or connection issues.
2. Batch Loading:
This section provides some basic guidelines that you can use to improve Batch Loader performance. These suggestions can minimize potentially slow batch load performance when you are checking in a large number of content items. In many cases, proper tuning for batch loading can significantly speed up a slow server.


To minimize batch loading slow downs, try implementing the following Batch Loader adjustments:

Temporarily disable other activities such as shutting down Inbound Refinery (see the Inbound Refinery Administration Guide) and suspending the automatic update cycle feature of the Repository Manager.

If you are using the Verity index system, optimize the search collection prior to inserting the batch load file. For more information about Verity Indexer optimization scripts and parameters, see the AdditionalIndexBuildParams configuration variable in the Idoc Script Reference Guide.

Analyze your database usage during a batch load to help the database query optimizer. Databases have built-in optimizer utilities that can help make database queries more efficient. However, to maximize the efficiency of optimizers, it is necessary to update or recreate the statistics about the physical characteristics of a table and the associated indexes. These characteristics include number of records, number of pages, and the average record length. The optimizers use these statistics to access data.

Each database has a proprietary command that you can use to invoke the statistic update or recreation process. For example:

For Oracle, use the ANALYZE TABLE COMPUTE STATISTICS command

For SQL Server, use the CREATE STATISTICS statement

For DB2, use the RUNSTATS command
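
For example, on Oracle the statistics refresh mentioned above could be run against the core Content Server tables (list abbreviated; adjust the table list to your schema):

ANALYZE TABLE Revisions COMPUTE STATISTICS;
ANALYZE TABLE Documents COMPUTE STATISTICS;
ANALYZE TABLE DocMeta COMPUTE STATISTICS;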

4.     Archiving (Export, Transfer, Import)

1. Source and Target locations:
If the source and target Content Servers are on different machines in a slow network, then transferring/publishing from source to target will also be slow. Perform a network trace using a third-party product. Some customers have been successful in finding network bottlenecks using Ethereal.
2. Automatic Update Cycle:
To prevent lock-ups during import, it might be useful to disable the Automatic Update Cycle during import.
Be sure to reactivate the automatic update cycle after completing the import. Otherwise, the server will no longer automatically update the index database, which could adversely impact future search results.
3. Checking the Metadata Field Properties:
The product name metadata field may not have been properly updated in Configuration Manager. Depending on the type of metadata field that the 'product name' is, changing the value could be the reason for the lock-up problem. Is the product name metadata field a (long) text field only or also an option list? If it is an option list, make sure that the new name value is a selection on the corresponding list.
Log in to the Content Server instance as an administrator.
Choose Administration, then Configuration Manager.
In the Configuration Manager window, select the Information Fields tab.
Select the product name metadata field from the Field Info list.
Click Edit.
In the Edit Custom Info Field window, if the Field Type value is Text or Long Text and Enable Option List is deselected, click OK or Cancel (this should not cause the lock-up problem).
Otherwise,
If Enable Option List is selected, then make sure that the new product name metadata field value is included as a selection on the corresponding list:
Locate the Use Option List field and click Edit.
Enter the new product name metadata field value in the Option List dialog.
Click OK.
Click OK again (on the Edit Custom Info Field window).
Click Update Database Design.
Click Rebuild Search Index.
4. Checking the Indexing Automatic Update Cycle:
The lock-up problem may be due to the indexer's automatic update cycle. The error message indicates that the indexer is failing because it loses connectivity.   Every five minutes, the indexer executes an automatic update cycle and could somehow be grabbing the index file and locking it. If so, it might be useful to disable the indexer's automatic update cycle while you run the import.
Log in to the Content Server instance as an administrator.
Choose Administration, then Repository Manager.
In the Repository Manager window, select the Indexer tab.
Click the Configure button in the Automatic Update Cycle section of the tab.
In the Automatic Update Cycle window, deselect Indexer Auto Updates.
Click OK.
Note:
Be sure to reactivate the automatic update cycle after completing the import. Otherwise, the server will no longer automatically update the index database, which could adversely impact future search results.

5.     Java JVM (Memory heap size)

1. JVM memory check:
In the memory section, it says to use not more than half of your memory. That isn't necessarily true... If you had a box with 2GB of memory it could make sense to give us 1.5GB. It depends on a lot of things... what OS, is anything else running on the box, etc.... If I had to build a generic model I would say that a particular OS needs X (I don't know what X is) and then say you can give Java up to (TotalRam - other apps - X) total memory. But, there are per-OS limits on the maximum as well. For example, you can't really go over 1.5GB right now so even if you had 8GB of memory in the box, it wouldn't matter.

We recommend against setting -Xms:

Also, you can't just set JAVA_OPTIONS, that will cause problems on some versions with some platforms. For patched 7.5.1, 7.5.2 and 10gr3 there is a better way. See (Note- Setting JAVA_OPTIONS in intradoc.cfg) for details.

Set verbose GC (garbage collection) logging to determine whether there is a Java memory bottleneck.


For 1.4 JVMs (7.5.1 and earlier) and for 1.5 JVMs (7.5.2 and 10gR3), the key options are:

"-verbose:gc", and also "-XX:+PrintGCDetails" to get even more data.

The default heap size value for the Java Virtual Machine (JVM) may not be adequate. To increase the amount of memory your Content Server Java process will use on your server, add the following configuration variable to the <Stellent-instance>/bin/intradoc.cfg file depending on how much RAM you have.

Example:
JAVA_MAX_HEAP_SIZE=1024 (HEAP_OPTIONS will take on -Xmx1024m).

Once you have added this variable, restart Content Server.

6.     Network

1. Network Performance check:
Perform a network trace using a third-party product. Some customers have been successful in finding network bottlenecks using Ethereal.

Suggestion: Where applicable, use proxy servers and reverse proxy servers to optimize network traffic and distribute CPU and database loads.

7.     Database          

1. Database performance check:
If Folders is installed, have the DBA make sure there is a database index on this column: DocMeta.xCollectionID.
Periodically re-build the database table indexes for Documents, DocMeta, and Revisions.

The Content Server is a very generic tool that gets used in many, very different ways. Depending on the particular data involved and the usage, many different actions could be necessary to keep a system running as it should. From our experience rebuilding indexes in the database is necessary as data changes in the table. However, issues generally only show up once you get to large numbers of rows, say hundreds of thousands. The frequency with which it should be done depends on many variables. We recommend monitoring the query speed once a day and seeing if it starts to degrade significantly.

From that data we would take appropriate actions. Meaning if it degrades quickly, you may need to rebuild once a week. Or, you may only need to do it once a month.

Acquire the latest database driver for the particular version of your database and make sure the <stellent-instance>\config.cfg points to it.

Oracle Query Optimizer Component

The Oracle Query Optimization component takes advantage of our intrinsic knowledge of the Content Server table data distribution and selectivity of indexes. Based on this knowledge, a hint rules table is defined, and is used by the component to analyze the database query and to add appropriate hints to the query to achieve better performance.

Oracle Query Optimization Document
2. Allotted Tablespace Exceeded:
When the Content Server instance creates its database tablespace, it only allocates 50 extents. As the database grows and is re-indexed, it uses more space (extents). Eventually, the 50 extents limit is exceeded. At some point in the transfer, one of your files tried to extend past the 'max extents' limit. In this case, try implementing one or more of the following solutions:
Look for weblayout queries that are excessively large, eliminate them, and retry your transfer.
Perhaps a Content Server user does not have the right permission grants (resource and connect) to the Content Server schema. That user must have the temporary tablespace and default tablespace set to the Content Server defaults.
If the system 'max extents' limit is less than the system maximum, you must increase the number of extents that are available. Refer to your Oracle Database documentation or ask your database administrator for the appropriate Oracle SQL command to increase the tablespace extents.
You can optionally choose to re-create the database using larger initial, next or percent to grow parameters for the tablespaces. In this case, it is advisable to set the initial extents and next extents to 1Mb. Set the percent to grow parameter (PCTINCREASE) to 0% to allow the tables to automatically grow on an as-needed basis.




WebTier - OHS Request Forwarding

http://docs.oracle.com/cd/E29597_01/doc.1111/e15483/extend_ucm.htm

10.14 Configuring Oracle HTTP Server for the WLS_WCC Managed Servers

To enable Oracle HTTP Server to route to WCC_Cluster, which contains the WLS_WCC1 and WLS_WCC2 managed servers, you must set the WebLogicCluster parameter to the list of nodes in the cluster:

On WEBHOST1 and WEBHOST2, add the following lines to the ORACLE_BASE/admin/instance_name/config/OHS/component_name/mod_wl_ohs.conf file:


# UCM
<Location /cs>
   WebLogicCluster 192.138.1.161:16200,192.168.1.161:16200
   SetHandler weblogic-handler
   WLCookieName JSESSIONID
   WLProxySSL ON
   WLProxySSLPassThrough ON
</Location>

<Location /adfAuthentication>
   WebLogicCluster 192.138.1.161:16200,192.168.1.161:16200
   SetHandler weblogic-handler
   WLCookieName JSESSIONID
   WLProxySSL ON
   WLProxySSLPassThrough ON
</Location>

<Location /_ocsh>
   WebLogicCluster 192.138.1.161:16200,192.168.1.161:16200
   SetHandler weblogic-handler
   WLCookieName JSESSIONID
   WLProxySSL ON
   WLProxySSLPassThrough ON
</Location>


<Location /console>
   WebLogicCluster 192.138.1.161:7001,192.168.1.161:7001
   SetHandler weblogic-handler
   WLCookieName JSESSIONID
   WLProxySSL ON
   WLProxySSLPassThrough ON
</Location>


Restart Oracle HTTP Server on both WEBHOST1 and WEBHOST2:

ORACLE_BASE/admin/instance_name/bin/opmnctl restartproc ias-component=ohsX
For WEBHOST1, use ohs1 for ias-component and for WEBHOST2 use ohs2.

10.15 Validating Access Through Oracle HTTP Server

You should verify URLs to ensure that appropriate routing and failover is working from Oracle HTTP Server to WCC_Cluster. To verify the URLs:

While WLS_WCC2 is running, stop WLS_WCC1 using the WebLogic Server Administration Console.

Access http://WEBHOST1:7777/cs to verify it is functioning properly.

Start WLS_WCC1 from the WebLogic Server Administration Console.

Stop WLS_WCC2 from the WebLogic Server Administration Console.

Access http://WEBHOST1:7777/cs to verify it is functioning properly.

Clean Oracle Database


Execute the commands below to clean the database. Note that this drops all tables, sequences, functions, types, and procedures in the current schema.


BEGIN
FOR c IN (SELECT table_name FROM user_tables) LOOP
EXECUTE IMMEDIATE ('DROP TABLE "' || c.table_name || '" CASCADE CONSTRAINTS');
END LOOP;

FOR s IN (SELECT sequence_name FROM user_sequences) LOOP
EXECUTE IMMEDIATE ('DROP SEQUENCE ' || s.sequence_name);
END LOOP;

FOR f IN (select object_name from user_objects where object_type = 'FUNCTION') LOOP
EXECUTE IMMEDIATE ('DROP FUNCTION ' || f.object_name);
END LOOP;

FOR t IN (select object_name from user_objects where object_type = 'TYPE') LOOP
EXECUTE IMMEDIATE ('DROP TYPE ' || t.object_name||' FORCE');
END LOOP;

FOR p IN (select object_name from user_objects where object_type = 'PROCEDURE') LOOP
EXECUTE IMMEDIATE ('DROP PROCEDURE ' || p.object_name);
END LOOP;

END;
/
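
On Oracle 10g and later, dropped tables are kept in the recycle bin; to clear them out completely you can also run:

PURGE RECYCLEBIN;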

Idoc Script for WebCenter Profile Security

Some Idoc Script snippets for WebCenter profile security implementation.

For Document CheckIn:-

<$if userHasRole("ChairmanAdmin")$>
<$isLinkActive=1$>
<$elseif userHasRole("ChairmanSubAdmin")$>
<$isLinkActive=1$>
<$elseif userHasRole("ChairmanContributor")$>
<$isLinkActive=1$>
<$endif$>

For Document Search:-

<$if userHasRole("ChairmanAdmin")$>
<$isLinkActive=1$>
<$elseif userHasRole("ChairmanSubAdmin")$>
<$isLinkActive=1$>
<$elseif userHasRole("ChairmanContributor")$>
<$isLinkActive=1$>
<$elseif userHasRole("ChairmanViewer")$>
<$isLinkActive=1$>
<$endif$>

Default Title Value:-

<$if #active.dDocType like "FinanceSupplierInvoice"$>
<$dprDefaultValue="Supplier Invoice"$>
<$endif$>

Default Derived Title :-
<$dprDerivedValue="Sales - "&#active.xOrderNumber$>
<$dprDerivedValue=#active.xSalesClassification&"  - "&#active.xOrderNumber$>


Introduction to IDOC Script

- Server Side scripting language

- <!--$script--> syntax is used in HCSP and HCSF files.

- <!--$variable = "Hello World" -->

- The six basic uses of Idoc Script are Includes, Variables, Functions, Conditionals, Looping, and the Administration Interface (which allows you to use Idoc Script in Content Server applets and customizations).

- Include :
Define Include : <!--@dynamichtml name@>code<!--@end@> / <@dynamichtml name@>code<@end@>
Include in page : <!--$IncludeName$-->
- Standard includes are defined in the <install_dir>/shared/config/resources/std_page.htm file.
- Body definition example :
<@dynamichtml body_def@>
<body
<$if background_image$>
background="<$HttpImagesRoot$><$background_image$>"
<$elseif colorBackground$>
bgcolor="<$colorBackground$>"
<$endif$>

<$if xpedioLook$>
link="#663399" vlink="#CC9900"
<$else$>
link="#000000" vlink="#CE9A63" alink="#9C3000"
<$endif$>
marginwidth="0" marginheight="0" topmargin="0" leftmargin="0"
>
<@end@>

- Include in page : <$include body_def$> / <!--$include body_def -->

- Super Tag - The super tag is used to define exceptions to an existing include. The super tag tells the include to start with an existing include and then add to it or modify using the specified code.

- Component Defined :  <@dynamichtml my_resource@> <$a = 1, b = 2$> <@end@>

- Enhances the my_resource include using the super tag
<@dynamichtml my_resource@> <$include super.my_resource$> <!--Change "b" but not "a" --><$b = 3$> <@end@>

- Creating variables : <$variable_name$> / <!--$variable_name -->  (e.g. <!--$a=10 -->)
 Standard configuration variables are defined in the config/config.cfg file.
 Using commas as separators <$a=1,b=2$> / <!--$a=1,b=2 -->

- Referencing a Variable in a Conditional <$if variable_name$>

- Functions : Global Functions, Personalization Functions (utGetValue,utLoad,utLoadResultSet)

- Conditionals : <$if condition$> Statement; <$else$> Statement; <$elseif condition$> Statement; <$endif$>
Boolean Operators - and (<$if 3>2 and 4>3$>), or (<$if 3>2 or 3>4$>), not (<$if not 3=4$>)

- Looping : For loop ( <$loop ResultSet_name$> Statement; <$endloop$> ), While Loop (<$loopwhile condition$> Statement; <$endloop$>).
<$break$> causes the innermost loop to be exited.

- Administration Interface : You can use Idoc Script in several areas of the administration interface, including:
Workflow Admin, Web Layout Editor, Batch Loader, Archiver, System Properties, Search Expressions, E-mail

- Special Keywords : #active (<$#active.variable$>),  #local (<$#local.variable$>), #env (<$#env.variable$>),exec (<$exec expression$>),
include (<$include ResourceName$>), super (<$include super.<include>$>)

- Keywords and Functions : exec keyword, eval function, include keyword, inc function

- Operators : Comparison Operators (==, !=, <, <=, >, >=), Special String Operators (&, like, | as in <$if "car" like "car|truck|van"$>)

- MetaData Fields : All internal metadata field names begin with either a “d” (Predefined field names begin with a “d”. For example, dDocAuthor) or an “x” (Custom field names begin with an “x”. For example, xDepartment.)

Idoc Scripting:

Include :
Define:
<@dynamichtml name@>
code
<@end@>
Include in page:
<$include name$>

Standard includes are defined in the : <install_dir>/shared/config/resources/std_page.htm file.

Variable:

<$variable_name$> to reference a variable; <$variable=value$> to assign one, e.g. <$i=0$>
e.g. <$a=1,b=2$> (multiple assignments separated by commas)
Functions:

Conditions:

• <$if condition$>
• <$else$>
• <$elseif condition$>
• <$endif$>

<$if xDepartment$>
<td><$xDepartment$></td>
<$else$>
<td>Department is not defined.</td>
<$endif$>
<$xDepartment=""$>

Looping:

<$name="SearchResults"$>
<$loop name$>
<!--output code-->
<$endloop$>
The <$loop$> keyword expects a literal result set name rather than a variable holding one, so instead you need to use the following code:
<$name="SearchResults"$>
<$rsFirst(name)$>
<$loopwhile getValue(name, "#isRowPresent")$>
<!--output code-->
<$rsNext(name)$>
<$endloop$>


<$QueryText="dDocType <matches> 'ADACCT'"$>
<$executeService("GET_SEARCH_RESULTS")$>
<table>
<tr>
<td>Title</td><td>Author</td>
</tr>


<$loop SearchResults$>
<tr>
<td><a href="<$SearchResults.URL$>">
<$SearchResults.dDocTitle$></a></td>
<td><$SearchResults.dDocAuthor$></td>
</tr>
<$endloop$>
</table>

<$loopwhile condition$>
code
<$endloop$>


<$abc=0$>
<$loopwhile abc<10$>
<$abc=(abc+2)$>
<$endloop$>
