The FDMEE process log is the first step in troubleshooting a failed process. The process log captures, in a text-based file, information about an execution of application functionality, and it can contain an extraordinary amount of information. As highlighted in Chapter 1, the level of detail contained in the process log is determined by the Log Level setting at the System, Application, or User Settings level. Log level 5 is the most verbose and results in the process log containing the greatest amount of detail.
The process logs can be accessed from the Workflow tab by clicking on Process Details. Each execution in the application for which a process ID is generated is displayed. The Show hyperlink in the Log column allows the process log for a given Process ID to be downloaded locally. This is extremely valuable since not all users, including administrators, will always have access to the FDMEE network share directory where the process log is generated and saved. The process logs are stored in the Logs directory of the application Outbox.
Figure 9-1: Process Details
The below image displays the process details. Clicking on the Show text in the Log column for a given process ID allows the log file to be downloaded for viewing.
A unique process ID, and therefore process log, is generated for each execution of the workflow, batches, custom scripts, reports, initializing of a source system and purging of application elements. Importantly, there is no process ID associated with any mapping table maintenance performed within the web interface. This means that there is no ability to investigate – using the process log – any mapping changes made in the application.
Figure 9-2: Process Logs Directory
The process log file name always contains the process ID. For any execution of the workflow, the process log is named TargetApp_ProcessID.log, where TargetApp is the name of the Target Application for which the POV was processed and ProcessID is the numeric process ID displayed in Process Details.
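As an illustration, the naming convention above can be handled with a small helper when sorting through a Logs directory programmatically. The function name and the sample file name below are hypothetical; only the TargetApp_ProcessID.log pattern comes from the text.

```python
import re

def parse_log_name(filename):
    """Split a process log file name of the form TargetApp_ProcessID.log
    into its target application name and numeric process ID."""
    m = re.match(r"^(?P<app>.+)_(?P<pid>\d+)\.log$", filename)
    if m is None:
        return None  # not a workflow process log
    return m.group("app"), int(m.group("pid"))

# The application name itself may contain underscores; the greedy pattern
# treats everything before the final _<digits>.log as the application name.
print(parse_log_name("COMMA4DIM_1881.log"))  # -> ('COMMA4DIM', 1881)
```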
Process Log Header
The header section of every process log contains valuable information about the execution including the Log level, the user executing the process, and if applicable, the location, category, period and data load rule name processed. This is useful when needing to investigate an error associated with the workflow.
Figure 9-3: Process Log Header
The below image shows the header from a sample process log. Notice on the second line of the log file that the process log level is set to five (5) meaning the log will contain the most detailed information available.
Before beginning the process of analysing the log, confirm the logging level in the header of the process log associated with the failed process. If the logging level is not set to five (5), update the log level settings, recreate the problem by executing the process again and then utilize the log file from the subsequent execution.
Process Log Analysis
As previously noted, the process log can contain very detailed information, ranging from the SQL statements executed by the application to detailed Jython errors encountered during a script execution. This can be daunting not only to new users of the application but also to experienced ones. When analysing a log file for a failed process, first open the log in a more powerful text editor like Notepad++ or Textpad, as they respect carriage return and line feed characters better than Notepad.
Figure 9-4: Side by Side Text Editor Comparison
The below image shows a side-by-side comparison of the same process log in Textpad (left) and Notepad (right). The Textpad view is far easier to interpret visually.
When trying to identify the source of a failed process, perform a text search for the term FATAL or ERROR. This will often bring you to the section of the log that can be used to identify the source of the problem.
Figure 9-5: FATAL Keyword
The below image displays an error in the validateData (green) process. The error highlighted in yellow indicates that the process failed due to unmapped members. This error can be resolved by checking the Validation Errors tab in the Data Load Workbench.
Certain errors will be easily identifiable and remedied based on the error information contained in the process log. In the event of Jython script errors, the script and the line of the script that failed to execute are noted. This is incredibly useful even when the error information provided is not easily understood.
Figure 9-6: Script Error
The below image shows an error generated by the ER_ExportMaps script. The error was encountered when executing line 14 of the script. The error message circled in red may not mean much at first blush, but to an experienced FDMEE administrator it indicates that the variable strLoc was not properly populated, since the Jython method upper() cannot be applied to a None value.
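The class of error described above can be reproduced and guarded against in a few lines. This sketch uses plain Python; the exact exception wording in a Jython 2.x environment may differ slightly, and the require helper is a hypothetical defensive pattern, not part of FDMEE.

```python
# Reproducing the error: calling the string method upper() on a
# variable that was never populated (i.e. is still None).
strLoc = None
try:
    strLoc.upper()
except AttributeError as exc:
    print(exc)  # 'NoneType' object has no attribute 'upper'

# A defensive pattern: fail early with a clear message instead of
# letting the AttributeError surface deep inside the script.
def require(value, name):
    if value is None:
        raise ValueError("%s was not populated before use" % name)
    return value
```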
There are instances where the FATAL or ERROR keywords are not found in the process log. Determining the root cause is more challenging when the process log lacks explicit information about the error. In these cases there are several systematic options: more detailed process log analysis, FDMEE services log analysis, and leveraging ODI Studio.
Detailed Process Log Analysis
A more detailed analysis of the process log is a time-consuming process, but the reward of identifying and correcting an issue makes the effort worthwhile. This analysis requires a SQL client – SQL Developer when FDMEE runs on an Oracle RDBMS, or SQL Server Management Studio in the case of Microsoft SQL Server. As noted above, the process log contains a number of SQL statements executed by the application. A SQL client allows the SQL statements contained in the process log to be executed, and the resulting data set can be used to determine whether incorrect or missing information somewhere in the application is causing the error.
When establishing a connection to the FDMEE RDBMS repository, all of the connection information – except the password – is available in the System Settings under the ODI Profile Type.
Figure 9-7: SQL Connection Information
The below image shows how the ODI Profile Type can be used to gather database connection information. The URL specifies the database server name (yellow), the port (green), the SID for Oracle (brown) or the database name for SQL Server (not shown). The user login (pink) to the database is specified in the ODI Master Repository User field. The password can usually be supplied by IT or by the database administrator.
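The URL components described above can be pulled apart programmatically when documenting connections. This is a sketch only: the two URL shapes handled below (Oracle thin driver and a DataDirect-style SQL Server driver) are typical examples, and other driver formats exist in practice.

```python
import re

def parse_jdbc_url(url):
    """Extract the server, port, and SID/database name from a JDBC URL.
    Handles two common shapes (others exist):
      Oracle thin:  jdbc:oracle:thin:@server:1521:ORCL
      SQL Server:   jdbc:weblogic:sqlserver://server:1433;databaseName=FDMEE
    """
    m = re.match(r"jdbc:oracle:thin:@([^:]+):(\d+):(\w+)", url)
    if m:
        return {"server": m.group(1), "port": int(m.group(2)), "db": m.group(3)}
    m = re.match(r"jdbc:\w+:sqlserver://([^:;/]+):(\d+);databaseName=(\w+)", url)
    if m:
        return {"server": m.group(1), "port": int(m.group(2)), "db": m.group(3)}
    return None  # unrecognised driver format
```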
A word of caution – the ODI Master Repository login has full access to the database and therefore to the data contained in the FDMEE application. A safer alternative is to create a view-only database user for troubleshooting purposes.
When executing SQL statements written to the process log, be sure not to execute any DML statements – INSERT or UPDATE – as these would modify the data contained in the application's database. When encountering a DML statement, transform it into a simple SELECT statement (or execute only the SELECT subquery) by removing the portion of the query that controls the update of data within the application.
Figure 9-8A: Process Log DML Statement
The below image shows a DML statement generated by the application and logged in the process log. The portion in yellow represents a portion of the SQL query that would modify data in the database. The portion in green represents the SELECT subquery which, when executed independently of the INSERT statement, would return a record set.
Figure 9-8B: Modified SQL Statement for Debugging
The below image shows the result of executing the SELECT statement from the DML statement in Figure 9-8A. The records circled in red show the result of the query.
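The DML-to-SELECT transformation described above can be sketched end to end. The example below uses an in-memory SQLite database with made-up table and column names purely for illustration – the real FDMEE staging tables (such as TDATASEG) have a very different structure – but the technique of stripping the INSERT clause and running only the SELECT subquery is the same.

```python
import sqlite3

# Illustrative schema only; real FDMEE tables differ.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE src (account TEXT, amount REAL)")
conn.execute("CREATE TABLE stage (account TEXT, amount REAL)")
conn.executemany("INSERT INTO src VALUES (?, ?)",
                 [("1000", 50.0), ("2000", 75.0)])

# A DML statement as it might appear in the process log:
dml = "INSERT INTO stage (account, amount) SELECT account, amount FROM src"

# For debugging, strip everything before the SELECT keyword and run only
# the subquery -- this returns the record set without modifying any data.
select_only = dml[dml.upper().index("SELECT"):]
rows = conn.execute(select_only).fetchall()
print(rows)
```

Evaluating the returned rows against what the application should be loading is exactly the check described in the paragraph that follows.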
The resulting record set can be evaluated to ensure that the information it contains is accurate or represents data that should be leveraged by the application. A deep understanding of the application, its tables, and its data is critical to determining whether the records returned by a query execution are accurate. This level of understanding comes with experience and exploration. Taking the time to understand the database structure and how each table relates to the different web components is key to developing a strong set of troubleshooting skills.