This code analyzes log files from WMS (Warehouse Management System) XDock Pegging operations to find problems with database record locking and Oracle database errors.
In simple terms, it:
- Scans through large log files looking for WDD (Warehouse Delivery Detail) lock attempts
- Tracks whether locks succeeded or failed and how long they took
- Finds Oracle database errors (like ORA-00054, ORA-01422, etc.)
- Creates reports in multiple formats (CSV, Excel, HTML) for easy analysis
Think of it like a detective searching through thousands of pages of system logs to find and highlight all the important problems, then organizing them into easy-to-read reports.
- You point it to a folder containing `.log` files
- It processes each file one by one, reading through every line
- Large files (hundreds of MB) are handled efficiently
- It looks for lines containing `WMS_XDock_Pegging_Pub:`
- It finds the Delivery ID (`Del Id`) being processed
- It tracks when a lock wait started (`wdd update wait time`)
- It records whether the lock succeeded (`RM - Got WDD lock`) or failed (`Could not lock the WDD demand line record`)
- It calculates how long each lock attempt took
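The matching logic can be pictured roughly like this. The marker strings come from the description above; the `Del Id` regex is an assumption about how the ID appears on the line, not the tool's actual pattern:

```python
import re

# Marker strings taken from the log patterns described above.
LOCK_SUCCESS = "RM - Got WDD lock"
LOCK_FAILURE = "Could not lock the WDD demand line record"
# Hypothetical pattern for how "Del Id" might appear in a line.
DEL_ID_RE = re.compile(r"Del Id\s*[:=]?\s*(\d+)")

def classify_lock_line(line):
    """Return (del_id, result) for a WMS_XDock_Pegging_Pub line, else None."""
    if "WMS_XDock_Pegging_Pub:" not in line:
        return None
    m = DEL_ID_RE.search(line)
    del_id = m.group(1) if m else None
    if LOCK_SUCCESS in line:
        return (del_id, "LOCK SUCCESS")
    if LOCK_FAILURE in line:
        return (del_id, "LOCK FAILED")
    return (del_id, None)
```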
- It finds all `ORA-XXXXX` error codes in the logs
- It skips `ORA-01403` (no data found) since this is often expected behavior
- It captures the error message, timestamp, line number, and context
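The error scan can be sketched like this. The five-digit `ORA-` regex and the skip set follow the description above; the real implementation may capture more context per hit:

```python
import re

ORA_RE = re.compile(r"ORA-\d{5}")
SKIPPED = {"ORA-01403"}  # "no data found" is often expected, so it is ignored

def find_oracle_errors(lines):
    """Yield (line_number, error_code, line) for each reportable ORA error."""
    for lineno, line in enumerate(lines, start=1):
        for code in ORA_RE.findall(line):
            if code not in SKIPPED:
                yield (lineno, code, line.rstrip())
```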
- CSV file: Simple spreadsheet with WDD lock results
- Excel file: Professional workbook with multiple sheets (WDD locks, Oracle errors, Error summary)
- Console output: Color-coded summary displayed in your terminal
- Python version: 3.9 or higher
- Required packages: `pip install openpyxl` (openpyxl is needed for Excel report generation)
- Log files: One or more `.log` files from WMS XDock Pegging operations
- Log format: Files must contain timestamps in the format `[DD-MON-YY HH:MM:SS]`
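Given that format, timestamp parsing can be sketched as below. This is a minimal version of what a `parse_timestamp` helper presumably does; the regex and the `%d-%b-%y %H:%M:%S` format string are inferred from the `[DD-MON-YY HH:MM:SS]` pattern, not copied from the source:

```python
import re
from datetime import datetime

TS_RE = re.compile(r"\[(\d{2}-[A-Z]{3}-\d{2} \d{2}:\d{2}:\d{2})\]")

def parse_timestamp(line):
    """Extract a [DD-MON-YY HH:MM:SS] timestamp from a log line, or None."""
    m = TS_RE.search(line)
    if not m:
        return None
    # A lock's Time_Diff_Seconds is then (result_time - wait_start).total_seconds()
    return datetime.strptime(m.group(1), "%d-%b-%y %H:%M:%S")
```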
- Log files should be in a single folder
- Files can be any size (the tool handles large files efficiently)
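Handling large files efficiently generally means streaming line by line rather than loading the whole file; a sketch of the idea (function names here are illustrative, not the tool's API):

```python
def scan_lines(fh):
    """Iterate (line_number, line) pairs without loading everything into memory."""
    for lineno, line in enumerate(fh, start=1):
        yield lineno, line.rstrip("\n")

def scan_log(path):
    # errors='replace' keeps the scan going even if a file contains odd bytes
    with open(path, "r", encoding="utf-8", errors="replace") as fh:
        yield from scan_lines(fh)
```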
```
python logparser.py /path/to/your/logs
```

Step 1: Install Required Packages

```
pip install openpyxl
```

Step 2: Prepare Your Log Files
- Place all log files you want to analyze in a single folder
- Ensure files have the `.log` extension (or specify your own pattern)
Step 3: Run the Code
Basic usage (analyzes all .log files):

```
python logparser.py /path/to/logs
```

With custom output filename:

```
python logparser.py /path/to/logs my_results.csv
```

With custom file pattern:

```
python logparser.py /path/to/logs results.csv "*.txt"
```

Step 4: Find Your Output

After running, you'll find these files in your current directory:

- `wdd_lock_results.csv` - CSV with WDD lock data
- `wdd_lock_results.xlsx` - Excel workbook with all reports
To extract all log lines for a specific ID:

```
python logparser.py /path/to/logs --id 12345678
```

This creates a folder `id_traces_12345678` containing extracted traces from each log file where the ID was found.
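Conceptually, the trace extraction is a scan for lines mentioning the ID; a sketch (whether the real tool matches IDs more strictly than a plain substring test is not specified):

```python
def extract_id_traces(lines, search_id):
    """Return (line_number, line) for every line that mentions the given ID."""
    needle = str(search_id)
    return [(n, ln.rstrip()) for n, ln in enumerate(lines, start=1) if needle in ln]
```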
| Scenario | Command | What It Does |
|---|---|---|
| Basic analysis | `python logparser.py ./logs` | Analyzes all .log files in ./logs folder |
| Custom output | `python logparser.py ./logs output.csv` | Saves results to output.csv/xlsx |
| Different file type | `python logparser.py ./logs results.csv "*.txt"` | Analyzes .txt files instead |
| Trace specific ID | `python logparser.py ./logs --id 98765` | Extracts all lines mentioning ID 98765 |
Scenario: You need to check overnight batch processing logs for lock failures.

How to use:

```
python logparser.py /app/logs/overnight results_$(date +%Y%m%d).csv
```

Example: Running this on Monday morning gives you results_20241211.csv with all weekend lock issues summarized.
Scenario: A user reports that delivery ID 45678901 had problems. You need to find all related log entries.

How to use:

```
python logparser.py /app/logs --id 45678901
```

Example: This creates an id_traces_45678901/ folder with all log lines mentioning this delivery, making it easy to trace the issue.
Scenario: A DBA needs a summary of all Oracle errors for the past week.

How to use:

```
python logparser.py /app/logs/week results.csv
```

Example: Open results.xlsx and check the "Oracle Errors" and "Error Summary" sheets for a complete breakdown of all database errors.
Scenario: The system is slow and you suspect lock contention.

How to use:

```
python logparser.py /app/logs perf_analysis.csv
```

Example: Check the "Time Diff (s)" column in the results - entries with high values indicate lock wait times that may be causing slowdowns.
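You can also filter the CSV output programmatically. In this sketch the `SAMPLE` rows and the 5-second threshold are made up for illustration; the column names follow the output-column table in this document:

```python
import csv
import io

# Hypothetical rows shaped like the documented CSV columns.
SAMPLE = """File,Del_ID,Wait_Start,Result_Time,Time_Diff_Seconds,Result
a.log,123,01:00:00,01:00:00,0,LOCK SUCCESS
a.log,456,01:00:00,01:00:09,9,LOCK SUCCESS
b.log,789,01:05:00,01:05:30,30,LOCK FAILED
"""

def slow_locks(csv_text, threshold=5.0):
    """Return rows whose lock wait exceeded `threshold` seconds."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row for row in reader if float(row["Time_Diff_Seconds"]) > threshold]
```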
| Color | Meaning |
|---|---|
| RED BOLD | Lock FAILED - the system could not obtain the lock |
| RED | Lock succeeded but with delay > 0 seconds |
| GREEN | Lock succeeded immediately (no delay) |
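The color rules above can be implemented with ANSI escape codes along these lines. The `Colors` name mirrors the class mentioned in the code structure section, but the specific escape sequences here are illustrative assumptions:

```python
class Colors:
    # Standard ANSI escape codes (illustrative; the tool's exact codes may differ)
    RED = "\033[31m"
    RED_BOLD = "\033[1;31m"
    GREEN = "\033[32m"
    RESET = "\033[0m"

def colorize(result, delay_seconds):
    """Pick a terminal color per the legend: failed, delayed, or immediate."""
    if result == "LOCK FAILED":
        return Colors.RED_BOLD
    return Colors.RED if delay_seconds > 0 else Colors.GREEN
```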
| Column | Description |
|---|---|
| File | Source log file name |
| Del_ID | Delivery ID being processed |
| Wait_Start | When the lock wait began |
| Result_Time | When the lock attempt completed |
| Time_Diff_Seconds | How long the lock took (in seconds) |
| Result | "LOCK SUCCESS" or "LOCK FAILED" |
Sheet 1: WDD Lock Results
- Same data as CSV but with color-coded rows
- Legend included at top
- Statistics summary
Sheet 2: Oracle Errors
- Error code (ORA-XXXXX)
- Source file and line number
- Timestamp and context
- Full error message
Sheet 3: Error Summary
- Count of each error code
- Percentage breakdown
- Reference table of common Oracle error descriptions
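The counts and percentage breakdown in this sheet can be computed along these lines (a sketch using `collections.Counter`; the actual implementation may differ):

```python
from collections import Counter

def summarize_errors(codes):
    """Map each ORA code to (count, percentage of all errors)."""
    counts = Counter(codes)
    total = sum(counts.values())
    return {code: (n, round(100.0 * n / total, 1)) for code, n in counts.most_common()}
```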
`process_folder(folder_path, output_path, file_pattern, generate_reports)`
- Main function that orchestrates the log analysis
- Processes all matching files in a folder
- Generates all output files
`extract_id_from_folder(folder_path, search_id, file_pattern)`
- Extracts all traces for a specific ID
- Creates output files in a dedicated folder
`parsers.py` - Core parsing functions

- `parse_timestamp(line)` - Extracts timestamp from a log line
- `extract_wdd_lock_info(file_path)` - Finds WDD lock attempts
- `extract_oracle_errors(file_path)` - Finds Oracle errors
- `extract_id_traces(file_path, search_id)` - Extracts lines for a specific ID
`console_output.py` - Terminal display

- `print_summary(results, stats, files_processed)` - Prints WDD summary
- `print_oracle_errors(oracle_errors)` - Prints Oracle error summary
- `Colors` class - ANSI color codes for terminal output
`excel_report.py` - Excel generation

- `generate_excel(results, output_excel, stats, oracle_errors)` - Creates the Excel workbook
`html_report.py` - HTML generation (currently disabled in main)

- `generate_html(results, output_html, stats, files_processed, oracle_errors)` - Creates the HTML report
Why it happens: The folder path is wrong, or the files have a different extension.

How to fix:

- Check that the folder path exists: `ls /your/path`
- Check the file extensions: `ls /your/path/*.log`
- Try a different pattern: `python logparser.py /path results.csv "*.txt"`
Why it happens: The Excel library is missing.

How to fix:

```
pip install openpyxl
```

Why it happens: Log files don't contain WMS_XDock_Pegging patterns.

How to fix:

- Verify the logs are from the right system
- Check a log file manually for the `WMS_XDock_Pegging_Pub:` text
- The tool still captures Oracle errors even if no WDD locks are found
Why it happens: The log file contains unusual characters.

How to fix: The tool handles this automatically with `errors='replace'`, but if issues persist, check whether the logs are in a non-standard encoding.
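For reference, `errors='replace'` substitutes the Unicode replacement character (U+FFFD) for undecodable bytes, so parsing continues instead of raising `UnicodeDecodeError`:

```python
def safe_decode(raw_bytes):
    """Decode possibly-corrupt log bytes; invalid sequences become U+FFFD."""
    return raw_bytes.decode("utf-8", errors="replace")
```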
Why it happens: Files are extremely large (multiple GB).

How to fix:
- Split large log files into smaller chunks
- Process logs day-by-day instead of all at once
| What | Details |
|---|---|
| Purpose | Analyze WMS logs for WDD lock issues and Oracle errors |
| Main Input | Folder containing .log files |
| Main Output | CSV, Excel workbook with analysis results |
| Run Command | python logparser.py /path/to/logs |
| ID Trace Mode | python logparser.py /path/to/logs --id <ID> |
| Required Package | openpyxl (for Excel output) |
| Python Version | 3.9+ |
```
logparser/
├── logparser.py          # Main entry point
└── logparser/            # Package directory
    ├── __init__.py       # Package exports
    ├── parsers.py        # Log parsing logic
    ├── console_output.py # Terminal display
    ├── excel_report.py   # Excel generation
    └── html_report.py    # HTML generation
```
- Input: Log files in the specified folder
- Parsing: `parsers.py` extracts WDD locks and Oracle errors
- Processing: The main script aggregates results and calculates statistics
- Output: Console display + CSV + Excel (+ HTML if enabled)
Documentation generated for LogParser v1.0.0