Author: Kaemon Ng Bao Leng | TP082265 | Course: Research Methods for Computing and Technology (CT098-3-2) | Institution: Asia Pacific University of Technology & Innovation (APU)
| Document | Description |
|---|---|
| KAEMON NG BAO LENG PART1 RESUBMISSION.pdf | Part 1 – Project Proposal |
| KAEMON NG BAO LENG PART2.pdf | Part 2 – Literature Review & Methodology |
This research investigates the effectiveness of Receptance Weighted Key Value (RWKV)-based architecture in enhancing operational efficiency for public transport systems.
Public transport operators struggle with supply-demand imbalances caused by dynamic urban data. Existing deep learning solutions such as the BERT Transformer Encoder achieve high prediction accuracy but suffer from quadratic O(N²) computational complexity in sequence length, making real-time fleet management computationally expensive.
This research proposes an RWKV-based architecture that introduces a linear O(N) attention mechanism, combining:
- Transformer's parallel training capability
- RNN's constant-memory inference capability
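The constant-memory inference property comes from RWKV's WKV recurrence, which replaces the attention score matrix with a fixed-size running state. Below is a minimal NumPy sketch of a simplified, single-head WKV step; the function name, the scalar decay `w`, and the "bonus" `u` follow the published RWKV formulation in simplified form, and the specific shapes here are illustrative assumptions, not code from the proposal.

```python
import numpy as np

def rwkv_wkv_step(state, k_t, v_t, w, u):
    """One step of a simplified RWKV WKV recurrence.

    state = (a, b): running weighted sums of values and weights.
    Its size is fixed, so memory stays constant however long the
    sequence grows (unlike the O(N^2) attention score matrix).
    """
    a, b = state
    # Output: decayed history mixed with the current token,
    # which gets an extra "bonus" weight e^{u + k_t}
    wkv = (a + np.exp(u + k_t) * v_t) / (b + np.exp(u + k_t))
    # Update state: decay history by e^{-w}, then absorb the token
    a = np.exp(-w) * a + np.exp(k_t) * v_t
    b = np.exp(-w) * b + np.exp(k_t)
    return wkv, (a, b)

# Illustrative usage: process a sequence token by token; the state
# tuple never grows, so per-step cost and memory are both O(1).
d = 4                                  # assumed channel width
state = (np.zeros(d), np.zeros(d))
rng = np.random.default_rng(0)
for _ in range(8):
    out, state = rwkv_wkv_step(state, rng.normal(size=d),
                               rng.normal(size=d), w=0.5, u=0.1)
```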
- Examine how public transport operations are affected by low demand-prediction accuracy
- Investigate the shortcomings of BERT Transformer Encoder in public transport
- Propose RWKV to improve prediction accuracy and lower resource consumption
- Direct Users: Public transport operators, fleet managers, dispatchers
- Indirect Users: Daily commuters, urban policymakers
- Sampling: Purposive sampling (operators) + Cluster sampling (commuters)
- Data Collection: Survey forms (quantitative) + Interviews (qualitative)
| Aspect | BERT Transformer Encoder | RWKV |
|---|---|---|
| Complexity | O(N²) | O(N) |
| Memory | Grows with sequence length | Constant |
| Training | Parallel | Parallel |
| Inference | Heavy | Lightweight |
| Real-time Use | Difficult | Feasible |
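The memory rows of the table can be made concrete with a rough float count at inference time. The formulas below are illustrative assumptions (an N×N score matrix plus an N×d key/value cache for self-attention, versus a fixed 2×d recurrent state for RWKV), not profiler measurements.

```python
def attention_floats(n: int, d: int) -> int:
    # Self-attention materialises an n x n score matrix and
    # keeps an n x d key cache plus an n x d value cache
    return n * n + 2 * n * d

def rwkv_floats(n: int, d: int) -> int:
    # RWKV carries only a fixed-size recurrent state (two
    # d-dimensional accumulators), independent of sequence length n
    return 2 * d
```

At d = 64, growing the sequence from 10 to 10,000 tokens multiplies the attention footprint by several orders of magnitude while the RWKV state stays the same size, which is what makes long-horizon, real-time demand prediction feasible on modest hardware.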
RWKV · Deep Learning · Public Transport · Demand Prediction · BERT · Transformer · Linear Attention · Sustainable Development Goals (SDG11)