Our approach to the PhysioNet Challenge 2025 centers on a custom Ranking Aware Tversky (RAT) loss, explicitly designed to align optimization with the competition metric: the true positive rate among the top 5% of ranked predictions (TPR@5%). Standard objectives such as Binary Cross-Entropy and Focal Loss optimize average accuracy and therefore fail to prioritize the high-confidence predictions critical for this task. RAT introduces a differentiable soft top-k weighting that emphasizes the most confident predictions, penalizes overconfident false positives through entropy regularization, and stabilizes training with a BCE anchor. To improve representation learning under severe class imbalance, we incorporate a lightly weighted supervised contrastive term, which further enhances intra-class cohesion and inter-class separation. Combined with a ResNet-18 backbone augmented with Group Normalization and Squeeze-and-Excitation modules, RAT consistently outperformed baseline losses in local validation, achieving the highest TPR@5%. These results highlight the value of rank-aware loss design for ranking-based evaluation metrics on imbalanced clinical datasets.
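To make the structure of the loss concrete, the sketch below combines the three ingredients named above: a soft top-k weighting, a soft Tversky term, an entropy regularizer, and a BCE anchor. This is a minimal NumPy illustration, not the authors' implementation; the exact functional forms (the sigmoid-around-threshold top-k relaxation, the `alpha`/`beta`/`lam_bce`/`lam_ent` values, and the way the entropy term targets negatives) are assumptions for illustration only.

```python
import numpy as np

def soft_topk_weights(scores, k_frac=0.05, tau=0.05):
    # Differentiable emphasis on the top-k fraction: a sigmoid centered at the
    # k-th largest score acts as a smooth top-k indicator (hypothetical choice).
    k = max(1, int(np.ceil(k_frac * len(scores))))
    thresh = np.sort(scores)[-k]
    return 1.0 / (1.0 + np.exp(-(scores - thresh) / tau))

def rat_loss(logits, labels, alpha=0.7, beta=0.3, lam_bce=1.0, lam_ent=0.1):
    eps = 1e-8
    p = 1.0 / (1.0 + np.exp(-logits))   # sigmoid probabilities
    w = soft_topk_weights(p)            # soft top-5% weighting

    # Soft Tversky index over the top-k-weighted predictions:
    # alpha > beta penalizes false positives more heavily.
    tp = np.sum(w * p * labels)
    fp = np.sum(w * p * (1 - labels))
    fn = np.sum(w * (1 - p) * labels)
    tversky = tp / (tp + alpha * fp + beta * fn + eps)

    # Entropy regularizer (hypothetical form): rewarding predictive entropy on
    # confidently scored negatives discourages overconfident false positives.
    ent = -(p * np.log(p + eps) + (1 - p) * np.log(1 - p + eps))
    ent_term = -np.mean(ent * (1 - labels) * p)

    # BCE anchor for training stability.
    bce = -np.mean(labels * np.log(p + eps)
                   + (1 - labels) * np.log(1 - p + eps))

    return (1.0 - tversky) + lam_ent * ent_term + lam_bce * bce
```

A well-ranked model (positives scored above negatives) should receive a much lower loss than one with the ranking inverted, which is the behavior the competition metric rewards.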