To address the challenges of insufficient label semantic understanding, vague relationship modeling, and the high computational cost of Large Language Models (LLMs) in zero-shot re-ranking tasks, a hierarchical filtering and label semantic extension method named HFLS (Hierarchical Filtering and Label Semantics) was proposed. In this method, a multi-level label semantic extension path was constructed, and a progressive prompting strategy of "keyword matching → semantic association → domain knowledge integration" was designed to guide LLMs in deep relational reasoning. At the same time, a hierarchical filtering mechanism was introduced to reduce computational complexity while retaining high-potential candidate documents. Experimental results on seven benchmark datasets, including TREC-DL2019, show that HFLS achieves average gains of 21.92%, 13.43%, and 8.59% in NDCG@10 (Normalized Discounted Cumulative Gain at 10) over the Pointwise methods Pointwise.qg, Pointwise.yes_no, and Pointwise.3Label, respectively. In terms of inference efficiency, HFLS reduces per-query processing latency by 91.06%, 68.87%, and 33.54% compared to the Listwise, Pairwise, and Setwise methods, respectively.
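To make the described pipeline concrete, the following is a minimal sketch of how the three-stage progressive prompting and the hierarchical filtering mechanism could be combined; the `llm_score` interface, the prompt template, the stage list, and the `keep_ratio` pruning parameter are illustrative assumptions for exposition, not the authors' actual implementation.

```python
from typing import Callable, List, Tuple

# The three progressive prompting stages named in the abstract.
STAGES = [
    "keyword matching",              # stage 1: surface-level term overlap
    "semantic association",          # stage 2: latent semantic relatedness
    "domain knowledge integration",  # stage 3: reasoning with domain knowledge
]

def build_prompt(stage: str, query: str, doc: str) -> str:
    """Compose a stage-specific relevance prompt (hypothetical template)."""
    return (
        f"Judge the relevance of the document to the query at the level of {stage}.\n"
        f"Query: {query}\nDocument: {doc}\nRelevance score (0-1):"
    )

def hfls_rerank(
    query: str,
    docs: List[str],
    llm_score: Callable[[str], float],  # assumed LLM scoring interface
    keep_ratio: float = 0.5,            # fraction of candidates kept per stage (assumption)
) -> List[Tuple[str, float]]:
    """Hierarchical filtering: each stage re-scores survivors and prunes the rest,
    so the more expensive, deeper reasoning stages see fewer candidates."""
    candidates: List[Tuple[str, float]] = [(d, 0.0) for d in docs]
    for stage in STAGES:
        scored = [(d, llm_score(build_prompt(stage, query, d))) for d, _ in candidates]
        scored.sort(key=lambda x: x[1], reverse=True)
        keep = max(1, int(len(scored) * keep_ratio))
        candidates = scored[:keep]  # only high-potential documents reach the next stage
    return candidates  # final ranking from the deepest stage
```

Because each stage discards a fixed fraction of candidates, the number of LLM calls decays geometrically across stages, which is one plausible way the method keeps per-query latency below that of Listwise, Pairwise, and Setwise re-ranking while still applying deeper reasoning to the most promising documents.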