Natural Language Processing and Chinese Computing : 14th National CCF Conference, NLPCC 2025, Urumqi, China, August 7-9, 2025, Proceedings, Part I (Lecture Notes in Computer Science)


  • This title is currently available for pre-order and will ship after publication.
  • Binding: Paperback / 577 p.
  • Language: English
  • Product code: 9789819533428

Full Description

The four-volume set LNAI 16102-16105 constitutes the refereed proceedings of the 14th CCF National Conference on Natural Language Processing and Chinese Computing, NLPCC 2025, held in Urumqi, China, during August 7-9, 2025.

The 152 full papers and 26 evaluation workshop papers included in these proceedings were carefully reviewed and selected from 505 submissions. They are organized in the following topical sections:

Part I: Information Extraction and Knowledge Graph & Large Language Models and Agents.
Part II: Multimodality and Explainability & NLP Applications / Text Mining.
Part III: IR / Dialogue Systems / Question Answering; Machine Translation and Multilinguality & Sentiment Analysis / Argumentation Mining / Social Media.
Part IV: Machine Learning for NLP; Fundamentals of NLP; Summarization and Generation; Others & Evaluation Workshop.

Contents

.- Information Extraction and Knowledge Graph.
.- Progressive Training of Transformer for Knowledge Graph Completion Tasks.
.- Document-level Event Coreference Resolution on Trigger Augmentation and Contrastive Learning.
.- Dynamic Chain-of-thought for Low-Resource Event Extraction.
.- On Sentence-level Non-adversarial Robustness of Chinese Named Entity Recognition with Large Language Models.
.- Spatial Relation Classification on Supervised In-Context Learning.
.- HGNN2KAN: Distilling hypergraph neural networks into KAN for efficient inference.
.- Adapting Task-General ORE Systems for Extracting Open Relations between Fictional Characters in Chinese Novels.
.- DRLF: Denoiser-Reinforcement Learning Framework for Entity Completion.
.- Fashion-related Attribute Value Extraction with Visual Prompting.
.- Discovering Latent Relationship for Temporal Knowledge Graph Reasoning.
.- Logical Rule-Constrained Large Language Models for Document-Level Relation Extraction.
.- An Adaptive Semantic-Aware Fusion Method for Multimodal Entity Linking.
.- Retrieve, Interaction, Fusion: a Simple Approach in Ancient Chinese Named Entity Recognition.
.- Reasoning-Guided Prompt Learning with Historical Knowledge Injection for Ancient Chinese Relation Extraction.
.- MMD-TKGR: Multi-Agent Multi-Round Debate for Temporal Knowledge Graph Reasoning.
.- AutoPRE: Discovering Concept Prerequisites with LLM Agents.
.- Weakly-Supervised Generative Framework for Product Attribute Identification in Live-Streaming E-Commerce.
.- Exploring Representation-Efficient Transfer Learning Approaches for Speech Recognition and Translation Using Pre-trained Speech Models.
.- A Neighborhood Aggregation-based Knowledge Graph Reasoning Approach in Operations and Maintenance.
.- CARE: Contextual Augmentation with Retrieval Enhancement for Relation Extraction in Large Language Models.
.- RHDG: Retrieval-augmented Heuristics-driven Demonstration Generation for Document-Level Event Argument Extraction.
.- Large Language Models and Agents.
.- Beyond One-Size-Fits-All: Adaptive Fine-Tuning for LLMs Based on Data Inherent Heterogeneity.
.- From Chain to Loop: Improving Reasoning Capability in Small Language Models via Loop-of-Thought.
.- TaxBen: Benchmarking the Chinese Tax Knowledge of Large Language Models.
.- Propagandistic Meme Detection via Large Language Model Distillation.
.- Multi-Candidate Speculative Decoding.
.- Debate-Driven Legal Reasoning: Disambiguating Confusing Charges through Multi-Agent Debate.
.- A Human-Centered AI Agent Framework with Large Language Models for Academic Research Tasks.
.- ReGA: Reasoning and Grounding Decoupled GUI Navigation Agents.
.- PSYCHE: Practical Synthetic Math Data Evolution.
.- MultiJustice: A Chinese Dataset for Multi-Party, Multi-Charge Legal Prediction.
.- Reward-Guided Many-Shot Jailbreaking.
.- Self-Prompt Tuning: Enable Autonomous Role-Playing in LLMs.
.- RASR: A Multi-Perspective RAG-based Strategy for Semantic Textual Similarity.
.- H2HTALK: Evaluating Large Language Models as Emotional Companion.
.- EvoP: Robust LLM Inference via Evolutionary Pruning.
.- Large Language Model based Multi-Agent Learning for Mixed Cooperative-Competitive Environments.
.- EduMate: LLM-Powered Detection of Student Learning Emotions and Efficacy in Semi-Structured Counseling.
.- MAD-HD: Multi-Agent Debate-Driven Ungrounded Hallucination Detection.
.- TIANWEN: A Comprehensive Benchmark for Evaluating LLMs in Chinese Classical Poetry Understanding and Reasoning.
.- RKE-Coder: A LLMs-based Code Generation Framework with Algorithmic and Code Knowledge Integration.
.- See Better, Say Better: Vision-Augmented Decoding for Mitigating Hallucinations in Large Vision-Language Models.
.- Exploring Large Language Models for Grammar Error Explanation and Correction in Indonesian as a Low-Resource Language.
.- Libra: Large Chinese-based Safeguard for AI Content.
.- FADERec: Fine-grained Attribute Distillation Enhanced by Collaborative Fusion for LLM-based Recommendation.
.- Improving RL Exploration for LLM Reasoning through Retrospective Replay.