Computer Vision - ECCV 2022 : 17th European Conference, Tel Aviv, Israel, October 23-27, 2022, Proceedings, Part XXXV (Lecture Notes in Computer Science)

  • In stock at our partner overseas book distributor. Usually ships in about 3 weeks.
    Important notes
    1. Delivery may occasionally be delayed, or the item may become unavailable.
    2. For orders of multiple copies, all copies are shipped together once the full quantity has arrived.
    3. We cannot accept requests for copies in mint condition.

  • [Note on arrival delays]
    Due to global conditions, foreign books (new and used) ordered from overseas may arrive later than the standard delivery times shown.
    We apologize for the inconvenience and ask for your understanding in advance.
  • ◆ The cover, obi band, and other details shown in the image may differ from the actual item.
  • ◆ Web store prices for foreign books differ from the prices at our physical stores.
    Foreign-book prices are fixed in Japanese yen at the time the order is confirmed.
    Any later change in the selling price of the same book will not be applied.
  • Binding: Paperback / 745 p.
  • Product code: 9783031198328

Full Description

The 39-volume set, comprising LNCS volumes 13661 through 13699, constitutes the refereed proceedings of the 17th European Conference on Computer Vision, ECCV 2022, held in Tel Aviv, Israel, during October 23-27, 2022.

The 1645 papers presented in these proceedings were carefully reviewed and selected from a total of 5804 submissions. The papers deal with topics such as computer vision; machine learning; deep neural networks; reinforcement learning; object recognition; image classification; image processing; object detection; semantic segmentation; human pose estimation; 3D reconstruction; stereo vision; computational photography; neural networks; image coding; image reconstruction; and motion estimation.

Contents

  • Efficient One-Stage Video Object Detection by Exploiting Temporal Consistency
  • Leveraging Action Affinity and Continuity for Semi-Supervised Temporal Action Segmentation
  • Spotting Temporally Precise, Fine-Grained Events in Video
  • Unified Fully and Timestamp Supervised Temporal Action Segmentation via Sequence to Sequence Translation
  • Efficient Video Transformers with Spatial-Temporal Token Selection
  • Long Movie Clip Classification with State-Space Video Models
  • Prompting Visual-Language Models for Efficient Video Understanding
  • Asymmetric Relation Consistency Reasoning for Video Relation Grounding
  • Self-Supervised Social Relation Representation for Human Group Detection
  • K-Centered Patch Sampling for Efficient Video Recognition
  • A Deep Moving-Camera Background Model
  • GraphVid: It Only Takes a Few Nodes to Understand a Video
  • Delta Distillation for Efficient Video Processing
  • MorphMLP: An Efficient MLP-Like Backbone for Spatial-Temporal Representation Learning
  • COMPOSER: Compositional Reasoning of Group Activity in Videos with Keypoint-Only Modality
  • E-NeRV: Expedite Neural Video Representation with Disentangled Spatial-Temporal Context
  • TDViT: Temporal Dilated Video Transformer for Dense Video Tasks
  • Semi-Supervised Learning of Optical Flow by Flow Supervisor
  • Flow Graph to Video Grounding for Weakly-Supervised Multi-step Localization
  • Deep 360° Optical Flow Estimation Based on Multi-Projection Fusion
  • MaCLR: Motion-Aware Contrastive Learning of Representations for Videos
  • Learning Long-Term Spatial-Temporal Graphs for Active Speaker Detection
  • Frozen CLIP Models Are Efficient Video Learners
  • PIP: Physical Interaction Prediction via Mental Simulation with Span Selection
  • Panoramic Vision Transformer for Saliency Detection in 360° Videos
  • Bayesian Tracking of Video Graphs Using Joint Kalman Smoothing and Registration
  • Motion Sensitive Contrastive Learning for Self-Supervised Video Representation
  • Dynamic Temporal Filtering in Video Models
  • Tip-Adapter: Training-Free Adaption of CLIP for Few-Shot Classification
  • Temporal Lift Pooling for Continuous Sign Language Recognition
  • MORE: Multi-Order RElation Mining for Dense Captioning in 3D Scenes
  • SiRi: A Simple Selective Retraining Mechanism for Transformer-Based Visual Grounding
  • Cross-Modal Prototype Driven Network for Radiology Report Generation
  • TM2T: Stochastic and Tokenized Modeling for the Reciprocal Generation of 3D Human Motions and Texts
  • SeqTR: A Simple Yet Universal Network for Visual Grounding
  • VTC: Improving Video-Text Retrieval with User Comments
  • FashionViL: Fashion-Focused Vision-and-Language Representation Learning
  • Weakly Supervised Grounding for VQA in Vision-Language Transformers
  • Automatic Dense Annotation of Large-Vocabulary Sign Language Videos
  • MILES: Visual BERT Pre-training with Injected Language Semantics for Video-Text Retrieval
  • GEB+: A Benchmark for Generic Event Boundary Captioning, Grounding and Retrieval
  • A Simple and Robust Correlation Filtering Method for Text-Based Person Search
