AI for Cybersecurity: Research and Practice

  • E-book edition available (e-book price: ¥19,184)

  • Web store price: ¥30,160 (¥27,419 before tax)
  • Wiley-IEEE Press (published January 2026)
  • List price (foreign currency): US$145.00
  • Binding: Hardcover / 656 p.
  • Language: ENG
  • Product code: 9781394293742
  • DDC classification: 005.8

Full Description

Informative reference on the state of the art in cybersecurity and how to achieve a more secure cyberspace

AI for Cybersecurity presents the state of the art and practice in AI for cybersecurity, focusing on four interrelated defensive capabilities: deter, protect, detect, and respond. The book examines the fundamentals of AI for cybersecurity as a multidisciplinary subject, describes how to design, build, and operate AI technologies and strategies to achieve a more secure cyberspace, and provides the why, what, and how of each AI technique-cybersecurity task pair so that researchers and practitioners can contribute to the field.

This book is aligned with the National Science and Technology Council's (NSTC) 2023 Federal Cybersecurity Research and Development Strategic Plan (RDSP) and President Biden's Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Learning objectives and 200 illustrations are included throughout the text.

Written by a team of highly qualified experts in the field, AI for Cybersecurity discusses topics including:

  • Robustness and risks of the methods covered, including adversarial ML threats in model training, deployment, and reuse
  • Privacy risks including model inversion, membership inference, attribute inference, re-identification, and deanonymization
  • Forensic and formal methods for analyzing, auditing, and verifying security- and privacy-related aspects of AI components
  • Use of generative AI systems for improving security and the risks of generative AI systems to security
  • Transparency and interpretability/explainability of models and algorithms and associated issues of fairness and bias

AI for Cybersecurity is an excellent reference for practitioners applying AI to cybersecurity in industries such as commerce, education, energy, financial services, healthcare, manufacturing, and defense. Fourth-year undergraduates and postgraduates in computer science and related programs of study will also find it valuable.

Contents

List of Contributors xix

Foreword xxvii

About the Editors xxxi

Preface xxxv

Acknowledgments xxxvii

1 LLMs Are Not Few-shot Threat Hunters 1
Glenn A. Fink, Luiz M. Pereira, and Christian W. Stauffer

1.1 Overview 1

1.1.1 AI Is Not Magic 1

1.1.2 Inherent Difficulty of Human Tasks in Cybersecurity and Threat Hunting 3

1.2 Large Language Models 4

1.2.1 Background 4

1.2.2 Transformers 4

1.2.3 Pretraining and Fine-tuning 9

1.2.4 General Limitations 9

1.3 Threat Hunters 12

1.3.1 Introduction to Threat Hunting 12

1.3.2 The Dimensions of Threat Hunting 13

1.3.3 The Approaches to Threat Hunting 15

1.3.4 The Process of Threat Hunting 16

1.3.5 Challenges to Modern Threat Hunting 17

1.4 Capabilities and Limitations of LLMs in Cybersecurity 18

1.4.1 General Limitations of LLMs for Cybersecurity 18

1.4.2 General Capabilities of LLMs Useful for Cybersecurity 20

1.4.3 Applications of LLMs in Cybersecurity 22

1.5 Conclusion: Reimagining LLMs as Assistant Threat Hunter 24

References 27

2 LLMs in Support of Privacy and Security of Mobile Apps: State-of-the-art and Research Directions 29
Tran Thanh Lam Nguyen, Barbara Carminati, and Elena Ferrari

2.1 Introduction 29

2.2 Background on LLMs 32

2.2.1 Large Language Models 32

2.2.2 FSL and RAG 39

2.3 Mobile Apps: Main Security and Privacy Threats 43

2.4 LLM-based Solutions: State-of-the-art 47

2.4.1 Vulnerabilities Detection 48

2.4.2 Bug Detection and Reproduction 50

2.4.3 Malware Detection 52

2.5 An LLMs-based Approach for Mitigating Image Metadata Leakage Risks 53

2.6 Research Challenges 57

2.7 Conclusion 60

Acknowledgment 61

References 61

3 Machine Learning-based Intrusion Detection Systems: Capabilities, Methodologies, and Open Research Challenges 67
Chaoyu Zhang, Ning Wang, Y. Thomas Hou, and Wenjing Lou

3.1 Introduction 67

3.2 Basic Concepts and ML for Intrusion Detection 69

3.2.1 Fundamental Concepts 69

3.2.2 ML Algorithms for Intrusion Detection 70

3.2.3 Taxonomy of IDSs 72

3.2.4 Evaluation Metrics and Datasets 73

3.3 Capability I: Zero-day Attack Detection with ML 75

3.3.1 Understanding Zero-day Attacks and Their Impact 75

3.3.2 General Workflow of ML-IDS for Identifying Zero-day Attacks 75

3.3.3 Anomaly Detection Mechanisms 76

3.3.4 Open Research Challenges 77

3.4 Capability II: Intrusion Explainability Through XAI 79

3.4.1 Enhancing Transparency and Trust in Intrusion Detection 79

3.4.2 General Workflow of XAI 80

3.4.3 XAI Methods for IDS Transparency Enhancement 80

3.4.4 Open Research Challenges 83

3.5 Capability III: Intrusion Detection in Encrypted Traffic 84

3.5.1 Challenges in Intrusion Detection for Encrypted Traffic 84

3.5.2 Workflow of ML-IDS for Encrypted Traffic 84

3.5.3 ML-based Solutions for Encrypted Traffic Analysis 84

3.5.4 Open Research Challenges 87

3.6 Capability IV: Context-aware Threat Detection and Reasoning with GNNs 88

3.6.1 Introduction to GNNs in IDS 88

3.6.2 Workflow of GNNs for Intrusion Detection 88

3.6.3 Provenance-based Intrusion Detection by GNNs 89

3.6.4 Open Research Challenges 92

3.7 Capability V: LLMs for Intrusion Detection and Understanding 93

3.7.1 The Role of LLMs in Cybersecurity 93

3.7.2 Leveraging LLMs for Intrusion Detection 94

3.7.3 A Review of LLM-based IDS 94

3.7.4 Open Research Challenges 97

3.8 Summary 97

References 98

4 Generative AI for Advanced Cyber Defense 109
Moqsadur Rahman, Aaron Sanchez, Krish Piryani, Siddhartha Das, Sai Munikoti, Luis de la Torre Quintana, Monowar Hasan, Joseph Aguayo, Monika Akbar, Shahriar Hossain, and Mahantesh Halappanavar

4.1 Introduction 109

4.2 Motivation and Related Work 111

4.2.1 AI-supported Vulnerability Management 112

4.3 Foundations for Cyber Defense 114

4.3.1 Mapping Vulnerabilities, Weaknesses, and Attack Patterns Using LLMs 115

4.4 Retrieval-augmented Generation 117

4.5 KG and Querying 118

4.5.1 Graph Schema 119

4.5.2 Neo4j KG Implementation 122

4.5.3 Cypher Queries 123

4.6 Evaluation and Results 126

4.6.1 RAG-based Response Generation 127

4.6.2 CWE Predictions Using RAG 131

4.6.3 CWE Predictions Using GPT-4o 136

4.7 Conclusion 142

References 142

5 Enhancing Threat Detection and Response with Generative AI and Blockchain 147
Driss El Majdoubi, Souad Sadki, Zakia El Uahhabi, and Mohamed Essaidi

5.1 Introduction 147

5.2 Cybersecurity Current Issues: Background 148

5.3 Blockchain Technology for Cybersecurity 150

5.3.1 Blockchain Benefits for Cybersecurity 150

5.3.2 Existing Blockchain-based Cybersecurity Solutions 153

5.4 Combining Generative AI and Blockchain for Cybersecurity 156

5.4.1 Integration of Generative AI and Blockchain 160

5.4.2 Understanding Capabilities and Risks 160

5.4.3 Practical Benefits for Cybersecurity 161

5.4.4 Limitations and Open Research Issues 161

5.5 Conclusion 162

References 163

6 Privacy-preserving Collaborative Machine Learning 169
Runhua Xu and James Joshi

6.1 Introduction 169

6.1.1 Objectives and Structure 171

6.2 Collaborative Learning Overview 172

6.2.1 Definition and Characteristics 172

6.2.2 Related Terminologies 174

6.2.3 Collaborative Decentralized Learning and Collaborative Distributed Learning 175

6.3 Collaborative Learning Paradigms and Privacy Risks 177

6.3.1 Key Collaborative Approaches 177

6.3.2 Privacy Risks in Collaborative Learning 182

6.3.3 Privacy Inference Attacks in Collaborative Learning 183

6.4 Privacy-preserving Technologies 187

6.4.1 The Need for Privacy Preservation 187

6.4.2 Privacy-preserving Technologies 188

6.5 Conclusion 195

References 196

7 Security and Privacy in Federated Learning 203
Zhuosheng Zhang and Shucheng Yu

7.1 Introduction 203

7.1.1 Federated Learning 203

7.1.2 Privacy Threats in FL 205

7.1.3 Security Issues in FL 207

7.1.4 Characterize FL 211

7.2 Privacy-preserving FL 215

7.2.1 Secure Multiparty Computation 215

7.2.2 Trusted Execution Environments 216

7.2.3 Secure Aggregation 217

7.2.4 Differential Privacy 218

7.3 Enhance Security in FL 219

7.3.1 Data-poisoning Attack and Nonadaptive Model-poisoning Attack 220

7.3.2 Model-poisoning Attack 222

7.4 Secure Privacy-preserving FL 225

7.4.1 Enhancing Security in FL with DP 225

7.4.2 Verifiability in Private FL 226

7.4.3 Security in Private FL 227

7.5 Conclusion 228

References 229

8 Machine Learning Attacks on Signal Characteristics in Wireless Networks 235
Yan Wang, Cong Shi, Yingying Chen, and Zijie Tang

8.1 Introduction 235

8.2 Threat Model and Targeted Models 239

8.2.1 Backdoor Attack Scenarios 239

8.2.2 Attackers' Capability 240

8.2.3 Attackers' Objective 240

8.2.4 Targeted ML Models 241

8.3 Attack Formulation and Challenges 241

8.3.1 Backdoor Attack Formulation 241

8.3.2 Challenges 244

8.4 Poison-label Backdoor Attack 246

8.4.1 Stealthy Trigger Designs 246

8.4.2 Backdoor Trigger Optimization 249

8.5 Clean-label Backdoor Trigger Design 252

8.5.1 Clean-label Backdoor Trigger Optimization 253

8.6 Evaluation 255

8.6.1 Victim ML Model 255

8.6.2 Experimental Methodology 255

8.6.3 RF Backdoor Attack Performance 257

8.6.4 Resistance to Backdoor Defense 259

8.7 Related Work 261

8.8 Conclusion 262

References 263

9 Secure by Design 267
Mehdi Mirakhorli and Kevin E. Greene

9.1 Introduction 267

9.1.1 Definitions and Contexts 268

9.1.2 Core Principles of "Secure by Design" 269

9.1.3 Principle of Compartmentalization and Isolation 273

9.2 A Methodological Approach to Secure by Design 275

9.2.1 Assumption of Breach 275

9.2.2 Misuse and Abuse Cases to Drive Secure by Design 276

9.2.3 Secure by Design Through Architectural Tactics 277

9.2.4 Shifting Software Assurance from Coding Bugs to Design Flaws 282

9.3 AI in Secure by Design: Opportunities and Challenges 283

9.4 Conclusion and Future Directions 284

References 284

10 DDoS Detection in IoT Environments: Deep Packet Inspection and Real-world Applications 289
Nikola Gavric, Guru Bhandari, and Andrii Shalaginov

10.1 Introduction 289

10.2 DDoS Detection Techniques in Research 294

10.2.1 Network-based Intrusion Detection Systems 295

10.2.2 Host-based Intrusion Detection Systems 300

10.3 Limitations of Research Approaches 303

10.4 Industry Practices for DDoS Detection 305

10.5 Challenges in DDoS Detection 309

10.6 Future Directions 311

10.7 Conclusion 313

References 314

11 Data Science for Cybersecurity: A Case Study Focused on DDoS Attacks 317
Michele Nogueira, Ligia F. Borges, and Anderson B. Neira

11.1 Introduction 317

11.2 Background 319

11.2.1 Cybersecurity 320

11.2.2 Data Science 326

11.3 State of the Art 333

11.3.1 Data Acquisition 334

11.3.2 Data Preparation 335

11.3.3 Feature Preprocessing 336

11.3.4 Data Visualization 337

11.3.5 Data Analysis 338

11.3.6 ML in Cybersecurity 339

11.4 Challenges and Opportunities 340

11.5 Conclusion 341

Acknowledgments 342

References 342

12 AI Implications for Cybersecurity Education and Future Explorations 347
Elizabeth Hawthorne, Mihaela Sabin, and Melissa Dark

12.1 Introduction 347

12.2 Postsecondary Cybersecurity Education: Historical Perspective and Current Initiatives 348

12.2.1 ACM Computing Curricula 348

12.2.2 National Centers for Academic Excellence in Cybersecurity 356

12.2.3 ABET Criteria 359

12.3 Cybersecurity Policy in Secondary Education 361

12.3.1 US High School Landscape 362

12.4 Conclusion 367

12.5 Future Explorations 368

References 368

13 Ethical AI in Cybersecurity: Quantum-resistant Architectures and Decentralized Optimization Strategies 371
Andreas Andreou, Constandinos X. Mavromoustakis, Houbing Song, and Jordi Mongay Batalla

13.1 Introduction 371

13.1.1 Motivation 372

13.1.2 Contribution 373

13.1.3 Novelty 373

13.2 Literature Review 373

13.3 Overview and Ethical Considerations in AI-centric Cybersecurity 374

13.4 AML and Privacy Risks in AI Systems 378

13.5 Forensic and Formal Methods for AI Security 380

13.5.1 Auditing Tools for Security and Privacy 383

13.5.2 Transparency, Interpretability, and Trust 383

13.5.3 Building Secure and Trustworthy AI Systems 384

13.6 Generative AI and Quantum-resistant Architectures in Cybersecurity 385

13.6.1 Opportunities and Risks 385

13.6.2 Threats and Countermeasures 386

13.6.3 Strategies for Resilience 387

13.7 Future Directions and Ethical Considerations 387

13.8 Conclusion 390

References 391

14 Security Threats and Defenses in AI-enabled Object Tracking Systems 397
Mengjie Jia, Yanyan Li, and Jiawei Yuan

14.1 Introduction 397

14.2 Related Works 398

14.2.1 UAV Object Tracking 398

14.2.2 Adversarial Tracking Attacks 399

14.2.3 Robustness Enhancement Against Attacks 400

14.3 Methods 401

14.3.1 Model Architecture 403

14.3.2 Decision Loss 403

14.3.3 Feature Loss 404

14.3.4 L2 Norm Loss 405

14.4 Evaluation 405

14.4.1 Experiment Setup 405

14.4.2 Evaluation Metrics 405

14.4.3 Results 406

14.4.4 Tracking Examples 409

14.5 Conclusion 413

Acknowledgment 413

References 413

15 AI for Android Malware Detection and Classification 419
Safayat Bin Hakim, Muhammad Adil, Kamal Acharya, and Houbing Herbert Song

15.1 Introduction 419

15.1.1 Security Threats in Android Applications 420

15.1.2 Challenges in Android Malware Detection 422

15.1.3 Current Approaches and Limitations 423

15.2 Design of the Proposed Framework 424

15.2.1 Core Components and Architecture 424

15.2.2 Feature Extraction with Attention Mechanism 425

15.2.3 Feature Extraction with Attention Mechanism 425

15.2.4 Dimensionality Reduction and Optimization 427

15.2.5 Classification Using SVMs 427

15.3 Implementation and Dataset Overview 428

15.3.1 Dataset Insights 428

15.3.2 Preprocessing Strategies 429

15.3.3 Handling Class Imbalance 429

15.3.4 Adversarial Training and Evaluation 429

15.4 Results and Insights 431

15.4.1 Experimental Setup 431

15.4.2 Performance Analysis 435

15.4.3 Performance Insights with Visualization 436

15.4.4 Benchmarking Against Existing Methods 438

15.4.5 Key Insights 439

15.5 Feature Importance Analysis 439

15.5.1 Top Feature Importance 439

15.5.2 Feature Impact Analysis Using SHAP Values 441

15.5.3 Global Feature Impact Distribution 442

15.6 Comparative Analysis and Advancements over Existing Methods 442

15.6.1 Feature Space Optimization 444

15.6.2 Advances in Adversarial Robustness 445

15.6.3 Performance Improvements 445

15.6.4 Summary of Key Advancements 445

15.7 Discussion 446

15.7.1 Limitations and Future Work 446

15.8 Conclusion 447

References 447

16 Cyber-AI Supply Chain Vulnerabilities 451
Joanna C. S. Santos

16.1 Introduction 451

16.2 AI/ML Supply Chain Attacks via Untrusted Model Deserialization 452

16.2.1 Model Deserialization 453

16.2.2 AI/ML Attack Scenarios 457

16.3 The State-of-the-art of the AI/ML Supply Chain 458

16.3.1 Commonly Used Serialization Formats 458

16.3.2 Deliberately Malicious Models Published on Hugging Face 460

16.3.3 Developers' Perception on Safetensors 462

16.4 Conclusion 466

16.4.1 Implications for Research 466

16.4.2 Implications for Practitioners 467

References 467

17 AI-powered Physical Layer Security in Industrial Wireless Networks 471
Hong Wen, Qi Wang, and Zhibo Pang

17.1 Introduction 471

17.2 Radio Frequency Fingerprint Identification 474

17.2.1 System Model 474

17.2.2 Cross-device RFFI 476

17.2.3 Experimental Investigation 480

17.3 CSI-based PLA 481

17.3.1 System Model 482

17.3.2 Transfer Learning-based PLA 484

17.3.3 Data Augmentation 488

17.3.4 Experimental Investigation 490

17.4 PLK Distribution 493

17.4.1 System Model 493

17.4.2 AI-powered Quantization 495

17.5 Physical Layer Security Enhanced ZT Security Framework 498

17.5.1 ZT Requirements in IIoT 499

17.5.2 PLS Enhanced ZT Security Framework 500

References 502

18 The Security of Reinforcement Learning Systems in Electric Grid Domain 505
Suman Rath, Zain ul Abdeen, Olivera Kotevska, Viktor Reshniak, and Vivek Kumar Singh

18.1 Introduction 505

18.2 RL for Control 506

18.2.1 Overview of RL Algorithms 506

18.2.2 DQN Algorithm 510

18.3 Case Study: RL for Control in Cyber-physical Microgrids 513

18.4 Related Work: Grid Applications of RL 516

18.5 Open Challenges and Solutions 518

18.6 Conclusion 522

Acknowledgments 524

References 524

19 Geopolitical Dimensions of AI in Cybersecurity: The Emerging Battleground 533
Felix Staicu and Mihai Barloiu

19.1 Introduction 533

19.1.1 A Conceptual Framework 534

19.2 Foundations of AI in Geopolitics: From Military Origins to Emerging Strategic Trajectories 536

19.2.1 Historical Foundations: The Military and Intelligence Roots of Key Technologies 536

19.2.2 Early International Debates on AI Governance and Their Geopolitical Dimensions 537

19.2.3 The Two-way Influence Between AI and Geopolitics: Early Signals of Strategic Catalysts and Normative Vectors 538

19.3 The Contemporary Battleground: AI as a Strategic Variable 540

19.3.1 AI-infused IO: Precision, Persistence, and Policy Dilemmas 540

19.3.2 Fusion Technologies for Battlefield Control, Unmanned Vehicles, and AI Swarming 542

19.3.3 Regulatory Power as Soft Power: Competing Models for Global AI Norms 543

19.3.4 Global Rivalries: The US-China AI Race and the Fragmenting Digital Ecosystem 545

19.4 Beyond Today's Conflicts: Future Horizons in AI-driven Security 548

19.4.1 2050 Hypothesis-driven Scenarios in the International System 548

19.4.2 AI in the Nuclear Quartet 551

19.4.3 AI in Kinetic Conventional Military Capabilities 553

19.4.4 AI in Cybersecurity and Information Warfare 554

19.4.5 A Holistic View of AI's Impact on International Security 556

19.5 Conclusions and Recommendations 558

19.5.1 Integrative Insights 558

19.6 Conclusion 560

Acknowledgments 561

References 561

20 Robust AI Techniques to Support High-consequence Applications in the Cyber Age 567
Joel Brogan, Linsey Passarella, Mark Adam, Birdy Phathanapirom, Nathan Martindale, Jordan Stomps, Olivera Kotevska, Matthew Yohe, Ryan Tokola, Ryan Kerekes, and Scott Stewart

20.1 Introduction 567

20.2 Motivation 568

20.3 Explainability Measures for Deep Learning in High-consequence Scenarios 570

20.3.1 Gradient-based Methods 571

20.3.2 Perturbation-based Methods 572

20.3.3 Comparisons Between Explainability Methods 572

20.4 Improving Confidence and Robustness Measures for Deep Learning in Critical Decision-making Scenarios 573

20.4.1 Introduction 573

20.4.2 Dataset Description 574

20.4.3 Methodology 575

20.4.4 Attribution Algorithms 576

20.4.5 Confidence Measure Algorithms 576

20.4.6 Results and Analysis 581

20.4.7 Discussion and Future Work 581

20.5 Building Robust AI Through SME Knowledge Embeddings 583

20.5.1 Explicit Knowledge in Structured Formats 586

20.5.2 Fine-tuning and Evaluating Foundation Models 587

20.6 Flight-path Vocabularies for Foundation Model Training 588

20.6.1 Introduction 588

20.6.2 Dataset 589

20.6.3 Methodology 590

20.6.4 Results and Discussion 591

20.7 Promise and Peril of Foundation Models in High-consequence Scenarios 592

20.7.1 Adversarial Vulnerabilities of Foundation Models 593

20.7.2 Privacy Violation Vulnerabilities in Foundation Models 594

20.7.3 Alignment Hazards When Training Foundation Models 594

20.7.4 Performance Hazards When Inferring and Generating with Foundation Models 595

20.8 Discussion 596

Acknowledgments 596

References 596

Index 601
