Posts

Showing posts with the label Deep Reinforcement Learning

2025-01-27: LLM Driven Behavioral Analysis for Adaptive Intrusion Detection in IoT Networks - Funded by CCI

I am delighted to have received a $100,000 Commonwealth Cyber Initiative grant to support our collaborative proposal, "Adaptive Intrusion Detection in IoT Networks Using LLM-Driven Behavioral Analysis and Deep Reinforcement Learning," beginning in January 2025. This is collaborative work with Dr. Neda Moghim and Virginia Tech.

Figure 1: Project Plan and Tasks

This research project explores the integration of Deep Reinforcement Learning (DRL), Large Language Models (LLMs), neuro-symbolic AI, and wireless networking to create adaptive intrusion detection systems for Internet of Things (IoT) networks. The central research question focuses on developing resilient IoT systems capable of recovering swiftly from cyberattacks without degrading the user experience. To address this, the project introduces several key innovations. First, an adaptive prompt-generation system is proposed that uses DRL to optimize LLM queries in real time by tracking the evolving nature of cy...
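The excerpt above doesn't include implementation details, but as a rough illustration of what DRL-driven prompt selection could look like, here is a minimal sketch in which a tabular Q-learning agent chooses among a few hypothetical prompt templates based on a coarse network state. The templates, state labels, and the `detection_reward` feedback signal are all placeholders for illustration, not the project's actual design.

```python
import random
from collections import defaultdict

# Hypothetical prompt templates the agent can choose between; the actual
# templates and state features used in the project are not described here.
PROMPT_TEMPLATES = [
    "Summarize anomalous traffic for device {device_id} in the last hour.",
    "List protocol violations observed on {device_id} and rate their severity.",
    "Compare {device_id}'s current behavior against its 7-day baseline.",
]

class PromptSelectionAgent:
    """Toy epsilon-greedy Q-learning agent that picks an LLM prompt template
    given a coarse network state (e.g., 'normal', 'scanning', 'flooding')."""

    def __init__(self, n_actions, epsilon=0.1, alpha=0.5, gamma=0.9):
        self.q = defaultdict(lambda: [0.0] * n_actions)
        self.n_actions = n_actions
        self.epsilon, self.alpha, self.gamma = epsilon, alpha, gamma

    def act(self, state):
        if random.random() < self.epsilon:
            return random.randrange(self.n_actions)
        values = self.q[state]
        return values.index(max(values))

    def update(self, state, action, reward, next_state):
        # Standard one-step Q-learning update.
        target = reward + self.gamma * max(self.q[next_state])
        self.q[state][action] += self.alpha * (target - self.q[state][action])

def detection_reward(state, action):
    """Placeholder reward: in a real system this would come from IDS feedback,
    e.g., detection accuracy or analyst ratings of the LLM's output."""
    best = hash(state) % len(PROMPT_TEMPLATES)
    return random.gauss(1.0 if action == best else 0.0, 0.1)

agent = PromptSelectionAgent(n_actions=len(PROMPT_TEMPLATES))
state = "normal"
for step in range(500):
    action = agent.act(state)
    reward = detection_reward(state, action)
    next_state = random.choice(["normal", "scanning", "flooding"])
    agent.update(state, action, reward, next_state)
    state = next_state
```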

2023-08-11: Paper Summary: "Mastering Diverse Domains through World Models"

Figure 2 (Hafner et al.): The authors consider four visual domains, including robot locomotion and manipulation tasks, Atari games with 2D graphics, DMLab, and Minecraft. DreamerV3 succeeds in all of these diverse domains, demonstrating its ability to handle spatial and temporal reasoning challenges.

In my last post, Paper Summary: "Beyond Classifiers: Remote Sensing Change Detection with Metric Learning" (Zhang et al.), I reviewed methods to detect discrete changes in temporal visual data. But what if we're concerned with the fidelity of simulated or generative data versus the real world? In my work at NASA, I study machine learning methods for training autonomous systems in simulation. One of the biggest problems with this research direction is the simulation-to-reality (sim-to-real) problem, where training in simulation can result in relatively high uncertainty due to differences between the simulated representation of the environm...
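The full post walks through DreamerV3 itself; as a toy illustration of the underlying "learning in imagination" idea, the sketch below fits a one-step dynamics model on real CartPole transitions and then rolls it forward without touching the simulator. This is only a stand-in under simplifying assumptions: DreamerV3 actually uses a recurrent latent state-space model and trains an actor-critic on imagined latent rollouts.

```python
import gymnasium as gym
import torch
import torch.nn as nn

env = gym.make("CartPole-v1")
obs_dim = env.observation_space.shape[0]   # 4 state variables
n_actions = env.action_space.n             # 2 discrete actions

# One-step dynamics model: (obs, one-hot action) -> (next_obs, reward).
dynamics = nn.Sequential(
    nn.Linear(obs_dim + n_actions, 128), nn.ReLU(),
    nn.Linear(128, obs_dim + 1),
)
opt = torch.optim.Adam(dynamics.parameters(), lr=1e-3)

# Collect transitions from the real environment with a random policy.
obs_buf, act_buf, tgt_buf = [], [], []
obs, _ = env.reset(seed=0)
for _ in range(2000):
    action = env.action_space.sample()
    next_obs, reward, terminated, truncated, _ = env.step(action)
    one_hot = torch.zeros(n_actions)
    one_hot[action] = 1.0
    obs_buf.append(torch.as_tensor(obs, dtype=torch.float32))
    act_buf.append(one_hot)
    tgt_buf.append(torch.as_tensor([*next_obs, reward], dtype=torch.float32))
    obs = next_obs
    if terminated or truncated:
        obs, _ = env.reset()

inputs = torch.cat([torch.stack(obs_buf), torch.stack(act_buf)], dim=1)
targets = torch.stack(tgt_buf)

# Fit the model by plain regression on the collected batch.
for step in range(500):
    loss = nn.functional.mse_loss(dynamics(inputs), targets)
    opt.zero_grad()
    loss.backward()
    opt.step()

# "Imagination": roll the learned model forward without the simulator.
state = torch.as_tensor(env.reset(seed=1)[0], dtype=torch.float32)
imagined_return = 0.0
for _ in range(15):
    a = torch.zeros(n_actions)
    a[env.action_space.sample()] = 1.0
    pred = dynamics(torch.cat([state, a]))
    state = pred[:-1].detach()
    imagined_return += pred[-1].item()
print(f"15-step imagined return under a random policy: {imagined_return:.2f}")
```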

2022-02-16: Trust Management in Multi-Agent Systems via Deep Reinforcement Learning

VANET system modeled in Zhang et al.

I discovered Deep Reinforcement Learning (DRL) sometime around the end of 2014, when I was taking Dr. Charles Isbell's and Dr. Michael Littman's Machine Learning course during my master's degree. One of the projects (in the Reinforcement Learning (RL) section of the course) was to code up a Deep Q-Network to play Lunar Lander (a classic tutorial nowadays). This is the project that cemented my focus on artificial intelligence and machine learning and led to my current career. It was right around when my daughter was born and all concept of time management disappeared from my life, and, I mean, what better hobby for a gamer computer scientist with no free time to game than one where you teach the computer to play for you? So I feel it's natural to look for approaches that couple my area of professional expertise with my interests. In my previous post, Evaluating Trust in User-Data Networks...
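For readers who haven't done that assignment, here is a minimal, modern reconstruction of the idea: a small DQN with a replay buffer and a target network trained on Gymnasium's Lunar Lander environment. The hyperparameters are illustrative rather than tuned, this is not the course's original assignment code, and the environment id may be "LunarLander-v3" on newer Gymnasium releases (the Box2D extra is required either way).

```python
import random
from collections import deque

import gymnasium as gym
import numpy as np
import torch
import torch.nn as nn

# Requires: pip install "gymnasium[box2d]"; swap in "CartPole-v1" if Box2D is absent.
env = gym.make("LunarLander-v2")
obs_dim = env.observation_space.shape[0]
n_actions = env.action_space.n

def make_net():
    return nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                         nn.Linear(128, 128), nn.ReLU(),
                         nn.Linear(128, n_actions))

q_net, target_net = make_net(), make_net()
target_net.load_state_dict(q_net.state_dict())
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=50_000)
gamma, batch_size, epsilon = 0.99, 64, 1.0

obs, _ = env.reset(seed=0)
for step in range(50_000):
    # Epsilon-greedy action selection from the online network.
    if random.random() < epsilon:
        action = env.action_space.sample()
    else:
        with torch.no_grad():
            action = q_net(torch.as_tensor(obs, dtype=torch.float32)).argmax().item()
    next_obs, reward, terminated, truncated, _ = env.step(action)
    replay.append((obs, action, reward, next_obs, float(terminated)))
    obs = next_obs
    if terminated or truncated:
        obs, _ = env.reset()
    epsilon = max(0.05, epsilon * 0.9995)

    if len(replay) >= batch_size:
        batch = random.sample(replay, batch_size)
        o, a, r, o2, done = (torch.as_tensor(np.array(x), dtype=torch.float32)
                             for x in zip(*batch))
        q_sa = q_net(o).gather(1, a.long().unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            # Bootstrapped Bellman target from the frozen target network.
            target = r + gamma * (1 - done) * target_net(o2).max(dim=1).values
        loss = nn.functional.mse_loss(q_sa, target)
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Periodically sync the target network with the online network.
    if step % 1000 == 0:
        target_net.load_state_dict(q_net.state_dict())
```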