PerFM 2026: Shaping the Next Generation of AI Systems


📢

News

  • 📅 2025-10-15: Deadline Extended! Abstract submission: October 26, 2025; Paper submission: October 28, 2025
  • 🎉 2025-09-11: Website launched and call for contributions open!
  • 🔥 Submit your paper via OpenReview
  • Join our Slack workspace
Venue & Time: January 27, 2026, 9:00–17:00 | Venue: EXPO Singapore, Level 2, Room: Peridot 201 | Poster area: WS31–WS40
🎯

Workshop Introduction

While foundation models excel across NLP, computer vision, and multimodal tasks, they often fail to capture individual user characteristics—preferences, behavioral patterns, and contextual needs—creating a disconnect between general intelligence and personalized user experience. This workshop, "Personalization in the Era of Large Foundation Models" (PerFM 2026), will bring together researchers and practitioners to explore theoretical foundations, scalable architectures, evaluation methods, lifelong learning, and ethical considerations, shaping the next generation of AI systems that adapt to and grow with individual users. We welcome original work, recently published work, and work in progress.

Submit Your Paper to PerFM 2026
📝

Call for Contributions (Topics and Scope)

We welcome submissions on topics including but not limited to:

🔬 Theoretical Foundations: Generalization and stability under personalization, user heterogeneity, multi-task and meta-learning theory, privacy–utility trade-offs.
🛠️ Benchmarks and Tooling: Datasets, metrics, simulators, open-source libraries, evaluation frameworks across tasks, modalities, data sources, and demographic groups.
🏗️ Architectures and Algorithms: Parameter-efficient tuning, preference alignment, retrieval-augmented personalization, federated and decentralized personalization, on-device adaptation, agentic personalization frameworks.
🧠 Memory and Lifelong Learning: Continuous user adaptation, balancing short-term contextual awareness with long-term memory persistence, catastrophic forgetting prevention, evolving user preference modeling.
⚡️ Efficiency and Scalability: Computational optimization for millions of users, model compression, distributed serving, cold start strategies for new users, lightweight deployment, parameter sharing across users, cloud-edge collaborative efficiency.
🚀 Applications: Dialogue systems, recommendation, healthcare, education, finance, scientific discovery, time-series forecasting.
🛡️ Trustworthiness: Safety, robustness, fairness, algorithmic bias across demographics, transparency in personalized decisions, privacy-preserving policies for personal data collection and storage, societal implications of widespread personalized AI deployment.
📅

Submissions and Timeline

⏰ Important Dates (AoE Time)

  • Abstract submission deadline: October 26, 2025 (extended from October 17, 2025)
  • Paper submission deadline: October 28, 2025 (extended from October 22, 2025)
  • Author notification: November 12, 2025 (extended from November 5, 2025)
  • Camera-ready submission: TBA
  • Workshop date: January 27, 2026 (at AAAI 2026)

🔥 Submission Guidelines

  • Use the AAAI 2026 style file for formatting.
  • Submissions should be PDFs of 6–8 pages for full papers or 2–4 pages for short/position papers (excluding references and appendices).
  • Double-blind review.
  • By default, submissions are non-archival.
  • Outstanding papers will be selected for lightning talks, and a best paper award will be announced at the workshop.
🎤

Invited Speakers (Keynotes)

Dr. Jay Katukuri

Affiliation: JPMorgan Chase, Managing Director of Engineering, Head of Technology AI/ML

Title: Driving Personalization in Banking and Finance from Large to Small Language Models

Abstract: This talk explores the evolving landscape of personalization in banking and finance, driven by advancements in both large and small language models. The first part delves into how Large Language Models (LLMs) can curate rich, consumer-centric merchant knowledge graphs from diverse metadata, enabling financial institutions to deliver more relevant and insightful experiences to their customers. The second part highlights the role of Small Language Models (SLMs), demonstrating how fine-tuning with low-rank adapters can efficiently mimic individual user behaviors and preferences, paving the way for scalable, targeted personalization. By bridging the capabilities of LLMs and SLMs, this session provides a comprehensive view of how financial organizations can harness the full spectrum of language model technologies to enhance customer engagement in the personalization domain.
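As a rough illustration of the low-rank adapter idea mentioned in the abstract (a generic sketch, not the speaker's actual system): a per-user adapter adds a small trainable update B·A to a frozen weight matrix W, so each user's behavior can be captured with far fewer parameters than a full fine-tune. All shapes and names below are hypothetical:

```python
import numpy as np

d, r = 16, 2  # hidden size and adapter rank (r << d)
rng = np.random.default_rng(0)

W = rng.normal(size=(d, d))        # frozen base weight, shared by all users
# Per-user low-rank adapter: only 2*d*r trainable parameters instead of d*d
A = rng.normal(size=(r, d)) * 0.01
B = np.zeros((d, r))               # B starts at zero, so the adapter is a no-op initially

def forward(x, B, A):
    """Adapted layer: y = (W + B @ A) @ x, computed without materializing W + B @ A."""
    return W @ x + B @ (A @ x)

x = rng.normal(size=d)
# With B = 0, the adapted output equals the frozen base output
assert np.allclose(forward(x, B, A), W @ x)

print(f"adapter params per user: {2 * d * r} vs full fine-tune: {d * d}")
```

The appeal for serving many users is the last line: each user stores only the small A and B matrices while the base W stays shared and frozen.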

Bio: Dr. Jay Katukuri is Managing Director of Engineering and Head of Technology AI/ML at JPMorgan Chase. His organization is responsible for building best-in-class omni-channel personalization experiences for Chase customers. Prior to joining JPMorgan Chase, Dr. Katukuri was Head of Personalization at Apple, where he built large-scale recommender systems for the App Store, Apple Music, Video, Books, and Podcasts, enhancing personalized discovery experiences for millions of users worldwide.

Prof. Hamed Zamani

Affiliation: UMass Amherst, Associate Professor

Title: Personalizing Large Language Models

Abstract: Many users these days rely on Large Language Models (LLMs) to learn about topics and find answers to their questions. In this talk, I will discuss models and evaluation methodologies for generating personalized outputs, depending on the user's preferences, history, or background knowledge. In more detail, I will first introduce three large-scale benchmarks for various LLM personalization tasks. I will later draw connections between LLM personalization and retrieval-enhanced machine learning (REML) and introduce retrieval-augmented and reasoning approaches for personalizing large language models.
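To make the retrieval-augmented direction concrete (a generic pattern sketch, not the speaker's benchmarks or models): one common setup scores a user's history entries against the current query and prepends the best matches to the prompt. The toy bag-of-words scorer and all names below are hypothetical:

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, history: list[str], k: int = 2) -> list[str]:
    """Return the k history entries most similar to the query."""
    q = Counter(query.lower().split())
    ranked = sorted(history, key=lambda h: cosine(q, Counter(h.lower().split())), reverse=True)
    return ranked[:k]

def personalized_prompt(query: str, history: list[str]) -> str:
    """Prepend retrieved user context to the query before sending it to an LLM."""
    context = "\n".join(f"- {h}" for h in retrieve(query, history))
    return f"User background:\n{context}\n\nQuestion: {query}"

history = [
    "prefers vegetarian recipes",
    "is training for a marathon",
    "asked about protein intake for runners",
]
print(personalized_prompt("suggest a post-run dinner for a runner", history))
```

In practice the keyword scorer would be replaced by a dense retriever over an embedding index, but the store-score-prepend loop is the same.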

Bio: Hamed Zamani is an Associate Professor in the Manning College of Information and Computer Sciences at the University of Massachusetts Amherst (UMass), where he serves as the Associate Director of the Center for Intelligent Information Retrieval (CIIR), one of the top academic research labs in Information Retrieval worldwide. Prior to UMass, he was a Researcher at Microsoft. His research focuses on designing and evaluating statistical and machine learning models with applications to (interactive) information access systems and retrieval-enhanced AI systems. His work has led to over 120 refereed publications in the field, in addition to a number of widely-used open-source research artifacts. His research has been recognized by a CAREER Award from NSF, Early Career Excellence in Research and Excellence in Community Engagement awards from ACM SIGIR, multiple research awards from Adobe, Amazon, Cisco, Google, and Microsoft, and multiple paper awards from SIGIR 2024, SIGIR 2023, SIGIR 2022, and CIKM 2020.

Dr. Aleks Farseev

Affiliation: SOMIN, Singapore, CEO

Title: Dynamic RAG Personalisation For the Marketing Content Ideation

Abstract: As generative artificial intelligence catalyzes a radical paradigm shift within the advertising industry, the research and practitioner communities are moving beyond general-purpose automation toward a sophisticated model of personalized orchestration. While dominant industry frameworks range from Meta's "black-box" automated pipelines to Google's collaborative co-creation paradigms, the true frontier of competitive advantage lies in the personalization of Large Language Models (LLMs) through dynamic Retrieval-Augmented Generation (RAG). In this keynote, Prof. Aleks Farseev examines how the evolution of advertising hinges on the ability to anchor generative outputs in high-fidelity "Content Perspectives" mined directly from brand ecosystems and competitor landscapes. By transitioning from static prompting to a dynamic RAG architecture that incorporates deep-mined Personas and latent Tensions, platforms like SOMIN demonstrate how AI can move from generic content generation to strategic "context engineering." This shift redefines the role of the marketing agency from a traditional producer of creative assets to a critical architect of AI systems: orchestrating complex workflows, curating personalized outputs, and safeguarding brand integrity through data-driven insights. Ultimately, this session argues that in an era of ubiquitous automation, the synthesis of emotional intelligence and brand-specific grounding via personalized RAG will become the definitive currency of value, transforming brand and agency marketing teams into indispensable strategic consultants and AI facilitators.

Bio: Prof. Aleks Farseev is a distinguished luminary in both entrepreneurship and academia. Renowned as a top-tier researcher and international keynote speaker, he stands as the driving force behind SoMonitor.ai, a cloud platform leveraging AI and Large Language Models for Competitor Analytics and Ad Optimization. His expertise shines not only as CEO but also as a Research Professor, where he adeptly imparts wisdom on Digital Marketing, Large Language Models, and AI Technology. Through courses offered in esteemed universities across the globe and over 30 publications in top-tier conferences, Prof. Farseev ensures that the path to mastery is both accessible and enlightening.

Prof. Xiangnan He

Affiliation: University of Science and Technology of China, Professor

Title: From General to Personal: Towards LLM-based Personal Intelligence

Abstract: Large Language Models are increasingly becoming the central interface between people and the digital world, yet most existing systems remain fundamentally generic—optimized for the average user rather than individuals. This keynote argues for a necessary shift from general-purpose intelligence toward LLM-based personal intelligence, and articulates a unified vision built on three core pillars: user memory, personalized alignment, and continual self-evolving. I contend that this transition is essential for enabling AI systems that can understand users, adapt over time, and ultimately realize personal intelligence for everyone.

Bio: Xiangnan He is a Professor at the School of Artificial Intelligence, University of Science and Technology of China (USTC). His research focuses on recommendation systems, information retrieval and mining, large language models and general artificial intelligence. He has published over 100 papers in leading conferences and journals, including SIGIR, ICLR, NeurIPS, WWW, KDD, IEEE TKDE, and ACM TOIS, and his work has received more than 70,000 citations on Google Scholar. He is an Elsevier China Highly Cited Researcher and a recipient of multiple international and national research awards, including the ACM SIGIR Best Paper Award, the ICLR Best Paper Award, and the Wu Wenjun Artificial Intelligence Natural Science First Prize. He serves as an Associate Editor for several top journals, including IEEE TKDE, IEEE TBD, ACM TOIS, etc., and senior PC member for conferences including SIGIR, WWW, KDD, MM, etc.

Dr. Quanyu Dai

Affiliation: Huawei Foundation Model Department, Senior Researcher

Title: Empowering Personal AI Assistants with Socially Intelligent LLMs: Exploration and Future Directions

Abstract: Social intelligence in Large Language Models (LLMs) is a core foundational capability for personal AI assistants. It enables assistants to understand user intentions and emotions within complex social contexts, make rational decisions accordingly, and provide personalized and empathetic services, thereby significantly enhancing the human-computer interaction experience. The realization of effective social intelligence primarily relies on two key technologies: first, long-term memory, which encompasses efficient storage, precise retrieval, and dynamic updating of information; and second, social reasoning, which involves a deep understanding of users and scenarios as well as rational decision-making in multi-turn interactions. This talk first analyzes the current state of social intelligence in LLMs, revealing the limitations of existing models as the backbone for personal assistants. Subsequently, it shares our explorations in enhancing these capabilities from the perspectives of long-term memory and social reasoning. Finally, the talk concludes by proposing several open questions worthy of further exploration based on industrial practices.
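The store / retrieve / update loop that the abstract identifies for long-term memory can be sketched generically (this is illustrative only, not Huawei's system; every name below is made up):

```python
import time
from dataclasses import dataclass, field

@dataclass
class MemoryItem:
    text: str
    last_used: float = field(default_factory=time.time)
    uses: int = 0

class UserMemory:
    """Toy long-term memory: store facts, retrieve by keyword overlap, update on use."""

    def __init__(self):
        self.items: list[MemoryItem] = []

    def store(self, text: str) -> None:
        self.items.append(MemoryItem(text))

    def retrieve(self, query: str, k: int = 1) -> list[str]:
        q = set(query.lower().split())
        def score(m: MemoryItem) -> int:
            return len(q & set(m.text.lower().split()))
        ranked = sorted(self.items, key=score, reverse=True)[:k]
        for m in ranked:  # dynamic update: mark retrieved items as recently used
            m.uses += 1
            m.last_used = time.time()
        return [m.text for m in ranked]

    def forget_stale(self, max_age_s: float) -> None:
        """Drop items unused for longer than max_age_s (a crude consolidation policy)."""
        now = time.time()
        self.items = [m for m in self.items if now - m.last_used <= max_age_s]

mem = UserMemory()
mem.store("user is allergic to peanuts")
mem.store("user supports Arsenal")
print(mem.retrieve("is the user allergic to anything"))  # → ['user is allergic to peanuts']
```

Real systems replace the keyword overlap with embedding similarity and the age-based eviction with learned consolidation, but the three operations remain the same.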

Bio: Quanyu Dai is a Senior Researcher at the Huawei Foundation Model Department. He received his Bachelor's degree from Shanghai Jiao Tong University and his Ph.D. from The Hong Kong Polytechnic University. His primary research interests include Large Foundation Models, LFM-based agents, and recommender systems. His research achievements have been successfully deployed across multiple scenarios of Huawei terminal business, serving hundreds of millions of users. He has published over 60 academic papers in top-tier AI conferences and journals, such as NeurIPS, KDD, WWW, ACL, TKDE, and TOIS, and serves as a long-standing reviewer for these prestigious venues.

Prof. Yulan He

Affiliation: King's College London, UK, Professor

Title: Many Minds: Rethinking LLM Personalisation

Abstract: Large language models (LLMs) are trained on the collective knowledge of the internet, but they are used to serve billions of individual users. Recent years have witnessed increasing interest in adapting population-level LLMs to accommodate the diverse goals, preferences, and contexts of individual users. To build personalised LLMs, we need models that can maintain memory over time, learn from sparse and heterogeneous personal data, and align with what each user values. In this talk, I will compare three strategies for doing this (retrieval-based prompting, model adaptation, and preference-based alignment) and illustrate each through examples from our group's recent work. I will conclude by discussing open challenges and potential directions for the future of personalised LLM research.

Bio: Yulan He is a Professor in Natural Language Processing at King's College London and a Turing AI Fellow. Her research focuses on addressing the limitations of Large Language Models (LLMs), aiming to improve their reasoning capabilities, robustness, and explainability. She has published over 300 papers on topics such as self-evolution of LLMs, mechanistic interpretability, and LLMs for educational assessment and health. She received several prizes and awards for her research, including an SWSA Ten-Year Award, a CIKM Test-of-Time Award, and was recognised as an inaugural Highly Ranked Scholar by ScholarGPS. She served as the General Chair for AACL-IJCNLP 2022 and a Program Co-Chair for various conferences such as ECIR 2024, CCL 2024, and EMNLP 2020.

🏆

Awards

Best Paper

  • Title: Not All Clients Are Equal: Collaborative Model Personalization on Heterogeneous Multi-Modal Clients
    Authors: Minhyuk Seo, Taeheon Kim, Hankook Lee, Jonghyun Choi, Tinne Tuytelaars

Outstanding Papers

  • Title: Taxonomy-Adaptive Moderation Model with Robust Guardrails for Large Language Models
    Authors: Mahesh Kumar Nandwana, Youngwan Lim, Joseph Liu, Alex Yang, Varun Notibala, Nishchaie Khanna
  • Title: Federated Agent Reinforcement Learning
    Authors: Canyu Chen, Kangyu Zhu, Zhaorun Chen, Zhanhui Zhou, Shizhe Diao, Yiping Lu, Tian Li, Manling Li, Dawn Song
  • Title: Drift No More? Context Equilibria in Multi-Turn LLM Interactions
    Authors: Vardhan Dongre, Ryan A. Rossi, Viet Dac Lai, David Seunghyun Yoon, Dilek Hakkani-Tür, Trung Bui
  • Title: Dynamic Orthogonal Continual Fine-tuning for Mitigating Catastrophic Forgetting of LLMs
    Authors: Zhixin Zhang, Zeming Wei, Meng Sun
  • Title: Generative Archetype-Grounded Item Representations for Sequential Recommendation
    Authors: Yifan Li, Jiahong Liu, Xinni Zhang, Yankai Chen, Hao Chen, Wenhao Yu, Jianting Chen, Irwin King
🗓️

Schedule

Workshop Timetable

Time | Session
MORNING SESSION
09:00–09:10 | Opening Remarks & Welcome
09:10–09:50 | Keynote: Dr. Jay Katukuri (JPMorgan Chase, Head of Technology AI/ML)
09:50–10:30 | Keynote: Prof. Hamed Zamani (UMass Amherst, Associate Professor)
10:30–11:00 | Coffee Break & Morning Poster Session
11:00–12:00 | Oral Presentations Session 1
12:00–12:30 | Keynote: Dr. Aleks Farseev (SOMIN, CEO)
12:30–14:00 | Lunch Break
AFTERNOON SESSION
14:00–14:30 | Keynote: Prof. Xiangnan He (USTC, Professor)
14:30–15:00 | Keynote: Dr. Quanyu Dai (Huawei, Senior Researcher)
15:00–15:30 | Oral Presentations Session 2
15:30–16:00 | Coffee Break & Afternoon Poster Session
16:00–16:40 | Keynote: Prof. Yulan He (KCL, Professor)
16:40–17:00 | Award Ceremony & Closing Remarks

Oral Presentations

Session 1 (11:00–12:00)

Time | Title | Authors
11:00–11:15 | Not All Clients Are Equal: Collaborative Model Personalization on Heterogeneous Multi-Modal Clients | Minhyuk Seo, Taeheon Kim, Hankook Lee, Jonghyun Choi, Tinne Tuytelaars
11:15–11:30 | Taxonomy-Adaptive Moderation Model with Robust Guardrails for Large Language Models | Mahesh Kumar Nandwana, Youngwan Lim, Joseph Liu, Alex Yang, Varun Notibala, Nishchaie Khanna
11:30–11:45 | Federated Agent Reinforcement Learning | Canyu Chen, Kangyu Zhu, Zhaorun Chen, Zhanhui Zhou, Shizhe Diao, Yiping Lu, Tian Li, Manling Li, Dawn Song
11:45–12:00 | Drift No More? Context Equilibria in Multi-Turn LLM Interactions | Vardhan Dongre, Ryan A. Rossi, Viet Dac Lai, David Seunghyun Yoon, Dilek Hakkani-Tür, Trung Bui

Session 2 (15:00–15:30)

Time | Title | Authors
15:00–15:15 | Dynamic Orthogonal Continual Fine-tuning for Mitigating Catastrophic Forgetting of LLMs | Zhixin Zhang, Zeming Wei, Meng Sun
15:15–15:30 | Generative Archetype-Grounded Item Representations for Sequential Recommendation | Yifan Li, Jiahong Liu, Xinni Zhang, Yankai Chen, Hao Chen, Wenhao Yu, Jianting Chen, Irwin King

Poster Sessions

Morning Poster Session (10:30–11:00)

Title | Authors
Preference Descriptions for Dynamic Personalization of Large Language Models | Naofumi Osawa
A Unified Framework for Prompt Privacy is Elusive and Misleading | Prakhar Ganesh, Yash More, Marco Romanelli, Ferdinando Fioretto, Golnoosh Farnadi
PersonaAgent with GraphRAG: Community-Aware Knowledge Graphs for Personalized LLM | Siqi Liang, Yudi Zhang, Yue Guo
Enhancing Serendipity Recommendation System by Constructing Dynamic User Knowledge Graphs with Large Language Models | Qian Yong, Yanhui Li, Jialiang Shi, Yaguang Dou, Tian Qi
Domain-Specific LLM Adaptation: Bridging Personalization and Efficiency Through Synthetic Data and Optimization | Iman Abbasnejad, Brett Tully, Wei Zhou, Tomal Deb, Sheldon Liu, Xuefeng Liu, Warren Wei
Structured Personalization: Modeling Constraints as Matroids for Data-Minimal LLM Agents | Daniel Platnick, Marjan Alirezaie, Hossein Rahnama
LOOM: Personalized Learning Informed by Daily LLM Conversations Toward Long-Term Mastery via a Dynamic Learner Memory Graph | Justin Cui, Kevin Pu, Tovi Grossman
ShapLoRA: Allocation of Low-rank Adaption on Large Language Models via Shapley Value Inspired Importance Estimation | Yi Zhao, Wei Zhu
Personalization of Large Foundation Models for Health Interventions | Stefan Konigorski, Johannes E. Vedder, Babajide Alamu Owoyele, İbrahim Özkan
ATLAS: User-Side Personalization and Privacy Protection Against Geolocation Risks in Large Vision–Language Models | Kelvin Yuxiang Huang, Qingyun Wang, Yi R. Fung, Yue Xiao
Controlled Text Generation of DLLMs with Efficient Classifier Guidance | Zhuo Cao, Xuanyi Xie, Qingyan Wei, Jiawang Zhao

Afternoon Poster Session (15:30–16:00)

Title | Authors
LENS: Learning Architecture Navigator for LLM Agentic Systems | Guancheng Wan, Jiayi Yang, Mengting Li, Eric Hanchen Jiang, Haixin Wang, Hui Yi Leong, Yizhou Sun, Wei Wang
Lightweight Inference-Time Personalization for Frozen Knowledge Graph Embeddings | Cerag Oguztuzun, Ozan Oguztuzun
FinPerF: Dynamic User Profiling for Personalized Financial News Recommendation | Kristina Lewandowska
T-REX: Transformer-based Category Sequence Generation for Grocery Basket Recommendation | Soroush Mokhtari, Muhammad Tayyab Asif, Sergiy Zubatiy
Mitigating Conversational Amnesia in Tutoring Agents via Hybrid Memory and Offline Consolidation | Luoxiao Yang
Hybrid Detection of Machine-Generated Texts in Academic Contexts | Viacheslav Shalamov, Korniliev Artemiy, Ilya Astafjev, Valeria Efimova
PolyLingua: Margin-based Inter-class Transformer for Robust Cross-domain Language Detection | Ali Lotfi Rezaabad, Bikram Khanal, Shashwat Chaurasia, Lu Zeng, Dezhi Hong, Hossein Bashashati, Thomas Butler, Megan Ganji
BiasJailbreak: Analyzing Ethical Biases and Jailbreak Vulnerabilities in Large Language Models | Isack Lee, Haebin Seong
SCALE: Upscaled Continual Learning of Large Language Models | Jin-woo Lee, Junhwa Choi, Bongkyu Hwang, Jinho Choo, Bogun Kim, JeongSeon Yi, Joonseok Lee, DongYoung Jung, Jaeseon Park, Kyoungwon Park, Suk-hoon Jung
Enhancing Human-Like Responses in Large Language Models | Ethem Yağız Çalık, Talha Rüzgar Akkuş
Tiny Personal Critic: A Lightweight Critic for Low-Compute Personalization | Aditya Singh, M Ganesh Kumar
Learning Without Forgetting: Preserving Reasoning Capabilities in LLMs via Structural Orthogonality | Mustafa Hayri Bilgin, Mariam Barry, Albert Bifet, Azzedine Ait Said, Soumya Banerjee
👥

Organizers

  • Jiahong Liu (CUHK)
  • Yang Zhang (NUS)
  • Weizhi Zhang (UIC)
  • Runcong Zhao (KCL)
  • Lucas Vinh Tran (JPMorgan Chase)
👥

Advisory Committee

  • Irwin King (CUHK)
  • Tat-Seng Chua (NUS)
  • Philip S. Yu (UIC)
👥

Area Chairs and Program Chairs

  • Yali Fu (JLU, China)
  • Zhihao Wu (KCL, UK)
  • Tianyi Yao (Microsoft, US)
  • Yaozu Wu (UTokyo, Japan)
  • Raghav Sharma (Workday, US)
  • Hins Hu (Cornell, US)
  • Ramasubramanian Balasubramanian (LinkedIn, US)
  • Yuyao Yang (UIC, US)
  • Zeyu Zhang (RUC, China)
  • Bodhisatta Maiti (Home Depot, US)
  • Rajendra Ugrani (Meta, US)
  • Wei-Chieh Huang (UIC, US)
  • Dipanwita Guhathakurta (IBM, US)
  • Deep Shah (Google, US)
  • Twinkle Joshi (IQGeo, Canada)
  • Ketan Thakkar (LinkedIn, US)
  • Jeyadev Needhidevan (NYU, US)
  • Yilun Qiu (NUS, Singapore)
  • Chengyu Cao (HIT, China)
  • Huanhuan Ma (UIC, US)
  • Yifan Li (CUHK, Hong Kong SAR)
  • Jieyong Kim (Yonsei, South Korea)
  • Xiaoyan Zhao (CUHK, Hong Kong SAR)
  • Wenhao Yu (CUHK, Hong Kong SAR)
  • Italo Luis da Silva (KCL, UK)
  • Liangwei Yang (UIC, US)
  • Qinglin Zhu (KCL, UK)
  • Dongha Lee (Yonsei, South Korea)

FAQ

🤔 Can I attend virtually?
TBA.

📚 What does non-archival mean?
Non-archival means that submissions are not formally published in proceedings, so authors remain free to submit or publish the work elsewhere.

📧

Contact

Feel free to contact us at 📧 personalizationllm@outlook.com or via our Slack workspace.