Human-AI Interaction in the Era of Large Language Models (LLMs)
| Published in: | ProQuest Dissertations and Theses (2025) |
|---|---|
| Main Author: | |
| Published: | ProQuest Dissertations & Theses |
| Subjects: | |
| Online Access: | Citation/Abstract; Full Text - PDF |
| Abstract: |
The first chapter, joint work with Nikhil Malik, Tim Derdenger, and Kannan Srinivasan, challenges conventional wisdom regarding eXplainable AI (XAI) regulations such as GDPR. Through a game-theoretic model examining XAI methods and levels in a duopoly market with heterogeneous customer preferences, we demonstrate that partial explanations can emerge as an equilibrium in unregulated settings. Importantly, we identify conditions under which mandating full explanations through regulation may actually harm consumer surplus rather than enhance it. This finding holds across various policy levers (strict, self-regulating, and lower-bound), regardless of firms’ choice of XAI methods, and across policy objectives including welfare maximization, consumer surplus, and average XAI depth. Our comparative analysis reveals that while strict XAI policies ensure uniform explanation depth, they potentially limit firms’ capacity for differentiation and innovation. Conversely, unregulated XAI, while offering maximum flexibility, may fail to guarantee a minimum explanation depth for all consumers. The introduction of flexible approaches—self-regulating XAI and lower-bounded XAI—results in higher consumer welfare than either unregulated or full XAI policies. This research urges policymakers to consider a more nuanced approach when crafting XAI regulations, as a one-size-fits-all policy across all markets, particularly one mandating full explanation, may not yield the desired outcomes. For firms operating in these markets, the optimal strategy may not be to provide full explanations, as partial explanations can emerge as equilibrium strategies that better serve their competitive positioning while still addressing consumer needs.

The second chapter addresses the growing use of LLMs as simulated consumers in marketing research. I develop a novel approach based on Shapley values from cooperative game theory to interpret LLM behavior and quantify the relative contribution of prompt components to model outputs (an illustrative sketch of this style of attribution follows this record). Through applications in discrete choice experiments and cognitive bias investigations, I uncover what I term the “token noise” effect—a phenomenon in which LLM decisions are disproportionately influenced by tokens providing minimal informative content (such as empty lines in a questionnaire!). This finding provides a theoretical foundation for understanding how LLMs process information and make decisions, revealing fundamental differences from human cognition that must be accounted for in marketing research. For marketers employing LLMs for consumer simulation, this raises significant concerns about the validity of using LLMs as proxies for human subjects and necessitates rigorous validation procedures when using LLMs for preference elicitation or behavior prediction. The proposed Shapley value method offers practitioners a model-agnostic approach for optimizing prompts and mitigating apparent cognitive biases in LLM responses.

The third chapter investigates the unintended consequences of AI alignment techniques on the creative capabilities of language models. Through a series of experiments with the Llama model family (created by Meta/Facebook), I demonstrate that alignment methods such as Reinforcement Learning from Human Feedback (RLHF), while reducing bias and harmful outputs, significantly diminish syntactic and semantic diversity. My findings reveal that aligned models exhibit lower entropy in token predictions, form distinct clusters in embedding space, and gravitate toward “attractor states”, indicating limited output diversity (a sketch of such diversity probes also follows this record). This contributes to our theoretical understanding of AI creativity by conceptualizing the relationship between alignment and creativity as a fundamental trade-off rather than a technical limitation. Marketing teams must strategically balance the benefits of AI safety alignment against creative performance when selecting language models for content generation tasks. Different models may be optimal for different marketing functions—aligned models for customer-facing interactions where consistency and brand safety are paramount, and base models for ideation tasks that benefit from novelty and creativity, such as ad copywriting and customer persona development.

The fourth chapter steps beyond individual models to examine networks of AI agents that work together to accomplish complex goals, such as automating various business functions (e.g., customer support, SEO, refunds). This introduces a new challenge: not just how we build these agents, but how we coordinate them. To address this, I introduce Pel, a programming language I developed from scratch specifically for orchestrating AI agents. Pel offers an elegant, principled framework for multi-agent AI systems, addressing limitations in current methods of controlling LLMs through a syntactically simple yet semantically rich platform for expressing complex actions, control flow, and inter-agent communication. Its design emphasizes a minimal grammar suitable for constrained LLM generation, powerful composition mechanisms, and built-in support for natural language conditions. This advances programming language theory through the development of a domain-specific language (DSL) optimized for AI agent control, proposing a new paradigm for human-AI interaction that incorporates the unique capabilities and limitations of language models. From a managerial perspective, Pel provides marketing technology teams with a specialized tool for building sophisticated LLM-powered marketing automation systems for customer engagement and support, content personalization, and multi-channel campaign management.
|
| ISBN: | 9798290943916 |
| Source: | ProQuest Dissertations & Theses Global |
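
The second chapter's Shapley-value attribution of prompt components can be pictured with a short sketch. This is a minimal illustration under stated assumptions, not the dissertation's code: `query_llm` is a hypothetical stand-in for a real model call that returns a scalar output of interest (for example, the probability of choosing option A in a discrete choice question), the prompt is assumed to decompose into a small list of distinct text components, and coalitions are enumerated exactly.

```python
# Illustrative sketch: exact Shapley values over prompt components.
# Assumes a hypothetical query_llm(prompt) -> float; not the dissertation's code.
from itertools import combinations
from math import factorial


def query_llm(prompt: str) -> float:
    """Hypothetical stand-in for an LLM call returning a scalar output of interest."""
    raise NotImplementedError("replace with a real model call")


def shapley_attribution(components: list[str]) -> dict[str, float]:
    """Exact Shapley value of each prompt component's contribution to the output.

    The value of a coalition S is the model output when the prompt is built only
    from the components in S (kept in their original order); all other components
    are dropped. Cost is exponential in len(components), so this only suits a
    handful of components. Assumes component strings are distinct.
    """
    n = len(components)

    def coalition_value(subset: tuple[int, ...]) -> float:
        prompt = "\n".join(components[i] for i in sorted(subset))
        return query_llm(prompt)

    phi = {c: 0.0 for c in components}
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):  # size of the coalition formed before component i joins
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            for subset in combinations(others, k):
                gain = coalition_value(subset + (i,)) - coalition_value(subset)
                phi[components[i]] += weight * gain
    return phi
```

For realistic prompts with many components, a sampling-based Shapley approximation would replace the exact enumeration; the abstract does not specify which estimator the dissertation uses.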
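The third chapter's diversity findings (lower next-token entropy and clustering in embedding space for aligned models) suggest two simple probes. The sketch below is illustrative only and assumes the caller already has a NumPy logit vector from a model's next-token distribution and a matrix of embeddings of generated texts; it is not the dissertation's measurement code.

```python
# Illustrative diversity probes; assumes logits and embeddings were obtained
# elsewhere from the models being compared.
import numpy as np


def token_entropy(logits: np.ndarray) -> float:
    """Shannon entropy (nats) of the next-token distribution implied by a logit vector."""
    z = logits - logits.max()           # subtract max for numerical stability
    p = np.exp(z) / np.exp(z).sum()     # softmax
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())


def mean_pairwise_cosine(embeddings: np.ndarray) -> float:
    """Average pairwise cosine similarity across text embeddings of shape (num_texts, dim).

    Higher values suggest outputs collapsing toward a few regions of embedding
    space ("attractor states"); lower values suggest more semantic diversity.
    """
    x = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = x @ x.T
    n = x.shape[0]
    return float(sims[~np.eye(n, dtype=bool)].mean())
```

Comparing these two statistics between a base and an aligned checkpoint on the same prompts would reproduce the qualitative contrast the abstract describes, though the dissertation's exact metrics may differ.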