Clara’s Verdict
Let me be direct about the conditions of this review. Emotional Intelligence for AI Prompts, written by Joe Davis and published independently in February 2026, carries no Audible UK ratings and no listener reviews at the time of writing. The rating field in the catalogue data appears to contain an Amazon shopping cart error message rather than a score, which suggests a data retrieval anomaly rather than a genuine assessment. I am working from the synopsis, the narrator credit, and the genre designation of technology. I will give this the fair reading it deserves, but I want to be transparent about what I cannot verify.
The title raises a genuinely interesting question: does the concept of emotional intelligence, as developed by Daniel Goleman and applied in human psychology and professional settings, map usefully onto the practice of crafting prompts for large language models? The premise is timely enough to be worth taking seriously even in the absence of listener consensus.
What the Book Is Reaching For
The synopsis positions the book at "the revolutionary intersection of human emotion and artificial intelligence," which is marketing language but not empty marketing language. There is a real and currently under-explored relationship between the framing of a request and the quality of the output an AI produces. Anyone who has spent time working seriously with large language models knows that the difference between a prompt that generates generic, surface-level responses and one that generates something genuinely useful is partly structural and partly a matter of tone, context, and the implicit relationship the prompt establishes with the model.
Whether "emotional intelligence" is precisely the right framework for understanding that relationship is a question worth asking carefully. The term was developed to describe human capacities: self-awareness, empathy, social skill, emotional regulation. Applying it to interactions with systems that do not have emotions is either a generative metaphor or a category error, depending on how precisely you think the underlying concepts need to hold. The book promises "practical strategies, insightful case studies, and exercises," which suggests it is aimed at application rather than theoretical rigour.
At four hours and 45 minutes, the runtime is concise for a practical guide of this scope. Independent publication through Joe Davis suggests this is author-led rather than coming from a trade publisher with editorial infrastructure, which can mean anything from highly focused practical insight to less rigorously edited content. The field is new enough that being early and practical may be more valuable than being comprehensive.
Jordan Zuniga at the Microphone
Jordan Zuniga is the narrator. For a technology self-help title, the narrator’s ability to handle technical vocabulary fluently and to maintain an engaging pace through instructional content matters considerably. A narration that feels mechanical or that trips over AI terminology will undermine the credibility of the practical guidance. Without listener reviews to draw on, assessing Zuniga’s performance here is genuinely speculative. His credit on an independently produced title suggests he works across the AI and self-development space, which gives some confidence in his familiarity with the vocabulary.
Four hours and 45 minutes is a manageable commitment even if the narration or the content turns out not to work for you. The Audible sample will tell you a great deal in the first five minutes.
What Readers Say
No reviews are available. The book was released in February 2026, which makes it a recent independent title that has not yet accumulated listener response on Audible UK. The AI productivity and prompt engineering space has grown considerably crowded since 2022, and titles in this area often find their audiences through professional recommendation and social sharing rather than through traditional browse discovery on audiobook platforms. Checking the author’s own platform, LinkedIn, or AI-focused communities may give a better sense of the early reception.
For a more established body of listener response, other titles in the adjacent prompt engineering space, along with discussions in AI-focused newsletters and communities, may provide useful context for comparison.
The Broader Question This Book Raises
There is a more interesting question underneath the one the book’s title poses. Effective prompting of large language models is genuinely a skill, and it is one that improves with practice and reflection in ways that most professional users have discovered empirically rather than through any systematic framework. The insight that providing context, tone, audience, and purpose alongside a request significantly improves output quality is not a complicated one, but it is routinely overlooked. Whether framing that insight through the vocabulary of emotional intelligence adds clarity or adds an unnecessary layer of metaphor is something that each reader will need to assess for themselves, and the answer may differ depending on your familiarity with the emotional intelligence literature in its original human-psychology context.
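That insight about supplying context, tone, audience, and purpose can be made concrete. As a minimal sketch of the practice, not anything drawn from the book itself (the field names and wording are illustrative assumptions):

```python
def build_prompt(task: str, context: str, audience: str, tone: str, purpose: str) -> str:
    """Assemble a request that states its framing explicitly,
    rather than leaving the model to infer it from a bare task.
    The field names here are illustrative, not a standard."""
    return (
        f"Context: {context}\n"
        f"Audience: {audience}\n"
        f"Tone: {tone}\n"
        f"Purpose: {purpose}\n"
        f"Task: {task}"
    )

# A bare request like "Summarise this report" carries none of this framing;
# the structured version makes every assumption explicit.
prompt = build_prompt(
    task="Summarise the attached quarterly report in 200 words.",
    context="The report covers a mid-sized UK retailer's Q3 results.",
    audience="Non-financial board members.",
    tone="Plain and neutral, no jargon.",
    purpose="Support a ten-minute briefing at the next board meeting.",
)
print(prompt)
```

Nothing here is sophisticated, which is rather the point the paragraph above makes: the gap between a bare request and a well-framed one is small in effort and large in effect.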
What the book is clearly not attempting is a technical guide to prompt engineering in the LLM-researcher sense: tokenisation, system prompts, few-shot examples, parameter tuning. This is a book about the human side of the interaction, which is a legitimate and relatively underserved area compared to the technical literature.
Who Should Listen?
This may be of genuine interest to communication professionals, marketers, educators, and creative practitioners who use AI tools regularly and want to approach prompt design with more intentionality. The emotional intelligence framing is likely to resonate with readers who already have that vocabulary from professional development contexts and want to extend it into their AI workflows. Approach with realistic expectations about what a four-hour independently published title can cover in a rapidly evolving field, and consider sampling before committing.