Prompt Engineering Applications

Explore top LinkedIn content from expert professionals.

  • View profile for Andrew Ng
    Andrew Ng is an Influencer

    Founder of DeepLearning.AI; Managing General Partner of AI Fund; Exec Chairman of LandingAI

    2,195,097 followers

    Last week, I described four design patterns for AI agentic workflows that I believe will drive significant progress: Reflection, Tool use, Planning and Multi-agent collaboration. Instead of having an LLM generate its final output directly, an agentic workflow prompts the LLM multiple times, giving it opportunities to build step by step to higher-quality output. Here, I'd like to discuss Reflection. It's relatively quick to implement, and I've seen it lead to surprising performance gains. You may have had the experience of prompting ChatGPT/Claude/Gemini, receiving unsatisfactory output, delivering critical feedback to help the LLM improve its response, and then getting a better response. What if you automate the step of delivering critical feedback, so the model automatically criticizes its own output and improves its response? This is the crux of Reflection. Take the task of asking an LLM to write code. We can prompt it to generate the desired code directly to carry out some task X. Then, we can prompt it to reflect on its own output, perhaps as follows: Here’s code intended for task X: [previously generated code] Check the code carefully for correctness, style, and efficiency, and give constructive criticism for how to improve it. Sometimes this causes the LLM to spot problems and come up with constructive suggestions. Next, we can prompt the LLM with context including (i) the previously generated code and (ii) the constructive feedback, and ask it to use the feedback to rewrite the code. This can lead to a better response. Repeating the criticism/rewrite process might yield further improvements. This self-reflection process allows the LLM to spot gaps and improve its output on a variety of tasks including producing code, writing text, and answering questions. And we can go beyond self-reflection by giving the LLM tools that help evaluate its output; for example, running its code through a few unit tests to check whether it generates correct results on test cases or searching the web to double-check text output. Then it can reflect on any errors it found and come up with ideas for improvement. Further, we can implement Reflection using a multi-agent framework. I've found it convenient to create two agents, one prompted to generate good outputs and the other prompted to give constructive criticism of the first agent's output. The resulting discussion between the two agents leads to improved responses. Reflection is a relatively basic type of agentic workflow, but I've been delighted by how much it improved my applications’ results. If you’re interested in learning more about reflection, I recommend: - Self-Refine: Iterative Refinement with Self-Feedback, by Madaan et al. (2023) - Reflexion: Language Agents with Verbal Reinforcement Learning, by Shinn et al. (2023) - CRITIC: Large Language Models Can Self-Correct with Tool-Interactive Critiquing, by Gou et al. (2024) [Original text: https://lnkd.in/g4bTuWtU ]
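    For readers who want to try this, here is a minimal sketch of the criticize/rewrite loop described above, written against an OpenAI-style chat API. The model name, the `generate` helper, and the example task are illustrative assumptions, not part of the original post.

    ```python
    # Minimal Reflection loop: generate code, critique it, rewrite with the critique.
    # Assumes the OpenAI Python SDK; model name, prompts, and task are placeholders.
    from openai import OpenAI

    client = OpenAI()

    def generate(prompt: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    task = "Write a Python function that merges two sorted lists."
    code = generate(f"Write code for the following task:\n{task}")

    for _ in range(2):  # a couple of criticize/rewrite rounds is often enough
        critique = generate(
            f"Here's code intended for this task: {task}\n\n{code}\n\n"
            "Check the code carefully for correctness, style, and efficiency, "
            "and give constructive criticism for how to improve it."
        )
        code = generate(
            f"Task: {task}\n\nPrevious code:\n{code}\n\n"
            f"Reviewer feedback:\n{critique}\n\n"
            "Rewrite the code, addressing the feedback."
        )

    print(code)
    ```

    As the post suggests, the same loop can be extended so the critique step also runs unit tests or a web search and feeds those results back into the rewrite prompt.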

  • View profile for Sahar Mor

    I help researchers and builders make sense of AI | ex-Stripe | aitidbits.ai | Angel Investor

    40,250 followers

    In the last three months alone, over ten papers outlining novel prompting techniques were published, boosting LLMs’ performance by a substantial margin. Two weeks ago, a groundbreaking paper from Microsoft demonstrated how a well-prompted GPT-4 outperforms Google’s Med-PaLM 2, a specialized medical model, solely through sophisticated prompting techniques. Yet, while our X and LinkedIn feeds buzz with ‘secret prompting tips’, a definitive, research-backed guide aggregating these advanced prompting strategies is hard to come by. This gap prevents LLM developers and everyday users from harnessing these novel frameworks to enhance performance and achieve more accurate results. https://lnkd.in/g7_6eP6y In this AI Tidbits Deep Dive, I outline six of the best and recent prompting methods: (1) EmotionPrompt - inspired by human psychology, this method utilizes emotional stimuli in prompts to gain performance enhancements (2) Optimization by PROmpting (OPRO) - a DeepMind innovation that refines prompts automatically, surpassing human-crafted ones. This paper discovered the “Take a deep breath” instruction that improved LLMs’ performance by 9%. (3) Chain-of-Verification (CoVe) - Meta's novel four-step prompting process that drastically reduces hallucinations and improves factual accuracy (4) System 2 Attention (S2A) - also from Meta, a prompting method that filters out irrelevant details prior to querying the LLM (5) Step-Back Prompting - encouraging LLMs to abstract queries for enhanced reasoning (6) Rephrase and Respond (RaR) - UCLA's method that lets LLMs rephrase queries for better comprehension and response accuracy Understanding the spectrum of available prompting strategies and how to apply them in your app can mean the difference between a production-ready app and a nascent project with untapped potential. Full blog post https://lnkd.in/g7_6eP6y
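    As one concrete illustration, here is a minimal sketch of Step-Back Prompting (method 5 above): ask the model for the underlying principle first, then answer the original question with that abstraction in context. The `ask` helper, model name, and example question are illustrative assumptions, not taken from the paper.

    ```python
    # Step-Back Prompting sketch: abstract first, then answer with the abstraction as context.
    # Assumes the OpenAI Python SDK; model name and wording are placeholders.
    from openai import OpenAI

    client = OpenAI()

    def ask(prompt: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    question = ("If the pressure of an ideal gas doubles while temperature is held constant, "
                "what happens to its volume?")

    # Step 1: step back to the general principle behind the question.
    principle = ask(f"What general physics principle is needed to answer this question?\n{question}")

    # Step 2: answer the original question with that abstraction as context.
    answer = ask(f"Principle: {principle}\n\nUsing this principle, answer step by step:\n{question}")
    print(answer)
    ```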

  • View profile for Amanda Bickerstaff
    Amanda Bickerstaff is an Influencer

    Educator | AI for Education Founder | Keynote | Researcher | LinkedIn Top Voice in Education

    68,908 followers

    Yesterday I had the pleasure of working with leaders and teachers from L’Anse Creuse School District outside of Detroit for one of our Train-the-Trainer Institutes. We had a great time digging into all things GenAI! Our 1-day institute focuses on two key PD sessions: Introduction to Generative AI for Educators and Prompting 101. We work to upskill the new trainers on foundational concepts of GenAI before equipping them with strategies to turnkey this work in their schools. In our Prompting 101 session we focus on strategies for getting the best out of popular and powerful free GenAI tools like ChatGPT, Claude, and Gemini. What's great is there are many different prompt frameworks out there for educators to use - including our 5S Framework: Set the scene (priming), be Specific, Simplify language, Structure output, and Share feedback. We also break good prompting down into the following four steps:

    1. Clarity is Key. Explicitly state what you would like the model to do. The more specific your prompt, the more accurate and tailored the AI's response will be. General prompts will result in general responses.

    2. Pick the Right Prompting Technique. You may be able to get what you need from one well-structured prompt (one-shot prompting), but there are other techniques too. You can provide examples in your prompt to guide the AI's responses (few-shot prompting), or break your requests down into steps (chain-of-thought prompting).

    3. Provide Context. The chatbot's input is called a "context window" for a reason! Give the AI as much necessary background information as possible. This will help it prepare a response that fits your needs.

    4. Format Matters. A well-structured prompt guides the AI in understanding the exact nature of your request. Use clear and direct language, and structure your prompt logically.

    So what does that look like in practice for a one-shot prompt?

    An OK prompt for educators might look like this: “Create a lesson plan about multiplying fractions for 5th graders”

    A better prompt would look like: “Act as an expert mathematician and a teacher skilled in designing engaging learning experiences for upper elementary students. Design a lesson plan about multiplying fractions for 5th grade students.”

    And an even more effective prompt would be: “You are an expert mathematician and teacher skilled in Universal Design for Learning. Design an accessible lesson plan about multiplying fractions for 5th grade students interested in soccer. The lesson should include a hands-on activity and frequent opportunities for collaboration. Format your response in a table.”

    We take this approach every time we create one of our more than 100 customizable prompts in our Prompt Library. You can check out our complete prompt library here: https://lnkd.in/evExAZSt. AI for Education #teachingwithAI #promptengineering #GenAI #aieducation #aiforeducation

  • View profile for Ravit Jain
    Ravit Jain is an Influencer

    Founder & Host of "The Ravit Show" | Influencer & Creator | LinkedIn Top Voice | Startups Advisor | Gartner Ambassador | Data & AI Community Builder | Influencer Marketing B2B | Marketing & Media | (Mumbai/San Francisco)

    164,296 followers

    My friend Chip has done it again!!!! Just finished reading AI Engineering! Not going to lie, this is not your typical AI book. It's dense, it's opinionated in the best way, and it's exactly what we needed in the middle of all the noise around GenAI.

    This book is not about “how to prompt better” or “10 tools to build with ChatGPT”. It’s a real engineering guide. You want to understand why RAG works or when to use finetuning over prompt engineering? This book breaks it down.

    The chapters that hit hard for me:

    1. Evaluation. Most people don’t talk about how tough it is to evaluate LLMs. Chip dives deep into perplexity, cross entropy, exact match, embedding-based similarity, and even using LLMs to judge other LLMs. There's nuance here. She lays out the limitations, and it’s not sugarcoated. If you're building anything beyond a toy demo, this stuff is critical.

    2. Prompt Engineering. Way beyond “add examples to your prompt”. Talks about context windows, system prompts, chaining reasoning steps, prompt versioning, and even how to defend against jailbreaks and prompt injection. Real talk for anyone putting a model in front of users.

    3. RAG and Agents. RAG gets the technical treatment it deserves. Vector stores, retrieval strategies, failure modes, ways to optimize latency — all in there. On the agent side, I appreciated that she didn’t oversell it. Agents can be powerful, sure, but they also fail in weird ways and we’re still figuring them out. This section felt honest.

    4. Finetuning. The memory math. Quantization. PEFTs. When to merge models. If you’ve ever struggled with GPU limits or run into model bloat, this chapter hits home. This isn’t “click this button to fine-tune” — it’s “here’s what’s actually going on”.

    5. Inference optimization. If you’ve worked with LLM latency, you know the pain. This book doesn’t gloss over it. It talks about how to cache, how to route requests, model optimization tricks, service-level tricks, and tradeoffs around hosting vs. calling APIs.

    What I liked most: it’s not trying to hype up AI. It’s showing how to actually build with it. It doesn’t assume you’re at a FAANG company with unlimited infra. It assumes you’re trying to ship real stuff, today, under real constraints. And I genuinely believe every engineer building production AI systems should read it. It’s not a light read. It’s a reference manual. And yeah, it’s going to sit on my desk for a long time.

    Chip — hats off. You didn’t write a trend-chasing book. You wrote a field guide for the ones actually building. #aiengineering #theravitshow
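    As a small taste of the evaluation ideas mentioned above, here is a sketch of embedding-based similarity: score a generated answer by the cosine similarity between its embedding and a reference answer's embedding. The embedding model, example strings, and review cutoff are illustrative assumptions, not taken from the book.

    ```python
    # Embedding-based similarity: score a model answer against a reference answer.
    # Assumes the OpenAI Python SDK; model name and example strings are placeholders.
    import numpy as np
    from openai import OpenAI

    client = OpenAI()

    def embed(text: str) -> np.ndarray:
        resp = client.embeddings.create(model="text-embedding-3-small", input=text)
        return np.array(resp.data[0].embedding)

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    reference = "The Treaty of Versailles was signed in 1919."
    generated = "It was signed in 1919, formally ending the First World War."

    score = cosine_similarity(embed(reference), embed(generated))
    print(f"similarity = {score:.3f}")  # flag low-scoring answers for human review; the cutoff is use-case specific
    ```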

  • View profile for John Kutay
    John Kutay is an Influencer

    Data & AI Engineering Leader

    9,107 followers

    🩺 RAG and Fine-Tuning: Precision and Personalization in AI 🩺

    Consider a highly skilled radiologist with decades of training (Fine-Tuning). This training allows them to accurately interpret medical images based on patterns they've mastered. However, to provide the best diagnosis, they need your specific patient data (RAG), such as images from a recent CT scan. Combining their expertise with this personalized data results in a precise and personalized diagnosis.

    In AI, Fine-Tuning is similar to the radiologist’s extensive training. It involves adjusting pre-trained models to perform specific tasks with high accuracy. This process uses a large dataset to refine the model’s parameters, making it highly specialized and efficient for particular applications.

    Retrieval-Augmented Generation (RAG) works like the personalized patient data. RAG integrates external, real-time information into the model’s responses. It retrieves relevant data from various sources during inference, allowing the model to adapt and provide more contextually accurate outputs.

    How They Work Together:

    Fine-Tuning:
    ✅ Purpose: Customizes the base model for specific tasks.
    ✅ Process: Uses a labeled dataset to refine the model’s parameters.
    ✅ Outcome: Produces a highly accurate and efficient model for the task at hand.

    RAG:
    ✅ Purpose: Enhances the model with real-time, relevant information.
    ✅ Process: During inference, it retrieves data from external sources and integrates this data into the model’s responses.
    ✅ Outcome: Provides contextually relevant and up-to-date outputs, improving the model’s adaptability.

    Combining Fine-Tuning and RAG creates a powerful AI system. Fine-Tuning ensures deep expertise and accuracy, while RAG adds a layer of real-time adaptability and relevance. This combination allows AI models to deliver precise, contextually aware solutions, much like a skilled radiologist providing a personalized diagnosis based on both their expertise and the latest patient data. #dataengineering #AI #MachineLearning #RAG #FineTuning #DataScience #ArtificialIntelligence
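    To make the RAG half of the analogy concrete, here is a minimal sketch: embed a small document set, retrieve the closest matches to the question at inference time, and place them in the prompt. The embedding model, chat model, and toy corpus are illustrative assumptions, not a production design.

    ```python
    # Minimal RAG sketch: embed a corpus, retrieve top matches, augment the prompt.
    # Assumes the OpenAI Python SDK; model names and the toy corpus are placeholders.
    import numpy as np
    from openai import OpenAI

    client = OpenAI()

    # Toy document store; in practice this would be a vector database.
    documents = [
        "CT scan from 2024-05-01 shows a 4 mm nodule in the right lower lobe.",
        "Patient reports no chest pain; prior imaging from 2021 was clear.",
        "Protocol: follow-up imaging is recommended only for nodules larger than 6 mm.",
    ]

    def embed(text: str) -> np.ndarray:
        resp = client.embeddings.create(model="text-embedding-3-small", input=text)
        return np.array(resp.data[0].embedding)

    doc_vectors = [embed(d) for d in documents]

    def retrieve(query: str, k: int = 2) -> list[str]:
        q = embed(query)
        scores = [float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v))) for v in doc_vectors]
        top = np.argsort(scores)[::-1][:k]
        return [documents[i] for i in top]

    question = "Does this patient's nodule need follow-up imaging?"
    context = "\n".join(retrieve(question))

    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder; this could also be a fine-tuned, domain-specialized model
        messages=[{"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"}],
    )
    print(resp.choices[0].message.content)
    ```

    Swapping the placeholder chat model for a fine-tuned one is what combines the two approaches the post describes: the retrieval step supplies fresh, case-specific context while the tuned model supplies the specialized expertise.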

  • View profile for Andreas Sjostrom
    Andreas Sjostrom is an Influencer

    LinkedIn Top Voice | AI Agents | Robotics | Vice President at Capgemini's Applied Innovation Exchange | Author | Speaker | San Francisco | Palo Alto

    12,980 followers

    Some of the best AI breakthroughs we’ve seen came from small, focused teams working hands-on, with structured inputs and the right prompting. Here’s how we help clients unlock AI value in days, not months: 1. Start with a small, cross-functional team (4–8 people) 1–2 subject matter experts (e.g., supply chain, claims, marketing ops) 1–2 technical leads (e.g., SWE, data scientist, architect) 1 facilitator to guide, capture, and translate ideas Optional: an AI strategist or business sponsor 2. Context before prompting - Capture SME and tech lead deep dives (recorded and transcribed) - Pull in recent internal reports, KPIs, dashboards, and documentation - Enrich with external context using Deep Research tools: Use OpenAI’s Deep Research (ChatGPT Pro) to scan for relevant AI use cases, competitor moves, innovation trends, and regulatory updates. Summarize into structured bullets that can prime your AI. This is context engineering: assembling high-signal input before prompting. 3. Prompt strategically, not just creatively Prompts that work well in this format: - “Based on this context [paste or refer to doc], generate 100 AI use cases tailored to [company/industry/problem].” - “Score each idea by ROI, implementation time, required team size, and impact breadth.” - “Cluster the ideas into strategic themes (e.g., cost savings, customer experience, risk reduction).” - “Give a 5-step execution plan for the top 5. What’s missing from these plans?” - “Now 10x the ambition: what would a moonshot version of each idea look like?” Bonus tip: Prompt like a strategist (not just a user) Start with a scrappy idea, then ask AI to structure it: - “Rewrite the following as a detailed, high-quality prompt with role, inputs, structure, and output format... I want ideas to improve our supplier onboarding process with AI. Prioritize fast wins.” AI returns something like: “You are an enterprise AI strategist. Based on our internal context [insert], generate 50 AI-driven improvements for supplier onboarding. Prioritize for speed to deploy, measurable ROI, and ease of integration. Present as a ranked table with 3-line summaries, scoring by [criteria].” Now tune that prompt; add industry nuances, internal systems, customer data, or constraints. 4. Real examples we’ve seen work: - Logistics: AI predicts port congestion and auto-adjusts shipping routes - Retail: Forecasting model helps merchandisers optimize promo mix by store cluster 5. Use tools built for context-aware prompting - Use Custom GPTs or Claude’s file-upload capability - Store transcripts and research in Notion, Airtable, or similar - Build lightweight RAG pipelines (if technical support is available) - Small teams. Deep context. Structured prompting. Fast outcomes. This layered technique has been tested by some of the best in the field, including a few sharp voices worth following, including Allie K. Miller!
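    A minimal sketch of the two-stage "prompt like a strategist" pattern above: first ask the model to rewrite a scrappy idea into a structured prompt, then run the improved prompt against the assembled context. The `ask` helper, model name, and the `workshop_notes.md` context file are illustrative assumptions.

    ```python
    # Two-stage meta-prompting: have the model write a better prompt, then run it with context.
    # Assumes the OpenAI Python SDK; model name and context file are placeholders.
    from openai import OpenAI

    client = OpenAI()

    def ask(prompt: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    # Assembled context: SME interview notes, KPIs, and deep-research summaries (hypothetical file).
    context = open("workshop_notes.md").read()

    scrappy_idea = "I want ideas to improve our supplier onboarding process with AI. Prioritize fast wins."

    # Stage 1: turn the scrappy idea into a structured, high-quality prompt.
    improved_prompt = ask(
        "Rewrite the following as a detailed, high-quality prompt with role, inputs, "
        f"structure, and output format:\n{scrappy_idea}"
    )

    # Stage 2: run the improved prompt with the assembled context prepended.
    ideas = ask(f"Context:\n{context}\n\n{improved_prompt}")
    print(ideas)
    ```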

  • View profile for Varun Grover
    Varun Grover is an Influencer

    Product Marketing Leader at Rubrik | AI & SaaS | LinkedIn Top Voice | Creator🎙️

    9,155 followers

    ⭐️ Generative AI Fundamentals 🌟 In the Generative AI development process, understanding the distinctions between pre-training, fine-tuning, and RAG (Retrieval-Augmented Generation) is crucial for efficient resource allocation and achieving targeted results. Here’s a comparative analysis for a practical perspective: Pre-training:📚 • Purpose: To create a versatile base model with a broad grasp of language. • Resources & Cost: Resource-heavy, requiring thousands of GPUs and significant investment, often in millions. • Time & Data: Longest phase, utilizing extensive, diverse datasets. • Impact: Provides a robust foundation for various AI applications, essential for general language understanding. Fine-tuning:🎯 • Purpose: Customize the base model for specific tasks or domains. • Resources & Cost: More economical, utilizes fewer resources. • Time & Data: Quicker, focused on smaller, task-specific datasets. • Impact: Enhances model performance for particular applications, crucial for specialized tasks and efficiency in AI solutions. RAG:🔎 • Purpose: Augment the model’s responses with external, real-time data. • Resources & Cost: Depends on retrieval system complexity. • Time & Data: Varies based on integration and database size. • Impact: Offers enriched, contextually relevant responses, pivotal for tasks requiring up-to-date or specialized information. So what?💡 Understanding these distinctions helps in strategically deploying AI resources. While pre-training establishes a broad base, fine-tuning offers specificity. RAG introduces an additional layer of contextual relevance. The choice depends on your project’s goals: broad understanding, task-specific performance, or dynamic, data-enriched interaction. Effective AI development isn’t just about building models; it’s about choosing the right approach to meet your specific needs and constraints. Whether it’s cost efficiency, time-to-market, or the depth of knowledge integration, this understanding guides you to make informed decisions for impactful AI solutions. Save the snapshot below to have this comparative analysis at your fingertips for your next AI project.👇 #AI #machinelearning #llm #rag #genai

  • View profile for Mark Hinkle

    I publish a network of AI newsletters for business under The Artificially Intelligent Enterprise Network and I run a B2B AI Consultancy Peripety Labs. I love dogs and Brazilian Jiu Jitsu.

    13,122 followers

    𝗛𝗼𝘄 𝗳𝗿𝗲𝗾𝘂𝗲𝗻𝘁𝗹𝘆 𝗱𝗼 𝘆𝗼𝘂 𝗴𝗲𝘁 𝘁𝗵𝗲 𝗿𝗶𝗴𝗵𝘁 𝗼𝘂𝘁𝗽𝘂𝘁 𝗳𝗿𝗼𝗺 𝗖𝗵𝗮𝘁𝗚𝗣𝗧 𝗼𝗻 𝘁𝗵𝗲 𝗳𝗶𝗿𝘀𝘁 𝘁𝗿𝘆? 𝗡𝗼𝘁 𝗲𝗻𝗼𝘂𝗴𝗵? Give these tips a try, and check out the attached anatomy of a prompt below.

    Be Specific: Precision in prompts leads to targeted and useful AI responses. It’s about asking the right questions to get the right answers.
    Set Constraints: Constraints guide the AI in generating focused and relevant outputs. Think of them as guardrails that keep the AI on track.
    Provide Context: Context is king. It helps AI understand the 'why' behind a prompt, leading to more meaningful and insightful responses.
    Seek Creativity: Don't shy away from asking for imaginative or out-of-the-box ideas. AI can surprise us with its creative capabilities.
    Use Clear Language: Clarity is critical. Clear prompts result in clear responses. Avoid ambiguities to ensure that AI understands your exact needs.
    Include Criteria for Success: Define what success looks like for your prompt. This helps in evaluating the AI's response and in iterative improvements.
    Ask for Reasoning: Encourage AI to not just provide answers, but also the rationale behind them. This deepens understanding and trust in AI outputs.
    Iterate and Refine: AI prompting is an iterative process. Refine your prompts based on responses to achieve the best outcomes.

    𝗭𝗲𝗿𝗼, 𝗦𝗶𝗻𝗴𝗹𝗲, 𝗮𝗻𝗱 𝗙𝗲𝘄 𝗦𝗵𝗼𝘁 𝗣𝗿𝗼𝗺𝗽𝘁𝗶𝗻𝗴
    In addition to the core strategies for crafting AI prompts, understanding the nuances of zero-shot, single-shot, and few-shot prompting can improve your ChatGPT results.

    𝗦𝗶𝗻𝗴𝗹𝗲-𝗦𝗵𝗼𝘁 𝗣𝗿𝗼𝗺𝗽𝘁𝘀: 𝗧𝗵𝗲 𝗔𝗿𝘁 𝗼𝗳 𝗣𝗿𝗲𝗰𝗶𝘀𝗶𝗼𝗻
    Typically we don't provide any examples in our prompts; this is called zero-shot prompting. Single-shot prompting, by contrast, involves providing the AI with one example. This is ideal for straightforward tasks or when you need a quick, creative solution without much context. The key here is specificity and clarity. Since you're only giving one shot or example, it's typically good for showing the format of the output you are looking for.

    𝗙𝗲𝘄-𝗦𝗵𝗼𝘁 𝗣𝗿𝗼𝗺𝗽𝘁𝘀: 𝗕𝘂𝗶𝗹𝗱𝗶𝗻𝗴 𝗖𝗼𝗻𝘁𝗲𝘅𝘁 𝘄𝗶𝘁𝗵 𝗘𝘅𝗮𝗺𝗽𝗹𝗲𝘀
    Few-shot prompting means providing the AI with a small number of examples to guide its output. It is perfect for tasks where you want the AI to follow a certain style or format, or when more complex understanding is required. Choose your examples wisely: they should be representative of the task at hand and demonstrate the variety you expect in responses.

    Mastering these skills can significantly enhance the results from ChatGPT and other chatbots that use LLMs. What tips do you have for crafting effective AI prompts?
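    To make the zero-, single-, and few-shot distinction concrete, here is a minimal sketch that runs the same sentiment-labeling task with no examples, one example, and a handful of examples. The task, examples, and model name are illustrative assumptions.

    ```python
    # Zero-shot vs. single-shot vs. few-shot prompting for the same classification task.
    # Assumes the OpenAI Python SDK; model name and examples are placeholders.
    from openai import OpenAI

    client = OpenAI()

    def ask(prompt: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    review = "The battery lasts two days but the screen scratches easily."

    # Zero-shot: no examples, just the instruction.
    zero_shot = f"Label the sentiment of this review as positive, negative, or mixed:\n{review}"

    # Single-shot: one example, mainly to pin down the output format.
    single_shot = (
        "Review: 'Fast shipping and great build quality.' -> positive\n"
        f"Review: '{review}' ->"
    )

    # Few-shot: a handful of examples covering the variety of expected answers.
    few_shot = (
        "Review: 'Fast shipping and great build quality.' -> positive\n"
        "Review: 'Stopped working after a week.' -> negative\n"
        "Review: 'Decent sound, mediocre battery life.' -> mixed\n"
        f"Review: '{review}' ->"
    )

    for prompt in (zero_shot, single_shot, few_shot):
        print(ask(prompt))
    ```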

  • TL;DR: Going from GenAI PoC to production is not easy at all, but it remains a big focus for enterprises given the value GenAI can offer when done right. Will share #GenAIPOCtoProd stories as I see them; much to learn and practice. Two consultants, one researcher, two AI leaders (one at Amazon, one at a startup), and an educator came together to write this. https://lnkd.in/e-XwGsHA Their work is organized into three pieces: tactical, operational, and strategic. Below is the outline for the first of the three pieces. It dives into the tactical nuts and bolts of working with LLMs. They share their best practices and common pitfalls around prompting, setting up retrieval-augmented generation, applying flow engineering, and evaluation and monitoring.

    -- Prompting
      -- Focus on getting the most out of fundamental prompting techniques
      -- Structure your inputs and outputs
      -- Have small prompts that do one thing, and only one thing, well
      -- Craft your context tokens
    -- RAG
      -- The quality of your RAG’s output is dependent on the quality of retrieved documents, which in turn can be considered along a few factors.
      -- Look at multiple ranking metrics for retrieval
      -- Don’t forget keyword search; use it as a baseline and in hybrid search
      -- Prefer RAG over fine-tuning for new knowledge
      -- Long-context models won’t make RAG obsolete
    -- Tuning and optimizing workflows
      -- Step-by-step, multi-turn “flows” can give large boosts.
      -- Prioritize deterministic workflows for now
      -- Getting more diverse outputs beyond temperature
      -- Caching is underrated.
      -- When to fine-tune
    -- Evaluation & Monitoring
      -- Create a few assertion-based unit tests from real input/output samples (see the sketch below)
      -- LLM-as-Judge can work (somewhat), but it’s not a silver bullet
      -- The “intern test” for evaluating generations
      -- Overemphasizing certain evals can hurt overall performance
      -- Simplify annotation to binary tasks or pairwise comparisons
      -- (Reference-free) evals and guardrails can be used interchangeably
      -- LLMs will return output even when they shouldn’t
      -- Hallucinations are a stubborn problem.

    Highly recommended read and look forward to more.
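    As a concrete example of the assertion-based unit tests recommended under Evaluation & Monitoring, here is a minimal pytest-style sketch built from one real-looking input sample. The summarization task, model name, and thresholds are illustrative assumptions, not from the linked write-up.

    ```python
    # Assertion-based unit test for LLM output, built from a realistic input sample.
    # Assumes the OpenAI Python SDK; model name, task, and assertions are placeholders.
    from openai import OpenAI

    client = OpenAI()

    def summarize(ticket: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[{"role": "user",
                       "content": f"Summarize this support ticket in one sentence:\n{ticket}"}],
        )
        return resp.choices[0].message.content

    def test_summary_keeps_key_facts():
        # Assert on properties every acceptable output must have, not on exact wording.
        ticket = "Customer reports order #4521 arrived damaged and requests a refund."
        summary = summarize(ticket)
        assert "4521" in summary              # the order number must be preserved
        assert "refund" in summary.lower()    # the requested action must be captured
        assert len(summary.split()) <= 40     # the summary must stay concise
    ```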

  • View profile for Kristin Tynski

    Co-Founder at Fractl - Marketing automation AI scripts, content marketing & PR case studies - 15 years and 5,000+ press-earning content marketing campaigns for startups, fortune 500s and SMBs.

    13,938 followers

    🚀 My favorite prompting trick that you probably haven't seen: Simulating Agentic Behavior With a Prompt 🤖 After spending now likely thousands of hours prompting #LLMs, one thing I've found that can vastly improve the quality of outputs is something I haven't seen talked about much. ✨ "Instantiate two agents competing to find the real answer to the given problem and poke holes in the other agent's answers until they agree, which they are loathe to do." ✨ This works especially well with #CLAUDE3 and #Opus. For a more advanced version that often works even better: ✨"Instantiate two agents competing to find the real answer and poke holes in the other's answer until they agree, which they are loathe to do. Each agent has unique skills and perspective and thinks about the problem from different vantage points. Agent 1: Top-down agent Agent 2: Bottom-up agent Both agents: Excellent at the ability to think counter factually, think step by step, think from first principles, think laterally, think about second order implications, are highly skilled at simulating in their mental model and thinking critically before answering, having looked at the problem from many directions." ✨ This often solves the following issues you will encounter with LLMs: 1️⃣ Models often will pick the most likely answer without giving it proper thought, and will not go back to reconsider. With these kinds of prompts, the second agent forces this, and the result is a better-considered answer. 2️⃣ Continuing down the wrong path. There's an inertia to an answer, and the models can often get stuck, biased toward a particular kind of wrong answer or previous mistake. This agentic prompting improves this issue significantly. 3️⃣ Overall creativity of output and solution suggestions. Having multiple agents considering solutions results in the model considering solutions that might otherwise be difficult to elicit from the model. If you haven't tried something like this and have a particularly tough problem, try it out and let me know if it helps!
