Artificial Intelligence

Explore top LinkedIn content from expert professionals.

  • View profile for Andrew Ng
Andrew Ng is an Influencer

    Founder of DeepLearning.AI; Managing General Partner of AI Fund; Exec Chairman of LandingAI

    2,193,904 followers

When we get to AGI, it will have come slowly, not overnight. A NeurIPS Outstanding Paper award recipient, “Are Emergent Abilities of Large Language Models a Mirage?” (by Rylan Schaeffer, Brando Miranda, and Sanmi Koyejo) studies emergent properties of LLMs and concludes: "... emergent abilities appear due to the researcher’s choice of metric rather than due to fundamental changes in model behavior with scale. Specifically, nonlinear or discontinuous metrics produce apparent emergent abilities, whereas linear or continuous metrics produce smooth, continuous, predictable changes in model performance." Public perception goes through discontinuities when lots of people suddenly become aware of a technology -- maybe one that's been developing for a long time -- leading to a surprise. But growth in AI capabilities is more continuous than one might think. That's why I expect the path to AGI to be one involving numerous steps forward, leading to step-by-step improvements in how intelligent our systems are.
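A minimal sketch (my illustration, not from the paper) of the paper's core argument: if per-token accuracy improves smoothly with scale, a nonlinear metric such as exact match over a k-token answer still looks like a sudden jump.

```python
# Sketch (illustrative numbers, not from the paper): a nonlinear metric
# can make smooth improvement look "emergent". Assume per-token accuracy
# grows smoothly with scale; exact match on a k-token answer is p**k.
per_token_acc = [0.50, 0.60, 0.70, 0.80, 0.90, 0.95]  # smooth growth
k = 10  # answer length in tokens

exact_match = [p ** k for p in per_token_acc]
for p, em in zip(per_token_acc, exact_match):
    print(f"per-token {p:.2f} -> exact-match {em:.4f}")
# exact match sits near zero until p is large, then shoots up --
# an apparent discontinuity produced entirely by the choice of metric.
```

The underlying quantity improves monotonically at every step; only the thresholded metric makes the change look abrupt.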

  • View profile for Jeff Winter
Jeff Winter is an Influencer

    Industry 4.0 & Digital Transformation Enthusiast | Business Strategist | Avid Storyteller | Tech Geek | Public Speaker

    162,491 followers

Gone are the days when the only way to know something was wrong with your machinery was the ominous clunking sound it made, or the smoke signals it sent up as a distress signal. In the traditional world of maintenance, these were the equivalent of a machine's cry for help, often leading to a mad dash of troubleshooting and repair, usually at the most inconvenient times. Today, we're witnessing a seismic shift in how maintenance is approached, thanks to the advent of Industry 4.0 technologies. This new era is characterized by a move from the reactive "𝐈𝐟 𝐢𝐭 𝐚𝐢𝐧'𝐭 𝐛𝐫𝐨𝐤𝐞, 𝐝𝐨𝐧'𝐭 𝐟𝐢𝐱 𝐢𝐭" philosophy to a proactive "𝐋𝐞𝐭'𝐬 𝐟𝐢𝐱 𝐢𝐭 𝐛𝐞𝐟𝐨𝐫𝐞 𝐢𝐭 𝐛𝐫𝐞𝐚𝐤𝐬" mindset. This transformation is powered by a suite of digital tools that are changing the game for industries worldwide.

𝐓𝐡𝐫𝐞𝐞 𝐍𝐮𝐠𝐠𝐞𝐭𝐬 𝐨𝐟 𝐖𝐢𝐬𝐝𝐨𝐦 𝐟𝐨𝐫 𝐄𝐦𝐛𝐫𝐚𝐜𝐢𝐧𝐠 𝐃𝐢𝐠𝐢𝐭𝐚𝐥 𝐌𝐚𝐢𝐧𝐭𝐞𝐧𝐚𝐧𝐜𝐞:

𝟏. 𝐌𝐚𝐤𝐞 𝐅𝐫𝐢𝐞𝐧𝐝𝐬 𝐰𝐢𝐭𝐡 𝐈𝐨𝐓
By outfitting your equipment with IoT sensors, you're essentially giving your machines a voice. These sensors can monitor everything from temperature fluctuations to vibration levels, providing a continuous stream of data that can be analyzed to predict potential issues before they escalate into major problems. It's like social networking for machines, where every post and status update helps you keep your operations running smoothly.

𝟐. 𝐓𝐫𝐮𝐬𝐭 𝐢𝐧 𝐭𝐡𝐞 𝐂𝐫𝐲𝐬𝐭𝐚𝐥 𝐁𝐚𝐥𝐥 𝐨𝐟 𝐀𝐈
By feeding the data collected from IoT sensors into AI algorithms, you can uncover patterns and predict failures before they happen. AI acts as the wise sage that reads tea leaves in the form of data points, offering insights that can guide your maintenance decisions. It's like having a fortune teller on your payroll, but instead of predicting vague life events, it provides specific insights on when to service your equipment.

𝟑. 𝐒𝐭𝐞𝐩 𝐢𝐧𝐭𝐨 𝐭𝐡𝐞 𝐅𝐮𝐭𝐮𝐫𝐞 𝐰𝐢𝐭𝐡 𝐌𝐢𝐱𝐞𝐝 𝐑𝐞𝐚𝐥𝐢𝐭𝐲
Using devices like the Microsoft HoloLens, technicians can see overlays of digital information on the physical machinery they're working on. This can include everything from step-by-step repair instructions to real-time data visualizations. It's like giving your maintenance team superhero goggles that provide them with x-ray vision and super intelligence, making them more efficient and reducing the risk of errors.

• Follow #JeffWinterInsights to stay current on Industry 4.0 and other cool tech trends
• Ring the 🔔 for notifications!
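The IoT-to-prediction loop in nuggets 1 and 2 can be sketched very simply: stream sensor readings and flag a machine for service when the latest reading drifts far from its recent baseline. The data and thresholds below are hypothetical, and real predictive-maintenance systems use far richer models than this z-score check.

```python
# Minimal sketch (hypothetical data, not a real product): flag a machine
# for service when its latest vibration reading is a statistical outlier
# relative to a rolling baseline window.
from statistics import mean, stdev

def needs_service(readings, window=20, z_threshold=3.0):
    """Return True if the latest reading deviates more than z_threshold
    standard deviations from the mean of the preceding window."""
    if len(readings) <= window:
        return False  # not enough history for a baseline
    baseline = readings[-window - 1:-1]
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return False
    z = abs(readings[-1] - mu) / sigma
    return z > z_threshold

# Healthy vibration around 1.0 mm/s, then a sudden spike.
stream = [1.0 + 0.01 * (i % 5) for i in range(30)] + [2.5]
print(needs_service(stream))  # the spike is flagged before a breakdown
```

Feeding the same stream minus the spike returns False, which is the point: the sensors give the machine "a voice", and the analysis decides when that voice is a cry for help.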

  • View profile for Damien Benveniste, PhD
Damien Benveniste, PhD is an Influencer

    Founder @ TheAiEdge | Follow me to learn about Machine Learning Engineering, Machine Learning System Design, MLOps, and the latest techniques and news about the field.

    172,103 followers

We have seen recently a surge in vector databases in this era of generative AI. The idea behind vector databases is to index the data with vectors that relate to that data. Hierarchical Navigable Small World (HNSW) is one of the most efficient ways to build indexes for vector databases. The idea is to build a similarity graph and traverse that graph to find the nodes that are closest to a query vector.

Navigable Small World (NSW) is a process for building efficient graphs for search. We build a graph by adding vectors one after another and connecting each new node to its most similar neighbors. When building the graph, we need to decide on a similarity metric, so that the search is optimized for the specific metric used to query items. Initially, when adding nodes, the density is low and the edges tend to connect nodes that are far apart in similarity. Little by little, the density increases and the edges become shorter and shorter. As a consequence, the graph is composed of long edges that allow us to traverse long distances in the graph, and short edges that capture closer neighbors. Because of this, we can quickly traverse the graph from one side to the other and look for nodes at a specific location in the vector space.

When we want to find the nearest neighbor to a query vector, we initiate the search by starting at one node (e.g., node A in the example). Among its neighbors (D, G, C), we look for the node closest to the query (D). We iterate over that process until there is no closer neighbor to the query. Once we cannot move any further, we have found a close neighbor to the query. The search is approximate, and the node found may not be the closest, as the algorithm may get stuck in a local minimum.

The problem with NSW is that we spend many iterations traversing the graph to arrive at the right node. The idea behind Hierarchical Navigable Small World is to build multiple graph layers, where each layer is less dense than the next. Each layer represents the same vector space, but not all vectors are added to each layer's graph. Basically, we include a node in the graph at layer L with a probability P(L). We include all the nodes in the final layer (if we have N layers, P(N) = 1), and the probability gets smaller as we move toward the first layers: a node is more likely to appear in the following layer, so P(L) < P(L + 1). The first layer allows us to traverse longer distances at each iteration, whereas in the last layer each iteration tends to cover shorter distances. When we search for a node, we start in layer 1 and move to the next layer once the NSW algorithm finds the closest neighbor in the current layer. This allows us to find the approximate nearest neighbor in fewer iterations on average.

----
Find more similar content in my newsletter: TheAiEdge.io
Next ML engineering Masterclass starting July 29th: MasterClass.TheAiEdge.io
#machinelearning #datascience #artificialintelligence
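The greedy NSW walk described above (start at a node, hop to whichever neighbor is closest to the query, stop at a local minimum) can be sketched in a few lines. The toy 2-D vectors and hand-built neighbor graph below are my own assumptions; production libraries such as hnswlib handle graph construction and the hierarchical layering for you.

```python
# Toy sketch of greedy NSW search over a hand-built similarity graph.
import math

def dist(a, b):
    return math.dist(a, b)  # similarity metric: Euclidean distance here

def greedy_search(graph, vectors, query, start):
    """Walk the graph toward the query; stop at a local minimum, which
    is the approximate (not necessarily exact) nearest neighbor."""
    current = start
    while True:
        # among current's neighbors, find the one closest to the query
        best = min(graph[current], key=lambda n: dist(vectors[n], query))
        if dist(vectors[best], query) >= dist(vectors[current], query):
            return current  # no neighbor is closer: stop here
        current = best

vectors = {"A": (0, 0), "B": (5, 5), "C": (1, 3), "D": (4, 1), "G": (2, 2)}
graph = {"A": ["C", "D", "G"], "B": ["D", "G"], "C": ["A", "G"],
         "D": ["A", "B"], "G": ["A", "B", "C"]}
print(greedy_search(graph, vectors, query=(5, 4), start="A"))  # -> B
```

HNSW runs this same greedy step per layer: it finds the local minimum in the sparsest layer (long hops), then descends and repeats in denser layers (short hops) until the final layer.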

  • View profile for Natalie Glance

    Chief Engineering Officer at Duolingo

    24,575 followers

At Duolingo, we dedicate two hours company-wide every Friday to learning how to use AI to change how we work. What I’ve done with some of that time is shadow software engineers to observe their use of AI coding tools. It’s been very eye-opening. Here are some of the things I’ve learned so far.

> One experienced software engineer has challenged himself to not write any code himself at all. “Writing code is 90% solved. LLMs are very, very good at this stuff. What I can do as an engineer is focus on high-level architecture and use my intuition to guide things.” He described AI coding like working with a "super genius intern." He provides occasional hints while leveraging his expertise in high-level architecture, his intuition, and his knowledge of the codebase.

> An intern noted that interns and new grads should prioritize learning foundational skills first. Relying heavily on AI for implementation hinders deeper understanding of intricate system interactions. She uses AI more for explanations than for direct implementation, to enhance her learning process.

> Critical thinking is very important. “Vibe coding” is amazing for unlocking prototyping and tool building for non-software engineers. Software engineers still need to apply their skills to guide AI tools.

> There’s no single front-runner for AI coding tools. Engineers who are successful in using AI have figured out which tools and which models are good for which task, whether it’s debugging a stack trace, fixing a bug, building a new feature, refactoring, migrating code, understanding a repo, etc.

> Tech specs are more important than ever. In fact, good prompting looks a lot like a tech spec.

While use of AI coding tools like Cursor and Claude Code has taken off, it’s clear that we’re still very much in the learning phase. For all the noteworthy AI wins, there are also the AI failures that people are less likely to talk about: going down a rabbit hole trying to solve a problem with AI assistance and then having to restart from scratch the old way. We’re not yet at the stage of seeing meaningful productivity gains that translate into faster product iterations. And that’s okay. It takes time to learn new ways to work, especially when the tools themselves are changing so quickly. #engineering

  • View profile for Allie K. Miller
Allie K. Miller is an Influencer

    #1 Most Followed Voice in AI Business (2M) | Former Amazon, IBM | Fortune 500 AI and Startup Advisor, Public Speaker | @alliekmiller on Instagram, X, TikTok | AI-First Course with 100K+ students - Link in Bio

    1,572,275 followers

AI agents can autonomously hack real-world, unknown vulnerabilities 🔒 Researchers from UIUC developed HPTSA, a system of AI agents that includes a planning agent and specialized agents for different exploits (XSS, SQLi, etc.). On a benchmark of 15 real-world vulnerabilities, HPTSA achieved a 53% success rate in 5 attempts, outperforming a single GPT-4 agent by 2.7× and open-source scanners by ∞ (OSS got 0%) 📈 With exploit costs of ~$24 and falling, AI hacking may soon be cheaper than human pen-testing 💰 Get details in the paper: https://lnkd.in/eCE-HwkS

  • View profile for Brij kishore Pandey
Brij kishore Pandey is an Influencer

    AI Architect | Strategist | LLM | Generative AI | Agentic AI

    673,948 followers

𝗠𝗼𝘀𝘁 𝗰𝗼𝗺𝗽𝗮𝗻𝗶𝗲𝘀 𝗯𝗲𝗹𝗶𝗲𝘃𝗲 𝘁𝗵𝗮𝘁 𝗔𝗜 𝗶𝘀 𝗮 𝘀𝘁𝗿𝗮𝗶𝗴𝗵𝘁 𝗽𝗮𝘁𝗵 𝗳𝗿𝗼𝗺 𝗱𝗮𝘁𝗮 𝘁𝗼 𝘃𝗮𝗹𝘂𝗲. The assumption: 𝗗𝗮𝘁𝗮 → 𝗔𝗜 → 𝗩𝗮𝗹𝘂𝗲. But in real-world enterprise settings, the process is significantly more complex, requiring multiple layers of engineering, science, and governance. Here’s what it actually takes:

𝗗𝗮𝘁𝗮
• Begins with selection, sourcing, and synthesis. The quality, consistency, and context of the data directly impact the model’s performance.

𝗗𝗮𝘁𝗮 𝗦𝗰𝗶𝗲𝗻𝗰𝗲
• 𝗗𝗮𝘁𝗮 𝗘𝗻𝗴𝗶𝗻𝗲𝗲𝗿𝗶𝗻𝗴: Exploration, cleaning, normalization, and feature engineering are critical before modeling begins. These steps form the foundation of every AI workflow.
• 𝗠𝗼𝗱𝗲𝗹𝗶𝗻𝗴: This includes model selection, training, evaluation, and tuning. Without rigorous evaluation, even the best algorithms will fail to generalize.

𝗢𝗽𝗲𝗿𝗮𝘁𝗶𝗼𝗻𝗮𝗹𝗶𝘇𝗮𝘁𝗶𝗼𝗻
• Getting models into production requires deployment, monitoring, and retraining. This is where many teams struggle: moving from prototype to production-grade systems that scale.

𝗖𝗼𝗻𝘀𝘁𝗿𝗮𝗶𝗻𝘁𝘀
• Legal regulations, ethical transparency, historical bias, and security concerns aren’t optional. They shape architecture, workflows, and responsibilities from the ground up.

𝗔𝗜 𝗶𝘀 𝗻𝗼𝘁 𝗺𝗮𝗴𝗶𝗰. 𝗜𝘁’𝘀 𝗮𝗻 𝗲𝗻𝗴𝗶𝗻𝗲𝗲𝗿𝗶𝗻𝗴 𝗱𝗶𝘀𝗰𝗶𝗽𝗹𝗶𝗻𝗲 𝘄𝗶𝘁𝗵 𝘀𝗰𝗶𝗲𝗻𝘁𝗶𝗳𝗶𝗰 𝗿𝗶𝗴𝗼𝗿 𝗮𝗻𝗱 𝗼𝗽𝗲𝗿𝗮𝘁𝗶𝗼𝗻𝗮𝗹 𝗺𝗮𝘁𝘂𝗿𝗶𝘁𝘆. Understanding this distinction is the first step toward building AI systems that are responsible, sustainable, and capable of delivering long-term value.

  • View profile for Olivia Moore

    AI Partner at Andreessen Horowitz

    25,293 followers

Generative AI has spawned thousands of new products. But outside of ChatGPT, what are everyday consumers using? What's growing, and what has flattened? I crunched the numbers to find the top 50 consumer AI web products by monthly global visits. Here are my learnings:

1. Most leading products are built from the “ground up” around generative AI - of the 50 on the list, 80% are brand new as of the past year. Only five are owned by big tech companies (ex. Google, Microsoft), and of the remaining 45, nearly half are bootstrapped!

2. ChatGPT has a massive lead, for now…representing 60% of traffic to the entire list! Character.AI comes in at #2, with ~21% of ChatGPT's traffic. Compared to mainstream consumer products, even the top AI products are fairly small - ChatGPT ranks around the same traffic scale as Reddit, LinkedIn, and Twitch, but far behind Facebook and Instagram.

3. General assistants (ex. ChatGPT, Bard, Poe) represent almost 70% of traffic, but companionship (ex. Character.AI) and content generation (ex. Midjourney, ElevenLabs) are surging! Model hubs are also a category to watch, with only two companies on the list (Civitai, Hugging Face) but both in the top 10.

4. While some early winners have emerged, most categories are still up for grabs, with a <2x gap in traffic between the #1 and #2 players. Use case or workflow-specific platforms are also emerging alongside more horizontal players - ex. Leonardo Ai has taken off in image generation for game assets, while Midjourney continues growing as the leading generalist platform.

5. Acquisition for top products is almost all organic, with the median gen AI company on the list seeing 99% free acquisition! This compares to 52% for the median consumer subscription company before AI. Consumers are also showing significant willingness to pay for genAI, with 90% of products monetizing, and at a ~2x higher ARPU than non-AI consumer subscription comparables.

6. Mobile is still emerging as a platform for AI products - only 15 companies on the list have an app, and just three (PhotoRoom, Speechify, Character.AI) saw >10% of traffic from their app versus website. Given consumers now spend 36 more minutes per day on mobile than desktop, we're excited to see more app-first AI products emerge soon.

For the full post and more stats, check out: https://lnkd.in/gR6Paycc #ai #genai #startups

  • View profile for Chip Huyen
Chip Huyen is an Influencer

    Building something new | AI x storytelling x education

    287,717 followers

New blog post: Multimodality and Large Multimodal Models (LMMs)
Link: https://lnkd.in/gJAsQjMc

Being able to work with data of different modalities -- e.g. text, images, videos, audio, etc. -- is essential for AI to operate in the real world. Many use cases are impossible without multimodality, especially in industries that deal with multimodal data, such as healthcare, robotics, e-commerce, retail, gaming, etc. Not only that, data from new modalities can help boost model performance. Shouldn’t a model that can learn from both text and images perform better than a model that can learn from only text or only images? OpenAI noted in their GPT-4V system card that “incorporating additional modalities (such as image inputs) into LLMs is viewed by some as a key frontier in AI research and development.”

This post covers multimodal systems, including LMMs (Large Multimodal Models). It consists of 3 parts.
* Part 1 covers the context for multimodality, including use cases, different data modalities, and types of multimodal tasks.
* Part 2 discusses how to train a multimodal system, using the examples of CLIP, which lays the foundation for many LMMs, and Flamingo, whose impressive performance gave rise to LMMs.
* Part 3 discusses some active research areas for LMMs, including generating multimodal outputs and adapters for more efficient multimodal training.

Even though we’re still in the early days of multimodal systems, there’s already so much work in the space. At the end of the post, I also compiled a list of models and resources for those interested in learning more about multimodality. As always, feedback is appreciated! #llm #lmm #multimodal #genai #largemultimodalmodel
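Part 2's first example, CLIP, trains with a symmetric contrastive objective: within a batch, each image embedding should match its paired text embedding and no other. The sketch below is the standard form of that loss from the CLIP paper, written in NumPy for clarity; it is my illustration, not code from the blog post, and real training would use a deep-learning framework with gradients.

```python
# Sketch of CLIP's symmetric contrastive objective: cross-entropy over
# the image-text similarity matrix, where the diagonal holds the
# matching (positive) pairs. Illustrative only, not from the post.
import numpy as np

def clip_loss(img_emb, txt_emb, temperature=0.07):
    """Average of image->text and text->image cross-entropy losses."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature      # (batch, batch) cosine sims
    labels = np.arange(len(logits))         # diagonal = matching pairs
    def xent(l):                            # row-wise softmax cross-entropy
        p = np.exp(l - l.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        return -np.log(p[labels, labels]).mean()
    return (xent(logits) + xent(logits.T)) / 2

rng = np.random.default_rng(0)
img = rng.normal(size=(4, 8))               # 4 pairs of 8-d embeddings
print(clip_loss(img, img))                  # perfectly aligned pairs: low loss
```

Minimizing this pulls each image toward its own caption and pushes it away from the other captions in the batch, which is what makes CLIP's joint embedding space useful as a foundation for LMMs.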

  • View profile for Pascal BORNET

    Award-winning AI & Automation Expert, 20+ years | Agentic AI Pioneer | Keynote Speaker, Influencer & Best-Selling Author | Forbes Tech Council | 2 Million+ followers | Thrive in the age of AI and become IRREPLACEABLE ✔️

    1,475,810 followers

🧠 If AI agents can do 80% of your job... what exactly is your job title now? That question stayed with me. Because this isn’t just about automation anymore. It’s about identity. Over the last 20 years, I’ve helped companies unlock value with AI. But this moment feels different. AI agents aren’t just helping us work faster; they’re starting to own the work:
→ Drafting strategies
→ Leading meetings
→ Making financial decisions
→ Even hiring contractors and reallocating budgets

And they’re learning, fast. Every prompt. Every project. Every outcome. I’m no stranger to transformation. But this shift is so fundamental, it’s rewriting job descriptions before we even have time to update LinkedIn.

📊 What’s happening now:
→ 80% of knowledge workers already use AI to complete tasks
→ AI agents now execute end-to-end workflows with limited oversight
→ Companies report up to 500% productivity gains
→ Entry-level roles in consulting, finance, and project management are vanishing
→ Titles like Junior Analyst or PMO Coordinator may not survive 2026

In IRREPLACEABLE, we describe this as the human shift. But how we navigate it matters.

📚 And now, we have data to back it up. A groundbreaking new study from Stanford University introduces the WORKBank, surveying:
→ 1,500 workers
→ 104 occupations
→ 844 tasks
→ Alongside 52 AI experts

Here’s what it found:
✅ 46% of workers want AI to take over repetitive, low-value tasks
🟥 But many don’t want AI in areas requiring judgment or human interaction
🟨 Critical mismatches exist between what workers want and what AI can do
🧭 A new Human Agency Scale (HAS) helps define how much control humans want to retain over tasks
📈 The biggest shift? From information skills → interpersonal skills

This isn’t just a tech upgrade. It’s a realignment of the core competencies that define our value at work.
✅ To stay ahead, I’m doubling down on:
→ Human-AI collaboration fluency
→ Strategic thinking that AI can’t replicate
→ Ethical oversight and empathy
→ Becoming the bridge between human vision and agent execution

💥 So let me ask you: if an AI agent does 80% of your tasks… what’s your role now? Coach? Strategist? Orchestrator? Or something entirely new?

👇 Let’s debate. How are you preparing? #AI #FutureOfWork #AIagents #WorkplaceTransformation #JobTitles #Automation #IRREPLACEABLE #Stanford #WORKBank #HumanAgency #AIleadership

  • View profile for Tom Goodwin
Tom Goodwin is an Influencer
    740,165 followers

A year into trying to use Generative AI / LLMs wherever I can, and I'm failing to see ANY value.

I could of course make some lovely images with AI, but despite being a Keynote speaker, I don't need images mocked up. I need real images found or taken. I don't see any benefit in adding a picture to this post either.

I could write some code better perhaps, but I don't ever need to do that.

I could make some adverts to promote my book, but I want them to be really good, not really fast & easy.

I can use it to draft emails for me, but how I talk to people is important. I need to remember what I said and what people told me, so I use voice to text (a far more revolutionary form of ML, IMHO).

I can use it to create first research drafts for me, but it does a worse job than any intern I've ever had. If I want to know why China dramatically increased the number of cars it made from 2007-2010, I'll get a woeful, generic, bland reply from Gen AI, but a good Google search will find me 10 pieces that, if I read them, make me oddly expert in about an hour. An hour I'll enjoy, and learn a great deal from. The 20 other pieces it bubbles up will remind me that there are further good questions about how Japan & Korea swept the world, about Korean Chaebol structure and what it means, and about the shifting dynamic in car making to either be more horizontal and thin or more vertical and deep.

If I want to find charts to bring to life the data I find, Gen AI won't help one bit; Google Images will be amazing.

And I don't see this changing any time soon. As hard as I try to dramatically change the way I work AROUND the power of AI, the reality is that my work is far too important for AI. AI will offer me no insights, no opinions, no depth, no understanding, no connected dots, no leaps of faith, no powerful ways to express ideas or compelling ways to sell or communicate.

I am not alone. I'll talk to Architects, to Foresight Practitioners, to Analytics people, to Engineers.
And by and large most people are curious, interested but frustrated. They find pockets where it helps, but often in spaces where we've had the same technology before, we just called it something else: modeling, big data, logic gates, interpolation, visual effects, etc.

So next time you think you're using it wrong, maybe you're just more knowledgeable and discerning about what good work is.

Maybe our greatest unrealized fear of AI should be that it won't change much, not that it will change too much.

Maybe, like all technology from 5G to 3D Printing to Blockchain to the Metaverse, we're seeing that people who know industries and people, and maintain positions of calmness, lose business to those who jump on every new thing.

Maybe we should get excited about databases that talk to each other, about containerization / modular software / better microservices. Maybe we should unleash the power of good visual design? Is getting the basics right too much to ask?

Am I alone in thinking this?