You will never anticipate all the uses, contexts, edge cases, or needs of your product. Let me share a quick example of why small deployments and evaluating AI outputs are so crucial. ⬇️ I ran computer vision product development at a big tech company. We had a customer that was using CCTV to measure traffic patterns. We had camera feeds from all over the city, counting cars, motorcycles, and bikes to estimate traffic and the impact of fees 💰 One camera was spewing out data that looked wrong. At the peak of traffic it was detecting no cars, no motos, no bikes. We could tell the camera was on, but sometimes nothing was getting picked up. So we asked them to look at the images… 👀 Well, it turned out this one camera was a hot spot for the pigeon community. Every so often, the camera was blocked by pigeons passing by, making the system detect nothing 😆 We had planned for day, night, rain, snow, fog, and outages, but not pigeons. Data is the fuel for AI, yes, but you need WISDOM to harness its full potential. Test small, review outputs, and integrate human intelligence. Don’t let your AI models get…pigeonholed 🐦
MLOps for AI Development
-
There are 3 ingredients that pretty much guarantee the failure of any Machine Learning project: having the Data Scientists train models in notebooks, having the data teams siloed, and having no DevOps for the ML applications! Interestingly enough, that is where most companies trying out ML get stuck. The level of investment in ML infrastructure at a company is directly proportional to the level of impact they expect ML to have on the business. And the level of impact is, in turn, proportional to the level of investment. It is a vicious circle! Both Microsoft and Google established standards for MLOps maturity that capture the degree of automation of ML practices, and there is a lot to learn from those: - Microsoft: https://lnkd.in/gtzDcNb9 - Google: https://lnkd.in/gA4bR77x Level 0 is the stage without any automation. Typically, the Data Scientists (or ML engineers, depending on the company) are completely disconnected from the other data teams. That is the guaranteed-failure stage! It is possible for companies to pass through that stage to explore some ML opportunities, but if they stay stuck there, ML is never going to contribute to the company's revenue. Level 1 is when there is a sense that ML applications are software applications. As a consequence, basic DevOps principles are applied to the software in production, but there is a failure to recognize the specificity of ML operations. In development, data pipelines are better established to streamline manual model development. At level 2, things get interesting! ML becomes significant enough for the business that we invest in reducing model development time and errors. Data teams work more closely together as model development is automated and experiments are tracked and reproducible (a minimal tracking sketch follows below). If ML becomes a large driver of revenue, level 3 is the minimum bar to strive for! That is where moving from development to deployment is a breeze. DevOps principles extend to ML pipelines, including testing the models and the data. Models are A/B tested in production, and monitoring is maturing. This allows fast model iteration and scaling for the ML engineering team. Level 4 is FAANG maturity level! A level that most companies shouldn't compare themselves to. Because of ads, Google owes ~70% of its revenue to ML, and Meta ~95%, so a high level of maturity is required. Teams work together, recurring training happens at least daily, and everything is fully monitored. For any company to succeed in ML, teams should work closely together and aim for a high level of automation, removing the human element as a source of error. #MachineLearning #DataScience #ArtificialIntelligence -- 👉 Register for the ML Fundamentals Bootcamp: https://lnkd.in/gasbhQSk --
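To make the level-2 idea concrete, here is a minimal sketch of what tracked, reproducible experiments can look like. It assumes MLflow as the tracking tool; the dataset, model, and parameter choices are illustrative, not a prescribed stack.

```python
# A minimal sketch of level-2-style experiment tracking: every run logs its
# parameters, metrics, and model artifact instead of living in a notebook.
# MLflow is one common choice; the setup below is illustrative.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

params = {"n_estimators": 100, "max_depth": 5}

with mlflow.start_run(run_name="rf-baseline"):
    mlflow.log_params(params)  # hyperparameters recorded for reproducibility
    model = RandomForestClassifier(**params, random_state=42).fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    mlflow.log_metric("test_accuracy", acc)  # metrics comparable across runs
    mlflow.sklearn.log_model(model, "model")  # artifact versioned with the run
```

Runs then show up side by side in the MLflow UI, so any teammate can reproduce or compare them instead of digging through someone's notebook.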
-
Next year, thousands of generative AI pilots will move into production. Despite everyone's good intentions and evolving AI technology, there are some very real hurdles for most organizations to put AI into production at scale, and AI governance is no longer optional. It is easier said than done. Here are 4 common AI Governance challenges I have found working with customers, and approaches to solve them: 𝗖𝗵𝗮𝗹𝗹𝗲𝗻𝗴𝗲 #𝟭: AI governance collaboration requires a lot of manual work, amplified by changes in data and model versions. Solution: Automate the governance activities as much as possible. 𝗖𝗵𝗮𝗹𝗹𝗲𝗻𝗴𝗲 #𝟮: Companies have models in multiple tools, applications, and platforms, developed inside and outside the organization. Solution: Consolidate as much as possible into one single governance platform. 𝗖𝗵𝗮𝗹𝗹𝗲𝗻𝗴𝗲 #𝟯: Governance is not a one-size-fits-all approach. Solution: Configure it to your specific situation. 𝗖𝗵𝗮𝗹𝗹𝗲𝗻𝗴𝗲 #𝟰: Governance can end up constraining technical teams in their choice of technology or frameworks. Solution: Provide an open architecture that wraps around the AI tooling of choice. As new Generative AI models bring benefits and risks, organizations need to take an enterprise-wide approach to governing all AI. With impending regulation, they must urgently address risks and govern both old and new AI, no matter who created it. The key: take a proactive approach and address AI governance before regulation requires it.
-
Product managers & designers working with AI face a unique challenge: designing a delightful product experience that cannot fully be predicted. Traditionally, product development followed a linear path. A PM defines the problem, a designer draws the solution, and the software teams code the product. The outcome was largely predictable, and the user experience was consistent. However, with AI, the rules have changed. Non-deterministic ML models introduce uncertainty & chaotic behavior. The same question asked four times produces different outputs. Asking the same question in different ways - even with just an extra space in the question - elicits different results. How does one design a product experience in the fog of AI? The answer lies in embracing the unpredictable nature of AI and adapting your design approach. Here are a few strategies to consider: 1. Fast feedback loops: Great machine learning products elicit user feedback passively. Just click on the first result of a Google search and come back to the second one. That’s a great signal for Google to know that the first result is not optimal - without the user typing a word. 2. Evaluation: before products launch, it’s critical to run the machine learning systems through a battery of tests to understand how the LLM will respond in the most likely use cases. 3. Over-measurement: It’s unclear what will matter in product experiences today, so measure as much as possible in the user experience: session times, conversation topic analysis, sentiment scores, and other signals. 4. Couple with deterministic systems: Some startups are using large language models to suggest ideas that are then evaluated by deterministic or classic machine learning systems. This design pattern can quash some of the chaotic and non-deterministic nature of LLMs (a minimal sketch follows this post). 5. Smaller models: smaller models that are tuned or optimized for specific use cases will produce narrower output, controlling the experience. The goal is not to eliminate unpredictability altogether but to design a product that can adapt and learn alongside its users. Just as the technology has changed products, our design processes must evolve as well.
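Here is a minimal sketch of strategy 4: the LLM proposes, and a deterministic gate validates before anything reaches the user. The `call_llm` function and the discount policy are hypothetical stand-ins, not any specific product's API.

```python
# Sketch: couple a non-deterministic LLM with a deterministic validator.
# call_llm is a hypothetical stand-in that simulates noisy model output.
import json
import random


def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; real code would hit a model API."""
    return random.choice(['{"discount": 12}', '{"discount": 55}', "not json"])


def validate_discount(raw: str) -> int | None:
    """Deterministic gate: accept only well-formed, in-policy suggestions."""
    try:
        value = json.loads(raw).get("discount")
    except json.JSONDecodeError:
        return None
    # Business rule, enforced in code rather than left to model judgment.
    if isinstance(value, int) and 0 <= value <= 20:
        return value
    return None


def suggest_discount(customer_summary: str, retries: int = 3) -> int:
    for _ in range(retries):  # retries absorb some of the non-determinism
        suggestion = validate_discount(call_llm(customer_summary))
        if suggestion is not None:
            return suggestion
    return 0  # deterministic fallback keeps the experience predictable


print(suggest_discount("loyal customer, 3 years, low spend"))
```

The user never sees the model's raw chaos; they see either an in-policy suggestion or a safe default.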
-
6 Books & Strategies that increase Ops effectiveness in AI products 📚 MLOps (Machine Learning Operations) and Product Ops are often underrated, but they are the backbone of successful AI products. How should leaders think about increasing their effectiveness? 1. 𝗘𝘀𝘁𝗮𝗯𝗹𝗶𝘀𝗵 𝗖𝗹𝗲𝗮𝗿 𝗖𝗼𝗺𝗺𝘂𝗻𝗶𝗰𝗮𝘁𝗶𝗼𝗻 𝗖𝗵𝗮𝗻𝗻𝗲𝗹𝘀 Clarity is key. Clear communication ensures everyone from engineers to customer support is aligned with your product's vision. ⚙️ Leverage collaborative tools like Slack or Microsoft Teams to facilitate transparent and efficient cross-departmental dialogue. 📚 "Crucial Conversations" offers valuable tactics for navigating and mastering high-stakes discussions: https://amzn.to/3RbqG0y 2. 𝗣𝗿𝗶𝗼𝗿𝗶𝘁𝗶𝘇𝗲 𝗨𝘀𝗲𝗿 𝗙𝗲𝗲𝗱𝗯𝗮𝗰𝗸 User insights are the lifeblood of AI product efficacy. Harnessing this feedback propels your product from good to great. ⚙️ Utilize platforms like UserVoice or Qualtrics to gather actionable user insights. 📚 "The Lean Startup" by Eric Ries emphasizes the importance of customer feedback in product development: https://amzn.to/4a1T2CP 3. 𝗗𝗲𝘃𝗲𝗹𝗼𝗽 𝗥𝗼𝗯𝘂𝘀𝘁 𝗠𝗼𝗻𝗶𝘁𝗼𝗿𝗶𝗻𝗴 𝗦𝘆𝘀𝘁𝗲𝗺𝘀 Monitoring is critical for maintaining the health of AI systems. With proper oversight, you can ensure AI applications perform at their peak. ⚙️ Implement monitoring tools like Prometheus to track AI model performance (a minimal exporter sketch follows this post). 📚 "Site Reliability Engineering", edited by Betsy Beyer, Chris Jones, Jennifer Petoff, and Niall Richard Murphy, provides insights into maintaining high-performing systems: https://amzn.to/3uCBz3I 4. 𝗖𝗿𝗲𝗮𝘁𝗲 𝗮 𝗖𝘂𝗹𝘁𝘂𝗿𝗲 𝗼𝗳 𝗖𝗼𝗹𝗹𝗮𝗯𝗼𝗿𝗮𝘁𝗶𝗼𝗻 AI thrives in an environment where collaboration is the norm. Break down silos to build AI that resonates with users. ⚙️ Facilitate interdepartmental meetings and collaborative sessions to ideate and iterate. 📚 "Team of Teams" by General Stanley McChrystal discusses the power of collaborative effort: https://amzn.to/4154Z6K 5. 𝗔𝗱𝗼𝗽𝘁 𝗔𝗴𝗶𝗹𝗲 𝗠𝗲𝘁𝗵𝗼𝗱𝗼𝗹𝗼𝗴𝗶𝗲𝘀 Agility is non-negotiable in AI product management. It allows you to navigate the complexities of AI with flexibility and speed. ⚙️ Embrace agile frameworks like Scrum or Kanban to enhance your product development cycle. 📚 "Scrum: The Art of Doing Twice the Work in Half the Time" by Jeff Sutherland provides a foundational understanding of agile practices: https://amzn.to/3sPdN45 6. 𝗘𝗺𝗽𝗵𝗮𝘀𝗶𝘇𝗲 𝗘𝘁𝗵𝗶𝗰𝗮𝗹 𝗔𝗜 𝗨𝘀𝗲 Ensure that your team is trained on the ethical implications of AI, prioritizing fairness, transparency, and accountability in your products. ⚙️ Conduct ethical AI training sessions and establish a review board for AI ethics. 📚 "Weapons of Math Destruction" by Cathy O'Neil explores the ethics of AI algorithms: https://amzn.to/40Zrudn
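To ground point 3, here is a minimal sketch of exposing model-health metrics that Prometheus can scrape, using the prometheus_client Python library. The metric names and the predict() stub are illustrative.

```python
# Sketch: expose basic inference metrics for Prometheus to scrape.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

PREDICTIONS = Counter("model_predictions", "Predictions served")
LATENCY = Histogram("model_inference_seconds", "Inference latency in seconds")


def predict(features):
    """Stand-in for a real model call."""
    time.sleep(random.uniform(0.01, 0.05))
    return 1


@LATENCY.time()  # records each call's duration in the histogram
def serve(features):
    PREDICTIONS.inc()
    return predict(features)


if __name__ == "__main__":
    start_http_server(8000)  # metrics served at http://localhost:8000/metrics
    while True:
        serve({"x": 1.0})
```

Point Prometheus at port 8000 and you can alert on latency spikes or a sudden drop in prediction volume.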
-
AI products don't work without frameworks. Data teams need to know how broad or narrow the use cases are in order to build them. Teams often have clear strategic goals but fail to adequately define the tactical scope of the problem - which is essential for developing good AI products. This leads to a cycle of developing, launching, and eventually abandoning AI product development. The data team is then often perceived as a cost sink. Define how narrow your solution needs to be. Narrowly focused AI products optimize engineering resources and cater to specific segments. This helps focus the data team on a limited set of features and use cases. Define how broad your solution needs to be. Broad AI products aim for wider reach with diverse applications. You’ll need to know this if you are working across multiple teams and business units. AI PMs and data teams must make tough choices about how they approach the scope of data products. Data teams and AI PMs that define these frameworks will be strong performers in the next 12 months. The reason most ML/AI products fail isn't bad engineering. It's often that the range of use cases for users wasn't explored. We need to approach products with a defined and solid use case framework. It's no longer enough to deploy a model and hope it's a product. #datalife360 #datastrategy #ai #productmanagement #datascience
-
Scaling MLOps on AWS: Embracing Multi-Account Mastery 🚀 Move beyond the small-team playground and build robust MLOps for your growing AI ambitions. This architecture unlocks scalability, efficiency, and rock-solid quality control – all while embracing the power of multi-account setups. Ditch the bottlenecks, embrace agility: 🔗 Multi-account mastery: Separate development, staging, and production environments for enhanced control and security. 🔄 Automated model lifecycle: Seamless workflow from code versioning to production deployment, powered by SageMaker notebooks, Step Functions, and Model Registry. 🌟 Quality at every step: Deploy to staging first, rigorously test, and seamlessly transition to production, all guided by a multi-account strategy. 📊 Continuous monitoring and feedback: Capture inference data, compare against baselines, and trigger automated re-training if a significant drift is detected. Here's how it unfolds: 1️⃣ Development Sandbox: Data scientists experiment in dedicated accounts, leveraging familiar tools like SageMaker notebooks and Git-based version control. 2️⃣ Automated Retraining Pipeline: Step Functions orchestrate model training, verification, and artifact storage in S3, while the Model Registry keeps track of versions and facilitates approvals. 3️⃣ Multi-Account Deployment: Staging and production environments provide controlled testing grounds before unleashing your model on the world. SageMaker endpoints and Auto Scaling groups handle inference requests, powered by Lambda and API Gateway across different accounts. 4️⃣ Continuous Quality Control: Capture inference data from both staging and production environments in S3 buckets. Replicate it to the development account for analysis. 5️⃣ Baseline Comparison and Drift Detection: Use SageMaker Model Monitor to compare real-world data with established baselines, identifying potential model or data shifts. 6️⃣ Automated Remediation: Trigger re-training pipelines based on significant drift alerts (a minimal trigger sketch follows this post), ensuring continuous improvement and top-notch model performance. This is just the tip of the iceberg! Follow Shadab Hussain for deeper dives into each element of this robust MLOps architecture, explore advanced tools and practices, and empower your medium and large teams to conquer the AI frontier. 🚀 #MLOps #AI #Scalability #MultiAccount #QualityControl #ShadabHussain
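As one concrete example of step 6️⃣, here is a sketch of a small Lambda-style handler that reacts to a Model Monitor drift alert by starting the Step Functions retraining pipeline. The state machine ARN, event shape, and status string are illustrative assumptions about how the alert is wired up, not a fixed AWS contract.

```python
# Sketch: on a drift alert (e.g. a SageMaker Model Monitor event delivered
# via EventBridge), kick off the retraining state machine.
import json

import boto3

sfn = boto3.client("stepfunctions")

# Hypothetical ARN; substitute your retraining pipeline's state machine.
RETRAIN_PIPELINE_ARN = (
    "arn:aws:states:us-east-1:123456789012:stateMachine:retrain-pipeline"
)


def handler(event, context):
    # Assumed event shape: only retrain when violations were actually found.
    status = event.get("detail", {}).get("MonitoringExecutionStatus")
    if status != "CompletedWithViolations":
        return {"retraining": False}

    execution = sfn.start_execution(
        stateMachineArn=RETRAIN_PIPELINE_ARN,
        input=json.dumps({"trigger": "drift", "source_event": event}),
    )
    return {"retraining": True, "executionArn": execution["executionArn"]}
```

Keeping the trigger this thin means the retraining logic itself stays versioned and testable inside the state machine, not buried in glue code.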
-
“We spent millions on AI and have nothing to show for it.” That’s what the CEO told me. And they weren’t wrong… The results were underwhelming. Deadlines kept slipping. The board was asking tough questions. But instead of agreeing to pull the plug, I said something that surprised them: "Before you give up, let's take three steps back." I emphasized that AI can deliver exceptional outcomes, but only when you're rooted in what's actually achievable. Here's what I mean: STEP ONE: Know exactly what you're dealing with - The current state of your data quality - How prepared your infrastructure really is - What capabilities your team actually possesses STEP TWO: Balance your aspirational AI goals (what could be possible) with the reality of what you can deliver today (what is practical). Success in AI comes from marrying honest evaluation with executable strategy. So that’s exactly what we did: we stepped back, rethought the goal, and simplified the approach. We kept their ambitious vision but completely changed the execution: → Redefined success metrics to be measurable and achievable. → Broke their "moonshot" goal into 6 smaller milestones. → Started with one use case in a smaller capacity that could demonstrate clear ROI Six weeks later, they had their first AI success story. Not the revolutionary transformation they originally envisioned, but something better: proof that AI could work in their environment. - That early win gave the team confidence. - The board renewed their commitment. - And now they're scaling systematically. So the lesson here isn't about scaling back your vision. It's about finding the right path forward. Sometimes that means starting smaller to eventually go bigger. Big AI transformations don't happen overnight. They happen when you break them into manageable pieces and prove value incrementally. Start practical. Then scale ambitious. Have you ever had to shift from moonshot thinking to practical execution in AI? How did it go?
-
In the past few months, alongside my own side experiments with the technology, I've worked with a variety of companies to assess their readiness for implementing #GenerativeAI. The pattern is striking: people are drawn to the allure of Gen AI for its elegant, rapid answers, but then often stumble upon age-old hurdles during implementation. The importance of robust #datamanagement is evident. Foundational capabilities are not merely helpful but essential, and neglecting them can endanger a company's reputation and business sustainability when training Gen AI models. Data still matters. ⚠️ Gen AI systems are generally advanced and complex, requiring large, diverse, and high-quality datasets to function optimally. One of the foremost challenges is therefore to maintain data quality. The old adage “garbage in, garbage out” holds true in the context of #GenAI. Just like any other AI use case or business process, the quality of the data fed into the system directly impacts the quality of the output (a minimal data-quality gate is sketched at the end of this post). 💾 Another significant challenge is managing the sheer volume of data needed, especially for those who wish to train their own Gen AI models. While off-the-shelf models may require less data, custom training demands vast amounts of data and substantial processing power. This has a direct impact on the infrastructure and energy required. For instance, generating a single image can consume as much energy as fully charging a mobile phone. 🔐 Privacy and security concerns are paramount, as many Gen AI applications rely on sensitive #data about individuals or companies. Consider the use case of personalizing communications, which cannot be effectively executed without having, indeed, personal details about the intended recipient. In Gen AI, the link between input data and outcomes is less explicit compared to other predictive models, particularly those with clearly defined dependent variables. This lack of transparency can make it challenging to understand how and why specific outputs are generated, complicating efforts to ensure #privacy and #security. This can also cause ethical problems when the training data contains biases. 🌐 Most Gen AI applications have a specific demand for data integration, as they require synthesis of information from a variety of sources. For instance, a Gen AI system designed for market analysis might need to integrate data from social media, financial reports, news articles, and consumer behavior studies. The ability to integrate these disparate data sets not only demands the right technological solutions but also raises complexities around data compatibility, consistency, and processing efficiency. In the next few weeks, we’ll unpack these challenges in more detail, but for those who can’t wait, here’s the full article ➡️ https://lnkd.in/er-bAqrd
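To make the “garbage in, garbage out” point concrete, here is a minimal sketch of deterministic data-quality gates run before records reach a Gen AI training or retrieval pipeline. The column names and thresholds are illustrative.

```python
# Sketch: simple quality gates (duplicates, missingness) before ingestion.
import pandas as pd


def quality_report(df: pd.DataFrame) -> dict:
    return {
        "rows": len(df),
        "duplicate_rows": int(df.duplicated().sum()),
        "null_fraction": df.isna().mean().to_dict(),  # per-column missingness
    }


def passes_gates(df: pd.DataFrame, max_null_frac: float = 0.05) -> bool:
    report = quality_report(df)
    no_dupes = report["duplicate_rows"] == 0
    complete = all(f <= max_null_frac for f in report["null_fraction"].values())
    return no_dupes and complete


docs = pd.DataFrame({
    "text": ["Q3 revenue grew 12%", None, "Q3 revenue grew 12%"],
    "source": ["report", "news", "report"],
})

print(quality_report(docs))
print("ok to ingest:", passes_gates(docs))  # False: a null and a duplicate row
```

Real pipelines would add schema checks, document-level deduplication, and PII scanning on top of gates like these.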
-
Harvard + AI = MAS Inspiration for Inclusive Innovation. “The future is already here, it’s just not evenly distributed yet.” (William Gibson) In an era where AI is reshaping landscapes, my participation in the YPO Harvard Business School Presidents Program marks a “singular moment” of inspiration and challenge. Engaging with the brilliant minds at Harvard alongside successful YPO CEOs and entrepreneurs from over 40 countries has been an unparalleled experience. It has deepened my commitment to harnessing AI to elevate underserved communities in tech, and to ensuring we provide thought leadership and guidance to our clients as they discover the game-changing possibilities for their businesses. Plenty of material to read, including Karim Lakhani’s book “Competing in the Age of AI”. I will be reflecting on and implementing key takeaways from this rich experience; here are some of them: + Curiosity and knowing are not doing. There is a gap between knowing the huge impact and benefits of AI and taking action. + Success in AI implementation is 70% mindset. + Access to abundant, high-quality data is crucial, requiring both domain expertise and technical skills. + The methods of data collection, labeling, and model training are critical for minimizing bias and ensuring the desired outcomes. Algorithms are important, but the key is in the data. + Scaling AI efforts requires an 'AI Factory' approach, demanding tight collaboration among various experts, including data labelers, data scientists, data engineers, machine learning engineers, MLOps, etc. + Not all challenges are suited for AI solutions. It's wise to establish a strong business case, define key success metrics, and develop POCs/MVPs before scaling up, rather than taking a big-bang approach. + Never underestimate the impact of leadership imprint on an organization's structure, values, and culture. Is your organization primed for innovation? + AI presents opportunities and risks alike, from governance to its impact on humanity. Every decision is crucial. #AI #responsibleleadership #Harvard #lifelonglearning #MASTechforAll MAS Global Consulting
-