A look at how CS50 has incorporated artificial intelligence (AI), including its new-and-improved rubber duck debugger, and how it has impacted the course already. 🦆 https://lnkd.in/eb-8SAiw In Summer 2023, we developed and integrated a suite of AI-based software tools into CS50 at Harvard University. These tools were initially available to approximately 70 summer students, then to thousands of students online, and finally to several hundred on campus during Fall 2023. Per the course's own policy, we encouraged students to use these course-specific tools and limited the use of commercial AI software such as ChatGPT, GitHub Copilot, and the new Bing. Our goal was to approximate a 1:1 teacher-to-student ratio through software, thereby equipping students with a pedagogically-minded subject-matter expert by their side at all times, designed to guide students toward solutions rather than offer them outright. The tools were received positively by students, who noted that they felt like they had "a personal tutor." Our findings suggest that integrating AI thoughtfully into educational settings enhances the learning experience by providing continuous, customized support and enabling human educators to address more complex pedagogical issues. In this paper, we detail how AI tools have augmented teaching and learning in CS50, specifically in explaining code snippets, improving code style, and accurately responding to curricular and administrative queries on the course's discussion forum. Additionally, we present our methodological approach, implementation details, and guidance for those considering using these tools or AI generally in education. Paper at https://lnkd.in/eZF4JeiG. Slides at https://lnkd.in/eDunMSyx. #education #community #ai #duck
Advanced AI Training
Explore top LinkedIn content from expert professionals.
-
-
In the last three months alone, over ten papers outlining novel prompting techniques were published, boosting LLMs’ performance by a substantial margin. Two weeks ago, a groundbreaking paper from Microsoft demonstrated how a well-prompted GPT-4 outperforms Google’s Med-PaLM 2, a specialized medical model, solely through sophisticated prompting techniques. Yet, while our X and LinkedIn feeds buzz with ‘secret prompting tips’, a definitive, research-backed guide aggregating these advanced prompting strategies is hard to come by. This gap prevents LLM developers and everyday users from harnessing these novel frameworks to enhance performance and achieve more accurate results. https://lnkd.in/g7_6eP6y In this AI Tidbits Deep Dive, I outline six of the best and recent prompting methods: (1) EmotionPrompt - inspired by human psychology, this method utilizes emotional stimuli in prompts to gain performance enhancements (2) Optimization by PROmpting (OPRO) - a DeepMind innovation that refines prompts automatically, surpassing human-crafted ones. This paper discovered the “Take a deep breath” instruction that improved LLMs’ performance by 9%. (3) Chain-of-Verification (CoVe) - Meta's novel four-step prompting process that drastically reduces hallucinations and improves factual accuracy (4) System 2 Attention (S2A) - also from Meta, a prompting method that filters out irrelevant details prior to querying the LLM (5) Step-Back Prompting - encouraging LLMs to abstract queries for enhanced reasoning (6) Rephrase and Respond (RaR) - UCLA's method that lets LLMs rephrase queries for better comprehension and response accuracy Understanding the spectrum of available prompting strategies and how to apply them in your app can mean the difference between a production-ready app and a nascent project with untapped potential. Full blog post https://lnkd.in/g7_6eP6y
-
Excited to share this essential roadmap for anyone serious about thriving in the AI era! Whether you're a beginner or looking to deepen your expertise, mastering these foundational AI concepts will set you up for long-term success: 🔹 AI Foundations • Understand AI basics, its various types, and real-world applications. 🔹 Programming & Math for AI • Build strong fundamentals in Python, linear algebra, probability, calculus, and statistics. 🔹 Machine Learning (ML) • Learn supervised, unsupervised, and semi-supervised approaches, including regression, classification, clustering, and core algorithms. 🔹 Deep Learning (DL) • Explore advanced neural networks: CNNs, RNNs, LSTMs, autoencoders, and backpropagation. 🔹 Large Language Models (LLMs) • Dive into transformers, BERT, GPT, tokenization, and attention mechanisms powering tools like ChatGPT. 🔹 Prompt Engineering • Master zero-shot/few-shot prompting, chain-of-thought, and instruction tuning to get the best from LLMs. 🔹 Retrieval-Augmented Generation (RAG) • Combine LLMs with external knowledge sources using vector databases and advanced pipelines. 🔹 Vector Databases • Learn to store and retrieve high-dimensional vectors (FAISS, Pinecone, Weaviate, ChromaDB, Milvus). 🔹 AI Agents & Agentic AI • Automate complex workflows with tools and agent architectures (AutoGen, CrewAI). 🔹 Computer Vision • Enable machines to “see” with image classification, object detection, YOLO, and OpenCV. 🔹 Natural Language Processing (NLP) • Let machines understand and generate language with NER, POS tagging, sentiment analysis, and summarization. 🔹 Model Deployment & Serving • Deploy models into production with robust monitoring, logging, and A/B testing. 🔹 MLOps & Scalability • Scale production AI systems with efficient pipelines and best practices. 🔹 Real-World Projects & Use Cases • Apply your skills to impactful projects across diverse industries. If you're starting out or aiming to future-proof your tech career, focusing on these concepts will help you unlock new opportunities in AI. Ready to level up?
-
I've been getting a lot of questions about what L&D leaders can use AI for. The answer? A LOT more than you think. 👇 Generative AI has a lot of use cases, many we don't hear enough about. Here are a few that I've seen L&D leaders explore so far: 🏔 Content Generation 🏔 The most time-consuming parts of the job (think voice overs, subtitles, and getting copy just right) are now sped to lightning speed with AI. An L&D team of one can now do the work of many! 📊 Analyzing Learning Data 📊 The best programs are rooted in quantitative and qualitative research. Before, that meant dozens of call transcripts and surveys, and hours looking for patterns. Gen AI can spot trends super fast. 🤖 Expert Bots 🤖 You can add a new performance consultant or facilitation coach to your team in about as much time as it takes to make a sandwich. Cover your talent gaps or offer learners a robot resource. ⏳ "Just in Time" Learning ⏳ When we talk about bite-sized learning, we're really dreaming of giving folks the exact right resources at the moment of need. AI makes these dreams a reality, offering live skill assessment and feedback. 👑 Personalized Learning 👑 With Gen AI, courses can become designed for each user's learning journey. Imagine curated, unique courses that address each individuals needs, not just what was convenient to put in the LMS. TL;DR 👉 If you're wondering how to hit your learning targets, don't sleep in AI. L&D has capabilities now that we wouldn't have dreamed about five years ago. Interested in learning how we're using AI to transform manager development at Kona? Send me a DM or leave a comment below! This post was inspired by a recent conversation I had with Ross Stevenson and some of the incredible work from Egle Vinauskaite. If you're looking to learn more about AI and L&D, stop reading and give them a follow. What other AI + learning use cases did we miss? Let me know in the comments! #ai #learninganddevelopment #management #hr #peopleops #tech
-
"Attack Prompt Generation for Red Teaming and Defending Large Language Models". From the paper: "Large language models (LLMs) are susceptible to red teaming attacks, which can induce LLMs to generate harmful content. Previous research constructs attack prompts via manual or automatic methods, which have their own limitations on construction cost and quality. To address these issues, we propose an integrated approach that combines manual and automatic methods to economically generate high-quality attack prompts. In this work, we proposed two frameworks to attack and defend LLMs. 1) The attack framework combines manual and automatic prompt construction, enabling the generation of more harmful attack prompts compared to previous studies. 2) The defense framework fine-tunes the target LLMs by multi-turn interactions with the attack framework. Empirical experiments demonstrate the efficiency and robustness of the defense framework while posing minimal impact on the original capabilities of LLMs. #llmsecurity #llm #largelanguagemodels #threats #redteaming #defense #aidefense #security #finetuning #prompt #promptinjection #artificialintelligence #ai #redteam #blueteam #defensivesecurity
-
A.I. is not just a tool, but a driving force in reshaping the landscape of science. In today's episode, I dive into the profound implications A.I. holds for scientific discovery, citing applications across nuclear fusion, medicine, self-driving labs and more. Here are some of the ways A.I. is transforming science that are covered in today's episode: • Antibiotics: MIT researchers uncovered two new antibiotics in a single year (antibiotic discovery is very rare so this is crazy!) by using an ML model trained on the efficacy of known antibiotics to sift through millions of potential antibiotic compounds. • Batteries: Similar sifting was carried out by A.I. at the University of Liverpool to narrow down the search for battery materials from 200,000 candidates to just five highly promising ones. • Weather: Huawei's Pangu-Weather and NVIDIA's FourCastNet use ML to offer faster and more accurate forecasts than traditional super-compute-intensive weather simulations — crucial for predicting and managing natural disasters. • Nuclear Fusion: AI is simplifying the once-daunting task of controlling plasma in tokamak reactors, thereby contributing to advancements in clean energy production. • Self-Driving Labs: Automate research by planning, executing, and analyzing experiments autonomously, thereby speeding up scientific experimentation and unveiling new possibilities for discovery. • Generative A.I.: Large Language Models (LLMs) tools are pioneering new frontiers in scientific research. From improving image resolution to designing novel molecules, these tools are yielding tangible results, with several A.I.-designed drugs currently in clinical trials. Tools like Elicit are streamlining the process of scientific literature review over vast corpora, allowing connections within or between fields to be uncovered automatically and suggesting new research directions. The SuperDataScience Podcast is available on all major podcasting platforms and a video version is on YouTube. This is Episode #750! #superdatascience #artificialintelligence #science #innovation #machinelearning
-
Want a prompting technique that is better than RAG or Chain of Thought? Merge them. Chain of Thought is easy. Just add “let’s think step by step” to the instructions. It miraculously encourages the language model to break down its task into subtasks, and gets better results than just asking for the answer. Unfortunately, it also increases the model’s propensity to hallucinate, especially when the “chain” has a lot of links. Retrieval Augmented Generation is the technique of looking up some references and sticking them into the prompt, so that the model can use outside knowledge. You get fewer hallucinations, but also less planning and reasoning than with CoT. A new paper proposes a fusion of these two techniques to get both long term planning and factuality. It apparently works really well, particularly for tasks that require planning and multiple steps: 13.63% improvement on code generation, 16.96% on mathematical reasoning, 19.2% on creative writing, and 42.78% on embodied task planning. Here’s how it works: You start with a Chain of Thought, then you go to Thought #1 and try to validate it with some outside knowledge for it. If it is invalid, you adjust Thought #1. If it is valid, you go on to thought #2, and so on until you are done. The researchers unfortunately named this technique Retrieval Augmented Thoughts (RAT). But we won’t hold that against them… Paper: https://buff.ly/43xuCOD Code: https://buff.ly/4amIrll #ArtificialIntelligence #AIResearch #DeepLearning #NLP #CodeGeneration #MathematicalReasoning #CreativeWriting #TaskPlanning #RetrievalAugmentedGeneration
-
In recent times, I've had the pleasure of engaging with many individuals who are enthusiastically venturing into the fascinating domains of AI, ML and GenAI as part of their ongoing learning endeavors. Personal motivation is the biggest factor and the more you learn, more hungry you will feel about acquiring knowledge and explore applications. I suggest this valuable concept from Japanese martial arts known as "Shuhari” for AI learning endeavors. This concept provides a structured approach to learning and mastery: Shu (守) - Grasp the Fundamentals: - Begin at the Shu stage, where your focus is on acquiring a strong understanding of the basics. - Just as martial arts students learn by emulating their master's precise movements, in AI and ML, this involves immersing yourself in the foundational principles, algorithms, and tools (understanding of mathematics, including linear algebra, calculus, and statistics, is essential for comprehending the underlying principles of AI algorithms). - This is the phase for building a robust knowledge base and skill set. Ha (破) - Explore and Integrate: - Transitioning to the Ha stage signifies a broader exploration. Here, it's about experimentation and learning from multiple sources, akin to martial artists who incorporate various styles into their practice. - Experiment with different AI and ML approaches, blend insights from diverse experts, and integrate these learnings into your AI and ML practice. Your personal strength will be what you bring to the table at this level - domain-specific knowledge in applying AI effectively in real-world scenarios. - This phase encourages adaptability and synthesis. Ri (離) - Innovate and Apply Creatively: - The Ri stage represents the zenith of mastery. At this point, you should aim to become a problem solver in the AI and ML domain. - Like martial artists who develop their unique styles, you'll apply your knowledge creatively across a range of industry domains. Innovate by creating novel algorithms, new approaches, and pushing the boundaries of what's possible. - This is the stage where you could truly begin to lead in the field. And on a personal note, I've been on this path for six years straight, and I genuinely believe this investment is worth it for personal transformation and staying relevant in this dynamic field. AI can benefit from all, and AI can benefit all. #AI #GenAI #MachineLearning #ShuhariMastery
-
Welcome Back to School It's time for our kids to go back to school, and this year much of the talk is about #AI in the classroom and how to handle it. There are understandably many views on this topic, and it is sure to take many directions in various schools. Formulating a policy around a technology that is moving so quickly and is now ubiquitous is challenging. Rather than focusing on the issue of plagiarism, here are some short thoughts on the positives that could allow teachers and students to embrace AI and work together to see the benefits it could bring to the classroom. For this week’s #JordiPlusJarvis analogy, I am using the classroom from "Welcome Back, Kotter." In the world of "Welcome Back, Kotter," Mr. Kotter often struggles to address the individual needs of his diverse group of students. John Travolta's character, Vinnie Barbarino, for instance, is the charismatic leader of the 'Sweathogs' but often struggles to maintain focus. Arnold Horshack, on the other hand, is enthusiastic but frequently misunderstands concepts. With AI-driven personalized learning, a virtual tutor could adapt to the unique strengths, weaknesses, and interests of each 'Sweathog.' For instance, the AI tutor could offer Barbarino interactive exercises to keep him engaged, while providing Horshack with more detailed explanations and visual aids to enhance his understanding. Just like Mr. Kotter, who often uses humor to engage his students, the AI tutor could adapt its approach to resonate with each student’s personality and learning style. Freddie "Boom Boom" Washington is confident and street-smart but often struggles with written expression. An AI-powered writing assistant could provide instant feedback on grammar and structure, freeing up Mr. Kotter to help Washington develop his analytic thinking and creativity. For Mr. Kotter, AI teacher assistants could help manage administrative duties, provide real-time coaching during lessons, and analyze student performance data. This would allow Mr. Kotter to spend more time on interpersonal instruction, mentoring, and addressing the unique challenges faced by each 'Sweathog.' Traditional school assessments fail to capture the full scope of a student's skills and knowledge. Juan Epstein, for example, often finds himself at odds with conventional testing methods. AI-driven assessments could provide a more nuanced understanding of Epstein's knowledge and skills by adapting questions based on his real-time performance and pinpointing concepts that require review. While the 'Sweathogs' would undoubtedly benefit from the support of AI-driven tools, it is crucial to remember that these tools are meant to augment Mr. Kotter's role, not replace it. The mentorship, empathy, and inspiration that Mr. Kotter provides are irreplaceable elements of education that AI currently cannot replicate. If only AI could sing and dance like Travolta did to the Barbarino song.
-
#VPspeak [^404] 🤷🏽♂️ What are the challenges in realization of Deep Learning models in wireless communication? In one of my previous posts, I discussed how AI models can be applied to deploy (and manage) a 5g network more efficiently. 👉🏾 It is important to note however that wireless network present their own challenges in realizing an AI oriented learning model. 1️⃣ With nodes and cell sites scattered geographically, data is distributed across sites. A centralized server is not an ideal choice for data processing as that would involve sending tons of data to a central location introducing overhead on both communication and storage. 2️⃣ The physical environment in which radio networks operate are quite dynamic. They vary depending on the environment, mobility etc. So we need a model that learns continuously and fine tune itself according to the varying conditions. 3️⃣ Specifically for 5g, and considering requirements around low latency(URLLC), low power consumption (mMTC) and high bandwidth (eMBB), deep learning system cannot be overloaded with high compute or high power or high bandwidth consumption. ✅ A federated learning model, with its focus on collaborative learning between distributed nodes and a centralized server can overcome some of these challenges and provides and effective way for wireless networks to deploy machine learning models. ….plus it allows continuous learning to address the dynamic nature of a wireless network. An example image from a hospital scenario below. #5g #machinelearning #wireless #telecom #network #ml image source: NVIDIA blog