In our new paper we ran an experiment at Procter and Gamble with 776 experienced professionals solving real business problems. We found that individuals randomly assigned to use AI did as well as a team of two without AI. AI-augmented teams produced more exceptional solutions, and the teams using AI were happier as well. Even more interesting: AI broke down professional silos. R&D people with AI produced more commercial work, and commercial people with AI produced more technical solutions. The standard model of "AI as productivity tool" may be too limiting. Today's AI can function as a kind of teammate, offering better performance, expertise sharing, and even positive emotional experiences.

This was a massive team effort, with work led by Fabrizio Dell'Acqua, Charles Ayoubi, and Karim Lakhani, along with Hila Lifshitz, Raffaella Sadun, Lilach M., me, and our partners at P&G: Yi Han, Jeff Goldman, Hari Nair, and Stewart Taub.

Substack about the work here: https://lnkd.in/ehJr8CxM
Paper: https://lnkd.in/e-ZGZmW9
Productivity

-
Last week, I described four design patterns for AI agentic workflows that I believe will drive significant progress: Reflection, Tool Use, Planning, and Multi-agent Collaboration. Instead of having an LLM generate its final output directly, an agentic workflow prompts the LLM multiple times, giving it opportunities to build step by step to higher-quality output.

Here, I'd like to discuss Reflection. It's relatively quick to implement, and I've seen it lead to surprising performance gains. You may have had the experience of prompting ChatGPT/Claude/Gemini, receiving unsatisfactory output, delivering critical feedback to help the LLM improve its response, and then getting a better response. What if you automate the step of delivering critical feedback, so the model automatically criticizes its own output and improves its response? This is the crux of Reflection.

Take the task of asking an LLM to write code. We can prompt it to generate the desired code directly to carry out some task X. Then, we can prompt it to reflect on its own output, perhaps as follows:

"Here's code intended for task X: [previously generated code]. Check the code carefully for correctness, style, and efficiency, and give constructive criticism for how to improve it."

Sometimes this causes the LLM to spot problems and come up with constructive suggestions. Next, we can prompt the LLM with context including (i) the previously generated code and (ii) the constructive feedback, and ask it to use the feedback to rewrite the code. This can lead to a better response. Repeating the criticism/rewrite process might yield further improvements. This self-reflection process allows the LLM to spot gaps and improve its output on a variety of tasks including producing code, writing text, and answering questions.
And we can go beyond self-reflection by giving the LLM tools that help evaluate its output; for example, running its code through a few unit tests to check whether it generates correct results on test cases, or searching the web to double-check text output. Then it can reflect on any errors it found and come up with ideas for improvement.

Further, we can implement Reflection using a multi-agent framework. I've found it convenient to create two agents, one prompted to generate good outputs and the other prompted to give constructive criticism of the first agent's output. The resulting discussion between the two agents leads to improved responses.

Reflection is a relatively basic type of agentic workflow, but I've been delighted by how much it improved my applications' results. If you're interested in learning more about Reflection, I recommend:

- Self-Refine: Iterative Refinement with Self-Feedback, by Madaan et al. (2023)
- Reflexion: Language Agents with Verbal Reinforcement Learning, by Shinn et al. (2023)
- CRITIC: Large Language Models Can Self-Correct with Tool-Interactive Critiquing, by Gou et al. (2024)

[Original text: https://lnkd.in/g4bTuWtU ]
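As a rough sketch (not code from the original post), the generate/critique/rewrite loop described above might look like the following. The `llm` function is a stand-in for any chat-model API call; here it is stubbed with canned responses so the control flow is runnable as-is.

```python
def llm(prompt: str) -> str:
    """Placeholder for a real chat-completion call (OpenAI, Claude, Gemini, ...).

    This stub returns canned text so the reflection loop below can run
    without any API key; swap in a real client in practice.
    """
    if "constructive criticism" in prompt:
        return "Critique: handle the empty-list case to avoid ZeroDivisionError."
    return "def mean(xs):\n    return sum(xs) / len(xs) if xs else 0.0"


def reflect_and_rewrite(task: str, rounds: int = 2) -> str:
    """Generate a draft, then alternate critique and rewrite prompts."""
    draft = llm(f"Write code for this task:\n{task}")
    for _ in range(rounds):
        # Reflection step: ask the model to criticize its own output.
        critique = llm(
            f"Here's code intended for task: {task}\n{draft}\n"
            "Check the code carefully for correctness, style, and efficiency, "
            "and give constructive criticism for how to improve it."
        )
        # Rewrite step: feed back the prior code plus the critique.
        draft = llm(
            f"Task: {task}\nPrevious code:\n{draft}\n"
            f"Feedback:\n{critique}\n"
            "Use the feedback to rewrite the code."
        )
    return draft
```

The same two prompts can also be split across two agents (a generator and a critic) to implement the multi-agent variant mentioned above.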
-
6 proven techniques to increase productivity, and reclaim your time.

This sheet highlights the:
↳What
↳When
↳Why
↳And how
So you can start putting these to work today:

1) Eisenhower Matrix
What it is - A system to prioritize
When to use it - You feel busywork is keeping you from "real" work
Why it works - The least important tasks keep rising to the top because they're the easiest
How to use it - Sort your tasks into quadrants:
↳Important and urgent: do it now
↳Important but less urgent: schedule it
↳Not important but urgent: delegate it
↳Not important and not urgent: delete it

2) 80/20 Rule
What - A rule for focusing only on the most impactful work
When - You feel over-capacity, and you need to cut things
Why - 80% of outcomes come from 20% of causes, and results diminish quickly after that
How - Focus on just the most critical 20%:
↳20% of effort → 80% of results
↳20% of products → 80% of sales
↳20% of habits → 80% of impact
↳20% of innovations → 80% of growth

3) 1-3-5 Method
What - A tool for simplifying your to-do list so you can actually complete it
When - Your list is never-ending, and it's hard to know what to tackle
Why - In reality, committing to less lets you finish more
How - The night before or morning of, choose for the day just:
↳1 key project (only 1!)
↳3 medium items
↳5 smaller items
↳Leave everything else off

4) Eat Your Frog
What - A commitment to do your most critical item first
When - You keep putting off an important (but scary or intimidating) task
Why - Doing it likely won't be as bad as you thought, and it builds momentum
How - Follow these 4 simple steps:
↳Identify the big task you're avoiding
↳Schedule time for it early in the day
↳Eat your frog: actually complete the task
↳Celebrate an early win and progress

5) Deep Work
What - A block of distraction-free time to work on a key item
When - You constantly get interrupted and can't focus
Why - Multitasking doesn't work - you dramatically increase productivity by focusing on just one thing
How - Create a deep work environment:
↳Schedule a block on your calendar
↳Put away your phone, exit your email, close Slack, shut the door
↳Focus on just 1 task for at least an hour (and preferably 2 to 3)

6) Pomodoro Technique
What - A style of working in intervals
When - You feel your energy fade over time, or your work seems too big
Why - Short bursts paired with breaks keep your energy and productivity up
How - Alternate medium work, short break:
↳Typical: work for 25 minutes, break for 5
↳Experiment to find what's best for you
↳Your break should be restful (breathing, time outside), not staring at your phone or answering email

The most productive people you know aren't superhuman. They're simply using these strategies. Put these to work, and you'll soon get much more done AND have more time.

Any you'd add to this list?

---
♻️ Repost to help your network reclaim their time. And follow me George Stern for more productivity content
-
Just out in Harvard Business Review: a summary of the Hybrid Experiment results and lessons on how to make hybrid succeed.

Experiment: randomize 1,600 graduate employees in marketing, finance, accounting, and engineering at Trip.com into five days a week in office, or three days a week in office and two days a week WFH. Analyzed two years of data.

Two key results:
A) Hybrid and fully-in-office showed no differences in productivity, performance review grade, promotion, learning, or innovation.
B) Hybrid had higher satisfaction and 35% lower attrition. Quit-rate reductions were largest for female employees.

Four managerial lessons:
1) Hybrid needs a strong performance management system so managers don't need to hover over employees at their desks to check their progress. Trip.com had an extensive performance review process every six months.
2) Coordinate in-office days at the team or company level. Schedule clarity prevents the frustration of coming to an empty office only to participate in Zoom calls. Trip.com coordinated WFH on Wednesday and Friday.
3) Leadership buy-in is critical (as with most management practices). Trip.com's CEO and C-suite all support the hybrid policy.
4) A/B test new policies (as well as products) where possible. New policies often turn out to be unexpectedly profitable: Trip.com made millions of dollars more in profit from hybrid by cutting expensive turnover.
-
Apache Spark has levels to it:

- Level 0: You can run spark-shell or pyspark. That means you can start.

- Level 1: You understand the Spark execution model
• RDDs vs DataFrames vs Datasets
• Transformations (map, filter, groupBy, join) vs Actions (collect, count, show)
• Lazy execution & the DAG (Directed Acyclic Graph)
Master these concepts, and you'll have a solid foundation.

- Level 2: Optimizing Spark queries
• Understand the Catalyst optimizer and how it rewrites queries for efficiency.
• Master columnar storage and Parquet vs JSON vs CSV.
• Use broadcast joins to avoid shuffle nightmares.
• Shuffle operations are expensive. Reduce them with partitioning and good data modeling.
• Coalesce vs repartition: know when to use each.
• Avoid UDFs unless absolutely necessary (they bypass Catalyst optimization).

- Level 3: Tuning for performance at scale
• Master spark.sql.autoBroadcastJoinThreshold.
• Understand how task parallelism works and set spark.sql.shuffle.partitions properly.
• Skewed data? Use adaptive query execution!
• Use EXPLAIN and queryExecution.debug to analyze execution plans.

- Level 4: Deep dive into cluster resource management
• Spark on YARN vs Kubernetes vs Standalone: know the tradeoffs.
• Understand executor vs driver memory; tune spark.executor.memory and spark.driver.memory.
• Dynamic allocation (spark.dynamicAllocation.enabled=true) can save costs.
• When to use RDDs over DataFrames (spoiler: almost never).

What else did I miss for mastering Spark and distributed compute?
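As a configuration sketch, several of the tuning knobs above come together in a spark-submit invocation like this one. The values (and the job name my_job.py) are illustrative assumptions, not recommendations; the right numbers depend entirely on your data volume and cluster.

```shell
# Illustrative settings only; tune per workload.
spark-submit \
  --conf spark.sql.shuffle.partitions=200 \
  --conf spark.sql.autoBroadcastJoinThreshold=10485760 \
  --conf spark.sql.adaptive.enabled=true \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.executor.memory=8g \
  --conf spark.driver.memory=4g \
  my_job.py
```

Here spark.sql.adaptive.enabled turns on adaptive query execution (the skew-handling mechanism mentioned at Level 3), and the broadcast threshold is the 10 MB default written out in bytes.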
-
The silent productivity killer you've never heard of...

Attention Residue (and 3 strategies to fight back):

The concept of "attention residue" was first identified by University of Washington business professor Dr. Sophie Leroy in 2009. The idea is quite simple: there is a cognitive cost to shifting your attention from one task to another. When our attention is shifted, a "residue" remains in the brain and impairs our cognitive performance on the new task. Put differently, you may think your attention has fully shifted to the next task, but your brain has a lag: it thinks otherwise!

It's relatively easy to find examples of this effect in your own life:
• You get on a call but are still thinking about the prior call.
• An email pops up during a meeting and derails your focus.
• You check your phone during a lecture and can't refocus afterwards.

There are two key points worth noting here:
1. The research indicates it doesn't seem to matter whether the task switch is "macro" (i.e. moving from one major task to the next) or "micro" (i.e. pausing one major task for a quick check on some minor task).
2. The challenge is even more pronounced in a remote/hybrid world, where we're free to roam the internet, have our chat apps open, and check our phones all while appearing to be focused in a Zoom meeting.

With apologies to any self-proclaimed proficient multitaskers, the research is very clear: every single time you call upon your brain to move away from one task and toward another, you are hurting its performance. Your work quality and efficiency suffer.

Author Cal Newport puts it well: "If, like most, you rarely go more than 10–15 minutes without a just check, you have effectively put yourself in a persistent state of self-imposed cognitive handicap."

Here are three strategies to manage attention residue and fight back:

1. Focus Work Blocks: Block time on your calendar for sprints of focused energy. Set a timer for a 45-90 minute window, close everything except the task at hand, and focus on one thing. It works wonders.

2. Take a Breather: Whenever possible, create open windows of 5-15 minutes between higher-value tasks. Schedule 25-minute calls. Block those windows on your calendar. During them, take a walk, or close your eyes and breathe.

3. Batch Processing: You still have to reply to messages and emails. Pick a few windows during the day when you will deeply focus on the task of processing and replying to these. Your response quality will go up from this batching, and they won't bleed into the rest of your day.

Attention residue is a silent killer of your work quality and efficiency. Understanding it, and taking the steps to fight back, will have an immediate positive impact on your work and life.

If you enjoyed this or learned something, share it with others and follow me Sahil Bloom for more in future! The beautiful visualization is by Roberto Ferraro.
-
Employees don't just leave for money. They leave because they feel unheard, undervalued, and unsupported. Understand your employees, because their reasons for leaving matter.

Many companies focus only on hiring replacements instead of fixing the root cause.

🚨 Ignoring the reasons behind turnover creates:
⚠️ A toxic cycle of dissatisfaction
⚠️ A disengaged workforce
⚠️ Higher hiring and training costs

Every organization has a chance to do better. Instead of reacting when employees quit, proactively address retention:
✅ Spot early signs of disengagement (e.g., missed deadlines, lack of participation).
✅ Conduct stay interviews, not just exit interviews.
✅ Provide clear career growth paths and meaningful work.
✅ Equip managers with leadership training to support their teams.
✅ Foster a culture of recognition and flexibility.

Be proactive, not reactive. Understand what employees need. Focusing on these areas builds loyalty, and loyal employees stay longer and work harder. Companies that care about their teams attract top talent, and they save money on hiring costs. A healthy workplace culture is key to success: engaged employees drive better results. Commit to continuous improvement. This is how organizations thrive in a competitive world. Invest in your employees. It pays off in the long run.

If you're a leader, ask yourself: What am I doing today to keep my best people tomorrow?

❓ How does your company approach retention?
💬 Let's discuss in the comments.
♻️ Repost to promote retention.
👋 I write posts like this every day at 9:30am EST. Follow me (Dr. Chris Mullen) so you don't miss the next one.
-
It's easy as a PM to focus only on the upside. But you'll notice: more experienced PMs actually spend more time on the downside. The reason is simple: the more time you've spent in Product Management, the more times you've been burned. The team releases "the" feature that was supposed to change everything for the product, and everything remains the same. When you reach this stage, product management becomes less about figuring out what new feature could deliver great value, and more about de-risking the choices you have made to deliver the needed impact.

--

To do this systematically, I recommend considering Marty Cagan's classic 4 Risks.

𝟭. 𝗩𝗮𝗹𝘂𝗲 𝗥𝗶𝘀𝗸: 𝗧𝗵𝗲 𝗦𝗼𝘂𝗹 𝗼𝗳 𝘁𝗵𝗲 𝗣𝗿𝗼𝗱𝘂𝗰𝘁
Remember Juicero? They built a $400 Wi-Fi-enabled juicer, only to discover that their value proposition wasn't compelling. Customers could just as easily squeeze the juice packs with their hands. A hard lesson in value risk. Value Risk asks whether customers care enough to open their wallets or devote their time. It's the soul of your product. If you can't match what their money or time is worth to them, you're toast.

𝟮. 𝗨𝘀𝗮𝗯𝗶𝗹𝗶𝘁𝘆 𝗥𝗶𝘀𝗸: 𝗧𝗵𝗲 𝗨𝘀𝗲𝗿'𝘀 𝗟𝗲𝗻𝘀
Usability Risk isn't about whether customers find value; it's about whether they can even get to that value. Can they navigate your product without wanting to throw their device out the window? Google Glass failed not because of value but usability. People didn't want to wear something perceived as geeky, or that invaded privacy. Google Glass was a usability nightmare that never got its day in the sun.

𝟯. 𝗙𝗲𝗮𝘀𝗶𝗯𝗶𝗹𝗶𝘁𝘆 𝗥𝗶𝘀𝗸: 𝗧𝗵𝗲 𝗔𝗿𝘁 𝗼𝗳 𝘁𝗵𝗲 𝗣𝗼𝘀𝘀𝗶𝗯𝗹𝗲
Feasibility Risk takes a different angle. It's not about the market or the user; it's about you. Can you and your team actually build what you've dreamed up? Theranos promised the moon but couldn't deliver. It claimed its technology could run extensive tests with a single drop of blood. The reality? It was scientifically impossible with their tech. They ignored feasibility risk and paid the price.

𝟰. 𝗩𝗶𝗮𝗯𝗶𝗹𝗶𝘁𝘆 𝗥𝗶𝘀𝗸: 𝗧𝗵𝗲 𝗠𝘂𝗹𝘁𝗶-𝗗𝗶𝗺𝗲𝗻𝘀𝗶𝗼𝗻𝗮𝗹 𝗖𝗵𝗲𝘀𝘀 𝗚𝗮𝗺𝗲
(Business) Viability Risk is the "grandmaster" of risks. It asks: does this product make sense within the broader context of your business? Take Kodak, for example. They actually invented the digital camera but failed to adapt their business model to this disruptive technology. They held back for fear it would cannibalize their film business.

--

This systematic approach is the best way I have found to help de-risk big launches. How do you like to de-risk?
-
Imagine using video game technology to solve one of the toughest challenges in nuclear fusion: detecting high-speed particle collisions inside a reactor with lightning-fast precision.

A team of researchers at UNIST has developed a groundbreaking algorithm inspired by collision detection in video games. This new method dramatically speeds up identifying particle impacts inside fusion reactors, essential for improving reactor stability and design. By cutting down unnecessary calculations, the algorithm enables real-time visualization and analysis, paving the way for safer and more efficient fusion energy development.

🎮 Gaming tech meets fusion science: The algorithm borrows from video game bullet-hit detection to track particle collisions.
⚡ 15x faster detection: It outperforms traditional methods by speeding up collision detection by up to fifteen times.
🔍 Smart calculation: Eliminates 99.9% of unnecessary computations with simple arithmetic shortcuts.
🌐 3D digital twin: Applied in the Virtual KSTAR, a detailed virtual model of the Korean fusion reactor.
🚀 Future-ready: Plans to leverage GPU supercomputers for faster processing and enhanced reactor simulations.

#FusionEnergy #VideoGameTech #ParticleDetection #NuclearFusion #Innovation #AIAlgorithm #VirtualKSTAR #CleanEnergy #ScientificBreakthrough #HighSpeedComputing
https://lnkd.in/gfcssNTC
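To make the game-engine idea concrete: the "simple arithmetic shortcut" family of tricks typically means rejecting most candidate pairs with cheap math before any expensive exact test runs. Below is a generic broad-phase sphere check of that kind. This is a textbook illustration, not the UNIST algorithm itself.

```python
def spheres_may_collide(p1, r1, p2, r2):
    """Cheap broad-phase check: can two spheres possibly overlap?

    Compares the squared center distance against the squared sum of
    radii, so no square root is ever taken. In a game or a particle
    simulation, only pairs that pass this filter proceed to the
    expensive exact intersection test.
    """
    dx = p1[0] - p2[0]
    dy = p1[1] - p2[1]
    dz = p1[2] - p2[2]
    dist_sq = dx * dx + dy * dy + dz * dz
    rsum = r1 + r2
    return dist_sq <= rsum * rsum
```

Because the filter is a few multiplications and additions per pair, it can discard the overwhelming majority of candidates, which is the same shape of saving the post describes (99.9% of computations eliminated).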
-
My next tutorial on pretraining an LLM from scratch is now out. It starts with a step-by-step walkthrough of understanding, calculating, and optimizing the loss. After training, we update the text generation function with temperature scaling and top-k sampling. And finally, we load openly available pretrained weights into our scratch-built model architecture.

Along with this pretraining tutorial, I also have bonus material on speeding up LLM training. These tips apply not just to LLMs but also to other transformer-based models like vision transformers:

1. Instead of saving the causal mask, create it on the fly to reduce memory usage (here it has minimal effect, but it can add up in long-context models like Llama 3.2 with its 131k-input-token support).
2. Use tensor cores (only works on Ampere GPUs like the A100 and newer).
3. Use the fused CUDA kernels for `AdamW` by setting `fused=True`.
4. Pre-allocate and re-use GPU memory via the pinned-memory setting in the data loader.
5. Switch from 32-bit float to 16-bit brain float (bfloat16) precision.
6. Replace from-scratch implementations of attention mechanisms, layer normalizations, and activation functions with PyTorch counterparts that have optimized CUDA kernels.
7. Use FlashAttention for more efficient memory read and write operations.
8. Compile the model.
9. Optimize the vocabulary size.
10. After saving memory with the steps above, increase the batch size.

Video tutorial: https://lnkd.in/gDRycWea
PyTorch speed-ups: https://lnkd.in/gChvGCJH
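The temperature-scaling and top-k sampling tweaks mentioned above fit in a few lines. Here is a NumPy sketch of the idea (the tutorial itself works in PyTorch; the function name and defaults here are illustrative):

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, top_k=5, rng=None):
    """Sample a token index from raw logits with temperature and top-k.

    Temperature > 1 flattens the distribution (more diverse output);
    temperature < 1 sharpens it. Top-k keeps only the k highest-scoring
    tokens before sampling, cutting off the long low-probability tail.
    """
    logits = np.asarray(logits, dtype=np.float64) / temperature
    # Top-k filter: mask everything below the k-th largest logit to -inf.
    top_k = min(top_k, len(logits))
    kth = np.sort(logits)[-top_k]
    logits = np.where(logits < kth, -np.inf, logits)
    # Numerically stable softmax over the surviving logits.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    rng = rng or np.random.default_rng(0)
    return int(rng.choice(len(probs), p=probs))
```

With top_k=1 this degenerates to greedy decoding, which is a handy way to sanity-check the implementation.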