
Beyond 'LGTM': Creative Code Review Practices That Actually Work
Move beyond the 'LGTM' mentality with creative code review practices that enhance code quality, knowledge sharing, and team collaboration. Explore pair reviewing, AI agents, and synchronous techniques.

We’ve all been there: scrolling through a pull request, giving it a quick scan, and dropping the classic “LGTM 👍” (Looks Good To Me!) comment before moving on to the next task. Don’t get me wrong: a quick LGTM has its place, especially when the change is small or genuinely does look good. Too often, though, it masquerades as authentic quality assurance, and the reality is that many pull requests sail through with minimal scrutiny.
As someone who isn’t primarily technical but is increasingly involved in pull request workflows, I’ve realized that simply receiving “LGTM” isn’t always enough. I want to contribute more meaningfully to code quality and learn how to provide better feedback.
The good news? I’m not the only one thinking this way. Engineering teams globally are pioneering creative code review approaches that actually move the needle on code quality, knowledge sharing, and team collaboration.
The Problem with Traditional Code Reviews
Traditional asynchronous code reviews come with significant drawbacks. With small pull requests, teams lose throughput because delays in communication start dominating PR lead time. Developers often find themselves in a frustrating cycle: submit code, wait for feedback, context-switch to other work, then struggle to remember the original intent when reviews finally arrive. A team’s async code reviews may actually be reducing overall throughput, with engagement dropping significantly on larger pull requests due to delayed feedback. The problem isn’t just speed. It’s that traditional reviews often fail to catch the nuanced issues that really matter.
Creative Code Review Practices That Work
1. Pair Reviewing: The Collaborative Deep Dive
Pair reviewing involves two developers reviewing code together via a pull request, spending time examining the changes collaboratively. Unlike traditional pair programming where developers write code together, pair reviewing handles the phase after the code is written: evaluation.
How it works:
- Schedule dedicated time for collaborative review sessions
- Use screen sharing to examine code together in real-time (great for remote teams!)
- Discuss potential issues, alternative approaches, and architectural decisions
- Document findings and decisions in the PR comments
Benefits: Newer engineers get a chance to improve by learning how to read code, understanding team expectations, and building mental models of what the codebase does. Knowledge transfers in both directions, the team catches subtle issues that could easily slip through individual reviews, and collaborative problem-solving builds stronger team relationships while surfacing fresh perspectives.
2. Synchronous vs. Asynchronous: Finding the Right Balance
Teams should use asynchronous code review as the default, switching to synchronous review or pair programming when necessary. The key is understanding when each approach works best.
Asynchronous Reviews Excel When:
- Changes are straightforward and well-documented
- Team members are in different time zones
- The reviewer needs time to thoroughly understand complex logic
- You want to test code readability without additional explanation from the author
Synchronous Reviews Work Better When:
- The reviewer lacks knowledge about the goal of the task and needs explanation
- Experience gaps mean many rounds of improvement are expected
- Excessive back-and-forth discussion is happening in async reviews
- Complex architectural decisions need real-time discussion
3. AI Code Review Agents: Catching What Humans Miss
As someone who isn’t deeply technical, I’ve discovered that AI code review agents can be invaluable allies; they catch subtle issues that I might completely overlook and help me understand complex code changes. These intelligent systems excel at spotting patterns, security vulnerabilities, and performance issues that even experienced developers sometimes miss during manual reviews.
Why AI Agents Excel at Code Review:
AI code review agents have been trained on millions of code repositories, giving them an almost encyclopedic knowledge of common bugs, security patterns, and best practices. Unlike human reviewers who might be tired, distracted, or unfamiliar with certain coding patterns, AI agents consistently apply the same rigorous analysis to every line of code.
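To make that concrete, here is a minimal sketch of what such an agent does under the hood: collect the diff a reviewer would see, hand it to a language model with reviewing instructions, and return the findings. It assumes the OpenAI Python SDK and a `main` base branch; real products like CodeRabbit layer repository context, static analysis, and PR integration on top of this basic loop.

```python
# Minimal AI-review loop: a sketch, not how any particular product works.
# Assumes `pip install openai` and an OPENAI_API_KEY in the environment.
import subprocess
from openai import OpenAI

def review_diff(base_branch: str = "main") -> str:
    # Gather the changes under review, exactly as a human reviewer would see them.
    diff = subprocess.run(
        ["git", "diff", f"{base_branch}...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout

    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": (
                "You are a code reviewer. Flag security issues, performance "
                "problems, and subtle logic errors. Cite the file and hunk "
                "for each finding."
            )},
            {"role": "user", "content": diff},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # In a real pipeline this would be posted back to the PR thread.
    print(review_diff())
```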
What AI Agents Can Catch That Humans Often Miss:
- Security vulnerabilities like SQL injection risks, XSS vulnerabilities, and insecure data handling (see the sketch after this list)
- Performance bottlenecks such as inefficient database queries, memory leaks, or unnecessary API calls
- Subtle logic errors that could cause edge-case failures
- Code consistency issues across large codebases that would be impossible to track manually
- Accessibility problems in frontend code that might not be immediately obvious
- Documentation gaps where complex logic lacks adequate comments
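To ground the first item in that list, here is the classic SQL injection shape an AI reviewer flags on sight, next to the parameterized fix it would suggest. This is a generic sketch using Python’s built-in sqlite3 module; the table and query are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")

def find_user_unsafe(name: str):
    # Flagged: string interpolation lets input like "' OR '1'='1" rewrite the query.
    return conn.execute(f"SELECT email FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name: str):
    # Suggested fix: a parameterized query treats the input as data, never as SQL.
    return conn.execute("SELECT email FROM users WHERE name = ?", (name,)).fetchall()
```

A human skimming a 500-line diff can easily miss one interpolated query; an agent checks every line with the same diligence.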
Popular AI Code Review Agents:
- CodeRabbit uses advanced AI models like GPT-4 to provide instant, line-by-line insights on pull requests, offering contextual suggestions and identifying potential bugs, security issues, and code quality improvements
- GitHub Copilot now includes pull request summaries and review assistance, helping explain code changes and suggesting improvements based on best practices
- Greptile provides AI-powered code understanding that can explain complex logic, suggest optimizations, and help reviewers understand the broader context of changes
The key insight is that AI agents don’t replace human judgment: they augment it. While I might focus on whether the code aligns with business requirements or user experience goals, the AI agent ensures we don’t ship code with hidden security flaws or performance issues.
4. Co-Creation Patterns: Continuous Review
Co-creation patterns like pair and mob programming enable continuous code review, achieving both high throughput and high quality. Instead of reviewing code after it’s written, these approaches integrate review into the creation process.
Pair Programming as Continuous Review:
- One developer writes code (the driver) while the other reviews in real time (the navigator)
- Generates continuous and synchronous review of code, eliminating the need for a final review phase
- Particularly effective for complex business problems requiring deep collaboration
Mob Programming for Team Alignment:
- The whole team works simultaneously on a single task using a single computer
- Ensures shared understanding and collective code ownership
- Dramatically reduces work-in-progress and context switching
5. Meeting-Based Reviews: The Team Learning Session
Meeting-based code reviews involve the whole team sitting together to review difficult pieces of code, helping everyone understand the purpose and goals of code review. Yeah, it definitely takes time, and it isn’t suitable as a permanent process, but these sessions can be valuable for:
- Onboarding new team members to review practices
- Discussing particularly complex or controversial changes
- Sharing knowledge about critical system components
- Establishing team coding standards and conventions
Implementation Strategies
Start Small, Iterate Fast
Keep reviews small and manageable, with most effective reviews covering changes that can be thoroughly examined in a single session. Small pull requests make testing and debugging more manageable, with unit tests becoming less overwhelming when working with focused chunks of code.
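If you want to make “small” measurable, a tiny pre-push check can nudge authors before a branch balloons. This is a hypothetical sketch: the `main` base branch and the 400-line budget are arbitrary assumptions to tune for your team.

```python
# Hypothetical pre-push size check; the threshold is illustrative, not a standard.
import re
import subprocess
import sys

MAX_CHANGED_LINES = 400  # arbitrary example budget; tune to taste

def changed_lines(base: str = "main") -> int:
    # `--shortstat` prints e.g. "3 files changed, 120 insertions(+), 15 deletions(-)".
    stat = subprocess.run(
        ["git", "diff", "--shortstat", f"{base}...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    return sum(int(n) for n in re.findall(r"(\d+) (?:insertion|deletion)", stat))

if __name__ == "__main__":
    total = changed_lines()
    if total > MAX_CHANGED_LINES:
        sys.exit(f"{total} changed lines; consider splitting this into smaller PRs.")
    print(f"{total} changed lines; small enough for a focused review.")
```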
Optimize for Response Time
Minimize the response lag between author and reviewer, even if the whole review process takes longer. Letting co-workers know ahead of time that a code review is coming their way can reduce turnaround times substantially.
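Even a lightweight automation can deliver that heads-up. The sketch below posts a ping to a Slack channel through an incoming webhook the moment a review is requested; the webhook URL, repository, and names are all placeholders.

```python
# Heads-up ping via a Slack incoming webhook; the URL below is a placeholder.
# Requires the third-party `requests` package (`pip install requests`).
import requests

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder

def announce_review(pr_url: str, reviewer: str) -> None:
    # An early ping lets the reviewer plan the review into their day
    # instead of discovering it hours later in a notification backlog.
    response = requests.post(
        SLACK_WEBHOOK_URL,
        json={"text": f"@{reviewer} heads up: {pr_url} is ready for review."},
        timeout=10,
    )
    response.raise_for_status()

announce_review("https://github.com/acme/app/pull/123", "jordan")  # example values
```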
Use the Right Tool for the Job
When a conversation turns into a long back-and-forth, reviewers should switch to talking face-to-face instead of burning more time in code review tools. Good reviewers proactively reach out to the person making the change after doing a first pass, which saves time, misunderstandings, and hard feelings.
Focus on Constructive Feedback
A constructive code review focuses on helping, not just pointing out mistakes, with feedback like “What if we did this instead?” inviting deeper conversation. Give feedback about the code, not about the author, and accept that there are multiple correct solutions to a problem.
How PullFlow Customers Are Revolutionizing Code Review Workflows
Many of our customers at PullFlow have discovered that the biggest barrier to creative code review practices isn’t methodology: it’s tooling friction. Teams report 4X faster review cycles by eliminating constant context switching between GitHub, Slack, and their IDE.
PullFlow synchronizes user identities and code-review activity across GitHub, Slack, and VS Code, enabling natural conversation across platforms. This seamless integration supports several creative review practices:
Contextual AI-Assisted Reviews: PullFlow’s AI agent works alongside popular code review agents like Copilot, CodeRabbit, and Greptile on every PR thread, helping teams pre-screen common issues before human reviewers get involved.
Slack-Native Collaboration: Instead of forcing reviewers to context-switch to GitHub, PullFlow gives teams an at-a-glance view of PRs in Slack and VS Code, letting them instantly jump into conversations and take action without missing a beat.
The result? Development teams can focus on the creative, high-value aspects of code review (architectural discussions, knowledge sharing, and collaborative problem-solving) while PullFlow handles the coordination overhead.
Looking Ahead
The future of code review isn’t about choosing between human insight and AI efficiency; it’s about thoughtfully combining both. Whether you’re implementing pair reviewing sessions, experimenting with AI pre-screening, or adopting co-creation patterns, the key is matching your approach to your team’s specific context and needs.
The most successful implementations view AI as augmentation of human expertise, not replacement, defining complementary roles that leverage the strengths of both. By moving beyond the “LGTM” mentality and embracing creative review practices, teams can transform code review from a necessary evil into a powerful catalyst for code quality, team learning, and collaborative innovation.
The tools and techniques exist today to revolutionize how your team approaches code review. The question isn’t whether these practices work, but which ones will work best for your team’s unique context and goals.
Ready to eliminate tooling friction and accelerate your code review workflows? Try PullFlow and see how seamless cross-platform collaboration can transform your team’s review process.