Leveraging AI in Development

What are the benefits of using AI in development?

AI empowers existing engineers to be more productive and versatile – in some cases, 95% of product code is written by AI. Rather than manually writing every line of code, developers can describe what they need in natural language, let AI generate the implementation, and review the suggested output. This approach saves time and is especially valuable for routine tasks that take hours to complete manually. AI also enables engineers to write software in codebases and languages they didn’t previously know.

Engineers produce better code when leveraging AI – beyond generating code, AI serves as an educational resource for developers looking to learn new technologies or tools. Engineers can request code examples, explanations, and best practices that accelerate the learning curve for unfamiliar frameworks, libraries, or programming languages.

Tools

What are the different levels at which AI tools can be used to code?

Category: Tools for non-developers
Examples: Lovable, V0
Use case: Quickly build simple websites or applications based on image or text prompts.
Note: use these tools to make design prototypes instead of building functional products – these tools face challenges when projects must adhere to specific brand guidelines or integrate with existing systems.

Category: Tools for professional developers
Examples: Cursor, Cline, Microsoft Copilot, Windsurf
Use case: Enhance the productivity of experienced engineers. Strong technical use cases include:
• Writing tests
• Implementing evals
• Implementing known patterns
• Finding relevant files in a codebase
• Handling routine coding work

Category: Fully autonomous coding agents
Examples: Devin by Cognition Labs, Codex from OpenAI
Use case: Own small tasks end-to-end, creating software with minimal human input.

What are the different phases of scoping work and leveraging AI to code within your product?

A structured approach lets you use AI as a partner, not an instant solution – AI can dramatically accelerate output when its work receives human guidance, context, and oversight. The 3-step process you would use to scope and build something manually applies here, though it might need to be iterated or structured differently depending on the project.

Explore – use the LLM to build context and understanding before any code is written.
• Explain what you are trying to build.
• Share information about similar existing solutions or potentially relevant approaches.
• Ask the LLM to recommend various implementation strategies.
• Discuss those options.

Plan – break your implementation strategy down into manageable steps.
• Create a detailed plan that comprises specific, clear, concise, and incremental steps.
• Save those steps in a document that the model can refer to later.
Note: during this phase, think of the LLM as an intern that is good at coming up with ideas but needs specific directions to get anything done.

Build – implement step by step.
• Request that the LLM complete one step from your plan.
• Review the implementation and make any necessary adjustments.
• Have the LLM update the document checklist based on what was done.
• Repeat.
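The plan document saved during the Plan phase can be as simple as a markdown checklist the model updates after each Build step (the file name, feature, and steps below are illustrative, not from the source):

```markdown
# Plan: add rate limiting to the search endpoint

- [x] Step 1: Add a RateLimiter class with a token-bucket implementation.
- [ ] Step 2: Write unit tests for RateLimiter (empty bucket, refill, burst).
- [ ] Step 3: Wire RateLimiter into the search request handler.
- [ ] Step 4: Add a config flag to disable limiting in local development.
```

Because each step is small and checkable, both you and the LLM can see exactly what has been done and what comes next.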

How can you use AI for research during the Explore phase?  

Use voice mode to brainstorm – talk through your ideas, questions, and considerations and have ChatGPT record it. Even if you ramble, this approach lets you explore concepts more fluidly, create a comprehensive record that you can reference later or transfer to other tools if needed, and provide extensive context for the LLM.

Generate a design document – instruct the LLM to generate formal documentation once the exploratory conversation is complete. Explain the type of document you want it to write and provide examples of previous design documents you’ve written. Both you and the LLM can refer back to this document throughout the project.

How should you approach using Cursor? What should you use it for?

The more you use Cursor, the better you’ll get – there is a learning curve that prevents some users from envisioning how useful Cursor could become to their dev process. Use Cursor to do mundane things to increase your comfort level with using natural language to code, then explore how it can help execute more complicated work.

Be realistic when estimating the value add that comes from speed – engineers often protest using Cursor because they know how to write code manually. However, if you time yourself writing something manually and compare it to the speed with which you can verbally dictate and have Cursor follow your instructions, Cursor will almost always be faster. To use a soccer analogy, manual coding is dribbling a ball down a field by yourself; using Cursor is passing the ball to get it there faster.

Cursor can be used to execute a wide range of tasks, including:

  • Development
    • Make presentations
    • Create internal admin tools
    • Handle merge conflicts
    • Debug long tail Sentry errors
    • Create shared codemods and lint rules
  • Documentation & APIs
    • Write documentation for SDKs
    • Generate prompts/evals for RAG
    • Write PR descriptions from commits
    • Stand up API endpoints
    • Create CLIs around functions
  • Code Quality
    • Add generics to shared code
    • Pick better names for variables
    • Learn areas of new codebases
    • Debug production issues

Note: don’t forget to use git when using Cursor – engineers often lose work because they forget to save version histories when using Cursor.

Examples of How to Use Cursor for Common Activities
Activity: Write better tests
• Explore – share context about the files you need to test. Let Cursor identify related files and dependencies. Provide any information it misses, and talk through your testing goals and options.
• Plan – create a structured markdown file. Provide examples and instructions for how to generate a plan, and have Cursor create a checklist for each individual test it needs to write and implement.
• Build – commit to git and have Cursor execute the steps one by one.
Note: tests built in Cursor retain their context, which makes it easier to create additional tests in the future.

Activity: Discover your system’s limits (load testing)
• Explore – since scripts like this are often one-off explorations, use the LLM to help you understand vLLM deployment performance.
• Plan – skip the heavy planning since this is a quick throwaway script, but still use Cursor to help you come up with ideas for what to test within the system.
• Build – have Cursor go through the same process you would use to build this manually.

Activity: Handle feedback faster (PR comments)
• Explore – go into GitHub and screenshot each PR comment (including file names), feed them into a single Cursor context, verify the OCR, and ask it to list out the comments.
• Plan – ask Cursor to group all related changes and create a plan file with name, etc.
• Build – have Cursor read the planning file, work through each item one by one, and suggest fixes. Implement simple fixes directly; for larger changes, either edit yourself or use more prompting.
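A throwaway load-test script of the kind described above might look like the following sketch. It measures throughput at several concurrency levels against a stub handler; the `fake_request` function is a placeholder you would replace with a real HTTP call to your deployment (endpoint and payload are not specified in the source):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_request(prompt: str) -> str:
    """Stand-in for a real call to an inference endpoint.

    In a real load test, replace this with an HTTP request to your
    deployment; the 10 ms sleep simulates network + inference latency.
    """
    time.sleep(0.01)
    return f"response to {prompt}"

def run_load_test(num_requests: int, concurrency: int) -> float:
    """Fire num_requests at the given concurrency; return requests/sec."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        results = list(pool.map(fake_request,
                                (f"prompt {i}" for i in range(num_requests))))
    elapsed = time.perf_counter() - start
    assert len(results) == num_requests
    return num_requests / elapsed

# Sweep concurrency levels to find where throughput stops improving.
for c in (1, 4, 16):
    print(f"concurrency={c}: {run_load_test(50, c):.0f} req/s")
```

Throughput should rise with concurrency until the backend saturates; the point where it flattens is the system limit you are looking for.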

How can you ensure that an LLM understands your specific tech stack and environment constraints?

Provide the context tools use to generate usable outputs:

  • Linting helps produce code that complies with the standards of your codebase – lint rules encode how to code, preferred patterns and patterns to avoid, and common errors that should be prevented.
  • Specify the functions and tools that already exist in the codebase – if you don’t mention these, Cursor will reinvent them. Older, more established codebases require more of this context than newer ones.
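For a Python codebase, lint rules of this kind could be encoded in a `pyproject.toml` section for a linter such as ruff (the rule selection and banned import below are illustrative, not from the source):

```toml
# pyproject.toml (illustrative) – rules AI tools see via lint feedback
[tool.ruff.lint]
select = ["E", "F", "B", "I"]   # errors, pyflakes, bugbear, import sorting
ignore = ["E501"]               # long lines handled by the formatter

# Steer generated code toward existing helpers instead of reinvented ones
[tool.ruff.lint.flake8-tidy-imports.banned-api]
"requests".msg = "Use the shared http_client helper instead."
```

Running the linter on AI-generated code (or letting the tool read lint output directly) turns these rules into concrete, repeatable feedback.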

Type systems provide crucial guardrails – strong typing helps AI tools understand expected inputs and outputs and results in better-generated code with fewer runtime errors.
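As a small illustration of the point above, explicit type hints narrow what a generation tool can plausibly produce and make incorrect suggestions fail fast (the function below is a hypothetical example, not from the source):

```python
from typing import Optional

def parse_port(value: str) -> Optional[int]:
    """Return the port number if value is a valid TCP port, else None.

    The signature alone tells an AI tool the input is a string, the
    output is an int or None, and no exception is expected for bad input.
    """
    if not value.isdigit():
        return None
    port = int(value)
    return port if 0 < port < 65536 else None
```

A type checker such as mypy can then flag generated call sites that ignore the `None` case before the code ever runs.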

Run incremental tests during development – test small pieces of functionality as they’re implemented to catch issues early instead of waiting to test large sections together.  
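In practice, this means adding a small test immediately after each behavior is implemented rather than batching verification at the end. A minimal sketch, using a hypothetical `slugify` helper and plain asserts:

```python
import re

def slugify(title: str) -> str:
    """Lowercase, strip punctuation, and join words with hyphens."""
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)

# Test each behavior as soon as it is implemented, not at the end.
def test_slugify_basic():
    assert slugify("Hello World") == "hello-world"

def test_slugify_punctuation():
    assert slugify("AI, in Dev!") == "ai-in-dev"

test_slugify_basic()
test_slugify_punctuation()
```

If the second behavior breaks something, you find out while the change is still one small step, which keeps debugging cheap.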

Have LLMs test integrations – testing is commonly skipped if engineers run out of time on a project, but AI tools make it easy to write tests. Create a test guide plan you can use to make sure all systems are working properly.

Team Considerations

How should engineering leaders encourage use of AI throughout their teams? 

Upskill a core group of influential engineers first – this “SWAT” team of respected technical leaders (often the CTO, 2 staff engineers, a senior DevOps person, and an AI expert) leads the way by becoming proficient with AI tools and creating internal champions who can train others.

Create dedicated communication channels for AI questions – establish a Slack channel where engineers can ask for help (e.g., “I can’t figure out how to do X in Cursor”) and designate an “AI expert” who will respond to each question with suggestions or instructions by the end of the day.  

Highlight success stories during team meetings – encourage people to share ways that Cursor has saved them time during weekly engineering syncs.

Engineering leaders should lead by example – get senior team leaders to write code (which many of them haven’t done in a long time) using AI tools. Their use and code productivity will encourage the rest of the team to try applying those tools to their own work.  

How does leveraging AI in development impact how engineers spend their time? Which skills become more important?

Engineers spend more time reviewing code than writing it manually – regardless of how code is written, engineers remain responsible for its quality. They must evaluate whether AI-generated code is correct and become skilled at testing, validating, and assessing the overall quality of the code they’re creating.

Higher-level thinking and architecture become more central to the engineer’s role – a greater working knowledge of system design, integration patterns, and overall architecture enables engineers to give AI tools better direction and therefore receive more maintainable outputs.

Documentation and context-sharing are essential – engineers must effectively communicate project requirements and constraints to AI systems, which respond better to clear thinking and communication. 

What are the most common objections from engineers and how do you overcome them?

Objection: “I could code faster myself because I know the codebase.”
• Test whether this is true by timing both options.
• Knowing a codebase well will allow you to use Cursor better.

Objection: “It does too much and makes mistakes.” / “It will fail.”
• Break tasks down into smaller sub-tasks instead of asking it to create an end-to-end solution.
• Don’t assume that if it fails once, it will fail every time.
• Learning the tools will allow you to better support more junior engineers who will need to use them.

Objection: “It only works if you’re working on a new codebase.”
• As long as you can structure the context and problems you ask for help on, it’s possible to work on even large codebases.

Objection: “It takes away all the joy in writing code.”
• You can still code, but some activities might be more effectively executed using AI tools.

Objection: “My neovim setup is too good.”
• Times and tooling have changed, and it’s necessary to adapt. neovim made sense when typing was the limiting factor; now the bottleneck is generating and articulating ideas.

Who should own the set of tools and practices your organization uses to develop with AI? 

The CTO and staff engineers should own AI development practices – senior technical leaders have the respect and credibility needed to drive adoption of new tools and approaches.

They should also ensure that rigorous documentation is maintained – use a monorepo or component library to prevent errors from creeping into AI-generated code and to keep elements consistent across your codebase. Component libraries should be carefully maintained and easily accessible.

Overall

How do you stay up to date as AI tools change quickly?

Use the LocalLLaMA subreddit (r/LocalLLaMA) – people regularly share thoughts, experiences, and advice there on deploying ML and AI applications.

Follow and/or chat with people who are curious and doing interesting things with AI – very few worthwhile insights are posted on LinkedIn, but X and even group chats can be excellent sources of inspiration. Find other people who are wrestling with the same questions and problems as you and create an open line of communication.

What are best practices for using AI to code effectively? 

Be comfortable iterating on a request 5+ times – don’t expect AI to produce the perfect solution the first time around. Asking the same question can yield different answers and changing your approach can help the LLM solve the problem in a more useful way.

Don’t give an LLM too much work to do at one time – executing many small tasks instead of a few big tasks increases the chances that you get useful, accurate outputs.

Ask AI to tell you what it’s doing before you have it execute – this simple step can save significant time and effort by flagging issues before they appear and giving you an opportunity to proactively catch errors.

What are common pitfalls? 

AI tools compound a lack of foundational knowledge – if you lack the structural and technical context to understand the implications of the paths an LLM suggests, the increased speed and breadth of work it lets you produce can create more problems than manual coding would.

Using AI to generate code without considering system design – if you write code without considering how it fits into a broader environment, your outputs won’t be compatible with other systems and tools. 

Vibe coding in a non-exploratory context is impractical – if you’re trying to work within an existing large codebase, the model will start deleting things or referencing files that don’t exist. The larger your codebase, the more practical you must be about your prompts.
