GitHub Copilot - Patterns & Exercises
Collaboration

Coaching on prompts

Review your peers' prompts and give them feedback that helps them improve.


Note: while this sounds reasonable, it is not yet a systematic or established pattern.

Description

An AI tool like GitHub Copilot can make output look more polished than it really is. Code that appears perfect in review may have been generated inefficiently or may be missing something important. Coaching on the generative process is therefore essential: it makes developers aware of potential pitfalls and helps them produce efficient, accurate code.

In modern software development, coaching is more than just a review; it's an opportunity to guide, inspire, and improve. This pattern emphasizes coaching peers on the quality of prompts used in code generation, particularly with AI-driven tools like GitHub Copilot. Coaching aims to enhance not only the prompts but also the understanding of the generative process.

Example

Imagine a scenario where a teammate has created a prompt to generate code for a specific task. Your role is to provide coaching to improve the prompt's clarity and understand the underlying generative process.

Original Prompt:

"Create a function to find prime numbers within a range."

Coached Prompt:

"Develop a Python function that takes two integers as input and returns a list of prime numbers within that range. Ensure the function efficiently handles different ranges, including edge cases."

Exercise

  • Exercise 1: Provide coaching on a prompt from your teammate. Identify areas for enhancement, give insights into the generative process, and explain why the changes are beneficial.

  • Exercise 2: Analyze a code snippet previously generated by GitHub Copilot, focusing on both the prompt and the underlying generative process. Offer coaching on how it could be more specific and efficient.

  • Exercise 3: Practice crafting your own prompts for various programming scenarios. Engage with peers for coaching, emphasizing both the quality of prompts and the understanding of the code generation process.

Checklist for Further Learning

  • Have I recognized the distinction between mere reviewing and coaching for continuous improvement?

  • How can I be more effective in my coaching to enhance both the prompt quality and understanding of code generation?

  • What collaborative tools and practices can enhance the coaching process within my team?

  • How can consistent coaching lead to more efficient and accurate code generation, particularly when utilizing AI-powered tools?