From 9543d01a8a359658f50c53263cff2b249bf1cbaa Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Johannes=20G=C3=A4=C3=9Fler?=
Date: Wed, 20 Nov 2024 11:43:49 +0100
Subject: [PATCH] GitHub: ask for more info in issues [no ci]

---
 .github/ISSUE_TEMPLATE/01-bug-low.yml      | 53 ++++++++++++++++++++--
 .github/ISSUE_TEMPLATE/02-bug-medium.yml   | 53 ++++++++++++++++++++--
 .github/ISSUE_TEMPLATE/03-bug-high.yml     | 53 ++++++++++++++++++++--
 .github/ISSUE_TEMPLATE/04-bug-critical.yml | 53 ++++++++++++++++++++--
 4 files changed, 200 insertions(+), 12 deletions(-)

diff --git a/.github/ISSUE_TEMPLATE/01-bug-low.yml b/.github/ISSUE_TEMPLATE/01-bug-low.yml
index 54785854f..275cf7d96 100644
--- a/.github/ISSUE_TEMPLATE/01-bug-low.yml
+++ b/.github/ISSUE_TEMPLATE/01-bug-low.yml
@@ -5,19 +5,32 @@ labels: ["bug-unconfirmed", "low severity"]
 body:
   - type: markdown
     attributes:
-      value: |
+      value: >
         Thanks for taking the time to fill out this bug report!
         Please include information about your system, the steps to reproduce the bug,
         and the version of llama.cpp that you are using.
-        If possible, please provide a minimal code example that reproduces the bug.
+        If you encountered the bug using a third-party frontend (e.g. ollama),
+        please reproduce the bug using llama.cpp only.
+        The `llama-cli` binary can be used for simple and reproducible model inference.
   - type: textarea
     id: what-happened
     attributes:
       label: What happened?
-      description: Also tell us, what did you expect to happen?
+      description: >
+        Please give us a summary of what happened.
+        If the problem is not obvious: what did you expect to happen?
       placeholder: Tell us what you see!
     validations:
       required: true
+  - type: textarea
+    id: hardware
+    attributes:
+      label: Hardware
+      description: Which CPUs/GPUs and which GGML backends are you using?
+      placeholder: >
+        e.g. Ryzen 5950X + RTX 4090 (CUDA)
+    validations:
+      required: true
   - type: textarea
     id: version
     attributes:
@@ -42,6 +55,40 @@ body:
         - Other? (Please let us know in description)
     validations:
       required: false
+  - type: textarea
+    id: model
+    attributes:
+      label: Model
+      description: >
+        If applicable: which model at which quantization were you using when encountering the bug?
+        If you downloaded a GGUF file off of Huggingface, please provide a link.
+      placeholder: >
+        e.g. Meta LLaMA 3.1 Instruct 8b q4_K_M
+    validations:
+      required: false
+  - type: textarea
+    id: steps_to_reproduce
+    attributes:
+      label: Steps to Reproduce
+      description: >
+        Please tell us how to reproduce the bug.
+        If you can narrow down the bug to specific hardware, compile flags, or command line arguments,
+        that information would be very much appreciated by us.
+      placeholder: >
+        e.g. when I run llama-cli with -ngl 99 I get garbled outputs.
+        When I use -ngl 0 it works correctly.
+        Here are the exact commands that I used: ...
+    validations:
+      required: true
+  - type: textarea
+    id: first_bad_commit
+    attributes:
+      label: First Bad Commit
+      description: >
+        If the bug was not present on an earlier version: when did it start appearing?
+        If possible, please do a git bisect and identify the exact commit that introduced the bug.
+    validations:
+      required: false
   - type: textarea
     id: logs
     attributes:
diff --git a/.github/ISSUE_TEMPLATE/02-bug-medium.yml b/.github/ISSUE_TEMPLATE/02-bug-medium.yml
index a6285c6f0..ebd80d91b 100644
--- a/.github/ISSUE_TEMPLATE/02-bug-medium.yml
+++ b/.github/ISSUE_TEMPLATE/02-bug-medium.yml
@@ -5,19 +5,32 @@ labels: ["bug-unconfirmed", "medium severity"]
 body:
   - type: markdown
     attributes:
-      value: |
+      value: >
         Thanks for taking the time to fill out this bug report!
         Please include information about your system, the steps to reproduce the bug,
         and the version of llama.cpp that you are using.
-        If possible, please provide a minimal code example that reproduces the bug.
+        If you encountered the bug using a third-party frontend (e.g. ollama),
+        please reproduce the bug using llama.cpp only.
+        The `llama-cli` binary can be used for simple and reproducible model inference.
   - type: textarea
     id: what-happened
     attributes:
       label: What happened?
-      description: Also tell us, what did you expect to happen?
+      description: >
+        Please give us a summary of what happened.
+        If the problem is not obvious: what did you expect to happen?
       placeholder: Tell us what you see!
     validations:
       required: true
+  - type: textarea
+    id: hardware
+    attributes:
+      label: Hardware
+      description: Which CPUs/GPUs and which GGML backends are you using?
+      placeholder: >
+        e.g. Ryzen 5950X + RTX 4090 (CUDA)
+    validations:
+      required: true
   - type: textarea
     id: version
     attributes:
@@ -42,6 +55,40 @@ body:
         - Other? (Please let us know in description)
     validations:
       required: false
+  - type: textarea
+    id: model
+    attributes:
+      label: Model
+      description: >
+        If applicable: which model at which quantization were you using when encountering the bug?
+        If you downloaded a GGUF file off of Huggingface, please provide a link.
+      placeholder: >
+        e.g. Meta LLaMA 3.1 Instruct 8b q4_K_M
+    validations:
+      required: false
+  - type: textarea
+    id: steps_to_reproduce
+    attributes:
+      label: Steps to Reproduce
+      description: >
+        Please tell us how to reproduce the bug.
+        If you can narrow down the bug to specific hardware, compile flags, or command line arguments,
+        that information would be very much appreciated by us.
+      placeholder: >
+        e.g. when I run llama-cli with -ngl 99 I get garbled outputs.
+        When I use -ngl 0 it works correctly.
+        Here are the exact commands that I used: ...
+    validations:
+      required: true
+  - type: textarea
+    id: first_bad_commit
+    attributes:
+      label: First Bad Commit
+      description: >
+        If the bug was not present on an earlier version: when did it start appearing?
+        If possible, please do a git bisect and identify the exact commit that introduced the bug.
+    validations:
+      required: false
   - type: textarea
     id: logs
     attributes:
diff --git a/.github/ISSUE_TEMPLATE/03-bug-high.yml b/.github/ISSUE_TEMPLATE/03-bug-high.yml
index ff816b937..c14bb70bd 100644
--- a/.github/ISSUE_TEMPLATE/03-bug-high.yml
+++ b/.github/ISSUE_TEMPLATE/03-bug-high.yml
@@ -5,19 +5,32 @@ labels: ["bug-unconfirmed", "high severity"]
 body:
   - type: markdown
     attributes:
-      value: |
+      value: >
         Thanks for taking the time to fill out this bug report!
         Please include information about your system, the steps to reproduce the bug,
         and the version of llama.cpp that you are using.
-        If possible, please provide a minimal code example that reproduces the bug.
+        If you encountered the bug using a third-party frontend (e.g. ollama),
+        please reproduce the bug using llama.cpp only.
+        The `llama-cli` binary can be used for simple and reproducible model inference.
   - type: textarea
     id: what-happened
     attributes:
       label: What happened?
-      description: Also tell us, what did you expect to happen?
+      description: >
+        Please give us a summary of what happened.
+        If the problem is not obvious: what did you expect to happen?
       placeholder: Tell us what you see!
     validations:
       required: true
+  - type: textarea
+    id: hardware
+    attributes:
+      label: Hardware
+      description: Which CPUs/GPUs and which GGML backends are you using?
+      placeholder: >
+        e.g. Ryzen 5950X + RTX 4090 (CUDA)
+    validations:
+      required: true
   - type: textarea
     id: version
     attributes:
@@ -42,6 +55,40 @@ body:
         - Other? (Please let us know in description)
     validations:
       required: false
+  - type: textarea
+    id: model
+    attributes:
+      label: Model
+      description: >
+        If applicable: which model at which quantization were you using when encountering the bug?
+        If you downloaded a GGUF file off of Huggingface, please provide a link.
+      placeholder: >
+        e.g. Meta LLaMA 3.1 Instruct 8b q4_K_M
+    validations:
+      required: false
+  - type: textarea
+    id: steps_to_reproduce
+    attributes:
+      label: Steps to Reproduce
+      description: >
+        Please tell us how to reproduce the bug.
+        If you can narrow down the bug to specific hardware, compile flags, or command line arguments,
+        that information would be very much appreciated by us.
+      placeholder: >
+        e.g. when I run llama-cli with -ngl 99 I get garbled outputs.
+        When I use -ngl 0 it works correctly.
+        Here are the exact commands that I used: ...
+    validations:
+      required: true
+  - type: textarea
+    id: first_bad_commit
+    attributes:
+      label: First Bad Commit
+      description: >
+        If the bug was not present on an earlier version: when did it start appearing?
+        If possible, please do a git bisect and identify the exact commit that introduced the bug.
+    validations:
+      required: false
   - type: textarea
     id: logs
     attributes:
diff --git a/.github/ISSUE_TEMPLATE/04-bug-critical.yml b/.github/ISSUE_TEMPLATE/04-bug-critical.yml
index 7af42a80b..fb37e926b 100644
--- a/.github/ISSUE_TEMPLATE/04-bug-critical.yml
+++ b/.github/ISSUE_TEMPLATE/04-bug-critical.yml
@@ -5,19 +5,32 @@ labels: ["bug-unconfirmed", "critical severity"]
 body:
   - type: markdown
     attributes:
-      value: |
+      value: >
         Thanks for taking the time to fill out this bug report!
         Please include information about your system, the steps to reproduce the bug,
         and the version of llama.cpp that you are using.
-        If possible, please provide a minimal code example that reproduces the bug.
+        If you encountered the bug using a third-party frontend (e.g. ollama),
+        please reproduce the bug using llama.cpp only.
+        The `llama-cli` binary can be used for simple and reproducible model inference.
   - type: textarea
     id: what-happened
     attributes:
       label: What happened?
-      description: Also tell us, what did you expect to happen?
+      description: >
+        Please give us a summary of what happened.
+        If the problem is not obvious: what did you expect to happen?
       placeholder: Tell us what you see!
     validations:
       required: true
+  - type: textarea
+    id: hardware
+    attributes:
+      label: Hardware
+      description: Which CPUs/GPUs and which GGML backends are you using?
+      placeholder: >
+        e.g. Ryzen 5950X + RTX 4090 (CUDA)
+    validations:
+      required: true
   - type: textarea
     id: version
     attributes:
@@ -42,6 +55,40 @@ body:
         - Other? (Please let us know in description)
     validations:
       required: false
+  - type: textarea
+    id: model
+    attributes:
+      label: Model
+      description: >
+        If applicable: which model at which quantization were you using when encountering the bug?
+        If you downloaded a GGUF file off of Huggingface, please provide a link.
+      placeholder: >
+        e.g. Meta LLaMA 3.1 Instruct 8b q4_K_M
+    validations:
+      required: false
+  - type: textarea
+    id: steps_to_reproduce
+    attributes:
+      label: Steps to Reproduce
+      description: >
+        Please tell us how to reproduce the bug.
+        If you can narrow down the bug to specific hardware, compile flags, or command line arguments,
+        that information would be very much appreciated by us.
+      placeholder: >
+        e.g. when I run llama-cli with -ngl 99 I get garbled outputs.
+        When I use -ngl 0 it works correctly.
+        Here are the exact commands that I used: ...
+    validations:
+      required: true
+  - type: textarea
+    id: first_bad_commit
+    attributes:
+      label: First Bad Commit
+      description: >
+        If the bug was not present on an earlier version: when did it start appearing?
+        If possible, please do a git bisect and identify the exact commit that introduced the bug.
+    validations:
+      required: false
   - type: textarea
     id: logs
     attributes: