move powershell script & update readme

parent e5bbecaf2d
commit 735c77acf1

2 changed files with 23 additions and 19 deletions

README.md (28 changes)
```diff
@@ -289,29 +289,33 @@ python3 convert.py models/gpt4all-7B/gpt4all-lora-quantized.bin
 - The newer GPT4All-J model is not yet supported!
 
-### Obtaining and verifying the Facebook LLaMA original model and Stanford Alpaca model data
+### Obtaining the Facebook LLaMA original model and Stanford Alpaca model data
 
 - **Under no circumstances should IPFS, magnet links, or any other links to model downloads be shared anywhere in this repository, including in issues, discussions, or pull requests. They will be immediately deleted.**
 - The LLaMA models are officially distributed by Facebook and will **never** be provided through this repository.
 - Refer to [Facebook's LLaMA repository](https://github.com/facebookresearch/llama/pull/73/files) if you need to request access to the model data.
-- Please verify the [sha256 checksums](SHA256SUMS) of all downloaded model files to confirm that you have the correct model data files before creating an issue relating to your model files.
-- The following command will verify if you have all possible latest files in your self-installed `./models` subdirectory:
-
-`sha256sum --ignore-missing -c SHA256SUMS` on Linux
-
-or
-
-`shasum -a 256 --ignore-missing -c SHA256SUMS` on macOS
+
+### Verifying the Facebook LLaMA original model and Stanford Alpaca model data
+
+Please verify the [sha256 checksums](SHA256SUMS) of all downloaded model files to confirm that you have the correct model data files before creating an issue relating to your model files. The following commands will verify if you have all possible latest files in your self-installed `./models` subdirectory:
+
+- on Linux: `sha256sum --ignore-missing -c SHA256SUMS`
+- on macOS: `shasum -a 256 --ignore-missing -c SHA256SUMS`
+- on Windows there is a PowerShell script available which can perform the verification; use the following steps to run it:
+  1. Open PowerShell
+  2. `cd <PathToLLama\scripts>`
+  3. `.\check_SHA256_windows.ps1`
 
-- If your issue is with model generation quality, then please at least scan the following links and papers to understand the limitations of LLaMA models. This is especially important when choosing an appropriate model size and appreciating both the significant and subtle differences between LLaMA models and ChatGPT:
+If you have issues with model generation quality, then please at least scan the following links and papers to understand the limitations of LLaMA models. This is especially important when choosing an appropriate model size and appreciating both the significant and subtle differences between LLaMA models and ChatGPT:
 - LLaMA:
   - [Introducing LLaMA: A foundational, 65-billion-parameter large language model](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/)
   - [LLaMA: Open and Efficient Foundation Language Models](https://arxiv.org/abs/2302.13971)
 - GPT-3
   - [Language Models are Few-Shot Learners](https://arxiv.org/abs/2005.14165)
 - GPT-3.5 / InstructGPT / ChatGPT:
   - [Aligning language models to follow instructions](https://openai.com/research/instruction-following)
   - [Training language models to follow instructions with human feedback](https://arxiv.org/abs/2203.02155)
 
 ### Perplexity (measuring model quality)
```
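The verification commands above expect a `SHA256SUMS` file in the coreutils format, one `<hash>  <filename>` entry per line. A minimal sketch of the round trip on Linux, using a throwaway file with a hypothetical model name rather than real model data:

```shell
# Demo of the SHA256SUMS round trip (throwaway file, not a real model).
cd "$(mktemp -d)"
printf 'not a real model\n' > ggml-model-q4_0.bin   # hypothetical file name
sha256sum ggml-model-q4_0.bin > SHA256SUMS          # writes one "<hash>  <file>" line
# --ignore-missing skips SHA256SUMS entries for files you did not download
sha256sum --ignore-missing -c SHA256SUMS            # prints "ggml-model-q4_0.bin: OK"
```

On macOS, substituting `shasum -a 256` for `sha256sum` gives the same behavior, which is why the README lists the two commands side by side.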
check_SHA256_windows.ps1 (14 changes)

```diff
@@ -1,17 +1,17 @@
-# Get the working directory
-$modelsPath = $pwd
+# Define the path to the llama directory (parent folder of script directory)
+$llamaPath = Split-Path -Path $pwd -Parent
 
 # Define the file with the list of hashes and filenames
-$hashListPath = "SHA256SUMS"
+$hashListFile = Join-Path -Path $llamaPath -ChildPath "SHA256SUMS"
 
 # Check if the hash list file exists
-if (-not(Test-Path -Path $hashListPath)) {
-    Write-Error "Hash list file not found: $hashListPath"
+if (-not(Test-Path -Path $hashListFile)) {
+    Write-Error "Hash list file not found: $hashListFile"
     exit 1
 }
 
 # Read the hash file content and split it into an array of lines
-$hashList = Get-Content -Path $hashListPath
+$hashList = Get-Content -Path $hashListFile
 $hashLines = $hashList -split "`n"
 
 # Create an array to store the results
@@ -24,7 +24,7 @@ foreach ($line in $hashLines) {
     $hash, $filename = $line -split " "
 
     # Get the full path of the file by joining the models path and the filename
-    $filePath = Join-Path -Path $modelsPath -ChildPath $filename
+    $filePath = Join-Path -Path $llamaPath -ChildPath $filename
 
     # Informing user of the progress of the integrity check
     Write-Host "Verifying the checksum of $filePath"
```