cosmopolitan/third_party/radpajama
Justine Tunney 791f79fcb3
Make improvements
- We now serialize the file descriptor table when spawning / executing
  processes on Windows. This means you can now inherit more stuff than
  just standard i/o. It's needed by bash, which duplicates the console
  to file descriptor #255. We also now do a better job serializing the
  environment variables, so you're less likely to encounter E2BIG when
  using your bash shell. We also no longer coerce environ to uppercase.
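  To illustrate (not code from this change): with only standard POSIX
  calls, a descriptor parked beyond the stdio trio now survives a spawn
  on Windows too. The fd number, paths, and shell command below are
  made up for the example.

      #include <fcntl.h>
      #include <spawn.h>
      #include <sys/wait.h>
      #include <unistd.h>

      extern char **environ;

      int main(void) {
        // park a log file on fd 10, beyond standard i/o, much like
        // bash parks the console on fd #255
        int fd = open("log.txt", O_CREAT | O_WRONLY | O_APPEND, 0644);
        dup2(fd, 10);
        close(fd);
        // the child inherits fd 10, since it wasn't opened O_CLOEXEC
        char *argv[] = {"sh", "-c", "echo inherited >&10", 0};
        pid_t pid;
        if (!posix_spawn(&pid, "/bin/sh", 0, 0, argv, environ)) {
          int ws;
          waitpid(pid, &ws, 0);
        }
      }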

- execve() on Windows now remotely controls its parent process to make
  it spawn a replacement for itself. That lets it terminate immediately
  once the spawn succeeds, rather than lingering for the program's
  lifetime as a shell process that proxies the exit code. When the
  process worker thread running in the parent sees the child die, it's
  given a handle to the new child, which replaces it in the process
  table.
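  The user-visible contract is unchanged; here's a sketch of it using
  standard calls (the /bin/false path is just illustrative):

      #include <stdio.h>
      #include <sys/wait.h>
      #include <unistd.h>

      int main(void) {
        pid_t pid = fork();
        if (!pid) {
          // child replaces itself; on Windows the replacement is now
          // spawned by the parent, and this process exits right away
          char *argv[] = {"false", 0};
          execv("/bin/false", argv);
          _exit(127);  // only reached if exec failed
        }
        int ws;
        waitpid(pid, &ws, 0);  // parent still sees the real status
        printf("exit status %d\n", WEXITSTATUS(ws));  // prints 1
      }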

- execve() and posix_spawn() on Windows will now provide CreateProcess
  an explicit handle list. This allows us to remove handle locks, which
  enables better fork/spawn concurrency with seriously correct thread
  safety. Other codebases like Go use the same technique. On the other
  hand, fork() still favors the conventional WIN32 inheritance approach,
  which can be a little bit messy, but is *controlled* by guaranteeing
  perfectly clean slates at both the spawning and execution boundaries.
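  For reference, the stock WIN32 shape of that technique (a hedged
  sketch of the general pattern, not this codebase's internals):

      #include <windows.h>

      // spawn a child that inherits exactly the n given handles; each
      // handle must also be marked inheritable, e.g. with
      // SetHandleInformation(h, HANDLE_FLAG_INHERIT, HANDLE_FLAG_INHERIT)
      BOOL spawn_with_handles(char *cmdline, HANDLE *handles, DWORD n) {
        SIZE_T size = 0;
        InitializeProcThreadAttributeList(NULL, 1, 0, &size);
        LPPROC_THREAD_ATTRIBUTE_LIST attrs =
            HeapAlloc(GetProcessHeap(), 0, size);
        InitializeProcThreadAttributeList(attrs, 1, 0, &size);
        UpdateProcThreadAttribute(attrs, 0,
                                  PROC_THREAD_ATTRIBUTE_HANDLE_LIST,
                                  handles, n * sizeof(HANDLE),
                                  NULL, NULL);
        STARTUPINFOEXA si = {0};
        si.StartupInfo.cb = sizeof(si);
        si.lpAttributeList = attrs;
        PROCESS_INFORMATION pi;
        BOOL ok = CreateProcessA(NULL, cmdline, NULL, NULL,
                                 TRUE,  // inherit, limited by the list
                                 EXTENDED_STARTUPINFO_PRESENT,
                                 NULL, NULL, &si.StartupInfo, &pi);
        DeleteProcThreadAttributeList(attrs);
        HeapFree(GetProcessHeap(), 0, attrs);
        if (ok) { CloseHandle(pi.hThread); CloseHandle(pi.hProcess); }
        return ok;
      }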

- sigset_t is now 64 bits. Having it be 128 bits was a mistake, since
  there's no reason to use that many signals and FreeBSD is the only
  system that supports them. Using the system word size makes signal
  mask manipulation on Windows very fast. Furthermore, @asyncsignalsafe
  funcs have been rewritten on Windows to take advantage of signal
  masking, now that it's much more pleasant to use.
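  For example, the standard masking pattern, which is now a cheap
  word-sized operation:

      #include <signal.h>

      int main(void) {
        sigset_t block, old;
        sigemptyset(&block);
        sigaddset(&block, SIGINT);
        sigprocmask(SIG_BLOCK, &block, &old);  // mask is one word now
        /* ... critical section that must not see SIGINT ... */
        sigprocmask(SIG_SETMASK, &old, 0);     // restore previous mask
      }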

- All the overlapped i/o code on Windows has been rewritten for pretty
  good signal and cancelation safety. We're now able to ensure overlap
  data structures are cleaned up, so long as you don't longjmp() out of
  a signal handler that interrupted an i/o operation. Latencies are
  also improved thanks to the removal of lots of "busy wait" code.
  Waits should be optimal for everything except poll(), which shall be
  the last and final demon we slay in the win32 i/o horror show.
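  In practice this means the classic EINTR retry loop is all a caller
  needs, since an interrupted operation no longer leaks overlap state
  (read_restarting is just an illustrative name):

      #include <errno.h>
      #include <unistd.h>

      ssize_t read_restarting(int fd, void *buf, size_t n) {
        ssize_t rc;
        do rc = read(fd, buf, n);  // signal delivery yields EINTR
        while (rc == -1 && errno == EINTR);
        return rc;
      }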

- getrusage() on Windows is now able to report RUSAGE_CHILDREN as well
  as RUSAGE_SELF, thanks to aggregation in the process manager thread.
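  A quick way to see it, using only standard calls (the busy loop just
  generates measurable cpu time):

      #include <stdio.h>
      #include <sys/resource.h>
      #include <sys/wait.h>
      #include <unistd.h>

      int main(void) {
        if (!fork()) {
          for (volatile long i = 0; i < 100000000; ++i) {}
          _exit(0);
        }
        wait(0);  // children are counted once they've been waited on
        struct rusage ru;
        getrusage(RUSAGE_CHILDREN, &ru);
        printf("children used %ld.%06lds of user time\n",
               (long)ru.ru_utime.tv_sec, (long)ru.ru_utime.tv_usec);
      }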
2023-10-08 08:59:53 -07:00
scripts                   Get radpajama to build                            2023-05-13 20:44:36 -07:00
common-gptneox.cc         Fix small matters and improve sysconf()           2023-08-17 00:32:11 -07:00
common-gptneox.h          Fix small matters and improve sysconf()           2023-08-17 00:32:11 -07:00
copy-gptneox.cc           Make changes needed for new demo                  2023-06-15 23:22:49 -07:00
gptneox-util.h            Switch public headers to getopt_long() entirely   2023-07-02 19:57:43 -07:00
gptneox.cc                Replace COSMO define with _COSMO_SOURCE           2023-08-13 20:55:04 -07:00
gptneox.h                 Make more ML improvements                         2023-05-16 08:07:23 -07:00
LICENSE                   Import radpajama (a redpajama.cpp fork) (#814)    2023-05-11 07:12:08 -07:00
main-redpajama-chat.cc    Fix warnings                                      2023-09-01 20:50:18 -07:00
main-redpajama.cc         Make improvements                                 2023-07-10 04:35:14 -07:00
quantize-gptneox.cc       Make changes needed for new demo                  2023-06-15 23:22:49 -07:00
radpajama.mk              Make improvements                                 2023-10-08 08:59:53 -07:00
README.cosmo              Import radpajama (a redpajama.cpp fork) (#814)    2023-05-11 07:12:08 -07:00
README.md                 Get radpajama to build                            2023-05-13 20:44:36 -07:00

ggml Support for the RedPajama Model

Acknowledgement

We highly appreciate the great effort behind the gptneox.cpp fork; our support for the RedPajama model is based mainly on that implementation. We extended the model configuration, fixed a bug that occurs when the use_parallel_residual flag is set to False in the original implementation, and also extended the chat model for RedPajama.
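To make the flag concrete, here is a toy sketch (not this project's code; attn and mlp are stand-ins, and layer norms are omitted) of the two residual layouts that use_parallel_residual selects between in a GPT-NeoX-style block:

      #define D 8  // toy hidden size

      static void attn(const float *in, float *out) {  // stand-in
        for (int i = 0; i < D; i++) out[i] = 0.5f * in[i];
      }
      static void mlp(const float *in, float *out) {   // stand-in
        for (int i = 0; i < D; i++) out[i] = 0.25f * in[i];
      }

      void block(float *x, int use_parallel_residual) {
        float a[D], m[D];
        if (use_parallel_residual) {
          attn(x, a);  // both sublayers read the same input
          mlp(x, m);
          for (int i = 0; i < D; i++) x[i] += a[i] + m[i];
        } else {
          attn(x, a);  // sequential: mlp sees the attention output
          for (int i = 0; i < D; i++) x[i] += a[i];
          mlp(x, m);
          for (int i = 0; i < D; i++) x[i] += m[i];
        }
      }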

Usage:

RedPajama Chat model:

  • Make the code:

      make redpajama-chat quantize-gptneox
    
  • Prepare the RedPajama model (f16 and q4_0) for ggml:

      bash ./examples/redpajama/scripts/install-RedPajama-INCITE-Chat-3B-v1.sh
    
  • Run the RedPajama chat model (f16):

      ./redpajama-chat -m ./examples/redpajama/models/pythia/ggml-RedPajama-INCITE-Chat-3B-v1-f16.bin \
      -c 2048 \
      -b 128 \
      -n 1 \
      -t 8 \
      --instruct \
      --color \
      --top_k 30 \
      --top_p 0.95 \
      --temp 0.8 \
      --repeat_last_n 3 \
      --repeat_penalty 1.1 \
      --seed 0
    

    Note that you may need to install torch and transformers to run the above scripts, e.g.:

      pip install torch==2.0.0
      pip install transformers==4.28.1
    
  • Run the RedPajama chat model (q4_0):

      ./redpajama-chat -m ./examples/redpajama/models/pythia/ggml-RedPajama-INCITE-Chat-3B-v1-q4_0.bin \
      -c 2048 \
      -b 128 \
      -n 1 \
      -t 8 \
      --instruct \
      --color \
      --top_k 30 \
      --top_p 0.95 \
      --temp 0.8 \
      --repeat_last_n 3 \
      --repeat_penalty 1.1 \
      --seed 0
    
  • Run other quantized versions of the RedPajama Chat model (make sure you have the f16 model prepared before you run this):

    • Make the code to quantize the model if you have not already:

      make quantize-gptneox
      
    • Generate the quantized model; the supported types include q4_0, q4_1, q4_2, q5_0, q5_1, and q8_0. For example, to run q4_1, you need to do the following conversion:

      python ./examples/redpajama/scripts/quantize-gptneox.py ./examples/redpajama/models/pythia/ggml-RedPajama-INCITE-Chat-3B-v1-f16.bin --quantize-output-type q4_1
      
    • Then you can chat with the quantized model:

      ./redpajama-chat -m ./examples/redpajama/models/pythia/ggml-RedPajama-INCITE-Chat-3B-v1-q4_1.bin \
      -c 2048 \
      -b 128 \
      -n 1 \
      -t 8 \
      --instruct \
      --color \
      --top_k 30 \
      --top_p 0.95 \
      --temp 0.8 \
      --repeat_last_n 3 \
      --repeat_penalty 1.1 \
      --seed 0
      

RedPajama Base/Instruct model:

  • Make the code:

      make redpajama quantize-gptneox
    
  • Prepare the RedPajama Base/Instruct model (f16 and q4_0) for ggml:

      bash ./examples/redpajama/scripts/install-RedPajama-INCITE-Base-3B-v1.sh
    
      # Or 
    
      bash ./examples/redpajama/scripts/install-RedPajama-INCITE-Instruct-3B-v1.sh
    
  • Run other quantized versions of the RedPajama Base/Instruct model (make sure you have the f16 model prepared before you run this). Generate the quantized model; the supported types include q4_0, q4_1, q4_2, q5_0, q5_1, and q8_0. For example, for RedPajama-Base q8_0, you need to do the following conversion:

      python ./examples/redpajama/scripts/quantize-gptneox.py ./examples/redpajama/models/pythia/ggml-RedPajama-INCITE-Base-3B-v1-f16.bin --quantize-output-type q8_0
    
  • Run the RedPajama Base/Instruct model (e.g., RedPajama-Instruct q8_0):

      ./redpajama -m ./examples/redpajama/models/pythia/ggml-RedPajama-INCITE-Instruct-3B-v1-q8_0.bin \
      -c 2048 \
      -b 128 \
      -n 1 \
      -t 8 \
      --color \
      --top_k 30 \
      --top_p 0.95 \
      --temp 0.8 \
      --repeat_last_n 3 \
      --repeat_penalty 1.1 \
      --seed 0 \
      --n_predict 256 \
      --verbose-prompt \
      -p "How to schedule a tour to Anfield:"
    

Attribution

The following files are covered by an MIT license and were taken from:

https://github.com/byroneverson/gptneox.cpp

Thank you Byron.

common-gptneox.cpp
copy-gptneox.cpp
gptneox.cpp
quantize-gptneox.cpp
common-gptneox.h
gptneox-util.h
gptneox.h
convert_gptneox_to_ggml.py
quantize-gptneox.py