More LLM questions
parent a02e042eb9
commit 000c4681e4

1 changed file with 8 additions and 3 deletions
@@ -8,7 +8,7 @@ In the context of LLMs, what is decoding?
 In the context of LLMs, what is encoding?
 In the context of LLMs, what is tokenizing?
 In the context of LLMs, what is an embedding?
-In the context of LLMs, what is an quantization?
+In the context of LLMs, what is quantization?
 In the context of LLMs, what is a tensor?
 In the context of LLMs, what is a sparse tensor?
 In the context of LLMs, what is a vector?
@@ -25,7 +25,7 @@ In the context of neural nets, what is cross-entropy?
 In the context of neural nets, what is over-fitting?
 In the context of neural nets, what is under-fitting?
 What is the difference between an interpreted computer language and a compiled computer language?
-What is a debugger?
+In the context of software development, what is a debugger?
 When processing using a GPU, what is off-loading?
 When processing using a GPU, what is a batch?
 When processing using a GPU, what is a block?
@@ -36,4 +36,9 @@ When processing using a GPU, what is a cache?
 When processing using a GPU, what is unified memory?
 When processing using a GPU, what is VRAM?
 When processing using a GPU, what is a kernel?
-When processing using a GPU, what is "metal"?
+When processing using a GPU, what is "metal"?
+In the context of LLMs, what are "Zero-Shot", "One-Shot" and "Few-Shot" learning models?
+In the context of LLMs, what is the "Transformer-model" architecture?
+In the context of LLMs, what is "Multi-Head Attention"?
+In the context of LLMs, what is "Self-Attention"?
+In the context of transformer-model architectures, how do attention mechanisms use masks?