# 5. LLM Architecture

## LLM Architecture
{% hint style="success" %}
The goal of this fifth phase is very simple: **Develop the architecture of the full LLM**. Put everything together, apply all the layers and create all the functions to generate text or transform text to IDs and back.

This architecture will be used for both training and predicting text after it has been trained.
{% endhint %}
An example of LLM architecture from [https://github.com/rasbt/LLMs-from-scratch/blob/main/ch04/01\_main-chapter-code/ch04.ipynb](https://github.com/rasbt/LLMs-from-scratch/blob/main/ch04/01\_main-chapter-code/ch04.ipynb):

A high-level representation can be observed in:

<figure><img src="../../.gitbook/assets/image (3) (1).png" alt="" width="563"><figcaption><p><a href="https://camo.githubusercontent.com/6c8c392f72d5b9e86c94aeb9470beab435b888d24135926f1746eb88e0cc18fb/68747470733a2f2f73656261737469616e72617363686b612e636f6d2f696d616765732f4c4c4d732d66726f6d2d736372617463682d696d616765732f636830345f636f6d707265737365642f31332e776562703f31">https://camo.githubusercontent.com/6c8c392f72d5b9e86c94aeb9470beab435b888d24135926f1746eb88e0cc18fb/68747470733a2f2f73656261737469616e72617363686b612e636f6d2f696d616765732f4c4c4d732d66726f6d2d736372617463682d696d616765732f636830345f636f6d707265737365642f31332e776562703f31</a></p></figcaption></figure>
1. **Input (Tokenized Text)**: The process starts with tokenized text, which is converted into numerical representations.
2. **Token Embedding and Positional Embedding Layer**: The tokenized text is passed through a **token embedding** layer and a **positional embedding layer**, which captures the position of tokens in a sequence, critical for understanding word order.
3. **Transformer Blocks**: The model contains **12 transformer blocks**, each with multiple layers. These blocks repeat the following sequence:
   * **Masked Multi-Head Attention**: Allows the model to focus on different parts of the input text at the same time.
   * **Layer Normalization**: A normalization step to stabilize and improve training.
   * **Feed Forward Layer**: Responsible for processing the information from the attention layer and making predictions about the next token.
   * **Dropout Layers**: These layers prevent overfitting by randomly dropping units during training.
4. **Final Output Layer**: The model outputs a **4x50,257-dimensional tensor**, where **50,257** represents the size of the vocabulary. Each row in this tensor corresponds to a vector that the model uses to predict the next word in the sequence.
5. **Goal**: The goal is to take these embeddings and convert them back into text. Specifically, the last row of the output is used to generate the next word, represented as "forward" in this diagram.
### Code representation
```python
import torch
import torch.nn as nn
import tiktoken

class GELU(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, x):
        return 0.5 * x * (1 + torch.tanh(
            torch.sqrt(torch.tensor(2.0 / torch.pi)) *
            (x + 0.044715 * torch.pow(x, 3))
        ))

class FeedForward(nn.Module):
    def __init__(self, cfg):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(cfg["emb_dim"], 4 * cfg["emb_dim"]),
            GELU(),
            nn.Linear(4 * cfg["emb_dim"], cfg["emb_dim"]),
        )

    def forward(self, x):
        return self.layers(x)

class MultiHeadAttention(nn.Module):
    def __init__(self, d_in, d_out, context_length, dropout, num_heads, qkv_bias=False):
        super().__init__()
        assert d_out % num_heads == 0, "d_out must be divisible by num_heads"

        self.d_out = d_out
        self.num_heads = num_heads
        self.head_dim = d_out // num_heads  # Reduce the projection dim to match desired output dim

        self.W_query = nn.Linear(d_in, d_out, bias=qkv_bias)
        self.W_key = nn.Linear(d_in, d_out, bias=qkv_bias)
        self.W_value = nn.Linear(d_in, d_out, bias=qkv_bias)
        self.out_proj = nn.Linear(d_out, d_out)  # Linear layer to combine head outputs
        self.dropout = nn.Dropout(dropout)
        self.register_buffer('mask', torch.triu(torch.ones(context_length, context_length), diagonal=1))

    def forward(self, x):
        b, num_tokens, d_in = x.shape

        keys = self.W_key(x)  # Shape: (b, num_tokens, d_out)
        queries = self.W_query(x)
        values = self.W_value(x)

        # We implicitly split the matrix by adding a `num_heads` dimension
        # Unroll last dim: (b, num_tokens, d_out) -> (b, num_tokens, num_heads, head_dim)
        keys = keys.view(b, num_tokens, self.num_heads, self.head_dim)
        values = values.view(b, num_tokens, self.num_heads, self.head_dim)
        queries = queries.view(b, num_tokens, self.num_heads, self.head_dim)

        # Transpose: (b, num_tokens, num_heads, head_dim) -> (b, num_heads, num_tokens, head_dim)
        keys = keys.transpose(1, 2)
        queries = queries.transpose(1, 2)
        values = values.transpose(1, 2)

        # Compute scaled dot-product attention (aka self-attention) with a causal mask
        attn_scores = queries @ keys.transpose(2, 3)  # Dot product for each head

        # Original mask truncated to the number of tokens and converted to boolean
        mask_bool = self.mask.bool()[:num_tokens, :num_tokens]

        # Use the mask to fill attention scores
        attn_scores.masked_fill_(mask_bool, -torch.inf)

        attn_weights = torch.softmax(attn_scores / keys.shape[-1]**0.5, dim=-1)
        attn_weights = self.dropout(attn_weights)

        # Shape: (b, num_tokens, num_heads, head_dim)
        context_vec = (attn_weights @ values).transpose(1, 2)

        # Combine heads, where self.d_out = self.num_heads * self.head_dim
        context_vec = context_vec.contiguous().view(b, num_tokens, self.d_out)
        context_vec = self.out_proj(context_vec)  # optional projection

        return context_vec

class LayerNorm(nn.Module):
    def __init__(self, emb_dim):
        super().__init__()
        self.eps = 1e-5
        self.scale = nn.Parameter(torch.ones(emb_dim))
        self.shift = nn.Parameter(torch.zeros(emb_dim))

    def forward(self, x):
        mean = x.mean(dim=-1, keepdim=True)
        var = x.var(dim=-1, keepdim=True, unbiased=False)
        norm_x = (x - mean) / torch.sqrt(var + self.eps)
        return self.scale * norm_x + self.shift

class TransformerBlock(nn.Module):
    def __init__(self, cfg):
        super().__init__()
        self.att = MultiHeadAttention(
            d_in=cfg["emb_dim"],
            d_out=cfg["emb_dim"],
            context_length=cfg["context_length"],
            num_heads=cfg["n_heads"],
            dropout=cfg["drop_rate"],
            qkv_bias=cfg["qkv_bias"])
        self.ff = FeedForward(cfg)
        self.norm1 = LayerNorm(cfg["emb_dim"])
        self.norm2 = LayerNorm(cfg["emb_dim"])
        self.drop_shortcut = nn.Dropout(cfg["drop_rate"])

    def forward(self, x):
        # Shortcut connection for attention block
        shortcut = x
        x = self.norm1(x)
        x = self.att(x)   # Shape [batch_size, num_tokens, emb_size]
        x = self.drop_shortcut(x)
        x = x + shortcut  # Add the original input back

        # Shortcut connection for feed forward block
        shortcut = x
        x = self.norm2(x)
        x = self.ff(x)
        x = self.drop_shortcut(x)
        x = x + shortcut  # Add the original input back

        return x


class GPTModel(nn.Module):
    def __init__(self, cfg):
        super().__init__()
        self.tok_emb = nn.Embedding(cfg["vocab_size"], cfg["emb_dim"])
        self.pos_emb = nn.Embedding(cfg["context_length"], cfg["emb_dim"])
        self.drop_emb = nn.Dropout(cfg["drop_rate"])

        self.trf_blocks = nn.Sequential(
            *[TransformerBlock(cfg) for _ in range(cfg["n_layers"])])

        self.final_norm = LayerNorm(cfg["emb_dim"])
        self.out_head = nn.Linear(
            cfg["emb_dim"], cfg["vocab_size"], bias=False
        )

    def forward(self, in_idx):
        batch_size, seq_len = in_idx.shape
        tok_embeds = self.tok_emb(in_idx)
        pos_embeds = self.pos_emb(torch.arange(seq_len, device=in_idx.device))
        x = tok_embeds + pos_embeds  # Shape [batch_size, num_tokens, emb_size]
        x = self.drop_emb(x)
        x = self.trf_blocks(x)
        x = self.final_norm(x)
        logits = self.out_head(x)
        return logits

GPT_CONFIG_124M = {
    "vocab_size": 50257,    # Vocabulary size
    "context_length": 1024, # Context length
    "emb_dim": 768,         # Embedding dimension
    "n_heads": 12,          # Number of attention heads
    "n_layers": 12,         # Number of layers
    "drop_rate": 0.1,       # Dropout rate
    "qkv_bias": False       # Query-Key-Value bias
}

torch.manual_seed(123)

# Example input batch so the call below runs (assumed; two short texts tokenized with the GPT-2 BPE tokenizer)
tokenizer = tiktoken.get_encoding("gpt2")
batch = torch.stack([
    torch.tensor(tokenizer.encode("Every effort moves you")),
    torch.tensor(tokenizer.encode("Every day holds a")),
], dim=0)

model = GPTModel(GPT_CONFIG_124M)
out = model(batch)
print("Input batch:\n", batch)
print("\nOutput shape:", out.shape)
print(out)
```
### **GELU Activation Function**
```python
# From https://github.com/rasbt/LLMs-from-scratch/tree/main/ch04
class GELU(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, x):
        return 0.5 * x * (1 + torch.tanh(
            torch.sqrt(torch.tensor(2.0 / torch.pi)) *
            (x + 0.044715 * torch.pow(x, 3))
        ))
```
#### **Purpose and Functionality**

* **GELU (Gaussian Error Linear Unit):** An activation function that introduces non-linearity into the model.
* **Smooth Activation:** Unlike ReLU, which zeroes out negative inputs, GELU maps inputs to outputs smoothly, allowing small, non-zero values for negative inputs.
* **Mathematical Definition:**

<figure><img src="../../.gitbook/assets/image (2) (1).png" alt=""><figcaption></figcaption></figure>

{% hint style="info" %}
The goal of using this function after the linear layers inside the FeedForward layer is to turn the linear data into non-linear data, allowing the model to learn complex, non-linear relationships.
{% endhint %}
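
A quick comparison sketch (assuming the custom `GELU` class defined above) showing how GELU keeps small non-zero values for negative inputs while ReLU clips them to zero:

```python
import torch
import torch.nn as nn

gelu = GELU()      # the custom GELU class defined above
relu = nn.ReLU()

x = torch.tensor([-3.0, -1.0, -0.5, 0.0, 0.5, 1.0, 3.0])
print("GELU:", gelu(x))  # small non-zero outputs for negative inputs (smooth curve)
print("ReLU:", relu(x))  # negative inputs are clipped to exactly 0
```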
### **FeedForward Neural Network**

_Shapes have been added as comments to better understand the shapes of the matrices:_
```python
# From https://github.com/rasbt/LLMs-from-scratch/tree/main/ch04
class FeedForward(nn.Module):
    def __init__(self, cfg):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(cfg["emb_dim"], 4 * cfg["emb_dim"]),
            GELU(),
            nn.Linear(4 * cfg["emb_dim"], cfg["emb_dim"]),
        )

    def forward(self, x):
        # x shape: (batch_size, seq_len, emb_dim)

        x = self.layers[0](x)  # x shape: (batch_size, seq_len, 4 * emb_dim)
        x = self.layers[1](x)  # x shape remains: (batch_size, seq_len, 4 * emb_dim)
        x = self.layers[2](x)  # x shape: (batch_size, seq_len, emb_dim)
        return x  # Output shape: (batch_size, seq_len, emb_dim)
```
#### **Purpose and Functionality**

* **Position-wise FeedForward Network:** Applies a two-layer fully connected network to each position separately and identically.
* **Layer Details:**
  * **First Linear Layer:** Expands the dimension from `emb_dim` to `4 * emb_dim`.
  * **GELU Activation:** Applies non-linearity.
  * **Second Linear Layer:** Reduces the dimension back to `emb_dim`.

{% hint style="info" %}
As you can see, the Feed Forward network uses 3 layers. The first one is a linear layer that multiplies the dimensions by 4 using linear weights (parameters to train inside the model). Then, the GELU function is applied to all those dimensions to introduce non-linear variations that capture richer representations, and finally another linear layer is used to return to the original size of dimensions.
{% endhint %}
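
A minimal sketch (assuming the `FeedForward` class and `GPT_CONFIG_124M` defined above) showing the expand-then-contract behaviour on a dummy batch:

```python
import torch

# Assumes FeedForward and GPT_CONFIG_124M are defined as above
ff = FeedForward(GPT_CONFIG_124M)

x = torch.rand(2, 3, 768)     # (batch_size=2, seq_len=3, emb_dim=768)
hidden = ff.layers[0](x)      # first linear layer expands to 4 * emb_dim
out = ff(x)                   # full block contracts back to emb_dim

print(hidden.shape)  # torch.Size([2, 3, 3072])
print(out.shape)     # torch.Size([2, 3, 768])
```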
### **Multi-Head Attention Mechanism**

This was already explained in an earlier section.

#### **Purpose and Functionality**

* **Multi-Head Self-Attention:** Allows the model to focus on different positions within the input sequence when encoding a token.
* **Key Components:**
  * **Queries, Keys, Values:** Linear projections of the input, used to compute attention scores.
  * **Heads:** Multiple attention mechanisms running in parallel (`num_heads`), each with a reduced dimension (`head_dim`).
  * **Attention Scores:** Computed as the dot product of queries and keys, scaled and masked.
  * **Masking:** A causal mask is applied to prevent the model from attending to future tokens (important for autoregressive models like GPT).
  * **Attention Weights:** Softmax of the masked and scaled attention scores.
  * **Context Vector:** Weighted sum of the values, according to the attention weights.
  * **Output Projection:** Linear layer to combine the outputs of all the heads.

{% hint style="info" %}
The goal of this network is to find the relations between tokens in the same context. Moreover, the tokens are divided into different heads in order to prevent overfitting, although the final relations found per head are combined at the end of this network.

Moreover, during training a **causal mask** is applied so that later tokens are not taken into account when computing the relations to a token, and some **dropout** is also applied to **prevent overfitting**.
{% endhint %}
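
An illustrative sketch (not from the original code) of how the causal mask combines with the scaled dot-product for a tiny single-head example; names like `scores` and `weights` are just placeholders:

```python
import torch

torch.manual_seed(0)
seq_len, head_dim = 4, 8
q = torch.rand(seq_len, head_dim)
k = torch.rand(seq_len, head_dim)

scores = q @ k.T / head_dim ** 0.5                               # scaled dot-product scores
mask = torch.triu(torch.ones(seq_len, seq_len), diagonal=1).bool()
scores = scores.masked_fill(mask, float("-inf"))                 # hide future positions
weights = torch.softmax(scores, dim=-1)

print(weights)  # upper triangle is 0: token i only attends to tokens <= i
```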
### **Layer** Normalization
```python
# From https://github.com/rasbt/LLMs-from-scratch/tree/main/ch04
class LayerNorm(nn.Module):
    def __init__(self, emb_dim):
        super().__init__()
        self.eps = 1e-5  # Prevent division by zero during normalization.
        self.scale = nn.Parameter(torch.ones(emb_dim))
        self.shift = nn.Parameter(torch.zeros(emb_dim))

    def forward(self, x):
        mean = x.mean(dim=-1, keepdim=True)
        var = x.var(dim=-1, keepdim=True, unbiased=False)
        norm_x = (x - mean) / torch.sqrt(var + self.eps)
        return self.scale * norm_x + self.shift
```
#### **Purpose and Functionality**

* **Layer Normalization:** A technique used to normalize the inputs across the features (embedding dimensions) for each individual example in a batch.
* **Components:**
  * **`eps`:** A small constant (`1e-5`) added to the variance to prevent division by zero during normalization.
  * **`scale` and `shift`:** Learnable parameters (`nn.Parameter`) that allow the model to scale and shift the normalized output. They are initialized to ones and zeros, respectively.
* **Normalization Process:**
  * **Compute Mean (`mean`):** Computes the mean of the input `x` across the embedding dimension (`dim=-1`), keeping the dimension for broadcasting (`keepdim=True`).
  * **Compute Variance (`var`):** Computes the variance of `x` across the embedding dimension, also keeping the dimension. The `unbiased=False` argument ensures the variance is calculated using the biased estimator (dividing by `N` instead of `N-1`), which is appropriate when normalizing over features rather than samples.
  * **Normalize (`norm_x`):** Subtracts the mean from `x` and divides by the square root of the variance plus `eps`.
  * **Scale and Shift:** Applies the learnable `scale` and `shift` parameters to the normalized output.

{% hint style="info" %}
The goal is to ensure a mean of 0 with a variance of 1 across all dimensions of the same token. The purpose of this is to **stabilize the training of deep neural networks** by reducing the internal covariate shift, which refers to the change in the distribution of network activations due to the updating of parameters during training.
{% endhint %}
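
A small sanity check (assuming the `LayerNorm` class above): right after initialization `scale` is 1 and `shift` is 0, so the output is the pure normalization with roughly zero mean and unit variance per token vector:

```python
import torch

ln = LayerNorm(emb_dim=768)          # assumes the LayerNorm class defined above
x = torch.randn(2, 4, 768) * 5 + 3   # arbitrary mean and standard deviation

out = ln(x)
print(out.mean(dim=-1)[0, 0].item())                  # ~0.0
print(out.var(dim=-1, unbiased=False)[0, 0].item())   # ~1.0
```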
### **Transformer Block**

_Shapes have been added as comments to better understand the shapes of the matrices:_
```python
# From https://github.com/rasbt/LLMs-from-scratch/tree/main/ch04

class TransformerBlock(nn.Module):
    def __init__(self, cfg):
        super().__init__()
        self.att = MultiHeadAttention(
            d_in=cfg["emb_dim"],
            d_out=cfg["emb_dim"],
            context_length=cfg["context_length"],
            num_heads=cfg["n_heads"],
            dropout=cfg["drop_rate"],
            qkv_bias=cfg["qkv_bias"]
        )
        self.ff = FeedForward(cfg)
        self.norm1 = LayerNorm(cfg["emb_dim"])
        self.norm2 = LayerNorm(cfg["emb_dim"])
        self.drop_shortcut = nn.Dropout(cfg["drop_rate"])

    def forward(self, x):
        # x shape: (batch_size, seq_len, emb_dim)

        # Shortcut connection for attention block
        shortcut = x               # shape: (batch_size, seq_len, emb_dim)
        x = self.norm1(x)          # shape remains (batch_size, seq_len, emb_dim)
        x = self.att(x)            # shape: (batch_size, seq_len, emb_dim)
        x = self.drop_shortcut(x)  # shape remains (batch_size, seq_len, emb_dim)
        x = x + shortcut           # shape: (batch_size, seq_len, emb_dim)

        # Shortcut connection for feedforward block
        shortcut = x               # shape: (batch_size, seq_len, emb_dim)
        x = self.norm2(x)          # shape remains (batch_size, seq_len, emb_dim)
        x = self.ff(x)             # shape: (batch_size, seq_len, emb_dim)
        x = self.drop_shortcut(x)  # shape remains (batch_size, seq_len, emb_dim)
        x = x + shortcut           # shape: (batch_size, seq_len, emb_dim)

        return x  # Output shape: (batch_size, seq_len, emb_dim)
```
#### **Purpose and Functionality**

* **Composition of Layers:** Combines multi-head attention, feedforward network, layer normalization, and residual connections.
* **Layer Normalization:** Applied before the attention and feedforward layers for stable training.
* **Residual Connections (Shortcuts):** Add the input of a layer to its output to improve gradient flow and enable training of deep networks.
* **Dropout:** Applied after the attention and feedforward layers for regularization.

#### **Step-by-Step Functionality**

1. **First Residual Path (Self-Attention):**
   * **Input (`shortcut`):** Save the original input for the residual connection.
   * **Layer Norm (`norm1`):** Normalize the input.
   * **Multi-Head Attention (`att`):** Apply self-attention.
   * **Dropout (`drop_shortcut`):** Apply dropout for regularization.
   * **Add Residual (`x + shortcut`):** Combine with the original input.
2. **Second Residual Path (FeedForward):**
   * **Input (`shortcut`):** Save the updated input for the next residual connection.
   * **Layer Norm (`norm2`):** Normalize the input.
   * **FeedForward Network (`ff`):** Apply the feedforward transformation.
   * **Dropout (`drop_shortcut`):** Apply dropout.
   * **Add Residual (`x + shortcut`):** Combine with the input from the first residual path.

{% hint style="info" %}
The transformer block groups all the networks together and applies some **normalization** and **dropouts** to improve the training stability and results.\
Note how dropouts are applied after the use of each network while normalization is applied before.

Moreover, it also uses shortcuts, which consist of **adding the output of a network to its input**. This helps to prevent the vanishing gradient problem by making sure that initial layers contribute "as much" as the last ones.
{% endhint %}
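
A minimal sketch (assuming the `TransformerBlock` class and `GPT_CONFIG_124M` defined above): because every sub-layer preserves the `(batch_size, seq_len, emb_dim)` shape, blocks can be stacked freely:

```python
import torch
import torch.nn as nn

torch.manual_seed(123)

# Assumes TransformerBlock and GPT_CONFIG_124M are defined as above
block = TransformerBlock(GPT_CONFIG_124M)
stack = nn.Sequential(*[TransformerBlock(GPT_CONFIG_124M) for _ in range(3)])

x = torch.rand(2, 4, 768)  # (batch_size, seq_len, emb_dim)
print(block(x).shape)      # torch.Size([2, 4, 768]) -- shape preserved by one block
print(stack(x).shape)      # torch.Size([2, 4, 768]) -- still preserved after stacking
```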
### **GPTModel**

_Shapes have been added as comments to better understand the shapes of the matrices:_
```python
# From https://github.com/rasbt/LLMs-from-scratch/tree/main/ch04
class GPTModel(nn.Module):
    def __init__(self, cfg):
        super().__init__()
        self.tok_emb = nn.Embedding(cfg["vocab_size"], cfg["emb_dim"])
        # shape: (vocab_size, emb_dim)

        self.pos_emb = nn.Embedding(cfg["context_length"], cfg["emb_dim"])
        # shape: (context_length, emb_dim)

        self.drop_emb = nn.Dropout(cfg["drop_rate"])

        self.trf_blocks = nn.Sequential(
            *[TransformerBlock(cfg) for _ in range(cfg["n_layers"])]
        )
        # Stack of TransformerBlocks

        self.final_norm = LayerNorm(cfg["emb_dim"])
        self.out_head = nn.Linear(cfg["emb_dim"], cfg["vocab_size"], bias=False)
        # shape: (emb_dim, vocab_size)

    def forward(self, in_idx):
        # in_idx shape: (batch_size, seq_len)
        batch_size, seq_len = in_idx.shape

        # Token embeddings
        tok_embeds = self.tok_emb(in_idx)
        # shape: (batch_size, seq_len, emb_dim)

        # Positional embeddings
        pos_indices = torch.arange(seq_len, device=in_idx.device)
        # shape: (seq_len,)
        pos_embeds = self.pos_emb(pos_indices)
        # shape: (seq_len, emb_dim)

        # Add token and positional embeddings
        x = tok_embeds + pos_embeds  # Broadcasting over batch dimension
        # x shape: (batch_size, seq_len, emb_dim)

        x = self.drop_emb(x)  # Dropout applied
        # x shape remains: (batch_size, seq_len, emb_dim)

        x = self.trf_blocks(x)  # Pass through Transformer blocks
        # x shape remains: (batch_size, seq_len, emb_dim)

        x = self.final_norm(x)  # Final LayerNorm
        # x shape remains: (batch_size, seq_len, emb_dim)

        logits = self.out_head(x)  # Project to vocabulary size
        # logits shape: (batch_size, seq_len, vocab_size)

        return logits  # Output shape: (batch_size, seq_len, vocab_size)
```
#### **Purpose and Functionality**

* **Embedding Layers:**
  * **Token Embeddings (`tok_emb`):** Converts token indices into embeddings. As a reminder, these are the weights given to each dimension of each token in the vocabulary.
  * **Positional Embeddings (`pos_emb`):** Adds positional information to the embeddings to capture the order of tokens. As a reminder, these are the weights given to tokens according to their position in the text.
* **Dropout (`drop_emb`):** Applied to embeddings for regularization.
* **Transformer Blocks (`trf_blocks`):** Stack of `n_layers` transformer blocks to process embeddings.
* **Final Normalization (`final_norm`):** Layer normalization before the output layer.
* **Output Layer (`out_head`):** Projects the final hidden states to the vocabulary size to produce logits for prediction.

{% hint style="info" %}
The goal of this class is to use all the other mentioned networks to **predict the next token in a sequence**, which is fundamental for tasks like text generation.

Note how it will **use as many transformer blocks as indicated** and that each transformer block uses one multi-head attention net, one feed forward net and several normalizations. So if 12 transformer blocks are used, multiply this by 12.

Moreover, a **normalization** layer is added **before** the **output** and a final linear layer is applied at the end to get the results with the proper dimensions. Note how each final vector has the size of the used vocabulary. This is because it's trying to get a probability per possible token inside the vocabulary.
{% endhint %}
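
A small inspection sketch (assuming `GPTModel` and `GPT_CONFIG_124M` from above) confirming the stacking and the output dimensions described in the note:

```python
import torch

model = GPTModel(GPT_CONFIG_124M)   # assumes the definitions above
print(len(model.trf_blocks))        # 12 -> one TransformerBlock per configured layer

dummy = torch.randint(0, GPT_CONFIG_124M["vocab_size"], (1, 5))  # (batch_size=1, seq_len=5)
logits = model(dummy)
print(logits.shape)                 # torch.Size([1, 5, 50257]) -> one score per vocabulary entry
```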
## Number of Parameters to Train

With the GPT structure defined, it is possible to find out the number of parameters to train:
```python
GPT_CONFIG_124M = {
    "vocab_size": 50257,    # Vocabulary size
    "context_length": 1024, # Context length
    "emb_dim": 768,         # Embedding dimension
    "n_heads": 12,          # Number of attention heads
    "n_layers": 12,         # Number of layers
    "drop_rate": 0.1,       # Dropout rate
    "qkv_bias": False       # Query-Key-Value bias
}

model = GPTModel(GPT_CONFIG_124M)
total_params = sum(p.numel() for p in model.parameters())
print(f"Total number of parameters: {total_params:,}")
# Total number of parameters: 163,009,536
```
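
The total is larger than the "124M" in the configuration name because the original GPT-2 ties the output projection to the token-embedding matrix (weight tying), so those parameters are not counted twice. A quick sketch of that adjustment:

```python
total_params = 163_009_536
out_head_params = 50257 * 768            # same shape as the token embedding matrix

# With weight tying (out_head reusing tok_emb's weights), the unique parameter count drops:
params_with_weight_tying = total_params - out_head_params
print(f"{params_with_weight_tying:,}")   # 124,412,160 -> the "124M" in GPT_CONFIG_124M
```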
### **Step-by-Step Calculation**

#### **1. Embedding Layers: Token Embedding & Position Embedding**

* **Layer:** `nn.Embedding(vocab_size, emb_dim)`
* **Parameters:** `vocab_size * emb_dim`
```python
token_embedding_params = 50257 * 768 = 38,597,376
```
* **Layer:** `nn.Embedding(context_length, emb_dim)`
* **Parameters:** `context_length * emb_dim`
```python
position_embedding_params = 1024 * 768 = 786,432
```
**Total Embedding Parameters**
```python
embedding_params = token_embedding_params + position_embedding_params
embedding_params = 38,597,376 + 786,432 = 39,383,808
```
#### **2. Transformer Blocks**

There are 12 transformer blocks, so we will calculate the parameters for one block and then multiply by 12.

**Parameters per Transformer Block**

**a. Multi-Head Attention**

* **Components:**
  * **Query Linear Layer (`W_query`):** `nn.Linear(emb_dim, emb_dim, bias=False)`
  * **Key Linear Layer (`W_key`):** `nn.Linear(emb_dim, emb_dim, bias=False)`
  * **Value Linear Layer (`W_value`):** `nn.Linear(emb_dim, emb_dim, bias=False)`
  * **Output Projection (`out_proj`):** `nn.Linear(emb_dim, emb_dim)`
* **Calculations:**
  * **Each of `W_query`, `W_key`, `W_value`:**
```python
qkv_params = emb_dim * emb_dim = 768 * 768 = 589,824
```
Since there are three such layers:
```python
total_qkv_params = 3 * qkv_params = 3 * 589,824 = 1,769,472
```
* **Output Projection (`out_proj`):**
```python
out_proj_params = (emb_dim * emb_dim) + emb_dim = (768 * 768) + 768 = 589,824 + 768 = 590,592
```
* **Total Multi-Head Attention Parameters:**
```python
mha_params = total_qkv_params + out_proj_params
mha_params = 1,769,472 + 590,592 = 2,360,064
```
**b. FeedForward Network**

* **Components:**
  * **First Linear Layer:** `nn.Linear(emb_dim, 4 * emb_dim)`
  * **Second Linear Layer:** `nn.Linear(4 * emb_dim, emb_dim)`
* **Calculations:**
  * **First Linear Layer:**
```python
ff_first_layer_params = (emb_dim * 4 * emb_dim) + (4 * emb_dim)
ff_first_layer_params = (768 * 3072) + 3072 = 2,359,296 + 3,072 = 2,362,368
```
* **Second Linear Layer:**
```python
ff_second_layer_params = (4 * emb_dim * emb_dim) + emb_dim
ff_second_layer_params = (3072 * 768) + 768 = 2,359,296 + 768 = 2,360,064
```
* **Total FeedForward Parameters:**
```python
ff_params = ff_first_layer_params + ff_second_layer_params
ff_params = 2,362,368 + 2,360,064 = 4,722,432
```
**c. Layer Normalizations**

* **Components:**
  * Two `LayerNorm` instances per block.
  * Each `LayerNorm` has `2 * emb_dim` parameters (scale and shift).
* **Calculations:**
```python
layer_norm_params_per_block = 2 * (2 * emb_dim) = 2 * (2 * 768) = 3,072
```
**d. Total Parameters per Transformer Block**
```python
params_per_block = mha_params + ff_params + layer_norm_params_per_block
params_per_block = 2,360,064 + 4,722,432 + 3,072 = 7,085,568
```
**Total Parameters for All Transformer Blocks**
```python
total_transformer_blocks_params = params_per_block * n_layers
total_transformer_blocks_params = 7,085,568 * 12 = 85,026,816
```
#### **3. Final Layers**

**a. Final Layer Normalization**

* **Parameters:** `2 * emb_dim` (scale and shift)
```python
final_layer_norm_params = 2 * 768 = 1,536
```
**b. Output Projection Layer (`out_head`)**

* **Layer:** `nn.Linear(emb_dim, vocab_size, bias=False)`
* **Parameters:** `emb_dim * vocab_size`
```python
output_projection_params = 768 * 50257 = 38,597,376
```
#### **4. Summing Up All Parameters**
```python
total_params = (
    embedding_params +
    total_transformer_blocks_params +
    final_layer_norm_params +
    output_projection_params
)
total_params = (
    39,383,808 +
    85,026,816 +
    1,536 +
    38,597,376
)
total_params = 163,009,536
```
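
The hand calculation above can also be cross-checked programmatically (a sketch assuming the `model` instance created earlier):

```python
embedding = sum(p.numel() for p in model.tok_emb.parameters()) + \
            sum(p.numel() for p in model.pos_emb.parameters())
blocks = sum(p.numel() for p in model.trf_blocks.parameters())
final = sum(p.numel() for p in model.final_norm.parameters()) + \
        sum(p.numel() for p in model.out_head.parameters())

print(f"{embedding:,}")                   # 39,383,808
print(f"{blocks:,}")                      # 85,026,816
print(f"{final:,}")                       # 38,598,912 (1,536 + 38,597,376)
print(f"{embedding + blocks + final:,}")  # 163,009,536
```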
## Generating Text

Having a model that predicts the next token like the one before, it's only needed to take the last token values from the output (as they will be the ones of the predicted token), which will be a **value per entry in the vocabulary**, then use the `softmax` function to normalize those values into probabilities that sum to 1, and then get the index of the largest entry, which will be the index of the word inside the vocabulary.

Code from [https://github.com/rasbt/LLMs-from-scratch/blob/main/ch04/01\_main-chapter-code/ch04.ipynb](https://github.com/rasbt/LLMs-from-scratch/blob/main/ch04/01\_main-chapter-code/ch04.ipynb):
```python
def generate_text_simple(model, idx, max_new_tokens, context_size):
    # idx is (batch, n_tokens) array of indices in the current context
    for _ in range(max_new_tokens):

        # Crop current context if it exceeds the supported context size
        # E.g., if LLM supports only 5 tokens, and the context size is 10
        # then only the last 5 tokens are used as context
        idx_cond = idx[:, -context_size:]

        # Get the predictions
        with torch.no_grad():
            logits = model(idx_cond)

        # Focus only on the last time step
        # (batch, n_tokens, vocab_size) becomes (batch, vocab_size)
        logits = logits[:, -1, :]

        # Apply softmax to get probabilities
        probas = torch.softmax(logits, dim=-1)  # (batch, vocab_size)

        # Get the idx of the vocab entry with the highest probability value
        idx_next = torch.argmax(probas, dim=-1, keepdim=True)  # (batch, 1)

        # Append sampled index to the running sequence
        idx = torch.cat((idx, idx_next), dim=1)  # (batch, n_tokens+1)

    return idx


start_context = "Hello, I am"

encoded = tokenizer.encode(start_context)
print("encoded:", encoded)

encoded_tensor = torch.tensor(encoded).unsqueeze(0)
print("encoded_tensor.shape:", encoded_tensor.shape)

model.eval()  # disable dropout

out = generate_text_simple(
    model=model,
    idx=encoded_tensor,
    max_new_tokens=6,
    context_size=GPT_CONFIG_124M["context_length"]
)

print("Output:", out)
print("Output length:", len(out[0]))
```
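
To turn the generated token IDs back into text, the same tokenizer can decode them (a small sketch; with an untrained model the continuation will look like gibberish):

```python
decoded_text = tokenizer.decode(out.squeeze(0).tolist())
print(decoded_text)  # "Hello, I am" followed by 6 generated tokens
```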
## References

* [https://www.manning.com/books/build-a-large-language-model-from-scratch](https://www.manning.com/books/build-a-large-language-model-from-scratch)