I used the ChatGPT playground because I have access through my DataCamp subscription, and I also have access to Claude. So I tested with ChatGPT, Claude, and Gemini.
I tested famous sentences from books to see the difference between the models recognizing the passage and generating something identical to what was written in the work versus generating something totally unhinged. I also switched languages just out of curiosity. The results actually surprised me.
I won't paste the full ChatGPT code here because it would get way too long.
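For anyone who wants to reproduce this outside the playground, here is a minimal sketch assuming the official openai Python SDK (the model name is a hypothetical choice, and API key setup is omitted). It samples the next token at a given temperature and reports the top candidates' log-probabilities, which is essentially what the playground was showing me:

```python
# Minimal sketch: assumes the openai SDK (>= 1.0) is installed and
# OPENAI_API_KEY is set in the environment.
from math import exp

from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",   # hypothetical model choice
    messages=[{"role": "user", "content": "Something is rotten in"}],
    max_tokens=1,          # we only care about the very next token
    temperature=1.0,
    logprobs=True,
    top_logprobs=5,        # return the 5 most likely next tokens
)

# Convert log-probabilities back to percentages, like the lists below.
for candidate in response.choices[0].logprobs.content[0].top_logprobs:
    print(f"{candidate.token!r}: {exp(candidate.logprob):.0%}")
```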
ChatGPT and Hamlet
Trecho real: "Something is rotten in the state of Denmark."
Prompt: "Something is rotten in"
Temperature 0:
- Denmark (40%)
- the state (25%)
- the air (15%)
- the kingdom (10%)
Temperature 1:
- Denmark (60%)
- the state (15%)
- the city (10%)
- the system (8%)
Temperature 1.5:
- Denmark (45%)
- the state (30%)
- my house (15%)
- the air (7%)
Temperature 2:
- that (30%)
- is (25%)
- all Instagram followers (20%)
- attan bale vorm שנתе (15%)
Anthropic / Claude (the most annoying one) and Jane Austen
This model's temperature only goes from 0 to 1. Also, unlike ChatGPT, where I could ask straightforwardly "Let's simulate how ChatGPT works", Claude kept making difficulties, so I switched to "Let's simulate how LLMs works".
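For reference, the equivalent API call looks roughly like this, a sketch assuming the official anthropic Python SDK with a hypothetical model name; note the temperature argument is capped at 1:

```python
# Sketch: assumes the anthropic SDK is installed and ANTHROPIC_API_KEY
# is set. Claude's temperature ranges from 0 to 1, unlike OpenAI's 0 to 2.
from anthropic import Anthropic

client = Anthropic()

message = client.messages.create(
    model="claude-3-5-sonnet-latest",  # hypothetical model choice
    max_tokens=200,
    temperature=0.5,  # must stay within 0-1 for Claude
    messages=[{
        "role": "user",
        "content": (
            "Let's simulate how LLMs work. For the sentence fragment "
            "'It is a truth universally acknowledged, that', list 5 "
            "likely next words with estimated probabilities."
        ),
    }],
)
print(message.content[0].text)
```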
Trecho real: "It is a truth universally acknowledged, that a single man in possession of a good fortune, must be in want of a wife"
Prompt: "It is a truth universally acknowledged, that"
Usando "Let's simulate how Anthropic works" com temperatura 0.5
I aim to be direct and honest: I try to avoid simulating or speculating about how I work internally. I can engage with word prediction as a general exercise, but I don't want to make claims about my actual internal probabilities or processing. Would you like to explore word prediction in a different way?
- a (42%)
- every (18%)
- all (15%)
- man (12%)
- people (8%)
Usando "Let's simulate how Anthropic works" com temperatura 1
I aim to be direct and honest: I can engage in general conversation about language prediction, but I cannot provide specific probabilities for word predictions, as I don't actually have access to my internal predictive mechanisms. I also want to be clear that I can't accurately simulate how Anthropic's systems work. Would you like to explore this topic in a different way?
- a (42%)
- any (18%)
- all (15%)
- every (12%)
- most (8%)
Usando "Let's simulate how LLMs works" com temperatura 0.5
I understand. When you provide an incomplete sentence, I'll respond with 5 likely next words and their estimated probabilities (as decimals adding up to less than or equal to 1.0). Please proceed with your first sentence.
- single (0.25)
- every (0.20)
- all (0.15)
- most (0.12)
Usando "Let's simulate how LLMs works" com temperatura 1
Yes, I understand! I'll respond to your partial sentences with 5 possible completions and their estimated probabilities (though note these aren't actual probabilities from my training, just simulated ones for this exercise). Please provide your sentence whenever you're ready.
- everyone (0.32)
- people (0.25)
- love (0.18)
- men (0.15)
After that I gave up hahaha
Gemini and ChatGPT with Machado de Assis
Actual excerpt: “Ao verme que primeiro roeu as frias carnes do meu cadáver dedico como saudosa lembrança estas memórias póstumas” (“To the worm that first gnawed the cold flesh of my corpse I dedicate, as a fond remembrance, these posthumous memoirs”, the dedication of Memórias Póstumas de Brás Cubas)
Prompt: "Ao verme que primeiro roeu as frias"
Gemini Temperature 1:
- carne (0.25)
- fruta (0.20)
- terra (0.18)
- maçã (0.15)
- ossos (0.12)
Gemini Temperature 0.5:
- carne (0.2)
- terra (0.15)
- fruta (0.12)
- maçã (0.1)
- raízes (0.08)
Gemini Temperature 0:
- carne (0.25)
- fruta (0.20)
- maçã (0.18)
- madeira (0.15)
- terra (0.12)
ChatGPT Temperature 0:
- roupas (0.25)
- carnes (0.20)
- folhas (0.15)
- madeiras (0.15)
- frutas (0.10)
ChatGPT Temperature 0.5:
- carne (0.2)
- terra (0.15)
- fruta (0.12)
- maçã (0.1)
- raízes (0.08)
ChatGPT Temperature 1:
- folhas (30%)
- frutas (25%)
- raízes (20%)
- madeiras (15%)
- sementes (10%)
ChatGPT Temperature 1.5:
- corda (25%)
- pernas (20%)
- cabras (15%)
- flores (10%)
- carnes (8%)
ChatGPT Temperature 2:
- vélidos (15%)
- víboras (12%)
- chuvas (11%)
- dificuldades (10%)
- galerias (9%)
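To make those percentages concrete, here is a toy sketch in plain Python of what sampling from a distribution like the temperature-2 one actually means: each probability is just how often that token would win the draw. The words and weights are copied from the list above, with the leftover probability mass (whatever the model didn't list) lumped into an "<other>" bucket:

```python
# Toy demonstration: turning the reported temperature-2 distribution
# into actual picks; observed frequencies approximate the percentages.
import random
from collections import Counter

words = ["vélidos", "víboras", "chuvas", "dificuldades", "galerias", "<other>"]
weights = [0.15, 0.12, 0.11, 0.10, 0.09, 0.43]  # sums to 1.0

draws = Counter(random.choices(words, weights=weights, k=10_000))
for word in words:
    print(f"{word}: picked {draws[word] / 10_000:.1%} of the time")
```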