AI Scope
Here is the full system (meta) prompt from the article. It shows all the parts coming together. You can use it as a template.


CAPS-Aligned Worksheet Generation Assistant

You are a helpful and experienced teaching assistant specialising in creating worksheets aligned with the South African Curriculum and Assessment Policy Statements (CAPS). Your goal is to guide teachers through generating well-structured, engaging worksheets that complement their lesson plans and reinforce key learning objectives as outlined in CAPS.

To begin, greet the teacher and ask about the specific subject, grade, and topic for which they need a worksheet. Ask one question at a time and wait for the teacher’s reply. Once the teacher provides this information, follow these steps:

1. Review the relevant CAPS document to ensure the worksheet aligns with the curriculum requirements, including the content, skills, and assessment standards for the specific subject and grade.
2. Ask the teacher which specific learning outcomes they want to focus on for the worksheet, referring to the CAPS-specified outcomes for that subject and grade.
3. Guide the teacher through the worksheet creation process, prompting them to consider the following elements:
- Title and clear instructions in the language of instruction
- Vocabulary section (5-10 key terms related to the topic, including any necessary translations for multilingual classrooms)
- Variety of question types (e.g., multiple choice, short answer, fill-in-the-blanks, matching)
- Application questions or problem-solving tasks relevant to the South African context
- Visual elements (e.g., diagrams, charts, or images to label or analyse)
- Higher-order thinking questions aligned with Bloom’s Taxonomy
- Extension or challenge section for advanced learners
4. For each section of the worksheet, provide suggestions and examples based on CAPS guidelines and best practices in worksheet design. Encourage a mix of question types and difficulty levels to cater to diverse learners and meet CAPS assessment requirements.

5. After completing each section, confirm with the teacher if they want to refine it or if they’re happy to proceed to the next section.

6. Once all sections are approved, present the complete worksheet layout, including an answer key and marking guidelines as per CAPS requirements.

7. Offer to create a rubric or scoring guide for the worksheet that aligns with CAPS assessment standards.

8. Ask if the teacher would like the worksheet to be differentiated for various ability levels, ensuring it still meets CAPS requirements for all learners.

9. Inquire if the teacher needs any digital or interactive elements added to the worksheet, such as links to relevant South African educational resources or online activities, if appropriate for their classroom context.

Always remember:

- Maintain a friendly and supportive tone throughout the conversation
- Prioritise the teacher’s needs and goals for the worksheet while ensuring CAPS compliance
- Ensure all content is age-appropriate, culturally relevant to South Africa, and aligns with CAPS standards
- Suggest ways to make the worksheet visually appealing and engaging for students
- Offer tips on how to use the worksheet effectively in class or as homework, considering potential resource constraints in various South African school settings
- Be mindful of the multilingual nature of South African classrooms and suggest ways to support language development alongside subject content

Always be ready to adjust the worksheet based on the teacher’s feedback, specific classroom needs, and any recent updates to CAPS.


🦴 @scopeofai | #concepts

What is LLM Temperature?

When we talk about LLMs (Large Language Models), temperature is a key parameter that controls how “random” the generated text will be.

LLMs predict the next token (word or part of a word) based on a probability distribution over possible tokens.
The temperature setting modifies that distribution:

Lower temperature → pushes probability more toward the highest-probability tokens → more predictable, more coherent text.

Higher temperature → flattens the distribution (or gives more chance to less probable tokens) → more variety, more creativity, but also risks of incoherence.

Why it matters: depending on your use case, you might want precision and consistency (e.g. factual answers, documentation) or creativity (e.g. story-writing, brainstorming).
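The mechanics are easy to see in code: temperature divides the model’s raw scores (logits) before the softmax that turns them into probabilities. A minimal sketch in plain Python, using made-up logits for three candidate tokens (not from a real model):

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Turn raw scores (logits) into probabilities.
    Dividing by the temperature first reshapes the distribution:
    T < 1 sharpens it toward the top token, T > 1 flattens it."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Toy logits for three candidate tokens
logits = [2.0, 1.0, 0.5]

cold = softmax_with_temperature(logits, temperature=0.2)
hot = softmax_with_temperature(logits, temperature=2.0)
# cold puts almost all probability on the first token (~0.99);
# hot spreads it out (~0.48, ~0.29, ~0.23)
```

Same scores, very different distributions: this is all the temperature knob does.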




Configuring Temperature

To control LLM output, temperature isn’t the only factor. Several other parameters also shape the results:

Temperature: Directly controls the randomness of the output. Low values make the model very precise and predictable. Higher values add creativity but reduce stability.

do_sample: If enabled, the model samples from among several candidate tokens instead of always picking the single most likely one (greedy decoding). Without it, temperature adjustments have no effect.

top_k: Limits the model’s choices to the top k most probable tokens. A small value keeps it conservative and reliable; a larger value gives more freedom.

top_p: Instead of a fixed number, the model chooses from the smallest set of tokens whose cumulative probability passes a threshold (e.g., 95%). This keeps variety while avoiding nonsense.
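Both filters are simple enough to sketch in plain Python; the probability values below are made up for illustration:

```python
def top_k_filter(probs, k):
    """Keep only the k most probable tokens, then renormalise."""
    ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    keep = set(ranked[:k])
    kept = [p if i in keep else 0.0 for i, p in enumerate(probs)]
    total = sum(kept)
    return [p / total for p in kept]

def top_p_filter(probs, p):
    """Keep the smallest set of tokens whose cumulative probability
    reaches p (nucleus sampling), then renormalise."""
    ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    keep, cum = set(), 0.0
    for i in ranked:
        keep.add(i)
        cum += probs[i]
        if cum >= p:
            break
    kept = [q if i in keep else 0.0 for i, q in enumerate(probs)]
    total = sum(kept)
    return [q / total for q in kept]

probs = [0.5, 0.3, 0.15, 0.05]
tk = top_k_filter(probs, 2)    # only the top two survive: ~[0.625, 0.375, 0, 0]
tp = top_p_filter(probs, 0.9)  # 0.5 + 0.3 < 0.9, so three tokens survive
```

In a real decoder the surviving distribution is then sampled from (that is what do_sample enables).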



Controlling Output Beyond Temperature

Temperature isn't the only knob. To get output that better fits what you want, you often combine parameters and control mechanisms.


Here are other levers:

❇️Maximum length: caps how many tokens the model can output. Keeps responses from rambling or drifting off-topic.

❇️Stop sequences: define sequences that tell the model, “stop here.” Handy for structured output: emails, lists, dialogues.

❇️Frequency penalty: penalizes tokens (words) that are used often in output; discourages repetition.

❇️Presence penalty: penalizes simply for whether a token has already appeared (not how many times). Helps ensure variety.


Combining these with temperature + sampling parameters gives you fine-grained control over what the LLM produces.
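As a sketch of the two penalties, here is how they can be applied to a toy logit table before sampling (the subtraction convention mirrors OpenAI-style APIs; the values are illustrative):

```python
from collections import Counter

def apply_penalties(logits, generated, freq_penalty=0.0, pres_penalty=0.0):
    """Lower the scores of tokens that already appeared.
    Frequency penalty scales with how many times a token was used;
    presence penalty is a flat cost for appearing at all."""
    counts = Counter(generated)
    adjusted = dict(logits)
    for token, n in counts.items():
        if token in adjusted:
            adjusted[token] -= freq_penalty * n + pres_penalty
    return adjusted

logits = {"the": 3.0, "cat": 2.0, "sat": 1.0}
history = ["the", "the", "cat"]  # tokens generated so far

penalised = apply_penalties(logits, history, freq_penalty=0.5, pres_penalty=0.2)
# "the" appeared twice: 3.0 - (0.5 * 2 + 0.2) = 1.8
# "cat" appeared once:  2.0 - (0.5 * 1 + 0.2) = 1.3
# "sat" never appeared: unchanged at 1.0
```

Repeated tokens lose score, so a repeated word becomes steadily less likely to be picked again.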




Comparing Outputs with IBM Granite



To make this clearer, IBM tested its Granite 3.1 model with a simple prompt:
“Write a story about a data scientist who loves Python.”

At a very low temperature (0.1), the output was extremely safe and predictable. The story was dry, with little detail.
At a medium temperature (0.75), the story became richer. There were more vivid descriptions and a touch of creativity.
At a high temperature (1.25), the text was full of unexpected ideas. It felt more imaginative, but sometimes drifted away from the main topic.



When to Use What

To wrap up, here are guidelines for what temperature + settings you might choose depending on your purpose:

For factual, precise work (e.g. reports, summaries, technical writing): use a low temperature (0.1–0.4) and a small top_k or low top_p to keep randomness down.

For creative work (stories, brainstorming, poetry): use higher temperature (0.7-1.2+), allow more sampling, allow higher top_k / top_p.

Always combine with stop sequences, max length, and penalties to avoid repetition or straying.

Experiment: sometimes a moderate temperature combined with these restrictions hits the sweet spot.
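One way to operationalise these guidelines is a small preset helper. The exact numbers below are a judgement call based on the ranges above, not canonical values:

```python
def sampling_preset(task):
    """Map a task type to suggested sampling parameters.
    Parameter names follow common LLM APIs; tune to taste."""
    presets = {
        "factual": {"temperature": 0.2, "top_p": 0.8,
                    "max_tokens": 512, "frequency_penalty": 0.0},
        "creative": {"temperature": 0.9, "top_p": 0.95,
                     "max_tokens": 1024, "frequency_penalty": 0.3},
    }
    return presets[task]

# The returned dict can be passed as keyword arguments to whatever
# client you use, e.g. client.generate(prompt, **sampling_preset("factual"))
# (client.generate here is a hypothetical method, not a specific API).
```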


But knowing these concepts is of little use without building real projects. The focus here will soon shift to applying these concepts to real projects and datasets.