AI Scope
GPT-4.5 is available to Pro users starting today, comes to Plus users next week, and there is no word yet on when it will reach free users.

GPT-4.5 is smarter than all of OpenAI's other models, but how does it compare with models from other companies? That is still unclear.

Its greatest feature is how natural it makes conversations feel.

One of this model's competitive advantages is the way it answers you naturally and simply.
Trained on an enormous amount of data, it now understands human language and writing better than ever and keeps conversations natural and friendly.

GPT-4.5 has higher accuracy and makes fewer errors than OpenAI's other models.

In coding, mathematics, language, and science it is much better than the company's other models.

Human testers who evaluated it gave it higher scores than GPT-4o.
I really wish GPT-4.5 were more competitive and had generated the kind of hype Grok 3 did.

Just Sam Altman being Sam Altman

According to CNBC, Zuckerberg has been planning to build an app that could compete with ChatGPT.

Altman reacted with this tweet:

"Okay, maybe we'll build a social network too."
Thank you, 26 subscribers ❤️🎉🎉🎉

Thank you to each and every one of you. I hope the channel's content is worth your time and attention.
This study delves into the capabilities and constraints of ChatGPT, a prominent large language model, in the context of automated essay scoring (AES), particularly focusing on the TOEFL Independent Writing Task. This investigation is significant as it explores the potential of ChatGPT to evaluate essays based on the diverse scoring criteria outlined in the official TOEFL guide. The primary objective is to assess whether ChatGPT can effectively serve as an AES tool, especially when dealing with small sample sizes, which often pose challenges for traditional machine learning approaches.
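As a rough sketch of the kind of pipeline the paper evaluates (not the authors' exact prompts or rubric), the snippet below asks an OpenAI chat model to rate one essay against rubric-style criteria. The model name, rubric wording, and score scale are illustrative assumptions.

```python
# Minimal LLM-based essay-scoring sketch (illustrative, not the paper's setup).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

RUBRIC = (
    "Rate the essay on a 0-5 scale, as in the TOEFL Independent Writing rubric, "
    "considering task response, organization, development, and language use. "
    "Reply with the number only."
)

def score_essay(essay: str, model: str = "gpt-4o-mini") -> str:
    """Ask the chat model for a single holistic score for one essay."""
    response = client.chat.completions.create(
        model=model,  # illustrative model name
        messages=[
            {"role": "system", "content": "You are an experienced TOEFL essay rater."},
            {"role": "user", "content": f"{RUBRIC}\n\nEssay:\n{essay}"},
        ],
        temperature=0,  # keep scoring as deterministic as possible
    )
    return response.choices[0].message.content.strip()

print(score_essay("Some people prefer to study alone, while others ..."))
```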

📁 Paper: https://arxiv.org/pdf/2401.03401

@scopeofai
@deep_learning_proj
Large Language Models (LLMs) have rapidly permeated the information landscape, sparking anxieties regarding the displacement of human labor. This essay moves beyond a purely technological assessment of neural networks to explore their broader socio-philosophical implications. By examining the functions of contemporary LLMs, particularly those developed by OpenAI, we revisit the enduring question of whether a machine can truly think. Furthermore, we address a critical, often overlooked aspect: are humans prepared to accept the social subjectivity of these increasingly sophisticated machines? Through the lens of social philosophy, we analyze LLMs not merely as technological products, but as social agents actively shaping and participating in the social order.

Paper: https://galacticamedia.com/index.php/gmd/article/view/502/421

@scopeofai
@deep_learning_proj
Agents built on LLMs (LLM agents) extend the capabilities of the underlying models, allowing them to process user interactions and perform complex operations in diverse task environments. However, during the processing and generation of massive data, LLMs and LLM agents pose a risk of sensitive information leakage, potentially threatening data privacy. This paper aims to demonstrate data privacy issues associated with LLMs and LLM agents to facilitate a comprehensive understanding. Specifically, we conduct an in-depth survey about privacy threats, encompassing passive privacy leakage and active privacy attacks. Subsequently, we introduce the privacy protection mechanisms employed by LLMs and LLM agents and provide a detailed analysis of their effectiveness. Finally, we explore the privacy protection challenges for LLMs and LLM agents as well as outline potential directions for future developments in this domain.
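The survey discusses protection mechanisms at a high level; as one toy illustration of input-side protection (my example, not a mechanism from the paper), the sketch below masks a few obvious PII patterns in a prompt before it is ever sent to an LLM. The regex patterns and placeholder tags are assumptions and nowhere near exhaustive.

```python
import re

# Toy patterns for a few obvious PII types; real systems use NER models and far broader rules.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub(prompt: str) -> str:
    """Replace matched PII spans with placeholder tags before the prompt leaves the client."""
    for tag, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{tag}]", prompt)
    return prompt

print(scrub("Email me at jane.doe@example.com or call +1 415 555 0100."))
# -> "Email me at [EMAIL] or call [PHONE]."
```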

🗂 Paper: Link

@scopeofai
@deep_learning_proj
In this part, the paper makes an interesting point:

Most research so far has focused on text-centric LLMs such as BERT and GPT, and far less work has been devoted to multimodal LLMs.

These models can take both text and visual data, such as video and images, from the user.

The paper argues that the security of these large language models deserves closer study, because they are more complex than other LLMs.
Forwarded from Github LLMs
This study focuses on fine-tuning Large Language Models (LLMs) for healthcare information in Vietnamese, a low-resource language, to improve medical information accessibility and healthcare communication in developing countries. The methodology involves selecting a base model (BloomZ-3B, LLaMA2–7B and LLaMA2–13B), compiling a domain-specific dataset of approximately 337,000 prompt-response pairs in Vietnamese from existing datasets, Vietnamese medical online forums, and medical textbooks, and fine-tuning the model using Low-Rank adaptation (LoRA) and Quantized Low-Rank adaptation (QLoRA) techniques. The fine-tuned models showed enhanced performance, demonstrating the potential to improve healthcare communication in low-resource languages and enhance data privacy and security.
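For readers unfamiliar with the techniques named here, this is a minimal QLoRA sketch using the Hugging Face transformers and peft libraries; the base checkpoint, target modules, and hyperparameters are illustrative assumptions, not the configuration reported in the paper.

```python
# Minimal QLoRA fine-tuning sketch with Hugging Face transformers + peft (illustrative).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-2-7b-hf"  # stand-in for the paper's LLaMA2-7B base model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # 4-bit quantization: the "Q" in QLoRA
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base)  # tokenizes the prompt-response pairs
model = AutoModelForCausalLM.from_pretrained(
    base, quantization_config=bnb_config, device_map="auto"
)

lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,  # size and scaling of the low-rank adapters
    target_modules=["q_proj", "v_proj"],     # attention projections commonly adapted
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small adapter matrices are trainable
# Training itself would then run over the prompt-response pairs, e.g. with transformers.Trainer.
```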


📂 Paper: https://www.sciencedirect.com/science/article/pii/S0169260725000720/pdfft?md5=b348ebfecc8d8f8b481e23ec241da2de&pid=1-s2.0-S0169260725000720-main.pdf

@scopeofai
@deep_learning_proj
New filings reveal that Google has invested more in Anthropic than previously known, now exceeding $3 billion. The company will inject another $750 million this year through a convertible debt deal, giving it a 14% stake in Anthropic. While Google has no direct control, its involvement raises concerns about the startup's independence, especially with Amazon also investing up to $8 billion.

🟠 Google's investment in Anthropic has risen above $3 billion.

The company is set to inject another $750 million into Anthropic this year and holds a 14% stake.

Amazon has also invested up to $8 billion in the startup.

With investments this large, is Anthropic still independent, or has it become part of the tech giants?

https://techcrunch.com/2025/03/11/google-has-given-anthropic-more-funding-than-previously-known-show-new-filings/

#news
@scopeofai
OpenAI has developed a new AI model proficient in creative writing, particularly in the metafiction genre. This development has sparked criticism from authors and publishers concerned about copyright infringement, as the AI's training involves copyrighted materials. The UK creative sector's "Make It Fair" campaign opposes government plans allowing AI companies to use copyrighted works without permission, emphasizing the need for fair compensation to creators.


⚪️ OpenAI has built a new AI model that performs very well at creative story writing.

But this has worried authors and publishers, because the model was trained on copyrighted texts.

In the UK, a campaign has been launched arguing that AI companies should not be allowed to use authors' works without permission.

https://techcrunch.com/2025/03/11/openai-says-it-has-trained-an-ai-thats-really-good-at-creative-writing/

#news
@scopeofai
Meta, the parent company of Facebook, is testing its first in-house chip designed for training artificial intelligence systems. This dedicated accelerator chip aims to reduce Meta's reliance on suppliers like Nvidia and enhance energy efficiency. This initiative is part of Meta's strategy to cut infrastructure costs associated with substantial investments in AI. In collaboration with Taiwan's TSMC, Meta has completed the initial production phase of the chip and plans to scale up production for broader deployment if tests are successful.

🔵 Meta (Facebook's parent company) is building and testing its own dedicated chip for AI training.

This would make it less dependent on companies like Nvidia and also improve energy efficiency.

The chip was produced with Taiwan's TSMC, and if the tests go well, Meta will scale up production.

https://techcrunch.com/2025/03/11/meta-is-reportedly-testing-in-house-chips-for-ai-training/

#news
@scopeofai
Large Language Models (LLMs), such as GPT-4, have shown high accuracy in medical board exams, indicating their potential for clinical decision support. However, their metacognitive abilities—the ability to assess their own knowledge and manage uncertainty—are significantly lacking. This poses risks in medical applications where recognizing limitations and uncertainty is crucial.

To address this, researchers developed MetaMedQA, an enhanced benchmark that evaluates LLMs not just on accuracy but also on their ability to recognize unanswerable questions, manage uncertainty, and provide confidence scores. Testing revealed that while newer and larger models generally perform better in accuracy, most fail to handle uncertainty effectively and often give overconfident answers even when wrong.
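MetaMedQA itself is a benchmark; as a loose sketch of the behaviour it probes (not the benchmark's actual harness), the snippet below asks a chat model for a multiple-choice answer plus a self-reported confidence score, with an explicit abstention option. The model name, prompt wording, and JSON format are assumptions.

```python
# Sketch of eliciting an answer, an abstention option, and a confidence score (illustrative only).
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

QUESTION = (
    "Q: <medical multiple-choice question>\n"
    "A) <option 1>  B) <option 2>  C) <option 3>  D) The question cannot be answered"
)

PROMPT = (
    "Answer the question. If it cannot be answered from the given options, choose D. "
    'Respond as JSON: {"choice": "A|B|C|D", "confidence": <0.0-1.0>}.\n\n' + QUESTION
)

reply = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{"role": "user", "content": PROMPT}],
    response_format={"type": "json_object"},  # ask for parseable JSON
    temperature=0,
)
answer = json.loads(reply.choices[0].message.content)

# The failure mode the benchmark highlights: high confidence on an unanswerable question.
if answer["choice"] != "D" and answer["confidence"] > 0.8:
    print("Confident answer:", answer)
else:
    print("Model abstained or reported low confidence:", answer)
```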


📁 Paper: https://www.nature.com/articles/s41467-024-55628-6.pdf

@scopeofai
@LLM_learning