Github LLMs
750 subscribers
39 photos
3 videos
4 files
53 links
LLM projects
@Raminmousa
This paper explores whether contemporary Large Language Models (LLMs) can pass the Turing test, the benchmark Alan Turing proposed for evaluating machine intelligence. The study evaluated four systems (GPT-4.5, LLaMa-3.1-405B, GPT-4o, and ELIZA) in randomized, controlled three-party Turing tests with two independent populations: UCSD undergraduate students and Prolific workers. Participants held simultaneous conversations with a human and an AI system, then judged which conversational partner they believed was human.

๐Ÿ“ Paper: https://arxiv.org/pdf/2503.23674


@scopeofai
@LLM_learning
๐Ÿ”ฅ4๐Ÿ‘2
Please open Telegram to view this post
VIEW IN TELEGRAM
๐Ÿ‘2
Large Language Model Agent: A Survey on Methodology, Applications and Challenges


Paper: https://arxiv.org/pdf/2503.21460v1.pdf

Code: https://github.com/luo-junyu/awesome-agent-papers

@Machine_learn
๐Ÿ‘4
SeedLM: Compressing LLM Weights into Seeds of Pseudo-Random Generators

📚 Read
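The title points at the core idea: store a PRNG seed (plus a scale factor) per weight block instead of the block itself. Below is a toy, brute-force sketch of that idea using Python's stdlib PRNG; it is an assumption-laden illustration, not the paper's method (SeedLM uses LFSR-generated bases and a different search, and `compress_block`, `decompress_block`, and the seed budget here are all hypothetical names and choices):

```python
import random

def compress_block(block, num_seeds=256):
    """Search for the seed whose scaled pseudo-random vector best
    approximates the weight block (toy version of the SeedLM idea)."""
    best = None  # (error, seed, alpha)
    for seed in range(num_seeds):
        rng = random.Random(seed)
        basis = [rng.uniform(-1.0, 1.0) for _ in block]
        # Optimal scale by least squares: alpha = <w, b> / <b, b>
        dot = sum(w * b for w, b in zip(block, basis))
        norm = sum(b * b for b in basis)
        alpha = dot / norm
        err = sum((w - alpha * b) ** 2 for w, b in zip(block, basis))
        if best is None or err < best[0]:
            best = (err, seed, alpha)
    # Only (seed, alpha) is stored, regardless of block length.
    return best[1], best[2]

def decompress_block(seed, alpha, n):
    """Regenerate the approximation deterministically from the seed."""
    rng = random.Random(seed)
    return [alpha * rng.uniform(-1.0, 1.0) for _ in range(n)]
```

Reconstruction is deterministic: replaying the same seed through the same PRNG yields the same basis vector, so the stored pair `(seed, alpha)` fully determines the decompressed block.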


@LLM_learning
๐Ÿ‘3
Qwen 3 release

📖 Blog


@LLM_learning
โค3๐Ÿ”ฅ1
Recent explorations with commercial Large Language Models (LLMs) have shown that non-expert users can jailbreak LLMs simply by manipulating their prompts, producing degenerate output behavior, privacy and security breaches, offensive outputs, and violations of content-regulation policies. Few studies have formalized and analyzed these attacks and their mitigations. We bridge this gap by proposing a formalism and a taxonomy of known (and possible) jailbreaks. We survey existing jailbreak methods and their effectiveness on open-source and commercial LLMs (such as GPT-based models, OPT, BLOOM, and FLAN-T5-XXL). We further discuss the challenges of jailbreak detection in terms of effectiveness against known attacks. For further analysis, we release a dataset of model outputs across 3,700 jailbreak prompts over 4 tasks.

🗂 Paper: https://arxiv.org/pdf/2305.14965

@scopeofai
@LLM_learning
Forwarded from AI Scope
With so many LLM papers being published, it is hard to keep up and compare results. This study introduces a semi-automated method that uses LLMs to extract and organize experimental results from arXiv papers into a structured dataset called LLMEvalDB, cutting manual effort by over 93%. It reproduces key findings from earlier studies and surfaces new insights: in-context examples help with coding and multimodal tasks, but much less with math reasoning. The dataset updates automatically, making it easier to track LLM performance over time and analyze trends.
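Organizing extracted results hinges on a fixed record schema that every extractor output must validate against. A minimal sketch, assuming a hypothetical schema (the actual LLMEvalDB fields and the `EvalRecord`/`parse_extraction` names are illustrative, not from the paper):

```python
import json
from dataclasses import dataclass

@dataclass
class EvalRecord:
    """One experimental result pulled from a paper (hypothetical schema)."""
    arxiv_id: str
    model: str
    task: str
    metric: str
    value: float

def parse_extraction(raw_json: str) -> list:
    """Validate JSON emitted by an extractor LLM against the schema;
    unexpected or missing fields raise immediately."""
    return [EvalRecord(**row) for row in json.loads(raw_json)]

# Illustrative extractor output for one paper (values are made up):
raw = '''[
  {"arxiv_id": "2502.18791", "model": "ExampleModel",
   "task": "code-generation", "metric": "pass@1", "value": 0.5}
]'''
records = parse_extraction(raw)
```

Validating at ingest time is what makes the dataset comparable across papers: every row is guaranteed to carry the same typed fields, so trends over time can be queried without per-paper cleanup.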

📂 Paper: https://arxiv.org/pdf/2502.18791

โ–ซ๏ธ@scopeofai
โ–ซ๏ธ@LLM_learning
โค2