Agents built on LLMs (LLM agents) further extend these capabilities, allowing them to process user interactions and perform complex operations in diverse task environments. However, when processing and generating massive amounts of data, LLMs and LLM agents risk leaking sensitive information, potentially threatening data privacy. This paper aims to demonstrate the data privacy issues associated with LLMs and LLM agents to facilitate a comprehensive understanding. Specifically, we conduct an in-depth survey of privacy threats, encompassing passive privacy leakage and active privacy attacks. Subsequently, we introduce the privacy protection mechanisms employed by LLMs and LLM agents and provide a detailed analysis of their effectiveness. Finally, we explore the privacy protection challenges for LLMs and LLM agents and outline potential directions for future developments in this domain.
Paper: Link
@scopeofai
https://t.iss.one/LLM_learning
This study focuses on fine-tuning Large Language Models (LLMs) for healthcare information in Vietnamese, a low-resource language, to improve medical information accessibility and healthcare communication in developing countries. The methodology involves selecting base models (BloomZ-3B, LLaMA2-7B, and LLaMA2-13B), compiling a domain-specific dataset of approximately 337,000 prompt-response pairs in Vietnamese from existing datasets, Vietnamese medical online forums, and medical textbooks, and fine-tuning the models using Low-Rank Adaptation (LoRA) and Quantized Low-Rank Adaptation (QLoRA). The fine-tuned models showed enhanced performance, demonstrating the potential to improve healthcare communication in low-resource languages and enhance data privacy and security.
Paper: https://www.sciencedirect.com/science/article/pii/S0169260725000720/pdfft?md5=b348ebfecc8d8f8b481e23ec241da2de&pid=1-s2.0-S0169260725000720-main.pdf
@scopeofai
https://t.iss.one/LLM_learning
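The appeal of LoRA (and its quantized variant QLoRA) in a low-resource setting like this is that the pretrained weights stay frozen and only a tiny low-rank update is trained. A minimal NumPy sketch of the idea; the dimensions and hyperparameters below are illustrative, not taken from the paper:

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=16):
    """Linear layer with a LoRA adapter: the effective weight is
    W + (alpha / r) * B @ A, where W (out x in) is frozen and only
    A (r x in) and B (out x r) are trained."""
    r = A.shape[0]
    return x @ (W + (alpha / r) * (B @ A)).T

d_in, d_out, r = 64, 64, 4
rng = np.random.default_rng(0)
W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable, small init
B = np.zeros((d_out, r))                # trainable, zero-initialized

x = rng.normal(size=(2, d_in))
base = x @ W.T
adapted = lora_forward(x, W, A, B)      # equals base while B is zero

full_params = d_out * d_in              # what full fine-tuning would train
lora_params = r * (d_in + d_out)        # what LoRA trains instead
```

With rank 4 on a 64x64 layer, LoRA trains 512 parameters instead of 4,096 (12.5%); at LLaMA2-7B scale the savings are what make fine-tuning feasible on modest hardware.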
Github LLMs
LLM projects
@Raminmousa
Large Language Models (LLMs), such as GPT-4, have shown high accuracy on medical board exams, indicating their potential for clinical decision support. However, their metacognitive abilities, that is, the ability to assess their own knowledge and manage uncertainty, are significantly lacking. This poses risks in medical applications, where recognizing limitations and uncertainty is crucial.
To address this, researchers developed MetaMedQA, an enhanced benchmark that evaluates LLMs not just on accuracy but also on their ability to recognize unanswerable questions, manage uncertainty, and provide confidence scores. Testing revealed that while newer and larger models generally perform better in accuracy, most fail to handle uncertainty effectively and often give overconfident answers even when wrong.
Paper: https://www.nature.com/articles/s41467-024-55628-6.pdf
@scopeofai
@LLM_learning
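One common way to quantify the overconfidence MetaMedQA probes is expected calibration error (ECE): bin answers by stated confidence and compare each bin's average confidence with its accuracy. A toy sketch; the binning scheme is an illustration, not the paper's exact metric:

```python
def expected_calibration_error(confidences, correct, n_bins=5):
    """ECE: bin predictions by confidence, then take the size-weighted
    average gap between each bin's mean confidence and its accuracy.
    An overconfident model (high confidence, low accuracy) scores high."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        bins[min(int(conf * n_bins), n_bins - 1)].append((conf, ok))
    total = len(confidences)
    ece = 0.0
    for b in bins:
        if b:
            avg_conf = sum(c for c, _ in b) / len(b)
            accuracy = sum(ok for _, ok in b) / len(b)
            ece += len(b) / total * abs(accuracy - avg_conf)
    return ece

# A model that always answers with 90% confidence but is right
# only half the time is badly miscalibrated (ECE near 0.4).
ece = expected_calibration_error([0.9] * 10, [True, False] * 5)
```

A perfectly calibrated model, one whose confidence matches its hit rate in every bin, scores an ECE of zero.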
The rapid advancement of large language models (LLMs), such as ChatGPT and GPT-4, has led to a surge in synthetic text generation across various domains, including journalism, academia, cybersecurity, and online discourse. While these models offer immense benefits, their ability to generate highly realistic text raises concerns regarding misinformation, academic dishonesty, and content authenticity. Consequently, the detection of LLM-generated content has become an essential area of research.
This survey provides a comprehensive overview of existing detection methodologies, benchmarks, and challenges, offering insights into the strengths and weaknesses of current techniques. The study aims to serve as a guiding reference for researchers and practitioners striving to uphold the integrity of digital information in an era dominated by synthetic content.
Paper: https://arxiv.org/abs/2310.15654
@scopeofai
@LLM_learning
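For a flavor of the stylometric signals such detectors combine (alongside model-based scores like perplexity), here is a toy "burstiness" feature, the variation in sentence length, which tends to be higher in human prose. This is an illustrative heuristic, not a method from the survey:

```python
import math
import re

def burstiness(text):
    """Standard deviation of sentence length (in words): a crude
    stylometric feature; human prose tends to score higher than
    the more uniform sentences of LLM output."""
    lengths = [len(s.split()) for s in re.split(r"[.!?]+", text) if s.strip()]
    mean = sum(lengths) / len(lengths)
    return math.sqrt(sum((n - mean) ** 2 for n in lengths) / len(lengths))

score = burstiness("Short. A bit longer sentence. Then a much longer one follows here.")
```

Real detectors feed many such features, plus token-level probabilities from a reference model, into a trained classifier; no single heuristic is reliable on its own.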
This repository is a curated collection of survey papers focused on Large Language Models (LLMs), organized to help researchers and practitioners navigate the rapidly evolving field. It compiles existing surveys across multiple topics, including foundational overviews of LLMs, technical aspects like Transformer architectures and efficient model design, and societal considerations such as alignment with human values, fairness, and safety. The repository also covers specialized areas like multimodal LLMs (handling text, images, etc.), knowledge-augmented models, and applications in education, healthcare, and law. Each section provides direct links to relevant papers (often on arXiv) and related GitHub repositories, emphasizing recent work from the past few years. The repository serves as a centralized resource for understanding both the technical advancements and ethical challenges of LLMs.
Repository: https://github.com/NiuTrans/ABigSurveyOfLLMs
@scopeofai
@LLM_learning
Forwarded from Machine learning books and papers
Magic of open source is taking over the Video LoRA space ✨
Just dropped:
🎬 LTX Video community LoRA trainer with I2V support
🎬 LTX Video Cakify LoRA
🎬 LTX Video Squish LoRA
(🧨 diffusers & Comfy workflow)
trainer: https://github.com/Lightricks/LTX-Video-Trainer
LoRA: https://huggingface.co/Lightricks/LTX-Video-Cakeify-LoRA
LoRA2 : https://huggingface.co/Lightricks/LTX-Video-Squish-LoRA
@Machine_learn
This essay explores whether contemporary Large Language Models (LLMs) can pass the Turing test, a benchmark proposed by Alan Turing to evaluate machine intelligence. The study involved evaluating four systems (GPT-4.5, LLaMa-3.1-405B, GPT-4o, and ELIZA) in randomized, controlled three-party Turing tests with two independent populations: UCSD undergraduate students and Prolific workers. Participants engaged in simultaneous conversations with a human and an AI system before judging which conversational partner they believed was human.
Paper: https://arxiv.org/pdf/2503.23674
@scopeofai
@LLM_learning
Forwarded from Machine learning books and papers
Large Language Model Agent: A Survey on Methodology, Applications and Challenges
Paper: https://arxiv.org/pdf/2503.21460v1.pdf
Code: https://github.com/luo-junyu/awesome-agent-papers
@Machine_learn
Recent explorations with commercial Large Language Models (LLMs) have shown that non-expert users can jailbreak LLMs by simply manipulating their prompts, resulting in degenerate output behavior, privacy and security breaches, offensive outputs, and violations of content regulator policies. Limited studies have been conducted to formalize and analyze these attacks and their mitigations. We bridge this gap by proposing a formalism and a taxonomy of known (and possible) jailbreaks. We survey existing jailbreak methods and their effectiveness on open-source and commercial LLMs (such as GPT-based models, OPT, BLOOM, and FLAN-T5-XXL). We further discuss the challenges of jailbreak detection in terms of their effectiveness against known attacks. For further analysis, we release a dataset of model outputs across 3,700 jailbreak prompts over 4 tasks.
Paper: https://arxiv.org/pdf/2305.14965
@scopeofai
@LLM_learning
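A naive baseline detector, matching prompts against known jailbreak templates, illustrates why detection is hard: it is trivially bypassed by rephrasing, which motivates the paper's call for a formalism and taxonomy. The patterns below are illustrative, not drawn from the paper's dataset:

```python
import re

# Illustrative templates only; real jailbreaks are far more varied.
JAILBREAK_PATTERNS = [
    r"ignore (all|any|previous) (instructions|rules)",
    r"you are (now )?dan",
    r"pretend (that )?you have no restrictions",
]

def flags_jailbreak(prompt: str) -> bool:
    """Flag a prompt that matches a known jailbreak template.
    A paraphrased attack slips straight through, which is exactly
    the gap between detection of known vs. novel attacks."""
    lowered = prompt.lower()
    return any(re.search(p, lowered) for p in JAILBREAK_PATTERNS)
```

A taxonomy helps precisely because it groups attacks by mechanism (persona hijack, instruction override, payload smuggling) rather than by surface wording, so defenses generalize beyond exact templates.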
Deep-Live-Cam
Real time face swap and one-click video deepfake with only a single image
Creator: Hacksider
Stars ⭐️: 50,498
Forks: 7,491
Github Repo:
https://github.com/hacksider/Deep-Live-Cam
@LLM_learning
Forwarded from AI Scope
With so many LLM papers being published, it's hard to keep up and compare results. This study introduces a semi-automated method that uses LLMs to extract and organize experimental results from arXiv papers into a structured dataset called LLMEvalDB. This process cuts manual effort by over 93%. It reproduces key findings from earlier studies and even uncovers new insights, such as how in-context examples help with coding and multimodal tasks but not so much with math reasoning. The dataset updates automatically, making it easier to track LLM performance over time and analyze trends.
Paper: https://arxiv.org/pdf/2502.18791
@scopeofai
@LLM_learning
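The extraction step can be pictured as turning free-form result sentences into structured rows. A toy regex version to make the idea concrete; the actual pipeline delegates this to an LLM precisely because regexes cannot cover the many formats papers use:

```python
import re

def extract_results(text):
    """Toy version of the extraction step: pull (metric, value) pairs
    like "accuracy of 86.4%" out of paper prose into structured rows."""
    pattern = r"(accuracy|F1|BLEU)\s+of\s+([\d.]+)%?"
    return [{"metric": m, "value": float(v)}
            for m, v in re.findall(pattern, text, flags=re.IGNORECASE)]

rows = extract_results("GPT-4 reaches an accuracy of 86.4% on MMLU, "
                       "and an F1 of 92.1 on the dev set.")
```

Once results from many papers land in one schema (paper, model, task, metric, value), trend analyses like the in-context-learning finding above become simple queries.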