Generative AI
24.3K subscribers
480 photos
2 videos
81 files
259 links
✅ Welcome to Generative AI
👨‍💻 Join us to understand and use the tech
👩‍💻 Learn how to use OpenAI & ChatGPT
🤖 The REAL No.1 AI Community

Admin: @coderfun
GEN AI Oracle FREE course

https://education.oracle.com/genai/
โค2
Inside Generative AI, 2024.epub
4.6 MB
Inside Generative AI
Rick Spair, 2024
๐Ÿ‘5โค2
AI.pdf
37.3 MB
โค7
Neural Networks and Deep Learning
Neural networks and deep learning are integral parts of artificial intelligence (AI) and machine learning (ML). Here's an overview:

1. Neural Networks: Neural networks are computational models inspired by the human brain's structure and functioning. They consist of interconnected nodes (neurons) organized in layers: an input layer, hidden layers, and an output layer.

Each neuron receives input, processes it through an activation function, and passes the output to the next layer. Neurons in subsequent layers perform more complex computations based on previous layers' outputs.

Neural networks learn by adjusting weights and biases associated with connections between neurons through a process called training. This is typically done using optimization techniques like gradient descent and backpropagation.
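The training loop described above can be sketched in a few lines of NumPy. This is a minimal illustration, not from the original post: a two-layer network learning XOR, assuming sigmoid activations and a squared-error loss, with hand-written backpropagation and gradient descent.

```python
import numpy as np

# Tiny two-layer network trained on XOR with plain gradient descent.
# Sigmoid activations and squared-error loss are assumed for simplicity.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Weights and biases: input -> hidden (4 units) -> output
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

lr, losses = 0.5, []
for _ in range(5000):
    # Forward pass: each layer applies weights, adds a bias, then activates
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(((out - y) ** 2).mean()))

    # Backpropagation: chain rule from the loss back through each layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent update of the weights and biases
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)
```

Tracking `losses` shows the error shrinking as the weights and biases are adjusted, which is exactly the training process described here.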

2. Deep Learning: Deep learning is a subset of ML that uses neural networks with multiple layers (hence the term "deep"), allowing them to learn hierarchical representations of data.

These networks can automatically discover patterns, features, and representations in raw data, making them powerful for tasks like image recognition, natural language processing (NLP), speech recognition, and more.

Deep learning architectures such as Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Long Short-Term Memory networks (LSTMs), and Transformer models have demonstrated exceptional performance in various domains.

3. Applications:

Computer Vision: Object detection, image classification, facial recognition, etc., leveraging CNNs.

Natural Language Processing (NLP): Language translation, sentiment analysis, chatbots, etc., utilizing RNNs, LSTMs, and Transformers.

Speech Recognition: Speech-to-text systems using deep neural networks.

4. Challenges and Advancements: Training deep neural networks often requires large amounts of data and computational resources. Techniques like transfer learning, regularization, and optimization algorithms aim to address these challenges.

Advancements in hardware (GPUs, TPUs), algorithms (improved architectures like GANs, Generative Adversarial Networks), and techniques (attention mechanisms) have significantly contributed to the success of deep learning.

5. Frameworks and Libraries: There are various open-source libraries and frameworks (TensorFlow, PyTorch, Keras, etc.) that provide tools and APIs for building, training, and deploying neural networks and deep learning models.
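As a sketch of what these frameworks provide, the same kind of small network can be built and trained in a few lines of PyTorch. The toy task below (predicting whether two inputs sum to more than 1) is purely illustrative; layers, autograd, and optimizers all come from the framework.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# A small feed-forward network: the framework supplies the layers,
# automatic backpropagation (autograd), and the optimizer.
model = nn.Sequential(nn.Linear(2, 8), nn.ReLU(), nn.Linear(8, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

# Illustrative toy task: does x1 + x2 exceed 1?
X = torch.rand(64, 2)
y = (X.sum(dim=1, keepdim=True) > 1).float()

losses = []
for _ in range(200):
    optimizer.zero_grad()          # clear old gradients
    loss = loss_fn(model(X), y)    # forward pass + loss
    loss.backward()                # backpropagation via autograd
    optimizer.step()               # gradient-descent update
    losses.append(loss.item())
```

Compare this with the hand-written NumPy version of the same idea: the framework replaces all of the manual gradient bookkeeping.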

Join for more: https://t.iss.one/machinelearning_deeplearning
๐Ÿ‘4
How Coders Can Survive—and Thrive—in a ChatGPT World

Artificial intelligence, particularly generative AI powered by large language models (LLMs), could upend many coders' livelihoods. But some experts argue that AI won't replace human programmers, at least not immediately.

"You will have to worry about people who are using AI replacing you," says Tanishq Mathew Abraham, a recent Ph.D. in biomedical engineering at the University of California, Davis, and CEO of the medical AI research center MedARC.

Here are some tips and techniques for coders to survive and thrive in a generative AI world.

Stick to Basics and Best Practices
While the myriad AI-based coding assistants could help with code completion and code generation, the fundamentals of programming remain: the ability to read and reason about your own and others' code, and understanding how the code you write fits into a larger system.

Find the Tool That Fits Your Needs
Finding the right AI-based tool is essential. Each tool has its own ways to interact with it, and there are different ways to incorporate each tool into your development workflow, whether that's automating the creation of unit tests, generating test data, or writing documentation.

Clear and Precise Conversations Are Crucial
When using AI coding assistants, be detailed about what you need and view it as an iterative process. Abraham proposes writing a comment that explains the code you want so the assistant can generate relevant suggestions that meet your requirements.
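As an illustration of that advice, a comment like the one below spells out inputs, output, and tie-breaking rules so that an assistant's completion has enough detail to match the requirement. The function name, spec, and implementation here are hypothetical stand-ins for what an assistant might generate.

```python
import re
from collections import Counter

# Prompt-style comment for an AI coding assistant (hypothetical spec):
# Return the n most frequent words in `text`, lowercased, ignoring
# punctuation; break ties alphabetically.
def top_words(text: str, n: int) -> list[str]:
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    # Sort by descending frequency, then alphabetically for ties
    return sorted(counts, key=lambda w: (-counts[w], w))[:n]

result = top_words("The cat and the dog chased the cat.", 2)
# -> ['the', 'cat']
```

A vaguer comment ("count the words") would leave the casing, punctuation, and tie-breaking behavior up to the model, which is exactly where generated code tends to drift from what you actually need.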

Be Critical and Understand the Risks
Software engineers should be critical of the outputs of large language models, as they tend to hallucinate and produce inaccurate or incorrect code. "It's easy to get stuck in a debugging rabbit hole when blindly using AI-generated code, and subtle bugs can be difficult to spot," Vaithilingam says.
๐Ÿ‘5
🗂 A collection of good free Gen AI courses


🔹 Generative artificial intelligence

1️⃣ Generative AI for Beginners course: building generative artificial intelligence apps.

2️⃣ Generative AI Fundamentals course: getting to know the basic principles of generative artificial intelligence.

3️⃣ Intro to Gen AI course: from learning large language models to understanding the principles of responsible artificial intelligence.

4️⃣ Generative AI with LLMs course: learn business applications of artificial intelligence with AWS experts in a practical way.

5️⃣ Generative AI for Everyone course: what generative artificial intelligence is, how it works, and what uses and limitations it has.
๐Ÿ‘9โค5
Nvidia delays next gen AI chip as investors issue โ€˜bubbleโ€™ warning

Nvidia's highly anticipated "Blackwell" B200 artificial intelligence chip will reportedly be delayed, sending the near-term future of the entire AI industry into a state of uncertainty.

Tech news outlet The Information claims that a Microsoft employee and at least two other people familiar with the situation have stated that the new chip's launch date has been pushed back by at least three months due to a design flaw.

While Nvidia hadn't given a public launch date, CEO Jensen Huang announced on July 31 at the SIGGRAPH event in Denver, Colorado, that the company would begin sending engineering samples "this week."

Source: MSN
Tecnod8 AI
Generative AI - LLM Intern Internship ( Remote )

๐ƒ๐ฎ๐ซ๐š๐ญ๐ข๐จ๐ง : 3-6 months (10,000 )

๐‘๐ž๐ช๐ฎ๐ข๐ซ๐ž๐ ๐ฌ๐ค๐ข๐ฅ๐ฅ๐ฌ :
1. Proficiency in Python and experience with machine learning frameworks (TensorFlow, PyTorch).
2. Experience working with large datasets and data preprocessing techniques.
3. Familiarity with language models and generative AI is highly desirable.
4. Self-motivated, eager to learn, and able to thrive in a fast-paced environment.
5. Excellent problem-solving skills and ability to work collaboratively in a team.
6. Strong communication skills to effectively express ideas and solutions.

Benefits:
1. Potential for a Pre-Placement Offer (PPO) to join the founding team of the GenAI startup.
2. Flexible work hours.
3. Valuable industry exposure in Generative AI.

๐‚๐ฅ๐ข๐œ๐ค ๐จ๐ง ๐ญ๐ก๐ž ๐‹๐ข๐ง๐ค ๐๐ž๐ฅ๐จ๐ฐ ๐“๐จ ๐€๐ฉ๐ฉ๐ฅ๐ฒ๐Ÿ‘‡
https://www.linkedin.com/jobs/view/3991641317/
๐Ÿ‘4
Meta just announced a new LLM Evaluation Research Grant aimed at boosting innovation in the field of LLM evaluations. This grant offers $200K in funding to selected recipients to accelerate their research, particularly in areas like complex reasoning, emotional & social intelligence, and agentic behavior.

Proposals are being accepted until September 6th. You can check out all the details here [https://llama.meta.com/llm-evaluation-research-grant/?utm_source=linkedin&utm_medium=organic_social&utm_content=image&utm_campaign=llama].
๐Ÿ‘4โค2
Generative AI Apps

• ChatGPT, Pricing: $20/month for GPT-4. Free GPT-3.5.
• Claude, Pricing: $20/month for Claude 3 Opus. Free Claude 3 Sonnet.
• Google Gemini, Pricing: $20/month for Gemini Advanced. Free Gemini.
• Microsoft Copilot, Pricing: $20/month for Copilot Pro. Free Copilot.
• Perplexity, Pricing: $20/month. Free plan with limited features.
• Pi, Pricing: Free
๐Ÿ‘12
Future Trends in Artificial Intelligence 👇👇

1. AI in healthcare: With the increasing demand for personalized medicine and precision healthcare, AI is expected to play a crucial role in analyzing large amounts of medical data to diagnose diseases, develop treatment plans, and predict patient outcomes.

2. AI in finance: AI-powered solutions are expected to revolutionize the financial industry by improving fraud detection, risk assessment, and customer service. Robo-advisors and algorithmic trading are also likely to become more prevalent.

3. AI in autonomous vehicles: The development of self-driving cars and other autonomous vehicles will rely heavily on AI technologies such as computer vision, natural language processing, and machine learning to navigate and make decisions in real-time.

4. AI in manufacturing: The use of AI and robotics in manufacturing processes is expected to increase efficiency, reduce errors, and enable the automation of complex tasks.

5. AI in customer service: Chatbots and virtual assistants powered by AI are anticipated to become more sophisticated, providing personalized and efficient customer support across various industries.

6. AI in agriculture: AI technologies can be used to optimize crop yields, monitor plant health, and automate farming processes, contributing to sustainable and efficient agricultural practices.

7. AI in cybersecurity: As cyber threats continue to evolve, AI-powered solutions will be crucial for detecting and responding to security breaches in real-time, as well as predicting and preventing future attacks.

Like for more ❤️

Artificial Intelligence
โค11๐Ÿ‘10
🧱 Large Language Models with Python

Learn how to build your own large language model, from scratch. This course goes into the data handling, math, and transformers behind large language models. You will use Python.


🔗 Course Link
๐Ÿ‘5
Will LLMs always hallucinate?

As large language models (LLMs) become more powerful and pervasive, it's crucial that we understand their limitations.

A new paper argues that hallucinations - where the model generates false or nonsensical information - are not just occasional mistakes, but an inherent property of these systems.

While the idea of hallucinations as features isn't new, the researchers' explanation is.

They draw on computational theory and Gödel's incompleteness theorems to show that hallucinations are baked into the very structure of LLMs.

In essence, they argue that the process of training and using these models involves undecidable problems - meaning there will always be some inputs that cause the model to go off the rails.

This would have big implications. It suggests that no amount of architectural tweaks, data cleaning, or fact-checking can fully eliminate hallucinations.

So what does this mean in practice? For one, it highlights the importance of using LLMs carefully, with an understanding of their limitations.

It also suggests that research into making models more robust and understanding their failure modes is crucial.

No matter how impressive the results, LLMs are not oracles; they are tools with inherent flaws and biases.

LLM & Generative AI Resources: https://t.iss.one/generativeai_gpt
๐Ÿ‘10
HandsOnLLM/Hands-On-Large-Language-Models
Official code repo for the O'Reilly Book - "Hands-On Large Language Models"
Language: Jupyter Notebook
Total stars: 194
Stars trend (16-17 Sep 2024): +75 stars over 12 hours

#jupyternotebook
#artificialintelligence, #book, #largelanguagemodels, #llm, #llms, #oreilly, #oreillybooks
๐Ÿ‘5โค1
New research out of Hong Kong suggests LLMs and humans remember things in similar ways.

Both humans and AI recall memories when triggered by input, rather than static info storage.

If proven correct, this suggests a smaller fundamental difference between AI and human cognition than previously assumed.
๐Ÿ‘2โค1
Forwarded from Artificial Intelligence
LLM Cheatsheet.pdf
3.5 MB
๐Ÿ‘6โค4๐Ÿ”ฅ2๐Ÿ‘1
OpenAI Mafia 🔥

Over 87 former employees have launched around 32 AI startups, and the OpenAI mafia just keeps getting bigger!

Notable ventures include Andrej Karpathy's Eureka Labs and Ilya Sutskever's Safe Superintelligence Inc. With founders like Dario Amodei of Anthropic and Tim Salimans of Aidence, these ex-OpenAI talents are reshaping the AI landscape.

Today, several former OpenAI employees have launched their own AI startups. Companies such as Anthropic, Pilot, and Perplexity have collectively raised almost $10 billion. Many of these startups focus on AI safety, robotics, and AI applications across various industries.

OpenAI had approximately 2,600 employees as of last month, and who knows how many more AI startups will spin out of the company. It is fascinating to see new tech entrepreneurs emerge from the OpenAI ecosystem, which is acting as a training ground for future AI leaders.
๐Ÿ‘13โค1
Stanford just uploaded their new "Building LLMs" lecture.

"This lecture provides a concise overview of building a ChatGPT-like model, covering both pretraining (language modeling) and post-training (SFT/RLHF).

For each component, it explores common practices in data collection, algorithms, and evaluation methods." https://www.youtube.com/watch?v=9vM4p9NN0Ts
๐Ÿ‘6โค1๐Ÿ‘Ž1