Continuous Learning_Startup & Investment
We journey together through the captivating realms of entrepreneurship, investment, life, and technology. This is my chronicle of exploration, where I capture and share the lessons that shape our world. Join us and let's never stop learning!
State of GPT talk by Andrej Karpathy: https://www.youtube.com/watch?v=bZQun8Y4L2A&t=373s

Would highly recommend watching the above! A 45-minute lecture going over the state of generative LLMs: how they are trained, what they can and can't do, advanced techniques like CoT, ReAct, Reflection, BabyAGI, and agents in general, and finally some great tips on using LLMs in production. Fairly accessible, but very informative.
Here's an https://assembly.ai transcript and chapter summaries:
πŸ‘‚πŸΌ πŸ€– πŸ“ƒ
https://www.assemblyai.com/playground/transcript/64kyzev80o-6ed4-4902-a066-7df25c363193

Andrej Karpathy is a founding member of OpenAI. He talks about how GPT assistants are trained. In the second part, he takes a look at how we can use these assistants effectively in your applications.

TRAINING NEURAL NETWORKS ON THE INTERNET

We have four major stages: pretraining, supervised fine-tuning, reward modeling, and reinforcement learning. Each stage has a dataset that powers it, and an algorithm that, for our purposes, is an objective for training a neural network.
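The pretraining stage's "objective" is plain next-token prediction. Here is a minimal sketch of that loss with a toy sequence and made-up model probabilities (an illustration, not anything from the talk):

```python
import math

sequence = ["the", "cat", "sat", "<eos>"]

# Pretend model outputs: P(next token | prefix), one distribution per step.
# These numbers are invented purely for illustration.
model_probs = [
    {"the": 0.1, "cat": 0.7, "sat": 0.1, "<eos>": 0.1},    # after "the"
    {"the": 0.05, "cat": 0.05, "sat": 0.8, "<eos>": 0.1},  # after "the cat"
    {"the": 0.05, "cat": 0.05, "sat": 0.1, "<eos>": 0.8},  # after "the cat sat"
]

# Cross-entropy loss: average negative log-likelihood of the true next token.
nll = [-math.log(dist[target]) for dist, target in zip(model_probs, sequence[1:])]
loss = sum(nll) / len(nll)
print(round(loss, 3))  # β‰ˆ 0.268
```

Pretraining is exactly this, repeated over trillions of internet tokens.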

BASE MODELS AND ASSISTANT MODELS

The GPT-4 model that you might be interacting with over the API is not a base model; it's an assistant model. You can even trick base models into acting as assistants, but instead we have a different path to make actual GPT assistants, not just base-model document completers.

REWARD MODELING AND RLHF

In the reward modeling step, we shift our data collection to the form of comparisons. Because we then have a reward model, we can score the quality of any arbitrary completion for any given prompt. At the end, you can deploy an RLHF model.
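Those comparisons can be turned into a training signal with a standard pairwise ranking loss; a minimal sketch of the Bradley-Terry-style objective commonly used for reward models, with made-up scalar scores:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def pairwise_loss(r_chosen, r_rejected):
    # Bradley-Terry pairwise objective: the loss shrinks as the reward
    # model scores the human-preferred completion above the rejected one.
    return -math.log(sigmoid(r_chosen - r_rejected))

# Toy scalar reward-model outputs for two completions of the same prompt.
print(pairwise_loss(2.0, 0.5))  # preferred completion scored higher -> small loss
print(pairwise_loss(0.5, 2.0))  # ranking inverted -> large loss
```

Training on many such comparisons is what lets the reward model score arbitrary completions later.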

COGNITIVE PROCESSES AND GPT

How do we best apply a GPT assistant model to your problems? Think about the rich internal monologue and tool use, and how much work actually goes on computationally in your brain to generate this one final sentence. From GPT's perspective, this is just a sequence of tokens.

TREE OF THOUGHT AND PROMPT ENGINEERING

A lot of people are playing around with prompt engineering to bring back, for LLMs, some of these abilities that we have in our brains. I think this is kind of an equivalent of AlphaGo, but for text. I would not yet advise using it in practical applications.
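A rough sketch of the search idea behind tree-of-thought approaches: propose several candidate "thoughts", score each with an evaluator, keep the best few, and expand them further. `propose` and `evaluate` below are toy stand-ins for LLM calls, applied to a toy string-building goal:

```python
# Toy goal: build the string "abc" one character at a time.
GOAL = "abc"

def propose(state):
    # Stand-in for "ask the LLM for next-step candidates".
    return [state + ch for ch in "abc"]

def evaluate(state):
    # Stand-in for "ask the LLM how promising this partial solution is":
    # here, the number of positions matching the goal's prefix.
    return sum(1 for a, b in zip(state, GOAL) if a == b)

def tree_search(beam_width=2, depth=3):
    frontier = [""]
    for _ in range(depth):
        # Expand every frontier state, then keep only the top candidates.
        candidates = [s for state in frontier for s in propose(state)]
        frontier = sorted(candidates, key=evaluate, reverse=True)[:beam_width]
    return frontier[0]

print(tree_search())  # -> "abc"
```

The AlphaGo analogy: `evaluate` plays the role of a value function guiding which branches of the thought tree get explored.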

WHAT ARE THE QUIRKS OF LLMS?

The next thing I find interesting is that LLMs don't want to succeed; they want to imitate. So at test time, you actually have to ask for good performance. Next up, a lot of people are really interested in retrieval-augmented generation.
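A minimal sketch of the retrieval-augmented generation loop: embed documents, retrieve the ones most similar to the query, and paste them into the prompt as context. The bag-of-words "embedding" and sample documents below are toy stand-ins for a real embedding model and corpus:

```python
import math
from collections import Counter

docs = [
    "GPT assistants are trained with RLHF",
    "Base models complete documents rather than answer questions",
    "Retrieval augmented generation adds relevant context to the prompt",
]

def embed(text):
    # Toy embedding: a bag-of-words count vector.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

def retrieve(query, k=1):
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

context = retrieve("how does retrieval augmented generation work?")
prompt = f"Context: {context[0]}\nQuestion: how does RAG work?"
print(prompt)
```

A production system would swap in a real embedding model and a vector index, but the loop is the same: retrieve, then stuff the working memory of the prompt.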

CONSTRAINT PROMPTING IN LLMS

Next, I wanted to briefly talk about constraint prompting. These are basically techniques for forcing a certain template on the outputs of LLMs. I think this kind of constrained sampling is also extremely interesting.
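One way to picture the template-forcing idea: the model only fills named holes in a fixed template, so the output is guaranteed to match the required structure. `fake_llm` below is a hypothetical stand-in for a real model call, used purely for illustration:

```python
import json
import re

TEMPLATE = '{"name": "<NAME>", "age": <AGE>}'

def fake_llm(slot, context):
    # Stand-in: a real system would ask the LLM for just this slot's value.
    return {"NAME": "Alice", "AGE": "30"}[slot]

def fill_template(template, context=""):
    # Only the <SLOT> holes are generated; everything else is fixed text,
    # so the overall structure of the output cannot drift.
    def repl(match):
        return fake_llm(match.group(1), context)
    return re.sub(r"<(\w+)>", repl, template)

output = fill_template(TEMPLATE)
print(json.loads(output))  # parses -> the JSON structure was enforced
```

Libraries in this space additionally constrain the token sampling itself, but the contract is the same: the template is guaranteed, only the holes are free.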

FINE-TUNING A LANGUAGE MODEL

You can get really far with prompt engineering, but it's also possible to think about fine-tuning your models. Fine-tuning is a lot more technically involved: it requires human contractors for datasets and/or synthetic data pipelines. Break up your task into two major parts.

LIMITS TO FULLY AUTONOMOUS LLMS

There are a large number of limitations to LLMs today, so definitely keep that in mind for all your applications. My recommendation right now: use LLMs in low-stakes applications, and always combine them with human oversight. Think copilots instead of completely autonomous agents.
πŸ§‘πŸΌβ€βœˆοΈ πŸš§πŸ’»
In this post, I try to answer specific questions about the internals of Copilot, while also describing some interesting observations I made as I combed through the code. I will provide pointers to the relevant code for almost everything I talk about, so that interested folks can take a look at the code themselves.

https://thakkarparth007.github.io/copilot-explorer/posts/copilot-internals
<μžκ·Ήμ„ 쀄이고 생각을 늘리기>

μš”μ¦˜ ν˜„λŒ€μΈλ“€μ€ 거의 ADHD μƒνƒœλ‘œ 일을 ν•œλ‹€κ³  생각이 λ“œλŠ” 면이 μžˆλ‹€. μ§€μ†μ μœΌλ‘œ 높은 κ°•λ„μ˜ μžκ·Ήμ— μžμ‹ μ„ λ…ΈμΆœμ‹œν‚€κΈ° 쉽기 λ•Œλ¬Έμ΄λ‹€. 이런 ν™˜κ²½μ†μ—μ„œ 뭐 ν•˜λ‚˜μ— μ°¨λΆ„ν•˜κ²Œ μ§‘μ€‘ν•˜κ³  깊이 μžˆλŠ” 사고λ₯Ό ν•˜κΈ°κ°€ νž˜λ“€λ‹€.

두 κ°€μ§€ 사둀λ₯Ό λ¨Όμ € μ†Œκ°œν•˜κ² λ‹€.

사둀 1)

λ‚΄κ°€ μ•„λŠ” Kλͺ¨μ”¨λŠ” λŒ€κΈ°μ—… μ§μ›μ΄μ—ˆλŠ”λ°, ν•˜λ£¨μ— μ „μ‚¬μ—μ„œ λ“€μ–΄μ˜€λŠ” 업무 μš”μ²­λ§Œ 수백건이라고 ν–ˆλ‹€. κ·Έλž˜μ„œ λ‚ λ§ˆλ‹€ λ°€ 11μ‹œμ— 퇴근을 ν•˜κ³  μžˆμ—ˆλ‹€.

κ·ΈλŸ¬λ‹€κ°€ λ‚˜μ—κ²Œμ„œ μ• μžμΌ 이야기λ₯Ό λ“£κ³  μ‹€ν—˜μ„ ν•΄λ³΄κΈ°λ‘œ κ²°μ‹¬ν–ˆλ‹€. μ •μ‹œ 퇴근. κ·Έλž˜μ„œ νŒ€μž₯μ—κ²Œ μ œμ•ˆμ„ ν–ˆλ‹€. μ˜€λŠ˜λΆ€ν„° 18μ‹œ μ •μ‹œ 퇴근을 ν•˜κ² λ‹€. ν˜Ήμ—¬ 일 μ²˜λ¦¬κ°€ μ‘°κΈˆμ΄λΌλ„ λ–¨μ–΄μ§„λ‹€λŠ” λŠλ‚Œμ΄ λ“€λ©΄ μ–˜κΈ°ν•΄λΌ. λ°”λ‘œ μ›λ³΅ν•˜κ² λ‹€. 그러고 κ·Έλ‚ λΆ€ν„° 18μ‹œ 퇴근을 ν–ˆλ‹€. 집에 였면 저녁 7μ‹œλΆ€ν„° 9μ‹œκΉŒμ§€ 두 μ‹œκ°„μ”© 6μ‚΄ μ•„μ΄λž‘ 놀아쀬닀고 ν•œλ‹€. κ·Έμ „κΉŒμ§€ μ•„μ΄μ—κ²Œ μ•„λΉ λŠ” μ—†λŠ” μ‘΄μž¬μ˜€λ‹€. μ£Όμ€‘μ—λŠ” λ°€ 11μ‹œμ— 였고, μ•„μΉ¨μ—λŠ” μžκΈ°λ³΄λ‹€ λ¨Όμ € λ‚˜κ°€κ³  μ£Όλ§μ—λŠ” 계속 μ“°λŸ¬μ Έ μžˆμ—ˆμœΌλ‹ˆ. 그런 μ•„μ΄μ—κ²Œ "μ•„λΉ "κ°€ 생긴 κ±°λ‹€.

근데 λ¬Έμ œκ°€ ν•˜λ‚˜ μžˆμ—ˆλ‹€. μ •μ‹œ 퇴근을 ν–ˆμœΌλ‹ˆ λ‹€ 처리 λͺ»ν•œ 일듀이 문제. 그런데 λ³΄μ•ˆλ¬Έμ œ λ•Œλ¬Έμ— μ§‘μ—μ„œ νšŒμ‚¬ μ»΄ν“¨ν„°λ‚˜ μžλ£Œμ— μ ‘κ·Όν•  μˆ˜κ°€ μ—†μ—ˆλ‹€. κ·Έλž˜μ„œ κ·Έκ°€ λŒ€μ•ˆμœΌλ‘œ ν–ˆλ˜ κ±°λŠ” λ°€ 11μ‹œλΆ€ν„° 1μ‹œκΉŒμ§€ 두 μ‹œκ°„ λ™μ•ˆ 자기 책상에 이면지 펼치고 μ•‰μ•„μ„œ 였늘 ν–ˆλ˜ 일듀, 내일 ν•  일듀을 μ–΄λ–»κ²Œ ν•΄μ•Ό 더 ν˜„λͺ…ν•˜κ²Œ μ²˜λ¦¬ν•  건가 μ „λž΅μ„ μ§œλŠ” κ±°μ˜€λ‹€. κ·Έκ±Έ λ‚ λ§ˆλ‹€ ν–ˆλ‹€.

그러고 λ‹€μŒλ‚  μΆœκ·Όμ„ ν•˜λ‹ˆ 업무 μš”μ²­ μ€‘μ˜ 50% 이상은 μžλ™μœΌλ‘œ ν•΄κ²°λœ κ²½μš°κ°€ λ§Žμ•˜κ³ (μš”μ²­ν•œ λΆ€μ„œμ—μ„œ λ‹΅λ‹΅ν•˜λ‹ˆ 자체적으둜 ν•΄κ²°), 남은 50%λŠ” μ§€λ‚œ 밀에 κ³ λ―Όν•œ κ²°κ³Ό 더 ν˜„λͺ…ν•œ λ°©λ²•μœΌλ‘œ 처리λ₯Ό ν•΄μ„œ 금방 끝낼 수 μžˆμ—ˆλ‹€.

λ¬Όλ‘  결과적으둜 λ°€ 11μ‹œ 퇴근할 λ•Œλ³΄λ‹€ μˆ˜λ©΄μ‹œκ°„μ΄ μ€„μ—ˆλ‹€κ³  ν•œλ‹€. μ˜ˆμ „μ—λŠ” 집에 λ“€μ–΄μ˜€λ©΄ λ°”λ‘œ μ“°λŸ¬μ Έμ„œ μž€μœΌλ‹ˆκΉŒ. ν•˜μ§€λ§Œ λͺΈμ΄ λŠλΌλŠ” μ—λ„ˆμ§€λŠ” 훨씬 μ’‹μ•„μ‘Œλ‹€κ³  ν•œλ‹€.

Case 2)

Back in my army days, I was assigned to my unit and given a mentor. But it was hard to even catch sight of his face. A few days later I learned that his discharge date was one week away. My mentor's post was battalion maintenance and administration clerk; the job is so heavy and complicated that it supposedly takes about a year of handover before you can do it properly. But this man was being discharged in a week, and even that week he was drifting through. Occasionally he would come down to the maintenance section, answer a question or two, and lie around. The real problem was that nobody, cadre or enlisted, understood his duties precisely.

In the end, my mentor was discharged with me having learned almost nothing, and there was no work manual either. There was literally nothing to refer to.

After much agonizing, the choice I ended up making was to think and act from first principles. Whenever a problem situation arose, I would reason out a logically defensible action from the basic principles as I understood them (for example, "what course of action benefits the Army?") and do it. You could say I acted as if I were designing all the rules and laws myself. And so nothing could stop me. Whatever it was, if I thought it through deeply and just did it, everything worked out.

Surprisingly, this approach worked well. In the end, I built the entire system myself and even won a few awards for it. When an audit came down from corps headquarters, I gathered the civilian employees and officers and gave an informal lecture.

----
Sometimes it helps to limit external stimulation and information and concentrate on thinking. As a bonus, your thinking muscles and skills grow.

So I recommend things like the following:
* When a bug appears, don't immediately throw it into a search box; spend at least 5-10 minutes sketching the problem situation on blank paper and reasoning about the cause
* When you want to get into a completely unfamiliar field, rather than searching the internet, buy three well-regarded books in different styles from a bookstore and read them side by side, comparing them (I call this bounded exploration -- without it, it's easy to waste time poking superficially at sources without ever reading any one of them properly)
* When you have a complex problem to solve, look up no additional information at all; spread out blank paper and spend 30 minutes designing a solution using only logic, your own thinking, and your past experience

https://www.facebook.com/100000557305988/posts/pfbid02joCFDgeyR58vuv2MyZqQWJ1cf7FwrYZHS6FLq9ox8Bqu2RE9cV3HdgzWdHJvopjkl/?mibextid=jf9HGS
πŸ‘5
Could one Large Language Model handle all programming languages? Or should we tailor a model for each? What's your take? #LLM #ProgrammingLanguages https://www.linkedin.com/posts/mateizaharia_introducing-english-as-the-new-programming-activity-7080242815120637952…
Data science is getting so much easier πŸš€

Thanks to ChatGPT, data science is becoming astonishingly easy. πŸ€– Previously, the knowledge needed to build the clustering chart below was as follows.

## Google Colab learning time πŸ“š:

1. Learning basic usage took about one week of study.

2. More complex tasks, such as loading external data or processing large datasets, took an additional 1-2 weeks to learn.

## Data science background knowledge πŸŽ“:

1. Clustering: 1-2 weeks of study for a basic understanding.

2. Clustering evaluation metrics: about one week for a basic understanding of each metric.

3. Data analysis and processing: this topic is broad, so acquiring basic data preprocessing and analysis techniques took at least 1-2 months.

## API knowledge πŸ’»:

1. Firebase Firestore: learning basic Firestore usage took 1-2 weeks.

## Coding skills πŸ–₯️:

1. Python: learning basic Python syntax took about 1-2 months.

2. NumPy: learning basic NumPy features took about 1-2 weeks.

3. Matplotlib: learning to draw basic plots took about one week.

Adding up the learning time for each item above gives roughly: data science basics, about 2-4 months; API knowledge (Firebase Firestore), about 1-2 weeks; coding skills (Python, NumPy, Matplotlib), about 2-3 months. So the total learning time comes to an estimated 4-7 months. πŸ“ˆ

----

# With ChatGPT, it turns into the following. πŸ”„

Since the AI handles the coding and experiment design, that learning time can be excluded. What remains is light background knowledge of data science and an understanding of Google Colab. πŸ€”

1. Data science background knowledge: with an AI assistant's explanations and guidance, this can be compressed to about one month. In some cases, you can skim the basic concepts in two weeks.

2. Google Colab: with the AI assistant's help, the learning time can be cut to about one week. - Honestly, one hour might be enough.

In this case, the total learning time is estimated at about 1-2 months. If you already have coding skills and know how to use the APIs, it can be shortened even further. βŒ›

----

In the end, for a beginner, a 6-month course becomes a 1-month course. πŸŽ‰ In my case, knowing the data science background but not how to use the Python libraries, it shrank from about three weeks to two hours. 😲 And beyond this, learning data science in full can take more than four years.

In the end, the work one senior data scientist can do becomes equivalent to the work of more than ten junior data scientists and junior data engineers.

In Silicon Valley, junior data scientists are already losing their jobs at a rapid pace. 😱

School curricula will probably have to change as well. On the upside, it may become possible to teach deeper theory in the same amount of time. Education should probably also be designed around research methodology rather than hands-on coding. We may see data scientists become collectively stronger on knowledge than on hands-on skills.

---

The prompt used for the scatter plot below:

1. Get the latest 1000 samples from user_tribes collection
2. tribeId is the cluster id; x and y are the coordinates.
3. Measure the homogeneity and completeness using colab.
4. Visualize the results.

I never even mentioned K-means, but it figured it out and used it on its own.
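For reference, the two metrics the prompt asks Colab to compute can be written out from their entropy definitions. A small stdlib-only sketch with toy labels (the actual post relied on ChatGPT-generated code, so this is just an illustration of what is being measured):

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log(c / n) for c in Counter(labels).values())

def conditional_entropy(labels, given):
    # H(labels | given), averaged over the partition induced by `given`.
    n = len(labels)
    h = 0.0
    for g in set(given):
        subset = [l for l, gg in zip(labels, given) if gg == g]
        h += (len(subset) / n) * entropy(subset)
    return h

def homogeneity(true_labels, cluster_ids):
    # 1 when every cluster contains only members of a single class.
    h_c = entropy(true_labels)
    return 1.0 if h_c == 0 else 1.0 - conditional_entropy(true_labels, cluster_ids) / h_c

def completeness(true_labels, cluster_ids):
    # Symmetric to homogeneity with the roles of labels and clusters swapped.
    return homogeneity(cluster_ids, true_labels)

true_labels = [0, 0, 1, 1]
cluster_ids = [0, 0, 1, 1]   # a perfect clustering scores 1.0 on both
print(homogeneity(true_labels, cluster_ids), completeness(true_labels, cluster_ids))
```

In practice you would call `sklearn.metrics.homogeneity_score` and `completeness_score`, which implement the same definitions.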

https://www.facebook.com/634740022/posts/pfbid0cuABUXxgECdMwZfQaZ9u88HqXaLoLKzdJxBGLSsfHMfUovKRdQnuybjUYc9sJycsl/?mibextid=jf9HGS
슀λͺ°ν† ν¬ μž˜ν•˜λŠ” 팁, Conversation Threading

μ†Œμ…œλΌμ΄μ§•μ΄λΌλŠ” λ§₯λ½μ—μ„œ Conv. Threading μ΄λž€, μ˜λ„μ μœΌλ‘œ λ‚˜μ— κ΄€ν•œ ν‚€μ›Œλ“œλ“€μ„ λŒ€ν™”μ— μΆ”κ°€μ •λ³΄λ‘œ ν˜λ¦¬λ―€λ‘œμ¨ μƒλŒ€λ°©(ν˜Ήμ€ κ·Έλ£Ήλ‚΄ 타인)이 κ·Έ ν‚€μ›Œλ“œλ“€μ„ μ€μ€ν•˜μ—¬ μžμ—°μŠ€λŸ½κ²Œ ν ν„°λ ˆμŠ€νŒ…ν•œ λŒ€ν™”κ°€ 이어지도둝 ν•˜λŠ” ν–‰μœ„ λ˜λŠ” λŒ€ν™”λ²•.

예λ₯Ό λ“€μ–΄, κ³ ν–₯이 μ–΄λ””μ„Έμš”? λΌκ³ ν•˜λ©΄ μΌλ°˜μ μœΌλ‘œλŠ” "전남 μˆœμ²œμ΄μš”." 라고 λ‹¨λ‹΅μœΌλ‘œ 끝낼 수 μžˆλŠ”κ±Έ CTλ₯Ό 잘 ν•˜λŠ” μ‚¬λžŒμ€ β€œμ „λ‚¨ μˆœμ²œμ΄μš”, μ—¬μˆ˜ 밀바닀와 κ°€κΉŒμš΄λ° μˆœμ²œλ§ŒμŠ΅μ§€λ‘œ 유λͺ…ν•˜κ³  μƒνƒœν•™μŠ΅ ν•˜μ‹œλŠ” λΆ„λ“€μ˜ μ„±μ§€μ—μš”.” 라고 μ–˜κΈ°λ₯Ό ν•œλ‹€. 그러면 κ·Έ κ·Έλ£Ήμ—μ„œ λˆ„κ΅¬λ“  μ“°λ ˆλ“œλ₯Ό μ΄μ–΄κ°ˆ 수 μžˆλ‹€. μ—¬μˆ˜ λ°€λ°”λ‹€ λ…Έλž˜ μ–˜κΈ°λ₯Ό ν•  μˆ˜λ„, μ—¬μˆ˜ μ—¬ν–‰κ°„ μ–˜κΈ°λ₯Ό ν•  μˆ˜λ„, μŠ΅μ§€ μ–˜κΈ°λ‚˜ μƒνƒœν•™μŠ΅μ— λŒ€ν•œ μ§ˆλ¬Έμ„ ν•  μˆ˜λ„μžˆλ‹€.

κ³Όκ±° λ‚΄κ²Œ 영ν–₯을 쀬던 λ§Žμ€ 리더듀이 (특히 μ˜μ–΄κΆŒ) 이 μŠ€ν‚¬μ„ μžμ—°μŠ€λŸ½κ²Œ μ‚¬μš©ν•˜λŠ”κ±Έ 보고 배우렀고 λ…Έλ ₯ 많이 ν–ˆλ‹€. 그런데 아직도 TMI 와 ν ν„°λ ˆμŠ€νŒ…μ˜ 선을 κ΅¬λΆ„ν•΄μ„œ ν™œμš©ν•˜κΈ° μ°Έ μ–΄λ ΅λ‹€. μ–΄μ¨Œλ“  ν‚€μ›Œλ“œλ₯Ό 흘리렀고 λ…Έλ ₯ν•˜λ©΄, μ£Όλ³€μ΄λ“€μ˜ μ€μ€ν•˜λŠ” μƒν™©λ“€μ—μ„œ 자칫 λŠκΈΈλ§Œν•œ λŒ€ν™”κ°€ μ—°κ²°λ˜κ³  라포λ₯Ό λ”μš± μ‰½κ²Œ λ§Œλ“€ 수 μžˆλ‹€.

https://loopward.com/improve-conversation-skills-using-conversational-threads-and-sharing-experiences/

https://www.facebook.com/1150372185/posts/pfbid02ke1dLH2EPwSGkNSSGVL7NutMUkGN5ADNT2Zzeh3cQE8BK1rmHNoiGwz75kVT22v8l/?mibextid=jf9HGS
πŸ‘3
A few words for women leaders--

There was a talk session with women leaders. I was asked to speak to them from a man's perspective; here are a few of the points I made~

0. It is clear that building a career as a woman means playing on a much more tilted field. Fortunately, things seem to be improving little by little as society changes.

1. How about leveraging your strengths as a woman instead of imitating male leaders?
- In the past, women with a masculine style, tougher than the men, were considered suited for leadership.
- In this uncertain and diverse era, where empathetic, horizontal, inclusive leadership matters, women's strengths are increasingly what leadership needs.
- So exercise your strengths as a woman to the fullest.

2. Have confidence and express it.
- Being gentle and empathetic is not the same as lacking confidence. You can be gentle and still be firm with people who treat you carelessly, and full of confidence in everything you do.
- Don't be overly modest and deferential; be assertive and confident. More often than not, men far less capable than you are brimming with confidence.

3. Take on bigger responsibilities, leadership, and projects.
- Rather than staying bound to your R&R and hesitating, boldly take on projects and responsibilities where you can grow and contribute.

4. Don't blame yourself, and recover quickly from hurt or flaring emotions.
- Don't fault yourself or hold yourself responsible. It is not your fault.
- No emotion is bad in itself, but rather than holding on to fear, disappointment, sadness, or anger for long, or expressing them too strongly, build stress-relief routines (exercise, meditation, walking, etc.) and recover quickly.

5. Shake off perfectionism.
- Be strategically incompetent.
- You don't have to do everything well.
- Is there any need to live life like homework or an exam? Trying to score 100 in everything, at work, at home, with relatives, with in-laws, in society, is just too hard.

6. Look at tormenting bosses and difficult people with compassion.
- Real sociopaths are rarer than you think. Up close, most turn out to be ordinary middle-aged men and women.
- They may all just be struggling to survive, so look at them with compassionate eyes.

7. Walk your own path, rather than being somebody's something.
- Don't give up on your career too early. I've seen far more people regret it later.
- Live your own career and your own life before being somebody's something.

p.s. As one Facebook friend pointed out, I spoke from the individual's perspective rather than the structural one. More than telling individuals how to try harder, what is really needed is a change in society's structural awareness and systems.

https://www.facebook.com/100006237757461/posts/pfbid02Nuwoy8XghsAmtxmR8vfCRi6naXNwMYVS5V4qybY4pgjbturAb1RHV26YSLYQBoXHl/?mibextid=jf9HGS
This covers the hot topics in AI; every one of them is something worth thinking through, and personally I agree with all of it.

And above all, the conclusion is so right... This really is my favorite newsletter.

Listen carefully to other people's opinions, but in the end, make the judgment yourself.

https://luttig.substack.com/p/hallucinations-in-ai
Forwarded from Buff
Notes from meetings with 20-30 companies, by λ†κ΅¬μ²œμž¬
Source: https://blog.naver.com/tosoha1/223143233920

1. Companies in the small and mid-sized cosmetics industry, growing mainly on overseas exports, seem to be showing continued growth.

2. Many companies related to medical devices and beauty were continuing to grow, centered on exports.

3. Petrochemical equipment and materials companies, which I had worried might see orders weaken as oil prices drifted sideways, were also continuing to enjoy a favorable business environment.

4. Shipbuilding, too, with order backlogs full for the next 2-3 years, was talking about taking new orders at favorable ship prices going forward.

5. Business conditions for power equipment companies felt even better than last year. Now they seem to be cautiously talking about a long cycle.

6. The auto sector, where peak-out worries are ever-present, is still maintaining good earnings.

7. Defense-related companies no longer had last year's strong momentum, but were continuing a relatively favorable run.

8. Even companies in the semiconductor industry, which had seemed left for dead, each seemed fairly confident of passing the bottom within Q1 or Q2.
Lessons gleaned from my wise bro

- Trust is built through a tapestry of gentle acts and kept promises.

Doug Leone of Sequoia Capital

Trust is forged through a blend of sincerity and excellence. In the absence of either, securing trust becomes challenging. Integrity is essential for fostering a healthy work environment; without it, you risk alienating your team. On the other hand, a lack of excellence hampers trust-building. The founder sets the tone and vision for a company, and once the company's direction is determined, it's crucial to attract the talent required to turn that vision into reality.

Jeff Bezos:

The founder of Amazon shared his 4-step method for building trust and reputation:
1. Do hard things: Earn trust by doing hard things well over and over again.
2. If you say you're going to do something, do it: Keep your promises and commitments.
3. Take controversial stances: Be willing to take risks and make tough decisions.
4. Have clarity: Be clear in your communication and decision-making.