🔥 Trending Repository: stagehand
📝 Description: The AI Browser Automation Framework
🔗 Repository URL: https://github.com/browserbase/stagehand
🌐 Website: https://stagehand.dev
📖 Readme: https://github.com/browserbase/stagehand#readme
📊 Statistics:
🌟 Stars: 17.4K stars
👀 Watchers: 74
🍴 Forks: 1.1K forks
💻 Programming Languages: TypeScript - JavaScript - HTML
🏷️ Related Topics:
#ai #selenium #agents #puppeteer #playwright #llms
==================================
🧠 By: https://t.iss.one/DataScienceM
• Wait explicitly for a dynamic element to appear.
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
element = WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.ID, "myDynamicElement"))
)
• Get the page source after JavaScript has executed.
dynamic_html = driver.page_source
• Close the browser window.
driver.quit()
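Putting these pieces together, a minimal sketch that hands the rendered HTML to BeautifulSoup (it reuses the imports above and assumes a driver was already created in the earlier setup steps):
from bs4 import BeautifulSoup
try:
    driver.get('https://example.com')  # load the page; JavaScript runs in the real browser
    WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.ID, "myDynamicElement"))
    )
    soup = BeautifulSoup(driver.page_source, 'html.parser')  # parse the rendered DOM
    print(soup.title.get_text())
finally:
    driver.quit()  # always release the browser, even on errors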
VII. Common Tasks & Best Practices
• Handle pagination by finding the "Next" link.
next_page_url = soup.find('a', string='Next')['href']
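A fuller pagination loop, as a sketch (the item-extraction step is left as a placeholder, and relative links are resolved with urljoin):
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

url = 'https://example.com/page/1'
while url:
    soup = BeautifulSoup(requests.get(url).text, 'html.parser')
    # ... extract items from soup here ...
    next_link = soup.find('a', string='Next')
    url = urljoin(url, next_link['href']) if next_link else None  # stop when there is no Next link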
• Save data to a CSV file.
import csv
with open('data.csv', 'w', newline='', encoding='utf-8') as f:
    writer = csv.writer(f)
    writer.writerow(['Title', 'Link'])
    # writer.writerow([title, url]) for each scraped item, in a loop
• Save data to CSV using pandas.
import pandas as pd
df = pd.DataFrame(data, columns=['Title', 'Link'])
df.to_csv('data.csv', index=False)
• Use a proxy with requests.
proxies = {'http': 'http://10.10.1.10:3128', 'https': 'http://10.10.1.10:1080'}
requests.get('https://example.com', proxies=proxies)
• Pause between requests to be polite.
import time
time.sleep(2) # Pause for 2 seconds
• Handle JSON data from an API.
json_response = requests.get('https://api.example.com/data').json()
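A slightly more defensive sketch: check the HTTP status before parsing; the 'results' key below is a hypothetical field name, since the structure depends on the API:
resp = requests.get('https://api.example.com/data')
resp.raise_for_status()  # raise an HTTPError for 4xx/5xx responses
data = resp.json()
for item in data.get('results', []):  # 'results' is a hypothetical key
    print(item)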
• Download a file (like an image).
img_url = 'https://example.com/image.jpg'
img_data = requests.get(img_url).content
with open('image.jpg', 'wb') as handler:
handler.write(img_data)
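For large files, a streamed download avoids holding the whole body in memory; a sketch using requests' stream mode:
with requests.get(img_url, stream=True) as r:
    r.raise_for_status()
    with open('image.jpg', 'wb') as f:
        for chunk in r.iter_content(chunk_size=8192):  # write the file piece by piece
            f.write(chunk)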
• Parse a sitemap.xml to find all URLs.
# Fetch the sitemap.xml file and parse it like any other XML/HTML to extract the <loc> tags.
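A minimal sketch of that idea (assumes the site exposes a standard sitemap at /sitemap.xml; BeautifulSoup's 'xml' parser requires lxml to be installed):
import requests
from bs4 import BeautifulSoup

sitemap = requests.get('https://example.com/sitemap.xml').text
soup = BeautifulSoup(sitemap, 'xml')  # use the XML parser, not the HTML one
urls = [loc.get_text() for loc in soup.find_all('loc')]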
VIII. Advanced Frameworks (Scrapy)
• Create a Scrapy spider (conceptual command).
scrapy genspider example example.com
• Define a parse method to process the response.
# In your spider class:
def parse(self, response):
    # parsing logic here
    pass
• Extract data using Scrapy's CSS selectors.
titles = response.css('h1::text').getall()
• Extract data using Scrapy's XPath selectors.
links = response.xpath('//a/@href').getall()
• Yield a dictionary of scraped data.
yield {'title': response.css('title::text').get()}
• Follow a link to parse the next page.
next_page = response.css('li.next a::attr(href)').get()
if next_page is not None:
    yield response.follow(next_page, callback=self.parse)
• Run a spider from the command line.
scrapy crawl example -o output.json
• Pass arguments to a spider.
scrapy crawl example -a category=books
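Inside the spider, -a arguments arrive as keyword arguments to the constructor (Scrapy also sets them as instance attributes); a sketch:
# In your spider class:
def __init__(self, category=None, *args, **kwargs):
    super().__init__(*args, **kwargs)
    self.start_urls = [f'https://example.com/{category}']  # build URLs from the argument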
• Create a Scrapy Item for structured data.
import scrapy
class ProductItem(scrapy.Item):
    name = scrapy.Field()
    price = scrapy.Field()
• Use an Item Loader to populate Items.
from scrapy.loader import ItemLoader
loader = ItemLoader(item=ProductItem(), response=response)
loader.add_css('name', 'h1.product-name::text')
item = loader.load_item()
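Tying the Scrapy pieces above together, a minimal self-contained spider as a sketch (the start URL and selectors are illustrative; run it with the scrapy crawl command shown earlier):
import scrapy

class ExampleSpider(scrapy.Spider):
    name = 'example'
    start_urls = ['https://example.com']

    def parse(self, response):
        # yield one item per page
        yield {'title': response.css('title::text').get()}
        # follow pagination until there is no next link
        next_page = response.css('li.next a::attr(href)').get()
        if next_page is not None:
            yield response.follow(next_page, callback=self.parse)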
#Python #WebScraping #BeautifulSoup #Selenium #Requests
━━━━━━━━━━━━━━━
By: @DataScienceN ✨