Web Scraping with Visual Workflows: A Beginner's Guide
What is Visual Web Scraping?
Traditional web scraping requires writing code: Python scripts with BeautifulSoup, or Node.js scripts with Puppeteer. SkillChat changes this by letting you build scrapers visually.
Instead of writing `page.goto(url)` and `page.$$eval(selector)`, you drag nodes onto a canvas and connect them.
Building Your First Scraper
Step 1: Create a Scraper Workflow
Click **New Workflow**, select "Web Scraper" category, and name it (e.g., "Product Price Tracker").
Step 2: Navigate to a Page
Drag a **Scrape Page** node onto the canvas and set the URL. Choose "Network Idle" as the wait condition so the scraper waits until all content has finished loading before anything is extracted.
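SkillChat's internals aren't shown here, but "network idle" generally means the page has had no in-flight requests for a short window. A minimal sketch of that wait loop, with illustrative function and parameter names that are assumptions, not SkillChat's API:

```python
import time

def wait_for_network_idle(in_flight, idle_ms=500, timeout_s=30.0):
    """Return True once no requests have been in flight for idle_ms.

    in_flight: callable returning the current number of open requests.
    (Illustrative sketch only; not SkillChat's internal API.)
    """
    deadline = time.monotonic() + timeout_s
    idle_since = None
    while time.monotonic() < deadline:
        if in_flight() == 0:
            if idle_since is None:
                idle_since = time.monotonic()
            elif (time.monotonic() - idle_since) * 1000 >= idle_ms:
                return True   # quiet long enough: page counts as loaded
        else:
            idle_since = None  # a new request started; reset the window
        time.sleep(0.01)
    return False               # timed out while requests kept firing
```

The alternative wait conditions you may see in browser tooling ("DOM loaded", fixed delays) fire earlier but can miss content fetched by JavaScript, which is why "Network Idle" is the safer default for scraping.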
Step 3: Extract Data
Add an **Extract Data** node and set a CSS selector that targets the elements you want, such as `.price` for every element carrying the `price` class.
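Under the hood, extraction amounts to matching elements against your selector and pulling out their text. A rough, self-contained sketch of the simplest case, a single class selector (the HTML snippet and class names are invented for illustration):

```python
from html.parser import HTMLParser

class ClassTextExtractor(HTMLParser):
    """Collect the text of elements carrying a given class attribute,
    roughly what an Extract Data node does with a `.classname` selector."""

    def __init__(self, target_class):
        super().__init__()
        self.target_class = target_class
        self.capturing = False
        self.results = []

    def handle_starttag(self, tag, attrs):
        classes = (dict(attrs).get("class") or "").split()
        if self.target_class in classes:
            self.capturing = True

    def handle_data(self, data):
        if self.capturing and data.strip():
            self.results.append(data.strip())
            self.capturing = False

html = '<ul><li class="price">$19.99</li><li class="price">$5.00</li></ul>'
parser = ClassTextExtractor("price")
parser.feed(html)
print(parser.results)  # ['$19.99', '$5.00']
```

A real engine supports the full CSS selector grammar (descendants, attributes, `:nth-child`, and so on); the point here is only that a selector names a set of elements and extraction reads their contents.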
Step 4: Handle Pagination
For multi-page scraping, use a **Loop** node to visit each page in turn and run the same extraction on every one.
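Conceptually, the Loop node repeats the Scrape Page → Extract Data pair while advancing to the next page. In plain Python the pattern looks roughly like this, where `fetch_page` stands in for whatever fetches and extracts a single page (names are illustrative):

```python
def scrape_all_pages(fetch_page, start_url, max_pages=10):
    """Follow 'next page' links until none remain or max_pages is hit.

    fetch_page(url) -> (items, next_url_or_None); illustrative only.
    """
    items, url = [], start_url
    for _ in range(max_pages):        # hard cap, like a loop's iteration limit
        page_items, url = fetch_page(url)
        items.extend(page_items)
        if url is None:               # no "next" link: we're done
            break
    return items

# Usage with a stub that serves three pages:
pages = {
    "p1": (["a", "b"], "p2"),
    "p2": (["c"], "p3"),
    "p3": (["d"], None),
}
print(scrape_all_pages(pages.get, "p1"))  # ['a', 'b', 'c', 'd']
```

The iteration cap matters in practice: it keeps a scraper from running forever when a site's "next" link loops back on itself.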
Step 5: Save Results
Connect an **Output Data** node to collect all extracted data. The results are saved with your execution history.
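The Output Data node gathers whatever the upstream extraction steps produced, typically one record per matched element. A hypothetical run might yield data shaped like this (the field names and values are invented for illustration):

```python
# Hypothetical output of a two-page product scrape:
results = [
    {"name": "Wireless Mouse", "price": "$19.99", "page": 1},
    {"name": "USB-C Hub",      "price": "$34.50", "page": 2},
]
# Each record pairs the extracted fields with the page it came from.
assert all({"name", "price", "page"} <= rec.keys() for rec in results)
```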
Advanced Techniques
Screenshots for Debugging
Add a **Screenshot** node after navigation to capture what the page looks like. Screenshots are stored in Supabase Storage and viewable in your execution results.
Typing into Search Fields
Use the **Type Text** node to fill in search fields and submit queries before scraping the results.
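The node's settings boil down to a target selector and the text to type. A hypothetical configuration, expressed as a plain dictionary — the field names here are assumptions, not SkillChat's exact schema:

```python
# Hypothetical Type Text node settings (field names are assumptions):
type_text_node = {
    "type": "type_text",
    "selector": "input[name='q']",   # the search box to target
    "text": "wireless headphones",   # what to type into it
    "press_enter": True,             # submit the form after typing
}
```

Pair it with a Scrape Page node that waits for the results page to settle, then extract as usual.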
Scrolling for Lazy Content
Many sites load more content only as you scroll. Use the **Scroll Page** node to trigger that lazy loading before you extract.
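A common pattern for lazily loaded pages is to scroll repeatedly until no new items appear. A sketch of that logic, with stub callbacks standing in for the real browser actions (illustrative only, not SkillChat's internals):

```python
def scroll_until_stable(scroll_once, item_count, max_scrolls=20):
    """Scroll until the number of visible items stops growing."""
    last = item_count()
    for _ in range(max_scrolls):
        scroll_once()
        current = item_count()
        if current == last:   # nothing new loaded: stop scrolling
            break
        last = current
    return last

# Usage with stubs simulating a feed that tops out at 25 items:
state = {"n": 0}
def scroll_once():
    state["n"] = min(state["n"] + 10, 25)
def item_count():
    return state["n"]

print(scroll_until_stable(scroll_once, item_count))  # 25
```

The `max_scrolls` cap plays the same role as the pagination limit above: it bounds the run on pages with effectively infinite feeds.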