Overview
This guide demonstrates how to scrape data from websites that require user authentication by using persistent browser sessions. You’ll learn to launch a browser session, authenticate manually, and then run automated tasks using that authenticated session.
Launch Browser Session
First, create a new browser session that will persist your authentication cookies. The session returns a session_id and a live_url for manual authentication.
The live_url opens an interactive browser window where you can manually authenticate.
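The original code sample for this step is not reproduced here. As a rough sketch, assuming a REST-style API (the endpoint, auth header, and response field names below are placeholders, not a documented SDK), creating a session might look like:

```python
import json
import urllib.request

API_BASE = "https://api.example.com/v1"  # placeholder endpoint
API_KEY = "your-api-key"                 # placeholder credential

def parse_session_response(payload: dict) -> tuple[str, str]:
    # The create-session response is assumed to carry a session_id
    # (for later task runs) and a live_url (for manual login).
    return payload["session_id"], payload["live_url"]

def create_session() -> tuple[str, str]:
    # POST an empty body to create a persistent browser session;
    # cookies set during manual login are stored under the session_id.
    req = urllib.request.Request(
        f"{API_BASE}/sessions",
        data=b"{}",
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return parse_session_response(json.load(resp))

if __name__ == "__main__":
    session_id, live_url = create_session()
    print("session_id:", session_id)
    print("open this URL to log in:", live_url)
```

Keep hold of the session_id: it is what later task runs use to pick up the cookies you set during manual login.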
Authenticate Manually
Open the live_url in your browser and manually log in to Gmail:
- Navigate to the live URL provided by the session
- Go to gmail.com in the browser
- Enter your email address and password
- Complete any two-factor authentication if required
- Verify you’re successfully logged in to your Gmail inbox
Run Automated Task
Now use the authenticated session to run an automated task. The agent will access Gmail as a logged-in user and retrieve your last email. It automatically uses the stored authentication from your session to access Gmail and extract the requested information.
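As with the session-creation step, the original snippet is not shown here. A minimal sketch, again assuming a placeholder REST endpoint and field names (`/tasks`, `session_id`, and `task` are illustrative, not a documented API):

```python
import json
import urllib.request

API_BASE = "https://api.example.com/v1"  # placeholder endpoint
API_KEY = "your-api-key"                 # placeholder credential

def build_task_payload(session_id: str, task: str) -> dict:
    # Attaching the session_id is what lets the agent reuse the
    # cookies created during your manual login.
    return {"session_id": session_id, "task": task}

def run_task(session_id: str, task: str) -> dict:
    payload = build_task_payload(session_id, task)
    req = urllib.request.Request(
        f"{API_BASE}/tasks",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    result = run_task(
        "your-session-id",
        "Open gmail.com and summarize the most recent email in my inbox.",
    )
    print(result)
```

The task is described in natural language; the agent decides how to navigate, while the session supplies the logged-in state.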
Key Benefits
- Persistent Authentication: Sessions maintain login state across multiple tasks
- Manual Control: You handle the authentication process manually for security
- Automated Execution: Once authenticated, agents can run complex tasks automatically
- Session Reuse: The same session can be used for multiple related tasks
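The session-reuse point above can be sketched as a small helper: one authenticated session drives several tasks in sequence. Here `run_task` is a stand-in for whatever task-execution call your client exposes (a hypothetical callable, not a named API).

```python
from typing import Callable, List

def run_related_tasks(
    session_id: str,
    tasks: List[str],
    run_task: Callable[[str, str], dict],
) -> List[dict]:
    # Every task executes against the same session_id, so each one
    # sees the cookies from the single manual login.
    return [run_task(session_id, task) for task in tasks]

# Example with a stand-in executor that just echoes its inputs:
results = run_related_tasks(
    "session-123",
    ["read the latest email", "list unread senders"],
    lambda sid, task: {"session": sid, "task": task},
)
```

Because authentication lives in the session rather than in any single task, you only log in once for the whole batch.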
Best Practices
- Use descriptive session IDs for better organization
- Keep sessions secure and don’t share session IDs
- Test authentication manually before running automated tasks
- Handle rate limits and be respectful to the target website
Sessions persist cookies and authentication state, making them perfect for accessing protected content while maintaining security through manual authentication.