Can my robot log in to websites? (Cookies vs. Credentials)

Yes, your robot can log into a website on your behalf using your session cookies or your login credentials.

There are two ways to have the robot log into a website on your behalf:

A. Have the robot use your logged-in session cookies.

Before starting to record, enable the "This website needs logging in" option. Your session cookies will be safely encrypted and stored on Browse AI's AWS infrastructure. Then capture the data you are looking to scrape or monitor.

This works on most websites, but more secure sites may not accept the session cookies if they come from a different IP address. In those cases, approach B may be the only option.

Cookies store information about your login status. If you're already logged into your account in your browser, selecting the session cookies login method can automatically authenticate the robot. This eliminates the need for extra clicks and typing, potentially enhancing the success rate of data extraction.
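
Browse AI handles this behind the scenes, but if you're curious about the mechanics, here is a minimal sketch in Python (using the requests library) of how a reused session cookie authenticates a request without ever touching the login form. The cookie name, value, and URLs are hypothetical placeholders, not Browse AI internals.

```python
import requests

# A sketch of what "reusing session cookies" means under the hood.
session = requests.Session()

# Copy the session cookie from a browser where you are already logged in.
# "sessionid" is a hypothetical name; real sites use their own names.
session.cookies.set(
    "sessionid",
    "paste-your-browser-cookie-value-here",
    domain="example.com",
)

# Because the cookie proves your login status, this request is already
# authenticated - no username, password, or login form involved.
response = session.get("https://example.com/account/dashboard")
print(response.status_code)  # 200 if the site accepted the cookie
```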

Cookies can also work for websites with two-factor authentication (2FA) or multi-factor authentication (MFA). However, this is not a guaranteed solution, as it can vary depending on the specific website's implementation and security measures. If you're working with a site that uses 2FA/MFA, please proceed with an experimental mindset.

Despite their convenience, cookies come with their own set of caveats:

  • Session expiry issues: Cookies often have limited lifespans. Some expire in less than a day, requiring you to update them regularly to keep your robot logged in (see the sketch after this list).
  • IP address sensitivity: Some websites tie cookies to specific IP addresses. If your robot attempts to use a cookie from a different location, it might be rejected, even if the cookie itself is still valid.
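
To make the expiry caveat concrete, here is a rough sketch of how cookie lifetimes could be inspected in Python with the requests library. The domain is a hypothetical placeholder; in practice, Browse AI manages this for you and exposes the fix as the "Update Session Cookies" button described below.

```python
import time
import requests

session = requests.Session()
# The server sets cookies on this response; example.com is a placeholder.
session.get("https://example.com/")

for cookie in session.cookies:
    if cookie.expires is not None:
        # expires is a Unix timestamp; compare it against the current time.
        remaining_hours = (cookie.expires - time.time()) / 3600
        status = "still valid" if remaining_hours > 0 else "EXPIRED"
        print(f"{cookie.name}: {status} ({remaining_hours:.1f}h remaining)")
    else:
        print(f"{cookie.name}: session cookie (gone when the browser closes)")
```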


How do we address cookie-related issues?

To potentially overcome these challenges:

  1. Regularly Update Session Cookies: If your robot relies on cookies, make a habit of refreshing them within the robot's settings to ensure they remain valid. The fresher the cookies, the better. 🍪 Here’s how:
    1. Approve your robot.
    2. Navigate to the Settings tab of your robot.
    3. In the Authentication section, click the “Update Session Cookies” button.


  2. Fall back to the credentials method: In situations where cookie-based login fails due to expiration or IP restrictions, you may need to create a robot that utilises your user credentials.

B. Have the robot log in with your username and password.

Before starting to record, enable the "This website needs logging in" option. While recording, log in like you normally would. The robot will record those actions and securely encrypt and store your credentials on Browse AI's AWS infrastructure. Then capture the data you are looking to scrape or monitor.

After you finish recording and building the robot, it will perform the same steps and log in as you did.
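
For the curious, here is a conceptual sketch in Python (using Playwright, which you do not need to write or install to use Browse AI) of the kind of steps a credentials-based robot replays on each run. The selectors and URLs are hypothetical placeholders.

```python
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("https://example.com/login")

    # Each typed field and click below is an extra step that can fail if
    # the page layout changes - the drawback discussed next.
    page.fill("#username", "your-username")
    page.fill("#password", "your-password")
    page.click("button[type=submit]")

    # Wait until the site redirects to the logged-in area.
    page.wait_for_url("https://example.com/dashboard")
    browser.close()
```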

While simple, this approach has a notable drawback:


  • Unnecessary interactions: Logging in typically requires typing your credentials and clicking buttons. These extra steps introduce potential points of failure, such as typos, especially if the website's layout changes or the robot encounters unexpected elements (for example, on websites that run A/B tests).

How do we mitigate the drawbacks of using credentials?

If you find that the user credentials approach is causing issues, you can either:

  1. Re-train your robot, using only the clicks and keyboard inputs that are strictly necessary as you log in and navigate through your target page - the fewer interactions there are, the better. Here's how to re-train your robot:
    1. Approve your robot.
    2. Navigate to the Settings tab of your robot.
    3. In the Danger Zone section, click the "Re-train robot" button.


  2. Consider creating a new robot that utilises cookies for login. This can potentially streamline the process and improve reliability.


The ideal login method depends on your specific circumstances or use case. If your website's login process is straightforward and stable, using credentials might be sufficient. However, if you prioritize minimizing interactions and potential errors, cookies could offer a smoother experience.


By understanding the pros and cons of each method and implementing the suggested solutions, you can optimize your robot's login process and ensure reliable data extraction from your Origin URLs.


A warning: Cloud automations that log in can potentially be detected.

Browse AI leverages many techniques to avoid being detected by websites. This includes opening the site in a regular browser, scrolling down and clicking on elements like a person would, using proxy servers to change the IP address, and more.

That being said, when you use cloud automation software to log into a site, the site might detect that you are logging in from different IP addresses. Popular sites with advanced security measures can become suspicious of this and may send you a warning or block your account.

If you believe this is a risk for the automation you need, we recommend using local automation so the robot runs from the same IP address as yours. In general, we recommend using your robots for publicly available data extraction.
