This Python script automates browsing actions through proxies and provides a graphical user interface (GUI) for user input. It uses Selenium for web browsing automation, threading for parallel processing, and Tkinter for the GUI. Here’s a breakdown of each major function and section:
1. scroll_and_open_articles:
- This function opens a specified URL, scrolls down the page smoothly, and randomly clicks on a few links (simulating human-like behavior).
- It scrolls both down and slightly back up to simulate natural scrolling and pauses for a random amount of time to avoid detection.
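A minimal sketch of what scroll_and_open_articles could look like, assuming an already-created Selenium driver is passed in; the scroll step sizes, pause ranges, and link selection below are illustrative guesses rather than the script's exact values:

```python
import random
import time

from selenium.webdriver.common.by import By


def scroll_and_open_articles(driver, url):
    """Open a URL, scroll down in small steps, and click a few links at random."""
    driver.get(url)
    time.sleep(random.uniform(2, 5))  # settle after the page loads

    # Scroll down in increments, occasionally nudging back up to look natural.
    for _ in range(random.randint(5, 10)):
        driver.execute_script("window.scrollBy(0, arguments[0]);", random.randint(300, 700))
        time.sleep(random.uniform(0.5, 2.0))
        if random.random() < 0.3:
            driver.execute_script("window.scrollBy(0, arguments[0]);", -random.randint(50, 150))

    # Click one or two visible links, returning to the original page each time.
    for _ in range(random.randint(1, 2)):
        links = [a for a in driver.find_elements(By.TAG_NAME, "a")
                 if a.is_displayed() and a.get_attribute("href")]
        if not links:
            break
        try:
            random.choice(links).click()
            time.sleep(random.uniform(3, 8))
            driver.back()
            time.sleep(random.uniform(1, 3))
        except Exception:
            continue  # stale or unclickable element; just move on
```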
2. scroll_and_open_multiple_articles:
- Takes a list of URLs and uses scroll_and_open_articles to open and interact with each URL in the list.
- Waits for a random time between URLs to make the browsing behavior more realistic.
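The wrapper is likely little more than a loop with randomized pauses; a sketch under that assumption, with min_wait and max_wait as placeholder parameters (possibly fed from the GUI's wait-time entries):

```python
import random
import time


def scroll_and_open_multiple_articles(driver, urls, min_wait=5, max_wait=15):
    """Visit each URL in turn, reusing scroll_and_open_articles for the interaction."""
    for url in urls:
        scroll_and_open_articles(driver, url)
        # Random pause between URLs so the browsing pattern looks less mechanical.
        time.sleep(random.uniform(min_wait, max_wait))
```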
3. generate_random_system_specs:
- Generates random system specifications for RAM, CPU, and ROM. These could be used to simulate different device capabilities, which might help in some scenarios that require varying device profiles.
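A helper like this typically just picks from preset values; the specific option lists below are assumptions:

```python
import random


def generate_random_system_specs():
    """Return a random RAM/CPU/ROM combination for a simulated device profile."""
    return {
        "ram_gb": random.choice([4, 8, 16, 32]),
        "cpu_cores": random.choice([2, 4, 6, 8]),
        "rom_gb": random.choice([64, 128, 256, 512]),
    }
```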
4. get_timezone_from_proxy:
- Uses ipapi.co to get the timezone based on the provided proxy's IP.
- Maps certain country codes to timezones using a dictionary for better control and compatibility with different proxies.
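A sketch of the lookup, assuming proxies are given as host:port and that the country-code dictionary serves as a fallback when the API response is incomplete (the real script's mapping may differ):

```python
import requests

# Illustrative fallback map; the real script may cover more countries.
COUNTRY_TIMEZONES = {
    "US": "America/New_York",
    "GB": "Europe/London",
    "DE": "Europe/Berlin",
    "IN": "Asia/Kolkata",
}


def get_timezone_from_proxy(proxy):
    """Query ipapi.co through the proxy and return a timezone string."""
    proxies = {"http": f"http://{proxy}", "https": f"http://{proxy}"}
    try:
        data = requests.get("https://ipapi.co/json/", proxies=proxies, timeout=10).json()
        return data.get("timezone") or COUNTRY_TIMEZONES.get(data.get("country_code"), "UTC")
    except requests.RequestException:
        return "UTC"  # safe default if the lookup fails
```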
5. get_geolocation_from_proxy:
- Also uses ipapi.co to fetch the geolocation (latitude and longitude) for a given proxy IP.
- Returns the latitude and longitude, which could be used to emulate localized browsing behavior.
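Geolocation can come from the same ipapi.co endpoint; a short sketch under the same host:port assumption:

```python
import requests


def get_geolocation_from_proxy(proxy):
    """Return (latitude, longitude) for the proxy's exit IP, or None on failure."""
    proxies = {"http": f"http://{proxy}", "https": f"http://{proxy}"}
    try:
        data = requests.get("https://ipapi.co/json/", proxies=proxies, timeout=10).json()
        return data.get("latitude"), data.get("longitude")
    except requests.RequestException:
        return None
```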
6. open_browser_with_proxy:
- This is the main function to handle browser automation:
- Configures Chrome with a specific proxy and user-agent.
- Performs a Google search for the provided keyword and clicks one of the results at random.
- Calls scroll_and_open_multiple_articles to further scroll through and click on articles within the resulting page.
- Uses random wait times between actions to simulate natural user interactions.
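A condensed sketch of the browser setup and search flow; the --proxy-server and --user-agent switches are standard Chrome flags, while the function signature and the h3 result selector are assumptions:

```python
import random
import time

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys


def open_browser_with_proxy(proxy, user_agent, keyword, urls):
    options = webdriver.ChromeOptions()
    options.add_argument(f"--proxy-server={proxy}")
    options.add_argument(f"--user-agent={user_agent}")
    driver = webdriver.Chrome(options=options)
    try:
        # Search Google for the keyword and open one result at random.
        driver.get("https://www.google.com")
        box = driver.find_element(By.NAME, "q")
        box.send_keys(keyword)
        box.send_keys(Keys.RETURN)
        time.sleep(random.uniform(2, 5))
        results = driver.find_elements(By.CSS_SELECTOR, "h3")
        if results:
            random.choice(results).click()
            time.sleep(random.uniform(3, 8))
        # Continue into the target URLs with scrolling and random clicks.
        scroll_and_open_multiple_articles(driver, urls)
    finally:
        driver.quit()
```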
7. start_browsing:
- This function is triggered by the GUI button and initiates browsing with the parameters provided in the GUI.
- Retrieves the URLs, proxies, and keyword from the user inputs, then starts multiple threads to handle browsing sessions across different proxies.
- Each thread runs the open_browser_with_proxy function for one proxy at a time until all proxies are used.
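The GUI callback might look roughly like the following; the widget names (keyword_entry, url_text, proxy_text, threads_entry) and the browse_worker helper are placeholders for whatever the real script defines (the worker itself is sketched under the threading section below):

```python
import queue
import threading

stop_threads = False         # global flag toggled by stop_browsing
proxy_queue = queue.Queue()  # shared work queue drained by the worker threads


def start_browsing():
    """Read the GUI inputs and launch the requested number of worker threads."""
    global stop_threads
    stop_threads = False

    keyword = keyword_entry.get().strip()
    urls = [u for u in url_text.get("1.0", "end").split() if u]
    for proxy in proxy_text.get("1.0", "end").split():
        proxy_queue.put(proxy)

    user_agents = load_user_agents()
    for _ in range(int(threads_entry.get())):
        threading.Thread(
            target=browse_worker, args=(keyword, urls, user_agents), daemon=True
        ).start()
```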
8. stop_browsing:
- Provides a way to stop all active browsing threads by setting the global flag stop_threads to True.
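The stop handler itself can be as small as flipping that flag; running workers then exit at their next check:

```python
def stop_browsing():
    """Ask all worker threads to finish their current task and exit."""
    global stop_threads
    stop_threads = True
```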
9. load_user_agents:
- Loads user agents from a text file located on the user’s system. Each line in the file represents a different user agent, enabling the automation to simulate requests from various browsers/devices.
- If the file is empty or not found, it raises an error.
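A plausible implementation, with the file path parameterized here only as a placeholder (the original reads from a fixed location on the user's system):

```python
def load_user_agents(path="user_agents.txt"):
    """Read one user-agent string per line; raise if the file is missing or empty."""
    try:
        with open(path, "r", encoding="utf-8") as f:
            agents = [line.strip() for line in f if line.strip()]
    except FileNotFoundError:
        raise FileNotFoundError(f"User-agent file not found: {path}")
    if not agents:
        raise ValueError(f"User-agent file is empty: {path}")
    return agents
```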
GUI (Tkinter):
- The GUI is built using Tkinter and includes the following elements:
- Text boxes for entering keywords, URLs, and proxies.
- Radio buttons for selecting the traffic type (Social or Organic).
- Entries for setting the number of threads and minimum/maximum wait times.
- Buttons to start and stop browsing.
- A ScrolledText widget for inputting multiple proxies.
- The GUI labels and widgets are styled with black backgrounds and cyan/white text.
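A trimmed-down sketch of how the widgets could be laid out and styled (labels, geometry, and variable names are assumptions, and the wait-time entries are omitted for brevity); the buttons wire up to the start_browsing and stop_browsing sketches above:

```python
import tkinter as tk
from tkinter import scrolledtext

root = tk.Tk()
root.title("Proxy Traffic Bot")  # placeholder title
root.configure(bg="black")

tk.Label(root, text="Keyword:", bg="black", fg="cyan").grid(row=0, column=0, sticky="w")
keyword_entry = tk.Entry(root, bg="black", fg="white", insertbackground="white")
keyword_entry.grid(row=0, column=1)

tk.Label(root, text="URLs (one per line):", bg="black", fg="cyan").grid(row=1, column=0, sticky="nw")
url_text = scrolledtext.ScrolledText(root, width=40, height=5, bg="black", fg="white")
url_text.grid(row=1, column=1)

tk.Label(root, text="Proxies (one per line):", bg="black", fg="cyan").grid(row=2, column=0, sticky="nw")
proxy_text = scrolledtext.ScrolledText(root, width=40, height=5, bg="black", fg="white")
proxy_text.grid(row=2, column=1)

traffic_type = tk.StringVar(value="organic")
tk.Radiobutton(root, text="Organic", variable=traffic_type, value="organic",
               bg="black", fg="white", selectcolor="black").grid(row=3, column=0)
tk.Radiobutton(root, text="Social", variable=traffic_type, value="social",
               bg="black", fg="white", selectcolor="black").grid(row=3, column=1)

tk.Label(root, text="Threads:", bg="black", fg="cyan").grid(row=4, column=0, sticky="w")
threads_entry = tk.Entry(root, bg="black", fg="white", insertbackground="white")
threads_entry.insert(0, "1")
threads_entry.grid(row=4, column=1)

tk.Button(root, text="Start", command=start_browsing, bg="black", fg="cyan").grid(row=5, column=0)
tk.Button(root, text="Stop", command=stop_browsing, bg="black", fg="cyan").grid(row=5, column=1)

root.mainloop()
```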
Usage of Threading:
- The script allows browsing to be run in parallel threads, enabling multiple proxies to be processed simultaneously.
- This is controlled by a thread queue, with each thread picking a proxy and executing the browsing tasks on it.
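Continuing the start_browsing sketch above, each worker could drain the shared proxy queue until it is empty or the stop flag is set; browse_worker is a hypothetical name:

```python
import queue
import random


def browse_worker(keyword, urls, user_agents):
    """Pull proxies off the shared queue until it is empty or stop_threads is set."""
    while not stop_threads:
        try:
            proxy = proxy_queue.get_nowait()
        except queue.Empty:
            return  # no proxies left for this worker
        open_browser_with_proxy(proxy, random.choice(user_agents), keyword, urls)
        proxy_queue.task_done()
```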
This script is an advanced browser automation tool that supports customization and parallelized browsing through proxies, making it well suited to scenarios that require simulating organic or social traffic.