How to Use MetaProducts Offline Browser for Offline Website Access

1. Install and open the app

  • Download MetaProducts Offline Browser from the official site and run the installer.
  • Launch the program.

2. Create a new project

  1. Click New Project (or File → New).
  2. Enter a project name and the starting URL of the website or page you want to save.
  3. Choose a local folder where the site will be stored.

3. Configure download scope

  • Depth / Levels: Set how many link levels to follow (1 = only the starting page; 2 = the starting page plus the pages it links to, and so on).
  • Include/Exclude: Add URL filters to include specific paths or exclude ads, external domains, or query strings.
  • File types: Select which file types to download (HTML, images, CSS, JavaScript, PDF, video, etc.).
  • Robots and auth: Optionally obey robots.txt and supply HTTP authentication if needed.
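
The scope settings above amount to a depth-limited crawl with include/exclude URL filters. As a rough illustration of how such settings interact (this is a conceptual sketch, not MetaProducts' actual implementation), here is a small Python example that walks a made-up in-memory link graph:

```python
from collections import deque
from fnmatch import fnmatch

def allowed(url, include, exclude):
    """Apply glob-style include/exclude filters; an exclude match always wins."""
    if any(fnmatch(url, pat) for pat in exclude):
        return False
    return any(fnmatch(url, pat) for pat in include)

def crawl(start, links, depth, include=("*",), exclude=()):
    """Breadth-first walk of `links` (url -> list of urls), stopping after
    `depth` link levels; level 1 = only the start page, as in the Depth setting."""
    seen = {start}
    frontier = deque([(start, 1)])
    order = []
    while frontier:
        url, level = frontier.popleft()
        order.append(url)
        if level >= depth:
            continue
        for nxt in links.get(url, []):
            if nxt not in seen and allowed(nxt, include, exclude):
                seen.add(nxt)
                frontier.append((nxt, level + 1))
    return order

# Hypothetical site: one internal link and one ad link off the start page.
site = {
    "https://example.com/": ["https://example.com/docs/",
                             "https://ads.example.net/banner"],
    "https://example.com/docs/": ["https://example.com/docs/page2"],
}
pages = crawl("https://example.com/", site, depth=2,
              include=("https://example.com/*",),
              exclude=("*ads*",))
# The ad URL is filtered out, and depth=2 stops before docs/page2.
```

Note that in this sketch exclude filters take precedence over include filters; check the program's filter dialog for its actual precedence rules.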

4. Set bandwidth and connection options

  • Limit simultaneous connections and download speed to avoid server overload.
  • Configure retry counts and timeouts for unstable sites.
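
Retry-with-timeout logic is worth understanding when tuning these values. The following Python sketch (using a simulated flaky fetch function, not a real network call) shows the behavior a retry count controls:

```python
import time

def fetch_with_retries(fetch, url, retries=3, timeout=10.0, backoff=0.0):
    """Call `fetch(url, timeout)` up to `retries` times, sleeping `backoff`
    seconds between attempts; re-raise the last error if every attempt fails."""
    last_err = None
    for attempt in range(retries):
        try:
            return fetch(url, timeout)
        except OSError as err:
            last_err = err
            if backoff:
                time.sleep(backoff)
    raise last_err

# Simulated unstable server: fails twice, then succeeds on the third try.
attempts = []
def flaky(url, timeout):
    attempts.append(url)
    if len(attempts) < 3:
        raise OSError("connection reset")
    return "<html>ok</html>"

body = fetch_with_retries(flaky, "https://example.com/", retries=3)
```

A nonzero `backoff` between attempts is gentler on a struggling server than immediate retries, which is the same reason to cap simultaneous connections.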

5. Adjust link conversion and offline navigation

  • Enable automatic link rewriting so internal links point to the local copies.
  • Choose whether to mirror the site structure or store all files in a single folder.
  • Enable rewriting for CSS/JS references if necessary.
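
Link rewriting means replacing absolute internal URLs with paths into the local copy while leaving external links untouched. A minimal Python sketch of the idea (MetaProducts does this automatically when link rewriting is enabled; the site URL below is a placeholder):

```python
import re

SITE = "https://example.com"

def rewrite_links(html, site=SITE):
    """Rewrite absolute internal href/src URLs to relative local paths;
    external URLs are left as-is."""
    def repl(match):
        attr, url = match.group(1), match.group(2)
        if url.startswith(site):
            local = url[len(site):].lstrip("/") or "index.html"
            return f'{attr}="{local}"'
        return match.group(0)
    return re.sub(r'(href|src)="([^"]+)"', repl, html)

page = ('<a href="https://example.com/docs/page2.html">next</a> '
        '<a href="https://other.org/">out</a>')
local = rewrite_links(page)
# The internal link becomes "docs/page2.html"; the external one is untouched.
```

Real link rewriters also handle CSS `url(...)` references and relative paths, which is why the CSS/JS rewriting option above exists as a separate setting.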

6. Start the download and monitor progress

  • Click Start or Download.
  • Monitor the log for errors (missing resources, blocked requests).
  • Pause/resume as needed.

7. Test the offline copy

  • Open the saved index.html in your browser and click through pages to verify links, images, and styles load correctly.
  • Re-run with adjusted filters if resources are missing.
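
One quick way to spot missing resources before clicking through every page is to scan the saved HTML for references that still point at the network. A small Python sketch of that check (the sample markup is hypothetical):

```python
import re

def find_online_refs(html):
    """Return href/src values that still point at the network rather than
    local files; these would fail or silently fetch online when offline."""
    return [url for _, url in re.findall(r'(href|src)="([^"]+)"', html)
            if url.startswith(("http://", "https://"))]

saved = ('<img src="images/logo.png"> '
         '<script src="https://cdn.example.com/app.js"></script>')
leftover = find_online_refs(saved)
# leftover lists the CDN script that was not captured locally.
```

A leftover absolute URL usually means the resource was excluded by a filter or lives on a domain outside the download scope.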

8. Update or refresh content

  • Use the project’s Update or Rescan feature to fetch new or changed pages without re-downloading unchanged files.
  • Schedule automatic updates if supported.

9. Advanced tips

  • Use user-agent spoofing if the site serves different content to bots.
  • Save dynamic content by exporting rendered pages (if supported) or by capturing pages with a headless browser.
  • For large sites, download in segments by restricting paths or subdomains.
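
User-agent spoofing simply means sending a desktop-browser identification string instead of a downloader's default. As an illustration with Python's standard library (the UA string below is an example; any current browser UA works), no request is actually sent here:

```python
from urllib.request import Request

# Example desktop-browser User-Agent string (assumption: any current one works).
DESKTOP_UA = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
              "AppleWebKit/537.36 (KHTML, like Gecko) "
              "Chrome/120.0 Safari/537.36")

req = Request("https://example.com/", headers={"User-Agent": DESKTOP_UA})
# urllib normalizes header names to "User-agent" internally:
ua = req.get_header("User-agent")
```

In MetaProducts Offline Browser, look for a user-agent or browser-identification option in the project or program settings rather than writing code; the sketch only shows what the setting changes on the wire.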

10. Legal and ethical considerations

  • Respect copyright and site terms of use. Avoid scraping private or restricted content and heed server load limits.
