This isn't really a bug on GP's side (or really a bug at all), but is there any way to make it so that GP can connect and read web pages like Amazon without being thrown a 403 error?
And what are the reasons behind this? Is it because of a header GP sends (or doesn't send), or just because Amazon assumes it's a robot somehow?
Anyway, this would be an amazing feature, especially for a project I'm working on: a web-scraping library for GP. :)
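For what it's worth, the usual cause of a 403 here is that the client identifies itself as a script (or sends no User-Agent at all), and sites like Amazon reject that as bot traffic. A minimal sketch in Python (not GP) of the common workaround, sending a browser-style User-Agent header; the URL and UA string below are just examples:

```python
import urllib.request

# Example only: many sites return 403 to clients whose User-Agent
# looks like a script (or is missing), treating them as bots.
URL = "https://www.amazon.com/"

def make_request(url):
    # Send a browser-style User-Agent so the server treats the
    # request like a normal browser visit instead of a bot.
    return urllib.request.Request(
        url,
        headers={"User-Agent": "Mozilla/5.0 (X11; Linux x86_64)"},
    )

req = make_request(URL)
# urllib.request.urlopen(req) would then fetch the page; whether that
# actually avoids the 403 depends on the site's bot detection.
```

If GP's HTTP primitive lets you set request headers, the same idea should apply there; if it doesn't, that header support might be the feature to ask for.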