Kind of a Bug about the "http host" Block
Posted: Sep 16th, '18, 18:12
This isn't really a bug on GP's side (and not really a bug at all), but is there any way to make it so that GP can connect and read web pages like Amazon without being thrown a 403 error?
And what are the reasons behind this? Is it because of a header GP sends (or doesn't send), or just because Amazon assumes it's a robot somehow?
Anyway, this would be an amazing feature, especially for a project I'm working on with making a web scraping library for GP. :)
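For context on why the 403 happens: many large sites reject requests whose User-Agent header doesn't look like a web browser, which is a common form of bot filtering. A minimal sketch of the idea in Python (not GP block code; the URL and the User-Agent string here are just illustrative assumptions, and sites like Amazon may still block non-browser clients by other means):

```python
import urllib.request

# Build a request that carries a browser-like User-Agent header.
# Servers that 403 requests with a default/non-browser User-Agent
# will often accept one that looks like this instead.
url = "https://www.amazon.com/"  # illustrative target
req = urllib.request.Request(
    url,
    headers={"User-Agent": "Mozilla/5.0 (X11; Linux x86_64)"},
)

# The request object now carries the custom header; calling
# urllib.request.urlopen(req) would send it along with the GET.
print(req.get_header("User-agent"))  # → Mozilla/5.0 (X11; Linux x86_64)
```

If GP's "http host" block exposed a way to set request headers like this, a scraping library could pass a browser-like User-Agent and avoid at least the simplest 403 responses.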