Kind of a Bug about the "http host" Block

This isn't really a bug on GP's side (or really a bug at all), but is there any way to make it so that GP can connect to and read web pages like Amazon without being thrown a 403 error?
And what are the reasons behind this? Is it because GP sends a header (or something of the like), or just because Amazon assumes it's a robot somehow?
Anyway, this would be an amazing feature, especially for a project I'm working on: a web-scraping library for GP. :)
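To illustrate what I mean about headers, here's a rough sketch in Python of the kind of difference a single header can make. The URL is just a placeholder (swap in any page that rejects bare clients), and Amazon's bot detection presumably checks more than this one header:

```python
import urllib.request
import urllib.error

# Placeholder URL; substitute any page that rejects bare clients.
URL = "https://www.example.com/"

def status(url, headers=None):
    """Return the HTTP status code for a GET request."""
    req = urllib.request.Request(url, headers=headers or {})
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status
    except urllib.error.HTTPError as e:
        return e.code  # a 403 rejection shows up here

# Bare request: identifies itself as "Python-urllib/3.x",
# which bot-filtering sites often refuse.
print("no User-Agent:  ", status(URL))

# Same request, but with a browser-like User-Agent header.
print("with User-Agent:", status(URL, {"User-Agent": "Mozilla/5.0"}))
```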
Re: Kind of a Bug about the "http host" Block
I'm guessing Amazon is doing a redirect to HTTPS.
GP's HTTP function is uber-simple and only handles plain HTTP, not HTTPS. Doing HTTPS requires using encryption, managing certificates, and a bunch of other complex stuff that would be tedious to re-implement in GP. Furthermore, since security is involved, it would be better to use a well-tested library such as the "curl" library. That would be a great thing to add, if someone wants to, once the source code for the VM is released (probably this fall).
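To give a feel for what a curl-backed primitive could look like, here's a quick sketch in Python using the pycurl binding to libcurl. The URL and User-Agent values are just placeholders, and this is not GP's actual API; the point is how much the library takes care of (TLS, certificates, redirects):

```python
import pycurl
from io import BytesIO

# Placeholder target; any HTTPS page works.
URL = "https://www.example.com/"

buf = BytesIO()
c = pycurl.Curl()
c.setopt(pycurl.URL, URL)                  # HTTPS: encryption and certificates handled by libcurl
c.setopt(pycurl.FOLLOWLOCATION, True)      # follow HTTP -> HTTPS redirects automatically
c.setopt(pycurl.USERAGENT, "Mozilla/5.0")  # browser-like header, per the 403 discussion above
c.setopt(pycurl.WRITEDATA, buf)            # collect the response body
c.perform()
print("status:", c.getinfo(pycurl.RESPONSE_CODE))
print("bytes: ", len(buf.getvalue()))
c.close()
```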