Curl POST files and Squid
For releases we also package a PEAR package for each component. We have a channel server at http://components.ez.no that can be used to download each component separately, with dependency checking. As server back-end we use Chiara_PEAR_Server, which lets us upload each component's release through a web form. Now that more and more components are available, uploading them one by one is no fun anymore, especially since the Firefox developers thought it smart to force you to use the file dialog instead of letting you paste in the filename.
So not too long ago I wrote a script to upload all the .tgz PEAR packages through curl. That worked great for quite a few releases. Unfortunately, when I rolled our latest 2008.2beta1 release, this script refused to work. I investigated a bit and saw that curl posted only the headers of the request, and not the POST body. Annoyed by that, I tried older curl versions to see whether those would work, but no luck. I even tried PECL's HTTP package, only to find out that it uses curl to make requests as well.
Because all of that failed, I looked a tad closer at the headers that curl was posting, and found this "Expect: 100-continue" header, which can be used to test whether a web server will accept a request based on its headers before the body is sent. It turns out that Squid doesn't quite support this and simply rejects the request. As we use Squid to accelerate our site, we now have to create an SSH tunnel to the web server so that we can run curl against localhost with a port forward to the web server. Not fun, but it works.
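Another way around the problem is to tell curl not to send the Expect header at all: passing an empty "Expect:" header suppresses it. Below is a minimal sketch of what such an upload loop could look like; the upload URL and the "file" form-field name are hypothetical, so check the actual Chiara_PEAR_Server upload form for the real ones.

```shell
# upload_packages DIR URL -- POST every .tgz package in DIR to URL.
upload_packages() {
    dir=$1
    url=$2
    for package in "$dir"/*.tgz; do
        # When no .tgz files match, the glob stays literal; skip it.
        [ -e "$package" ] || continue
        # -F makes a multipart/form-data POST with the file attached;
        # -H 'Expect:' makes curl omit the Expect: 100-continue header
        # that Squid rejects.
        curl -H 'Expect:' -F "file=@$package" "$url"
    done
}

# Example invocation (hypothetical URL and field name):
# upload_packages . http://components.ez.no/upload
```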
Comments
You can turn this feature off using curl_setopt($curl, CURLOPT_HTTPHEADER, array('Expect:'));
"Expect: 100-continue" only exists since HTTP/1.1. Forcing the use of HTTP/1.0 should work too. For instance: curl_setopt($curl, CURLOPT_HTTP_VERSION, CURL_HTTP_VERSION_1_0);
I'm curious to see your little script. Is it open source? Would you publish it? Thanks
Life Line
I've finished reading Children of Memory, the third book in the series.
Another interesting take on forms of intelligent life.
A fourth one is going to get released later this year.
Updated a post_box, a beauty shop, and a restaurant; Confirmed 2 clothes shops, 2 pet shops, and a restaurant
I walked 5.9km in 1h40m39s
Updated a bicycle_parking
Updated 2 waste_baskets
I walked 7.9km in 1h37m12s
Created 3 waste_baskets; Updated 3 bus_stops, 2 benches, and 2 waste_baskets
I walked 8.1km in 1h25m53s
I walked 1.2km in 9m31s
I walked 9.4km in 1h39m05s
Merge branch 'xdebug_3_5'
Merged pull request #1071
Fixed issue #2411: Native Path Mapping is not applied to the initial …
Created 2 waste_baskets; Updated 3 waste_baskets, 2 benches, and 2 other objects; Deleted a waste_basket
I walked 7.9km in 1h45m36s
RE: https://phpc.social/@phpc_tv/116274041642323081
Now that phpc.tv and phpc.social are part of the same umbrella, I've upped my yearly contributions to their Open Collective: https://opencollective.com/phpcommunity/projects/phpc-social
Merge branch 'xdebug_3_5'
Merged pull request #1070
I walked 7.2km in 1h10m26s
Fixed issue #2405: Handle minimum path in .xdebug directory discovery
I've published a new blog post: "Human Creations", on the difference in content generation by LLMs, and the creation of text, art and code by humans.
You can find it at https://derickrethans.nl/human-creations.html or at @blog
I walked 7.8km in 1h38m32s
RE: https://phpc.social/@afilina/116274024588235234
It's good to see that more and more people are realising that the Web can be for good, without all the enshittification.
That's why I'm happy to see endeavours like phpc.tv springing up, and helping out where I can.
Taking back control of the Web for people, by people, without big tech making it all shit.
Created a waste_basket; Updated 5 crossings and a bicycle_parking
I walked 10.7km in 2h35m10s


Shortlink
This article has a short URL available: https://drck.me/cpfas-6jw