Azealia Banks is a 19-year-old rapper from Harlem, and '212' is her latest and greatest single, with a class video that seems hard to find on the net, so here it is.
[flowplayer src='videos/azealia_banks_212.mp4' splash='videos/azealia_banks_212_screenshot.png' width=640 height=360 autoplay=false]
The Irish ISP eircom blocked thepiratebay.org about two years ago, but it's pretty easy to bypass using a proxy. I was sick of slow proxies with loads of ads, though, so I uploaded my own.
Click below to use it:
eircom thepiratebay.org proxy
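For the curious, a single-file PHP proxy like this doesn't need much: the server fetches the page (the server isn't behind eircom's block) and rewrites the links so they come back through the proxy. Here's a minimal sketch, with my own function names rather than the actual script's:

```php
<?php
// Sketch of a pass-through proxy: fetch server-side, rewrite links.
// proxy_fetch() and proxy_rewrite() are illustrative names, not the real script.

function proxy_fetch(string $url): string {
    // The server does the fetch, so the ISP block never applies.
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
    $html = curl_exec($ch);
    curl_close($ch);
    return $html === false ? '' : $html;
}

function proxy_rewrite(string $html, string $proxyBase): string {
    // Point absolute links back through this proxy script.
    return str_replace('http://thepiratebay.org', $proxyBase, $html);
}

// echo proxy_rewrite(proxy_fetch('http://thepiratebay.org/'), '/proxy.php?u=');
```

A real version also needs to handle images, CSS and POSTs, but that's the core of it.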
I've booked loads of Ryanair flights, and every time they ask you for your name, address and mobile, and when you check in online, they ask for your passport number and date of birth. Why do you have to do this every time? Why can't you register an account on the Ryanair website and have all this information stored for you, so you can book flights with one click? It would make life easier for customers and provide loads more potential cash for Ryanair.
They could start a self-serve ad system like Facebook's and let advertisers target customers by nationality, destination and age. Right now they just have generic Hertz ads when you book your flight, which I doubt bring in many conversions. How much would a small car-hire firm in Knock make if it could target all German people over the age of 40 with a German-language ad right after they've booked their flight to Knock? And the more targeted and higher-converting the ads, the more money for Ryanair. And this is guaranteed correct info: nobody is going to put a fake nationality or date of birth on their online check-in.
Of course, all this data gathering would need some pretty good terms and conditions, but I'm sure those could be drawn up easily. Ryanair are leaving so much €€€ on the table; their IT department must be asleep at the wheel.
As I said in my last post, I've written many a scraper using PHP with cURL or fsockopen in my time, building automated tools and scraping data. I've tried many tools to help me sniff HTTP traffic so I could emulate it in PHP as quickly as possible. I started off using Wireshark (or Ethereal, as it was called at the time), which was complete overkill: it's mostly meant for network troubleshooting and grabs every TCP/UDP packet, which is information overload when all you want is the HTTP data. Then I used the LiveHTTPHeaders addon for Firefox, which was pretty limited. Then a Java program called Burp Suite, which was pretty powerful, but I ran into a problem trying to automate MySpace MyAds submissions, figuring out what HTTP requests the MyAds Flash file was sending over HTTPS. I ran the gamut of every proxy tool out there until I came across Charles Web Proxy.
It's basically the best out there. It sits as a proxy between the web and your browser, grabbing all data as it passes through. This usually causes problems with SSL, but Charles has its own SSL cert that you manually add to your browser, which lets you log HTTPS data with no warnings. It can grab Flash traffic too, since it works as a Windows system proxy, not just a browser one. It presents HTTP data in many different ways so you can understand what's going on quicker: a multipart form upload, for example, can be viewed as the raw HTTP data sent, just the headers, just the cookies, the text body, or all the form fields. I won't list all the features as they're on the site. If you're using any other tool for automation/scraping, you're wasting time.
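A side benefit: since Charles is just an HTTP proxy, you can point your own PHP/cURL scripts through it too and compare their requests against the browser's in the same session window. A quick sketch (8888 is Charles' default listening port; adjust if you've changed it):

```php
<?php
// Route a cURL request through Charles so it shows up in the session log.
$ch = curl_init('http://www.example.com/');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_PROXY, '127.0.0.1:8888'); // Charles' default port
// For HTTPS, install the Charles cert in your trust store, or
// (for quick local debugging only!) skip verification:
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
// $html = curl_exec($ch); // run with Charles open and watch the request appear
```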
In my many years of PHP/cURL use, I've hammered my head off my table countless times trying to debug scripts that weren't emulating the browser as they were supposed to. This was pretty hard without seeing the exact HTTP request header cURL sent each session, but it's possible from PHP 5.1.3 on.
Set the CURLINFO_HEADER_OUT option to true with curl_setopt(), then after the request call curl_getinfo() with CURLINFO_HEADER_OUT to retrieve the header that was actually sent.
$ch = curl_init("http://www.google.com");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLINFO_HEADER_OUT, true); // record the outgoing request header
$get = curl_exec($ch);
$sent_header = curl_getinfo($ch, CURLINFO_HEADER_OUT); // the exact header cURL sent
echo $sent_header;
I remember seeing this ridiculous interview with a band called Raygun on Graham Linehan's blog a few months ago. I went looking for it again and noticed Sony had forced most copies to be taken down! But I found this one on YouTube and decided to post it on my site for safekeeping. My favourite bit is 'they couldn't even find me a job in a record store'. LOL. Sums up this coddled generation.
This is more for my own reference than anything. Say you see a flog on the intertubes and want to rip it and stick it up with your own affiliate links. How do you do it quickly on Linux? I used to use wget but it sucked; httrack is much better.
httrack "http://www.techcrunch.com/" -N1 -O "/home/techcrunch_rip/public_html" +techcrunch.com/* +crunchgear.com/* -v
This will rip the TechCrunch homepage and stick it in the folder specified by -O. The URL filters that follow ensure it only downloads files from those domains. The -N1 option is the most important: it makes httrack put all the images and CSS in one directory instead of creating loads of subdirectories. Very handy.
Taken from the Oxegen forum. Considering going on the Saturday myself.
I was looking for an iPhone app a while ago to search Irish business phone numbers and couldn't find one, so I wrote one myself. While I was waiting for Apple to approve my app, a different phonebook app was released with a better user interface! BUT it just searches the Golden Pages website, so you need a net connection. I scraped the Golden Pages website and stuck the data in the app, so no net connection is needed, which is handy when you quickly need a number.
Check it out here, only €2.39 to buy.
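The scraping step above boils down to pulling name/number pairs out of the listing HTML. A rough sketch of the idea — the markup pattern here is invented for illustration, the real Golden Pages pages would need their own regex or DOM parsing:

```php
<?php
// Sketch of the scrape step: extract name/phone pairs from listing HTML.
// The <span class="name">/<span class="phone"> structure is made up for
// this example; it is not the actual Golden Pages markup.
function parse_listings(string $html): array {
    preg_match_all(
        '#<span class="name">(.*?)</span>\s*<span class="phone">(.*?)</span>#s',
        $html,
        $m,
        PREG_SET_ORDER
    );
    $listings = array();
    foreach ($m as $match) {
        $listings[] = array('name' => trim($match[1]), 'phone' => trim($match[2]));
    }
    return $listings;
}
// The results then get dumped to a local database file and bundled into
// the app, so lookups work with no net connection.
```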