How to record Youtube live streams on a Linux server

Coachella streams a load of their sets every year on Youtube and nobody seems to record them all. So I had to volunteer starting in 2015. Some rips you find are one-off screen recordings using some software like Camtasia, which nearly always drop frames and are kinda shit. You have to record directly from the Youtube source for the best quality. youtube-dl is great but is designed for downloading normal Youtube videos and I never got it working to record live. The only thing that worked for me was livestreamer, which was discontinued but was forked as streamlink. Here’s my streamlink command which worked for recording Coachella 2018 channel 1:
streamlink --http-no-ssl-verify "<youtube-live-url>" best -o "output.ts"

I had a problem with SSL certs so had to add a no-verify but you probably don’t need it. Run that command and streamlink should start recording. Ctrl+C to stop recording. You’re now left with a .ts transport stream file which Youtube broadcasts in to help with error correction over the Internet. You can play these .ts files in VLC and MPC but seeking in them is annoyingly slow and VLC can sometimes play the audio out of sync. Also, they don’t seem to have a table of contents or whatever it’s called, so they’re missing a duration. This is bad for me as I need to play the video files direct from the server and pick the start and stop times to cut the Coachella streams into individual sets. So I prefer to convert to MP4 using ffmpeg, with this command:
ffmpeg -i input.ts -acodec copy -bsf:a aac_adtstoasc -vcodec copy output.mp4
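If you end up with a pile of .ts recordings, a quick loop remuxes them all. This is just a sketch assuming the files sit in the current directory; the stream copy means it only takes as long as the disk I/O:

```shell
#!/bin/sh
# Remux every .ts in the current directory to .mp4 -- stream copy, no re-encode.
for f in *.ts; do
  [ -e "$f" ] || continue          # no .ts files: the glob stays literal, skip it
  ffmpeg -i "$f" -acodec copy -bsf:a aac_adtstoasc -vcodec copy "${f%.ts}.mp4"
done
```

The `${f%.ts}.mp4` expansion keeps the original name, so `coachella_ch1-1.ts` becomes `coachella_ch1-1.mp4`.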

If you’re recording a long Youtube live stream, like I am with the 9 hour Coachella streams, sometimes the stream can fail for whatever reason. Youtube seems to always break at 6 hours. You need a way for Streamlink to start back up if it fails so you don’t miss anything. The only solution I could come up with is a while-true loop around streamlink in a bash shell script.
#!/bin/bash
dir="sunday_ch1"
name="coachella_ch1"
i=1
while true; do
    while [[ -e "$dir/ts_segments/$name-$i.ts" ]]; do
        let i++
    done
    streamlink --http-no-ssl-verify "<youtube-live-url>" best -o "$dir/ts_segments/$name-$i.ts"
done

Note that when you run this bash script, you can’t just stop it with Ctrl+C; it’ll restart the streamlink command like it’s supposed to. You have to kill -9 it. Find the PID of the bash script with ps -ef and kill it. You’re left with an orphaned streamlink command which you have to kill -9 as well, but it won’t restart as the bash script was already killed. Make sure you understand how to stop the shell script before starting it!

The shell script will start with saving the .ts files to sunday_ch1/ts_segments/coachella_ch1-1.ts. The while -e line returns true if the file exists, and if so increments i by one to ensure you always write sequentially. So when the script restarts, it writes to coachella_ch1-2.ts and won’t overwrite coachella_ch1-1.ts and so on.
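That numbering logic can be pulled out into a little function to see what it does (directory and stream names from the example above):

```shell
#!/bin/sh
# Given a base directory and stream name, print the first free segment number.
# Mirrors the "while [[ -e ... ]]" check in the recording script.
next_index() {
  dir=$1; name=$2; i=1
  while [ -e "$dir/ts_segments/$name-$i.ts" ]; do
    i=$((i + 1))
  done
  echo "$i"
}

# Demo against a throwaway directory with two existing segments:
demo=$(mktemp -d)
mkdir -p "$demo/ts_segments"
touch "$demo/ts_segments/coachella_ch1-1.ts" "$demo/ts_segments/coachella_ch1-2.ts"
next_index "$demo" coachella_ch1    # prints 3
```

Because the scan always starts from the lowest unused number, a restarted recording can never clobber an earlier segment.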

I run my bash scripts in screen sessions so I can log out of the server and let it do its thing. So that’s my recording setup. Other stuff I do: cut the long streams into individual sets with ffmpeg, add those sets to my database, upload them to Mega or Google Drive using the APIs so I get the Mega/GDrive link into the database, and output the links into a formatted Reddit post. But that’s all outside the scope of this blog post.

Hope this helps someone! Leave a comment if you’ve any questions, don’t email me.

Ticketmaster Ticketfast Barcode Format

If you’ve ever bought tickets to a big sporting or music event, you’ve probably had to deal with Ticketmaster and their extortionate handling fees. They charge them even if you’re using the print-at-home eTicket format called TicketFast. These tickets have a 16 digit number and barcode that is scanned at the gate against a database of purchases. I assumed this barcode was some kind of proprietary format to make it harder to clone tickets if you knew a valid number, but it turns out it’s just an interleaved 2 of 5 barcode.

This doesn’t matter as long as customers keep their 16 digit number secret, but a simple search of eBay and other buy/sell sites often turns up photos of sellers’ printouts. As long as you can make out the number, you can create a barcode, paste it over another TicketFast and clone the ticket. Then you just need to get to the venue before the real ticketholder, and their ticket won’t scan at the entrance. Here are some images I found on eBay, DoneDeal and Gumtree coming up to Electric Picnic recently. It was sold out and demand was high. I presume showing these tickets is OK now seeing as the festival is over.


Lol at this one. They blacked out all information except the important stuff.

I used this fantastic Barcode PHP library to generate the new barcodes. I needed to make a few adjustments to the code to get the barcodes looking the same as the Ticketmaster ones, but I think they turned out well. Take the last image as an example. You can just about make out the number to be: 3458 4242 6099 3291
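One detail worth knowing: interleaved 2 of 5 encodes digits in pairs, so the input needs an even count of digits, and the spaces in the printed number are just formatting to strip before encoding. A quick sanity check in shell:

```shell
# Strip the formatting spaces and check the digit count is even (an ITF requirement).
num=$(echo "3458 4242 6099 3291" | tr -d ' ')
echo "$num"               # 3458424260993291
echo $(( ${#num} % 2 ))   # 0 -> even, so it's valid input for interleaved 2 of 5
```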

And here’s my generated barcodes in both horizontal and vertical format:


They scanned fine on my Android barcode reader, but I obviously didn’t test at the festival entrance with real stolen tickets as I’m not a scumbag and I had my own two real hardcopy tickets. I’m surprised nobody is stealing tickets like this, as you can always find these numbers on the auction sites. Or maybe they are on the sly. Has anyone ever been refused at an entrance with printed tickets? They probably just blamed their seller, but it could have been a randomer who got the number, printed out new tickets and got to the venue early.

I think Ticketmaster should educate buyers more about how important the number is and to always pixelate it when posting tickets online. Or they could have used a proprietary barcode format, which would make cloning much more difficult.

How to serve files with a simple and quick PHP BitTorrent tracker

I needed to share some raw video files with a friend lately, so I uploaded 17GB of data to my webserver and sent them the HTTP link. Problem was the video files were scattered over several directories and the directory structure was important, so the end user would have had to download each one individually unless they had some sort of download manager to grab them all. I could have given them anonymous FTP access, but the end user isn’t that technical so I had to keep it as simple as possible. I could have zipped up the files on the server, but that was taking ages for 17GB and it would be a disaster if the big ZIP file somehow got corrupted.

So I decided to make a torrent of the data. The great thing about torrents is nearly everybody knows how to use them no matter how technophobic they are as they all want to download Game of Thrones every Monday morning. I already have rtorrent and mktorrent on my Linux server so I already had a way to seed and a way to create the actual torrent files. I was just missing a tracker. I wanted the quickest and simplest option, and I found it in Bitstorm via an article on TorrentFreak.

You can download Bitstorm’s one file of PHP source here.

1) Save as ui.php and upload to your web server in a publicly accessible folder. So your URL should be something like http://www.example.com:80/ui.php.
Notice the port number of 80, the standard HTTP port. You need to specify this in your torrent files as a BitTorrent tracker can be on any port and won’t default to 80.
2) Change permissions so script has write access to /dev/shm/ to track peers. So chmod 0755 ui.php or something like that.
3) Create the torrent file of the folders you want to transfer with mktorrent and the announce URL of your ui.php file, e.g. mktorrent --announce=http://www.example.com:80/ui.php folder_of_files/
4) Start a rtorrent instance where your folder_of_files is and add your new torrent. It should do a hash check and start seeding. It’s best to open rtorrent in a screen so you can leave it running.
5) Send the torrent to whoever you want to download the files.

Damn, it feels weird using BitTorrent in legal ways!

How to disable Clickberry appearing on your Youtube videos

Like me, have you suddenly started seeing a ‘tag’ feature on your Youtube videos that, when clicked, shows ‘Share moment’ and ‘Share object’ options, with a little berry icon on your Youtube play bar? And when you click this berry icon, it brings you to the Clickberry site. Looks like this:



When Clicked

I thought the tag thing was a new feature of Youtube along with the Google+ comments, but it’s actually added by a Chrome extension gone rogue called ‘FVD Video Downloader‘. You might remember that extension asking for more permissions in the last few days. Just uninstall it and Clickberry will be gone from your Youtube.

How to use Overplay and other VPNs as a cURL proxy

UPDATE May 2015

When I posted this tutorial two years ago, I gradually got comments that the OpenVPN interface was set up successfully but curl traffic through the interface timed out. I tried to help but could only respond with the clichéd ‘works on my machine’, my machine being Debian Squeeze and kernel version 2.6. Recently I upgraded my server to Debian Jessie, kernel 3.14 and OpenVPN 2.3.4 and got the timeout problem that everyone was talking about. If I’d to guess, I’d say it’s some change in the kernel and routing tables, but I really have no idea. But fear not, a commenter from May 2014 called William Asssaad saved us all. We need to add a routing table for the VPN interface after it’s all set up. You can do it manually with the ip route add and ip rule add commands, but I prefer to do it in a shell script, which I got from this blog post.

Proxies are like hard drive space: you can never have enough. Or enough IPs, to be more accurate, as Facebook, Google and other services are getting better at flagging the IPs of popular HTTP/SOCKS5 proxies. So we need to find fresh proxies to use in our PHP/Python scripts. A great source is the proliferation of VPN services popping up as consumers worry more about their Internet privacy. Problem is, these are intended for use by end users on their desktops and not in server-side PHP scripts. So it’s a bit tricky to get them working with cURL, but fear not, I explain all in this post.

The magic of being able to use a VPN in cURL is the CURLOPT_INTERFACE option. This lets you set the network interface that cURL uses. You can’t use a VPN directly in cURL as cURL/PHP operates on a higher network level than the VPN protocol.

So we need to set up the VPN on a new interface. Note that you absolutely need root access to your server to create interfaces, so this guide is only useful for people with their own dedicated servers. People on shared $2/month servers are shit out of luck. You might get it working on a VPS, I’ve no idea. So to create an interface, you need to download and install OpenVPN if you don’t have it installed already. There’s loads of info online to help you do this, so figure it out and come back.

Next we need to get the configuration files we need from our VPN provider of choice. I use Overplay. You need an account with them and it costs something like $5 a month. Download the ZIP file of connection files and unzip on your server in a new directory. Also download the Overplay public key certificate and make sure it’s in the same directory:
curl --insecure -o <overplay-config-zip-url>
unzip -u
curl --insecure -o OverplayCert.crt <overplay-cert-url>

Now we have the connection files, which work fine if you run Linux as your desktop OS and just want to browse the web as described in this Overplay guide. But we DO NOT want to just start the VPNs as-is, as that will take over the main Internet connection and make your server inaccessible. I did this a few times and had to get my host to reboot my server.

So we need to edit the configuration files and add one command, route-nopull. This prevents Overplay from taking over the routing information. If you take one thing away from this blogpost, it’s the addition of the route-nopull option as it’s what lets you use these config files on your server.

May 2015: we need to change route-nopull to these 3 commands I got from this blogpost:
script-security 2
route-up /root/

route-up is the important bit; it runs the shell script at /root/ when the interface is created. Here’s the contents of the script:


#!/bin/sh
# OpenVPN exports $dev, $ifconfig_local, $ifconfig_remote and $route_vpn_gateway
# to route-up scripts, so they're available here.
echo "$dev : $ifconfig_local -> $ifconfig_remote gw: $route_vpn_gateway"

# Send traffic from the VPN's address out through its own routing table (20)
ip route add default via $route_vpn_gateway dev $dev table 20
ip rule add from $ifconfig_local table 20
ip rule add to $route_vpn_gateway table 20
ip route flush cache

exit 0

I also want to add my login as a file so I don’t have to type it every time. So create a new file in the same directory and name it ‘auth_overplay’ or whatever you want. Enter your username and password, separated by a newline. So if we’re taking the ‘Overplay – Ireland-1.conf’ file as our example, our config would now look like this. Our additions are in bold at the bottom:
dev tun
proto udp
remote 1443

resolv-retry infinite
ca OverplayCert.crt
verb 5
route-method exe
route-delay 2

tun-mtu 1500
tun-mtu-extra 32
mssfix 1450

script-security 2
route-up /root/
auth-user-pass auth_overplay
#daemon

Note we also added daemon at the end, but commented it out. You can uncomment this when you’ve got everything working and want to start the VPN as a daemon so you can use it without having to have the SSH window open.
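The auth_overplay file itself is trivial to create (the credentials here are obviously placeholders for your own):

```shell
#!/bin/sh
# Username on the first line, password on the second -- that's all OpenVPN wants.
printf '%s\n%s\n' 'myusername' 'mypassword' > auth_overplay
chmod 600 auth_overplay    # it's a credentials file, so lock it down
```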

Now start up the VPN with OpenVPN:
openvpn "Overplay - Ireland-1.conf"

If everything works, you should see output like this:
Fri Jul 26 21:05:13 2013 us=31236 OpenVPN 2.1.3 x86_64-pc-linux-gnu [SSL] [LZO2] [EPOLL] [PKCS11] [MH] [PF_INET6] [eurephia] built on Feb 21 2012
Fri Jul 26 21:05:13 2013 us=31325 WARNING: No server certificate verification method has been enabled. See for more info.
Fri Jul 26 21:05:13 2013 us=31331 NOTE: OpenVPN 2.1 requires '--script-security 2' or higher to call user-defined scripts or executables
Fri Jul 26 21:05:13 2013 us=31678 LZO compression initialized
Fri Jul 26 21:05:13 2013 us=31725 Control Channel MTU parms [ L:1574 D:138 EF:38 EB:0 ET:0 EL:0 ]
Fri Jul 26 21:05:13 2013 us=31748 Socket Buffers: R=[124928->131072] S=[124928->131072]
Fri Jul 26 21:05:13 2013 us=31766 Data Channel MTU parms [ L:1574 D:1450 EF:42 EB:135 ET:32 EL:0 AF:3/1 ]
Fri Jul 26 21:05:13 2013 us=31777 Local Options String: 'V4,dev-type tun,link-mtu 1574,tun-mtu 1532,proto UDPv4,comp-lzo,cipher BF-CBC,auth SHA1,keysize 128,key-method 2,tls-client'
Fri Jul 26 21:05:13 2013 us=31781 Expected Remote Options String: 'V4,dev-type tun,link-mtu 1574,tun-mtu 1532,proto UDPv4,comp-lzo,cipher BF-CBC,auth SHA1,keysize 128,key-method 2,tls-server'
Fri Jul 26 21:05:13 2013 us=31794 Local Options hash (VER=V4): 'd3a7571a'
Fri Jul 26 21:05:13 2013 us=31802 Expected Remote Options hash (VER=V4): '5b1533a2'
Fri Jul 26 21:05:13 2013 us=31811 UDPv4 link local: [undef]
Fri Jul 26 21:05:13 2013 us=31816 UDPv4 link remote: [AF_INET]
WRFri Jul 26 21:05:13 2013 us=37221 TLS: Initial packet from [AF_INET], sid=a552afa0 928c908a
WFri Jul 26 21:05:13 2013 us=37266 WARNING: this configuration may cache passwords in memory -- use the auth-nocache option to prevent this
Fri Jul 26 21:05:13 2013 us=72206 VERIFY OK: depth=0, /C=US/ST=IL/L=Chicago/O=OVERPLAY.NET_LLP/OU=SERVERS/CN=vpn1-us/
WRWRWRWRWWRRWWWWRRRRWRWRFri Jul 26 21:05:13 2013 us=472472 Data Channel Encrypt: Cipher 'BF-CBC' initialized with 128 bit key
Fri Jul 26 21:05:13 2013 us=472487 Data Channel Encrypt: Using 160 bit message hash 'SHA1' for HMAC authentication
Fri Jul 26 21:05:13 2013 us=472528 Data Channel Decrypt: Cipher 'BF-CBC' initialized with 128 bit key
Fri Jul 26 21:05:13 2013 us=472534 Data Channel Decrypt: Using 160 bit message hash 'SHA1' for HMAC authentication
WFri Jul 26 21:05:13 2013 us=472560 Control Channel: TLSv1, cipher TLSv1/SSLv3 DHE-RSA-AES256-SHA, 1024 bit RSA
Fri Jul 26 21:05:13 2013 us=472576 [vpn1-us] Peer Connection Initiated with [AF_INET]
Fri Jul 26 21:05:15 2013 us=598837 SENT CONTROL [vpn1-us]: 'PUSH_REQUEST' (status=1)
WRRWRWRFri Jul 26 21:05:15 2013 us=604235 PUSH: Received control message: 'PUSH_REPLY,redirect-gateway def1,dhcp-option DNS,route,topology net30,ping 10,ping-restart 120,ifconfig'
Fri Jul 26 21:05:15 2013 us=604257 Options error: option 'redirect-gateway' cannot be used in this context
Fri Jul 26 21:05:15 2013 us=604275 Options error: option 'route' cannot be used in this context
Fri Jul 26 21:05:15 2013 us=604293 OPTIONS IMPORT: timers and/or timeouts modified
Fri Jul 26 21:05:15 2013 us=604297 OPTIONS IMPORT: --ifconfig/up options modified
Fri Jul 26 21:05:15 2013 us=604301 OPTIONS IMPORT: --ip-win32 and/or --dhcp-option options modified
Fri Jul 26 21:05:15 2013 us=604472 TUN/TAP device tun0 opened
Fri Jul 26 21:05:15 2013 us=604485 TUN/TAP TX queue length set to 100
Fri Jul 26 21:05:15 2013 us=604507 /sbin/ifconfig tun0 pointopoint mtu 1500
WFri Jul 26 21:05:17 2013 us=671302 Initialization Sequence Completed

Now open a new SSH session as root and enter ifconfig to see the list of network interfaces. You should see your proxy listed with the interface name tun0 or something like that.

Some OverPlay servers don’t complete the TLS handshake for me but I’m thinking that was because they were old IPs. Overplay seems to change actual proxy servers a lot.

If you see the tun0 interface in ifconfig, then it worked, probably. Test it with cURL on the command line:
curl --interface tun0

Replace tun0 with whatever your interface is called. The site in that command is a simple one that outputs your IP and nothing else, and it’s been online for 3 years so hopefully it stays online. If everything works, it should output the IP of Overplay’s proxy and not your server IP.

To use the interface in your PHP scripts, you’d set it with something like this:
curl_setopt($curlh, CURLOPT_INTERFACE, "tun0");

I found from experience that PHP/cURL falls back to the standard network interface if something doesn’t work, so it grabs pages using your real IP! This can be a disaster if you’re doing dodgy stuff and don’t want to get banned from FB or Google. For this reason, I always grab that IP-check site in my scripts and make sure the result isn’t my server IP.

Well that’s about it. I also wrote a few PHP scripts that automatically download the Overplay config files, edit them, check what country the IP is and add them to a MySQL table. Then I’ve other PHP code that just queries the table grabbing a random proxy from the country I need. I’ll do a blogpost and share that code if anyone actually reads this one and comments. And also comment if you run into any trouble, I’m usually quick to answer.

iMessage is a handy way to show people you’ve the money for an iPhone

ryanair leaving money on the table with no customer registration

I’ve booked loads of Ryanair flights and every time they ask you for your name, address and mobile, and when you check in online, they ask for your passport number and date of birth. Why do you have to do this every time? Why can’t you register an account on the ryanair website and have all this information stored for you, so you can book flights with one click? It would make life easier for customers and provide loads more potential cash for Ryanair.

They could start a self-serve ad system like Facebook’s and let advertisers target customers by nationality, destination and age. Right now they just have generic Hertz ads when you book your flight, which I doubt bring in many conversions. How much would a small car hire firm in Knock make if they could target all German people over the age of 40 with a German-language ad right after they’ve booked their flight to Knock? And the more targeted and higher-converting the ads, the more money for Ryanair. And this is guaranteed correct info; nobody is going to put a fake nationality or date of birth on their online check-in.

Of course all this data gathering would need pretty good terms and conditions, but I’m sure that could be drawn up easily. Ryanair are leaving so much €€€ on the table, their IT department must be asleep at the wheel.

charles web proxy review

As I said in my last post, I’ve written many a scraper with php and curl or fsockopen in my time, building automated tools and scraping data. I’ve tried many tools to help me sniff HTTP traffic so I could emulate it in PHP as quickly as possible. I started off using Wireshark (or Ethereal as it was called at the time), which was complete overkill: it’s mostly used for network troubleshooting and grabs all TCP/UDP packets, which is information overload when all we want is HTTP data. Then I think I used the LiveHTTPheaders addon for Firefox, which was pretty limited. Then a Java program called Burpsuite, which was pretty powerful, but I ran into a problem trying to automate myspace myads submissions, trying to figure out what HTTP the myads flash file was sending over HTTPS. I ran the gamut of every proxy tool out there until I came across Charles Web Proxy.

It’s basically the best out there. It sits as a proxy between the web and your browser, grabbing all data as it comes in. This usually causes problems with SSL, but it has a custom SSL cert that you manually add to your browser that lets you log HTTPS data with no warnings. It can grab Flash traffic as it works as a system-wide Windows proxy, not just a browser one. It presents HTTP data in many different ways so you can understand what’s going on quicker. For example, a multipart form upload is presented as the raw HTTP data sent, just the headers, just the cookies, the text body and all the form fields. I won’t list all the features as they’re all listed on the site. If you’re using any other tool for automation/scraping, you’re wasting time.

php curl debugging – seeing the exact http request headers sent by curl

In my many years of php/curl use, I’ve hammered my head off my table countless times trying to debug scripts that weren’t emulating the browser like they were supposed to. This was pretty hard without seeing the exact HTTP request header sent by cURL each session, but this is possible since PHP 5.1.3.

Set the CURLINFO_HEADER_OUT option to true with curl_setopt before the request, then read the outgoing headers back afterwards with the curl_getinfo php function and the same CURLINFO_HEADER_OUT constant.

$ch = curl_init("");  // put the URL you're debugging here
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLINFO_HEADER_OUT, true);
$get = curl_exec($ch);
echo curl_getinfo($ch, CURLINFO_HEADER_OUT);  // the exact request headers cURL sent