Justin.tv desktop streaming in Linux

You should see the 'live' icon on your stream after you finish this tutorial

There are really no good tutorials for this that I could find, so here’s one on how to get justin.tv streaming up on ubuntu. This should work with other linux distros too. Before you start calculating the millions you’ll be making from ad revenue, let’s first get you a working stream. Then you can broadcast your sick rts skills or live coding sessions, as you see fit, and rake in the dough.

http://apiwiki.justin.tv/mediawiki/index.php/Linux_Broadcasting_API
is the main site for how to do this, but it isn’t very detailed and doesn’t go over desktop streaming.

We’re going to use ffmpeg to stream the desktop and capture our audio as well.

1. First you need a justin.tv account, so go create one.

2. Then go to this page (while logged in) and click the ‘show’ link to view your stream key. This is sort of like a password, so don’t give it out.

3. Install ffmpeg if you don’t have it (you probably do). If not, for debian/ubuntu it’s the following:

sudo apt-get install ffmpeg libavcodec-extra-52

4. Start the stream.

INRES="1920x1080" # input resolution
OUTRES="1024x576" # output resolution
FPS="20" # target FPS
QUAL="fast"  # one of the many ffmpeg presets
STREAM_KEY=live_231xxxxxxxxx # get your stream key as described above

Then run ffmpeg with

ffmpeg -f x11grab -s "$INRES" -r "$FPS" -i :0.0  -f alsa -ac 2 -i hw:0,0 -vol 4096 -vcodec libx264 -vpre "$QUAL" -s "$OUTRES"  -acodec libmp3lame -ab 128k -threads 0   -f flv "rtmp://live.justin.tv/app/$STREAM_KEY flashver=FMLE/3.0\20(compatible;\20FMSc/1.0)" 

I modified this command from a forum post that used the one below. My changes were to not use pulseaudio and to boost the volume to sixteen times the default (256 is the default, so adjust accordingly).
Some people had success with pulseaudio, but my ubuntu/wine config must not be set up to use it.
If audio fails, try this command instead.

ffmpeg -f x11grab -s "$INRES" -r "$FPS" -i :0.0  -f alsa -ac 2 -i hw:0,0 -vcodec libx264 -vpre "$QUAL" -s "$OUTRES"  -acodec libmp3lame -ab 96k -vol 4096 -ar 22050 -threads 0   -f flv "rtmp://live.justin.tv/app/$STREAM_KEY flashver=FMLE/3.0\20(compatible;\20FMSc/1.0)" 

The above command uses your audio output (what comes out of your speakers) as the audio to be streamed. To record from a mic, you can use pulse (I think it is installed by default):

ffmpeg -f x11grab -s "$INRES" -r "$FPS" -i :0.0  -f alsa -ac 2 -i pulse -vcodec libx264 -vpre "$QUAL" -s "$OUTRES"  -acodec libmp3lame -ab 96k -vol 4096 -ar 22050 -threads 0   -f flv "rtmp://live.justin.tv/app/$STREAM_KEY flashver=FMLE/3.0\20(compatible;\20FMSc/1.0)" 

This command should start your stream on justin.tv.
I am not sure why my audio comes out at 1/16th the level it should, forcing me to scale the volume up that much.
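For convenience, the variable block and the ffmpeg invocation above can be wrapped into one small script. This is just a sketch of the same command; the DRY_RUN switch and the run helper are my additions, so you can print the command and sanity-check the URL before going live.

```shell
#!/bin/sh
# stream.sh - sketch of the justin.tv desktop streaming command above.
INRES="1920x1080"  # input resolution (your desktop size)
OUTRES="1024x576"  # output resolution
FPS="20"           # target FPS
QUAL="fast"        # one of the many ffmpeg presets
STREAM_KEY="${STREAM_KEY:-live_231xxxxxxxxx}"  # your key, as described above
DRY_RUN="${DRY_RUN:-1}"  # defaults to printing the command; set to 0 to stream

URL="rtmp://live.justin.tv/app/$STREAM_KEY flashver=FMLE/3.0\20(compatible;\20FMSc/1.0)"

# Echo the command in dry-run mode, execute it otherwise.
run() {
    if [ "$DRY_RUN" = "1" ]; then echo "$@"; else "$@"; fi
}

run ffmpeg -f x11grab -s "$INRES" -r "$FPS" -i :0.0 \
    -f alsa -ac 2 -i hw:0,0 -vol 4096 \
    -vcodec libx264 -vpre "$QUAL" -s "$OUTRES" \
    -acodec libmp3lame -ab 128k -threads 0 \
    -f flv "$URL"
```

Run it once as-is to check the printed command, then run it again with DRY_RUN=0 and your real key to go live.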

Next tutorial will be about how to integrate a webcam into a desktop streaming setup.

Update: I installed the pulseaudio device/volume selector and then ran the second command, which uses pulse. While the stream is open I need to change from duplex stereo to analog output only, then change back. It’s weird that this fixes the filtering/low-volume issue, but it does get around it. I only needed to make the swap once and it all works great.

sudo apt-get install padevchooser

I have recently switched to youtube because justin.tv/twitch.tv doesn’t support saving or transferring videos to youtube if the content is not games. So while I may still use twitch for starcraft 2 and live streaming, I now use the following very similar command to record to mp4, then upload. I find I get much better framerates this way.
This command records the file to my Videos folder with a timestamp attached. It wouldn’t take much more work to automate the upload process.

ffmpeg -f x11grab -s "$INRES" -r "$FPS" -i :0.0  -f alsa -ac 2 -i pulse -vcodec libx264 -vpre "$QUAL" -s "$OUTRES"  -acodec libmp3lame -ab 96k -vol 4096 -ar 22050 -threads 0 "$HOME/Videos/JLPTN1/cast720-$(date +%Y_%m_%d_%k_%M).mp4"

For some reason I needed $HOME instead of ~ to get the filepath to work.
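The reason: tilde expansion only happens on an unquoted ~ at the start of a word, while $HOME is an ordinary variable and expands even inside double quotes. A quick demonstration:

```shell
# '~' inside double quotes is a literal character, not your home directory.
quoted_tilde="~/Videos"       # stays literally ~/Videos
expanded_home="$HOME/Videos"  # expands to /home/you/Videos (or similar)
echo "$quoted_tilde"   # prints: ~/Videos
echo "$expanded_home"
```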

Update 2: I recently installed FFmpeg 0.8.10, which uses libavcodec 53.8.0. This works for the mp4 capture, but fails silently with the rtmp output: nothing reaches justin.tv, even after I take out the no-longer-valid -vpre and flashver=FMLE/3.0\20(compatible;\20FMSc/1.0) options. (If you leave the flashver in, you get an error that says [rtmp @ 0x9be5fc0] Server error: Authentication Failed.)
My solution was to have two versions of ffmpeg: one compiled from source in /usr/local/bin/, and one using the safe libavcodec 52.72.2, which is the one apt-get gave me in /usr/bin/. Whenever I need justin.tv I use the older one.
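To avoid remembering which binary to use each time, a small helper can pick one based on where the output is going. The pick_ffmpeg function and both install paths are assumptions from my setup (the apt-get build in /usr/bin, the source build in /usr/local/bin); adjust to yours.

```shell
# Pick which ffmpeg build to use: the older apt-get build (libavcodec 52)
# for rtmp output to justin.tv, the newer source build for everything else.
pick_ffmpeg() {
    case "$1" in
        rtmp://*) echo /usr/bin/ffmpeg ;;        # old, rtmp-safe build
        *)        echo /usr/local/bin/ffmpeg ;;  # newer build for mp4 capture
    esac
}

pick_ffmpeg "rtmp://live.justin.tv/app/live_231xxxxxxxxx"  # prints /usr/bin/ffmpeg
```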

Linux Audio Conference 2011

I did a performance a few weeks ago at the Linux Audio Conference in Maynooth, Ireland. I had no idea that the cars run there like in Japan – on the left side. I guess this is the urban equivalent of Galapagos island evolutionary theory.

The performance was done as a 4-hour-long installation in front of the main concert hall. This was nice because it made the whole thing really informal and people would come up and talk. It’s probably the only performance where I got multiple requests for specific sorting sonifications (e.g. ‘Please do mergesort on a size 30 array with 5 discrete values next’). It had the usual techy-conference problem of the entire audience being male except for a few wives.

One of the nice things about LAC was that it was very low on expenses. This is something I have been thinking about a lot more since my gradual departure from the academic ivory tower while still doing some experimental music and living like a cheap bastard in Berlin. LAC was completely free to register, and my total transportation (inc. airfare), food, and hostel costs were less than 200 EUR. For comparison, I’ve been accepted to ICMC the last two years but decided it wasn’t worth going, as they generally charge around $300 or more just to register. I don’t quite get this, as they should be using university venues and funding to host. LAC is much smaller and they still manage to pull it off. Sometimes I wonder if ICMC does this specifically to keep out non-academics who won’t have institutions to fund them. ‘Pay to play’ is kind of a stupid model at any rate.

I was hoping that FFmpeg folks would be there, but they weren’t. I mean to work more on a seeking API for them over the next few months.
Instead, I met up with a friend from my Tokyo Denki University lab, Rennick Bell, who did a presentation on using haskell as an interface for livecoding with supercollider. Rennick also came to Berlin afterwards, which gave me a much-needed reason to visit the SuperCollider meeting at NK.

I also met some new folks working on interesting things.

Flávio Luiz Schiavoni and Giuliano Obici were fun Brazilians (apologies if this is a stereotype) who did a pretty nice installation called Concerto para Lanhouse:

Flavio has also created a system for networked music called medusa (pdf), which I am interested in using with someone for livestreaming performances.

I also met Daniel Mack there, who is working on an OS X version of pulseaudio. This sounds great because then I can finally pipe audio between my linux and mac boxes (I have four). Apparently JACK works on mac, but I have never gotten it to. I ran into him again on the bus ride home from the airport, which is how we found out we both live in Berlin, so maybe there will be some work we can do together, or I can help test.

Tim Blechmann of SuperCollider also presented a new system called ‘Supernova’ that uses parallelization in SC for better performance. I’m interested in that from a code standpoint. Apparently the work from Supernova is being, or is going to be, used in the boost libraries. Once we get the stable release of Audacity out I’m going to look at our on-demand engine, so it was nice to be able to talk to Tim about these ideas.

As far as my own musical work goes, I’ve been modifying lstn somewhat – now there is some 8-bit visualization that actually looks kind of cool. But for the most part I have been focusing on Audacity and contract work. The stable release seems, as always, 2-3 months away despite my crunching about 8 bugs last month. God damn those P2 bugs. Since I’m not doing summer of code this year I hope I can pound it out.

To keep contributing to the music community though, I think I am going to do some writeups of the environment (both musical and japanese) in Berlin next.

gg.

Workshop and Concert in the Netherlands

Last month I did a concert in Den Haag at Loos Foundation. This was part of a series organized by Marie Guileray. Last year I did a concert there as a part of the Wonderwerp series, and the energy there was amazing, giving me one of the best memories I have of a performance. (I am not usually as excited about performance as I am about making the tools and presenting them.) Last month was also very nice, with a good number of people with genuine interest in the sonification tools.
One guy was even nice enough to record the performance and put it up on youtube:

Something about Den Haag is very special when it comes to atmosphere in experimental music. The city is home to the Royal Conservatory, where a number of notable people have taught, including Louis Andriessen. The conservatory also contains the sonology department, which has been a home to Gottfried Michael Konig and a large number of students focusing on some type of computer music. The city itself is not that big, so the experimental music scene revolves around the students of the school. This makes for a very knowledgeable audience. The level of focus the audience has feels very high there. It’s interesting that such a thing can be different for two audiences that are silent and watching the performance, but it really feels this way to me. I’m not getting spiritual on you; I think there are subtle cues that give this away, such as the breathing patterns of the room. It’s also a very positive place, preferring synergy over competition, which is important in an educational environment.

Three weeks later I did a program sonification workshop in Amsterdam at STEIM. I wasn’t sure what to expect for turnout, but it sold out two weeks before the workshop date. The people who showed up had a wide range of backgrounds. I would say most did not know programming per se, which was great, because the sonification of program concepts is not really about code at all, but rather about behavior, something that should make sense to everyone who uses a computer. We did go into xcode a bit, to recompile programs to sonify themselves. Even for the people who didn’t know how to code, this wasn’t a problem, since the steps to do it using my libraries were quite simple. Also, because this was a workshop, and you’re supposed to learn things, everyone was open to the idea. To be honest there were a few bumps, but everyone came through in the end. I am thinking of polishing the workshop up a bit and giving it another shot.

SMB filesharing on mac (works with windows clients)

I have an old powerbook g4 (2004), which was my main computer until last year. Apple products are expensive but I think it was worth the money. Now I have a Core Duo (2006) macbook and a sweet homebrew core 2 quad pc whose main purpose in life is to fix audacity bugs on windows and make my actions per minute in starcraft higher. It is connected to my sound system, so I want to be able to hear my music library from it. Little did I know, this small desire would lead to an epic smb quest.

WIFI is broken on both my new(er) macbook and pc, so I use the old powerbook as a network hub, sharing internet from wifi via ethernet. I want to use it to share files between mac and pc, so I tried using the “Windows File Sharing” option in the Sharing panel, which just boots up an smb server.

However, it doesn’t work with PCs and apparently has crappy authentication. There are numerous posts about people trying to set the authentication to ntlmv1 by modifying smb.conf. I tried this, and many other similar things, to no avail. Then I came across one solution, which is to compile and install the latest smb.

It works now, after messing with the damn thing for a good 4 or 5 hours. Since there are no other obvious guides on the net for the few folks who want to do this on mac, here are the steps and problems I ran into. You’ll need basic knowledge of the terminal and will probably need the xcode developer tools installed.

1. I searched for “smb source download” and got version 3.2.6 or so.

The next steps are in the terminal.
2. Then I cd’d to the directory “source3” in the smb folder and typed

./configure --prefix=/usr && make && sudo make install

-note: there is also “source4”, but I believe this is an unstable branch.
-2nd note: also, don’t use --prefix=/usr/local/sbin like some guy in a forum post says he does, because this will end up creating an (additional) sbin and lib directory within /usr/local/sbin.
-note: this takes about 15 minutes to half an hour on a slow pc.

3. Now we need to edit the configuration file for smb.
In the terminal type:

sudo emacs /usr/lib/smb.conf

and change

passdb backend = opendirectorysam

(or whatever it is) to

passdb backend = smbpasswd

4. Above that line, add 2 new lines:

smb passwd file = /usr/lib/samba/smbpass.passwd
security = user

5. I also changed

guest account = unknown

to

guest account = youruseraccountname

(replace it with your user name of course)

6. Now save with control-x then control-s. Quit emacs with control-x control-c.
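Putting steps 3 through 5 together, the changed portion of /usr/lib/smb.conf ends up looking roughly like this (note samba spells the options ‘guest account’ and ‘smb passwd file’; replace the account name with your own):

```
[global]
   security = user
   passdb backend = smbpasswd
   smb passwd file = /usr/lib/samba/smbpass.passwd
   guest account = youruseraccountname
```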

7. Now it’s time to start the daemon. In the terminal, type

sudo smbd -D

which starts the smb daemon.

8. You now need to add the password (this can only be done when smb is running, apparently). Type

sudo smbpasswd -a yourmacusername

then enter whatever password you want (this can be different from your mac password).

9. Connect from your mac/pc. If your ip is 192.168.1.50, connect to smb://192.168.1.50/yourmacusername. On windows you can do this by right-clicking ‘My Computer’ and mapping a network drive. On mac it’s in the ‘go’ menu -> connect to server.

-note: if things don’t work out for you and you want to test and see error messages, use

sudo smbd --debuglevel=10 -i

If you do this, you’ll need to kill it before starting the daemon (and vice versa). You may also need the kill command if smbd hangs.

I have some additional steps because I wanted to share an external USB drive, which is not possible using symbolic links or mac aliases in your public folder.
To do this:
1. Edit smb.conf again and add the lines

[extusb]
comment = extusb
path = /Volumes/My Passport
browseable = yes
read only = no

Replace “My Passport” with the name of your drive.
To access it just go to smb://192.168.1.50/extusb (if your ip is 192.168.1.50) and use the same password you provided above.

Then everything pretty much works. The latency is pretty crappy, so I had to adjust the VLC file cache size to play audio/video well. It’s probably not too hard to configure smb to handle this, but I’m feeling smbd’d out right now. If someone figures it out though, let me know.

Some last notes:
My smb server was a 1.5 GHz g4 on 10.4.11.
My smb clients were a macbook core2duo 2.0 GHz on 10.6.4 and a 2.4 GHz Core 2 Quad on Vista x64.
I read somewhere on a forum that the smb/opendirectory that comes with your mac is a modified version and not compatible with the newest public smb downloads.
I also kinda sorta know that the proper way to handle daemons is with launchctl, but I don’t know how to set it up, and I never restart this mac, so I haven’t looked into it.

Listen to your computer: lstn

Lstn is another project I’ve been working on with the Institute for Algorhythmics.
It is a program that sonifies real-time debugging data from other programs. If you were interested in the FuckingAudacity or FuckingWebBrowser demos, you’ll probably like this one. Those were sonification from the inside out; lstn is about external sonification:

We sonify opcodes, the callstack, and active memory. The sonification of active memory is direct, but sonifying opcode and callstack info is trickier to do ‘mostly honestly,’ and we’re still trying to figure it out. The project is still in the alpha stage.
You can use lstn to attach to pretty much any program on the mac besides itself.

You can also use lstn to do old-school sonification of chunks of memory (as opposed to jumping around wherever the opcodes go). The sonification of memory this way has a surprising amount of periodicity in it, creating both tones and even things like crescendi and decrescendi.

There is built-in OSC support, but we’re not sure yet what data to send, as my impression is that OSC might choke if you send too much stuff at irregular intervals.

I’m hoping to work more on the display to draw a visual map of the memory and program counter and also some tree-like structure for visualizing the callstack.

Another feature to note is the speed control which reduces the speed of the program being attached to. This is important because our ears won’t be able to catch things that happen at rates higher than the sample rate. I hope as the sonification methods become more meaningful that this feature will have more value.

This project started last February, but has only gotten this far because I keep changing OSes and computers. The program needs different source for each of these combinations, since the opcodes and opcode formats for i386, x86_64 and ppc are different. As of right now it works best on ppc and pretty well on i386 (core duo). On ppc you can simultaneously attach to many different processes, but I haven’t gotten that to work on core duo yet.

This project has taught me more about cpu internals and program structure than any other program (or class, for that matter) has. So if you’re a programmer and this kind of stuff tickles your whatever, I suggest taking a look at the source and asking me about it. Otherwise just hold on till we get a real release.