Code is hard to remember.

This is where I put interesting bits of source so I can find them again. Maybe they will be useful to you. Don’t expect consistency, documentation, bike messengers, or aspiring actors.

Getting into Graduate School (Engineering / CS)

These are my personal opinions. Other people will say other things.



You must apply for fellowships to be competitive. It doesn’t actually matter much whether you win one, because you find out after you are admitted to the schools. But you need to be able to say on your application that you applied (they specifically ask). If the school thinks you might win one, then they will be more likely to admit you because they won’t have to pay for you. The NSF deadline is earlier than you think. It is usually late October or early November… which means you need to ask for letters of recommendation in late September or early October.

You should at least apply to:

National Science Foundation Graduate Research Fellowship Program (NSF GRFP)
National Defense Science and Engineering Graduate (NDSEG) Fellowship

and probably to:

DOE Office of Science Graduate Fellowship Program
DOE Computational Science Graduate Fellowship Program

Thoughts:

Think about qualification exams. Many departments admit more people than they expect to pass quals and will force them to leave after a master’s. Departments vary a lot — MIT MechE and Aero/Astro have much more difficult quals than EECS but also admit more people. Ask the other grad students what quals are like when you visit — they will definitely be able to tell you.

The GRE is a huge pain, but in engineering you mostly need good scores on the math section. English doesn’t appear to matter very much.

In the end, there are really two things that get you into grad school. One is good grades, publication(s), and fantastic reference letters. The other is talking to professors. The professors/admissions committees are looking at people without enough information. If a professor has a choice between a great candidate on paper and a great candidate he or she has talked to, that’s a huge difference. I fully believe I got accepted where I did because the professors knew who I was, knew what I was interested in, and had met me. When my name came across, someone stood up and said, “this guy is good, I’ve talked to him.” So you ABSOLUTELY MUST email and talk to professors. This is really hard. The way to start is to read their papers and write to them with a question about their work. It needs to be an actual good question. Then you can have a conversation with them, which is key. Saying, “I’m interested in your lab” isn’t going to work. Too many people do that. This is really hard, but it is what will get you into a top program.

When I was doing this as an undergraduate, it took me 4 to 6 hours per paper to read it, understand it, and come up with a question.

One thing that I found worthwhile was to read some of the sites out there on getting into grad school. I liked:


Weird resolution video to Mac

avconv -i 2015-10-08.10-3dvisualization.avi -b 100000k -vcodec qtrle output.mov


Convert 120fps MP4 to 30fps AVI

avconv -i autonomous-takeoff-from-launcher.mp4 -vf fps=fps=30 -b 100000k autonomous-takeoff-from-launcher.avi


AviSynth + VirtualDub + Linux + GoPro Hero 4 Black 120fps video

Editing 120 fps GoPro Hero 4 Black 1080p video without video conversion.

  1. Install AviSynth and VirtualDub for Linux
  2. Make sure you are using a recent version of VirtualDub (>= 1.10).
  3. Install the FFMpegSource plugin by downloading it (version 2.20-icl only) and placing all of its files from:

    in your

    ~/.wine/drive_c/Program Files (x86)/AviSynth 2.5/plugins


  4. Finally, open your MP4 file in your .avs:
    a = FFAudioSource("GOPR0002.MP4")
    v = FFVideoSource("GOPR0002.MP4")
    v = AudioDub(v, a)
  5. (Optional, allows VirtualDub to open MP4 files directly)
    Download FFInputDriver and unpack it into
    • Note: this is important because the AviSynth plugin seems to fail when loading huge files. Use this to open your source in VirtualDub and then trim to the relevant part.
  6. I’ve also been using the avisynth GUI proxy with wine along with Avidemux (in apt-get as avidemux) to improve load times on Linux.

    • File > Connect to avsproxy in Avidemux


view images in order using feh

feh `ls -v *.png`
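The -v flag makes ls sort in version (natural) order, so frame2.png comes before frame10.png; a plain glob sorts lexicographically and interleaves them. The same idea sketched in Python (filenames are hypothetical):

```python
import re

files = ["frame10.png", "frame2.png", "frame1.png"]

def natural_key(s):
    # split into digit and non-digit runs; compare digit runs numerically
    return [int(t) if t.isdigit() else t for t in re.split(r"(\d+)", s)]

lexicographic = sorted(files)             # frame1, frame10, frame2
natural = sorted(files, key=natural_key)  # frame1, frame2, frame10
```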


Fitting nonlinear (small aircraft) models from experimental data

My labmate Ani Majumdar wrote up some useful notes from fitting models for our experimental data (bolded text is mine). See also Table 3.1 from my thesis:

I made progress on this. The model seems quite good now (comparing simulations from matlab’s sysid toolbox against experimental flight trials, and looking at tracking performance in some preliminary experiments). Here are the things I tried in chronological order (and some lessons I learned along the way):

(1) Get parametric model from textbook (Aircraft Control and Simulation [Stevens], and Flight Dynamics [Stengel]), then do physical experiments on the plane to determine the parameters, and hope that F = ma.

The following parameters had to be measured/estimated:

– Physical dimensions (mass, moments of inertia, wing span/area, rudder/elevator dimensions, etc…)
– These are easy to just measure directly

– Relationship between servo command (0-255) and deflection angle of control surfaces
– This is simple to measure with a digital inclinometer/protractor (the reason this is not trivial is that the servo deflection gets transmitted through the wires to the actual control surface deflection, so you actually do have to measure it)

– Relationship between throttle command and airspeed over wings
– I measured this using a hot-wire anemometer placed above the wings for different values of throttle commands (0-255). The relationship looks roughly like airspeed = sqrt(c*throttle_command), which is consistent with theory.

– Relationship between throttle command and airspeed over elevator/rudder
– Same experiment as above (the actual airspeed is different though).

– Relationship between throttle command and thrust produced
– I put the plane on a low-friction track and used a digital force-meter (really a digital scale) to measure the thrust for different throttle commands. The plane pulls on the force-meter and you can read out the force values. This is a scary experiment because there’s the constant danger of the plane getting loose and flying off the track! You also have to account for static friction. You can either just look at the value of predicted thrust at 0 and just subtract this off, or you can also tilt the track to see when the plane starts to slide (this can be used to compute the force: m*g*sin(theta)). In my case, these were very close. The relationship was linear, but the thrust saturates at around a throttle command of 140.

– Aerodynamic parameters (e.g. lift/drag coefficients, damping terms, moment induced by angle derivatives).
– I could have put the plane in a wind-tunnel for some of these, but decided not to. I ended up using a flat plate model.
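Two of the parametric relationships above lend themselves to a quick numerical sketch: fitting c in airspeed = sqrt(c * throttle_command) by linearizing (square both sides), and the saturating thrust curve with the tilt-test friction correction. All values here are hypothetical, for illustration only:

```python
import numpy as np

# hypothetical anemometer data: (throttle command 0-255, airspeed in m/s)
throttle = np.array([40.0, 80.0, 120.0, 160.0, 200.0, 240.0])
airspeed = np.array([2.1, 2.9, 3.4, 4.0, 4.5, 4.9])

# airspeed = sqrt(c * throttle)  =>  airspeed^2 = c * throttle,
# so c comes from a one-parameter linear least-squares fit
c = np.dot(throttle, airspeed ** 2) / np.dot(throttle, throttle)
predicted = np.sqrt(c * throttle)

# thrust: linear in the throttle command, saturating around a command of 140;
# the slope here is made up for illustration
def thrust_model(cmd, slope=0.02, sat_cmd=140.0):
    return slope * np.minimum(cmd, sat_cmd)

# static friction correction from the tilt test: the plane starts to slide at
# angle theta, so friction = m * g * sin(theta), added back to the meter reading
friction = 0.6 * 9.81 * np.sin(np.radians(4.0))  # hypothetical mass and angle
```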

(2) Use matlab’s sysid toolbox to fit stuff

Approach (1) wasn’t giving me particularly good results. So, I tried just fitting everything with matlab’s sysid toolbox (prediction error minimization, pem.m). I collected a bunch of experimental flight trials with sinusoid-like inputs (open-loop, of course).

This didn’t work too well either.

(3) Account for delay.

Finally, I remembered that Tim and Andy noticed a delay of about 50 ms when they were doing their early prophang experiments (they tried to determine this with some physical experiments). So, I took the inputs from my experimental trials and shifted the control input tapes by around 50 ms (actually 57 ms).

When I used matlab’s sysid toolbox to fit parameters after shifting the control input commands to account for delay, the fits were extremely good!

I noticed this when I was fitting a linear model to do LQR for prophang, back in October 2011. The fits are not good if you don’t take delay into account (duh). Got to remember this the next time.
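The shift itself is just a matter of re-aligning the logged tapes before fitting. A minimal sketch, assuming a hypothetical log sample rate (the tapes here are stand-ins):

```python
import numpy as np

dt = 0.003                       # hypothetical log sample period (s)
delay = 0.057                    # measured actuation delay (s)
shift = int(round(delay / dt))   # 19 samples at this rate

u = np.sin(np.arange(1000) * 0.01)   # stand-in control input tape
x = np.cos(np.arange(1000) * 0.01)   # stand-in state tape

# pair input sample k with the state it actually influenced, sample k + shift,
# before handing the tapes to the fitting routine
u_aligned = u[:-shift]
x_aligned = x[shift:]
```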

Here is a summary of what worked, and how I would go about doing it if I had to do it again (which I probably will have to at some point on the new hardware):

(1) Do some physical experiments to determine the stuff that is easy to determine, and to get the parametric form of the dependencies (e.g. thrust is linear with the 0-255 prop command, and it saturates around 140).

(2) Use matlab’s sysid toolbox to fit parameters with bounds on the parameters after having accounted for delay. The bounds are important. If some of your outputs are not excited (e.g. yaw on our plane and pitch to some degree), pem will make your parameters aphysical (e.g. extremely large) in order to eke out a small amount of prediction performance. As an extreme example, let’s say you have a system:

xdot = f(x) + c1*u1 + c2*u2.

Let’s say all the control inputs you used for sysid had very small u2 (let’s say 0 for the sake of argument). Then, any value of c2 is consistent with the data, so you can just make c2 = 10^6 for example, which is physically meaningless. If u2 is small (but non-zero) and is overshadowed by the effect that u1 has, then some non-physical values of c2 could lead to very slight improvements in prediction error and could be preferred over physical values.

So, bounding parameters to keep them physically meaningful is a good idea in my experience. Of course, ideally you would just collect enough data that this is not a problem, but this can be hard (especially for us since the experimental arena is quite confined).
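A quick numerical illustration of the unexcited-input point, with made-up data: when u2 is identically zero in the trials, any value of c2 leaves the prediction error unchanged, so only a bound (or better data) keeps it physical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
u1 = rng.normal(size=n)
u2 = np.zeros(n)  # this input channel was never excited in the trials
xdot = 2.0 * u1 + rng.normal(scale=0.01, size=n)  # true c1 = 2; c2 unknowable

def residual(c1, c2):
    # sum of squared prediction errors for the model xdot = c1*u1 + c2*u2
    return np.sum((xdot - c1 * u1 - c2 * u2) ** 2)

# any c2, however absurd, fits the data exactly as well as c2 = 0
```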

Another good thing to do is to make sure you can overfit with the model you have. This sounds stupid, but is actually really important. If the parametric model you have is incorrect (or if you didn’t account for delay), then your model is incapable of explaining even small amounts of data. So, as a sanity check, take just a small piece of data and see if you can fit parameters to explain that piece of data perfectly. If you can’t, something is wrong. I was seeing this before I accounted for delay, and it tipped me off that I was missing something fundamental. (It took me a little while longer to realize that it was delay.) I also tried this on the Acrobot on the first day I tried to do sysid with Elena’s model. Something was off again (in this case, it was a sign error – my fault, not hers).

(3) Finally, account for delay when you run the controller by predicting the state forwards in time to compute the control input. This works well for Joe, and seems to be working well for me so far.
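A sketch of that forward prediction, assuming a known delay and a toy first-order model (Euler integration; everything here is illustrative, not the actual controller):

```python
import math

def predict_forward(x, u_pending, f, delay, dt=0.001):
    """Integrate xdot = f(x, u) forward over the actuation delay (Euler),
    applying the input already in flight; compute the control from the result."""
    for _ in range(int(round(delay / dt))):
        x = x + dt * f(x, u_pending)
    return x

# toy first-order system xdot = -x + u with zero pending input:
x_pred = predict_forward(1.0, 0.0, lambda x, u: -x + u, delay=0.057)
# x_pred is close to exp(-0.057)
```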


Converting between AprilCal and OpenCV

I recently wanted to use AprilCal from the April Robotics Toolkit’s camera suite for camera calibration but to write my code in OpenCV. I got a bit stuck converting between formats, so Andrew Richardson helped me out.

1) Run AprilCal’s calibration.

2) Enter the GUI’s command line mode and export to a bunch of different model formats with :model-selection

3) Find the file for CaltechCalibration,kclength=3.config which orders the distortion parameters like this: radial_1, radial_2, tangential_1, tangential_2, radial_3.

4) Your OpenCV camera matrix is:

        [  fc[0]   0     cc[0]  ]
    M = [    0   fc[1]   cc[1]  ]
        [    0     0       1    ]

5) Your OpenCV distortion vector is the five parameters in the order given above:

    D = [ kc[0]  kc[1]  kc[2]  kc[3]  kc[4] ]
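For reference, assembling those OpenCV structures in numpy (the fc/cc/kc values here are hypothetical placeholders for whatever your exported config contains):

```python
import numpy as np

# hypothetical values from the exported CaltechCalibration config
fc = [600.0, 601.0]                    # focal lengths (px)
cc = [320.0, 240.0]                    # principal point (px)
kc = [-0.3, 0.1, 0.001, -0.002, 0.0]   # radial_1, radial_2, tangential_1, tangential_2, radial_3

M = np.array([[fc[0], 0.0,   cc[0]],
              [0.0,   fc[1], cc[1]],
              [0.0,   0.0,   1.0]])

# OpenCV expects distortion as (k1, k2, p1, p2, k3), matching the order above
D = np.array(kc)
```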


Mount multiple partitions in a disk (dd) image

sudo kpartx -av disk_image.img


Debugging cron

The way to log cronjobs:

* * * * * /home/abarry/mycommand 2>&1 | /usr/bin/logger -t "my command name"
... command runs ...
cat /var/log/syslog

Check for missing environment variables: compare the output of env run from cron with env run in your shell.


Force a CPU frequency on an odroid (running linaro 3.0.x kernel)

Update: This works on 3.8.x kernels too. I used MIN_SPEED and MAX_SPEED of 1704000 instead.

For the 3.8.x kernels —

CPU speed file:


Temperature file:


From here:

sudo apt-get install cpufrequtils

Create a file: /etc/default/cpufrequtils with these contents:


Note: If your CPU temperature hits 85°C, this will be overridden to force it down to 800 MHz. Check with this script:

Burn CPU:

while true; do
     sleep .5
     cpufreq-info | grep "current CPU"
     sudo cat /sys/devices/platform/tmu/temperature
done


OpenCV and saving grayscale (CV_8UC1) videos

OpenCV does some very odd things when saving grayscale videos. Specifically, it appears to convert them to RGB / BGR even if you are saving in a grayscale codec like Y800. This stack overflow post confirms it, as does opening the files in a hex editor.

The real issue is that this conversion is lossy. When saving a grayscale image with pixel values of less than 10, they are converted to 0! Yikes!

The only reasonable solution I have found is to save all my movies as directories full of PGM files (like PPM, but grayscale only).
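PGM (P5) is simple enough to emit by hand; a minimal sketch of the encoding, showing that low pixel values survive intact:

```python
def encode_pgm(rows):
    """Encode an 8-bit grayscale image (a list of rows of ints 0-255)
    as a binary PGM (P5) byte string."""
    height, width = len(rows), len(rows[0])
    header = f"P5\n{width} {height}\n255\n".encode("ascii")
    return header + b"".join(bytes(row) for row in rows)

# pixel values below 10 come back exactly, unlike OpenCV's grayscale video path
frame = [[0, 5, 9], [255, 128, 7]]
data = encode_pgm(frame)
```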


Move windows between monitors using a hotkey in xfce

Also should work in other window managers.

Install xdotool

sudo apt-get install xdotool

Figure out the geometry of the destination by moving a terminal to the target location and size and running:

xfce4-terminal --hide-borders
xdotool getactivewindow getwindowgeometry

This gives, for example:

Window 102778681
  Position: 2560,1119 (screen: 0)
  Geometry: 1050x1633

Sometimes terminals size themselves oddly, so you can do this instead:

ID=`xdotool search firefox`

And then use the ID:

xdotool getwindowgeometry $ID

There’s also an issue with the window decorations, so you’ll have to correct for that. Mine were 22 pixels tall.

Finally, set up a hotkey with:

xdotool getactivewindow windowmove 2560 1119 windowsize 1050 1633

My hotkeys:

xdotool getactivewindow windowmove 440 0 windowsize 1680 1028 # top monitor
xdotool getactivewindow windowmove 0 1050 windowsize 1277 1549 # left on middle monitor
xdotool getactivewindow windowmove 1280 1050 windowsize 1277 1549 # right on middle monitor
xdotool getactivewindow windowmove 2560 1075 windowsize 1050 1633 # right monitor


Useful AviSynth Functions

I wrote a few useful AviSynth functions for:

  • Automatically integrating videos from different cameras: ConvertFormat(…)
  • import("abarryAviSynthFunctions.avs")
    goproFormat = AviSource("GOPR0099.avi")
    framerate = goproFormat.framerate
    audiorate = goproFormat.AudioRate
    width = goproFormat.width
    height = goproFormat.height
    othervid = AviSource("othervid.avi")
    othervid = ConvertFormat(othervid, width, height, framerate, audiorate)
  • Slowing down video slowly (i.e., go from 1x -> 1/2x -> 1/4x automatically): TransitionSlowMo(…)
    vid1 = AviSource("GOPR0098.avi")
    # trim to the point that you want
    vid1 = Trim(vid1, 2484, 2742)
    # make frames 123-190 slow-mo, maximum slow-mo factor is 0.25, use 15 frames to transition from 1x to 0.25x.
    vid1 = TransitionSlowMo(vid1, 123, 190, 0.25, 15) 


“Input not supported,” blank screen, or monitor crash when using HDMI to DVI adapter

You’re just going along and all of a sudden a terminal bell or something causes your monitor to freak out and crash. Restarting the monitor sometimes fixes the problem.

Turns out that sound is coming out through the HDMI adapter, which your monitor thinks is video, and then everything breaks. Mute your sound.


Popping / clipping / bad sound on Odroid-U2

Likely your pulseaudio configuration is messed up. Make sure that pulseaudio is running / working.


The (New) Complete Guide to Embedded Videos in Beamer under Linux

We used to use a pdf/flashvideo trick. It was terrible. This is so. much. better:

Update: pdfpc is now at a recent version in apt.

1) Install

sudo apt-get install pdf-presenter-console

2) Test it with my example: [on github] [local copy]

# use -w to run in windowed mode
pdfpc -w video_example.pdf

3) You need a poster image for every movie. Here’s my script to automatically generate all images in the “videos” directory (give it as its only argument the path containing a “videos” directory that it should search). It only runs on *.avi files, but that’s a choice, not an avconv limitation.


for file in `find $1/videos/ -type f -name "*.avi"`; do
    # check to see if a poster already exists
    if [ ! -e "${file/.avi/}.jpg" ]; then
        # make a poster
        avconv -i $file -vframes 1 -an -f image2 -y ${file/.avi/}.jpg
    fi
done

4) Now include your movies in your .tex. I use an extra style file that makes this easy: extrabeamercmds.sty (github repo). Include that (\usepackage{extrabeamercmds} with it in the same directory as your .tex) and then:


or for a non-avi / other poster file:


If you want to include the movie yourself, here’s the code:


Installing from source:

1) Install dependencies:

sudo apt-get install cmake libgstreamer1.0-dev libgstreamer-plugins-base1.0-dev libgee-0.8-dev librsvg2-dev libpoppler-glib-dev libgtk2.0-dev libgtk-3-dev gstreamer1.0-*

Install new version of valac:

sudo add-apt-repository ppa:vala-team
sudo apt-get update
sudo apt-get install valac-0.30

2) Download pdfpc:

git clone

3) Build it:

cd pdfpc
mkdir build
cd build
cmake ../
make -j8
sudo make install

Thanks to Jenny for her help on this one!


AviSynth: Add a banner that 1) wipes across to the right and 2) fades in

From Jenny.


function addWipedOverlay(clip c, clip overlay, int x, int y, int frames, int width)


c: the video clip you want to overlay
overlay: your banner
x: x position in image for banner to slide across from
y: y position in image
frames: the number of frames in which to accomplish wiping and fading in
width: the width of the overlay banner (if this is too big you will get an error from crop about destination width being less than zero)


Assumes a transparency channel. To get this, you need to load your image with the pixel_type="RGB32" flag.


img = ImageSource("BannerName.png", pixel_type="RGB32")
clip1 = addWipedOverlay(clip1, img, 0, 875, 30, 1279)

This is actually two functions because it uses recursion to implement a for loop:

function addWipedOverlay(clip c, clip overlay, int x, int y, int frames, int width) {
    return addWipedOverlayRecur(c, overlay, x, y, frames, width, 0)
}

function addWipedOverlayRecur(clip c, clip overlay, int x, int y, int frames, int width, int iteration) {
    cropped_overlay = crop(overlay, int((1.0 - 1.0 / frames * iteration) * width), 0, 0, 0)
    return (iteration == frames)
    \    ? Overlay(Trim(c, frames, 0), overlay, x = x, y = y, mask = overlay.ShowAlpha)
    \    : Trim(Overlay(c, cropped_overlay, x = x, y = y, mask = cropped_overlay.ShowAlpha, opacity = 1.0 / frames * iteration), iteration, (iteration == 0) ? -1 : iteration) + addWipedOverlayRecur(c, overlay, x, y, frames, width, iteration + 1)
}


Convert rawvideo / Y800 / gray to something AviSynth can read

avconv -i in.avi -vcodec mpeg4 -b 100000k out.avi

or in parallel:

find ./ -maxdepth 1 -name "*.avi" -type f |  xargs -I@@ -P 8 -n 1 bash -c "filename=@@; avconv -i \$filename -vcodec mpeg4 -b 100000k \${filename/.avi/}-mpeg.avi"


Wide Angle Lens Stereo Calibration with OpenCV

Update: OpenCV 2.4.10 adds a new fisheye namespace that might work better than the code below.

I’m using 150 degree wide-angle lenses for a stereo setup, and they are difficult to calibrate with OpenCV. A few must-have points:

  • When searching for chessboard matches, you must not use CALIB_CB_FAST_CHECK.
  • You need to calibrate each camera individually and then attempt the stereo calibration using the CV_CALIB_USE_INTRINSIC_GUESS flag.
  • I use about 30 chessboard images for each individual camera, making sure to cover the entire image with points, and about 10 images for the stereo calibration. I’ve found that using more images for the stereo calibration does not help and may actually make it worse.


Print video framerate (and other info)

avconv -i GOPR0087.avi