Code is hard to remember.

This is where I put interesting bits of source so I can find them again. Maybe they will be useful to you. Don’t expect consistency, documentation, bike messengers, or aspiring actors.

ag: find source code fast

ag (the silver searcher) is great.


sudo apt install silversearcher-ag

My config for it (in ~/.bashrc):

alias ag="ag --smart-case --color-path \"31;1\" --color-match \"32;1\" --color-line-number \"34;1\""


How to find a source video in a Google Slides presentation

Edit: this used to be hard, but Google added a button for it. Hooray!



How to use a GoPro Hero 8’s firmware webcam mode in linux

GoPro Hero 8 Black in webcam mode, shown in Google Meet on Ubuntu 18.04

I’m running Ubuntu 18.04.

1. Install v4l2loopback:

sudo apt-get install v4l2loopback-dkms

2. modprobe it:

sudo modprobe v4l2loopback devices=1 max_buffers=2 exclusive_caps=1 card_label="VirtualCam"

3. Plug in the GoPro with the latest firmware (tested with 2.01). I read that you need a USB3 cable so that’s what I used.

4. GoPro will come up as a network interface. For me its IP was:

You can nmap to find it:


5. Start an ffmpeg stream (note: I could only make ffmpeg work when the GoPro was stopped first):

ffmpeg -fflags nobuffer -f:v mpegts -probesize 8192 -i udp:// -f mpegts -vf format=yuv420p -f v4l2 /dev/video10

Your /dev/video device might be different for your v4l2loopback device.

6. Point your browser at or

You should see the GoPro switch into webcam mode on the front and back screens. If all went well, you’ll have a webcam called “VirtualCam” that will contain the stream.

7. Cycle the GoPro by going to:

Sadly, the latency isn’t very good (I’d guess around 300ms), so I’m not sure it’s all that useful. I tried the Windows beta with my camera and watched a YouTube video of the official app on Mac; the latency seemed about the same in both cases.

Reddit post with GoPro details


Amazing space shuttle video

Cameras on the solid rocket boosters, showing stage separation. One of the only up-close shots I’ve seen of the shuttle firing its engines high up.

STS-134 seen from an SRB after separation

9 min, 39 seconds:

Here’s another view where you can see the separation charges firing on the opposite SRB (14 min, 28 sec):


I love Firefox on Android because of ad-blocking

Firefox on Android + uBlock Origin is great.

1. When do I care most about bandwidth? Mobile.
2. When do I care most about power consumption? Mobile.

I haven’t had any of the compatibility issues I originally worried about. It’s just lovely.

03/1/20 is for when you are searching in a range and you know the upper and lower bounds and want the most efficient search to find the middle.
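That midpoint search is plain binary search; a minimal sketch (the `bisect` helper and the example predicate here are illustrative, not from the original post):

```python
def bisect(lo, hi, pred):
    """Return the smallest x in [lo, hi] for which pred(x) is True,
    assuming pred flips from False to True exactly once as x increases."""
    while lo < hi:
        # lo + (hi - lo) // 2 finds the middle without overflowing
        # in fixed-width languages, unlike the naive (lo + hi) // 2
        mid = lo + (hi - lo) // 2
        if pred(mid):
            hi = mid
        else:
            lo = mid + 1
    return lo

# Example: smallest integer whose square is at least 1000
print(bisect(0, 100, lambda x: x * x >= 1000))  # → 32
```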



Try my new site: Stop listening to boring video introductions and jump right to the point!


Programming an Arduino from a Chromebook with Crostini

It is now finally possible to run the Arduino IDE directly from a Chromebook without having to deal with internet-based compilers, removing Chrome OS, or any of that nonsense.

I am running Chrome OS Version 80.0.3987.18 (Official Build) dev (64-bit) from the Dev channel on a Samsung Chromebook and an Arduino Uno.

1. Enable Chrome OS dev channel and update to at least 80.0.3987.18.
2. Install Crostini via Settings > Linux (Beta) > Turn On
3. Go to chrome://flags/#crostini-usb-support and enable Crostini Usb Allow Unsupported

4. Install the Arduino IDE by opening the Linux terminal and typing sudo apt-get install arduino
5. Plug your Arduino in. If everything is working correctly, you’ll see a popup like this:

Click “Connect to Linux”

6. Open the terminal and type arduino to run the IDE. The serial port should now work and you should be able to upload.


I believe it will be illegal to kill a cow for meat in 40 years

I was discussing this article about plant-based meat and Katya suggested that our society could ban the meat industry.  If plant-based or lab-grown meat becomes tastier, healthier, and cheaper than animal products, we will no longer tolerate killing animals for food.

In 2059, I believe it will be illegal to kill a cow for meat in the US.

We already ban the killing of many animals for meat, such as dogs, cats, and horses.  With a viable alternative, why wouldn’t we add cows?  People love cows.

I often ponder, “what will be tomorrow’s next social issue?”  Remember, the first Pride Parade was only 49 years ago.  I think it’s meat.


Thesis Thursday (how to graduate on time)

Thesis Thursday helped me graduate on time.

You need a thesis.  You can try to write it in a month or two, be sad, miss your deadline, and graduate late.

Instead, do Thesis Thursday.

It’s easy. Every Thursday, you do the most immediate task your thesis needs. Not necessarily the hardest, most painful, or whatever. The most immediate.

Example Thesis Thursday Day 1:

  1. Open a text editor and create thesis.tex
  2. Spend two hours figuring out your department’s template
  3. Skip writing your title, and instead make chapter headings
  4. Call Chapter 2 Related Work and write a sentence about the most recent paper you read

Example Thesis Thursday Day 2:

  1. Read a paper and put two sentences about it in your related work
  2. Read a second paper and put two sentences about it in your related work
  3. That’s probably it for the day.


If you’re closer to graduation, you might have a day that looks like this:

  1. Be sad that you only have 2 of 3 committee members scheduled.
  2. Be sad that Dr. Third Committee Member is ignoring your emails
  3. Look up Dr. Third’s class schedule
  4. Go to the class as it is letting out
  5. Follow Dr. Third to his/her office until they give in and look at their calendar for you
  6. That took the whole day, but was a huge success of a day!  Committee meeting scheduled!


Sticking with it

It’s tough to stay accountable with Thesis Thursday.  People want meetings and your research will seem more important than writing.  Don’t give in!

I’m here to help: I will personally email you every Thursday and ask how much progress you made. Sign up here:


gitg throwback edition

I used to really like gitg, but I find the new version harder to use and less featureful. Enter gitg throwback edition, which is just gitg 0.2.7 updated to compile on a modern system:


PID Control Pitfalls

A nice explanation of PID control and its pitfalls (pdf).


Bash history finally done right

By default bash history is bad at sharing between terminals. I want:

  • Union of all terminals’ history in a new terminal
  • Each terminal to keep its own history while it is open
  • (optional) Type “sudo apt” and then press “up arrow” and it will search for everything starting with “sudo apt”

And finally, via this post and the comment by Jo Liss, I’m happy:

# Insert into .bashrc

# Make sure you remove the existing history lines from your .bashrc
# (usually: HISTCONTROL=ignoreboth, shopt -s histappend, HISTSIZE=1000)

HISTCONTROL=ignoreboth
shopt -s histappend
HISTSIZE=100000
HISTFILESIZE=$HISTSIZE

# Append this terminal's commands to the history file as they happen,
# but don't read other terminals' commands in (so each terminal keeps
# its own history while it is open)
_bash_history_sync() {
    builtin history -a
}

# Running `history` syncs first, so it shows the union of all terminals
history() {
    builtin history -a
    builtin history -c
    builtin history -r
    builtin history "$@"
}

PROMPT_COMMAND="_bash_history_sync;$PROMPT_COMMAND"

if [[ "$-" =~ "i" ]]; then # Don't do this on non-interactive shells
    # Add MATLAB-style up-arrow, so if you type "ca[UP ARROW]" you'll get
    # completions for only things that start with "ca" like "cat abc.txt"
    bind '"\e[A":history-search-backward'
    bind '"\e[B":history-search-forward'
fi


Guest Post: “Grumpy Sister” on Applying for Software Jobs

My sister gives some good advice:

Resume: Resumes are ONE PAGE long. (Plus a page for publications if you have them.) Nothing makes me grumpier than a new graduate with a three-page resume. I do not care what you did in high school. I do not care about your ambitions for your life or your job search or your cat. I do not care to read paragraphs of text. If you include coursework (which you should only do if you’re applying right out of undergrad), only include unusual and high-level courses. If you’re applying for a job in software, I should hope you’ve had Intro to CS.

Some places will ask you to do a presentation. If they do, YOU SHOULD ACE THE PRESENTATION. The entire rest of the day is going to be people asking you questions that may or may not be in your area of expertise. The only part of the interview you control is the presentation. There is simply no excuse for this not to be outstanding. Specifically:

  • Practice it. You probably will be given a time limit and it will probably be short. You will not get through twenty minutes worth of content if you have to stop and think and um and er your way through every slide. Plus, if you’ve practiced enough, you’ll have some inflection to your voice and you’ll have some leisure to look around and smile and make eye contact instead of frantically trying to remember your next slide. If you can find a friend and make them sit through it ahead of time, even better.
  • Words are bad. Pictures are good. Videos are great.
    • Make sure your graphics are readable. If you put in a graph, the lines need to be thick enough to be seen, the axes should be labeled, and it should have a legend if it needs one. Also make sure you actually know what your graph is of (you think I’m kidding? I’ve been in not one but two presentations in which the candidate couldn’t remember what his graphs were).
    • If you put math on the slide, define your variables. Please. Engineers can’t even agree on the correct letter for the square root of -1.
    • Everyone likes to see results. Especially results that are videos.
  • Be prepared for questions in the middle of the presentation.
  • Present one thing that you know very well. Within that, pick one aspect you want to go into depth on and teach it. Do not just present a slide of math because you learned how to use beamer in grad school and think it makes you look smart – it just makes you look like a poor communicator. If you have multiple things about which you think you could give an excellent presentation by all means pick the one you think will be most interesting to your interviewers. But that’s a concern secondary to being able to explain it in your sleep.
  • Don’t include a biography slide. We’ve all read your resume. Similarly, don’t waste five minutes telling me about the projects you’ve done that you’re not going to talk about today. If you think people might ask about them, make some backup slides.

Interviews: Almost any software interview is going to make you do whiteboard interviews. Review your favorite algorithms textbook (I recommend CLRS if you don’t have a favorite), find some example google interview problems online, and practice coding on a whiteboard. If you don’t have a whiteboard, tack some paper to the wall and use that. It’s really different from coding on a computer and it’s really important to get used to. In my very first interview ever, they asked me to code binary search and I messed it up. I’d actually been a TA for algorithms… and I got flustered with the different feel of the whiteboard and the interview and I made mistakes.


Sequencing DNA in our Extra Bedroom

MinION with flow-cell

MinION with flow-cell. The sensor is in the small window near the “SpotON” port. Video explanation of the principle of operation.

About a year ago, my girlfriend says, “I think we can sequence DNA in our extra room.” She showed me the website of Oxford Nanopore, which makes a USB DNA sequencer. We joked about it for a bit and that was that.

Until October, when I decided to buy her one for Christmas. I punched my credit card into their nice online store and waited for my shipping confirmation, which never arrived. Finally, I emailed them and found out I needed to join their “community.” At this point, I should mention that I’m not a biologist and I’m certainly not qualified to join a community of researchers. And they wanted to have a phone call. And this thing had better arrive by Dec. 25.

So I fessed up to my girlfriend (who has a PhD in genetics) and she wrote me a notecard with exactly what to say. I studied up. The day comes and I’m all nervous. The rep, I’ll call her Alice, phones and I proudly explain all the things I’m going to do, straight off my notecard.

Then Alice starts asking questions. I was not prepared for questions. Do I have a temperature-controlled fridge? Duh, of course (in the kitchen)1. A freezer? Yes, obviously2. A centrifuge and thermocycler? “Uh, sure, yeah, I have that3.” Then Alice started asking questions about my research. Uh oh. I repeat something about bacterial colonies from my card but she isn’t buying it. I manage to get out that I understand this isn’t a spit-in-the-tube-and-done thing and that’s all I’ve got. She keeps pushing and I eventually admit that I’m really buying it for my girlfriend for Christmas. Apparently that’s okay, since once Alice finishes laughing she says that it will arrive by Dec. 25 but they don’t offer a gift-wrap service.

The box arrives, packed in some sweet dry ice stuff. Dec. 25 comes and I get a set of pipettes and a PCR machine.

The obvious thing to sequence is one of us but the MinION can only do 1-2 billion bases per run4; to have decent quality for a human you need more like 90 billion 5. We decide to swab our mouths before brushing our teeth and find out what’s in there. My girlfriend whips up some media, we plop our swabs in, and put the tubes in the $30 chicken egg incubator she found on Amazon. Two days later I’m informed that the cultures smell like “really really bad breath,” and that I “really ought to smell them for myself,” which I studiously avoid for the next two months.

With the help of a $80 genomic DNA extraction kit, we cut up the cells, filter all their bits out, and have genomic DNA ready to go. First we run a gel electrophoresis to prove to ourselves that we didn’t mess up the extraction too much.

At this step, all the real biologists out there are assuming I’m going to talk about quantifying the DNA to make sure we had the right concentration before blowing a $500 flow-cell (the consumable part of the MinION) on this. Yeah, that would be a lot of work and the line is pretty bright in the gel…

We open the fridge to get out the sequencer’s flow-cell and notice that everything is frozen. Oh %$@*#@#*. I set the fridge to “10” because that seemed like a good idea. Alice definitely isn’t going to buy my warranty-return story. Two days later we’ve got a new thermometer and are praying that 100 freeze-thaw cycles are, uh, totally fine.

Happily the MinION comes with a calibration program that doesn’t seem to notice our substandard storage: all green. At this point we discover that Oxford Nanopore helpfully sends everyone a set of sample DNA to run first. We decide that sounds like a really good idea.

The promotional videos for the MinION claim, “simple 10 minute setup.” About two hours later, we’ve done the library prep, and we’re pipetting into the device. There are lots of warnings on their webpage about how you really can’t let air into the thing (permanent destruction of the flow-cell, blah blah). So of course the first thing we do is introduce an air bubble. But it only covers half the sensor. I think it’s the most expensive 5µL of air I’ve ever seen.

It turns out the sequencer produces so much data that the minimum requirements are a 1TB SSD and a quad-core CPU. My girlfriend’s laptop has 200 GB and a dual core, so that will have to suffice. We fire it up and it starts producing reads. We’re over the moon. Six hours later the run finishes, but only 10% of the bases have been “called.” The way the system works is by reading tiny changes in electric current as the molecules pass through the nanopore. Apparently the signal processing is kind of hard because 2 days later it’s still going. I play Overwatch by myself.

Sequencing. You can see the 512 ports on the laptop screen. Approximately half are green (sequencing or waiting for DNA) and the other half are blue (air-bubbled).

The Oxford Nanopore website supports automated uploading and analysis of some datasets, including the calibration run. Our run produced 983 million (aligned) base pairs, which any professionals reading this are scoffing at, but I’m pretty sure Celera circa 2000 would have been impressed. I certainly am. The last time I did anything close to this, we tested PTC tasting with a gel electrophoresis that took all day and effectively sequenced 1 base. We prep and load our mouth-bacteria genomic DNA into the sequencer. We’re reusing the flow-cell to save money and it’s hard to load correctly. We’re all paranoid about air bubbles now, but there’s a ton of them in the little fluid pipes. Eventually we look at each other, shrug, and put the sample in. It goes nowhere.

The library prep includes adding “loading beads” to the sample which give the liquid a ghostly white color. You can see if your sample is on the sensor by tracking the movement of that color, and ours clearly was stuck on the top of the port. Eventually we searched the forums and found someone else incompetent enough to have the same problem, with a solution that can be summarized as, “pull some liquid out of a different port and hope.” It worked great.

The sequencer uploads data in realtime, so after about 20 minutes, we were looking at a report of what lives in our mouth. Good news: it’s all normal. Bad news: ew. Turns out that we’re hosting viruses that are preying on the bacteria we’re also hosting.

Classification of our data.  The primary species are types of bacillus and klebsiella.  You can see a klebsiella phage which is a virus that preys on the similarly named bacteria.


I can’t wait for the next time I get sick so I can confidently stride into the doctor’s office and inform them exactly what bacterial infection I have before throwing up on their table and finding out we massively contaminated the sample and I have a viral flu.


Getting into Graduate School (Engineering / CS)

These are my personal opinions.  Other people will say other things.



You must apply for fellowships to be competitive.  It actually doesn’t really matter if you get one because you find out after you are admitted to the schools.  But you need to be able to say on your application that you applied (they specifically ask).  If the school thinks you might win one then they will be more likely to admit you because they won’t have to pay for you.  The NSF deadline is /earlier than you think/.  It is usually late October or early November… which means you need to ask for letters of recommendation in late September or early October.

You should at least apply to:

National Science Foundation Graduate Research Fellowship Program (NSF GRFP)
National Defense Science and Engineering Graduate (NDSEG)

and probably to:

DOE Office of Science Graduate Fellowship Program
DOE Computational Science Graduate Fellowship Program

Thoughts:

Think about qualification exams.  Many departments admit more people than they expect to pass quals and will force them to leave after a masters.  Departments vary a lot — MIT MechE and Aero/Astro have much more difficult quals than EECS but also admit more people.  Ask the other grad students what quals are like when you visit — they will definitely be able to tell you.

The GRE is a huge pain, but in engineering you mostly need good scores in the math.  English doesn’t appear to matter very much.

In the end, there are really two things that get you into grad school.  One is good grades, publication(s), and fantastic reference letters.  The other is talking to professors.  The professors/admissions committees are looking at people without enough information.  If a professor has a choice between a great candidate on paper and a great candidate who s/he has talked to, that’s a huge difference.  I fully believe I got accepted where I did because the professors knew who I was, knew what I was interested in, and had met me.  When my name came across, someone stood up and said, “this guy is good, I’ve talked to him.”

So you ABSOLUTELY MUST email and talk to professors.  This is really hard.  The way to start is to read their papers and write to them with a question about their work.  It needs to be an actual good question.  Then you can have a conversation with them, which is key.  Saying, “I’m interested in your lab” isn’t going to work.  Too many people do that.  This is really hard but it is what will get you into a top program.

When I was doing this as an undergraduate, it took me 4 to 6 hours per paper to read it, understand it, and come up with a question.

One thing that I found worthwhile was to read some of the sites out there on getting into grad school.  I liked:


Weird resolution video to Mac

avconv -i 2015-10-08.10-3dvisualization.avi -b 100000k -vcodec qtrle


Convert 120fps MP4 to 30fps AVI

avconv -i autonomous-takeoff-from-launcher.mp4 -vf fps=fps=30 -b 100000k autonomous-takeoff-from-launcher.avi


AviSynth + VirtualDub + Linux + GoPro Hero 4 Black 120fps video

Editing 120 fps GoPro Hero 4 Black 1080p video without video conversion.

  1. Install AviSynth and VirtualDub for Linux
  2. Make sure you are using a recent version of VirtualDub (>= 1.10).
  3. Install the FFMpegSource plugin by downloading it (version 2.20-icl only) and placing all of its files from:

    in your

    ~/.wine/drive_c/Program Files (x86)/AviSynth 2.5/plugins


  4. Finally, open your MP4 file in your .avs:
    a = FFAudioSource("GOPR0002.MP4")
    v = FFVideoSource("GOPR0002.MP4")
    v = AudioDub(v, a)
  5. (Optional, allows VirtualDub to open MP4 files directly)
    Download FFInputDriver and unpack it into
    • Note: this is important because the AviSynth plugin seems to fail when loading huge files. Use this to open your source in VirtualDub and then trim to the relevant part.
  6. I’ve also been using the avisynth GUI proxy with wine along with Avidemux (in apt-get as avidemux) to improve load times on Linux.

    • File > Connect to avsproxy in Avidemux


view images in order using feh

feh `ls -v *.png`


Fitting nonlinear (small aircraft) models from experimental data

My labmate Ani Majumdar wrote up some useful notes from fitting models for our experimental data (bolded text is mine). See also Table 3.1 from my thesis:

I made progress on this. The model seems quite good now (comparing simulations using matlab’s sysid toolbox against experimental flight trials, and looking at tracking performance on some preliminary experiments). Here are the things I tried in chronological order (and some lessons I learned along the way):

(1) Get parametric model from textbook (Aircraft Control and Simulation [Stevens], and Flight Dynamics [Stengel]), then do physical experiments on the plane to determine the parameters, and hope that F = ma.

The following parameters had to be measured/estimated:

– Physical dimensions (mass, moments of inertia, wing span/area, rudder/elevator dimensions, etc…)
– These are easy to just measure directly

– Relationship between servo command (0-255) and deflection angle of control surfaces
– This is simple to measure with a digital inclinometer/protractor (the reason this is not trivial is that the servo deflection gets transmitted through the wires to the actual control surface deflection, so you actually do have to measure it)

– Relationship between throttle command and airspeed over wings
– I measured this using a hot-wire anemometer placed above the wings for different values of throttle commands (0-255). The relationship looks roughly like airspeed = sqrt(c*throttle_command), which is consistent with theory.

– Relationship between throttle command and airspeed over elevator/rudder
– Same experiment as above (the actual airspeed is different though).

– Relationship between throttle command and thrust produced
– I put the plane on a low-friction track and used a digital force-meter (really a digital scale) to measure the thrust for different throttle commands. The plane pulls on the force-meter and you can read out the force values. This is a scary experiment because there’s the constant danger of the plane getting loose and flying off the track! You also have to account for static friction. You can either just look at the value of predicted thrust at 0 and just subtract this off, or you can also tilt the track to see when the plane starts to slide (this can be used to compute the force: m*g*sin(theta)). In my case, these were very close. The relationship was linear, but the thrust saturates at around a throttle command of 140.

– Aerodynamic parameters (e.g. lift/drag coefficients, damping terms, moment induced by angle derivatives).
– I could have put the plane in a wind-tunnel for some of these, but decided not to. I ended up using a flat plate model.
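The square-root throttle relation above linearizes nicely: squaring both sides gives airspeed² = c·throttle, so c falls out of a one-parameter least squares. A sketch with made-up numbers (the real anemometer data isn’t reproduced here):

```python
import math

# Hypothetical (throttle command, measured airspeed m/s) pairs --
# stand-ins for the hot-wire anemometer measurements
data = [(50, 5.1), (100, 7.0), (150, 8.7), (200, 10.1)]

# Model: airspeed^2 = c * throttle
# Least squares on the squared data: c = sum(t * v^2) / sum(t^2)
c = sum(t * v * v for t, v in data) / sum(t * t for t, v in data)

for t, v in data:
    predicted = math.sqrt(c * t)
    print(t, v, round(predicted, 2))
```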

(2) Use matlab’s sysid toolbox to fit stuff

Approach (1) wasn’t giving me particularly good results. So, I tried just fitting everything with matlab’s sysid toolbox (prediction error minimization, pem.m). I collected a bunch of experimental flight trials with sinusoid-like inputs (open-loop, of course).

This didn’t work too well either.

(3) Account for delay.

Finally, I remembered that Tim and Andy noticed a delay of about 50 ms when they were doing their early prophang experiments (they tried to determine this with some physical experiments). So, I took the inputs from my experimental trials and shifted the control input tapes by around 50 ms (actually 57 ms).

When I used matlab’s sysid toolbox to fit parameters after shifting the control input commands to account for delay, the fits were extremely good!

I noticed this when I was fitting a linear model to do LQR for prophang, back in October 2011. The fits are not good if you don’t take delay into account (duh). Got to remember this the next time.
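Shifting the control input tape before fitting is just an index shift on uniformly sampled logs. A sketch (the sample rate is an assumption; at a hypothetical 1 kHz, 57 ms would be 57 samples):

```python
def shift_inputs(u, delay_samples):
    """Delay the control input tape by delay_samples: the plant at step k
    actually responded to the command issued delay_samples earlier.
    The start is padded by holding the first command."""
    return [u[max(0, k - delay_samples)] for k in range(len(u))]

# Toy input tape, delayed by 2 samples
print(shift_inputs([0.0, 1.0, 2.0, 3.0, 4.0], 2))  # → [0.0, 0.0, 0.0, 1.0, 2.0]
```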

Here is a summary of what worked, and how I would go about doing it if I had to do it again (which I probably will have to at some point on the new hardware):

(1) Do some physical experiments to determine the stuff that is easy to determine, and to get the parametric form of the dependencies (e.g. thrust is linear with the 0-255 prop command, and it saturates around 140).

(2) Use matlab’s sysid toolbox to fit parameters with bounds on the parameters after having accounted for delay. The bounds are important. If some of your outputs are not excited (e.g. yaw on our plane and pitch to some degree), pem will make your parameters aphysical (e.g. extremely large) in order to eke out a small amount of prediction performance. As an extreme example, let’s say you have a system:

xdot = f(x) + c1*u1 + c2*u2.

Let’s say all the control inputs you used for sysid had very small u2 (say, 0 for the sake of argument). Then any value of c2 is consistent with the data, so you can just make c2 = 10^6 for example, which is physically meaningless. If u2 is small (but non-zero) and is overshadowed by the effect that u1 has, then some non-physical values of c2 could lead to very slight improvements in prediction errors and could be preferred over physical values.

So, bounding parameters to keep them physically meaningful is a good idea in my experience. Of course, ideally you would just collect enough data that this is not a problem, but this can be hard (especially for us since the experimental arena is quite confined).
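The u2 pathology is easy to reproduce: when u2 is never excited, every value of c2 gives an identical fit, so nothing stops an unconstrained optimizer from picking something absurd. A toy demonstration (made-up numbers, model reduced to xdot = c1*u1 + c2*u2):

```python
def sse(c1, c2, xdots, u1s, u2s):
    """Sum-of-squares prediction error for the model xdot = c1*u1 + c2*u2."""
    return sum((xd - c1 * u1 - c2 * u2) ** 2
               for xd, u1, u2 in zip(xdots, u1s, u2s))

u1s = [1.0, 2.0, 3.0]
u2s = [0.0, 0.0, 0.0]    # u2 never excited during sysid
xdots = [2.0, 4.0, 6.0]  # true c1 = 2

print(sse(2.0, 0.0, xdots, u1s, u2s))  # → 0.0
print(sse(2.0, 1e6, xdots, u1s, u2s))  # → 0.0 (an aphysical c2 fits just as well)
```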

Another good thing to do is to make sure you can overfit with the model you have. This sounds stupid, but is actually really important. If the parametric model you have is incorrect (or if you didn’t account for delay), then your model is incapable of explaining even small amounts of data. So, as a sanity check, take just a small piece of data and see if you can fit parameters to explain that piece of data perfectly. If you can’t, something is wrong. I was seeing this before I accounted for delay and this tipped me off that I was missing something fundamental. (it took me a little while longer to realize that it was delay :). I also tried this on the Acrobot on the first day I tried to use Elena’s model to do sysid with. Something was off again (in this case, it was a sign error – my fault, not hers).

(3) Finally, account for delay when you run the controller by predicting the state forwards in time to compute the control input. This works well for Joe, and seems to be working well for me so far.


Converting between AprilCal and OpenCV

I recently wanted to use AprilCal from the April Robotics Toolkit’s camera suite for camera calibration but to write my code in OpenCV. I got a bit stuck converting between formats, so Andrew Richardson helped me out.

1) Run AprilCal’s calibration.

2) Enter the GUI’s command line mode and export to a bunch of different model formats with :model-selection

3) Find the file for CaltechCalibration,kclength=3.config which orders the distortion parameters like this: radial_1, radial_2, tangential_1, tangential_2, radial_3.

4) Your OpenCV camera matrix is:

        [  fc[0]   0     cc[0]  ]
    M = [    0   fc[1]   cc[1]  ]
        [    0     0       1    ]

5) Your OpenCV distortion vector is:

    D = [ kc[0]  kc[1]  kc[2]  kc[3]  kc[4] ]
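Assembling those into the structures OpenCV expects might look like this (the numbers are placeholders standing in for AprilCal’s output, and you’d wrap the lists in numpy arrays before handing them to cv2):

```python
# Placeholder values standing in for AprilCal's fc, cc, and kc output
fc = [600.0, 600.0]                    # focal lengths (pixels)
cc = [320.0, 240.0]                    # principal point (pixels)
kc = [-0.3, 0.1, 0.001, -0.002, 0.05]  # radial_1, radial_2, tangential_1,
                                       # tangential_2, radial_3

# OpenCV camera matrix
M = [[fc[0], 0.0,   cc[0]],
     [0.0,   fc[1], cc[1]],
     [0.0,   0.0,   1.0]]

# OpenCV's distortion order (k1, k2, p1, p2, k3) matches the
# CaltechCalibration,kclength=3 ordering, so it's a straight copy
D = [kc[0], kc[1], kc[2], kc[3], kc[4]]
```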


Mount multiple partitions in a disk (dd) image

sudo kpartx -av disk_image.img


Debugging cron

The way to log cronjobs:

* * * * * /home/abarry/mycommand 2>&1 | /usr/bin/logger -t mycommand
... command runs ...
cat /var/log/syslog

Check for missing environment variables: compare the output of env under cron with env in your shell.


Force a CPU frequency on an odroid (running linaro 3.0.x kernel)

Update: This works on 3.8.x kernels too. I used MIN_SPEED and MAX_SPEED of 1704000 instead.

For the 3.8.x kernels —

CPU speed file:


Temperature file:


From here:

sudo apt-get install cpufrequtils

Create a file: /etc/default/cpufrequtils with these contents:


Note: If your CPU temperature hits 85C, this will be overridden to force it down to 800MHz. Check with this script:

Burn CPU:

while true; do
     sleep .5
     cpufreq-info | grep "current CPU"
     sudo cat /sys/devices/platform/tmu/temperature
done


OpenCV and saving grayscale (CV_8UC1) videos

OpenCV does some very odd things when saving grayscale videos. Specifically, it appears to convert them to RGB / BGR even if you are saving in a grayscale codec like Y800. This stack overflow post confirms it, as does opening the files in a hex editor.

The real issue is that this conversion is lossy: when saving a grayscale image, pixel values of less than 10 are converted to 0! Yikes!

The only reasonable solution I have found is to save all my movies as directories full of PGM files (similar to PPM but grayscale only).
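Part of the appeal is that PGM needs no library at all. A minimal binary (P5) writer, assuming 8-bit frames stored as flat row-major bytes:

```python
def write_pgm(path, pixels, width, height):
    """Write 8-bit grayscale pixels as a binary PGM (P5) file.
    No color-space round trip, so small pixel values survive intact."""
    assert len(pixels) == width * height
    with open(path, "wb") as f:
        f.write(b"P5\n%d %d\n255\n" % (width, height))
        f.write(bytes(pixels))

# Values below 10 are preserved, unlike in OpenCV's grayscale video path
write_pgm("frame_000001.pgm", bytes([0, 5, 9, 255]), 2, 2)
```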


Move windows between monitors using a hotkey in xfce

This should also work in other window managers.

Install xdotool

sudo apt-get install xdotool

Figure out the geometry of the destination by moving a terminal to the target location and size and running:

xfce4-terminal --hide-borders
xdotool getactivewindow getwindowgeometry

This gives, for example:

Window 102778681
  Position: 2560,1119 (screen: 0)
  Geometry: 1050x1633

Sometimes terminals size themselves oddly, so you can do this instead:

ID=`xdotool search firefox`

And then use the ID:

xdotool getwindowgeometry $ID

There’s also an issue with the window decorations, so you’ll have to correct for that. Mine were 22 pixels tall.

Finally setup a hotkey with:

xdotool getactivewindow windowmove 2560 1119 windowsize 1050 1633

My hotkeys:

xdotool getactivewindow windowmove 440 0 windowsize 1680 1028 # top monitor
xdotool getactivewindow windowmove 0 1050 windowsize 1277 1549 # left on middle monitor
xdotool getactivewindow windowmove 1280 1050 windowsize 1277 1549 # right on middle monitor
xdotool getactivewindow windowmove 2560 1075 windowsize 1050 1633 # right monitor


Useful AviSynth Functions

I wrote a few useful AviSynth functions for:

  • Automatically integrating videos from different cameras: ConvertFormat(…)

    import("abarryAviSynthFunctions.avs")
    goproFormat = AviSource("GOPR0099.avi")
    framerate = goproFormat.framerate
    audiorate = goproFormat.AudioRate
    width = goproFormat.width
    height = goproFormat.height
    othervid = AviSource("othervid.avi")
    othervid = ConvertFormat(othervid, width, height, framerate, audiorate)
  • Slowing down video slowly (ie go from 1x -> 1/2x -> 1/4x automatically): TransitionSlowMo(…)
    vid1 = AviSource("GOPR0098.avi")
    # trim to the point that you want
    vid1 = Trim(vid1, 2484, 2742)
    # make frames 123-190 slow-mo, maximum slow-mo factor is 0.25, use 15 frames to transition from 1x to 0.25x.
    vid1 = TransitionSlowMo(vid1, 123, 190, 0.25, 15) 
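The transition boils down to a per-frame speed schedule that ramps from 1x down to the minimum factor, holds, and ramps back up. A rough Python sketch of that schedule (an illustration of the idea, not the AviSynth implementation; slowmo_schedule is a name I made up):

```python
def slowmo_schedule(n_frames, min_factor, transition_frames):
    """Per-frame playback factors: ramp 1.0 -> min_factor over
    transition_frames, hold at min_factor, then ramp back to 1.0."""
    factors = []
    for i in range(n_frames):
        if i < transition_frames:                   # easing into slow-mo
            t = i / (transition_frames - 1)
        elif i >= n_frames - transition_frames:     # easing back out
            t = (n_frames - 1 - i) / (transition_frames - 1)
        else:                                       # fully slowed
            t = 1.0
        factors.append(1.0 + t * (min_factor - 1.0))
    return factors
```

For the example above (frames 123-190, factor 0.25, 15 transition frames) that is slowmo_schedule(68, 0.25, 15).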


“Input not supported,” blank screen, or monitor crash when using HDMI to DVI adapter

You’re just going along and all of a sudden a terminal bell or something causes your monitor to freak out and crash. Restarting the monitor sometimes fixes the problem.

Turns out that sound is coming out through the HDMI adapter which your monitor thinks is video, and then everything breaks. Mute your sound.


Popping / clipping / bad sound on Odroid-U2

Likely your pulseaudio configuration is messed up. Make sure that pulseaudio is running / working.


The (New) Complete Guide to Embedded Videos in Beamer under Linux

We used to use a pdf/flashvideo trick.  It was terrible.  This is so. much. better:

Update: pdfpc is now at a recent version in apt.

1) Install

sudo apt-get install pdf-presenter-console

2) Test it with my example: [on github] [local copy]

# use -w to run in windowed mode
pdfpc -w video_example.pdf

3) You need a poster image for every movie. Here’s my script to automatically generate all images in the “videos” directory (give it, as its only argument, the path containing a “videos” directory that it should search). It only runs on *.avi files, but that’s a choice, not an avconv limitation.


for file in `find $1/videos/ -type f -name "*.avi"`; do
    # check to see if a poster already exists
    if [ ! -e "${file/.avi/}.jpg" ]; then
        # make a poster
        avconv -i $file -vframes 1 -an -f image2 -y ${file/.avi/}.jpg
    fi
done

4) Now include your movies in your .tex. I use an extra style file that makes this easy: extrabeamercmds.sty (github repo). Include that (\usepackage{extrabeamercmds} with it in the same directory as your .tex) and then:


or for a non-avi / other poster file:


If you want to include the movie yourself, here’s the code:


Installing from source:

1) Install dependencies:

sudo apt-get install cmake libgstreamer1.0-dev libgstreamer-plugins-base1.0-dev libgee-0.8-dev librsvg2-dev libpoppler-glib-dev libgtk2.0-dev libgtk-3-dev gstreamer1.0-*

Install new version of valac:

sudo add-apt-repository ppa:vala-team
sudo apt-get update
sudo apt-get install valac-0.30

2) Download pdfpc:

git clone

3) Build it:

cd pdfpc
mkdir build
cd build
cmake ../
make -j8
sudo make install

Thanks to Jenny for her help on this one!


AviSynth: Add a banner that 1) wipes across to the right and 2) fades in

From Jenny.


function addWipedOverlay(clip c, clip overlay, int x, int y, int frames, int width)


c: the video clip you want to overlay
overlay: your banner
x: x position in image for banner to slide across from
y: y position in image
frames: the number of frames in which to accomplish wiping and fading in
width: the width of the overlay banner (if this is too big you will get an error from crop about destination width less than zero)


Assumes a transparency channel. To get this, you need to load your image with the pixel_type="RGB32" flag.


img = ImageSource("BannerName.png", pixel_type="RGB32")
clip1 = addWipedOverlay(clip1, img, 0, 875, 30, 1279)

This is actually two functions because it uses recursion to implement a for loop:

function addWipedOverlay(clip c, clip overlay, int x, int y, int frames, int width) {
    return addWipedOverlayRecur(c, overlay, x, y, frames, width, 0)
}

function addWipedOverlayRecur(clip c, clip overlay, int x, int y, int frames, int width, int iteration) {
    cropped_overlay = crop(overlay, int((1.0 - 1.0 / frames * iteration) * width), 0, 0, 0)
    return (iteration == frames)
    \    ? Overlay(Trim(c, frames, 0), overlay, x = x, y = y, mask=overlay.ShowAlpha)
    \    : Trim(Overlay(c, cropped_overlay, x = x, y = y, mask = cropped_overlay.ShowAlpha, opacity = 1.0 / frames * iteration), iteration, (iteration == 0) ? -1 : iteration) + addWipedOverlayRecur(c, overlay, x, y, frames, width, iteration + 1)
}


Convert rawvideo / Y800 / gray to something AviSynth can read

avconv -i in.avi -vcodec mpeg4 -b 100000k out.avi

or in parallel:

find ./ -maxdepth 1 -name "*.avi" -type f |  xargs -I@@ -P 8 -n 1 bash -c "filename=@@; avconv -i \$filename -vcodec mpeg4 -b 100000k \${filename/.avi/}-mpeg.avi"


Wide Angle Lens Stereo Calibration with OpenCV

Update: OpenCV 2.4.10 adds a new fisheye namespace that might work better than the code below.

I’m using 150 degree wide-angle lenses for a stereo setup, and they are difficult to calibrate with OpenCV. A few must-have points:

  • When searching for chessboard matches, you must not use CALIB_CB_FAST_CHECK.
  • You need to calibrate each camera individually and then attempt the stereo calibration using the CV_CALIB_USE_INTRINSIC_GUESS flag.
  • I use about 30 chessboard images for each individual camera, making sure to cover the entire image with points and about 10 images for stereo calibration. I’ve found that more images on the stereo calibration does not help and actually may make it worse.


Print video framerate (and other info)

avconv -i GOPR0087.avi


Bind Pithos’s play/pause to a key or script

dbus-send --print-reply --dest=net.kevinmehall.Pithos /net/kevinmehall/Pithos net.kevinmehall.Pithos.PlayPause


Reduce PDF size using ghostscript

Courtesy of Jenny:

Note: don’t use if you have transparency in your figures.

gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -dPDFSETTINGS=/printer -dNOPAUSE -dQUIET -dBATCH -sOutputFile=output.pdf input.pdf


Compress AviSynth / VirtualDub Output

Sometimes I output from VirtualDub in an uncompressed format.

Works with pdfpc:

F="file.avi"; avconv -i $F -acodec pcm_s16le -ac 2 -b 100000k -ab 100000 ${F/.avi/}-c.avi

Better audio compression:

F="file.avi"; avconv -i $F -acodec libmp3lame -ac 2 -b 100000k -ab 100000 ${F/.avi/}-c.avi

If Unknown encoder ‘libmp3lame’:

sudo apt-get install libavcodec-extra-53


Using the ArduPilot (APM 2.5) as a Sensor and I/O Board with another onboard PC communicating over USB

I recently wrote a firmware package for the ardupilot / APM 2.5 for use as a sensor and I/O board. This strips out most of its logic and pumps sensor data up the USB as fast as I could make it go while simultaneously reading USB commands for publishing to servo motors.

It also will mux between input servo commands and the USB commands based on an input channel’s value.

Post here:

Code here (in the ArduRead folder):


How to Fix a Sticky Kinesis Advantage Keyboard

I opened a soda near my keyboard and it blew up all over the right side. Now you might be thinking “no big deal,” but I have had some RSI issues so I use a $300 Kinesis Advantage, which, if you have RSI problems, is simply amazing. It, along with Workrave, has almost completely solved my problems.

So, my keys were sticking. Bad day. I popped the keys off with their included tool and wiped them down, but that only cleared up the issues for about 10 minutes. You can send yours in for a few weeks and $50-80 and they’ll fix it, but I decided to try fixing it myself.

Step 1) Take it apart. Relatively simple. A few screws on the back and you’re set.


Step 2) Take out the side that you’re having issues with. A few more screws and one ribbon cable and it will come off.

Kinesis right side top

Kinesis right side

Step 3) Get a syringe, fill it with water, and very carefully press the Cherry switch down and put a few drops of water inside. As soon as the water is in, flip the board over and shake while holding the switch down. Then use compressed air to blow out the water, all the while holding a napkin under the switch to absorb anything that comes out.

Repair kinesis

Step 4) Dry it with a heater or leave it for a while. I dried mine, put it back together, and almost all the keys worked. It turned out I hadn’t pushed the ribbon cable all the way in, so make sure you do that. Typing on it now and it’s as good as new!

Update: I’ve had to repeat this process a number of times now as keys that weren’t stuck have become stuck. I recommend cleaning most of the keys in the affected area all at once so you don’t have to keep opening up the keyboard.

Update 2: I’ve had to repeat this process about five times on the keys to get them unstuck (drying, then washing, then drying, etc.) Each time everything seems to be a little bit better. I’m crossing my fingers that I’m in the clear now.

Update 3: Seems to be working well now.

Update 4: Continued to have problems after a while. Next I took the right circuit board out (as pictured above) and put it in a warm bowl of water for a few minutes. I worked each key (with the keycaps off) and then dried it out. So far this has been the best solution yet, and the keyboard has survived its trip.

Update 5: After Update 4, the keyboard has been working without any issues for over a year.


Slow WiFi on Ubuntu with a Thinkpad

A post I’m trying out now:


Convert GoPro Hero 2/3/4 MP4 to AVI for AviSynth

avconv -i GOPR0047.MP4 -acodec mp2 -b 100000k -ab 224000 GOPR0047.avi

if not found:

sudo apt-get install libavcodec-extra-53

in parallel:

find ./ -maxdepth 1 -name "*.MP4" -type f |  xargs -I@@ -P 8 -n 1 bash -c "filename=@@; avconv -i \$filename -acodec mp2 -b 100000k -ab 224000 \${filename/.MP4/}.avi"

For 120 fps video:

avconv -i GOPR0058.MP4 -acodec mp2 -vcodec ffv1 -b 100000k -ab 224000 GOPR0058.avi

in parallel:

find ./ -maxdepth 1 -name "*.MP4" -type f |  xargs -I@@ -P 8 -n 1 bash -c "filename=@@; avconv -i \$filename -acodec mp2 -vcodec ffv1 -b 100000k -ab 224000 \${filename/.MP4/}.avi"

The old way (causes audio issues on 1080p / 60fps video):

avconv -i GOPR0051.MP4 -acodec libmp3lame -ac 2 -ar 44800 -b 100000k -ab 100000 GOPR0051.avi


Installing AviSynth and VirtualDub in Linux under wine

Follow the instructions here: [local mirror]

Don’t forget this step: put avs2yuv.exe in the folder: ~/.wine/drive_c/windows/system32/

Then install vcrun6sp6 using winetricks.

After you’re done, you might consider installing support for MP4 / GoPro files.


Gumstix notes

Update: Use this URL if your wget won’t resolve https connections:

The environment that has worked very well for me is to compile on-board (not cross-compiling) for anything small (see tip #2 below for how to get a compiler easily). That usually just works, is fast enough, and simplifies things a lot. When I need to compile something big, I’ve used distcc to get my desktop to help (over wifi). Below is an email from another guy in our lab who wrote down how to get that to work.

There’s a few other big tricks I’ve used.

1) If you’re using wifi, I wrote an article about getting that to work here:

2) Add more repositories to the package manager. The default one barely has anything in it, but the main Angstrom repo has a lot of stuff (much more like using apt-get). Again, this will only work with a network connection, but I think you can get a board that will give you that.

Page about it:

Here’s the relevant part:

By default, opkg uses the Gumstix repositories defined in the configuration files found under /etc/opkg/. This repository, however, does not contain all the packages you can build using OpenEmbedded. For example, we can add the armv7a base Angstrom Distribution repository to our list of repositories.

$ opkg list | wc -l
$ echo 'src/gz angstrom-base
base' > /etc/opkg/angstrom-base.conf
$ opkg update
$ opkg list | wc -l


I highly recommend doing that. It made my life much simpler by allowing me to just install things like a compiler, etc.

distcc email

distcc worked like a charm once I finally figured out where the cross compiler was, and that the compilers needed to have the same name on the two systems. State estimator built, and runs as far as complaining about not getting a bot-param :-)

to get to the cross compiler setup environment according to:

the cross compiler would then be in:


To get distcc to work, I followed the distcc part of these instructions:

The only other critical step was to create a symbolic link from the cross-compiler location to


Then, to have Make use distcc on the Gumstix, all you need to do is set three environment variables:

export CC="distcc arm-angstrom-linux-gnueabi-gcc"
export CXX="distcc arm-angstrom-linux-gnueabi-g++"
export DISTCC_HOSTS="ip-of-desktop"


whaw — Tiling Windowing on Linux

Whaw comes from John Meacham. It’s awesome (pardon the pun) for use with tiling windows. I added a few extra command line options so you can move the “hot pixel” around on the screen.

I highly recommend that you spend 30 seconds and read the description of how to use it.

Link to my version: whaw-0.1.2.andy.tar.gz

To install you might need:

sudo apt-get install libxmu-dev


Using encfs


Create (and mount) an encrypted directory:

encfs ~/crypt ~/visible

Mount it again later (the same command):

encfs ~/crypt ~/visible

Unmount:

fusermount -u ~/visible


Ban email first thing in the morning

I noticed that when I check my email first thing in the morning, I spend the day putting out small fires instead of doing good work. I wrote a quick script that bans my email after about 7 hours of idle time and then restores it sometime within the first 45-75 minutes of activity.

  1. Install xprintidle: sudo apt-get install xprintidle
  2. Paste this script somewhere on your system and make it executable.
  3. Add it to root’s crontab (sudo crontab -e):
    53 * * * * /home/abarry/scripts/banEmailCronjob

# this should be run as a cron job (as root)
# sudo crontab -e
#   53 * * * * /home/abarry/scripts/banEmailCronjob
# the idea is that within an hour of the user returning, the job
# will run and email will be restored, but just as you sit down,
# email will be banned.
# DEPENDS on xprintidle which is in apt.

set -x # tells bash to print everything it's about to run




# first get the idle time

IDLE=`DISPLAY=:0.0 xprintidle`
echo "Idle amount: " $IDLE
echo "Sleeping for a while..."

sleep 1727 # sleep for a while so email will never be active right as you sit down

# get the IPs of gmail
ADDRESSES=`dig +short A`
FIRSTIP=`dig +short A | head -n 1`

# dig seems unreliable
IPV6ADDRESSES=`dig +short AAAA`
IPV6ADDRESSES="$IPV6ADDRESSES 2607:f8b0:4009:802::1015 2607:f8b0:4009:802::1016"


if [ $IDLE -gt "25200000" ]; then
    echo "Idle is greater than threshold, banning email..."
    # ban email here

    # check to see if it is already banned
    HOSTS=`iptables -L -v -n | grep "$FIRSTIP"`
    echo "$HOSTS"
    if [ -z "$HOSTS" ]; then
        # not already banned, so ban it

        # ban gmail's IPs
        # find these IPs using $ dig
        echo "banning via iptables"

        # loop through the IPs and ban them
        for thisip in ${ADDRESSES}; do
            iptables -A INPUT -s $thisip -j DROP
        done

        # loop through the IPv6 addresses
        echo "banning via ip6tables"
        for thisip in ${IPV6ADDRESSES}; do
            ip6tables -A INPUT -s $thisip -j DROP
        done

        echo "banned!"
    fi
else
    echo "Idle is less than threshold, so removing ban on email..."
    # remove ban here

    echo "removing iptables ban"
    # unban IPs (flushing is simpler than deleting rule by rule)
    #for thisip in ${ADDRESSES}; do
    #    iptables -D INPUT -s $thisip -j DROP
    #done
    iptables -F

    echo "removing IPv6 ban"
    #for thisip in ${IPV6ADDRESSES}; do
    #    ip6tables -D INPUT -s $thisip -j DROP
    #done
    ip6tables -F
fi



VirtualDub audio compression

Best done with ffmpeg instead of virtual dub / avisynth:

ffmpeg -i knife-edge-narration.avi -sameq -acodec libmp3lame -ac 1 knife-edge-narration2.avi


The Complete Guide to Embedded Videos in Beamer under Linux

Edit: A better solution is to use pdfpc. See my new guide.

It is now possible to embed videos in Beamer presentations under Linux.  It’s stable and works well.

The strategy is to use acroread and a flash player to play .flv files. Credit to this post for a lot of this work. Here’s how to do it:

The short version:
1. Get acroread version 9.4.1 [local mirror]
2. Download the example.
3. Convert your video to flv (mess with the resolution to get smooth playback).

ffmpeg -i movie.avi -sameq -s 960x540 movie.flv


Now the explanation:

I. Get the right version of acroread.

1. Uninstall acroread using apt-get (which isn’t likely to be the right version)

sudo apt-get remove acroread

2. Download version 9.4.1 of acroread from Adobe (note that the i486 version will still work on 64-bit systems) [FTP page] [local mirror]
3. Mark the package executable:

cd your-download-directory
chmod +x AdbeRdr9.4.1-1_i486linux_enu.bin

4. Install acroread:


I installed to /opt/acroread, so I run it like so:


II. Get Beamer files and flash player

Read the rest of this entry »


flash encoding using ffmpeg

ffmpeg -i movie.avi -sameq movie.flv

If you have issues with the sound and just want to remove it:

ffmpeg -i movie.avi -an -sameq movie.flv


S107G Helicopter Control via Arduino

I have worked with two types of S107G helicopters. One is a 2-channel (A and B) and the other is a 3-channel (A, B, and C) version. Their protocols differ significantly. The more common 2-channel (32-bit) version’s protocol is well documented elsewhere, so here I will only document the 3-channel (30-bit) version.

(First posted at rcgroups.)

S107G at FIAP Workshop


The protocol for this is 30 bits long.

  • To send a bit you flash the IR lights 16 times for a 0 and 32 times for a 1.
  • Each flash is off for 8-9 microseconds and on for 8-9 microseconds.
  • Between bits you wait for 300 microseconds.
  • Between 30-bit packets you delay an amount depending on the channel you are using.
  • Channel A: 136500 us
  • Channel B: 105200 us
  • Channel C: 168700 us

The order of the bits is as follows:


C – channel
P – pitch
T – throttle
Y – yaw
X – checksum
R – trim

There are a few other things to note:

1) It has a checksum. The 21st-24th bits are a bitwise XOR of the packet’s other 4-bit words, with two zeros appended to the end of the bitstring. Thus you can compute the checksum for the first packet:

1000 0000 1000 1100 0000 1111 1111 11
1000 ^ 0000 ^ 1000 ^ 1100 ^ 0000 ^ 1111 ^ 1100 = 1111

and for the second packet:

1000 0000 0011 1001 0000 0001 1111 11
1000 ^ 0000 ^ 0011 ^ 1001 ^ 0000 ^ 1111 ^ 1100 = 0001
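That XOR rule is easy to verify in code. A small Python sketch (s107g_checksum is my own name), checked against the two packets above:

```python
def s107g_checksum(packet30):
    """XOR of the 4-bit words of a 30-bit packet (padded with '00'),
    skipping the checksum field itself (bits 21-24)."""
    bits = packet30.replace(" ", "") + "00"        # pad to 32 bits
    nibbles = [bits[i:i + 4] for i in range(0, 32, 4)]
    del nibbles[5]                                 # drop the checksum nibble
    x = 0
    for n in nibbles:
        x ^= int(n, 2)
    return format(x, "04b")
```

For the first packet this returns "1111" and for the second "0001", matching the worked examples.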

Read the rest of this entry »


Run script on resume from suspend

Put your script into


with a number at the beginning and mark it executable. Here’s an example that sets my Thinkpad mouse sensitivity and enables two-fingered scrolling on my touchpad.

$ cat /etc/pm/sleep.d/99-trackpoint-and-twofinger
#!/bin/sh
# pm-utils passes "resume" or "thaw" as $1 when waking up
case "$1" in
    resume|thaw)
        echo -n 220 > /sys/devices/platform/i8042/serio1/serio2/sensitivity 2> /dev/null
        echo -n 95 > /sys/devices/platform/i8042/serio1/serio2/speed 2> /dev/null
        xinput set-int-prop 'SynPS/2 Synaptics TouchPad' "Synaptics Two-Finger Pressure" 32 4
        xinput set-int-prop 'SynPS/2 Synaptics TouchPad' "Synaptics Two-Finger Width" 32 7
        xinput set-int-prop 'SynPS/2 Synaptics TouchPad' "Synaptics Two-Finger Scrolling" 8 1 1
        xinput set-int-prop 'SynPS/2 Synaptics TouchPad' "Synaptics Jumpy Cursor Threshold" 32 250
        ;;
esac
exit $?

Note that if you want to have the script run at boot as well you probably want to add your code to



VirtualDub settings and conversion for .MTS video

Video: MTS off a Canon Vixia HG21.
Setup: Linux, working in wine.

Conversion to avi for VirtualDub/AviSynth:

avconv -i 00394.MTS -vcodec libxvid -b 100000k -deinterlace -acodec mp2 -ab 224000 output.avi

Or in parallel:

find ./ -maxdepth 1 -name "*.MTS" -type f | xargs -I@@ -P 8 -n 1 bash -c "filename=@@; avconv -i \$filename -vcodec libxvid -b 100000k -deinterlace -acodec mp2 -ab 224000 \${filename/.MTS/}.avi"

Jenny says, “I had to add the -ac 2 flag for audio”

For .mp4 video:

avconv -i 00394.MTS -vcodec libxvid -b 100000k -deinterlace -acodec mp2 -ab 224000 output.mp4

Or in parallel:

find ./ -maxdepth 1 -name "*.MTS" -type f | xargs -I@@ -P 8 -n 1 bash -c "filename=@@; avconv -i \$filename -vcodec libxvid -b 100000k -deinterlace -acodec mp2 -ab 224000 \${filename/.MTS/}.mp4"

Set up VirtualDub:

  • Options > Preferences > AVI > Check Directly decode uncompressed YCbCr (YUV) sources
  • Select Video > Compression…
    • Select ffdshow Video Codec
    • Select Configure and then set the bitrate to 10000
  • Select Video > Fast recompress


Speed up MATLAB figures with OpenGL

You can substantially increase your MATLAB figure performance by using OpenGL rendering. Put this in your startup.m file:

set(0, 'DefaultFigureRenderer', 'OpenGL');

You can check if MATLAB has detected your hardware by using:

>> opengl info

Other relevant figure properties are:

>> set(gcf,'Renderer','OpenGL')
>> set(gcf,'RendererMode','manual')

Warning: this breaks saving EPS files as vectorized figures.


Gumstix wifi (wlan1: link not ready)

I just spent a long time trying to diagnose an issue with a Gumstix Overo Fire and bringing up WiFi (802.11b/g) on boot. I did all the standard things (when using a desktop image, you must uninstall NetworkManager and then set up your configuration in /etc/network/interfaces). I could get a connection sometimes, but it was very unclear why it would or would not connect. It was so unclear that I couldn’t write a script to reliably bring the WiFi up on boot.

I kept getting this issue:

ADDRCONF(NETDEV_UP): wlan1: link is not ready

occasionally followed by the better:

ADDRCONF(NETDEV_CHANGE): wlan1: link becomes ready

Also, something odd was going on that I don’t understand because when the interface would configure on boot, it would rename to wlan1:

[... boot messages ...]
libertas_sdio: Libertas SDIO driver
libertas_sdio: Copyright Pierre Ossman
Remounting root file system...
libertas: 00:19:88:21:59:1c, fw 9.70.7p0, cap 0x00000303
libertas_sdio mmc1:0001:1: wlan0: Features changed: 0x00004800 -> 0x00004000
libertas: wlan0: Marvell WLAN 802.11 adapter
udev: renamed network interface wlan0 to wlan1
Caching udev devnodes

Finally, I found a solution: use an image from 2010. I’m using the omap3-console-image-overo-201009091145 build found here and mirrored locally here.


Colors in `ls`

How to make a machine (say, a Gumstix) show colors for ls (assuming your login shell is bash). Edit ~/.bashrc and ~/.bash_profile to match these files:

Make sure your shell is bash (if it’s not, you can change it in /etc/passwd):

root@overo:~# echo $SHELL

In ~/.bashrc:

root@overo:~# cat .bashrc 
export LS_OPTIONS='--color=auto'
eval `dircolors`
alias ls='ls $LS_OPTIONS'

In ~/.bash_profile:

root@overo:~# cat .bash_profile 
source ~/.bashrc


Recursively get (or set) a property in svn

Edit: There’s an easier way to delete properties recursively:

svn propdel -R rlg:email robotlib-mydev 

Use find to traverse recursively through the directory structure:

find . -type d \! -path *.svn* -exec echo {} \; -exec svn propget rlg:email {}@ \;

To print out the current directory:

-exec echo {} \;

To find only directories:

-type d

To ignore svn folders:

\! -path *.svn*

To execute the propget command:

-exec svn propget rlg:email {}@ \;

The {} is the place find puts the path and the @ at the end causes SVN to parse correctly for paths that have “@” in them.


Firefox 5 extension compatibility check

The old preference for disabling compatibility checks (extensions.checkCompatibility) is now gone. The new one marks the version you chose to do this at:

To re-enable your broken (but likely still working) add-ons in Firefox 5.0, add the extensions.checkCompatibility.5.0 preference in about:config:

1. Type about:config into the firefox URL bar.

Read the rest of this entry »


Enable editable shortcut keys in Gnome

There used to be a checkbox for it, but it’s gone now. You can still enable the option by opening


and checking the box under



Reload udev rules without restarting udev

sudo udevadm control --reload-rules


Email CPU usage

Say you’re going to be out of solid contact for a while, but you’d like to monitor your CPU usage (so you can make sure to SSH in and fix something if the CPUs go idle, indicating a simulation crash).

Solution: a quick script to email me my CPU load every hour. Then all my sessions are on screen, so I can SSH in with my phone and edit/restart what I need over 3G. If it looks bad, I’ll know to get to a real net connection ASAP.

Here’s the script (requires sudo apt-get install sendemail)


while true; do
    echo `uptime` | sendemail -f -t -u "Uptime report (auto)"

    sleep 3600
done


video conversion with mencoder

With sound:

mencoder VIDEO0046.3gp -ovc x264 -oac mp3lame -o VIDEO0046.ogg

No sound:

mencoder VIDEO0046.3gp -ovc x264 -nosound -o VIDEO0046.ogg


Reverting to old versions in SVN

If you want to revert everything to a previous version (ie revision 72):

svn merge -rHEAD:72 .

Note the “.” at the end!

If you want to revert all local changes (ie say go back to the head revision):

svn revert -R .

Again, note the “.” at the end.

To view the diff between two versions (ie what changed between revisions 72 and 73), use:

svn diff -r 72:73


Resize with imagemagick

Resize (and change format):

convert pic.jpg -resize 50% pic.png

Batch-resize into thumbnails:

for file in *.png; do convert $file -resize 15% thumbs/$file; done

Resize to fit specific dimensions:

convert ../task1.png -resize 300x100 task1.png

File conversion is also nice:

for foo in *.jpg ; do convert ${foo} ${foo%.jpg}.png ; done


Browse old revision in SVN using the web interface


Append

!svn/bc/[revision number]/

to the URL of the repository (not just any folder in the repository).

So, in other words: !svn/bc/681/


Export Cinelerra Video to YouTube

It took me a while to figure out the best Cinelerra export settings for YouTube. I ended up going with MPEG4 in an AVI container.


Use mencoder to convert from recordmydesktop to dv format that cinelerra can open

mencoder -vf crop=720:576:0:0 -ovc lavc -lavcopts vcodec=dvvideo seeded-1.ogv -o scaled2/colorized.avi

crop options: width:height:x:y

where x and y are the top left coordinates of the cropping box.


Antialiased rings / filled circles in pygame

This shouldn’t be hard. But it is.

If you want a filled, antialiased circle or ring in pygame, you have to draw the antialiased part yourself: pygame does not implement an antialiased filled circle, so this becomes a terribly complicated way to get one. Worse, the antialiased (outline) circles pygame does implement seem to antialias only toward white, so they encroach on the colored parts of the image.

def DrawTarget(self):
    # outside antialiased circle
    pygame.gfxdraw.aacircle(self.image, self.rect.width/2, self.rect.height/2, self.rect.width/2 - 1, self.color)

    # outside filled circle
    pygame.gfxdraw.filled_ellipse(self.image, self.rect.width/2, self.rect.height/2, self.rect.width/2 - 1, self.rect.width/2 - 1, self.color)
    temp = pygame.Surface((TARGET_SIZE,TARGET_SIZE), SRCALPHA) # the SRCALPHA flag denotes pixel-level alpha
    if (self.filled == False):
        # inside background color circle
        pygame.gfxdraw.filled_ellipse(temp, self.rect.width/2, self.rect.height/2, self.rect.width/2 - self.width, self.rect.width/2 - self.width, BG_ALPHA_COLOR)
        # inside antialiased circle
        pygame.gfxdraw.aacircle(temp, self.rect.width/2, self.rect.height/2, self.rect.width/2 - self.width, BG_ALPHA_COLOR)
    self.image.blit(temp, (0,0), None, BLEND_ADD)
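If you do roll your own, the per-pixel alpha for a filled antialiased circle is just the clamped distance to the circle’s edge. A pure-Python sketch of that coverage rule (names are mine, not a pygame API); you would blend the returned alpha against your chosen color rather than toward white:

```python
import math

def circle_alpha(cx, cy, r, x, y):
    """Approximate pixel coverage for an antialiased filled circle:
    fully opaque inside, fading linearly across a 1-pixel edge band."""
    d = math.hypot(x - cx, y - cy)
    coverage = min(1.0, max(0.0, r + 0.5 - d))  # 1 well inside, 0 well outside
    return int(round(255 * coverage))
```

A pixel at the circle’s center gets alpha 255, one far outside gets 0, and pixels straddling the edge get intermediate values.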


terminal redirection

wxProcess *process = new wxProcess(this); // by giving the process this frame, we are notified when it dies


This is a bit tricky (and took 5 hours to figure out). It turns out that many systems act differently when output is redirected to a file rather than a terminal. For example, “ls” gives different results than “ls > t” followed by looking at “t”.

This is an issue for us, because the way the buffering works makes many printf statements get buffered so nothing shows up. The workaround is the program called “script” which is used to write down a script of terminal commands to a file. We can invoke it with the -c option to run a command, and act like a terminal. We then tell it to send its file to /dev/null and then capture its output through standard output redirection.

Finally, we must invoke the process with wxEXEC_MAKE_GROUP_LEADER so we can later kill it with wxKILL_CHILDREN, which causes everything to close correctly.

wxString processStr = wxT("script -c ") + m_current_dir + m_text_ctrls->Item(index)->GetValue() + (wxT(" /dev/null"));
int pid = int(wxExecute(processStr, wxEXEC_ASYNC | wxEXEC_MAKE_GROUP_LEADER, process));
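The tty-versus-pipe behavior that motivates the `script` trick is easy to observe: the same child process reports a terminal when run interactively but not when its stdout is a pipe, which is what flips C stdio from line buffering to block buffering. A small Python demonstration (independent of the wxWidgets code above):

```python
import subprocess
import sys

# Ask a child process whether its stdout looks like a terminal.
# With stdout captured through a pipe it reports False -- the same
# condition that makes printf output get block-buffered and "disappear".
child = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.stdout.isatty())"],
    capture_output=True, text=True,
)
print(child.stdout.strip())  # -> False when captured via a pipe
```

Wrapping the child in `script -c … /dev/null` makes it see a pseudo-terminal instead, so it behaves as if run interactively even though its output is being captured.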