Code is hard to remember.

This is where I put interesting bits of source so I can find them again. Maybe they will be useful to you. Don’t expect consistency, documentation, bike messengers, or aspiring actors.


Move windows between monitors using a hotkey in xfce

It should also work in other window managers.

Install xdotool

sudo apt-get install xdotool

Figure out the geometry of the destination by moving a terminal to the target position and size, then running:

xfce4-terminal --hide-borders
xdotool getactivewindow getwindowgeometry

which gives, for example:

Window 102778681
  Position: 2560,1119 (screen: 0)
  Geometry: 1050x1633

Sometimes terminals size themselves oddly, so you can do this instead:

ID=`xdotool search firefox`

And then use the ID:

xdotool getwindowgeometry $ID

There’s also an issue with the window decorations, so you’ll have to correct for their height. Mine were 22 pixels tall.
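
For example (a sketch, assuming 22-pixel decorations like mine), subtract the decoration height from the measured y position before reusing it:

xdotool getactivewindow windowmove 2560 1097 windowsize 1050 1633   # y = 1119 - 22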

Finally, set up a hotkey that runs:

xdotool getactivewindow windowmove 2560 1119 windowsize 1050 1633

My hotkeys:

xdotool getactivewindow windowmove 440 0 windowsize 1680 1028 # top monitor
xdotool getactivewindow windowmove 0 1050 windowsize 1277 1549 # left on middle monitor
xdotool getactivewindow windowmove 1280 1050 windowsize 1277 1549 # right on middle monitor
xdotool getactivewindow windowmove 2560 1075 windowsize 1050 1633 # right monitor
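
To bind one of these commands to a key you can use the xfce Keyboard settings GUI, or do it from the command line with xfconf-query. This is a sketch: the xfce4-keyboard-shortcuts channel and the /commands/custom property path are assumptions from my setup, so adjust the key and command to taste:

xfconf-query -c xfce4-keyboard-shortcuts -n -t string -p "/commands/custom/<Super>1" -s "xdotool getactivewindow windowmove 440 0 windowsize 1680 1028"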


Useful AviSynth Functions

I wrote a few useful AviSynth functions for:

  • Automatically integrating videos from different cameras: ConvertFormat(…)

    import("abarryAviSynthFunctions.avs")
    
    goproFormat = AviSource("GOPR0099.avi")
    
    framerate = goproFormat.framerate
    audiorate = goproFormat.AudioRate
    width = goproFormat.width
    height = goproFormat.height
    
    othervid = AviSource("othervid.avi")
    
    othervid = ConvertFormat(othervid, width, height, framerate, audiorate)
    
  • Gradually slowing down video (ie going from 1x -> 1/2x -> 1/4x automatically): TransitionSlowMo(…)

    vid1 = AviSource("GOPR0098.avi")
    
    # trim to the point that you want
    vid1 = Trim(vid1, 2484, 2742)
    
    # make frames 123-190 slow-mo, maximum slow-mo factor is 0.25, use 15 frames to transition from 1x to 0.25x.
    vid1 = TransitionSlowMo(vid1, 123, 190, 0.25, 15) 
    

Here’s the code:


########### functions ################


function ConvertFramerate(clip v, float framerateIn, float framerateOut)
{
    v = v.AssumeFPS(framerateIn)
    v = v.ChangeFPS(framerateOut)
    return v
}
function slow50(clip v, float framerate)
{
    v = v.assumeFPS(framerate/2)
    return v.changeFPS(framerate)
}

function AddBlankAudio(clip v, int rate)
{
    blankAudio = BlankClip(v, audio_rate=rate, channels=2)
    return AudioDub(v, blankAudio)
}

function slowmo_50(clip v, float framerate)
{
    v=assumeTFF(v)
    v=bob(v)

    return v.assumefps(framerate)
}

function ConvertFormat(clip v, int width, int height, float framerate, int audiorate)
{
    # convert clips to the right size, adding bars where needed
    
    
    v = HasAudio(v) ? ((v.AudioRate == audiorate) ? v : v.ResampleAudio(audiorate) ) : AddBlankAudio(v, audiorate)
    
    
    # figure out if width or height is the limiting factor
    
    scaleFactor = (float(v.width) / float(v.height) > float(width)/float(height)) ? float(width)/float(v.width) : float(height)/float(v.height)
    
    newWidth = (int(float(v.width)*scaleFactor) % 2 == 0) ? int(float(v.width)*scaleFactor) : int(float(v.width)*scaleFactor) - 1
    
    newHeight = (int(float(v.height)*scaleFactor) % 2 == 0) ? int(float(v.height)*scaleFactor) : int(float(v.height)*scaleFactor) - 1
    
    
    #v = Subtitle(v, String(scaleFactor) + ": " + String(newWidth) + " x " + String(newHeight), 100, 100, font="Agency FB", text_color=$00ffffff, halo_color=$ff000000, size=20)
    
    v = LanczosResize(v, newWidth, newHeight)
    
    # now add bars
    
    v = AddBorders(v, (width - v.width)/2, (height - v.height)/2, (width - v.width)/2, (height - v.height)/2)
    
    v = v.ChangeFPS(framerate)
    
    return v
    
}

function TransitionSlowMoWorker(clip v, int frameStart, int frameEnd, float timeFactor)
{
    slow = Trim(v, frameStart, frameEnd)
    
    newFramerate = (v.framerate * timeFactor > 0) ? v.framerate * timeFactor : 1
    
    slow = slow.assumeFPS(newFramerate)
    slow = ConvertFramerate(slow, slow.framerate, v.framerate)
    
    fontsize = (v.width > 1000) ? v.width/14 : v.width/10
    
    slow = Subtitle(slow, "1/" + String(int(v.framerate/newFramerate)) + "x", int(v.width*.85), int(v.height *.85), font="Agency FB", text_color=$00ffffff, halo_color=$ff000000, size=fontsize)
    
    # debug subtitle
    #slow = Subtitle(slow, String(newFramerate), 200, 100, font="Agency FB", text_color=$00ffffff, halo_color=$ff000000, size=75)
    
    return slow
}


function TransitionBuild(clip v, float endingFactor, float timeFactor)
{
    # what is this going to look like?
    #
    #   length of clip is how many frames we have to transition
    #   parameter saying what our starting rate was
    #   parameter saying what our ending rate is
    
    # ok so every pass halves the framerate
    
    thisRate = (endingFactor >= timeFactor/2.0) ? endingFactor : timeFactor/2.0
    
    front = TransitionSlowMoWorker(v, 0, v.framecount/2, thisRate)
    
    back = Trim(v, v.framecount/2 + 1, 0)
    
    return (endingFactor >= timeFactor) ? front : front + TransitionBuild(back, endingFactor, thisRate)
}

function TransitionBuildOut(clip v, float endingFactor, float timeFactor)
{
    # what is this going to look like?
    #
    #   length of clip is how many frames we have to transition
    #   parameter saying what our starting rate was
    #   parameter saying what our ending rate is
    
    # ok so every pass halves the framerate
    
    thisRate = (endingFactor >= timeFactor/2.0) ? endingFactor : timeFactor/2.0
    
    front = Trim(v, 0, v.framecount/2)
    
    back = TransitionSlowMoWorker(v, v.framecount/2, 0, thisRate)
    
    return (endingFactor >= timeFactor) ? back : TransitionBuildOut(front, endingFactor, thisRate) + back
}

##
## Smoothly slows down and speeds up the video around the given frames
##
function TransitionSlowMo(clip v, int frameStart, int frameEndIn, float timeFactor, int early_amount)
{
    # slow down at frameStart
    
    frameEnd = (frameEndIn == 0) ? v.framecount : frameEndIn
    
    #  take the first few frames
    #
    #earlyslow = TransitionSlowMoWorker(v, frameStart, frameStart + early_amount, timeFactor*2)
    #
    #slow = TransitionSlowMoWorker(v, frameStart + early_amount, frameEnd, timeFactor)
    #
    #return Trim(v, 0, frameStart) + earlyslow + slow + Trim(v, frameEnd, 0)
    
    transitionIn = TransitionBuild(Trim(v, frameStart, frameStart + early_amount), timeFactor, 1)
    
    slow = TransitionSlowMoWorker(v, frameStart + early_amount, frameEnd - early_amount, timeFactor)
    
    front = Trim(v, 0, frameStart)
    
    back = Trim(v, frameEnd, 0)
    
    
    
    transitionOutAndBack = (frameEndIn == 0) ? back : TransitionBuildOut(Trim(v, frameEnd - early_amount, frameEnd), timeFactor, 1) + back
    
    return front + transitionIn + slow + transitionOutAndBack
    
}


“Input not supported,” blank screen, or monitor crash when using HDMI to DVI adapter

You’re just going along and all of a sudden a terminal bell or something causes your monitor to freak out and crash. Restarting the monitor sometimes fixes the problem.

It turns out that sound is coming out through the HDMI adapter, which your monitor tries to interpret as video, and then everything breaks. Mute your sound.
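
If you want to mute from a script or hotkey, something like this should do it (assuming ALSA's amixer and that your output is on the default card's Master control):

amixer -q set Master mute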


Popping / clipping / bad sound on Odroid-U2

Your pulseaudio configuration is likely messed up. Make sure that pulseaudio is running and working.
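
A quick way to check and restart it (a sketch; your system may manage pulseaudio differently):

pulseaudio --check && echo running || echo not running
pulseaudio -k        # kill the current daemon
pulseaudio --start   # start it again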


The (New) Complete Guide to Embedded Videos in Beamer under Linux

We used to use a pdf/flashvideo trick.  It was terrible.  This is so. much. better:

1) Download pdfpc:

git clone https://github.com/davvil/pdfpc.git

2) Install dependencies:

sudo apt-get install libgstreamer0.10-dev libgee-dev valac-0.20 libgstreamer-plugins-base0.10-dev librsvg2-dev libpoppler-glib-dev libgtk2.0-dev

3) Build it:

cd pdfpc
git submodule init
git submodule update
mkdir build
cd build
cmake ../
make -j8
sudo make install

4) Test it with my example: [on github] [local copy]

# use -w to run in windowed mode
pdfpc -w video_example.pdf

5) You need a poster image for every movie. Here’s my script to automatically generate all the images in the “videos” directory (give it, as its only argument, the path containing the “videos” directory it should search; see the usage example after the script). It only runs on *.avi files, but that’s a choice, not an avconv limitation.

#!/bin/bash

for file in `find $1/videos/ -type f -name "*.avi"`; do
#for file in `find $1/videos/ -type f -`; do
    
    # check to see if a poster already exists
    
    if [ ! -e "${file/.avi}.jpg" ]
    then
        # make a poster
        #echo $file
        avconv -i $file -vframes 1 -an -f image2 -y ${file/.avi/}.jpg
    fi
done
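
If you save the script as, say, make-posters.sh (the name is hypothetical), run it from the directory that contains your videos folder:

chmod +x make-posters.sh
./make-posters.sh .     # the argument is the directory that contains videos/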

6) Now include your movies in your .tex. I use an extra style file that makes this easy: extrabeamercmds.sty (github repo). Include it (\usepackage{extrabeamercmds}, with the file in the same directory as your .tex) and then:

\fullFrameMovieAvi{videos/myvideo}

or for a non-avi / other poster file:

\fullFrameMovie{videos/myvideo.avi}{videos/myvideo.jpg}

If you want to include the movie yourself, here’s the code:

\href{run:myvideo.avi?autostart&loop}{\includegraphics[width=\paperwidth,height=0.5625\paperwidth]{myposter.jpg}}

Thanks to Jenny for her help on this one!


AviSynth: Add a banner that 1) wipes across to the right and 2) fades in

From Jenny.

Prototype:

function addWipedOverlay(clip c, clip overlay, int x, int y, int frames, int width)

Arguments:

c: the video clip you want to overlay
overlay: your banner
x: x position in image for banner to slide across from
y: y position in image
frames: the number of frames in which to accomplish wiping and fading in
width: the width of the overlay banner (if this is too big you will get an error from crop about the destination width being less than zero)

Notes:

Assumes a transparency channel. To get this, you need to load your image with the pixel_type="RGB32" flag.

Example:

img = ImageSource("BannerName.png", pixel_type="RGB32")
clip1 = addWipedOverlay(clip1, img, 0, 875, 30, 1279)

This is actually two functions because it uses recursion to implement a for loop:

function addWipedOverlay(clip c, clip overlay, int x, int y, int frames, int width)
{
    return addWipedOverlayRecur(c, overlay, x, y, frames, width, 0)
}


function addWipedOverlayRecur(clip c, clip overlay, int x, int y, int frames, int width, int iteration)
{
    cropped_overlay = crop(overlay, int((1.0 - 1.0 / frames * iteration) * width), 0, 0, 0)
   
    return (iteration == frames)
    \    ? Overlay(Trim(c, frames, 0), overlay, x = x, y = y, mask=overlay.ShowAlpha)
    \    : Trim(Overlay(c, cropped_overlay, x = x, y = y, mask = cropped_overlay.ShowAlpha, opacity = 1.0 / frames * iteration), iteration, (iteration == 0) ? -1 : iteration) + addWipedOverlayRecur(c, overlay, x, y, frames, width, iteration + 1)
}


Convert rawvideo / Y800 / gray to something AviSynth can read

avconv -i in.avi -vcodec mpeg4 -b 100000k out.avi

or in parallel:

find ./ -maxdepth 1 -name "*.avi" -type f |  xargs -I@@ -P 8 -n 1 bash -c "filename=@@; avconv -i \$filename -vcodec mpeg4 -b 100000k \${filename/.avi/}-mpeg.avi"


Wide Angle Lens Stereo Calibration with OpenCV

I’m using 150 degree wide-angle lenses for a stereo setup, and they are difficult to calibrate with OpenCV. A few must-have points:

  • When searching for chessboard matches, you must not use CALIB_CB_FAST_CHECK.
  • You need to calibrate each camera individually and then attempt the stereo calibration using the CV_CALIB_USE_INTRINSIC_GUESS flag.
  • You must use CV_CALIB_RATIONAL_MODEL.
  • I use about 30 chessboard images for each individual camera, making sure to cover the entire image with points, and about 10 images for stereo calibration. I’ve found that using more images for the stereo calibration does not help and may actually make it worse.
  • Flags I use for calibration: CV_CALIB_RATIONAL_MODEL, CV_CALIB_USE_INTRINSIC_GUESS, CV_CALIB_FIX_PRINCIPAL_POINT.


Print video framerate (and other info)

avconv -i GOPR0087.avi


Bind Pithos’s play/pause to a key or script

dbus-send --print-reply --dest=net.kevinmehall.Pithos /net/kevinmehall/Pithos net.kevinmehall.Pithos.PlayPause


Reduce PDF size using ghostscript

Courtesy of Jenny:

Note: don’t use if you have transparency in your figures.

gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -dPDFSETTINGS=/printer -dNOPAUSE -dQUIET -dBATCH -sOutputFile=output.pdf input.pdf


Compress AviSynth / VirtualDub Output

Sometimes I output from VirtualDub in an uncompressed format.

Compress it:

avconv -i uncompressed.avi -acodec libmp3lame -ac 2 -ar 44800 -b 100000k -ab 100000 compressed.avi

If you get Unknown encoder ‘libmp3lame’:

sudo apt-get install libavcodec-extra-53


Using the ArduPilot (APM 2.5) as a Sensor and I/O Board with another onboard PC communicating over USB

I recently wrote a firmware package for the ardupilot / APM 2.5 for use as a sensor and I/O board. This strips out most of its logic and pumps sensor data up the USB as fast as I could make it go while simultaneously reading USB commands for publishing to servo motors.

It also will mux between input servo commands and the USB commands based on an input channel’s value.

Post here:

http://www.diydrones.com/forum/topics/code-for-using-the-ardupilot-with-an-onboard-computer-usb

Code here (in the ArduRead folder):

https://github.com/andybarry/ardupilot/tree/arduread


How to Fix a Sticky Kinesis Advantage Keyboard

I opened a soda near my keyboard and it blew up all over the right side. Now you might be thinking “no big deal,” but I have had some RSI issues so I use a $300 Kinesis Advantage, which, if you have RSI problems, is simply amazing. It, along with Workrave, has almost completely solved my problems.

So, my keys were sticking. Bad day. I popped the keys off with their included tool and wiped them down, but that only cleared up the issues for about 10 minutes. You can send yours in for a few weeks and $50-80 and they’ll fix it, but I decided to give it a shot myself.

Step 1) Take it apart. Relatively simple. A few screws on the back and you’re set.

kinesisInside

Step 2) Take out the side that you’re having issues with. A few more screws and one ribbon cable and it will come off.

Kinesis right side top

Kinesis right side

Step 3) Get a syringe, fill it with water, and very carefully press the Cherry switch down and put a few drops of water inside. As soon as the water is in, flip the board over and shake while holding the switch down. Then use compressed air to blow out the water, all the while holding a napkin under the switch to absorb anything that comes out.

Repair kinesis

Step 4) Dry it with a heater or leave it for a while. I dried mine, put it back together, and almost all the keys worked. It turned out I hadn’t pushed the ribbon cable all the way in, so make sure you do that. I’m typing on it now and it’s as good as new!

Update: I’ve had to repeat this process a number of times now as keys that weren’t stuck have become stuck. I recommend cleaning most of the keys in the affected area all at once so you don’t have to keep opening up the keyboard.

Update 2: I’ve had to repeat this process about five times on the keys to get them unstuck (drying, then washing, then drying, etc.) Each time everything seems to be a little bit better. I’m crossing my fingers that I’m in the clear now.

Update 3: Seems to be working well now.


Slow WiFi on Ubuntu with a Thinkpad

A post I’m trying out now: http://www.mattkirwan.com/tech/slow-wireless-connection-using-ubuntu-1204-and-a-lenovo-thinkpad-x200/


Convert GoPro Hero 2/3 MP4 to AVI for AviSynth

avconv -i GOPR0047.MP4 -acodec mp2 -b 100000k -ab 224000 GOPR0047.avi

If the encoder is not found:

sudo apt-get install libavcodec-extra-53

in parallel:

find ./ -maxdepth 1 -name "*.MP4" -type f |  xargs -I@@ -P 8 -n 1 bash -c "filename=@@; avconv -i \$filename -acodec mp2 -b 100000k -ab 224000 \${filename/.MP4/}.avi"

For 120 fps video:

avconv -i GOPR0058.MP4 -acodec mp2 -vcodec ffv1 -b 100000k -ab 224000 GOPR0058.avi

The old way (causes audio issues on 1080p / 60fps video):

avconv -i GOPR0051.MP4 -acodec libmp3lame -ac 2 -ar 44800 -b 100000k -ab 100000 GOPR0051.avi


Installing AviSynth and VirtualDub in Linux under wine

Follow the instructions here:

http://www.garratt.info/blog/?p=35 [local mirror]

Don’t forget this step: put avs2yuv.exe in the folder: ~/.wine/drive_c/windows/system32/

Then install vcrun6sp6 using winetricks.
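
That is (assuming you have winetricks installed):

winetricks vcrun6sp6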


Gumstix notes

Update: Use this URL if your wget won’t resolve https connections: http://feeds.angstrom-distribution.org/feeds/unstable/ipk/glibc/armv7a/base/

The environment that has worked very well for me is to compile on-board (not cross-compiling) for anything small (see tip #2 below for how to get a compiler easily). That usually just works, is fast enough, and simplifies things a lot. When I need to compile something big, I’ve used distcc to get my desktop to help (over wifi). Below is an email from another guy in our lab who wrote down how to get that to work.

There are a few other big tricks I’ve used.

1) If you’re using wifi, I wrote an article about getting that to work here: http://abarry.org/gumstix-wifi-wlan1-link-not-ready/

2) Add more repositories to the package manager. The default one barely has anything in it, but the main angstrom repo has a lot of stuff (much more like using apt-get). Again, this will only work with a network connection, but I think you can get a board that will give you that.

Page about it: http://gumstix.org/add-software-packages.html

Here’s the relevant part:

—–
By default, opkg uses the Gumstix repositories defined in the configuration files found under /etc/opkg/. This repository, however, does not contain all the packages you can build using OpenEmbedded. For example, we can add the armv7a base Angstrom Distribution repository to our list of repositories.

$ opkg list | wc -l
17698
$ echo 'src/gz angstrom-base http://www.angstrom-distribution.org/feeds/unstable/ipk/glibc/armv7a/base' > /etc/opkg/angstrom-base.conf
$ opkg update
...
$ opkg list | wc -l
21755

——

I highly recommend doing that. It made my life much simpler by allowing me to just install things like a compiler, etc.


distcc email


distcc worked like a charm once I finally figured out where the cross compiler was, and that the compilers needed to have the same name on the two systems. State estimator built, and runs as far as complaining about not getting a bot-param :-)

To get the cross compiler, set up the environment according to:
http://www.gumstix.org/software-development/open-embedded/61-using-the-open-embedded-build-system.html

the cross compiler would then be in:

overo-oe/tmp/sysroots/i686-linux/usr/armv7a/bin/arm-angstrom-linux-gnueabi-gcc

To get distcc to work, I followed the distcc part of these instructions:
http://blog.kfish.org/2010/06/speeding-up-cross-compiling-with-ccache.html

The only other critical step was to create a symbolic link from the cross-compiler location to

/usr/local/bin/arm-angstrom-linux-gnueabi-gcc
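
In other words, something like this (a sketch; adjust the path to wherever your overo-oe build tree actually lives):

sudo ln -s ~/overo-oe/tmp/sysroots/i686-linux/usr/armv7a/bin/arm-angstrom-linux-gnueabi-gcc /usr/local/bin/arm-angstrom-linux-gnueabi-gcc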

Then, to have Make use distcc on the gumstix, all you need to do is set three environment variables:

export CC="distcc arm-angstrom-linux-gnueabi-gcc"
export CXX="distcc arm-angstrom-linux-gnueabi-g++"
export DISTCC_HOSTS="ip-of-desktop"


whaw — Tiling Windowing on Linux

Whaw comes from John Meacham. It’s awesome (pardon the pun) for use with tiling windows. I added a few extra command line options so you can move the “hot pixel” around on the screen.

I highly recommend that you spend 30 seconds and read the description of how to use it.

Link to my version: whaw-0.1.2.andy.tar.gz

To install you might need:

sudo apt-get install libxmu-dev


Using encfs

Create (encfs creates the encrypted filesystem on first run):

encfs ~/crypt ~/visible

Mount:

encfs ~/crypt ~/visible

Unmount:

fusermount -u ~/visible


Ban email first thing in the morning

I noticed that when I check my email first thing in the morning, I spend the day putting out small fires instead of doing good work. I wrote a quick script that bans my email after about 7 hours of idle time and then restores it sometime within the first 45-75 minutes of activity.

  1. Install xprintidle: sudo apt-get install xprintidle
  2. Create a file owned by root at /usr/bin/custom-startup that contains the script below, and make it executable.
  3. Add a line at the very end of /etc/sudoers, past any #includedir lines (use visudo to edit that file), to allow the script to be called as root without a password (change my username to yours):
    abarry ALL=(ALL) NOPASSWD: /usr/bin/custom-startup
    
  4. Add a call to the script (with sudo in front) to the Gnome startup applications in System > Preferences > Startup Applications.
#!/bin/bash

# this should be run (with sudo) on boot
# the idea is that within an hour of the user returning, the job
# will run and email will be restored, but just as you sit down,
# email will be banned.
#
# DEPENDS on xprintidle which is in apt.




# first get the idle time

while true
do
    IDLE=`xprintidle`

    echo "Idle amount: " $IDLE
    
    sleep 2727
    
    # get the IPs of gmail
    ADDRESSES=`dig +short A gmail.com`
    FIRSTIP=`dig +short A gmail.com | head -n 1`

    if [ $IDLE -gt "25200000" ]
    then
        echo "Idle is greater than threshold, banning email..."
        # ban email here
        
        # check to see if it is already banned
        HOSTS=`iptables -L -v -n | grep "$FIRSTIP"`
        echo "$HOSTS"
        if [ -z "$HOSTS" ]
        then
            # not already banned, so ban it
        
            # ban gmail's IPs
            # find these IPs using $ dig mail.google.com
            echo "banning via iptables"
            
            # loop through the IPs and ban them
            for thisip in ${ADDRESSES}; do
                iptables -A INPUT -s $thisip -j DROP
            done
        
            echo "banned!"
        fi
        
    else
        echo "Idle is less than threshold, so removing ban on email..."
        # remove ban here
        
        echo "removing iptables ban"
        # unban IPs
        for thisip in ${ADDRESSES}; do
            iptables -D INPUT -s $thisip -j DROP
        done

        
    fi
    
    sleep 2727
    
done


VirtualDub audio compression

Best done with ffmpeg instead of VirtualDub / AviSynth:

ffmpeg -i knife-edge-narration.avi -sameq -acodec libmp3lame -ac 1 knife-edge-narration2.avi


The Complete Guide to Embedded Videos in Beamer under Linux

Edit: A better solution is to use pdfpc. See my new guide.


It is now possible to embed videos in Beamer presentations under Linux.  It’s stable and works well.

The strategy is to use acroread and a flash player to play .flv files. Credit to this post for a lot of this work. Here’s how to do it:

The short version:
1. Get acroread version 9.4.1 [local mirror]
2. Download the example.
3. Convert your video to flv (mess with the resolution to get smooth playback).

ffmpeg -i movie.avi -sameq -s 960x540 movie.flv

 

Now the explanation:


I. Get the right version of acroread.


1. Uninstall acroread if it was installed using apt-get (it isn’t likely to be the right version):

sudo apt-get remove acroread

2. Download version 9.4.1 of acroread from Adobe (note that the i486 version will still work on 64-bit systems) [FTP page] [local mirror]
3. Mark the package executable:

cd your-download-directory
chmod +x AdbeRdr9.4.1-1_i486linux_enu.bin

4. Install acroread:

./AdbeRdr9.4.1-1_i486linux_enu.bin

I installed to /opt/acroread, so I run it like so:

/opt/acroread/Adobe/Reader9/bin/acroread

II. Get Beamer files and flash player




flash encoding using ffmpeg

ffmpeg -i movie.avi -sameq movie.flv

If you have issues with the sound and just want to remove it:

ffmpeg -i movie.avi -an -sameq movie.flv


S107G Helicopter Control via Arduino

I have worked with two types of S107G helicopters. One is a 2-channel (A and B) and the other is a 3-channel (A, B, and C) version. Their protocols differ significantly. The more common 2-channel (32-bit) version’s protocol is well documented elsewhere, so here I will only document the 3-channel (30-bit) version.

(First posted at rcgroups.)

S107G at FIAP Workshop

 

The protocol for this is 30 bits long.

  • To send a bit you flash the IR lights 16 times for a 0 and 32 times for a 1.
  • Each flash is off for 8-9 microseconds and on for 8-9 microseconds.
  • Between bits you wait for 300 microseconds.
  • Between 30-bit packets you delay an amount depending on the channel you are using:
  • Channel A: 136500 us
  • Channel B: 105200 us
  • Channel C: 168700 us

The order of the bits is as follows:

CC00 PPPP TTTT TTTT YYYY XXXX RRRR RR

C – channel
P – pitch
T – throttle
Y – yaw
X – checksum
R – trim

There are a few other things to note:

1) It has a checksum. The 21st-24th bits are a bitwise XOR of the other 4-bit words, with two zeros appended to the end of the bitstring (so the last word is the final two trim bits followed by 00). Thus you can compute the checksum for the first packet:

1000 0000 1000 1100 0000 1111 1111 11
with:
1000 ^ 0000 ^ 1000 ^ 1100 ^ 0000 ^ 1111 ^ 1100 = 1111

and for the second packet:

1000 0000 0011 1001 0000 0001 1111 11
with
1000 ^ 0000 ^ 0011 ^ 1001 ^ 0000 ^ 1111 ^ 1100 = 0001
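
You can sanity-check that arithmetic with a quick shell one-liner, XORing the nibbles as hex and printing the result in binary (note that bc does not zero-pad, so 0001 prints as 1):

echo "obase=2; $(( 0x8 ^ 0x0 ^ 0x8 ^ 0xC ^ 0x0 ^ 0xF ^ 0xC ))" | bc   # first packet: 1111
echo "obase=2; $(( 0x8 ^ 0x0 ^ 0x3 ^ 0x9 ^ 0x0 ^ 0xF ^ 0xC ))" | bc   # second packet: 1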



Run script on resume from suspend

Put your script into

/etc/pm/sleep.d

with a number at the beginning and mark it executable. Here’s an example that sets my Thinkpad mouse sensitivity and enables two-fingered scrolling on my touchpad.

$ cat /etc/pm/sleep.d/99-trackpoint-and-twofinger
#!/bin/bash
case "$1" in
    thaw|resume)
	echo -n 220 > /sys/devices/platform/i8042/serio1/serio2/sensitivity 2> /dev/null
	echo -n 95 > /sys/devices/platform/i8042/serio1/serio2/speed 2> /dev/null
	xinput set-int-prop 'SynPS/2 Synaptics TouchPad' "Synaptics Two-Finger Pressure" 32 4
	xinput set-int-prop 'SynPS/2 Synaptics TouchPad' "Synaptics Two-Finger Width" 32 7
	xinput set-int-prop 'SynPS/2 Synaptics TouchPad' "Synaptics Two-Finger Scrolling" 8 1 1
	xinput set-int-prop 'SynPS/2 Synaptics TouchPad' "Synaptics Jumpy Cursor Threshold" 32 250
        ;;
    *)
        ;;
esac
exit $?

Note that if you want to have the script run at boot as well you probably want to add your code to

/etc/rc.local


VirtualDub settings and conversion for .MTS video

Video: MTS off a Canon Vixia HG21.
Setup: Linux, working in wine.

Conversion to avi for VirtualDub/AviSynth:

avconv -i 00394.MTS -vcodec libxvid -b 100000k -deinterlace -acodec libmp3lame output.avi

Or in parallel:

find ./ -maxdepth 1 -name "*.MTS" -type f | xargs -I@@ -P 8 -n 1 bash -c "filename=@@; avconv -i \$filename -vcodec libxvid -b 100000k -deinterlace -acodec libmp3lame \${filename/.MTS/}.avi"

Jenny says, “I had to add the -ac 2 flag for audio”

Set up VirtualDub:

  • Options > Preferences > AVI > Check Directly decode uncompressed YCbCr (YUV) sources
  • Select Video > Compression…
    • Select ffdshow Video Codec
    • Select Configure and then set the bitrate to 10000
  • Select Video > Fast recompress


Speed up MATLAB figures with OpenGL

You can substantially increase your MATLAB figure performance by using OpenGL rendering. Put this in your startup.m file:

set(0, 'DefaultFigureRenderer', 'OpenGL');

You can check if MATLAB has detected your hardware by using:

>> opengl info

Other relevant figure properties are:

>> set(gcf,'Renderer','OpenGL')
>> set(gcf,'RendererMode','manual')

Warning: this breaks saving EPS files as vectorized figures.


Gumstix wifi (wlan1: link not ready)

I just spent a long time trying to diagnose an issue with a Gumstix Overo Fire and bringing up WiFi (802.11b/g) on boot. I did all the standard things (when using a desktop image, you must uninstall NetworkManager and then set up your configuration in /etc/network/interfaces). I could get a connection sometimes, but it was very unclear why it would or would not connect. It was so unclear that I couldn’t write a script that would bring the WiFi up on boot.

I kept getting this issue:

ADDRCONF(NETDEV_UP): wlan1: link is not ready

occasionally followed by the better:

ADDRCONF(NETDEV_CHANGE): wlan1: link becomes ready

Also, something odd was going on that I don’t understand: when the interface configured on boot, it would get renamed to wlan1:

[... boot messages ...]
libertas_sdio: Libertas SDIO driver
libertas_sdio: Copyright Pierre Ossman
Remounting root file system...
libertas: 00:19:88:21:59:1c, fw 9.70.7p0, cap 0x00000303
libertas_sdio mmc1:0001:1: wlan0: Features changed: 0x00004800 -> 0x00004000
libertas: wlan0: Marvell WLAN 802.11 adapter
udev: renamed network interface wlan0 to wlan1
Caching udev devnodes
[...]

Finally, I found a solution: use an image from 2010. I’m using the omap3-console-image-overo-201009091145 build found here and mirrored locally here.


Colors in `ls`

How to make a machine (say, a gumstix) show colors for ls, assuming your login shell is bash. Edit ~/.bashrc and ~/.bash_profile to match these files:

Make sure your shell is bash (if it’s not, you can change it in /etc/passwd):

root@overo:~# echo $SHELL
/bin/bash

In ~/.bashrc:

root@overo:~# cat .bashrc 
export LS_OPTIONS='--color=auto'
eval `dircolors`
alias ls='ls $LS_OPTIONS'

In ~/.bash_profile:

root@overo:~# cat .bash_profile 
source ~/.bashrc


Recursively get (or set) a property in svn

Edit: There’s an easier way to delete properties recursively:

svn propdel -R rlg:email robotlib-mydev 

Use find to traverse recursively through the directory structure:

find . -type d \! -path *.svn* -exec echo {} \; -exec svn propget rlg:email {}@ \;

To print out the current directory:

-exec echo {} \;

To find only directories:

-type d

To ignore svn folders:

\! -path *.svn*

To execute the propget command:

-exec svn propget rlg:email {}@ \;

The {} is the place find puts the path and the @ at the end causes SVN to parse correctly for paths that have “@” in them.


Firefox 5 extension compatibility check

The old preference for disabling compatibility checks (extensions.checkCompatibility) is now gone. The new one includes the version number in the preference name:

To re-enable your broken (but likely still working) add-ons in Firefox 5.0, add the extensions.checkCompatibility.5.0 preference in about:config:

1. Type about:config into the Firefox URL bar.



Enable editable shortcut keys in Gnome

There used to be a checkbox for it, but it’s gone now. You can still enable the option by opening

gconf-editor

and checking the box under

/desktop/gnome/interface/can_change_accels
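
Or from the command line (assuming gconftool-2 is available, which it normally is wherever gconf-editor is):

gconftool-2 --set /desktop/gnome/interface/can_change_accels --type bool true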


Reload udev rules without restarting udev

sudo udevadm control --reload-rules


Email CPU usage

Say you’re going to be out of solid contact for a while, but you’d like to monitor your CPU usage (so you can make sure to SSH in and fix something if the CPUs go idle, indicating a simulation crash).

Solution: a quick script that emails me my CPU load every hour. All my sessions are in screen, so I can SSH in with my phone and edit/restart what I need over 3G. If it looks bad, I’ll know to get to a real net connection ASAP.

Here’s the script (requires sudo apt-get install sendemail):

#!/bin/bash

while true; do
    echo `uptime` | sendemail -f abarry@abarry.org -t abarry@abarry.org -u "Uptime report (auto)"

    sleep 3600
done


video conversion with mencoder

With sound:

mencoder VIDEO0046.3gp -ovc x264 -oac mp3lame -o VIDEO0046.ogg

No sound:

mencoder VIDEO0046.3gp -ovc x264 -nosound -o VIDEO0046.ogg


Reverting to old versions in SVN

If you want to revert everything to a previous version (ie revision 72):

svn merge -rHEAD:72 .

Note the “.” at the end!

If you want to revert all local changes (ie say go back to the head revision):

svn revert -R .

Again, note the “.” at the end.

To view the diff between two versions (ie what changed between revisions 72 and 73), use:

svn diff -r 72:73 file.java


Resize with imagemagick

convert pic.jpg -resize 50% pic.png

or

for file in *.png; do convert $file -resize 15% thumbs/$file; done

or

convert ../task1.png -resize 300x100 task1.png

File conversion is also nice:

for foo in *.jpg ; do convert ${foo} ${foo%.jpg}.png ; done


Browse old revision in SVN using the web interface

Append

!svn/bc/[revision number]/

to the URL of the repository (not just any folder in the repository).

So in other words:
https://svn.csail.mit.edu/locomotion/!svn/bc/681/


Export Cinelerra Video to YouTube

It took me a while to figure out the best Cinelerra export settings for YouTube. I ended up going with MPEG4 and an AVI container.


Use mencoder to convert from recordmydesktop to dv format that cinelerra can open

mencoder -vf crop=720:576:0:0 -ovc lavc -lavcopts vcodec=dvvideo seeded-1.ogv -o scaled2/colorized.avi

crop options: width:height:x:y

where x and y are the top left coordinates of the cropping box.


Antialiased rings / filled circles in pygame

This shouldn’t be hard. But it is.

Pygame does not implement an antialiased filled circle, so if you want a filled circle or ring you have to draw the antialiased part yourself, which is a terribly complicated way to get antialiased circles. Also, the antialiased circles that pygame does implement seem to only antialias to white, so they cause further problems by encroaching on the colored parts of the image.

# requires: import pygame, pygame.gfxdraw
# and: from pygame.locals import SRCALPHA, BLEND_ADD
# TARGET_SIZE, BG_ALPHA_COLOR, self.image, self.rect, etc. are defined elsewhere in the class
def DrawTarget(self):
    
    # outside antialiased circle
    pygame.gfxdraw.aacircle(self.image, self.rect.width/2, self.rect.height/2, self.rect.width/2 - 1, self.color)

    # outside filled circle
    pygame.gfxdraw.filled_ellipse(self.image, self.rect.width/2, self.rect.height/2, self.rect.width/2 - 1, self.rect.width/2 - 1, self.color)
    
    
    temp = pygame.Surface((TARGET_SIZE,TARGET_SIZE), SRCALPHA) # the SRCALPHA flag denotes pixel-level alpha
    
    if (self.filled == False):
        # inside background color circle
    
        pygame.gfxdraw.filled_ellipse(temp, self.rect.width/2, self.rect.height/2, self.rect.width/2 - self.width, self.rect.width/2 - self.width, BG_ALPHA_COLOR)
    
        # inside antialiased circle
    
        pygame.gfxdraw.aacircle(temp, self.rect.width/2, self.rect.height/2, self.rect.width/2 - self.width, BG_ALPHA_COLOR)
    
    
    self.image.blit(temp, (0,0), None, BLEND_ADD)


terminal redirection

wxProcess *process = new wxProcess(this); // by giving the process this frame, we are notified when it dies

process->Redirect();

This is a bit tricky (and took 5 hours to figure out). It turns out that many systems act differently when output is redirected to a file than when it goes to a terminal. For example, “ls” gives different results than “ls > t” followed by looking at “t”.

This is an issue for us, because the way the buffering works makes many printf statements get buffered so nothing shows up. The workaround is the program called “script”, which is normally used to record a transcript of a terminal session to a file. We can invoke it with the -c option to run a command, and it acts like a terminal. We then tell it to send its file to /dev/null and capture its output through standard output redirection.

Finally, we must invoke the process with wxEXEC_MAKE_GROUP_LEADER so we can later kill it with wxKILL_CHILDREN, which causes everything to close correctly.

wxString processStr = wxT("script -c ") + m_current_dir + m_text_ctrls->Item(index)->GetValue() + (wxT(" /dev/null"));
int pid = int(wxExecute(processStr, wxEXEC_ASYNC | wxEXEC_MAKE_GROUP_LEADER, process));