Build ffplay and ffmpeg 2.6.2 on Mac OS X 10.10.2

Every few years we need ffmpeg and ffplay for some little job. The ffmpeg suite is my go-to swiss-army knife for whipping video into shape. Unfortunately, the compilation process is challenging. Here is a summary of the recipe I used to build these tools on Mac OS X 10.10.2. It was not exactly straightforward, which is why I wrote it down.

To jump to the end: the most difficult part was getting SDL-1 (https://www.libsdl.org) to build. I tried using SDL-2 with ffplay, but that combination did not compile correctly. ffplay requires SDL-1, and SDL-1 needed some manual edits before it would install.


Video Summarization is the Biggest Problem with Internet Cameras

What happens when you record every frame emitted by an IP camera? You end up with too much data to make sense of. I now have nearly unlimited data storage, but little interest in reviewing everything stored. Watching recorded footage in real time is too time-consuming to be enjoyable, or even reasonable. Playing it back at 2x or 4x speed might sound like a good idea, but that's still a lot of video to look at.

Please Summarize!

Most video editing software offers a “scrubbing” operation for rapidly finding a point in time. Scrubbing is the act of manually moving the transport control backwards and forwards through the images. If you’ve ever scrubbed looking for a single frame you remember seeing, you’ll have noticed that it’s sometimes hard to find the frame. Your monitor is displaying no more than 60 or 75 frames per second: if you scrub over a time period with a resulting rate faster than this, you are not seeing everything.
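
To put rough numbers on that (a back-of-the-envelope sketch; the rates are assumptions, not measurements from any particular camera):

```ruby
# Effective review rate when scrubbing recorded footage.
# If you drag through one hour of 15 fps camera footage in 30 seconds,
# how many recorded frames map onto each frame the monitor can show?

recorded_fps = 15.0      # assumed IP-camera capture rate
footage_secs = 60 * 60   # one hour of recording
scrub_secs   = 30.0      # time spent dragging the transport control
monitor_fps  = 60.0      # display refresh rate

recorded_frames  = recorded_fps * footage_secs   # frames on disk
displayed_frames = monitor_fps * scrub_secs      # frames the monitor can show
frames_per_shown = recorded_frames / displayed_frames

puts frames_per_shown   # => 30.0 -- 29 of every 30 frames never appear
```

Even at these modest rates, most of the footage simply never reaches your eyes while scrubbing.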

“Video summarization” is a field of study aimed at developing algorithms and methods that abstract and identify the interesting features in a segment of video, helping to direct the viewer’s attention. Here is a great quote describing video summarization [1].

Video summarization methods attempt to abstract the main occurrences, scenes, or objects in a clip in order to provide an easily interpreted synopsis.

At Sensr.net, we consider video summarization to be an important part of our technology, recognizing that keeping a collection of all the frames your camera emits is just too much data to use. Our summarization techniques are straightforward: we use motion-detection algorithms and save only the frames that show motion. We also offer a simple form of “hierarchical video summarization.” When you look at the hours-of-the-day view, you are presented with the most important 24 frames of that day. Similarly, the days-of-the-month view shows the most important frame of each day.
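
Our actual pipeline isn't shown here, but the hour-level step can be sketched in a few lines, assuming each stored frame carries a motion score (the `Frame` struct and `summarize_day` name are mine, for illustration):

```ruby
# Hypothetical sketch: keep the "most important" frame per hour-of-day,
# where importance is the motion-detection score saved with the frame.
Frame = Struct.new(:hour, :motion_score, :path)

def summarize_day(frames)
  frames.group_by(&:hour)                                  # one bucket per hour
        .map { |_hour, bucket| bucket.max_by(&:motion_score) }
end

frames = [
  Frame.new(9,  0.2, "frame-0900.jpg"),
  Frame.new(9,  0.8, "frame-0915.jpg"),
  Frame.new(10, 0.5, "frame-1000.jpg"),
]
summary = summarize_day(frames)
# summary holds at most 24 frames -- one per hour that saw motion
```

Applying the same reduction again (best frame per day) yields the month-level view.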

Sensr.net has been hard at work laying the “pipes” for moving the frames emitted by internet cameras through our processors and into the cloud. You can expect to see more from us in the video summarization arena. Until then, take a look at this excellent slide presentation and think about what sorts of summarization you would like to see for your internet camera application.

[1] http://www.cs.utexas.edu/~grauman/courses/spring2008/slides/Video_Summarization.pdf

Embed your DCS-920 in your Web-page or Blog

To the right of this article, you can see a streaming view of the DCS-920 from my house. This capability is enabled by a new service from Sensr.net. The viewer is called a “widget” and can be plugged into any website or blog.

Viewer widgets can be created for any cameras that you own.  First, add your DCS-920 to Sensr.net (shown here http://www.tsheffler.com/blog/?p=187).  Then, select the “My Widgets” item from the pull-down menu.  This will show you the HTML embed codes for the widget.

Sensr Widgets

The rest is easy.  Copy the HTML and paste it into your blog like I did here.

Cross-Origin Resource Sharing for JSON and RAILS

CORS (Cross-Origin Resource Sharing) is a protocol built on top of HTTP that allows Javascript on a page originating from one site to access methods on another site. It is the preferred way for Javascript code to escape its default Same-Origin Policy. While the protocol has been around for a few years and is built into all of the major browsers, it does not seem to be widely documented. Here are some experiences I’ve had while enabling cross-origin access for JSON from a Rails server.

Background

The Same-Origin policy restricts Javascript code to making Ajax calls only to the site its containing page came from. For example, Javascript on a page from http://mysite.com is not permitted to make Ajax calls to a web-service at http://othersite.com/method.json. This policy is a security measure that prevents unwitting visitors to a website from executing malicious code in their browsers.

For web-service implementors, this is an annoying restriction. Allowing Javascript to access data from multiple sites would let programmers create browser-based mash-ups.

Server-side proxying is a traditional method for getting around the Same-Origin restriction for Ajax requests.  With proxying, the owner of mysite.com implements a copy of the remote method that repeats the request from the page-owner’s own site.  The server on “mysite.com” must process each remote method call by calling the method on “othersite.com” so that it can return the results to the browser.

http://mysite.com/method.json --> http://othersite.com/method.json

The great advantage of this method is that it works with any browser. The drawback is that it imposes redundant work on web-service subscribers.
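
The proxying arrangement above can be sketched in plain Ruby. This is illustrative, not from any real app: the host name and method names are mine, and a production proxy would also relay status codes and headers.

```ruby
require 'net/http'
require 'uri'

REMOTE_HOST = 'othersite.com'  # hypothetical remote web-service

# Map a local request path onto the equivalent URI at the remote site.
def remote_uri(local_path)
  URI::HTTP.build(:host => REMOTE_HOST, :path => local_path)
end

# A proxy action repeats the browser's request server-to-server and
# relays the body back, so the browser only ever talks to mysite.com.
def proxy_json(local_path)
  Net::HTTP.get(remote_uri(local_path))   # performs the remote GET
end
```

Every cross-site call now costs two HTTP round trips, which is exactly the redundancy CORS was designed to remove.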

Cross-Origin Resource Sharing

CORS is a protocol negotiated between a browser and a web-service that tells the browser that it is “OK” to execute Javascript code from a cross-domain call.  The specification covers “Simple” transactions and complex transactions that use a “Preflight” request.  Cross-origin JSON requests with non-standard headers are not “Simple” and require the “Preflight” request.

A great introduction to the CORS protocol appears here:
http://www.nczonline.net/blog/2010/05/25/cross-domain-ajax-with-cross-origin-resource-sharing/

An example of the CORS transactions appears here:

http://arunranga.com/examples/access-control/preflightXSInvocation.txt

The “Preflight” request uses the HTTP verb OPTIONS. In a browser implementing CORS, each cross-origin GET or POST request is preceded by an OPTIONS request that checks whether the GET or POST is OK. If it is, the server must return headers that allow the subsequent GET or POST. This is actually a wonderful capability: the server can allow or disallow remote access on a per-method basis, with access determined by HTTP referrer, IP address, or any other criteria.

The OPTIONS request contains Access-Control headers that are part of the CORS specification. The server’s response must answer these headers for the subsequent GET or POST to proceed.

For example, an access to

GET http://othersite.com/method.json

would be preceded by an OPTIONS request that looks like this.

OPTIONS http://othersite.com/method.json
Origin: http://mysite.com
Access-Control-Request-Method: GET

The server would respond with an empty “text/plain” body, along with headers allowing the request.

Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: GET, POST, OPTIONS
Access-Control-Max-Age: 1728000
Content-Length: 0
Content-Type: text/plain
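
The exchange is mechanical enough to express as a plain function. This framework-free sketch (the function name is mine) builds the allowing response for a preflight, echoing back any extra headers the browser asked permission for:

```ruby
# Given the preflight request's headers as a Hash, build the response
# headers that permit the subsequent GET or POST. A real server would
# consult an allow-list rather than echoing everything back.
def preflight_response(request_headers)
  response = {
    'Access-Control-Allow-Origin'  => '*',
    'Access-Control-Allow-Methods' => 'GET, POST, OPTIONS',
    'Access-Control-Max-Age'       => '1728000',
    'Content-Length'               => '0',
    'Content-Type'                 => 'text/plain',
  }
  # Echo back any non-standard headers the browser wants to send.
  if (wanted = request_headers['Access-Control-Request-Headers'])
    response['Access-Control-Allow-Headers'] = wanted
  end
  response
end

resp = preflight_response(
  'Origin'                        => 'http://mysite.com',
  'Access-Control-Request-Method' => 'GET'
)
```
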

Custom Headers

If your application uses non-standard headers, you must take special steps to permit them or the browser will flag a CORS violation.   I ran into this restriction in the application I was writing.  Unfortunately, the security violation messages from the browser are obscure, and it took me a while to figure this out.

In our application, our Javascript client uses prototype.js to make Ajax calls.  Prototype adds the following headers to the request.

X-Requested-With: XMLHttpRequest
X-Prototype-Version: N.N.N.N

Our server must explicitly allow these headers in the CORS exchange or the browser will disallow the cross-origin request. The OPTIONS request will specify the headers it wants to add. Our OPTIONS/Response exchange looks like this.

OPTIONS http://othersite.com/method.json
Origin: http://mysite.com
Access-Control-Request-Method: GET
Access-Control-Request-Headers: X-Requested-With, X-Prototype-Version

Response:

Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: GET, POST, OPTIONS
Access-Control-Allow-Headers: X-Requested-With, X-Prototype-Version
Access-Control-Max-Age: 1728000
Content-Length: 0
Content-Type: text/plain

CORS in Rails

I implemented the CORS protocol in a Rails application with just a couple of filter methods added to my controller. Here they are. (If you want to follow this technique, you’ll need to make sure your routes allow access to HTTP “:options” methods.)

before_filter :cors_preflight_check
after_filter :cors_set_access_control_headers

# For all responses in this controller, return the CORS access control headers.

def cors_set_access_control_headers
  headers['Access-Control-Allow-Origin'] = '*'
  headers['Access-Control-Allow-Methods'] = 'POST, GET, OPTIONS'
  headers['Access-Control-Max-Age'] = "1728000"
end

# If this is a preflight OPTIONS request, then short-circuit the
# request, return only the necessary headers and return an empty
# text/plain.

def cors_preflight_check
  # Note: in Rails 2, request.method returns a lowercase symbol
  # (:options); in Rails 3 and later it returns the string "OPTIONS".
  if request.method == :options
    headers['Access-Control-Allow-Origin'] = '*'
    headers['Access-Control-Allow-Methods'] = 'POST, GET, OPTIONS'
    headers['Access-Control-Allow-Headers'] = 'X-Requested-With, X-Prototype-Version'
    headers['Access-Control-Max-Age'] = '1728000'
    render :text => '', :content_type => 'text/plain'
  end
end

The before_filter, cors_preflight_check, is the last in my filter chain: earlier filters check for allowed access. If the request uses the OPTIONS method, the filter short-circuits the request, includes the necessary headers, and returns a blank text body.

The after_filter, cors_set_access_control_headers, runs for every response returned by this controller, adding the CORS headers to all the other requests.
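
As for the parenthetical note about routes: without a route that matches OPTIONS, the preflight 404s before the filters ever run. A sketch for a Rails 2-era routes file (the catch-all pattern is illustrative, not copied from my actual app):

```ruby
# config/routes.rb -- Rails 2-era syntax.
# The :conditions clause lets the browser's preflight OPTIONS request
# reach the controller instead of failing to route.
ActionController::Routing::Routes.draw do |map|
  map.connect ':controller/:action.:format',
              :conditions => { :method => :options }
end
```
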

Summary

CORS is implemented in all of the popular browsers, but client-side behavior seems to vary between IE and Safari/Chrome/Firefox. For me, it was interesting to see how this server-side access control is not too different from what Adobe does with its server-side policy files. It would be nice if these access mechanisms could be unified, but I’m just happy to have them.