Soo… welcome to the Announcements blog! I’m excited to introduce this blog as a venue that will provide project updates, context on OMwiki’s internal structure, and a discussion forum for other issues surrounding OpenMeetings.org.
To start, the following email summarizes many of the known issues with scalability. If these issues can be properly addressed, I’m hoping that the amount of video indexed and delivered by OpenMeetings.org can be increased by a few orders of magnitude.
If you can help tackle these issues, or know somebody who can, please leave a comment either here or on my talk page.
From: “George Chriss” <GChriss -at- openmeetings.org>
Date: Wed, July 21, 2010 6:12 pm
To: “SFC Board” <board -at- freeculture.org>
Cc: metavid-l -at- lists.wikimedia.org
[…]
Video publication is a very manual process, and the production workflow definitely needs to be hacked. The following is a list of things that would help, and that I need help with:
A) Recording
I’ve taken prosumer-grade cameras about as far as possible: currently, I’m using a Canon FS22 with 2×32GB flash. More expensive cameras don’t really help: they become cost-prohibitive in terms of scale, are less discreet, are a pain to travel with, and don’t offer much advantage in visual quality at web resolutions. The largest shortcomings of the FS22 are that it requires ‘modcopy’ to fix 16:9 aspect ratios during file import, long recordings are split across multiple files, there is a cumulative drift between as-recorded time and real-world time, and the mic-in preamp often picks up line noise with XLR sources (impedance issues?). Other than that, it’s a pretty good camera.
I submitted a CC Catalyst application to fund hacking of Elphel open-source, open-hardware cameras (bit.ly/bTLmQx). This will be a really fun project if funded!
The Elphel cameras could be set to prompt, at recording time, for the event title, speaker names/affiliations, CC license, “who’s speaking right now?”, etc., as this information takes time to dig up after-the-fact.
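In the meantime, even a dirt-simple prompt run at the start of each recording would capture most of this. A rough sketch of the idea (the field names and sidecar-file layout here are placeholders, not an existing Elphel interface):

    # capture_metadata.py -- rough sketch; field names and the sidecar-file layout
    # are placeholders, not an existing Elphel interface.
    import json
    import time

    fields = {}
    for key in ("event_title", "speakers", "affiliations", "license", "current_speaker"):
        fields[key] = input(key.replace("_", " ") + ": ")
    fields["captured_at"] = int(time.time())  # when the prompt was answered

    # Write a sidecar file next to the recording so nothing has to be dug up later.
    with open("recording-metadata.json", "w") as sidecar:
        json.dump(fields, sidecar, indent=2)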
There’s also a lot of work to be done developing reference documentation for properly-equipped meeting spaces. I’ve started sketching out arrangements of in-room equipment (OMwiki:Gear), and more equipment documentation is on its way.
B) Editing
Cinelerra is a mess, but it’s the only viable way to edit video professionally using all-FLOSS software. The majority of editing time on non-XLR recordings is spent on sound cleanup, as was the case with FCX. The remainder of the time is spent drafting graphic title slides (GIMP), scanning for sections that should be removed, and, if necessary, manually re-syncing audio from a secondary audio source (can Audacity do this?).
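I don’t know whether Audacity can find the offset automatically, but cross-correlating the two tracks is scriptable. A rough sketch, assuming numpy/scipy are available and both tracks are exported as same-rate WAV files (the filenames are placeholders, and the sign convention is worth spot-checking by ear):

    # find_offset.py -- estimate the offset between the camera audio and a secondary
    # recorder via cross-correlation; filenames are placeholders.
    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import fftconvolve

    rate_a, a = wavfile.read("camera_audio.wav")
    rate_b, b = wavfile.read("secondary_audio.wav")
    assert rate_a == rate_b, "resample one track first"

    # Mix down to mono floats.
    a = a.astype(float).mean(axis=1) if a.ndim > 1 else a.astype(float)
    b = b.astype(float).mean(axis=1) if b.ndim > 1 else b.astype(float)

    # Cross-correlation via FFT convolution with one signal reversed.
    corr = fftconvolve(a, b[::-1], mode="full")
    offset = (corr.argmax() - (len(b) - 1)) / float(rate_a)

    # Positive offset: the secondary recording starts that many seconds after the camera.
    print("estimated offset: %.3f s" % offset)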
I’m looking forward to trying out VideoLAN Movie Creator or, eventually, Lumiera, but I haven’t attempted either yet. Blender might be an option if it supported piped YUV output, as is the case with Cinelerra; I don’t trust built-in encoders.
After a Theora video is rendered via the YUV4MPEG pipe, I merge in audio (oggz-merge), create a Skeleton (oggz-chop), validate the file (oggz-validate), create a .torrent file (BT Mainline + WINE; could valid files be produced from the command line?), create an animated GIF (see below), then upload to the Internet Archive. A script to automate this process shouldn’t be tooo hard to draft…
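For what it’s worth, a first draft of that script could be as simple as the following. This is only a sketch: the oggz-tools flags are from memory and should be checked against the man pages, and the .torrent and Internet Archive steps are left as TODOs.

    # publish.py -- rough automation sketch for the post-render steps above.
    # Usage: python publish.py video.ogv audio.oga output.ogv
    import subprocess
    import sys

    def run(*cmd):
        print(" ".join(cmd))
        subprocess.check_call(cmd)

    video, audio, output = sys.argv[1], sys.argv[2], sys.argv[3]

    run("oggz-merge", "-o", "merged.ogv", video, audio)  # merge in the audio track
    run("oggz-chop", "-o", output, "merged.ogv")         # rewrite with a Skeleton track
    run("oggz-validate", output)                         # sanity-check the result
    # TODO: .torrent creation, animated GIF, and the Internet Archive upload.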
C) Internet Archive (IA)
In-page playback is busted, as is automated animated + static thumbnail creation for files that are submitted as Ogg Theora. Both issues will need to be resolved by IA staff, but recommendations on the following items might be helpful:
- Edits of the .js file responsible for rendering the <video> element, especially in the absence of an H.264-derived file.
- ffmpeg recently changed the ‘-padtop’-style syntax for thumbnail generation; I haven’t figured out how to create thumbnails with the most-recent versions.
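For what it’s worth, my understanding is that the old ‘-padtop N’ option was folded into the ‘pad’ video filter in newer ffmpeg releases, so something along these lines may be the modern equivalent for grabbing a padded thumbnail (the seek time and padding amount are placeholders, and it’s worth testing against a current build):

    # thumbnail.py -- sketch of the newer ffmpeg syntax: the old "-padtop 60"
    # becomes the filter "pad=iw:ih+60:0:60" (seek time and padding are placeholders).
    import subprocess

    subprocess.check_call([
        "ffmpeg", "-i", "meeting.ogv",
        "-ss", "10",                 # seek 10 seconds in
        "-vframes", "1",             # grab a single frame
        "-vf", "pad=iw:ih+60:0:60",  # pad 60px of black on top, as -padtop 60 used to
        "thumbnail.png",
    ])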
Additionally, there are a number of IA metadata fields that are manually calculated and entered, such as a Unix timestamp for the date of the event, wgs84 geo-coordinates, a user-generated md5sum hash (to check against incomplete uploads, file corruption, and tampering), and a few other fields that could be integrated with the upload script and/or Elphel metadata.
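At least the timestamp and the md5sum are trivially scriptable. A minimal sketch (the event date, filename, and coordinates below are placeholders, and timezone handling would need care):

    # ia_metadata.py -- sketch of computing the hand-entered IA fields;
    # the event date, filename, and coordinates are placeholders.
    import calendar
    import hashlib
    import time

    # Unix timestamp for the date of the event (interpreted as UTC here).
    event = time.strptime("2010-07-21 18:00", "%Y-%m-%d %H:%M")
    print("unixtime:", calendar.timegm(event))

    # md5sum of the file, to check against incomplete uploads, corruption, or tampering.
    md5 = hashlib.md5()
    with open("meeting.ogv", "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            md5.update(chunk)
    print("md5sum:", md5.hexdigest())

    # wgs84 coordinates would come from the venue address or, eventually, Elphel metadata.
    print("coordinates: 40.4443, -79.9436")  # placeholder values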
D) OpenMeetings.org
After publication is complete, I download a copy of the meeting from the IA to OpenMeetings.org, create a new page in the ‘Stream:’ namespace, then supply the URL of the just-downloaded video. Then I upload the animated thumbnail (MediaWiki), overwrite the MediaWiki-generated static thumbnail with the original animated thumbnail (SSH), rotate the meeting in as a Featured Meeting, and add it to the Visual Finding Aid. I then enter the Unix timestamp into the appropriate MySQL field (phpMyAdmin) so that videos are searchable by date, a feature that is busted at the moment (see below).
I am embarrassed to say that I generate fully-specified Media RSS feeds by hand, and that I accidentally deleted some many-item feeds.
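For reference, each hand-written entry boils down to a small template, so even a throwaway script would beat manual editing. A minimal sketch with placeholder titles and URLs (not the exact fields of the live feeds):

    # mrss_item.py -- sketch of templating a single Media RSS <item>;
    # every value below is a placeholder.
    # (The feed's <rss> element needs xmlns:media="http://search.yahoo.com/mrss/".)
    ITEM = """\
      <item>
        <title>{title}</title>
        <link>{page_url}</link>
        <media:content url="{video_url}" type="video/ogg" duration="{seconds}" />
        <media:thumbnail url="{thumb_url}" />
      </item>"""

    print(ITEM.format(
        title="Example meeting",
        page_url="http://example.org/wiki/Stream:Example_meeting",
        video_url="http://example.org/download/example/example.ogv",
        seconds=3600,
        thumb_url="http://example.org/thumbs/example.gif",
    ))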
The MetaVidWiki extension needs to be rewritten for ‘Stream:’ asset management, compatibility with other platforms (e.g., Universal Subtitles), and integration with the latest Kaltura embedded player. In the meantime, some of the MetaVidWiki controls are busted, such as advanced search, automatic caption scrolling, and “jump to” hyperlinks. On the plus side, in-browser video remixing is starting to come online.