Abhishek Bhatnagar

My views and experiences in the dual-natured worlds of open source and nonprofit


Is DOCX really an open standard?

It is hard to believe that even in 2012 we struggle with standards as common as those for documents, presentations and spreadsheets. The de facto formats, of course, are the ones used by Microsoft Office (docx, pptx and xlsx, collectively called OpenXML or OOXML), which causes a growing number of Libre and Open Office users such as myself much chagrin.

Like everyone else, I find that the majority of office files in my inbox belong to the OOXML category, and invariably, when I edit and return a document to its owner, they complain that my choice of software has in some way corrupted or changed elements within it, which is usually true. Then they berate me for using “crappy” open source software and, in one case, for being an “anti-Microsoft hippy”.

Let’s be clear: I am not an anti-Microsoft hippy. Like many of you, I run Linux and do not normally have access to Windows, so running MS Office is really not an option. Even if it were, I would detest having to pay for it. So for the simple sake of including myself and the millions of others who use the various open office suites out there, I request that you stop using OOXML formats, at least until Microsoft truly supports them in MS Office.

I’ve been angrily told before that OOXML, or OpenXML, is indeed an open format, which is technically correct. But there’s more to the story than that. If there weren’t, Libre and Open Office would have built perfect support for it a long time ago. Their developers realize that incomplete support for Microsoft formats is one of the main things keeping new users away, so they would not skip implementing OOXML by choice.

The real reason these suites do not fully support OOXML is that there is a difference between the OOXML specification and its implementation in MS Office. To understand why, you have to familiarize yourself with three standards:

  • ECMA 376
  • ISO/IEC 29500 Transitional
  • ISO/IEC 29500 Strict

ECMA is a private international standards organization much like the better-known ISO. The difference between the two is that ECMA’s members are companies, while ISO’s members are countries. There is certainly a need for both of them in the technology market.

ISO, along with another consortium called OASIS, adopted ODF (the Open Document Format) back in 2006 to solve the document standardization crisis. This is the format used by Libre and Open Office, along with most other open office suites. Such a format becoming successful would of course threaten Microsoft’s already established monopoly in the document market, which at the time ran on closed formats such as doc, ppt, and xls. So in 2007, they decided to create their own open standard with ECMA called OpenXML or OOXML, otherwise known as ECMA-376. This was the new “XML-based” replacement for ODF, which of course seemed unnecessary to ISO and was initially rejected. But with the use of some muscle, Microsoft got the proposal fast-tracked in ISO even though reportedly 20 out of the 30 countries involved were not interested in passing it. This however didn’t stop the ISO secretariat Lisa Rachjel from pushing it through anyway after deciding “to move Open XML forward after consulting with staff at the International Technology Task Force”.

So ISO had a new incoming standard, but specific clauses of it still met resistance. To solve this problem, it was proposed that OOXML be split into two sub-standards, namely ISO 29500 Transitional and ISO 29500 Strict. The Strict version was the one ISO accepted, and the Transitional version was granted to Microsoft to let them slowly phase out older features from the closed-format days. Nothing wrong with this; it’s only fair to their users.

However, the problem arose when Microsoft decided not to fully implement the Strict version of the standard in Office 2010. As published by Microsoft here and stated by Wikipedia here:

Microsoft Office 2010 provides read support for ECMA-376, read/write support for ISO/IEC 29500 Transitional, and read support for ISO/IEC 29500 Strict.

What this means is that when you save a document in MS Office 2010 or earlier in any of the ‘X’ formats, you are not saving it in the advertised OpenXML (Strict) format. Such a document will hence NOT be properly readable by other software such as Libre and Open Office, and those programs will make changes to it when it is opened and saved within them. The problem hence lies with MS Office, not with the open suites.

But, to be fair, we should note that we have been promised full ODF support in the upcoming Office 15. Alex Brown has an excellent post on this subject with more details about the gap between the promises Microsoft made in 2008 and what they actually delivered in 2010. Hopefully they will break that pattern and actually keep their promises this time. I am genuinely excited to find out.

Lately there has been a shift towards the use of PDF, especially for documents that do not need to be edited, such as resumes, essays, and reports. The reason for the change, of course, is an organic realization that PDF is a no-bullshit format that works consistently and predictably across all platforms. While PDF is not exactly an open format, Adobe does provide free and consistent specifications for anyone to implement as they please. If you are an MS Office user and have also been part of the great PDF shift, you too have something to gain from a truly open implementation of OOXML.

I would still prefer to see ODF win the battle, but if this happens, then at least there will be far fewer reasons to complain. Plus, Libre Office developers won’t be jerked around as much trying to play catch-up with an always moving target.

Anyway, in the meantime, please save your documents in ODF when you use Microsoft Office.

Raspberry Pi as a mobile computer

Since acquiring a Raspberry Pi a few weeks ago, I’ve been spreading the good word about this awesome device in my social circle, but it seems lots of people are unsure of its importance. At a recent Sentinel Project meeting, we had a bit of discussion on this, and I thought it would be a good idea to record some of what transpired here.

Data Crunchers vs Social Connectors
Today, people use their computers primarily for email, Facebook, Reddit and Twitter. So much so that our operating systems have been entirely redesigned to put these features at the forefront. However, this diverges from the original connotation of the word ‘computing’, which applies to machines that are good at crunching data rather than at solving problems of connectivity. With smartphones, we have brought the latter kind of computer into the era of mobile computing, but not the former. The Raspberry Pi helps us do just that.

iPhones, Androids and Blackberries
The three best-known smartphone operating systems today all do essentially the same thing. They give us access to email, text messages, BBMs, and various other social media outlets. But they still follow a very simplistic “one app at a time” model, and hardly allow for much complexity in their usage. To be sure, the three do it differently from one another, and I like to put the differences this way: iPhones are to Android what Blackberries are to iPhones. Meaning that in the smartphone-ness hierarchy, the order starts at Android and ends at Blackberry. I have a feeling several of you will agree with me about Blackberries, but let’s consider why I am giving the iPhone the shaft here.

iPhones are notoriously closed devices that, without jailbreaking, are owned as much by Apple as by the person who paid for one. A key example of how they limit their users can be seen in the challenges Mozilla faces in getting Firefox to run on iOS. Right now, if you go to the AppStore and search for Firefox, you’ll have to download “Firefox Home”, which is nothing more than a front-end that gives you access to your Firefox bookmarks and browsing history. Firefox normally runs on a layout engine called Gecko, but since iOS devices do not allow execution of any third-party interpreters, Gecko is not allowed to run on them. This restricts iOS to running only the apps that Apple approves, and hardly makes such devices worthy of the title “mobile computers”.

Android devices tend to do better in this department, but are still forced to follow a standard, OSGi-style activity life cycle.

Raspberry Pi
So what are those amongst us who wish to use mobile computers for untethered data crunching to do? Well, Canonical’s Ubuntu for Android program, which will allow high-end Android devices to use one core to run the Ubuntu Desktop edition, is likely to help. But this project is far from being released, and will ultimately apply only to expensive devices.

And this is where devices like the Raspberry Pi come in. By giving us free and open hardware that costs a pittance compared to a smartphone and can run off 4 to 6 AA batteries, it lets users and developers write their applications in an unrestricted environment. The founders of this device created it primarily for use in education, but what they’ve given the world is a true mobile computer that can do more than just run apps.

This phase of mobile computing is still in its infancy, but there’s nothing quite like an open environment to foster growth. I am sure that soon climate scientists, construction workers, zoologists, educators, rock climbers, geologists and others will find applications for devices like these that smartphones cannot handle. The world is already looking forward to an ARM processor revolution, but devices like the Raspberry Pi will ensure that there is also an application revolution for such devices.

For organizations like the Sentinel Project, which seek to use technology to solve humanitarian problems, the Raspberry Pi is a boon.

Getting started with Raspberry Pi

Unless you live on Mars, you’ve probably heard of the Raspberry Pi by now. I was one of the lucky 1000 who got dibs on the first batch of these, and mine arrived four days ago. On the off chance that you are indeed a Martian, I’ll give a brief introduction to the project here.

If you already have yours and just want to get it running, skip to the Making It Work section.

Introduction

The Raspberry Pi Foundation is a non-profit based in Cambridge, UK, with a stated goal to “promote the study of computer science and related topics, especially at school level, and to put the fun back into learning computing”. They used Broadcom’s BCM2835 system on a chip, which contains:

  • the ARM1176JZF-S 700 MHz processor
  • a GPU
  • 256MB RAM

In case you’re wondering, that’s exactly what a ‘system on a chip’ is – a chip that contains the basic components that are usually part of computer systems, such as a processor, RAM, ROM, etc. Using this design along with an SD card as the hard drive, they produced a Model A and a Model B.

The differences between the two are the following:

               Model A    Model B
Cost           $25        $35
USB Ports      1          2
Network        None       Ethernet
Power Rating   500mA      700mA

They started selling the Model B first, and here is what mine looks like. I have placed it next to an Arduino Duemilanove for size comparison.

Raspberry Pi (top) with Arduino Duemilanove (bottom)

Making it Work
All that comes in the box is a Raspi and two pages to help you get started. You need to supplement this with:

  • a power source
  • an SD Card (or SDHC)

The power source needs to supply 5V at a minimum of 700mA for the Model B (500mA for the Model A), with a micro-USB connector. If you have an Android, Blackberry or Nokia phone, you’re probably in luck, because your phone charger is likely to work. In 2010, the GSMA got industry leaders to agree on standardizing cell phone chargers. Everyone agreed, but some companies have yet to implement this. Anyway, in general, be sure to pick a 5V adapter with at least a 700mA rating. Current, as opposed to voltage, is drawn as needed, so your power source could even be rated for 1A or 20A and work properly, as long as the voltage is correct.

The SD card needs at least 4GB of space, but 8GB or more is preferable. It must have the operating system pre-installed on it, so you need to prep it yourself. I followed the excellent guide here to get mine running. You should download the actual OS from Raspberry Pi’s download page. Your options are:

  • Debian Squeeze – recommended for now
  • Arch Linux ARM
  • QtonPi
  • Fedora 14 Remix – soon to be the recommended OS, but still buggy

Once you have the SD card done, you’re good to go. Hook up your power, HDMI or composite video, network (optional), and USB keyboard/mouse (I use a wireless set, so just one USB port), and flip the switch on the power socket. After about 3 seconds, you should see a Linux boot screen. Once you log in, type ‘startx’ to start X and get a GUI.

And there you have it: a computer with a powerful GPU, the size of a credit card, all for $35. The Raspberry Pi Foundation sees education, especially in the third world, as the main application of their product. While that is absolutely likely, it has several other applications as well.

I personally intend to use it as the brain of the quadcopter I’m working on with Chris Tuckwood.

A Year of Open Source: My Review

I just recently completed a whole year of working as an open source developer. I worked on various projects including Mozilla Firefox, EGit, JGit, SQLite, and NexJ Express. The following is a brief history of this process, followed by the lessons I have learned from it. Feel free to skip to the final section, because the rest is just about my personal experience.

Pre-History

I first flirted with the idea of ‘open source’ back in the good ol’ days of Windows XP, with software like Azureus. I didn’t really understand what it was or meant, but I just remember enjoying the fact that it came with no malware or trial versions.

But it was in 2004 that I seriously started dabbling with open source software, when I started using Ubuntu. Ubuntu was new, cool and hip – Canonical has done an awesome job of making that the case. In this environment, I came face to face with terms like ‘contribute’, ‘report bugs’, ‘community’, and so on, on a daily basis. Eager to see what it all meant, I did some reading on Wikipedia and got on my way trying to contribute to some of the software that I used: Geany, Gnome, Rhythmbox. But I failed. I had no idea what a patch was, and I had no idea how to get started. The reason, as I discovered later, was that I was not using the right tools.

Summer at Red Hat

Then came last year, when I got an internship at Red Hat Canada. There I was assigned to the Eclipse team, where I got to work with some brilliant people. I started working on the EGit and JGit projects, about which I have blogged before. Doing so, I got to learn some of the tools of the trade: irc, git, and github.

These three tools helped me immensely: I now had 24/7 tech support through irc, loads of code available to read and follow through iterations, and the freedom to experiment with code without the fear of breaking anything. These tools are very important to the functioning of an open source community, and I was finally beginning to see that.

DPS909

Then in September, with the resumption of school, I was naturally tempted when I saw a course available to me called ‘Topics in Open Source Development’ with Professor David Humphrey. This course turned out to be amazing. We spent the first few weeks in class discussing what ‘open source’ meant and how it stacked up against the competition. We watched and discussed Revolution OS, a documentary I recommend to all. We also read and discussed Eric Raymond’s seminal The Cathedral and the Bazaar.

I think the latter had a greater impact on me. While the documentary was great and gave a good perspective on the history of the open source movement, The Cathedral and the Bazaar teaches you some of its fundamentals in an abstract way. I was extremely grateful for having read it.

At this time, we also read some common open source licenses, like MIT, BSD and GPL, and compared them to common proprietary licenses such as Microsoft’s EULA, or Apple’s various agreements. Most memorably, Dave recommended we listen to some of the more dramatic phrases in these licenses as read by master thespian Richard Dreyfuss.

On the coding side of this course, we walked through some Mozilla Firefox code in class, and had a talk with Mozilla co-founder Mike Shaver on how to get started as a MozDev. We read through a spec sheet Google had put out on the Mouse Lock API (now called Pointer Lock) and started to work on it. I personally wrote some Mochitests for it, but largely continued working on EGit and JGit from the summer.

DPS911

In the Winter semester, I continued with DPS911, the follow-up to DPS909 and a more “project”-oriented course. This semester, I made myself more familiar with the Firefox code, and worked on three bugs for it:

Bug       Blog post                                                          # of patches
705234    Inconsistent use of “full screen” across Firefox code              3
620164    nsTheoraState::MaxKeyframeOffset doesn’t need to use MulOverflow   4
500784    Video/audio files over 2GBs in size are unseekable                 1

I also continued working on EGit and JGit and released the following patches for these:

  1. EGit – Work on CleanCommand – (5 tries, 2 abandons and counting, as seen here)
  2. EGit – CleanCommand Selector Dialog

Lessons

Anyway, this post is becoming much longer than I thought it would be, so I’ll bring it to a close with a few of the important lessons I have learned in this past year.

  1. Use the tools available to you: git[hub], irc, twitter, blogs!

    These are key! Being in an open source community means being in a virtual community. You have to make yourself heard and accessible in that space. If you simply release code with GPL stamped on it and tell no one, no one is going to use your code.

  2. No Code Dumping

    As an open source provider, do not work behind closed doors and then one day dump all your code outside and call it open source. Maybe you could get away with it if you are Google, maybe; but a big part of ‘open source’ is building a community. You can’t build a community with code dumping.

  3. What is open source software worth?

    In early 2011, the company I worked with was having a brainstorming session to search for solutions to its telephony problems. While tossing out ideas, I mentioned an open source stack that could be used. The moment I did this, the person sitting next to me, a self-proclaimed “big business” type, turned to me and said:

    The problem with Open Source is that, with it you get what you pay for.

    The implication was that since one acquires open source software for free, its worth must also be nothing. I won’t go into the details of the two meanings of the word ‘free’ in this context (gratis and libre), but I’ll simply point out that studies have shown that the Linux kernel is worth over €1 billion. I have seen other sources cite that Fedora is worth over $10 billion. So no, free does not mean worthless.

  4. The Open Source community does not owe you anything

    I came across a beautiful quote on the Apache Jakarta project’s website which stated this quite eloquently:

    If you see something wrong [in an open source project] and do nothing about it, the opensource system hasnt failed you, *you* have failed the opensource system.

    It is important to note that this does not apply only to developers, but also to bug-testers (i.e. users), writers, designers, etc. There are many types of open source contributors, not just programmers.

Anyway, I’ve been going on for a while, so I’ll end this now. But on the whole, I am very grateful to Dave Humphrey and Red Hat for teaching me skills that I hold dear and that, I’m sure, will help me as a programmer throughout my career.

Building a Quadcopter: Part 2

Continuing from here, this is Part 2 of the ‘Building a Quadcopter’ series.

In the last post, my friend and I had built most of the quadcopter frame and were awaiting parts from the store. As of now, we have the frame built. Most of the parts have arrived as well.

This is what was on that parts list:


  • Battery – Turnigy nano-tech 2200mah 3S 35~70C Lipo Pack
  • HobbyKing Multi-Rotor Control Board V2.1
  • Motors (4) – Turnigy D2836/8 1100KV Brushless Outrunner Motor
  • Electronic Speed Controllers (one for each motor) – Turnigy AE-30A Brushless ESC
  • 10-inch props – remember to get 2 clockwise and 2 anti-clockwise
  • Receiver and Transmitter – HobbyKing HK6DF 2.4Ghz FHSS 6Ch Tx & Rx
  • Connectors/Cables

Electrical Work

The control board is not strictly necessary. It comes with some gyros that provide information to the ESCs and motors for stabilization purposes. You could instead use an Arduino with some firmware – that’s our plan for the second one anyway.

Here is what the connection order looks like. You will of course have to multiply the ESCs and motors by 4, as well as the receiver-to-board connections.

Connection Hierarchy

Frame Work

We still had to drill holes in the frame for the bolts.

Drilling holes for bolts

The cutting became so loud that we had to move outdoors.

Outdoors

After creating mount braces for the motors, we ended up with the following. Now all that remains is connecting all the wires, and then we should be good to go.

Current State

3 more reasons why Mozilla and Firefox are awesome

(In no particular order)

  1. Boot to Gecko

    Gecko, the “engine” behind Firefox, probably lags behind WebKit in some metrics, but certainly not in innovation. The B2G project will allow Gecko to run like an operating system, hopefully providing a more free and truly open alternative to AndroidOS. It might also open the door for other platforms (like Eclipse) to start taking themselves more seriously and finding more applications for themselves.

  2. BrowserQuest

    To demonstrate the power of HTML5 and WebSockets, Mozilla created BrowserQuest, a multiplayer retro-style adventure game. Try it. I finished it in 20 minutes, and I wish there were more.

    Flash still has some abilities that HTML5 and its related technologies don’t, but BrowserQuest is an excellent demonstration of their growing powers. In keeping with the open tradition, its source is provided on GitHub.

  3. Tilt

    Firefox 11 introduced this tool, which allows you to visualize and inspect the DOM in 3D. Somehow it brings a very intuitive sense to understanding the layout of a web page. I’m sure it will bring much needed relief to web developers working with floats and positions.

    Tilt is an implementation of WebGL (supported by Mozilla), so if your computer is a little older (like mine), things might lag just a bit. To access it, go to Tools > Web Developer > Inspect. At the bottom right of the inspect view, you should see a button saying ‘3D’. That’s it!


All three of these projects speak to the larger innovative nature of Mozilla. I am among those who believe it is made possible by its open nature. Firefox’s growth was key in the evolution of “Web 2.0”, and even today Mozilla leads the development of open technologies and standards.

If they weren’t around, I’m sure the Internet would look very different today than it does.

Firefox Bug 500784 – Video/audio files over 2GBs in size are unseekable

After having worked on another Theora/Firefox bug, I thought I would continue working in the same stream. While hunting on Bugzilla, I came across 500784.

This bug was filed in 2009 and has received decreasing attention since. Two hacky patches were submitted for it in 2009 by Chris Double and Matthew Gregan but, for whatever reason, were never formalized. Nonetheless, they made my job easier.

To reproduce the error, I first had to find a video file over 2GB (preferably OGG) and a smaller one for comparison. I ended up finding some random lecture videos online and used those. The following screenshots show the difference in how the smaller and larger videos render by default in Firefox 11.

As you can see, the first video, the one smaller than 2GB, renders a duration, shown at the bottom right of the player. The second video, the one larger than 2GB, does not return a duration, so the entire video plays like a stream, not allowing the user to seek to any portion of it.

Upon investigation, I found that this was occurring because the video length came back as a very large negative number instead of the actual value (overflow, anyone?). Hence the video player assumed that the duration of the video was unknown, and that it was indeed a stream.

The problem, as you might imagine, was that the content length was being returned in a 32-bit integer, which overflowed.

These two lines of code are how it was actually retrieved:

      PRInt32 cl = -1;
      hc->GetContentLength(&cl);
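
To make the overflow concrete, here is a small standalone C++ sketch (my own illustration, not Firefox code) of what happens when a byte count above 2GB is forced through a signed 32-bit integer:

#include <cstdint>
#include <cstdio>

int main() {
  // Content length of a hypothetical 3GB file, in bytes.
  int64_t realLength = 3LL * 1024 * 1024 * 1024;  // 3,221,225,472

  // A signed 32-bit value cannot hold this: INT32_MAX is 2,147,483,647,
  // i.e. just under 2GB. On typical two's-complement platforms the cast
  // wraps around to a large negative number.
  int32_t truncated = static_cast<int32_t>(realLength);

  std::printf("real: %lld, truncated: %d\n",
              static_cast<long long>(realLength), truncated);
  // 'truncated' comes out negative, which is why the player concludes the
  // duration is unknown and treats the file like a stream.
  return 0;
}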

Fixing this required getting the content length from another source, which along with the entire patch diff can be seen here.

Building a Quadcopter: Part 1

NOTE: Part 2 is available here

Things that fly are fun. Some things that fly are quadcopters. Hence Quadcopters are fun.

I recently started building one with Chris Tuckwood and even though we’re just in the beginning stages, the excitement is already high.

Since this is only the first in what will likely be an army of quadcopters, we’re more or less winging it, with less planning and more doing. We ordered the following parts to start with:

  • A controller board (eventually to be replaced by an Arduino)
  • Some rotors and motors
  • A couple of ESCs
  • An RC receiver/transmitter
  • A battery
  • Some other connectors/wires

Most of these should get here by next week (hopefully), and that’s when the real building process will start. In the meantime, we bought some aluminium tubing locally for the body and, with Chris’ expertise, have started chiseling it to make an X-frame body. This has required some tools, and the good people at HackLabTo have been providing us with them. In fact, hacklab has been running a meeting group of interested parties who are all building quadcopters and meet weekly to exchange ideas and plans. This excellent community is run through a mailing list by Eric Boyd here.

Here are some pics from that first cutting session Chris and I worked at in hacklab’s glorious* bathroom.

Chris Cutting

What the final product should look like

What the final product did look like...success

Frame Close Up

* You would call it glorious too if you saw some of the things in there. Besides being a fully stacked server room, it also has a laser cutter and several other well-placed tools.

Firefox Bug 620614 – nsTheoraState::MaxKeyframeOffset doesn’t need to use MulOverflow

Several weeks ago, I worked on Firefox bug 620614. The title simply said “nsTheoraState::MaxKeyframeOffset doesn’t need to use MulOverflow”.

Yeah I know, I was stumped too.

Understanding it required me to go through some of the basics of Theora and media decoders in Firefox. I’ll try and report on some of what I learned here.

To best understand the problem, let’s understand the code around it by running through a test case and emulating the need for it.

Go to the OGG video here.
Why a debate from Davos 2012 on Capitalism? Because it’s under Creative Commons and ~70 MB. The length and the size are important, you’ll see why.

What you see now is the video slowly buffering up from the start to the 26th minute. When a video buffers, it is simply downloading to your computer some of the “video data” required to play it, so that when you seek to any portion of it, the player can get to it easily. Now, as the video loads, click around the timebar in some unbuffered areas and wait for the video to load up. Try doing this a few times.

Now go to a second video here. This is a desktop recording of me doing the exact same thing that you are, but in an intentionally hijacked build of Firefox. As you can see, when I click on a part of the video, the frame that loads up is choppy and blocky every time without fail, as opposed to yours, which is smooth. To understand why this happens, read on. Also remember this as point [1], which I will refer back to.

Our current use case occurs in the method SeekInUnbuffered() in nsOggReader.cpp. The relevant code looks like this:

if (HasVideo() && mTheoraState) {
  keyframeOffsetMs = mTheoraState->MaxKeyframeOffset();
}

Alright, so this means that when a user “seeks in unbuffered”, or clicks a part of the video timeline that has not yet buffered, the codec, under some conditions, tries to calculate the ‘keyframe offset’ in milliseconds.

If we look inside the method in question, MaxKeyframeOffset(), we see this:

PRInt64 d = 0;
MulOverflow(USECS_PER_S, mInfo.fps_denominator, d);
frameDuration = d / mInfo.fps_numerator;

So what seems to be happening here is that MaxKeyframeOffset is being set based on the calculation mInfo.fps_denominator x USECS_PER_S.

mInfo.fps_denominator is, as the name suggests, the denominator of the fps (frames per second) the video plays at, and USECS_PER_S is the number of microseconds in a second, or 1,000,000.

The product of these is stored in the PRInt64 variable ‘d’, which is then used to calculate frameDuration, and subsequently MaxKeyframeOffset. But notice that the multiplication is a checked one: if the product would overflow even a 64-bit integer, which can happen if the video’s FPS denominator is large enough, the multiplication is not performed and ‘d’ stays at its initial value of 0, hence MaxKeyframeOffset also ends up as 0. So why would this produce the broken video buffers demonstrated at [1]?
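
Before we get to that, a quick aside on the “d stays at 0” step, which can look mysterious. The sketch below is my own illustration of the kind of guard a checked multiply performs for non-negative inputs like the ones here; it is not Mozilla’s actual MulOverflow implementation:

#include <cstdint>

// Perform a * b only if the product fits in a signed 64-bit integer;
// otherwise report failure and leave 'result' untouched, so a caller that
// initialized it to 0 (like 'd' above) still sees 0.
static bool CheckedMul64(int64_t a, int64_t b, int64_t& result) {
  if (a != 0 && b > INT64_MAX / a) {
    return false;  // the product would overflow 64 bits
  }
  result = a * b;
  return true;
}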

This has to do with some of the information that OGG videos carry. One piece of the data is called the Skeleton. It gives a keyframe-by-keyframe structure to the video, such that the location of every keyframe is known from the skeleton. When a user seeks to any given frame, the player uses what’s called a “bisection search” to determine the closest keyframe and plays the video from there. Some videos don’t have skeletons, in which case the player cannot correctly determine the closest keyframe, as in our case. When this happens, and a bisection search is impossible, a MaxKeyframeOffset is sought. In some cases, while this is being calculated, our d would be 0. In such a case, the player will render the closest frame which, by virtue of not being a keyframe, looks messy and choppy.

This occurrence is rare; it would happen only in videos where a multiplication overflow is likely to occur (i.e. where the FPS denominator is really large), and where a skeleton does not exist. Software such as ffmpeg2theora today generates a skeleton by default for any video it processes, but this was not the case until early 2011. The demo video in [1] was produced with ffmpeg2theora using the parameter --no-skeleton. Its size and length let us experiment with it and increase the probability of the bug occurring. Like I mentioned earlier, though, the bug here is EMULATED: I built a copy of Firefox in which I forced d to always be 0, and hence we see what would happen when the overflow occurs.

Fun stuff!

But our specific bug is a little less interesting. All it says is that MulOverflow64() is not needed here, as one of the two numbers involved is only 32-bit. The solution to that is simple, as included in my patch, but to really understand what was going on and why I was doing what I was doing, I had to understand all of the above. Thought I’d share it.
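
For the curious, the argument behind that simplification can be sketched as follows. This is a standalone illustration with hypothetical names, not the literal patch:

#include <cstdint>

// USECS_PER_S is 1,000,000 and the fps denominator is a 32-bit value
// (at most about 4.29 * 10^9), so their product is at most roughly
// 4.3 * 10^15, far below INT64_MAX (about 9.2 * 10^18). Widening one
// operand to 64 bits before multiplying is therefore always safe, and no
// overflow check is needed.
int64_t FrameDurationUsecs(uint32_t fpsNumerator, uint32_t fpsDenominator) {
  const int64_t USECS_PER_S = 1000000;
  return (USECS_PER_S * static_cast<int64_t>(fpsDenominator)) / fpsNumerator;
}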

Firefox mochitest: test_fullscreen

At first I was like :(

But then I was like :)
