iCufflinks

These cufflinks look amazing.  Basically, it's the breathing "suspend" light pattern embedded in a CNC-machined aluminum cufflink.  Am I a fanboy for wanting them?

http://www.adafruit.com/products/379

Posted in Uncategorized | Leave a comment

Picking a color for your new iPad 2

While you're all waiting to get your new iPads (I am too!), you might be struggling with the question of what color to get.  Contrary to what you might think, it's not just personal preference.  There are real functional differences between the two colors.

Let's draw analogies to other consumer products.

First, TVs.  You may note that, with very few exceptions, TV bezels come in black, dark grey, or steel tones.  Interesting, no?  The reason is that a brightly colored bezel distracts the eye and makes the viewer acutely aware of the edge of the screen at all times.  A black bezel is slightly more immersive, allowing the edge of the screen to dissolve into the frame in many cases.

Second, cars.  If you are a car person, you know that black cars have a reputation for requiring high maintenance to keep looking pristine.  Dust and fingerprints contrast heavily with a shiny black finish.  On a white iPad, this kind of dirtiness is hardly noticeable on the frame, although the screen itself may still show temporary uncleanliness.

Last, I've heard it claimed that the white iPad is a little less prone to overheating in sunlight due to the heat reflectiveness of the bezel.  I'm not sure whether this is true to a noticeable extent, but, hey, why not mention it?

In the end, of course, what probably matters most is that you pick the color you feel more emotionally connected to.  But it's fun to note that even tiny choices like color have real physical implications for the way you use a product like the iPad.


Posted in Apple, Uncategorized | Leave a comment

Picking a computer to encode videos with

So let's say you just bought some sort of new Boxee box or Apple TV, and you have a lot of videos you want to watch on them.  You need to process those videos into a device friendly format, and you’re thinking about building or buying a dedicated machine to handle the job.  What do you buy?

Not surprisingly, the answer depends pretty heavily on what you're trying to optimize for.

The choices

Optimize for peak throughput.  For example, a tv show airs, you record it to your PC, and you want it ready to stream or transfer to any number of mobile or in-home devices as soon as possible.

Optimize for total throughput.  You have a bunch of DVD's or Blu-Ray's and you want to queue them up and convert them as quickly as possible, in total.

Optimize for energy consumption.  You want a little bit of everything above, but what you don’t want to do is inflate your energy bill or consume watts like crazy over the course of the year.

And let's just assume you're always optimizing for cost as well.  You need to decide roughly how much throughput you want … but going too cheap actually results not only in low total throughput but also in unacceptably low throughput per dollar.  Going expensive will net you higher peak throughput, but higher total throughput might be better served by extra machines, not just a single beefy machine.  We're not building supercomputers here. (Yet!)

The details

Peak throughput – Your best bet here is a single processor system with a lot of cores (probably around 4), clocked as high as possible.  A lot of modern encoding systems can use multiple cores, but simply adding cores doesn't scale indefinitely.

The most economical single-processor systems for this scenario tend to be Core i7 based.  Even though AMD's processors generally offer good bang for your buck, Core i7 systems actually have a leg up over AMD's in this case because they are:

1. Easily overclockable
2. Especially efficient at encoding videos
3. Hyperthreaded (The extra logical processors actually help significantly here … on the order of 10 to 20%)

GPU-based encoding solutions.  Video encoding, falling squarely into the parallelizable camp of problems, is often held up as a poster child for GPU-accelerated software.  In practice, however, these encoders are poorly optimized compared to general purpose software encoders.  They tend to yield interesting speed increases at the expense of quality per bit and all of the flexibility that traditional software provides.  In short: academically interesting, but not particularly practical.

For reference, I took a 1080p reference video and encoded it to a 720p .m4v using the High Profile settings in Handbrake.  The system had a 3.06GHz Core i7 with hyperthreading enabled (4 cores, 8 logical processors).  The source video was 134 min., and the encode itself took 103 min.  Faster than real-time is always good!  I don't want to overload this post with stats, but that should give you a feel for the scale we are talking about.

If you want to spend more money, you can always buy a 12-core Mac Pro or build the equivalent.  But benchmarks show that jumping from four to twelve cores here doesn't make the encodes three times faster … only 1.42 times faster.  So you'll have to decide whether the extra money is worth the diminishing returns increase in speed.
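If you're curious where a number like 1.42 could come from, Amdahl's law is the usual suspect.  Here's a back-of-the-envelope sketch (my own illustration, not a Handbrake benchmark) that solves for the effectively serial fraction of an encode implied by a 1.42x speedup going from 4 to 12 cores:

```python
# Amdahl's-law sketch: given an observed speedup between two core counts,
# estimate what fraction of the encode is effectively serial.

def amdahl_time(serial_fraction: float, cores: int) -> float:
    """Relative wall-clock time for a job with the given serial fraction."""
    return serial_fraction + (1.0 - serial_fraction) / cores

def estimate_serial_fraction(cores_a: int, cores_b: int, speedup: float) -> float:
    """Solve amdahl_time(f, cores_a) / amdahl_time(f, cores_b) = speedup for f."""
    a = 1.0 / cores_a
    b = 1.0 / cores_b
    # Algebraic solution of the linear equation (f + (1-f)a) = s(f + (1-f)b):
    return (speedup * b - a) / (1.0 - a - speedup * (1.0 - b))

f = estimate_serial_fraction(4, 12, 1.42)
print(f"effective serial fraction: {f:.2f}")   # roughly 0.24
```

In other words, if roughly a quarter of the work can't be parallelized, tripling the core count only buys you that 1.42x … which matches the diminishing returns above.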

Total throughput

OK, so we established above that, past a certain point, extra cores stop helping much with the performance of a single encode.  Handbrake scales pretty well up to 4 cores, starts to diminish from 4 to 8 cores, and drops off significantly from 8 to 12 cores.  In other words, that 48-core system isn't going to help a single encode much.  But what you CAN do with that many cores, assuming you have multiple files to process, is run multiple encodes at once … enough of them that each encode gets roughly its optimal number of cores.  Sure, the encodes will bump into each other somewhat, but the total throughput/utilization of the system will be much closer to optimal than if you ran one file at a time.
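To make that concrete, here's a trivial sketch of the scheduling rule I'm describing (the 4-cores-per-encode sweet spot is my rough number based on the Handbrake scaling behavior above):

```python
# Pick how many encodes to run concurrently so each one gets roughly its
# optimal share of cores, rather than throwing all cores at one file.

def concurrent_encodes(total_cores: int, cores_per_encode: int = 4) -> int:
    """Number of encodes to run at once so each gets ~cores_per_encode cores."""
    return max(1, total_cores // cores_per_encode)

for cores in (4, 12, 48):
    print(cores, "cores ->", concurrent_encodes(cores), "parallel encodes")
```

So on that hypothetical 48-core box, you'd queue up a dozen files at a time instead of one.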

Unfortunately, buying more than four cores in a single system right now is just not that cheap.  The multiprocessor options for adding extra cores fall into the business segment of the market, where cost increases rapidly.  If your objective is not raw speed on a single system, you can get more total processing power per dollar by simply building more single-processor Core i7 systems.  The problem then becomes the coordination necessary to distribute the encoding work amongst multiple machines.

Going green

Now this was a topic I found interesting.  It turns out that going really green with a full encoding load is kind of difficult.  Computers range in greenness all the way from really green (Mac Mini 2010) to really power hungry (Mac Pro 2010), and most desktop computers aren't designed to be that green.  The best example I could come up with … the Mac Mini 2010 … still doesn't compare that favorably with a Core i7 running at 3.6GHz if you compare them at full load.

For example, if you assume, living out here in California, that your cost per kWh is $0.30, then you end up with the following, based on some other tests I ran on my own machines using a 720p source as the reference (so not comparable to the test above).

                                 Mac Mini 2010     Homemade Core i7
Purchase price                   $999 ($699 base)  $699
Watts under full load            30                200
Source hours processed per day   11                50
Cost per 1000 source hours       65.00             96.00
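For the curious, here's roughly how the energy arithmetic works out (assuming, as I did, that each machine runs around the clock at full load until the queue is done … your duty cycle and electricity rate will vary):

```python
# Energy consumed to chew through 1000 source hours of video, given a
# machine's full-load wattage and its daily throughput.

def kwh_per_1000_source_hours(watts: float, source_hours_per_day: float) -> float:
    days = 1000.0 / source_hours_per_day          # days needed for 1000 source hours
    return watts / 1000.0 * 24.0 * days           # kW * machine-hours = kWh

for name, watts, per_day in [("Mac Mini 2010", 30, 11), ("Homemade Core i7", 200, 50)]:
    print(f"{name}: {kwh_per_1000_source_hours(watts, per_day):.0f} kWh per 1000 source hours")
```

The Mini sips power but takes far longer per job, so the gap between the two machines ends up much smaller than the raw wattage numbers suggest.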

So yep, the Mac Mini does pretty well on the energy efficiency side of things, but surprisingly the high-powered Core i7 holds up pretty well too, just because it's so damned good at encoding video!

Where a machine like the Mac Mini 2010 really wins out, though, is at idle.  So if your pipeline sits around unutilized for decent stretches, the 10-watt idle power consumption of a Mac Mini vs. a desktop's 100-150 watts adds up quickly.  It's actually for this reason that I use a Mac Mini as my primary desktop and leave the high-powered machines off unless I want to do something requiring serious horsepower … like playing a video game.

It turns out, however, that you really shouldn't expect to win much back over time in energy savings.  Encoding 1000 hours of video is a LOT, and you only make back 30 dollars over that range, even with the high energy costs here in California.  It just doesn't add up to much.  And Core i7 desktops are not that expensive, especially compared to Macs!  You can bet the Core i7 system is going to have a much longer useful lifetime.

In short, what I found was that, despite the desire to be energy efficient, you'd be hard pressed to make your money back in energy savings over time.

What isn't that important

RAM – You only need enough to run the encoding jobs comfortably and no more.  2GB in a pinch; 4GB is plenty.  I would go with at least 4GB just so you can repurpose the computer for something else if desired.

Hard drive space – You need just enough space to buffer the input and output files reasonably.  Regular hard drives are fine, and SSDs are overkill unless you plan on multitasking the machine heavily with other workloads.  Even something like a 120GB drive is probably safe … but nowadays, reasonably cheap drives start at around 320GB for notebook drives and almost 1TB for internal desktop drives.

Network throughput – In a more complex encoding scenario involving multiple computers, network throughput becomes a significant factor as you start having to shove bits around to get them encoded on different nodes.  But it's not really relevant to the current discussion.

Summary

Obviously there are quite a few factors I haven't included here for the sake of brevity, but the upshot is that a lot of factors push you towards the single Core i7 build for encoding performance on a number of fronts.  What's going to be really interesting in the future is the advent of massively multi-core systems.  Eating the energy overhead of many individual systems is not so great, and having many cores in one system could alleviate that problem as well as push the performance envelope in terms of encoding parallelization.

Posted in Uncategorized | Leave a comment

The Walking Dead

AMC has a massive ratings hit on their hands.

It's definitely well deserved.  The show itself is uncomfortable and bleak, just like you expect life in the zombie apocalypse to be.  It takes its subject matter seriously, unlike so many of its campy brethren.  That provides plenty of real opportunities for drama.

If I had to be critical of the show, it's that some character behavior exists purely for dramatic purposes.  The protagonist wakes up and stumbles around in the apocalypse without shoes for 15 minutes.  Strange as it seems, that was what made me uncomfortable … not the fact that there were zombies locked behind a door.  Just the fact that walking around barefoot meant he would be stepping on glass and running really slowly if he actually did encounter a zombie.

Rick (the sheriff protagonist) must have feet of leather.  If I woke up in a hospital full of debris, you can be damn sure the first thing I would be doing is stealing someone's shoes.

More importantly, however, there are points where you wonder why the protagonist doesn't just come out and ask what the hell is going on.   That's all I would be doing every time I met someone.  Try to figure out what's going on.

Instead, everything is treated like one long reveal … as if there is a path of discovery and the characters need to not ask questions so that we'll have a steady dose of new information each episode.  I'm not a script writer, but I feel like there has to be a better way.

Regardless, I'm addicted.  I'm surprised that the season is only 6 episodes, however.  This must have been either a real financial commitment or considered a risky development if they had to limit the season length to something that short.

Right now I'm saving the last 3 episodes to marathon them in one go next week with some friends.  I figure that's as close as we'll get to a real "zombie night" around here!

Posted in Uncategorized | Leave a comment

Gaming in stereoscopic 3D

Stereoscopic 3D is a great technology, even with all of the inherent flaws and glitches that exist in today's implementations.  But while movies are the main way most people are finding out about three dimensional viewing, in reality, gaming is a much more interesting way to take advantage of 3D viewing.  Here's why.

Gaming requires more immersion and focus

Today, the requirement to sit in front of the TV and wear glasses inherently limits the number of viewers and their freedom to go do other things.  Not great if you like to walk around the room while watching, or watch with other people.

However, most gaming is done in a single player or solitary context.  In this sense, it requires focused attention from the gamer anyway.  Therefore, you aren't really losing much by asking the gamer to put on some glasses or sit directly in front of the TV … he's already doing it.  In return, the 3D presentation of the game makes the game more immersive and engaging.  Gaming is a much more natural place to take advantage of stereoscopic 3D because the viewer has already decided to dedicate his attention and focus to begin with.

3D environments in games are presented in real time

Because 3D viewing is in a transitional period, many movies are shot with relatively shallow depth.  For an experienced viewer, this is unfortunate … it is not at all difficult to watch scenes presented in a deeper and more realistic manner.  Unfortunately, it is impossible for the filmmaker to go back and reshoot those scenes with more depth … the movie is fixed at its current depth forever.

Gaming is different.  First, on a technical basis, you can adjust the depth of any 3D scene to something that you, the viewer, are comfortable with handling.  This is much better than having to use the lowest common denominator that is generally given to you by a filmmaker.

More importantly, games can represent fantastic huge realities in 3D that could never be explored interactively within a movie.  I will state without any doubt that you really can't appreciate the artistry of how someone has designed an entire city within a game until you experience actually standing within and moving around it in stereoscopic 3D.

To put this another way … the sense of perception is greatly enhanced when viewing a movie in 3D.  The sense of exploration … the feeling of moving within something new … is greatly enhanced when playing a game in 3D.

3D gaming is often still glitchy

The nice thing about games today is that nearly all of them are rendered using three dimensional data.  In other words, the computer is already storing the scene you are viewing in 3D anyway … it's just presenting a 2D scene to you on your monitor.  To get to stereoscopic 3D, it's almost trivial to just render the scene twice as often (once for each eye) to create the illusion of depth for the 3D gamer.
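To illustrate what "render the scene twice" actually means, here's a minimal sketch of the usual approach: offset the camera half the eye separation to each side along its right axis, then render one view from each position.  (The function and values here are my own illustration, not any particular engine's API.)

```python
# Compute left/right eye camera positions for stereoscopic rendering by
# offsetting the camera along its "right" axis.

EYE_SEPARATION = 0.065  # meters; a typical human interocular distance

def eye_positions(camera_pos, right_vector, separation=EYE_SEPARATION):
    """Return (left_eye, right_eye) positions offset along the camera's right axis."""
    half = separation / 2.0
    left  = tuple(c - half * r for c, r in zip(camera_pos, right_vector))
    right = tuple(c + half * r for c, r in zip(camera_pos, right_vector))
    return left, right

# Camera at standing height, facing down -z, right axis = +x:
left, right = eye_positions((0.0, 1.7, 0.0), (1.0, 0.0, 0.0))
print(left, right)
```

Render the same scene from each of the two positions, show each result to the matching eye, and the brain does the rest.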

Where this falls down is that most existing games assume you are viewing them on a standard 2D monitor.  What often happens is that special effects such as lights or menus are rendered, using 2D programming "tricks", in such a way as to look fine on a 2D screen but appear at completely wrong or impossible depths in a 3D context.

Technologies such as nVidia 3D Vision take advantage of the fact that games are already 3D anyway to retrofit many past games for 3D gaming, but the aforementioned glitches are where the promise falls short of the reality.

Fortunately, as 3D gaming becomes more commonplace, you can expect developers to see the glitches during development and fix them on the spot.  Companies such as nVidia and Sony have a vested interest in making new games look good in 3D and are spending a lot of money evangelizing proper 3D programming techniques with developers.  Moving forward, these problems are likely to disappear quickly in newly released games.  But it's very difficult to go back and fix problems with older games.

Gaming in 3D can make you perform better … or worse

While gaming in stereoscopic 3D adds realism, that doesn't necessarily translate to better competitive performance.

First, your framerates will be significantly lower unless your system is overbuilt with enough performance slack to compensate.  A lower framerate can translate into poorer competitive performance.

Second, it takes time for your eyes to change their focus depth.  That split second can be precious when switching from a close-by target to a faraway one … something you artificially don't have to deal with when playing in 2D.

Third, some games' control schemes don't translate perfectly to 3D viewing.  RTSs are an example, where the cursor must float in 3D and adjust its selection depth dynamically.  It can feel unintuitive to select and move objects in these gaming contexts.

In short … there are not many cases where depth perception will enhance your performance in games as a player (although there are a few).  Depth perception is valuable only in very particular gaming contexts.  Think of it more as a way to add enjoyment or immersion, not a way to add extra wins to your scorecard.

Just so I don't leave this on a downer note, there have been two cases where I have found depth perception to be valuable.

The first is in Mirror's Edge, where you run across rooftops and must land precisely and often hit a button to roll as you land.  The depth perception added here by stereoscopic 3D helps significantly in judging in a split second how close you are to landing and where you are.

The second is any racing game (cars).  Depth perception helps you gauge the nature of turns quickly at high speed and also how close you are to touching other cars.

RTSs look OK, but are less impressive than other 3D games because your view tends to be from a bird's-eye perspective.  In my experience, UI selection of units can be glitchy, so best performance is achieved by playing in 2D.

First-person shooters are probably the best way to showcase any 3D setup … the immersiveness is amazing.  With a sufficiently high-performing system, the difference in your performance will probably be minimal until you get to high levels of play, but I do expect the disparity to become significant at those levels.

Of course, any low pressure game like puzzle or adventure games will have no problems at all being played in 3D.

Summary

In short, I find gaming to be a far more fertile ground for stereoscopic 3D than movies … mostly due to the added immersion and interactivity.  I hope you agree.

Posted in Uncategorized | Tagged , , | Leave a comment

Visual anomalies with stereoscopic 3D display technologies

We all know what stereoscopic 3D should look and behave like … we see it every day.  Unfortunately, this isn't how stereoscopic 3D behaves in the home today.  Here's a list of ways where the idea doesn't quite match up with reality.

Color distortion

Glasses distort the color of the screen.  It's as simple as that.
In the case of active shutter displays, the glasses used to shutter each eye have a slight yellowish or greenish tint … an unavoidable side effect of looking through a liquid crystal lens.  In the case of polarized glasses, the distortion is less pronounced and may be more color neutral, but it's easy enough to compare the view through the glasses against the view without them and observe a difference in hue and contrast.
HMD and auto-stereoscopic display technologies don't suffer from color distortion directly as they have separate pixels directly allocated for each eye.

Significant loss of brightness and contrast

Active shutter displays refresh about 120 times a second.  That means that, even in an ideal scenario, each eye only gets 50% of the light.
However, we live in an analog world … and despite what you may initially think, the fact is that the switching doesn’t happen instantaneously.  What actually happens is that it takes time for the displays to transition from the left eye image to the right eye image and back again.  If each shutter were really open 50% of the time, a good portion of your time would be spent seeing the screen transitioning between frames … which would completely ruin the 3D effect.
In practice, what I see is that shutters are open about 25% of the time.  Keep in mind the glasses themselves don’t perfectly pass light through either.  So you’re probably talking about 20% of the original light getting through to each eye or less.
All that time you spent calibrating your TV to be THX spec?  Totally out the window with 3D glasses on. 
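Here's the light budget arithmetic behind that claim, using my rough numbers (the 25% duty cycle and ~80% lens transmission are estimates on my part, not measured specs):

```python
# Fraction of the display's light that actually reaches one eye through
# active shutter glasses: shutter duty cycle times lens transmission.

def effective_brightness(duty_cycle: float, lens_transmission: float) -> float:
    """Fraction of the display's light that reaches one eye."""
    return duty_cycle * lens_transmission

ideal   = effective_brightness(0.50, 1.00)   # perfect instantaneous shutters
typical = effective_brightness(0.25, 0.80)   # roughly what I observe in practice
print(f"ideal: {ideal:.0%}, typical: {typical:.0%}")
```

Even the ideal case halves the light; the realistic case lands around one fifth of it, which is why everything looks so dim with the glasses on.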

Top/bottom stereo extinction problems (or ghosting)

This is a specific problem with active shutter displays and an important variation on the loss of brightness issue. Most TV’s do not refresh every portion of the screen at exactly the same time.  Instead, they typically refresh each row of pixels from top to bottom.  Again, the shutter for a particular eye should not be open until the image finishes refreshing on the TV … otherwise all that eye will see is an incorrect blend of the previous and current image.
So not only do you have the problem of waiting for the screen to transition its pixels from the left to the right eye image and vice versa, the screen isn't even trying to change all the pixels at the same time!
In practice, this is a very big problem for LCD’s.  The screens can barely switch between each eye's image in time in order to provide an adequately long shutter time for each eye.  So the manufacturers play loose with the shutter timing in order to let enough light through to each eye.  Unfortunately, this means in cases where the screen doesn't switch in time (usually in high contrast scenarios such as dark building against blue or white sky), bits of the previous and next image are leaking through.
Another interesting aspect of this is that the leaking through of the opposite eye's image tends to happen more at the top of the screen.  Why is this?  Again, all refreshing happens from top to bottom … and the majority of the change in a pixel while it is changing happens at the very start.  So even playing slightly loose with the shutter timings can have a bad effect on the image near the top of the screen.

DLP projectors seem to have no top/bottom stereo extinction problems; presumably the design of the technology allows the images to flip nearly instantaneously.  Plasmas are also less susceptible due to the fast duty cycle/switching times of the technology.

If you want to see this problem in action, I took a high speed 600 fps video of some 3D ready monitors.  The video shows these things much more clearly than I can describe in writing.  Here's the YouTube link.

Fast motion stuttering

This is another active shutter specific problem.  The sequential display of left and right eye images in active shutter systems has additional implications for fast moving objects.  While the correct images may be getting to each eye, they are not arriving at the same time.  Is this a problem?  Yes, it definitely is.  The human brain is sensitive to this incongruity … and 120Hz is not a fast enough refresh rate for your brain to be unaware of the difference.  In short, 3D scenes with heavy horizontal panning will appear very stuttery and possibly disorienting.  This is not a problem for other 3D display technologies, which display the left and right images at the same time.

As noted above, it may be possible for the stuttering to be remedied by even faster switching times such as 240Hz, but the state of the art hasn't caught up yet.
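To put a number on the panning problem: in a sequential left/right system, the two eyes' views are shown about 1/120 of a second apart, so a horizontally moving object lands in a slightly different place for each eye.  A quick sketch (the pan speed below is just an arbitrary example of mine):

```python
# Apparent horizontal displacement between the eyes' views of a moving
# object, caused by the left/right images being shown 1/refresh_hz apart.

def interocular_displacement(pan_speed_px_per_s: float, refresh_hz: float = 120.0) -> float:
    """Spurious horizontal offset (pixels) between the two eyes' views."""
    return pan_speed_px_per_s / refresh_hz

# A fast pan sweeping a full 1920-pixel screen width in one second:
print(interocular_displacement(1920.0))   # 16.0 pixels of bogus disparity
```

Sixteen pixels of disparity that shouldn't be there is well within what the eye notices, which is why fast pans feel so wrong.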

Overdrive ghosting

Another active shutter specific problem.  With current technology, most displays are just at the limit of being able to switch back and forth at 120Hz.  To achieve fast switching times, manufacturers use much higher voltage differentials than normal when a pixel transition begins.  This allows the display to get the pixel to the correct color more quickly.

Unfortunately, this is an imperfect process and has tradeoffs.  Over or undershooting the value results in the pixel not quite arriving at its intended value before having to switch back.  The end effect is weirdly colored ghosts of the opposite eye image leaking into the target eye.

Expensive shutter glasses

Each pair of active shutter glasses costs around $100 to $200.  For most people, that's not an insignificant expense.  I would guess these prices will drop to commodity levels over time … there's really no good reason for these things to cost that much.  But, for now, expect financial pain if you're going to have more than one viewer at a time.

Glasses need recharging

Active shutter glasses last a long time if fully charged … around 40 hours.  Charging only takes a couple of hours.  But nevertheless, with the dearth of 3D content out there, it's pretty likely that your glasses will go unused for long periods of time right now.  So when you do finally decide to watch something, you may have to deal with some delayed gratification issues.
Off-angle/Off-level viewing issues

All stereoscopic 3D technologies, with the exception of HMD's, require the viewer to be sitting level with the TV and reasonably directly in front of it so that the left and right eye images are displayed properly spaced and oriented to each eye.  This means you can't lie on the couch and watch in 3D, and you also can't stand off in the corner of the kitchen and watch the TV in 3D while it's tucked away in the other corner.

Polarized displays and glasses have the additional problem of each lens letting through light from the other eye's image if your head is not perfectly level.  RealD 3D glasses in the theater work around this by using circular polarization, which is not sensitive to the rotation of your head.  But since the entire stereoscopic 3D viewing process relies on your head being level and well placed anyway, this is of minor consequence.

Conclusion

As you can see, stereoscopic 3D in the home is not just fire and forget … it does not "just work".  In particular, active shutter displays have a host of problems not present in other display approaches … but they make up for it by being the cheapest way to go.  At least now you know all the ways things can go wrong!  Don't let this scare you off completely, though … I think the 3D viewing experience is very immersive and enjoyable in many contexts.  You just need to know what you're getting.
Posted in Uncategorized | Tagged , , , | Leave a comment

Types of stereoscopic 3D displays

There are many different ways to send a separate image to each of your eyes.  We cover the main technologies below.

Active shutter
In combination with a pair of powered glasses, these displays work by switching very rapidly between the left and right eye images.  The glasses shutter each eye very quickly in tandem with the screen so that only the correct image is seen by each eye.  So while your regular TV might refresh 60 times per second at maximum, a 3D TV would have to refresh at 120Hz in order to get 60 frames per second to each of your eyes.

If you are looking at the current crop of 3D TV’s, then you’re going to end up with an active shutter device.  Why is this?  I’ll explain in more detail later, but in essence active shutter displays are the cheapest way right now to get 3D into the home.

One point of confusion we need to clear up here: just because a TV is marketed as "120Hz" or "240Hz" does not mean it is 3D capable.  Most of these TVs accept only a 60Hz input and interpolate intermediate frames up to 120Hz.  A truly stereoscopic 3D capable active shutter display will be marketed as 3D capable, and it will consequently accept a true 120Hz input.

Polarized
In combination with a cheap pair of polarized glasses, these displays actually display the left and right eye images at the same time.  However, the left eye image is polarized in an "opposite" way to the right eye image.  The polarizing filter in each lens is then able to let the desired image through to its eye while blocking out the opposite eye's image.

In theaters, this is usually accomplished by running two separate projectors with different polarizations.  In the home, there are certain types of monitors that accomplish this by alternately polarizing every other row of pixels in a 1-2-1-2 pattern.  Unfortunately, as you might guess, this halves the effective resolution of your display while viewing in 3D.  I believe full resolution displays of this type also exist, but since twice the pixels are required, no full resolution display for the home has mainstream pricing.

It is actually possible to set up a dual projector rig in the home similar to a 3D setup in theater.  However, this requires two projectors to be driven independently and in sync by the same computer.  In essence, this approach is very hacky and requires a lot of cooperation between computer, video driver, software and projectors … making it unlikely to work well in more than a few cases.

Anaglyph
Anaglyph 3D technology shows both left and right images on the display at the same time, but each eye's image is encoded with a different color mapping.  Different color lenses in the glasses separate the images … hence the iconic red and blue glasses, one color for each eye.  Newer implementations have used different image colorings, claiming to be superior to the red and blue lenses.  This may be true, but no anaglyph technology holds a candle to the other display technologies mentioned here.
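For the curious, the classic red/blue encoding is simple enough to sketch in a few lines (my own illustration): take the red channel from the left eye's image and the green/blue channels from the right eye's image.

```python
# Combine one left-eye and one right-eye RGB pixel into a classic
# red/cyan anaglyph pixel: red from the left eye, green+blue from the right.

def anaglyph_pixel(left_rgb, right_rgb):
    """Red channel from the left eye, green and blue from the right eye."""
    lr, _, _ = left_rgb
    _, rg, rb = right_rgb
    return (lr, rg, rb)

# Left eye sees pure white, right eye sees pure black:
print(anaglyph_pixel((255, 255, 255), (0, 0, 0)))   # (255, 0, 0)
```

The red lens then passes only the red channel to the left eye, and the cyan lens passes only green/blue to the right … at the obvious cost of butchering the colors, per the crosstalk complaints below.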

Anaglyph display technology was popular from the 50's all the way through the 80's … mainly because it requires no unusual display technology at viewing time.  Any display can display in anaglyph mode.  Due to the state of display technology, this was the public perception of "3D" viewing for a long time.  The glasses used for this are also extremely cheap … essentially just some colored cellophane and cardboard.

Anaglyph technology suffers from severe color distortion and crosstalk between left/right eye images … which can really give you a headache.  This technology doesn't even make a pretense at trying to get it "correct" … it just shoots for creating some illusion of depth.

Head mounted displays
These are basically powered glasses or helmet-like devices that contain a separate display for each eye.  While theoretically the least prone to problems, the main problem is that they are expensive and the displays have not been miniaturized to a usable point.  So far only 720p per eye has been achieved, and costs run into the thousands.  Of course, only one person can use these at a time, and doing so is completely anti-social.  And, if the head mounted display is not sufficiently miniaturized, it can result in head and neck strain for the wearer.  But, of all the stereoscopic 3D approaches, it is worth noting that head mounted displays can actually create a perfect stereoscopic 3D image for the viewer.

Auto-stereoscopic displays
Auto-stereoscopic displays are so named because they hold the magical promise of showing you stereoscopic 3D without making you wear glasses.  Typically, this works because the display literally angles half of its pixels to the left eye and half to the right eye.  While this obviates the need for glasses, it also means the effective viewing position is extremely constrained.  In practice this limits the number of viewers to one and also cuts the effective resolution of the display in half.  As such, auto-stereoscopic displays are more useful in handheld or certain computing contexts.  For example, the Nintendo 3DS is probably going to be the most mainstream example of auto-stereoscopic technology for the next few years.

Posted in Uncategorized | Tagged | Leave a comment

The complete guide to understanding stereoscopic 3D

I first got into the whole idea of stereoscopic 3D in the home when nVidia announced their 3D Vision technology back at the beginning of 2009.  I actually built a high end gaming rig and bought the special monitor and glasses just to see what it was all about.

It was a worthwhile and immersive experience, but certainly not without its problems.  Needless to say, I've learned a lot about just how stereoscopic 3D works in the meantime.  Based on my experience, I strongly feel that the industry doesn't do nearly enough to explain why viewing things in 3D doesn't just "work" the way you might expect … and that's why you're reading these articles now.
 
Quite frankly, I've been sitting on these blog posts for a while because there's so much to write and expound on that it just seemed overwhelming.  Finally, I decided to just break things down a bit and get this out piece by piece.  I'm going to start with a quick overview here and then roll out a few detailed articles explaining the rest of what I have to say.

Stereoscopic 3D technologies exist in order to create the illusion of depth for the viewer. The basic principle of how any stereoscopic 3D technology works is that, for any given scene, a separate left and right view of that scene is presented to each of your eyes.  That’s it.  That’s how the real world works too.  When this is done properly, the viewer perceives the illusion of depth, which enhances the feeling of realism and engages the viewer in a way that cannot possibly be accomplished by viewing a traditional 2D image.

Does it work?  Absolutely.  It’s the difference between seeing a cathedral on paper and actually seeing it in person.  Or seeing a picture of a rock vs seeing the depth and texture in the palm of your hand.  Stereoscopic 3D imagery engages your senses more fully in a way that traditional 2D images and video cannot.  It really does enhance the viewing experience.

I’m writing this document to educate people on the issues they are likely to encounter when evaluating or purchasing a 3D display.  There are a lot of different gotchas when it comes to the current state of the art in 3D displays … and, surprisingly, I don’t see many of them being written about or covered in much detail.  In short, 3D displays don’t work perfectly today, and you should be aware of the limitations.  Nobody likes to buy a TV for thousands of dollars and be surprised when something doesn’t just work the way you want it to.

With that in mind, please read the following articles.

  1. Stereoscopic display technologies
  2. Visual anomalies (or stereoscopic 3D is not perfect)
  3. Gaming in stereoscopic 3D
  4. Social issues with viewing in 3D (or why inviting everyone over to watch the Super Bowl in 3D might not work as well as you’d hope) (TBD)
  5. Frequently asked questions (TBD)

Thanks for reading this guide. I hope you'll come out of it feeling a little more confident about any future stereoscopic 3D purchase and able to avoid all of the surprises I've encountered along the way.

Posted in Uncategorized | Tagged | Leave a comment

On the pursuit of happiness

I attended a talk today by Tony Hsieh of Zappos fame.  He's out plugging his new book "Delivering Happiness".  Thankfully, rather than this being a generic leadership talk, I found myself pleasantly surprised at the amount of research that Tony has done and the effort to which Zappos has gone to integrate a lot of research takeaways into actionable items for their company.  I recognized a lot of the observations that Tony had made in forming his own opinions on company culture, and so a lot of what he said rang true.  And not in a trite manner.

So, long story short, I'm pretty sure I'm going to get around to reading this book.  But first, I thought it might be useful for me to expound here on the subject of happiness.  Fair warning … some of this is derived from my own reading … some of it was jogged by Tony's talk.

Being a little unhappy is good.  And necessary.  The desire to be better or to improve the state of things requires a certain level of dissatisfaction with the status quo.  To be perfectly content means that there is no possible betterment to be had … and paradoxically, that is something one rightfully should not be too happy about.

Happiness reverts to the mean.  There are plenty of studies out there indicating that individuals aren't any happier now than they were 50 or 100 years ago, despite vast increases in the standard of living.  Objectively, we should be happier.  In truth, once we rise up Maslow's Hierarchy of Needs, much of our happiness is actually determined by our relative wellbeing compared to others.  If you've never heard of a plasma TV, it doesn't factor into your calculation of happiness.  The instant your neighbor has one, you may damn well want one as well.

Happiness can be measured along simple vectors.  The prior point about reverting to the mean isn't saying that you can't really be happier.  Many measurements of happiness look at your quality of life in several areas.  Personally, I like the simple areas … health, wealth, and love.  Health … does your body get in the way of your ability to do things?  Wealth … can you buy the things you need or desire?  How do you socially compare to others?  And love … do you have or are you easily capable of pursuing a fulfilling relationship?

However you measure your happiness, it's generally important to balance the different factors.  They are interrelated and ignoring one area will drag down the other areas of your life.

Personally, I view the health and love areas of happiness as the ones that have some fixed level of maintenance.  Perhaps another way of saying it is that there are decreasing returns as you allocate more time to them.  Wealth is where all the self actualization and learning occurs.

People have different time orientations when it comes to achieving happiness.  It's also worth mentioning that different types of people approach the achievement of happiness in different ways.  Specifically, some people have short term or impulsive orientations to happiness, whereas others have more long term or measured orientations.

The implications of this are quite interesting.  Can you name some extreme examples of short term oriented people?  Let's try.  Drug addicts.  The next hit is always just around the corner.  Assholes.  These are the kinds of people that cannot help but insult you or cut others down to make themselves feel better.  Cheaters.  They have to win now no matter what the risk could be.  The problem with this mindset is that the short term "hit" of whatever behavior in question always goes away, requiring another "hit" or, eventually, more extreme behavior to generate feelings of happiness.  Short term happiness orientation tends to result in all kinds of socially unacceptable behavior.  Do an exercise and consider some of the people that you don't particularly like.  Are they short term oriented?

Long term orientations are healthier … the ability to forgo happiness now for greater returns in the future.  This is usually associated with higher self esteem … in other words, some inner core of positive happiness that isn't fully associated with your actions in the past hour.  Consequently, there is also a lack of need to continually replace that emptiness with harmful short term behavior.  It goes without saying that you want to be on this side of the fence.  I'm not sure if one can simply be told "this is the right way to behave", however … I think most people simply *are*, or evolve here over time if they are lucky.

On a side note, observing this characteristic in people is, not surprisingly, an unusually powerful way of predicting how someone will react in a particular situation.  It is one of a few major factors taught in some law enforcement programs for "reading" people.

So how do I end up applying this happiness stuff to my life?  Well, first of all, I try to spend some appropriate amount of time on the health and love areas of my life.  As far as happiness and wealth … I view that as more of a journey.  The understanding that some level of dissatisfaction is appropriate actually minimizes the impact of that dissatisfaction on me.  I don't strive to be perfectly happy, and I don't assume that if I reach some level or accomplish some "thing", that it will keep me happy forever.  Instead, I strive to constantly move from one level to the next.  And this doesn't mean a job … it means whatever task or objectives you've set for yourself to accomplish.

Refer to the chess kids in The Art of Learning for the negative aspects of focusing just on outcomes and not the process.  It's the moving through the process that is important.  Reaching new levels of understanding or skill will come naturally as a consequence.  In other words, focus on the journey, not the destination.

P.S. If you made it all the way to the end of this, here's a bonus: I was pleasantly surprised to discover that Zappos offers the book Tribal Leadership for download as an audio book.  Probably worth getting!

Posted in Uncategorized | Leave a comment

Learning Circuit Design

Prior to picking up the Boxster as a real track car, I built a pretty cool setup for sim racing.

I settled on using Forza Motorsport 3 with multiple Xbox 360's.  This game is neat in that it supports multiple monitors for a panoramic view.  In conjunction with the proper wheel, it also supports true clutch manual shifting action.

For the cockpit, the Obutto.  The steel frame cockpit is only 300 dollars … a relative bargain compared to all other cockpits.  It doesn't look quite as nice as some, but it functionally gets the job done, and it actually has a real car seat in it.

The steering wheel and pedals … Fanatec 911 Turbo S wheel and Clubsport pedals.  Much stronger force feedback than the Microsoft steering wheel, and, again, it supports a clutch.

Put it all together, and you have something like this.

The only problem was the shifter.  The Fanatec shifter is just terrible.  It attaches to the wheel, which puts it in entirely the wrong place.  It feels resistant, moves around when you shift, and wears out quickly.

Long story short, I decided to fix this myself.

First order of business … the Fanatec wheel.  Via some helpful reverse engineering, we discovered that the shifter position is represented to the wheel by variable voltages on an X-Y grid.  OK … now we know what needs to happen to create a replacement shifter.

LTSpice is an analog circuit simulation program.  I learned how to use it over Christmas break.  In the early 90's, we described circuits by putting together text files.  Now, we have a graphical UI that honestly feels like it belongs back in Windows 3.1 … but I guess it gets the job done.

I used LTSpice to model the X and Y resistor chains that would produce the various voltages.  Some equation solving gave me the resistor values, and running the simulation in LTSpice verified the results.  See the following.

This was sufficient for modeling the analog behavior of the voltages.  It was entirely unhelpful for modeling the behavior of the circuit needed to control the voltages.
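For the curious, the divider arithmetic behind those resistor chains is simple enough to check in a few lines of Python … a sketch using the standard voltage-divider relation, with made-up supply and resistor values rather than the actual ones from my design:

```python
def tap_voltages(v_supply, resistors):
    """Voltages at the taps of a series resistor chain.

    The chain runs from the supply down to ground, with a tap after each
    resistor.  Each tap voltage is just the supply scaled by the
    resistance remaining below that tap:  V_tap = V * R_below / R_total.
    Solving this for the R values that hit the target voltages is the
    equation solving mentioned above.
    """
    r_total = sum(resistors)
    taps = []
    r_below = r_total
    for r in resistors:
        r_below -= r                       # resistance left below this tap
        taps.append(v_supply * r_below / r_total)
    return taps
```

For example, a 5 V supply across two equal resistors puts 2.5 V on the middle tap … exactly what you'd expect, and what LTSpice confirms when you simulate the full chain.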

I took a gamble and purchased this SST Lightning shifter as the actual replacement.  The shifter is machined aluminum, mountable in several ways, and has 8 switches corresponding to the different shift positions.  Based on the individual switch being activated, my circuit would need to do two things.  One, control the voltage being tapped out of each resistor chain.  And two, display a number on an LED representing the current shift position.
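To make those two jobs concrete, here's the selection logic sketched in Python.  The grid layout and LED characters are invented for the example … the real voltage mapping came from the reverse engineering mentioned earlier, and I'm not reproducing Fanatec's actual values here:

```python
# Each of the 8 switches selects one tap on the X resistor chain and one
# on the Y chain (placing the shifter on the wheel's X-Y voltage grid),
# plus a character for the LED display.  Positions/characters below are
# illustrative, not the real mapping.
SHIFT_TABLE = {
    # switch: (x_tap, y_tap, led_char)
    1: (0, 0, "1"), 2: (0, 1, "2"),
    3: (1, 0, "3"), 4: (1, 1, "4"),
    5: (2, 0, "5"), 6: (2, 1, "6"),
    7: (3, 0, "R"), 8: (3, 1, "N"),
}

def on_switch(switch):
    """Given the activated switch, return which X and Y chain taps to
    select and what character to show on the LED."""
    return SHIFT_TABLE[switch]
```

In hardware, of course, this lookup table is just wiring and logic gates rather than a dict … but the shape of the problem is the same.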

I went to work thinking through the design.  Looked at datasheets online over and over.  What a pain this must have been before the Internet!  I bought a bunch of parts from Anchor Electronics and got to work.  Breadboarding the circuit took the entire MLK weekend.  I made a store run each day due to missing parts and equipment.  After numerous tries, I finally ended up with this.

Soldering the connections from the SST Lightning was pretty annoying, but doable. In the end, the circuit worked, but was easy to destroy with an errant step or hand, and messy as hell.  But it worked.

 

After a while, I thought there had to be a better way.  There was.  A printed circuit board.

After doing the requisite Googling, I eventually settled on Eagle PCB as the design program.  Great choice, and free for projects below a certain board size.

It takes a lot of time to figure out the exact parts in Eagle that correspond to the physical parts that you need to wire everything up correctly.  For some of the confusing parts, I used Sparkfun's part library and catalog of parts to make sure that everything would be consistent.

The really nice thing about EaglePCB is its autorouting function, which automatically finds a way to route the traces between all the parts you have on the board.  For a project with as many chips as I had, it was an absolute necessity.

There is no way to simulate a design's behavior in Eagle … you can only draw the schematic and layout.  Therefore you just look really hard and end up praying for the best.  I sent the design from EaglePCB off to batchpcb.com, which is an offshoot of Sparkfun that … you guessed it … prints circuit boards.  The lead times are long, but the prices are cheap.  Here's an example of what I sent.

Eventually, I got back my first PCB.  There were some flaws, but very minor ones.  I was able to patch the board with a couple of wires.  Voila … my first clean prototype!

The biggest problem I had at this point was that I had picked a rather poor enclosure, and I had also slightly misplaced the drill holes needed for the screw mounts in the enclosure.  In addition, I hadn't placed a reasonable power connector on the board.  The current board would have been entirely sufficient as a one-off, but, since the point for me was to learn something, I took a second pass.

I chose this clear enclosure from Sparkfun.  It was much larger than necessary, but it had a single large area for connectors. I also figured that, being vetted by Sparkfun, it would probably not have any odd issues with little things being in the way like the previous enclosure that I had ordered.  So I added a barrel power jack to the layout, fixed the couple of errors that I had made in the previous schematic, and redid the layout for the larger board format of the new enclosure.  Off to batchpcb.com again.

Here you can see the final results.  This time assembling everything worked perfectly from start to finish.  It's bigger, but it is also entirely self contained like any self respecting electronic gadget should be. More importantly, it worked immediately after all the parts were soldered to it.  No fixing up required.

 

So yeah, mission accomplished.  I can make printed circuits now from start to finish.  The funny thing is that I've spent far more time learning and doing all this stuff than I have actually playing Forza.  Go figure.

Posted in Uncategorized | Tagged , , , | Leave a comment