Code This

std::cout << me.ramble() << std::endl;

Archive for October 2008

A Review of Arch Linux 2008.06

with 5 comments

Given my concerns with the current state of Gentoo, coupled with the number of users I’ve seen mention Arch Linux in various channels on freenode, I decided to do a little research on the distro. I checked it out at DistroWatch.com, and it seemed to be what I was looking for. I had also been considering moving to 64-bit Linux, as I have a 64-bit machine running 32-bit Linux currently. This seemed like a good opportunity.

Some things that turned me on to the distro:

  • Rolling release cycle – This one is pretty much a must for me.  It’s nice to always be using the most current release of the distro, without having to go through a brittle “upgrade” process or re-installation.
  • No bloat – The base system includes virtually nothing.  You install only what you want to install.
  • Highly configurable – Through well documented configuration scripts.
  • Competent package management system – The only missing feature I saw was command line search of available packages. This can be done from the web, but I’d rather not have to use a browser to do it.
  • Optimized packages – Because, well, it just makes sense.

So I downloaded an install image (which was only ~300 MB!) and created a new VM in VMware Server so I could evaluate the distro before making the switch. I made sure to have the beginner’s guide handy, and got to work. If you want a decent resolution for the installer (say 1024×768), you’ll have to manually edit the GRUB configuration; otherwise, the default size should get the job done.  After booting the install CD and starting the installer, you’ll find yourself at the installer’s main menu.
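
For reference, that resolution tweak is just a video-mode parameter on the kernel line – highlight the boot entry in GRUB, press ‘e’ to edit, and append something like this (from memory; the exact value depends on your hardware):

vga=791    (791 gives 1024×768 at 16-bit color)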

Now, the install process might be a bit intimidating to some less seasoned Linux users. It is definitely not recommended for beginners.  Some intermediate users might even have a bit of trouble, but it’s much less scary than a Gentoo install.

There’s nothing too crazy going on here.  Walking through each screen, referencing the documentation when needed, leads to a pretty straightforward install.  I’ve installed Arch on a few different systems now with no issues. Once you are finished, reboot and you will find yourself at a non-graphical login prompt.

My first inclination at this point was to run a system update, and then proceed to install the packages I wanted. I read through the docs for pacman (the Arch package manager – yeah, real creative name ; ), and proceeded with my update. This is when I ran into my first issue: updating klibc required some manual intervention. No big deal – googling the problem quickly revealed the answer, which was quite simple:

“rm /usr/lib/klibc/include/asm”
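
In other words, the whole update dance went something like this (from memory – the exact ordering and output may differ on your system):

pacman -Syu                       # sync the package databases and upgrade the system
rm /usr/lib/klibc/include/asm     # the manual step required for klibc
pacman -Syu                       # re-run the upgrade, which now completes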

I am a KDE user, so I of course opted to install the KDE 4.1 desktop. There were no issues here. After a bit of waiting and a bit of configuration, kdm was up and running and I had a bright, shiny new graphical login.

Yay. After logging in, I was greeted by the default KDE 4.1 desktop environment.

Oo, pretty. At this point I was quite impressed with the distro. I was unable to get VMware Tools installed and running correctly, but that was no big deal – this wasn’t a permanent solution anyway. Other than that, no real problems.

Then I wanted to install Surround SCM, which is of course my source code management tool of choice. When I tried to run it, I kept getting an error – ldd didn’t seem to recognize it as a valid, dynamically linked executable. Well, the Surround client is a 32-bit application, and Arch 64 is a “pure” 64-bit environment, which means no 32-bit support.

And this, folks, is what we call a show stopper.

This was the first 64-bit distro I had encountered without support for 32-bit apps. Of course, I could have gone with the 32-bit version, but the main reason for switching distros was to go 64-bit. So, at the end of the day, I am still using Gentoo. If the lack of 32-bit support in a 64-bit OS is not a problem for you, then I would recommend Arch; it seemed like a pretty solid distro otherwise.

Written by Kris Wong

October 31, 2008 at 4:55 pm

The Current State of the Gentoo Project

with one comment

I have been using Gentoo Linux for over 2 years now. Let me start by saying, I like Gentoo. I don’t know that I would ever again want to use a distro without a rolling release cycle.  It’s great to have almost immediate access to all the latest and greatest. But therein lies the problem. Portage used to be packed full of bleeding edge software – sometimes software that hadn’t even been released yet. But more often than I’d like, this no longer seems to be the case.

Portage is Gentoo’s package management system. It is a build-from-source system, based on BSD’s ports system. Many who oppose Gentoo oppose it because of the idea of building from source. Let’s face it, it takes a while to update something like KDE in a build-from-source system, and it can be quite painful on an older machine. But after living with it for 2 years, I can say it’s really not that bad. I feel the advantages of Portage greatly outweigh the disadvantages:

  • Access to multiple versions of every package – I can install the latest and greatest, or easily roll back to an older release if I am experiencing some sort of problem. This is very handy indeed.
  • Easy command line search of all packages.
  • The use of USE flags to enable or disable certain parts of a package. For instance, if I don’t want GTK on my system, I can still use gvim (see the sketch after this list).
  • Effective dependency system.
  • Complete control through well documented configuration files. This paradigm is present throughout every part of Gentoo.
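
As a quick illustration of USE flags, here is a hypothetical configuration (the flag and package names vary, but the mechanism is real):

# /etc/make.conf – global USE flags apply to every package
USE="-gtk"

# or per package, in /etc/portage/package.use
app-editors/gvim -gtk

# preview what a build would pull in before committing to it
emerge --pretend app-editors/gvim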

The problem is that, lately, it has been taking a while to get updated ebuilds into Portage for many popular apps and libs. Some examples:

  • KDE 4.1 – released 7/29, ebuilds available in portage approx. 2 months later
  • Boost 1.36.0 – released 8/14, no ebuild in portage
  • Boost 1.35.0 – released 3/29, still hard masked
  • GCC 4.3.2 – released 8/27, available in portage 10/4

This varies by package. Some package maintainers are good about staying on top of new releases, some are not so good.

Now, the lack of KDE packages for 2 months caused quite a ruckus, but all in all, things aren’t too bad at this point. Some distros, such as Arch Linux, are a bit more responsive, but that’s OK. I believe this is the beginning of what will become a larger problem, though. Gentoo without a responsive rolling release cycle is bound to fail. I’m a firm believer in identifying problems, and subsequently fixing them, while they are small rather than when they are large.

So what is the problem? On the surface, it seems to be a lack of manpower. It was actually described on the Gentoo homepage as a “severe lack of manpower”. Open source projects have to be rooted in a strong community of volunteers, and a diminishing community will certainly lead to the demise of a project. But why is the community diminishing? What is the root of the problem? The answer, I believe, is twofold – organization and marketing. Both have suffered in recent years.

This is exemplified by many package maintainers either leaving the project voluntarily or being asked to leave. In fact, this is what happened to the KDE release team. DistroWatch.com has highlighted the cons of the distro as: “the project suffers from lack of direction and frequent infighting between its developers”. I am not in any way involved with the Gentoo project as an organization, so I can’t speculate as to how to solve this problem. All I can say is, they need to get themselves out of self-destruct mode while they still can.

The other major, and also related, problem is the recent degradation of the Gentoo “brand”. Gentoo was once considered a “sexy” distro amongst hardcore Linux users and other geeks, due to its highly configurable nature and rather demanding installation procedure. But that brand image has diminished – partly because of the organizational issues mentioned above, and partly because Gentoo has strayed from the philosophies that made it popular in the first place. Always remember who your target user is; it has not changed. It is still the same Linux fanatic who doesn’t want to deal with the bloat or generalized nature of most desktop Linux distributions on the market. Give these people what they really want.

Written by Kris Wong

October 20, 2008 at 2:48 pm

HD Audio Encoding, Decoded

leave a comment »

Disclaimer: Before I start, I just want to state that I am no authority on this subject matter.  I’m just a guy who has done a bit of research, so that hopefully you don’t have to.

I like to consider myself a fairly smart guy… modest too.  I’ve successfully set up and configured a number of home theater and car audio systems.  So my surprise at finding myself a bit befuddled while trying to properly set up my new home theater system was not without warrant.  It can be quite overwhelming when you consider the number of variables involved in the process.  It’s no wonder people dump hundreds (sometimes thousands) of dollars into having professionals take care of this process for them.  After spending hours reading online articles, discussions, and setup/user guides, I finally have my system configured correctly.  Hopefully I’ll be able to break the process down for you.

Unfortunately, since many of the new home audio and video technologies were released onto the market before they were ready, the process can be much more difficult than it needs to be.  From what I understand, products released within the last 6 months or so should be compliant with all the latest and greatest standards.  I’ll start with a breakdown of all the variables at play.

  • Video player technology: At this point, essentially blu-ray or DVD.  This guide is limited to HD audio, i.e., blu-ray.
  • Audio encoding format: This is where it starts to get fun.  Current formats include Dolby Digital, Dolby TrueHD, DTS, DTS-HD, and DTS-HD Master Audio, to name a few.  The two most current technologies, which are both lossless, are Dolby TrueHD and DTS-HD Master Audio.
  • Audio output type (from the player): Bitstream (encoded) and LPCM (decoded), for HD audio types.
  • Connection type: HD – HDMI or multi-channel analog; S/PDIF (digital) – Optical (toslink) or coaxial.
  • HDMI specification (if using HDMI connection): Current revision is 1.3a.
  • Point of decoding (what device actually decodes the encoded audio) – Blu-ray player, receiver, or even the TV.

And there’s also the fun fact that there are relationships going on between these different variables.

You need to start by considering your technology.  Most importantly, how old is your home theater receiver?  Does it have HDMI inputs?  If it does, great – your system can likely handle at least 1, if not both, of the lossless audio formats.  If not, does it have multi-channel analog 5.1 inputs?  Optical?  If it does have HDMI, is it version 1.3?  Lastly, does it support Dolby TrueHD or DTS-HD MA?  (At this point, it probably does not.)  Here’s the list of scenarios, in order of preference:

  1. HDMI inputs
  2. Multi-channel analog
  3. Optical

Next we need to consider your blu-ray player.  You’ll need 2 invaluable resources for this: the player’s user guide, and cnet.com.  Cnet.com has tons of professional reviews of all the latest electronic equipment, and I strongly recommend you use this site when researching an electronics purchase.  It will also have all sorts of useful info on your blu-ray player.

Ideally, you want your blu-ray player to handle decoding the audio.  There are a couple reasons for this.  Blu-ray profile 1.1 (the current standard) supports both primary and secondary audio and picture-in-picture.  In order to hear both audio streams, they have to be mixed; to be mixed, they have to be decoded.  Also, as audio encoding standards change, it’s usually much cheaper to replace your player than your receiver (and possibly your speaker system).

Some people may say you want the receiver to handle the decoding because it is better at D/A conversion.  This is not true.  Decoding is done according to standard; it does not matter whether it happens on the worst or the best equipment on the market.  You need only be concerned that all audio processing happens in the receiver.  If your player cannot decode lossless audio, chances are your receiver cannot either.  Let’s start with the ideal situation:

Your blu-ray player can decode all current audio encoding formats, and your receiver has an HDMI audio input.  Congrats.  Here’s what you need to do… hook the blu-ray player to the receiver using HDMI, and configure the player to decode (and mix) the audio streams.  It will send the audio in multi-channel LPCM format to the receiver.  Then just use the HDMI out on the receiver to hook to your television.  Easy enough.  If your player can’t decode the lossless formats and your receiver can (unlikely), configure the player to output the audio in bitstream (encoded) format and hook everything up the same way.  Note that you will not be able to hear any secondary audio data with this second configuration.

If your blu-ray player can decode all current audio formats, and your receiver does not have HDMI input, you have 2 options.  First, if the player has multi-channel analog outputs, connect it to the receiver using these outputs.  Then configure the player to decode the audio as above.  Your second option (and probably the better option) is to hook up your receiver following the steps below.

Let’s say your receiver is a little bit older (like mine) and doesn’t have HDMI inputs.  You’ll have to settle for optical.  You will not hear the lossless audio streams; they will be down-converted to plain ol’ Dolby Digital or DTS 5.1.  Unless you are an anal audiophile with high end equipment, or the type of person who has to have the latest and greatest for no apparent reason, it doesn’t really matter – you won’t notice a difference anyway.  Here’s what you want to do… connect your blu-ray player to your receiver using an optical (or coaxial) cable.  Set your player up to mute HDMI audio output (HDMI carries both audio and video, and you don’t want sound coming from your TV speakers and your surround sound system at the same time).  Configure the player to output the audio in bitstream (encoded) format to the receiver.

If your TV is like mine, it may have an optical out.  You probably don’t want to use your TV to provide the audio to your receiver.  I could find no documentation on what format (encoded or decoded) my TV uses to output audio.  You may also run into the problem of sound coming from both your TV speakers and your surround sound system.

A couple more things to keep in mind when purchasing/setting up a new home theater system:

  • Check cnet.com before you buy.
  • Configuring your TV correctly makes a big difference in picture quality and power consumption.  It is also very complicated to do.  Cnet.com has the ideal settings for most TVs available on the market.  You don’t need to pay someone $300 to do it for you.

Update: You may also want to check out this FAQ.  It provides some deeper explanations to some of the topics discussed here.

Written by Kris Wong

October 10, 2008 at 5:19 pm

Automatic Deallocation With AutoPtr

with one comment

One of the major concepts in C++ that makes it so powerful, and therefore so difficult, is memory management.  Even experienced programmers sometimes struggle with allocating and deallocating memory correctly and effectively.  However, if done correctly (which is, of course, rather subjective), C++ will always be more efficient than any garbage collected language.

I recently ran into a memory management issue.  I was employing a partial caching strategy: in certain scenarios I wanted to return a pointer to memory stored in the cache, and in other scenarios I needed to allocate new memory to return because the data did not exist in the cache.  This left me with two options: copy the data, or deal with deallocating the memory in the latter case.  I chose to deal with deallocating the memory.

Initially, I considered std::auto_ptr.  Why reinvent the wheel for no good reason?  However, std::auto_ptr provides no way to specify, at construction, that the auto_ptr does not own the memory – which is something I needed.  For this reason, and simply for the learning opportunity, I wrote my own version of an auto_ptr class with the functionality I needed (I realize I could have simply inherited from std::auto_ptr to provide this, but how much fun would that have been?).  Here is the source code for this class:

template<typename _T>
class AutoPtr
{
   template<typename _T1>
   friend class AutoPtr;

public:
   //------------------------------------------------------------------------------------------------------
   // Function: Constructor
   //
   // Parameters:
   //   ptr - the allocated pointer managed by this class
   //   owned - whether the memory should be deleted upon destruction
   //------------------------------------------------------------------------------------------------------
   explicit AutoPtr(_T* ptr = 0, bool owned = true)
      : m_owned(owned),
        m_ptr(ptr) { }
   //------------------------------------------------------------------------------------------------------
   // Function: Copy constructor
   //
   // Parameters:
   //   other - object to copy (will be dissociated from the managed pointer)
   //------------------------------------------------------------------------------------------------------
   AutoPtr(AutoPtr<_T>& other)
      : m_owned(false),
        m_ptr(0) {
      // Assign in the body to make the ordering explicit: Detach() clears
      // other.m_owned, so m_owned must be copied before it is called.
      m_owned = other.m_owned;
      m_ptr = other.Detach();
   }
   //------------------------------------------------------------------------------------------------------
   // Function: Copy constructor
   //   Similar to above copy constructor, but works for types that are convertible to _T.
   //
   // Parameters:
   //   other - object to copy (will be dissociated from the managed pointer)
   //------------------------------------------------------------------------------------------------------
   template<typename _T1>
   AutoPtr(AutoPtr<_T1>& other)
      : m_owned(false),
        m_ptr(0) {
      // Assign in the body to make the ordering explicit: Detach() clears
      // other.m_owned, so m_owned must be copied before it is called.
      m_owned = other.m_owned;
      m_ptr = other.Detach();
   }
   //------------------------------------------------------------------------------------------------------
   // Function: Destructor
   //------------------------------------------------------------------------------------------------------
   ~AutoPtr() {
      if (m_owned) delete m_ptr;
   }

   _T* operator->() { return m_ptr; }
   _T& operator*() { return *m_ptr; }
   operator _T*() { return m_ptr; }
   const _T* operator->() const { return m_ptr; }
   const _T& operator*() const { return *m_ptr; }
   operator const _T*() const { return m_ptr; }

   //------------------------------------------------------------------------------------------------------
   // Function: operator=
   //
   // Parameters:
   //   rhs - object to copy (will be dissociated from the managed pointer)
   //------------------------------------------------------------------------------------------------------
   AutoPtr& operator=(AutoPtr& rhs) {
      bool owned = rhs.m_owned;
      Reset(rhs.Detach(), owned);
      return *this;
   }
   //------------------------------------------------------------------------------------------------------
   // Function: operator=
   //   Similar to above operator=, but works for types that are convertible to _T.
   //
   // Parameters:
   //   rhs - object to copy (will be dissociated from the managed pointer)
   //------------------------------------------------------------------------------------------------------
   template<typename _T1>
   AutoPtr& operator=(AutoPtr<_T1>& rhs) {
      bool owned = rhs.m_owned;
      Reset(rhs.Detach(), owned);
      return *this;
   }

   //------------------------------------------------------------------------------------------------------
   // Function: Detach
   //   Disassociates the managed pointer from this instance.
   //
   // Parameters:
   //   None
   //
   // Returns:
   //   _T*
   //------------------------------------------------------------------------------------------------------
   _T* Detach() {
      _T* t = m_ptr;
      m_ptr = 0;
      m_owned = false;
      return t;
   }
   //------------------------------------------------------------------------------------------------------
   // Function: Reset
   //   Updates this instance to manage a new allocated pointer. Will free any previously owned pointer.
   //
   // Parameters:
   //   ptr - the allocated pointer managed by this class
   //   owned - whether the memory should be deleted upon destruction
   //
   // Returns:
   //   None
   //------------------------------------------------------------------------------------------------------
   void Reset(_T* ptr = 0, bool owned = true) {
      if (m_owned) delete m_ptr;
      m_owned = owned;
      m_ptr = ptr;
   }

protected:
   // variable: m_owned (protected)
   //   Whether or not we own the memory (and should therefore deallocate it).
   bool m_owned;

   // variable: m_ptr (protected)
   //   The pointer.
   _T* m_ptr;

protected:
   // The following is a helper class and supporting methods that allow AutoPtr to
   // support reference semantics. They are not intended to be consumed publicly.
   struct PtrWrapper
   {
      bool m_owned;
      _T* m_ptr;

      explicit PtrWrapper(bool owned, _T* ptr)
         : m_owned(owned),
           m_ptr(ptr) { }
   };

public:
   AutoPtr(PtrWrapper wrapper)
      : m_owned(wrapper.m_owned),
        m_ptr(wrapper.m_ptr) { }
   AutoPtr& operator=(PtrWrapper rhs) {
      Reset(rhs.m_ptr, rhs.m_owned);
      return *this;
   }

   operator PtrWrapper() {
      bool owned = m_owned;
      return PtrWrapper(owned, Detach());
   }
};

Now, this type of thing has been done a thousand times in the past, but that’s OK. I’ll take the opportunity to walk through the code anyway. I’ll assume you have at least some basic template knowledge.

We start with a basic constructor and copy constructor, easy enough.  Next is something slightly more interesting:

template<typename _T1>
AutoPtr(AutoPtr<_T1>& other);

This is a copy constructor that allows us to copy from an AutoPtr of a convertible type – i.e., an AutoPtr of a child class type being passed to the constructor of an AutoPtr of its parent class type.  We then have some simple operators that give the class pointer semantics; these operators are what allow us to treat the AutoPtr class just like a real pointer.  We then have 2 operator=’s, which mirror the 2 copy constructors.
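
For instance, here is a minimal (and hypothetical) illustration of the convertible-type copy constructor – note the virtual destructor, which makes deleting through the base pointer safe:

struct Shape { virtual ~Shape() { } };
struct Circle : public Shape { };

int main()
{
   AutoPtr<Circle> circle(new Circle);
   AutoPtr<Shape> shape(circle);   // Circle* converts to Shape*; 'circle' no
                                   // longer owns the memory after this line
}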

Next we have:

_T* Detach();

This method relinquishes the AutoPtr instance’s ownership of the managed memory, handing it back to the caller. It is, of course, used in the copy constructors and operator=’s.  And:

void Reset(_T* ptr = 0, bool owned = true);

Which instructs the AutoPtr instance to manage new memory, first destroying any previously managed memory (if it was owned).
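
A quick sketch of both in action (the Buffer type is hypothetical):

struct Buffer { };

int main()
{
   AutoPtr<Buffer> guard(new Buffer);
   Buffer* raw = guard.Detach();   // guard releases the memory; 'raw' is now ours
   guard.Reset(new Buffer);        // guard owns a different allocation
   delete raw;                     // we clean up the detached pointer ourselves
}                                  // guard's destructor frees the second Buffer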

The PtrWrapper type is also interesting.  This simple struct gives AutoPtr reference semantics:

AutoPtr<MyClass> getMyClass()
{
   return AutoPtr<MyClass>(new MyClass);
}

int main()
{
   AutoPtr<MyClass> ptr = getMyClass();
}

Without this struct, we would not be able to properly manage the new’d instance of MyClass when returning from getMyClass.  What actually happens here is:

  • The returned AutoPtr is implicitly converted to a PtrWrapper.
  • The PtrWrapper is implicitly converted back to an AutoPtr.

This allows us to correctly remember whether or not we own the allocated memory, while not destroying it when returning from getMyClass.

This class is especially useful in complex methods that “save” data, where errors can essentially happen at any point.  If memory is allocated and managed with AutoPtr, we do not have to worry about cleaning up the allocated memory on various code branches.
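
To tie it back to the caching scenario that motivated the class, here is a rough sketch (all of the types and helper functions are hypothetical):

struct Record { };

struct Cache
{
   Record* Lookup(int) { return 0; }           // stub: nothing is cached
};
Cache g_cache;

Record* LoadFromDisk(int) { return new Record; }

AutoPtr<Record> FindRecord(int key)
{
   Record* cached = g_cache.Lookup(key);
   if (cached)
      return AutoPtr<Record>(cached, false);   // not owned: the cache keeps it
   return AutoPtr<Record>(LoadFromDisk(key));  // owned: freed when the caller is done
}

int main()
{
   AutoPtr<Record> rec = FindRecord(42);       // returned via the PtrWrapper conversion
}                                              // deleted here only if newly loaded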

Written by Kris Wong

October 6, 2008 at 9:25 am

What’s With Wireless?

with one comment

Borrowing a phrase from Jerry Seinfeld, I have to rant a bit… what’s with wireless?  Being an avid Linux fan, I’m used to wasting hours troubleshooting obscure, seemingly impossible issues.  But I always seem to have some sort of problem with my wireless configuration.  It’s not that complicated – why does it never seem to work quite right?

I had a Linksys router and wireless NIC that just did not want to talk to each other.  The signal wasn’t bad, I had an IP address, and my gateway and DNS servers were all set up correctly through DHCP, yet for some reason I could not even ping my router.  I knew each piece worked independently (through testing against other devices), just not together.  So I used Linksys’s live online help.  The support person told me the NIC I was using doesn’t support Windows Vista 64-bit.  1. Vista has been around for nearly 2 years now – get with the program.  2. It was working before, so it’s obviously possible to get it to work.  Let’s just say I returned that hardware for some Netgear equipment.

Of course I got the “RangeMax” version of the wireless router.  It’s supposed to work throughout the entire house.  I have a 2 bedroom apartment.  I get almost no signal on my desktop that’s maybe 30 – 40 feet away from the router, with no major obstructions in the way.  Maybe it just wasn’t meant to be. =/

UPDATE: After looking into the issue a little more, there appear to be 10 – 20 wireless networks at my apartment complex in range of my desktop.  Since there are only 3 wireless channels that do not interfere with one another (1, 6, and 11), this is obviously an issue.  Eventually I had to purchase a separate D-Link wireless antenna to increase the coverage at my desktop, and switch my wireless to the channel with the least amount of interference.  The net result: I have a decent signal, and am getting 6000+ Kb/s on dslreports.com.  Another option would have been to go with equipment that supports 802.11n, but this standard is still in draft and the equipment is quite expensive.

Written by Kris Wong

October 3, 2008 at 4:54 pm

Hello World

leave a comment »

“Hello World”.  It works as the first program we all wrote, it works for my first blog post.  After creating this blog about 10 months ago, I have finally decided to start posting content to it.  Go me.  Hopefully it doesn’t become just another hopeless blog that never gets updated.  Pretend you care – cross your fingers. =]

Written by Kris Wong

October 3, 2008 at 3:43 pm

Posted in Uncategorized