Friday, July 15, 2011

Thoughts on Harry Potter and the Deathly Hallows: Part II

Let me be upfront and state that this is not a movie review, nor a review of any sort. You can find plenty of those all over the internet -- or hear them from others. I just wanted to take the opportunity provided by the fervor surrounding the release of the last Harry Potter film (Harry Potter and the Deathly Hallows, Part 2) to reflect on the entire series and my past experiences with it.

In more ways than one, JK Rowling's epic is special to me. For one, I am part of the generation that grew up with Harry Potter: we are roughly the same age and, more importantly, shared the same school years. Harry Potter's place in my life story is further highlighted by the fact that I lived for more than 3 years in England (Great Britain). In those 3 years, from 1999 to 2002, I witnessed the meteoric rise of Harry Potter from a popular children's book into a cultural and global icon. Of course I became a huge fan, along with many of my friends. It is incredibly nostalgic, therefore, to see the (final?) conclusion of the series. Harry Potter and the Deathly Hallows, Part 2 is the last film to be made from Rowling's now-timeless classic.


Two memories in particular come to mind when I look back now on my past experiences with Harry Potter. The first occurred in the sixth grade, a few months after I arrived in England from Portugal. By this time (winter of 1999), the fourth book in the series (Harry Potter and the Goblet of Fire) had already been released -- which I remember because I recall a classmate reading it during our "reading periods" in school. The series had already gained considerable traction in England but, using myself as an example, it had yet to spread to other countries. The memory involves me asking the teacher what Harry Potter was, which was overheard by my classmates -- who thereafter wore very incredulous looks. It was obvious they could not believe what they had just heard: that someone did not know who Harry Potter was! I learned quickly and became an addict of the books, but this specific memory remained.

The second memory follows the release of the first Harry Potter film (Harry Potter and the Philosopher's Stone). I recall that there was considerable craze, especially in schools, about going to watch the newly released film. Since I had become a "Harry Potter convert", I schemed with a friend to go watch it. Fortunately his parents were able to take us, and I remember we were extremely excited. Lines were long -- for the showing after ours, the line went around an entire block! But it was not a positive experience. Like many others at the time, I was disappointed by the movie's shallowness and the general unimpressiveness of the special effects (to be fair, this was 2001). I have since singled out the Nimbus Two Thousand, the greatest broom for half of the epic, for resembling some old stick. I suppose my experience supports an old adage: movies rarely do justice to the books they are based on. For better or for worse, I have since refused to watch any of the subsequent Harry Potter movies.

As it currently stands, I have no intention of watching this final movie. I hear it's good, but I feel that many viewers are watching it for reasons similar to those expressed above (i.e. sentimental ones).

This last one should be better than the previous ones, as Time Warner (wisely) split the final book into two separate movies. If it did not seem outright greedy, every book should have been split into two movies. The overall continuity would suffer, but I believe the resulting experience would more than justify such an action. In simple economic terms, the demand for Harry Potter films is relatively inelastic -- meaning the population of viewers stays pretty much the same regardless. Once a Potter fan, always a Potter fan. But carrying out my vision would be a logistical nightmare: it would span at least a decade and require the involvement of all the important actors and actresses to come to fruition.

Tuesday, July 12, 2011

Apple iPad 3 Preview

[I'm going to try something new in this blog post. Instead of the usual commenting on reports from other technology-oriented blogs, I will try something more original...]

Nope, I do not have special insider information about Apple's imminent iPad 3. Nor do I have one on hand to tinker with -- and afterward write about my experiences with it. This post is simply a set of calculated predictions about the iPad 3.

Are you disappointed? Well, I frankly do not see a reason to be. Rumor mills on the iPad 3 have been running rampant lately, with all sorts of reports floating around about the expected arrival time or supply predictions. One point I am proving to you (as in, right now!) is that rumors are just that: rumors. The culture Steve Jobs has so carefully constructed at Apple means that no Apple employee is going to leak anything involving the device -- not even, I suspect, with a gun to their head. Unless a third party outs something, or Apple itself releases information, the best information we have is just a series of calculated predictions.

And there is nothing wrong with making calculated predictions, especially if they are made with honest intent. This is precisely what I am about to do. Although I previously may have expressed interest in alternatives to Apple's now-iconic product, there are still no viable alternatives. The Asus Transformer was arguably the strongest contender to the iPad 2, but it has gotten a bad rap over build quality and the immaturity of Google's Android Honeycomb OS. Nonetheless, I am very curious to see what Amazon has in store.

Here's the list of my calculated suggestions (most of which are no-brainers) for the iPad 3:
  1. Yes to "Retina" display -- Apple would be stupid not to do this. They need to bring the same pixel density used in the iPhone 4 to the iPad 3, if the goal remains to generate substantial revenue. Competitors are enhancing the resolution of their respective screens, and Apple can ill afford not to follow suit in this increasingly competitive market.
  2. No to quad-core processor -- this one will not happen, because the technology is not there yet. Not enough time has passed to integrate a quad-core processor successfully into the next iPad (they could do it, but the cost would be too high). The iPad 3 will retain the same A5 processor used in the iPad 2.
  3. Yes to 1GB+ of RAM -- one of the flaws of the iPad 2 was the pairing of a powerful processor with insufficient system memory. The result was touted performance that could not be replicated in real life. Again, competitors are already pairing their devices with 1GB of memory.
  4. No to an "iPad Mini" -- Steve Jobs himself has declared that there is no market for tablets with screens smaller than 10 inches. What else needs to be said? This one is pretty obvious, unless Apple plans to phase out the iPod Touch completely in favor of the iPad.
  5. No to a lighter or smaller device -- there really isn't any point to doing this, in addition to the danger of constraining the available space for circuit boards. I believe the iPad 3 will have dimensions and weight very similar to the iPad 2's.
  6. Yes to the same pricing scheme -- unless a quad-core A6 processor is integrated into the iPad 3, I expect prices to remain the same. This means $499 for the smallest-capacity iPad 3 (probably the same 16GB of storage) and $599 for the next tier up.
  7. Yes to a late October arrival date -- Steve Jobs is too smart not to realize that a November or December release date would mean shortages and frustrated customers. If the release date is before late October, there is a great risk of cannibalizing sales of the current iPad 2. In addition, this date allows Apple to compete directly with Amazon's rumored tablet.
So there you have it: a list of expected changes for the Apple iPad 3. The keyword now is... patience.

Sunday, July 10, 2011

Overuse of Apologies and "Sorry"

A couple of months ago, I read an article in Men’s Health magazine about being prepared to “Master Any Disaster”. The article overall was not very impressive, but I remember one specific subtopic as being very much to the point. That subtopic was titled “Don’t Apologize for Anything – Ever” and, as its title suggests, was about how our culture has become overly apologetic – to the point of throwing out apologies with little meaning attached. Although the author is a little extreme in advocating that we never offer apologies, I think there is nonetheless merit in what he is advising.

We hear the words “I’m sorry” or “I apologize” thrown out in everyday situations. Some of us are probably even guilty of such behavior – I think I apologize more than the average individual. And that is the simple truth: we are a nation of apologizers. There are no perfect people but, perhaps in our quest to attain perfection, we compensate for our shortcomings or screw-ups by apologizing for them. For one, apologizing often does make the culprit feel better by creating a sense of exuding virtuous behavior (e.g. the “I am being the better person” attitude). But the act of apologizing rarely elicits the desired effect of being forgiven for that shortcoming or screw-up – at least via the casual utterance of “sorry”. Instead, I will argue that the act of apologizing often has the opposite of the intended effect on the audience.

How can apologizing have the opposite effect than we intend? There are three main reasons. First and foremost, no apology is ever considered sufficient for the mistake committed. Examples include Tiger Woods’s constant apologies for his misdeeds, or the frequent homophobic slurs uttered by athletes and politicians alike. How can words ever bring wholeness to something broken? What we care to see is action, or a decisive commitment to act in the future – not repeated press conferences held to apologize. The second reason builds on the first: the delivery and the number of times we apologize. If we are always apologizing, then those we are apologizing to will quickly grow tired of hearing us. In addition, people often do not take apologizing seriously – it takes more than just “I’m sorry about ___”. One should offer an apology with sincerity and a reason for it, and be unwilling to walk away without an acknowledgement (not necessarily a resolution) from the audience. The third reason is that apologies generate demand for future punishment. Unfortunately, we tend to be vengeful individuals; by apologizing, we give others a chance to exact revenge on us.

On a personal level, I have been reflecting on this lately. At the workplace, for example, I sometimes apologize for screw-ups but often do not hear back about them. Much as I desire to be seen as responsible, I fear the apologies become, in coworkers’ eyes, a reflection of my incompetence. This presents a dilemma: how do you project an image of responsibility and accountability without being remembered negatively for the screw-ups? I think the first step is reducing the number of times I apologize; the second is to be sincere when I do (pretty much what I advocated in the previous paragraph). And in my last email to Mandy, I definitely apologized a bit too much – a lot of self-blame and expressions of guilt. In addition, a couple of days ago I heard a string of “I’m sorry” from Comcast customer service representatives about scheduling errors.

The author of the article brought up an interesting example of model behavior on the issue of apologizing: George Bush. The former president was ridiculed throughout his presidency for the mistakes he made but, true to the author’s claim, he never (or at least very rarely) admitted failure.

Cuisine Fix: Hot Dogs

As announced in this post, I will be putting my culinary skills on display by documenting and blogging about the food I make. Not much has happened since the initial announcement due to a few factors: I have been eating a lot of home-cooked food, I have lacked the time to purchase proper ingredients, and I have generally been short on time. The first is attributable to being on the receiving end of family-cooked (frozen) meals; the second to the erratic happenings of moving apartments; and the third simply to the busyness of things. But fear not, readers! I am here to show you... HOT DOGS!


Okay, so you are probably laughing at the nature of the food I have presented. Hot dogs are amongst the easiest foods one can prepare, and generally lack much nutritional value. But I was craving some hot dogs this past week -- and decided that one blog post is better than none.

The cost of the ingredients added up to about $3-4 total, which makes about 8 hot dogs (for the math-inclined, that's about 50 cents per hot dog). The breakdown is as follows: $1 for a pack of franks, $1 for a bag of buns, $1 for a sweet onion, and $1 for the energy usage as well as the ketchup. Preparing the hot dogs is extremely easy -- microwave a frank, stick it into a bun, chop some onion on top, and layer with ketchup. [At least this is my preferred way of eating a hot dog.] You can customize to your heart's content by adding things like mustard, or grilling the frank instead of microwaving it.


The corn that you see in one of the pictures is another extremely easy addition. Buy ears of corn (the current average price is about 10 for $2), shuck them, boil them in water for 5-7 minutes, and simply enjoy. I liked the combination of hot dog and corn. Not the healthiest thing in the world, but it takes only a couple of minutes of actual effort to prepare -- very useful when time is of the essence.

Friday, July 8, 2011

The Economics of Usage-Based Data Plans

If you are a smartphone owner/user, then the term “data plan” should be a part of your daily vocabulary. All network carriers in the United States, such as AT&T, Verizon Wireless, and Sprint, require the customer to add a data plan if the cell phone of choice is a smartphone. This makes basic sense – smartphones consume data for functions like checking email or browsing the internet.

Yet until this past year, all the major carriers operated on a “one-size-fits-all” model for the data plans they offered. The premise was that, regardless of the amount of data one consumed, a fixed fee was levied (normally between $30 and $40). One assumption was that because all smartphones use data, the exact amount of data consumed is ultimately insignificant. It was also more profitable to charge a fixed amount for an unlimited data plan, as the population of smartphone owners was smaller than that of “dumb phone” owners (I am casting out a lot of jargon).

But this year, things have begun to change. Carriers have, one by one, begun to shift their model from “one-size fits all” to usage-based. The word “unlimited” no longer applies to data plans – unless the customer is willing to pay an arm and a leg for the privilege. If my memory serves me right, AT&T was the instigator of these changes: they offered a two-tier model of data plans, one much smaller than the other (250MB vs. 5GB?). Now it appears Verizon has proceeded to follow suit (see left).

I am a strong proponent of investigating the basis for change when change does occur. In this case, it is pretty obvious that the network carriers are being greedy and “want to screw the customer!” I agree with this sentiment, but there must be more fact-based explanations. For one, the population of smartphone users has grown from a niche market into the mainstream – people everywhere are opting for smartphones when given the option. The multi-sided battle between Apple’s iOS, Google’s Android, Microsoft’s WP7, and HP’s webOS has been fought heavily, over and over again. The result is greater strain on the capacity of the carriers’ data networks; everyone wants to be connected, to Facebook and tweet on the go. In addition, data service has become split into the 3G and 4G worlds (the latter being much faster than the former). All in all, it no longer seems fair to charge the same price for different usages and different speeds.

From an economic perspective, I am surprised it has taken this long for carriers to catch on. [T-Mobile and Virgin Mobile may be touting their unlimited data plans right now, but I think this will change.] The reason lies in the centuries-old practice of price discrimination. While outright (first-degree) price discrimination is illegal in most countries, companies are allowed second-degree and third-degree price discrimination. In the case at hand, network carriers seem to be applying second-degree price discrimination – which entails creating a menu of price points and having customers voluntarily self-select by choosing one. As an example, if I know I consume more than 2GB but less than 5GB of data per month, I can opt for the 5GB plan outright. In this way, the network carrier can capture more of the consumer surplus than before – while also opening up the market to new potential customers.
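To make the self-selection logic concrete, here is a minimal sketch in Python. The tier prices echo figures mentioned elsewhere in this post ($10, $30, $50, and the old roughly $30 flat fee), but the usage and willingness-to-pay numbers are invented purely for illustration; this is a toy model, not carrier data or an actual pricing formula.

```python
# Toy model of menu ("tiered") data-plan pricing vs. the old flat plan.
# Plan prices echo the post; the customers and their usage are hypothetical.

FLAT_PRICE = 30.0                                   # old one-size-fits-all plan
TIERS = [(0.075, 10.0), (2.0, 30.0), (5.0, 50.0)]   # (cap in GB, monthly price)

# Each hypothetical customer: (monthly usage in GB, willingness to pay per month).
CUSTOMERS = [(0.05, 12.0), (1.2, 35.0), (1.8, 40.0), (4.5, 80.0)]

def cheapest_covering_tier(usage_gb: float) -> float:
    """Return the price of the cheapest tier whose cap covers the usage."""
    for cap, price in TIERS:
        if usage_gb <= cap:
            return price
    return TIERS[-1][1]  # very heavy users land on the largest tier

def flat_revenue() -> float:
    # Under flat pricing, a customer subscribes only if $30 is worth it to them.
    return sum(FLAT_PRICE for _, wtp in CUSTOMERS if wtp >= FLAT_PRICE)

def tiered_revenue() -> float:
    # Under menu pricing, each customer self-selects the cheapest tier that fits.
    return sum(cheapest_covering_tier(usage)
               for usage, wtp in CUSTOMERS
               if wtp >= cheapest_covering_tier(usage))

print(f"Flat pricing revenue:   ${flat_revenue():.0f}")    # $90  (light user opts out)
print(f"Tiered pricing revenue: ${tiered_revenue():.0f}")  # $120 (everyone buys a tier)
```

In this made-up example the menu does exactly what the paragraph above describes: the heavy user pays $50 instead of $30 (more surplus captured), and the very light user who would have skipped a $30 plan entirely now buys in at $10 (a new customer gained).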

The reason I say I am surprised is that all network carriers have long practiced this kind of price discrimination. Think about it for a second: aren’t these new usage-based data plans just voice plans, only for data instead of minutes? Network carriers have been charging more for the privilege of talking longer or texting more for decades! Perhaps the increasing data-hungriness of smartphones has finally forced their hand on data as well. Other common examples of second-degree price discrimination are utilities such as electricity and gas, which are billed based on how much one uses.

Don’t get me wrong – I am by no means happy that usage-based data plans have become widespread. Charging $30 for 2GB or $50 for 5GB of data is absolutely ludicrous. And is it realistic for anyone to opt for the 75MB-for-$10-a-month plan? I’d call it an obvious trap for customers – and very good grounds for lawsuits against the carriers. But I wanted to take a step back, detach for a second, and look at the economic rationale behind these actions.

Thursday, July 7, 2011

Group Housing

Now that I am happily living in a 2-bedroom apartment, I would like to reflect on my housing experiences over the past 6 months – mainly on the phenomenon known as “group housing”. It’s a fairly common means of housing, though it seems specific to the Washington D.C. metropolitan area.

Before I came to live in the Washington D.C. area, I had no idea what a “group house” was. The term refers to a setup where different individuals enter into an agreement to live together; these individuals are often complete strangers at first. (Roommates may already know each other to begin with, but I find that is rarely the case.) For recent college graduates (like myself), an accurate analogy would be living in a college dorm: everyone has their own private room, but shares common areas such as the kitchen, bathroom, living area, and laundry facilities (if any). A “group house” also doesn’t have to be an actual house – large apartments can also accommodate similar living arrangements.

The main incentive for strangers to live together is to save on rent. In expensive areas such as Arlington County in Virginia, splitting a rented house can be significantly cheaper than renting an apartment on your own. Sure, the upfront cost of rent may be higher (e.g. $2,100 a month for a 3-bedroom house), but the individual rent would be much lower (e.g. $700 per month). The key is finding the right individuals to live with. I’d consider group housing an excellent example of what happens when free-market economics prevails: voluntary entry into housing agreements with others. The prevalence of group housing could also be due to the basic economics of supply and demand – in older areas like Arlington County, there are many more houses than apartment complexes.
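For the arithmetic-minded, here is a tiny Python sketch of the rent split described above. The $2,100-a-month, 3-bedroom house is the example from this post; the larger group sizes are hypothetical.

```python
# Minimal sketch of the group-house rent split (illustrative numbers only).

def per_person_rent(total_monthly_rent: float, housemates: int) -> float:
    """Split the whole-house rent evenly among the housemates."""
    return total_monthly_rent / housemates

# The example above: a $2,100/month 3-bedroom house split three ways.
print(per_person_rent(2100, 3))  # 700.0

# The "more heads, cheaper rent" rule of thumb, for the same hypothetical house:
for n in range(3, 8):
    print(f"{n} housemates -> ${per_person_rent(2100, n):.0f} each")
```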

A “group house” can vary significantly in size, location, and demographic. In terms of size, they can range from as few as 3 individuals to 7 or more. The basic rule of thumb is: the more people, the cheaper the rent (more heads to spread the costs over). Although you might think group houses would exist only in expensive locales, in practice they exist almost everywhere. I believe the reason for this is the sheer range of individual income levels: sometimes one cannot afford to live close to a Metro station, let alone live alone. In terms of demographic, while group houses tend to be single-gender and centered around a specific age group, they can be a mixture of individuals of different ages and genders. Once again, the reality of a tight budget sometimes forces us to seek housing opportunities outside of our ideals.

So what are the pros and cons of group housing? I think there is one and only one advantage: cheaper rent. That being said, there is also one and only one disadvantage: lack of privacy. The latter, though, is really layers of issues rather than a single one. Having lived in group houses myself, I will say the biggest challenges have been maintaining cleanliness and keeping noise down. There is definitely an inverse correlation between cleanliness and the number of people living together – the more people, the less clean and the noisier the house. This is what is known as the “tragedy of the commons” in economic speak: the individual incentive to keep things up decreases as the responsibility for upkeep is spread across more people. Fortunately, a significant percentage of group houses (try to) resolve the cleanliness problem by scheduling a cleaning service on a weekly or bi-weekly basis.

Overall, group housing is born out of economic necessity rather than personal preference. I think it is easier to adapt to the younger a potential roommate is.

Matthew 6: "Do Not Worry"

25 Therefore I tell you, do not worry about your life, what you will eat or drink; or about your body, what you will wear. Is not life more important than food, and the body more important than clothes? 26 Look at the birds of the air; they do not sow or reap or store away in barns, and yet your heavenly Father feeds them. Are you not much more valuable than they? 27 Can any one of you by worrying add a single hour to your life?

28 And why do you worry about clothes? See how the flowers of the field grow. They do not labor or spin. 29 Yet I tell you that not even Solomon in all his splendor was dressed like one of these. 30 If that is how God clothes the grass of the field, which is here today and tomorrow is thrown into the fire, will he not much more clothe you – you of little faith? 31 So do not worry, saying, ‘What shall we eat?’ or ‘What shall we drink?’ or ‘What shall we wear?’ 32 For the pagans run after all these things, and your heavenly Father knows that you need them. 33 But seek first his kingdom and his righteousness, and all these things will be given to you as well. 34 Therefore do not worry about tomorrow, for tomorrow will worry about itself. Each day has enough trouble of its own.

I am admittedly a profound worrier. Perhaps because of past life experiences, I often worry about the present and the future. For example, I worry (albeit much less now than before) about Mandy, about possible job changes, about making enough money, and about my family’s wellbeing. It’s foolish in hindsight, because I have no control whatsoever over most of the things I worry about – yet I persist in worrying.

Do you sometimes feel that a specific passage in the Bible was written just for you? Well, this is the passage for me. During my college years, my pastor introduced this passage to me so that I could worry less. I have ever since read this passage to myself whenever I find myself worrying.

In the passage itself, the main message is – as its title suggests – “do not worry”. It is a teaching Jesus passed down to his disciples, to remind them that our Father is in absolute control and that we ought not to worry about things. Jesus points out how even the little things are taken care of: the flowers, the birds, and the grass of the field. The message is that, just as He cares for the rest of creation, God will take care of us. Our instructions are simple to follow – instead of indulging in the worries of this life, we are to seek “first his kingdom and his righteousness.”

Ultimately, the issue is how much you trust God with your life. If you truly believe that God is in control, then you would have no worries about anything. This is something I am still dealing with and beginning to understand better. At the heart of the matter, our sinful nature means that there will always be things that cause us to worry or draw our attention away from what is most important. Sometimes we become myopic and believe that if only a particular concern were resolved, we would be free to deepen our relationship with God. But this is completely wrong. For one, we should not be testing God in such a way; and two, we are making false promises. I know that in my own life, I thought for the longest time that had I found the right job, everything else would be okay. But what ended up happening was that, after finding a job, my worry became Mandy.