An Engineer

An Instance of Perspective

Archive for the ‘Engineering’ Category

Phanfare now backing up photos and videos to Amazon S3

with 23 comments

I am happy to announce that we have moved our backups to Amazon’s Simple Storage Service, known as S3. All current backups go to S3 and we are copying over historical data. We currently have about 20 terabytes at Amazon and will have about 40 terabytes when all the data is moved over.

We also maintain a copy of customer photos and videos on our RAID servers in our NJ datacenter. Amazon promises multi-data center redundancy for S3 data, so Phanfare customers now have the peace of mind of knowing that their data is in at least three datacenters, on opposite coasts of the US (NJ and WA).

The natural question is, why did we do it? We did it because we wanted to provide the assurance of off-site backup and because the engineering costs (time and money) of building out something similar to S3 exceed any cost savings we might have realized by managing the storage ourselves over the medium term.

We actually get more redundancy than we had before. Previously, we backed up data to a second set of RAID servers in our NJ datacenter. Those servers were cheaper to operate than Amazon S3 assuming a 2-year amortization, but they did not provide the same level of geographic or physical redundancy. So for us, using Amazon was not cheaper, but it was better. Including the opportunity cost of working on Phanfare’s core products versus working on offsite backup, using Amazon is a definite strategic win for us.
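
To put rough numbers on that comparison (a back-of-envelope sketch: $0.15 per GB-month was S3’s published 2007 storage price, while the in-house figures are assumptions for illustration, not Phanfare’s actual costs):

```python
# Back-of-envelope comparison of S3 storage cost vs. self-hosted RAID.
# S3's published 2007 price; the self-hosted numbers are illustrative assumptions.
S3_PRICE_PER_GB_MONTH = 0.15      # USD, Amazon's 2007 list price
TERABYTES = 40
GB = TERABYTES * 1024

s3_monthly = GB * S3_PRICE_PER_GB_MONTH

# Hypothetical in-house setup: $15,000 of RAID hardware amortized over
# 2 years, plus assumed power/colo overhead -- purely illustrative.
HARDWARE_COST = 15_000
AMORTIZATION_MONTHS = 24
OVERHEAD_PER_MONTH = 2_000
inhouse_monthly = HARDWARE_COST / AMORTIZATION_MONTHS + OVERHEAD_PER_MONTH

print(f"S3:       ${s3_monthly:,.0f}/month")
print(f"In-house: ${inhouse_monthly:,.0f}/month")
```

Under these assumed figures the in-house servers come out cheaper on paper, which is consistent with the point above: the win is strategic (geographic redundancy, engineering focus), not financial.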

To make Amazon actually lower our overall long term costs, we would need to stop storing the data ourselves, instead just caching hot data. We have competitors that do that and it would be cheaper, but we are not positive it would be better. After all, right now, Amazon does not provide a Service Level Agreement (SLA) or even a phone number to call if you are unhappy with the Amazon web service. I don’t expect that Amazon will ever lose our data of course, but we would like an SLA before we bet our customers’ data on that.

Amazon’s web services are game-changing, especially for smaller companies. They allow small companies to have a cost position that rivals some of the biggest online competitors. Amazon’s web services also lower the cost of entry for new startups and hence increase competition and foster innovation. Both these things are good for consumers, and we applaud Amazon for embarking on their ambitious plan of providing storage and compute in the cloud for other companies. I know they are also trying to amortize their own costs of development, but for us it is wonderful. With proper SLAs, we would consider using Amazon’s Elastic Compute Cloud (EC2) too.

EC2 enjoys local area network (LAN) latency and bandwidth to S3 storage and that would make S3 that much more attractive as primary storage for Phanfare. One of the first rules of building a high performance system is to keep compute close to the data it operates on, and hence without using EC2, we would always need to cache data on our side for performance. The latency between NJ and Seattle is too long otherwise.
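
A rough illustration of the arithmetic (the latency figures are ballpark assumptions, not measurements: roughly half a millisecond within a facility versus tens of milliseconds cross-country):

```python
# Why compute must sit near storage: total time to fetch 200 thumbnails
# one at a time, under assumed round-trip latencies (illustrative numbers).
LAN_RTT_MS = 0.5        # EC2 to S3, same facility (assumed)
WAN_RTT_MS = 80.0       # NJ to Seattle, cross-country (assumed)
REQUESTS = 200          # e.g., thumbnails for one album page

lan_total = REQUESTS * LAN_RTT_MS / 1000   # seconds
wan_total = REQUESTS * WAN_RTT_MS / 1000

print(f"LAN: {lan_total:.1f} s, WAN: {wan_total:.1f} s")
```

At those assumed figures, a page that renders almost instantly inside Amazon’s network takes many seconds across the country, which is exactly why we would need a cache on our side without EC2.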

If you think about it, Phanfare does for consumers what Amazon does for us. Just as it would be difficult and expensive for a consumer to build a system to store his photos and videos into the cloud, accessible from anywhere and backed up in geographically distributed locations, it would be difficult and expensive for Phanfare to replicate Amazon’s level of web infrastructure.


Written by erlichson

July 12, 2007 at 2:22 pm

Music to my ears

with one comment

We just released new versions of Phanfare for the Mac and PC that you will get automatically the next time you start Phanfare. Here is what is new:

  • Add your own music to slideshows. You can now upload music in .mp3, .wma, .m4a, and .ogg format. On the Mac, it even integrates with iTunes! We are working on iTunes integration for the PC.
  • Better understanding of EXIF information. We added several previously unhandled fields and properly display focal length information.
  • The referral program just got better. There is no longer any limit to the number of folks you can refer or the number of free years of service you can get. And you can also send a friend an email invitation to join that includes $7 off.

Be sure to let us know what you think of these new features by posting to our forum or emailing support.

Written by erlichson

August 4, 2006 at 11:07 pm

New Phanfare Release

with 3 comments

We just released new versions of Phanfare for the Mac and PC along with some changes to the web albums.

  • Album sections are here. This has been a long-requested feature. You can now break up a big album into smaller sections. For example, a trip to Europe can be broken up by city.
  • Redeye removal is improved for the PC (Coming soon for the Mac).
  • Albums on the web now show a breadcrumb trail to let you figure out where you are in the hierarchy, plus you can quickly navigate to other albums in the account, or between sections of the same album.
  • Auto-rotation of images based on EXIF orientation field. If your camera has an orientation sensor, we losslessly rotate the image on import, rewrite the height and width and reset the orientation field.
  • Our first non-beta Mac release, containing numerous performance improvements and bug fixes. This new Mac version is truly great. Imaging Resource comprehensively reviewed it.
  • Increased bandwidth limits. You can now upload up to 10GB per month and download up to 15GB per month.
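
The auto-rotation bookkeeping can be sketched roughly like this (a simplified model of the EXIF handling described above, not Phanfare’s actual code; tag 274 is the standard EXIF Orientation field):

```python
# Simplified sketch of EXIF-based auto-rotation: given the stored
# dimensions and the Orientation tag (EXIF tag 274), return the upright
# dimensions, the rotation to apply, and the reset orientation value.
def auto_rotate(width, height, orientation):
    # Orientations 3, 6, and 8 call for a pure rotation; values 5-8 mean
    # the image is stored on its side, so width and height swap.
    rotation = {3: 180, 6: 90, 8: 270}.get(orientation, 0)
    if orientation in (5, 6, 7, 8):
        width, height = height, width
    return width, height, rotation, 1  # 1 = "normal" orientation

# A 3000x2000 shot taken in portrait mode (orientation 6) becomes
# 2000x3000 after a 90-degree rotation, with the tag reset to 1.
print(auto_rotate(3000, 2000, 6))
```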

Phanfare releases are delivered automatically when you start the Phanfare client.

Written by erlichson

June 29, 2006 at 2:38 pm

Why is there no free version of Phanfare?

with 9 comments

Many folks ask us why we don’t have a free version of Phanfare with limits, upselling services, products, or a relaxation of the limits for a premium price. This business model, sometimes called Freemium, can make good strategic sense. Skype is probably the best-known example of a free service with a paid upsell, but Flickr, DynDNS and WSJ.com are others.

To provide a free version there are two major considerations:

  • What is the value of the free users to the company? What do the free users add?
  • What is the cost of supporting free users?

Skype is well-matched to offer a free version. The value of Skype to any user is related to the size of the overall Skype network (I can call more people). The cost of supporting those additional users is essentially zero: Skype runs peer-to-peer, so there are no incremental hosting or bandwidth costs for the free community.

What Skype sells, mostly, is the ability to call in and out of the traditional phone network (SkypeIn and SkypeOut). That makes strategic sense. The free users don’t cannibalize sales of SkypeIn and SkypeOut (well, maybe to a very small degree, since the more Skype users there are, the fewer people outside of Skype there are to call), and in fact they increase the pool of people who might buy the upsell.

Because the value of the free users is so high to Skype and their cost so low, it makes perfect sense for Skype’s basic service to be free. Skype is a classic low-cost provider: they have a close-to-zero cost position for additional users and can offer something valuable at a lower price. Note also that the presence of free users does not tarnish Skype’s brand. They are all about letting the world talk for free.

Phanfare is in a different position with a strategy of differentiation, not low-cost. We offer a superior service, not cluttered with ads and we provide support to our customers. Let’s imagine we offered a severely space-limited version of Phanfare to the free users and look at the costs and benefits.

If there were a free version of Phanfare, it is likely most people would not pay for our differentiated sharing, organizing and archiving solution. So most of our users would be free users, and even if we limited the disk space allowed to free users, most of our resources would go to supporting the free community.

We offer amazingly responsive support, striving to answer support requests within one hour. Such support costs real money to provide, and without a revenue stream associated with those free users we would quickly find it difficult to sustain offering them free support. It would not be long before we started hiding from our users to lower our support costs, thereby removing one of the very differentiators that would cause someone to buy the service in the first place!

And what of the benefits of the additional free users? There are some, to be sure. These free users would expose their online collection to other potential subscribers to Phanfare, thereby increasing awareness of our products and offerings. But of course, most of the people reached would become free users. And Phanfare is focused more on private sharing than public sharing, so very few of our albums get enormous traffic today. We are not an ad-supported eyeball aggregation play that depends on reach to be attractive to advertisers.

As you project forward, it is not hard to imagine soon having millions of free users and a relatively small number of paying customers. We would then have the costs of a gigantic service before we had any of the associated revenues. Pressure would be on to lower our costs to as close to zero as possible, something that is antithetical to our basic positioning, which is to be the differentiated provider of photo and video sharing services to consumers. We don’t strive to have the low-cost position. Our level of support and promise of unlimited storage basically guarantee that we won’t!

Soon we would be thinking of how to monetize the free users. Advertising on the free albums is the obvious choice. So now, rather than offering a free 30 day trial of our complete service with no limits to prove to folks that we are worth paying for, we have a sea of free users with ads on their sites, and no support, and a small paying community that we don’t have time to focus on.

And I have not even discussed what such a free version would do to our brand, or how many folks would fail to buy the premium service because the free version (which has zero associated revenues) is sufficient.

We would be in the awkward position of trying to be the low-cost provider to our free users and the differentiated provider to our paying community.

In short, you won’t be seeing a free version of Phanfare anytime soon. Nor will you likely see a free Lexus program, free Apple computers supported by advertising, or a free version of HBO that includes ads. It rarely makes sense for the differentiated provider in an industry to engage in free.

Written by erlichson

May 11, 2006 at 12:37 pm

Web 2.0

with 3 comments

There is a tremendous amount of hype around Web 2.0 these days. I have never even seen an adequate definition of what it is. In my mind it is actually two things.

  • Web applications are becoming more like desktop applications in that the user interaction is handled locally at the client, versus at the server.
  • A fascination with user-generated content and all things tagging, tied to a fascination with search, particularly Google, and most importantly, with their net income.

The second item I will discuss another day. Long live Google!

But the first item is closer to my heart. Consumer computing is becoming network-centric, and consumers are going to get their whole computing experience over the net. This has a lot of good features, not the least of which is that a consumer’s local computer is fungible, as it should be, since only ubergeeks have any idea how to keep one running, virus-free, and backed up.

If you assume that the consumer computing experience is going to the net, then the only sad thing is that we all know that local applications written natively to use the windowing system and other features of the operating system are more responsive and more enjoyable to use. Web 2.0 is a response to that shortcoming. Using AJAX, web apps are getting better.

But there is really no reason to abandon the idea of writing applications directly for the operating system. Those applications just need to be network-centric. Phanfare follows this model. You can download the client and run it from anywhere, logging into it to get access to all your photos and videos. And Phanfare makes you feel as if your photos and videos are local. Phanfare talks to the network via web service calls, is multithreaded and caches heavily to achieve this user experience.
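
The caching pattern described above can be sketched in a few lines (an illustrative Python sketch of the idea, not Phanfare’s actual client, which is a C#/Objective-C application):

```python
# Read-through cache: the client feels local because content is served
# from a local cache, falling back to a web service call on a miss.
class ReadThroughCache:
    def __init__(self, fetch_remote):
        self.fetch_remote = fetch_remote  # e.g., a SOAP/web-service call
        self.local = {}                   # stand-in for an on-disk cache

    def get(self, key):
        if key not in self.local:         # miss: go to the network once
            self.local[key] = self.fetch_remote(key)
        return self.local[key]            # hit: no network round trip

# Toy stand-in for the remote photo service.
calls = []
def remote_fetch(photo_id):
    calls.append(photo_id)
    return f"bytes-of-{photo_id}"

cache = ReadThroughCache(remote_fetch)
cache.get("IMG_001")   # first access hits the network
cache.get("IMG_001")   # second access is served locally
print(len(calls))      # the network was touched only once
```

The real client adds multithreading and persistence on top of this, but the core trick is the same: the network is the master copy, and the local machine is just a cache.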

Microsoft Outlook running against an Exchange server is the same idea. Gmail is a great interface, but Outlook is still a better email client. And as with a Web 2.0 app, I can install Outlook on any computer and access the Exchange server. Granted, I have to install the application, but for something I use every day, this is worth the trouble.

A day may come when the programming environment within the web browser is so rich that there is no reason to write an application directly for the operating system, but that day is not here today. In fact, we are fairly certain that we could not reproduce the Phanfare experience within a web browser due to security restrictions limiting access to local disk.

Further, applications written in the way I just described are really not new. The client-server model of computing from the days of X Windows was just this, albeit with less standardized communication protocols.

So in my mind, half of Web 2.0 is really about network-centric apps getting better and closer to the smooth enjoyable experience of local applications. Whether those applications run within the web browser is not the point.

Written by erlichson

March 2, 2006 at 12:29 am

Larry Ellison Was Right

with one comment

Way back in 1995, Larry Ellison predicted that stripped down network computers would replace PCs. He argued that computers had become too complicated, too hard to maintain and that most tasks could be accomplished with a simple device running rudimentary software, connected to the internet. Appliances have not replaced PCs, but consumer computing is going in exactly the direction he predicted.

Computers are, by any measure, a poorly designed consumer product. Removing a single file can render the computer unbootable, and unlike, for example, a car, where the innards are protected by a hood and dashboard, the modern computer operating system has the consumer working side by side with system files and other critical settings.

Most consumers use their computers for email, web surfing, music, and personally created photos and videos. What we are starting to see, and will see more of, are good network-based services that provide these functions.

For email, there is Gmail. You can run it from anywhere; it prefetches and caches content on your local computer and is reasonably responsive to user input.

For music, today, many consumers use iTunes. How long will it be before Apple lets you re-download a song that you previously purchased using your Apple login? At that point the computer becomes a cache of your music, rather than the master copy.

For personally created photos and videos, we naturally see Phanfare as the solution. You manage your photos and videos using a full-featured desktop application that can leverage the power of your local machine and give you access to the full-size originals, yet you can move from computer to computer without moving your data. Friends can view the content via any web browser.

For consumers, the Personal Computer is becoming the Network Computer. PCs will be used to access network services and cache network content, but the master copy of the content will live on the network.

Consumers are much better served using only network-based services. The network services can handle backups and data integrity and are administered by professionals. Meanwhile the personal computer becomes merely a cache of the content available on the network, and hence the stakes are much lower for keeping the PC running. If it fails, or is upgraded, you maintain access to your entire computing environment on a new computer.

Given how difficult it is for the average consumer to manage their own computer (PC or Mac), making the PC the network terminal rather than the master copy makes perfect sense. How happy consumers will be when they can literally take a hammer to their PCs and get complete access to their environment by going to a new one.

For disconnected access, computers will offer cached local manipulation and sync when you next see the network (much like Outlook does with Exchange).

Corporate computer users at big companies have for years enjoyed this type of arrangement for their computers. Their files and email are stored on network servers, their login works throughout the organization and their local computer holds mostly applications.

Larry Ellison saw this trend in 1995, but he did not realize that Windows PCs and Macs would morph into perfectly acceptable network computers.

Written by erlichson

January 19, 2006 at 7:15 pm

Web Service Integration, Mac vs. PC

with 31 comments


As Ben mentioned in the first installment (I am a guest blogger for this installment), part of the original decision to build Phanfare on Microsoft’s .NET platform was the ease with which Visual Studio lets you build web services and clients that integrate with those services.

Building a .NET web service method within Visual Studio isn’t really much different from writing any other function. You make it a web method simply by adding the "[WebMethod]" attribute; Visual Studio takes care of everything else under the hood. For example, here is the C# web service definition of an imaginary method:


[WebMethod]
public PhanfareAlbum GetAlbum(string sessionId, long albumId, bool getImagesToo, out int albumVersion, out string albumName, out PhanfareImage[] images)
{
…
}

Easy, huh?

Using that web method in another application (like the Phanfare Photo client) is also simple. When you reference the web service in the client application’s Visual Studio project, Visual Studio fetches the service’s WSDL document (Web Services Description Language — an XML document that describes the methods the service provides) and generates a proxy class containing stub methods — methods that look just like the ones you wrote on the service side. This generated code handles all the icky details of invoking the method on another machine across the network, while making it look like a local method call to you, the programmer.

The web method declared above is invoked from a C# client application like this:


PhanfareWebService ws = new PhanfareWebService();
string albumName;
int albumVersion;
PhanfareImage[] images;

// See how nice? The client call looks just like a local call to the web method.
PhanfareAlbum myAlbum = ws.GetAlbum(mySessionId, 12345, iWantTheDarnImages, out albumVersion, out albumName, out images);

// Now use the info!
Debug.WriteLine(String.Format("This album was created on {0}", myAlbum.createdDate.ToString()));

Facile!

Now for the Mac side of the story.

To make the Phanfare Photo Mac client work, it would have to talk to the same SOAP web service as the Windows client. When we first began looking at the Mac platform, we were excited to have read somewhere that OS X supports SOAP-based web services. However, our excitement quickly dissipated when, in typical Apple style, we couldn’t find any coherent, useful documentation on how to integrate a Cocoa application with a SOAP-based web service (if you find some, please let us know. Here is Apple’s documentation. Now go build a complex web service client on the Mac. Have fun.). From what I could gather, you put all of your parameters (no complex user-defined types, please) into a hash table (so if I get the parameters wrong, I can still compile my client with no errors? Wonderful.), pass it to some web service invocation function, and then get a hash table back. Oh, and if you run Apple’s WSMakeStubs application on Phanfare’s WSDL, you basically get a lot of methods that contain nothing but comments stating that complex types are not supported. Great.

After quickly realizing that Apple’s limited support for web services wasn’t going to make our lives any easier, we started looking for alternatives (to be honest Apple’s support seemed so lacking that we never even tried to build a test application before looking elsewhere). What we found was gSOAP, by Robert van Engelen at Florida State University. We decided to use gSOAP for several reasons: it is open source and distributed under the very favorable gSOAP License, it is extremely well documented, it is widely used, and it has an extremely active user community. Oh, and it works on the Mac (it even came with a Mac-specific makefile).

The primary complaint we had with gSOAP was that it did not integrate seamlessly with Objective-C/Cocoa, the environment in which we’d chosen to develop the Mac client. While gSOAP worked quite well in our tests, calling it from Objective-C/Cocoa looked something like this:


struct soap *mySoap = soap_new2(SOAP_IO_DEFAULT, SOAP_IO_DEFAULT);

// gSOAP takes input and provides output parameters via structs,
// not return values and in/out parameters the way the
// Visual Studio-generated stub methods do.
struct _ns1__GetAlbum inParameters;

// We’re using NSStrings in our code. gSOAP needs a char *, so I need to convert.
inParameters.session_id = [mySessionId UTF8String];

inParameters.albumId = 12345;

// We’re using Objective-C booleans in our code, need to convert to the
// gSOAP-defined true_ and false_ enumeration values.
inParameters.getImagesToo = iWantTheDarnImages == YES ? true_ : false_;

struct _ns1__GetAlbumResponse soapOutParameters;
int soapRv = soap_call__ns1__GetAlbum(mySoap, endpoint, NULL, &inParameters, &soapOutParameters);

// Now use the info! But wait! I must convert the time_t to an NSDate first!
NSLog(@"This album was created on %@", [[NSDate dateWithTimeIntervalSince1970:soapOutParameters.albumDate] description]);

// Now, all my return parameters
// (the PhanfareAlbum, albumName, albumVersion, and the array of images)
// are in "soapOutParameters" and are C types.
// But I want NSStrings, NSDates, NSArrays, Objective-C Booleans…
// not char *’s, time_t’s and C arrays!
// I want Objective-C objects, not C structs!
// So, I guess I have to convert them. For example…

NSString *albumName = [[NSString stringWithCString:soapOutParameters.albumName] retain];
int albumVersion = soapOutParameters.albumVersion;

Clearly this will become very tedious very quickly and is not conducive to rapidly developing an application (especially on a platform that’s new to you and especially when you make lots of web service calls, like we do). What we needed was a way to make the web service calls look like the calls we were used to, but using Objective-C/Cocoa types.

To achieve this, we modified the gSOAP compiler (the thing that generates the web service stub methods from the WSDL — the same thing Visual Studio does when you add a web service reference to your project) to emit Objective-C in addition to the code it already generates. This new code includes Objective-C class equivalents for all user-defined classes, using Cocoa/Objective-C types where appropriate (e.g., for member strings, arrays, booleans, dates, and other user-defined classes). It also contains an automatically generated Objective-C proxy class that implements Objective-C methods that look just like the original web service methods we’ve come to know and love, but using Objective-C/Cocoa types. These methods contain all the yucky glue (like in the example above) needed to convert between the C and Objective-C/Cocoa types (again, for strings, dates, arrays, binary data and such).

Now the web service call from Objective-C looks like this:


// PhanfareWebService is the Objective-C "proxy" class
PhanfareWebService *ws = [[PhanfareWebService alloc] init];
int albumVersion;
NSString *albumName;
NSArray *images;

// See how nice this is now? Everything returned to me is an Objective-C/Cocoa type.
PhanfareAlbum *myAlbum = [ws GetAlbum:sessionId albumId:12345 getImagesToo:getTheDarnImages outalbumVersion:&albumVersion outalbumName:&albumName outimages:&images];

// Now use the info!
NSLog(@"This album was created on %@", [[myAlbum createdDate] description]);

Much nicer, huh?

In addition, the proxy layer does a few other nice things, like cleaning up some minor rough edges in gSOAP (e.g., beautifying enumeration types and values) and providing support for pooling multiple gSOAP contexts to simplify multithreading.

So, after initial excitement and then disappointment in Apple’s support for SOAP web services, gSOAP, with the addition of our own Objective-C/Cocoa layer, has turned out to be a very nice alternative. It has caused us very few headaches (and most of those were from bugs in our own generated code) and is now as easy to code against as the Visual Studio-generated stubs on the Windows side. Granted, we don’t have the right-click-and-choose-Update-Web-Reference level of integration and ease that Visual Studio provides. But we’ve come darn close.

Written by erlichson

October 15, 2005 at 12:58 am

GUI Development on the Mac vs. the PC

with 9 comments


GUI apps are like sausages: it is better not to see them made. Apple and Microsoft have fundamentally different approaches to sausage production. Visual Studio integrates its GUI designer right into the IDE. On the Mac, you use Interface Builder, a separate application, to create your GUI elements.

The integration of GUI design within Visual Studio follows from Microsoft’s approach to GUI synthesis. The drag-and-drop GUI designer generates C# code that is then compiled into your application, and that code can be edited by the developer. Apple’s Interface Builder generates data files that are read in at runtime to display GUI elements. Considering that the code generated by Visual Studio’s GUI builder is in fact data to the Common Language Runtime after it is compiled into IL, there would seem to be a general equivalence between the techniques. One person’s code is another person’s data. But in terms of developer workflow and customization, the Visual Studio approach is far more efficient.

Here is how you create a button in each environment.

Visual Studio:


    1. Drag a button onto the window
    2. Double click the button
    3. Write code

Xcode/Interface Builder:


    1. In Xcode, go to your window controller header file, add two lines for your outlet and action
    2. Switch to Interface Builder
    3. Drag a button onto the window
    4. Change tab to the class browser
    5. Find your window controller and re-read the file to get your updates
    6. Change back to the object browser
    7. Apple-drag from the window controller instance to the button
    8. Select the outlet you defined before and click 'Connect'
    9. Apple-drag the button onto the window controller instance
    10. Select the action you defined before and click 'Connect'
    11. Switch back to Xcode and into your window controller code file
    12. Add the action method definition and write code

I soon came to dread the UI design work involved with Phanfare Photo for the Mac. The Mac may look nice on the outside, but it is not so nice on the inside.

There is an argument for separating out the GUI design component to a separate application. Ideally, this would allow someone more artistically gifted than I am to do the GUI work and hand me back a bunch of GUI design data objects with a big red bow. But for this separation to work, the GUI designer (and I mean the person) needs to be able to work without requiring fine-grained communication with the application engineer (that would be me).

The Apple approach fails here in that the programmer must muck around in the designer stuff to get anything working (sometimes not getting the behavior quite right), and because the set of built-in GUI controls is so limited that the GUI designer must constantly ask the programmer to create custom controls. There really is nothing wrong with the idea of separation, if well implemented. Microsoft’s new venture into the designer/programmer split with Avalon and their designer tool Sparkle looks promising.

There are two parts to GUI design: the ‘fun stuff’ (laying out the UI) and the ‘boring stuff’ (implementing custom controls and behaviors). That was the layout; now on to the behavior:

As I said, in Visual Studio, the GUI designer generates code that you can tweak by hand if needed. In Interface Builder, serialized objects are written out to be loaded when your application runs. This means that if you don’t like the way Interface Builder does something, you have to rewrite your entire GUI by hand (something I found myself doing with shocking regularity).

One of the nice features of .NET is that it has dozens of built-in controls (over 100 in the latest version) and thousands of custom controls built by users. And because controls in .NET are expressed in text-based code, they are easy to tweak and share on web pages, making for a vibrant user community.

The Mac, on the other hand, comes with a very limited set of controls. There are the basic buttons, grid views, etc, but for anything more than a text editor this isn’t enough. Interface Builder provides about 25 built-in controls (they didn’t even get a date picker until Tiger, and even then it’s almost unusable!) and with the size of the Apple developer community it’s hard to find anything on the net.

In the end, I found myself always writing my own controls. Now, to be fair, I had to do this occasionally in .NET too, but with far less regularity.

In the end, both Apple’s approach to GUI design and Microsoft’s can produce modern GUIs, but Apple’s approach required more effort on my part and significantly more hours spent in the not-so-sweet-smelling sausage factory.

Written by erlichson

October 14, 2005 at 1:04 am

Xcode vs. Visual Studio

with 18 comments

If you are writing a full featured GUI application for Windows, you spend your life in Visual Studio. On the Mac, you use Apple’s Xcode.

Before writing Phanfare Photo for the Mac, most of my development experience had been on Windows. I installed Xcode. Within 10 minutes I had a ‘hello world’ application displaying a window on my screen. So far, so good. Over the next few weeks I spent hours trawling the Apple documentation and popular community development sites like CocoaDev and Cocoabuilder, trying to absorb as much as I could about my new development platform and its nifty visual effects (who doesn’t love the genie?).

Xcode’s rich key mapping system helped ease the pain of transition. Visual Studio has this too, but since I started on Windows, I just used the default bindings there. I was quickly able to get Xcode to respond to the keys I was accustomed to hitting; for example, I find Apple-Shift-Right Arrow to select forward on a line a little painful, so rebinding it to Shift-End made my hands thank me.

Xcode provides a flexible tree listing, a code editor with syntax highlighting, and Microsoft-like IntelliSense to automatically insert method definitions for easy coding. It all sounds great on paper, but as they say, the devil is in the details. Xcode still insists on opening separate windows for every task, or splitting views to such a degree that all important information is hidden behind scroll bars. Visual Studio, by comparison, keeps your desktop orderly through a tabbed interface and a powerful toolbar system that can be moved anywhere. A picture is worth a thousand words.

Here are two screenshots: both are of simple projects that contain a window and a button. The Visual Studio interface is clean and uncluttered. By comparison the Xcode interface is painful to use. You may think I purposefully opened up all these windows to try to make Xcode look bad, but this is actually what my desktop looks like on a normal day, and you can reproduce this yourself by adding a window in Apple’s Interface Builder, building the project, and then running the debugger.

Visual Studio integrates its debugging and build warning functionality right into its main interface, which is much more intuitive to me. Xcode, on the other hand, presents new windows for everything and then adds confusion by allowing each to have its own code view and inconsistent behavior. Clicking on a build warning will not reuse your existing code editor window, but instead open the file in the split view at the bottom (where, unless you maximize the window, you can’t see anything), and double-clicking the warning will just add to the mess by opening an entirely new window! This may sound like nit-picky stuff, but when you spend your entire day in an IDE, these details matter and affect your productivity (and your sanity).

Both products feature automatic code completion (“Intellisense” in the MS world), but once again Visual Studio has the more polished and useful implementation. The Visual Studio Intellisense system, which I fell in love with in the mid-’90s, just plain works across every language and every project – it’s incredible to be able to change a function definition in a library and jump to an entirely different project and have it appear right away. It’s so powerful, providing small descriptions of the functions and parameters, that when I was teaching myself .NET I rarely had to look up documentation: I could simply scroll through the list and read about what things did.

Now I’m not sure how long Xcode (or its previous versions) has had auto code completion, but Apple is clearly playing catch-up to what is one of Microsoft’s best features. Apple’s auto code completion breaks in so many ways that it can’t be depended on. Most of the time it will display a list of every symbol defined in the entire universe. It is not only slow, but painful to sort through the choices if you don’t know what you are trying to find, because Xcode’s implementation provides only a minimal amount of information, lacking descriptions and usage details. Xcode’s behavior around when it actually updates symbols is so unpredictable that even after living and breathing in the Xcode IDE, I don’t have a good feel for when they get refreshed. If Apple wants to drive adoption of its platform among developers, this would be a good place to focus its efforts.

For an IDE that has been around (under different names) for the past decade, Xcode had so many bugs that I found myself banging my head on the desk daily. Everything from simple things like windows not responding to keyboard input or not updating properly, to major issues like exceptions popping up while typing in the code editor, plagued my early development process. No software is perfect, but given Apple’s push toward polish in the rest of its product line, I felt the developer tools were left out.

Now, we know the ending to this story. The Mac version of Phanfare Photo exists, so I did survive the experience of Xcode. In fact, to my surprise, once the code base hit around 20,000 lines, I was moving around just fine and making good progress on the Mac. While Xcode is no Visual Studio, its deficiencies are not insurmountable.

Next time, I will be talking about GUI development on the Mac vs. the PC.

Written by erlichson

October 12, 2005 at 1:41 am

Posted in Apple, Engineering, General

Mac vs. PC Development

with 8 comments

We recently released the Mac version of Phanfare Photo. The issue of Mac versus PC for the end user is hotly and endlessly debated, but you don’t see much about the pros and cons of the platforms from a development standpoint. I developed the Mac version of Phanfare Photo and worked extensively on the PC version. It is not often that one gets to build a large application for both the Mac and the PC, written against the exact same network API, both with native tools. I am going to go over our experiences in developing the Mac versus the PC version, and I encourage you to try both so you can see the final result. Some of these thoughts came from other team members; I brought them all together.

First, some high-level stuff. Both programs do the same thing: they allow the user to upload, organize, and share photos and video on the web by communicating with our online web service, which runs on Windows Server 2003. The Mac version was written in Objective-C using Xcode (Obj-C for those who love it). The PC version was written in C# using Visual Studio 2003. The first question you might ask is: why did we choose these technologies?

We wrote the PC client first, and it seemed most natural to write it in C# on .NET. Everything we could find pointed to Microsoft being committed to .NET for client application development. We knew we wanted a full-featured “fat” client because we wanted to give the user an immersive, interactive experience based on a multithreaded architecture. One might argue that these things are possible using AJAX on the web, but we thought a WinForms .NET app would be far easier to write and maintain. The choice of Windows for the back-end web service came naturally because the integration between Windows .NET applications and Windows web services is nothing less than brilliant. We had a lot of Unix guys on the team, but even they agreed that the web service was most naturally written on Windows if the first client was the C# .NET WinForms version. Visual Studio does a nice job of reading in the WSDL description of a web service and then letting you code against it as if it were local.
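To illustrate that workflow (with hypothetical names – this is not Phanfare’s actual API), here is a minimal sketch of the kind of proxy class that Visual Studio’s “Add Web Reference” (or the wsdl.exe tool) generates from a service’s WSDL in the .NET 1.x era. Once the proxy exists, a remote SOAP call reads like a local method call:

```csharp
using System;
using System.Web.Services.Protocols;

// Hypothetical proxy class of the sort wsdl.exe generates from WSDL;
// the real generated code is much longer.
public class PhotoService : SoapHttpClientProtocol
{
    public PhotoService()
    {
        // Placeholder endpoint URL.
        this.Url = "https://example.com/PhotoService.asmx";
    }

    [SoapDocumentMethod]
    public long CreateAlbum(string name)
    {
        // Invoke() marshals the arguments into a SOAP request
        // and unmarshals the response.
        object[] results = this.Invoke("CreateAlbum", new object[] { name });
        return (long)results[0];
    }
}

class Demo
{
    static void Main()
    {
        // From the caller's point of view, the remote service
        // looks like any local object.
        PhotoService service = new PhotoService();
        long albumId = service.CreateAlbum("Vacation 2005");
    }
}
```

The point is that all the SOAP plumbing lives in the generated class; the client code never touches XML.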

We knew that someday we would be doing a Mac client, so we deliberately used only the most basic data types on the service side: arrays, strings, longs, ints, floats, and structs. We avoided the proprietary types offered by Microsoft because we figured that whatever tool we used to translate the WSDL for the Mac implementation would probably choke on them.
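A sketch of what that discipline looks like on the service side (again, hypothetical names rather than Phanfare’s actual API): an ASMX web method that sticks to primitives, strings, arrays, and plain structs, all of which map to simple WSDL schema types that non-Microsoft toolchains can translate.

```csharp
using System.Web.Services;

// A plain struct of primitive fields serializes to simple WSDL types.
// (Hypothetical example.)
public struct AlbumInfo
{
    public long AlbumId;
    public string Title;
    public int PhotoCount;
    public float TotalMegabytes;
}

public class PhotoService : WebService
{
    [WebMethod]
    public AlbumInfo[] ListAlbums(long accountId)
    {
        // Returns only arrays, strings, and numeric primitives --
        // no DataSets or other Microsoft-proprietary types that a
        // third-party WSDL translator might choke on.
        return new AlbumInfo[0];
    }
}
```

By contrast, returning something like a `DataSet` would have tied the wire format to Microsoft-specific schema extensions.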

When it came time to build the Mac version, we considered Java. To my dismay, doing Java on the Mac means going with the proprietary Apple bindings (killing any hope of an easy port to Linux), and according to Apple, Cocoa-Java has been killed with Tiger. So it looked like Objective-C, a 20-year-old competitor to C++ used primarily by Apple, would be our best choice. Plus, we were no fans of Java, for reasons we won’t go into here.

The Mac version is not entirely finished yet and is missing key features that the PC version already has. Roughly excluding the PC code that has no Mac counterpart yet, and any open-source code, the Mac client is 44,973 lines and the PC version is 55,736 lines. We wrote the Mac version second and had the advantage of a more stable API against which to code.

Before we begin, here are some high-level observations about the finished product. The Mac version starts much more quickly than the PC version; chalk that up to native code versus the .NET CLR JIT-compiling the IL at startup. The PC version is as fast as the Mac version once it gets running. The Mac version looks nicer than the PC version and has some visual eye candy the PC version lacks. This is partly because I care more about UI design than the guys who wrote the PC version, but it is also because the Mac has so little support for decent UI widgets that you pretty much wind up rolling your own for everything. And if you are going to roll your own, you might as well make them better than the generics.

In the next installment, I will be writing about development environments and IDEs.

Written by erlichson

October 10, 2005 at 4:52 pm

Posted in Apple, Engineering, Phanfare