Lightroom and Image File Location

Monday, 20 June 2011 06:00:14 CEST

If you only use a Lightroom catalog on a single computer, storing the image files isn't that complicated. But how do you do it if the same catalog is to be used on many different computers?

I've previously written a long post about using Dropbox to sync a Lightroom catalog between multiple computers, so Dropbox may seem like the obvious answer here too. For storing the actual image files, though, I don't think it is.

Here's how I do it. I store all originals on a NAS (with RAID protection, multiple backups, etc.) on my home network, and then link to the image files in Lightroom using their UNC network path, e.g. \\host\path\to\image.dng.

The advantage of doing this, as far as the Lightroom catalog is concerned, is that the solution is platform independent. I use Lightroom on both Windows and Mac, and the UNC network path is the same on both platforms.

If I'd referenced the images using a mapped network drive on Windows (e.g. Z:\path\to\image.dng), they wouldn't be accessible in Lightroom on a Mac, as a Mac doesn't know what 'Z:' means. It would also mean that this network path must map to the same drive letter at all times, on all Windows computers where Lightroom is to be used, which is also a chore.

There's a slight quirk in Lightroom when using UNC network paths to store image files. Lightroom is case sensitive about the path, so \\host\path\to\image and \\HOST\path\to\image will end up as two different locations in Lightroom, although they're obviously not. This seems more like a bug than a feature.

By using the UNC network path, the original images are available in Lightroom on any platform, as long as the network path is reachable, including over a VPN connection when I'm not at home.

As an added bonus, it makes upgrading computers much easier: you don't have to safely transfer hundreds of gigabytes of image files, and you don't have to re-link tens or hundreds of folders in the catalog because the image file location has changed. Remember when the users' home directory location in Windows changed from 'Documents and Settings' to 'Users' ...


Lightroom and Dropbox - Here's how to do it

Thursday, 16 June 2011 06:00:36 CEST

I've finally found a simple solution to using the same Lightroom image catalog on multiple computers: Store the Lightroom catalog, settings, plugins, and previews in a folder in Dropbox, and magic happens.

The only caveat is that you must wait for Dropbox to complete synchronization when switching from one computer to another.

I've been using Lightroom for a long time. Here's why.

Accessing the same image catalog on different computers, e.g. a laptop and a desktop, has always been a frustrating experience. I use just a single image catalog, and I need access to it both on my laptop when I'm working with a client and on my desktop when I'm working at home.

Note that I don't need access to the originals; I just need access to the catalog and previews, and I need to be able to update the catalog.

I've tried many different solutions, but they've all been flawed in one of two ways: they've either been too slow, e.g. using rsync to keep multiple copies in sync, or offered no seamless backup protection of the catalog, e.g. storing the catalog on an external portable drive.

This problem obviously applies to most image cataloging software, like Aperture, iPhoto, and iView Media Pro (now owned by Microsoft). The only notable exception has been Google Picasa, which has long offered to sync images between multiple computers using Picasaweb as an intermediary.

Here's how to do it:

  1. Create a folder within the Dropbox folder. I created a folder 'Lightroom' under the photos folder in Dropbox, but any folder will work
  2. Copy everything from the old Lightroom folder to this new location
  3. Let Dropbox complete the synchronization
  4. Open the copied Lightroom catalog, and verify that everything works just as before
  5. Change Lightroom preferences so that presets are stored with the catalog. Lightroom defaults to storing user presets in the user's profile (%APPDATA% on Windows), but we want these available on all computers where Lightroom is used
  6. Optional, but highly recommended: Change Lightroom preferences to automatically write metadata changes into the originals (XMP)

Most people I've found writing about this suggest setting Lightroom to delete full-size previews after a very short time, e.g. one day, to reduce the size of the previews folder. I don't think this works as a general recommendation, because it's mostly a trade-off in terms of speed. Whether or not it's a good idea depends on two things: do you actually need the full-size previews, and what kind of Internet connectivity do you have?

If you need the full-size previews badly enough, you're probably willing to accept the storage overhead and the delay caused by syncing them; after the initial sync, that delay isn't going to be much anyway. On the other hand, if you don't need them, you might as well not store them, to save space and gain a little speed.

I don't need them, so I don't store them for very long. I do, however, set my standard preview size quite large, which actually adds up to a bigger storage overhead than keeping smaller standard-size previews and storing full-size previews for a longer period of time.

Next up is a post on how to store the actual image files ...


OC4J, MTOM, and the huge problem of [limited] return types

Friday, 10 June 2011 06:00:31 CEST

This one really surprised me.

Webservices implemented on the OC4J stack cannot return types from packages under java.*, at least up to and including the current version 10.1.3.5.

This really limits the usefulness of MTOM (Message Transmission Optimization Mechanism) support in OC4J. If you need to deploy webservices on OC4J that return large amounts of data, most likely with MTOM transport enabled, your options appear to be either: don't, or use the JAX-WS RI instead.

Try to compile and deploy a webservice with a method signature like:

public InputStream getPublicationTable() throws RemoteException;

and the oracle:assemble ant task will most kindly tell you that:

Return type java.io.InputStream Can not have a value type in a package under java.*

Under most circumstances, changing the method signature to something like

public byte[] getPublicationTable() throws RemoteException;

will get you where you want to be, and if data is already binary you might as well MTOM enable the service at the same time.

The above works just fine for exchanging images, PDF files, and similar data, but MTOM enables other use cases than simply avoiding the base64 encoding overhead when transferring images and other binary data using SOAP.

On a project I'm currently working on, I need to expose webservices that return data in quantities that are orders of magnitude larger than available server memory. Upwards of 100 GB transferred in a single method invocation. The reasons for this can always be debated, but such are the customer's requirements.

In this context the proposed solution doesn't work: it requires a server capable of holding the whole byte array in memory (it's a store-and-forward pattern). The usual workaround is to introduce some form of data chunking, but that imposes unwanted properties on both the server and the caller, such as state and housekeeping (how far are we, and how far do we have to go?). More importantly, it defeats the purpose of letting the infrastructure handle these complexities in the first place.
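Just to make that housekeeping concrete, here's a hypothetical chunked variant of the service contract (the names are made up purely for illustration). Note how the position tracking leaks into the interface:

import java.rmi.RemoteException;

// Hypothetical chunked contract: the caller must track its position and
// reassemble the chunks, and the server may need per-caller state.
public interface PublicationTable {

    // Total size up front, so the caller knows how far it has to go.
    long getPublicationTableSize() throws RemoteException;

    // Up to 'length' bytes starting at 'offset'. The "how far are we"
    // bookkeeping now lives with the caller, and every chunk pays the
    // full SOAP invocation overhead.
    byte[] getPublicationTableChunk(long offset, int length) throws RemoteException;
}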

If you're stuck on OC4J, somewhere around version 10.1.3.x and less than 11g, the only option available seems to be to hook another WS stack into OC4J that does support exposing streams as return types. Luckily, Oracle provides some decent information on doing so, though you will lose all the nice tooling support available in a pure OC4J stack.

If you're lucky enough to be on Metro, this is how you do it. It's also worth reading this on Stack Overflow.
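For reference, here's a minimal sketch of what this looks like on a JAX-WS stack such as Metro, assuming the service streams the result from a file (the class name and file path are made up for illustration). Returning a DataHandler instead of byte[] lets the runtime stream the MTOM attachment rather than buffer it:

import javax.activation.DataHandler;
import javax.activation.FileDataSource;
import javax.jws.WebMethod;
import javax.jws.WebService;
import javax.xml.bind.annotation.XmlMimeType;
import javax.xml.ws.soap.MTOM;

// Minimal MTOM-enabled JAX-WS service. With MTOM on, the binary payload
// travels as a raw MIME attachment instead of base64 text in the envelope.
@MTOM
@WebService
public class PublicationService {

    @WebMethod
    @XmlMimeType("application/octet-stream")
    public DataHandler getPublicationTable() {
        // FileDataSource streams from disk, so the server never holds the
        // full payload in memory ('publication-table.dat' is hypothetical).
        return new DataHandler(new FileDataSource("/data/publication-table.dat"));
    }
}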


Insert document into MongoDB from Node.js

Thursday, 09 June 2011 13:51:31 CEST

This is a very simple example of how to insert a document into MongoDB from Node.js.

I recently needed to use MongoDB with Node.js on a project, and finding a barebones example of how to make them work together was more difficult than anticipated.

It's nothing more than what's in the excellent Getting Started with MongoDB and Node.js presentation, but it's kind of hard to copy-paste from an image :-)

var mongo = require('mongodb');

// connect to the 'mydb' database on a local MongoDB instance
var db = new mongo.Db('mydb', new mongo.Server("127.0.0.1", 27017, {}), {});

db.open(function(err, db) {
    if (err) throw err;
    db.collection('sample', function(err, collection) {
        if (err) throw err;
        // a document is just a plain JavaScript object
        var doc = {
            "prop1" : "val",
            "prop2" : {
                a : 1,
                b : 2
            }
        };
        collection.insert(doc, function(err) {
            // close the connection once the insert has been submitted
            db.close();
        });
    });
});

For this example to work, you need to have MongoDB installed, started, and listening on the default port. You also need to add the mongodb module to your Node installation (npm install mongodb worked for me).

Another good simple example of using MongoDB with Node.js can be found here. I don't find the notation quite as clear, but that's just my personal preference. It works just fine.


Official HTC Hero ROM Update Released In Scandinavia

Wednesday, 16 September 2009 06:00:00 CEST

If you own an HTC Hero and live in Scandinavia, you'll be very pleased to know that HTC has officially released the ROM update that's been written about for some time now.

And the speed improvements this upgrade brings to the UI are unbelievable!

My choice of HTC Hero was originally an informed compromise. I knew that it would be slow(er than the iPhone), but I decided this was outweighed by its ability to fully integrate with both Google and Exchange, as well as its Facebook integration with contacts.

The new upgrade takes the UI speed to the same level as the iPhone 3GS. Even with many widgets active, the UI remains fully responsive, which certainly wasn't the case with the original HTC Hero firmware.

Head over to the HTC Europe support site, but download the image from HTC USA, as download speeds from HTC Europe and HTC Asia are painfully slow.


Solution: Android Browser Displays Blank Page

Wednesday, 09 September 2009 15:07:00 CEST

I've had an HTC Hero for a couple of weeks, and save for its at times lackluster speed, I think it's hands down the best handset I've had.

Today, my handset started exhibiting a strange behaviour: many Google services, e.g. Reader and Tasks, would not load; instead the browser would just show a blank screen. At first it seemed to be related to Google's web sites, but further testing proved that all SSL-protected web sites would display the blank screen.

Some Googling turned up many users complaining about this issue (see this thread on Google Code).

I'm in Denmark and on TDC's network. In my case, the issue was resolved by changing the APN to "internet" and removing the proxy server address and port. Go to Settings -> Wireless Controls -> Mobile Network Settings -> Access Point Names, select the proper configuration (mine was called TDC WAP, but that only holds in DK on TDC's network), and make the relevant changes.

Given the degree to which mobile operators lock in their users, and how technically involved this fix is, I'm rather disappointed that my network operator doesn't ensure that something as simple as port 443 traffic works on the network.


Mozilla Firefox 3.5 Is Available [But Not Yet Announced]

Tuesday, 30 June 2009 09:15:00 CEST

It appears that the RTM version of Firefox v3.5 is available from the download servers, even though it is not officially announced.

Copy the download link for the English version of Firefox v3.5 RC3, and modify it to: http://www.mozilla.com/products/download.html?product=firefox-3.5&os=win&lang=en-US


More Results On ODF Support In Microsoft Office 2007 SP2

Wednesday, 20 May 2009 06:00:00 CEST

The ODF Alliance just published an excellent gap analysis (PDF) on the (non-) support of ODF in Microsoft Office 2007 SP2.

Their findings seem to be fully in line with the conclusions Rob Weir arrived at in his comparison.

From a marketing perspective, the broken ODF support introduced by Microsoft Office 2007 SP2 so far seems to be working very well to Microsoft's advantage. They have received all the expected positive press coverage of their announced ODF support, and no mainstream news sites seem to be picking up on the inaccuracies.


Why OpenID Should Not Become An Identity [Technology] Monopoly

Thursday, 14 May 2009 21:58:00 CEST

I'm heavily involved in OpenID [in Denmark], and so I'm frequently asked for my opinion on OpenID either becoming, or being able to prevent, the next Microsoft of identity on the Internet.

Both are wrong. Neither is the purpose of OpenID, and I very much doubt that there will ever be an identity monopoly on the Internet, unless it's forced by regulation. And even then, the next Microsoft of identity sounds more likely than the member states of the UN agreeing upon a single identity provider ;-)

OpenID is one of many great lightweight federated identity protocols well suited for the Internet. I hope OpenID never becomes the only federated identity protocol. Choice means competition, and competition means better products. Just look at Firefox: it's probably the best thing that's happened to Internet Explorer since Netscape.

One frequent "technology monopoly" argument is that it enables interoperability, but I believe that's confusing a technology monopoly with a standard. We definitely need standards - the OpenID protocol is specified in an ever-growing number of standards.

However, in the identity space the argument for a single standard implemented across the board [Internet] comes down to integration costs, and I believe the indirect cost reduction from competition between multiple technologies should more than offset the added cost of supporting multiple identity technologies.

In the face of competition, the promoters of an identity technology will try even harder to make it easy to use, and so get more adopters than the competition. The Janrain RPX service is another very good example of the derived value of competition. The Janrain RPX service makes it easier to implement simultaneous support for multiple identity protocols than it probably is to implement "native" support for any one of the [Janrain RPX] supported protocols.

OpenID is a great choice but so are many of the others.


An Actual Test Of ODF Support In Microsoft Office 2007

Monday, 04 May 2009 06:00:00 CEST

Microsoft just released Service Pack 2 for Microsoft Office 2007 and with it supposedly came support for ODF. Unfortunately, support for ODF spreadsheets in SP2 is broken to the point of being unusable.

Anybody working in software development knows that there can be a significant discrepancy between what the marketing department is saying, and what the software can actually do. And this seems to be a prime example of just that from the masters in this art in Redmond.

Not taking their claims at face value, Rob Weir over at An Antic Disposition created an actual - and extremely simple - ODF document in all the major software suites offering ODF [spreadsheet] support, and cross-loaded the files in all the other ones. His test results clearly show that the ODF spreadsheet support that came with SP2 for Microsoft Office 2007 is broken to the point where it's of no actual use; e.g. Microsoft Office 2007 can only read ODF spreadsheets created in Office 2007. Not much standards support in that, imho.

Rob's test is very thorough and he's got many excellent points, so I suggest you read his article on ODF spreadsheet support in full length.

