Monday, March 07, 2011

...so the Kindle just works out better for me

I can't believe how long this post turned out to be. Yikes. If you don't care about what shaped my evaluation criteria, feel free to skip to the conclusion.

It's true, I owned a Casio PDA

I recently decided to jump into the ebook market. I've been following it for quite some time but could never justify being an early adopter with the Sony line, so I figured I'd just wait it out. I have many fond memories of getting involved with Project Gutenberg back in 2002 and discovering the immense library of public domain texts. At the time I read exclusively on a Cassiopeia E-125, but my commitment to ebooks waned as I upgraded devices and became far more involved in pursuing my degree.

Then iOS happened and Apple introduced their bookstore

I was initially quite interested in the concept of reading on my iPhone, again harkening back to the Casio days when I was traveling often for work and reading quite a bit. I bought a book on a whim, being out of other reading material, when coming back from a business trip in Oct 2010. I loved the book I read and I loved the convenience of my iPhone. But, quite frankly, it took its toll physically on my eyes, as well as making me wish swipe had never been invented. My lasting impression was how much of a hassle it was just to turn the pages, over and over and over.

Eink contenders aplenty, turns out I want an ecosystem too

For a long time I followed what Sony was doing and watched various other Chinese manufacturers make eink-based device announcements. However, I was never motivated to purchase because I realized (over several years of following) that the hardware was only half the solution for my ideal setup. While I loved reading on the Casio, it wasn't particularly enjoyable loading my ebooks from Project Gutenberg and it got messy managing things when my library became large (not to mention the lack of availability of new publications).
Bottom line, I finally think the market is in a mature-enough place (though far from ideal maturity, IMO) that I didn't consider anything other than the Kindle 3 or the Nook.

Grading criteria

I've been patient, for a long time, and have been mentally recording a healthy list of criteria that I wanted to measure. Yes, there are dozens of reviews on both of these devices, but I frankly wasn't satisfied with what I'd read because I wanted a hands-on, personal, thorough evaluation for just my list of requirements. Thus, the following opinion/analysis really only satisfies what I care about and, of course, YMMV. What I wanted to evaluate, in no particular order:
  • The Feel
    The Kindle is lighter than the Nook, but that didn't make it an immediate winner. I liked the extra weight on the Nook that made it feel more handleable (is that a word?). However, with the extra weight, the Nook felt slightly more fragile and I noticed I was more careful when carrying it while walking and reading or doing other activities rather than just staying put and reading. The Kindle, though lighter, also seemed more robust and resilient.
    Winner: Kindle
  • Affordance and Input
    (in terms of how intuitive it was to use the device both visually and non-visually and also how intuitive the initial experience was)
    Kindle: The page-turning buttons are slimmer than the Nook's but (again, IMO) better defined, both physically and visually. The keyboard and 5-way button provide immediate input and tactile response. The overall experience of getting to know, and getting comfortable with, the device was very natural to me.
    Nook: The Nook navigation buttons have little visual separation other than the large arrows, but physically the user can feel the difference thanks to a raised bump. On more than one occasion I inadvertently navigated in the wrong direction. After a while I got used to it, and I liked how much wider the buttons were than the Kindle's, though with the additional width the user loses grip area. I did not like the LCD navigation, and maybe that's because I tested the Kindle first, but I found the navigation slow (in response to gestures), slightly confusing (scrolling options) and generally less appealing. Plus, the backlight is a strong contrast to the placid eink display for my eyes (yes, the backlight dims very quickly, but the combination just didn't work for me).
    Winner: Kindle

  • Instapaper support
    I like Instapaper a lot. It saves me time. It removes content distractions. It lets me collect stuff that's not important to read immediately, but important enough that I want to read it when I'm ready (meaning, when I have more time). It's convenient (works in all browsers), works on multiple devices (iOS, Android, Blackberry) and is simple to use (one click). So, if I want to read my Instapaper content on the Kindle or Nook, what are my options? Just Kindle.
    Winner: Kindle
  • File archiving
    Both B&N and Amazon have areas where the user can download previously purchased content. There was, at one time, legitimate concern about Amazon exercising its remote kill switch (as it did once), but under the terms of the suit settlement Amazon agreed it would not happen again.
    Winner: Tie
  • Notes and highlights (I like to annotate, not all the time, but frequently)
    Kindle: Notes can be multi-lined and added to all content. User moves cursor and starts typing.
    Nook: Notes are limited to a single line and not available for PDF content. User awakens LCD, selects menu option, selects another menu option, moves cursor with d-pad, selects menu option, moves cursor, selects menu option, begins typing note.
    Winner: Kindle
  • API (since I like to play with code)
    Kindle: A Kindle Development Kit (KDK) is available from Amazon
    Nook: No API or development kit available (though rumor has it the color version will soon be unlocked for the full Android Marketplace)
    Winner: Kindle
  • Performance
    Performance was pretty good on both in terms of rendering and input responsiveness. The only time I noticed a difference was when comparing menu navigation, but since the menu approach is unique to each device, I'm not going to fold menu performance into general use. The eink refresh was fast enough to never annoy me, and the two seemed nearly identical in that regard.
    Winner: Tie
  • Lending
    From what I understand, lending is currently subject to publisher approval, so I didn't notice a unique advantage for one device over the other. Both vendors allow a single-use, 14-day lend on publisher-approved books.
    Winner: Tie (user is equally hosed regardless of vendor, sigh)
  • PDF Support
    Turns out this feature became more important to me the more I thought about it. There are many times I have technical PDFs open on my desktop that I just don't get around to reading in a timely manner. After testing two recent technical PDFs, I was sold on being able to not only read these but also annotate them.
    Kindle: Multiple rendering options to adjust zoom and device orientation. Five contrast settings. Supports notes and highlights. Graphics properly rendered, layout preserved. Overall, very readable and usable.
    Nook: Supports three different text fonts and six text sizes. Mangled embedded graphics in my evaluation. Subject to layout mishandling and text injections (first PDF was very difficult to follow). No annotation support or additional rendering options. One time I got an error indicating I needed to force close the Activity Reader when I was navigating a PDF.
    Winner: Kindle
  • Extensibility
    Kindle: No user-extendable memory or replaceable battery
    Nook: User replaceable battery and microSD slot located underneath rear cover (thanks Cary A. and Jeff J. on this!)
    Winner: Nook
  • Battery life
    I wasn't sure how I was going to accurately measure this. Turns out I didn't have to worry about it. I've had both of these devices for several weeks now (as of this post) and have been using the Kindle quite a bit more than the Nook. I charged both devices before my evaluation; at no time during the past several weeks have I charged either device, and they've spent only minimal time syncing via USB. After finishing my first book on the Kindle, I was ready to read something of equivalent length on the Nook to test it out. Unfortunately, the battery was nearly depleted, and that was after the Nook had been powered off, nearly every day, for more than a week. I finished the lengthy user manual and settled on a book to try, then powered down the device. I turned it on the following day to be greeted with a message that it needed to be charged (which it is doing as I write this). Maybe I have a defective battery? The Kindle, under much heavier comparative use, appears to be at slightly less than 50%. Scientific approach? No, I wussed out; this is good enough for me.
    Winner: Kindle


Everything said and done

I enjoyed comparing both devices and I can clearly see advantages for both. If I had a family member who was a book-a-week reader, on top of recent releases, visited the bookstore often and was happy with an eink device, I'd recommend the Nook. But overall, I found that the Kindle was the superior choice for my needs. Quite frankly, Whispernet became a selling point for me (after I had already purchased the Kindle). I had no idea how convenient it would be to simply email stuff (for free) and have it arrive on my Kindle ready to go. That turned out to be icing on the cake. There were sprinkles too, like the immediate dictionary, being able to tweet/share passages, openlibrary.org integration and line-spacing options...so the Kindle just works out better for me.

Tuesday, February 15, 2011

Git, Gerrit, Redmine, gitflow: An ideal software development and release management setup

I've been using Subversion for more than 5 years and have been following Git's development, adoption and maturation during that entire time. At work, each time we would create a new repo for a project, I cringed and thought, "but we could be doing this in Git..." However, given the development environment as well as the business context (not a software shop), it just wasn't suitable to immediately jump into Git. So what did it take to make the jump? Some rather significant dissonant (and concurrent) development tasks, branch management issues and several deploys that cost more time (and thus, money) than they should have.

Our primary requirements for the SCM migration included the following:
  1. Central repo to act as an on-site, internally maintained, redundant source authority with replication capabilities
  2. Integration into Redmine
  3. A well-defined methodology for maintaining several projects through multiple concurrent phases
  4. Mature CLI toolset
  5. Hudson/Jenkins support
  6. IntelliJ integration
  7. SCM should be open source with a vibrant community
I knew most of these requirements could be met with Git, but I didn't want to make a blind choice and simply anoint Git as the new internal de facto standard. I've had some experience with Mercurial in the past and even spent time on a few projects in Darcs several years ago. I have a good friend whose company (a software shop) moved to Mercurial, and he offered their arguments in support of choosing it. Frankly, I liked a lot of what I saw in Mercurial and never had a negative experience when toying around with it. Though, truth be told, the projects I conjured up for trying it out were very simplistic and never moved outside of my local development environment. During my investigation I found stackoverflow.com to be immensely helpful in identifying specific differences and perceived strengths and weaknesses.

Git

In the end, after all the reading, playing and comparing, I found that Git just jibed with me and satisfied our requirements (4, 5, 6, 7). Darcs just didn't fit, and Mercurial looks great, so I have nothing negative to say about either project. Plus, they don't have Gerrit. Kudos to the Java Posse for a recent podcast in which Gerrit was discussed; the timing was critical, as it landed in the middle of my research, and Gerrit sounded like just what we needed to address a recent concern about our lack of code review. Even though I was already heavily leaning toward Git at that point, Gerrit clearly brought additional benefit to the migration and the methodology changes we were considering.

Gerrit

Gerrit is wonderfully simple to get set up and running. I very much appreciate that it's quite self-contained and offers OpenID support, which avoids the grief of maintaining local user accounts. In fact, Gerrit will not only help facilitate code review and branch maintenance, it can act as our centralized repo while abstracting away OS-level duties (specifically, user management) for using SSH as our transport protocol. Gerrit's permissions model is more than adequate for our per-project needs, yet simple enough to set up and get going. Thus, Gerrit easily satisfied requirement #1 and gave us extra bang for the buck with its inherent review functionality.

Redmine

Redmine (1.0.1) was easy to modify for our situation and it made the most sense to replicate the central repo as a read-only repository locally available to the Redmine instance. Once the clone completed, the only remaining task was setting up a periodic cronjob for updating (git fetch --all) the repo.

gitflow

gitflow was the answer to requirement #3. One of the beauties of Git (and DVCS in general) is the fundamental capability of determining your own release cycle/phase/management process. In some cases, a DVCS setup just means there's a lot of rope to hang yourself with. Our previous release practice suffered from time to time (with 100% consistency). I stumbled on gitflow only after deciding on Gerrit, and it made a lot of sense to embrace it not only for what it provides out of the box (helpful bash scripts) but also for the well-defined development convention it helps to enforce. Actually, it's not so much that gitflow enforces process as that it eases process implementation. It turns out its process definition matches, and enhances, what we've already (mostly) been doing. Vincent Driessen deserves a heap of credit and has our gratitude.

CI

Hudson/Jenkins (Oracle is being a hoser about this, IMO) has two plugin options:
Gerrit: http://wiki.jenkins-ci.org/display/JENKINS/Gerrit+Plugin
Git: http://wiki.jenkins-ci.org/display/JENKINS/Git+Plugin

I really like the idea of a push to Gerrit firing off CI tasks in Jenkins, which then verifies or fails the changeset depending on the results. However, integrating that plugin during the initial migration turned out to be a lower priority. At the very least, we simply swapped out the current Subversion setup for the normal Git plugin.

IntelliJ

While I'm in a shell nearly 100% of the time, on occasion it's convenient to have some SCM support in IntelliJ 10. However, I did run into some issues with merging in IntelliJ and spent some time looking into various merge tools. My Emacs blood revolted when I chose the Perforce merge tool over emerge (I liked it a lot better than opendiff, Meld or DiffMerge). Thanks to Andy Mcintosh for his tips.

Conclusion


After walking through multiple discussions with several team members, the benefits and strengths this setup provides over our current Subversion-based process have become very clear to them. So, as of this post, the first major project (72k LOC) has been migrated. This setup feels right, and it's good to see additional corroboration in the community (thanks, AlBlue).

Tuesday, December 22, 2009

Mule JMS message routing using an external ActiveMQ instance

I have a scenario where I'd like Mule to monitor an incoming queue, filter the messages and route to appropriate outgoing queue--using a separate ActiveMQ instance instead of the optional embedded one. While perusing Google results I didn't find a source that explicitly showed how to accomplish this. So using what information I did find from indirect examples and other documentation, this is what I came up with.
First, the connection factory Spring bean:
 <spring:bean name="activeMQConnectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">  
   <spring:property name="brokerURL" value="tcp://${esb.jms.endpoint}"/>  
  </spring:bean>  

Since I have Maven filtering my resources, the actual TCP URI will be replaced with the appropriate environmental property. In my case, being in an active development environment and using ActiveMQ 5.3.0, the filtered value would be "tcp://localhost:61616".

Next is the connector definition:
  <jms:connector name="JMSConnector"  
          specification="1.1"  
          persistentDelivery="true"  
          connectionFactory-ref="activeMQConnectionFactory" />  

And finally, the endpoint:
 <jms:endpoint name="asynchIn" queue="asynch.in"/>  

The service definition for this simple case is:

 <service name="Asynchronous processing">  
    <inbound>  
     <inbound-endpoint ref="asynchIn" synchronous="false"/>  
     <wire-tap-router>  
      <stdio:outbound-endpoint system="OUT" name="debugTrace" connector-ref="SysOut"/>  
     </wire-tap-router>  
    </inbound>  
    <outbound>  
     <filtering-router>  
      <jms:outbound-endpoint queue="test.out" />  
      <message-property-filter pattern="JMSType=test"/>  
     </filtering-router>  
     <filtering-router>  
      <jms:outbound-endpoint queue="test2.out" />  
      <message-property-filter pattern="JMSType=test2"/>  
     </filtering-router>  
    </outbound>  
   </service>  

Notice that the inbound definition contains a wire-tap-router reference; this makes it much easier (IMO) to trace the message flow during development while defining the routing rules and generally tweaking things. Mule will send the message to sysout and also apply the filter routing.

The filters generally speak for themselves; in the cases above the routing is based on the type of the message (the JMSType property).

To test the setup with a vanilla ActiveMQ install (stomp enabled and the stomp gem installed), this quick Ruby script works quite handily:
 require 'stomp'  
  Stomp::Client.open("stomp://localhost:61612").send("/queue/asynch.in","\n\n\n!!!!!!!!!!!!!!\ntest message\n!!!!!!!!!!!!!!!",{:persistent => true, :type => 'test'})  

Mule's wire-tap-router should dump the message:
 system out:ActiveMQBytesMessage {commandId = 3, responseRequired = false, messageId = ID:vsbeta-45609-1261505977249-4:104:-1:1:1, originalDestination = null, originalTransactionId = null, producerId = ID:vsbeta-45609-1261505977249-4:104:-1:1, destination = queue://asynch.in, transactionId = null, expiration = 0, timestamp = 1261518816945, arrival = 0, brokerInTime = 1261518816946, brokerOutTime = 1261518816946, correlationId = null, replyTo = null, persistent = true, type = test, priority = 0, groupID = null, groupSequence = 0, targetConsumerId = null, compressed = false, userID = null, content = org.apache.activemq.util.ByteSequence@7c66f0, marshalledProperties = org.apache.activemq.util.ByteSequence@4a4890, dataStructure = null, redeliveryCounter = 0, size = 0, properties = {content-type=text/plain; charset=UTF-8}, readOnlyProperties = true, readOnlyBody = true, droppable = false} ActiveMQBytesMessage{ bytesOut = null, dataOut = null, dataIn = null }INFO 2009-12-22 14:53:37,033 [JMSConnector.dispatcher.1] org.mule.transport.jms.JmsMessageDispatcher: Connected: endpoint.outbound.jms://test.out  

ActiveMQ's admin screen should show pending messages inside of the test.out or test2.out queues. Messages could be consumed via Stomp:
 require 'stomp'  
 client = Stomp::Client.open("stomp://localhost:61612")  
 client.subscribe("/queue/test.out"){|message| puts "consuming #{message.body} with properties #{message.headers.inspect}"}  

producing output:
 consuming   
 !!!!!!!!!!!!!!  
 test message  
 !!!!!!!!!!!!!!! with properties {"MULE_ORIGINATING_ENDPOINT"=>"asynchIn", "content_type"=>"text/plain; charset=UTF-8", "MULE_CORRELATION_ID"=>"0f2295b0-ef45-11de-856a-538c667e24a7", "expires"=>"0", "timestamp"=>"1261519075346", "destination"=>"/queue/test.out", "message-id"=>"ID:Rohirrim.local-51739-1261518810672-0:0:7:1:1", "priority"=>"4", "MULE_SESSION"=>"SUQ9MGYyMjk1YjEtZWY0NS0xMWRlLTg1NmEtNTM4YzY2N2UyNGE3", "content-length"=>"46", "MULE_MESSAGE_ID"=>"ID:vsbeta-4  
 5609-1261505977249-4:117:-1:1:1", "correlation-id"=>"0f2295b0-ef45-11de-856a-538c667e24a7", "MULE_ENCODING"=>"UTF-8", "MULE_ENDPOINT"=>"jms://test.out"}=> nil  

This approach lets a single queue collect asynchronous message requests while Mule's filtering-routers decouple the producer and consumer. The requests go on the ESB, Mule decides where they should go, and the service components process each request independently of the requester.

Thursday, December 10, 2009

We chose Mule for our ESB

After careful consideration of multiple open-source ESB products, Mule made the most sense for implementation into our data information infrastructure. The other major contenders were OpenESB (GlassFish v3) and FUSE (Apache stack), as having an open-source solution was a very strict requirement. Mule is very component oriented and can be quickly set up for integration with existing services (of which we have several). Furthermore, it is Spring-based and supports Maven, which fits right into our existing application development methodology.

Mule has many strong points, the most relevant for our decision being:

  • very mature, with years of development and major deliveries
  • a solid installation base across many significant enterprises
  • well-documented, with excellent examples and diagrams
  • open source (CPAL 1.0)
  • a commercial support model with additional tools (service registry, monitoring)
  • flexible configuration and instantiation options
  • a wide array of built-in and downloadable modules
  • a top choice in the Utah DTS ESB comparison
  • an excellent testing framework


I read a lot about OpenESB, watched presentations and then checked out tutorials. Simply put, OpenESB appears to be overkill for our specific needs. Furthermore, I'm not a fan of vendor lock-in, and OpenESB appears to be very heavily biased toward NetBeans (which isn't really a surprise).

As for the Apache side of things, I spent a fair amount of time reading comparisons of ServiceMix and Mule. Opinion typically favored Mule, and the one major viable commercial support option for ServiceMix was through Fuse. Fuse appears to have some good documentation, but it wasn't nearly as in-depth as Mule's. Also, going this route appeared to require more "gluing" using Camel, and ServiceMix is more specifically oriented to JBI--a trait it shares with OpenESB.

Mule made it very easy to get up and running quickly. Within minutes I had implemented a REST service component using an existing endpoint. After some time reading more documentation, I recognized a good case for using the template URI pattern and exposed another two REST endpoints in a separate service. Finally, with a Maven archetype, it was very easy to generate a Mule project and tweak it to support multiple environment deployments (dev, test, beta, prod, etc.). I created a simple bootloader to start up the Mule context and register a shutdown hook with the JVM. This approach leverages Maven's capabilities in property filtering and distribution assembly. Thus, we can now create standalone distributions with full Maven dependency support (avoiding the hassle of updating MULE_HOME/lib/user), integrated testing, custom property filtering, and artifact assembly for multi-environment support.
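
For illustration, here's a rough sketch of what such a bootloader can look like. This isn't the exact class we use; it assumes the Mule 2.x Java API and a config file named mule-config.xml, so treat the names as placeholders.

 import org.mule.api.MuleContext;
 import org.mule.api.context.MuleContextFactory;
 import org.mule.config.spring.SpringXmlConfigurationBuilder;
 import org.mule.context.DefaultMuleContextFactory;
 
 /**
  * Minimal standalone bootloader sketch: builds a MuleContext from the
  * Spring-based XML config, starts it, and registers a JVM shutdown hook
  * to stop and dispose of it cleanly.
  */
 public class MuleBootloader {
 
   public static void main(String[] args) throws Exception {
     // Config file name is an assumption; Maven resource filtering would have
     // already substituted the environment-specific properties into it.
     MuleContextFactory factory = new DefaultMuleContextFactory();
     final MuleContext context =
         factory.createMuleContext(new SpringXmlConfigurationBuilder("mule-config.xml"));
 
     context.start();
 
     // Stop and clean up the context when the JVM exits
     Runtime.getRuntime().addShutdownHook(new Thread() {
       @Override
       public void run() {
         try {
           context.stop();
           context.dispose();
         } catch (Exception e) {
           e.printStackTrace();
         }
       }
     });
   }
 }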

Friday, March 13, 2009

calling dynamic domain finders in Grails

Had the need today for calling a Grails domain class finder method outside of the normal artifact setup. There's a singleton I wanted to write to cache certain hunks of data from the database. It'd be very convenient to have access to those domain classes to save me the pain of writing boilerplate Hibernate config and EJB classes. So, here's what ended up working.

I injected a GrailsApplication reference into my bean and created a closure that I passed to a new Groovy Timer instance. Inside the closure I'm able to invoke the dynamic finders on the domain classes because I can fetch a new instance this way:

def person_class = grailsApplication.getArtefact("Domain","Person")
def person_instance = person_class.newInstance()


That's simple enough. To actually call the finders (in this case "list"), the next step was:

def results = person_class.metaClass.invokeStaticMethod(person_instance,'list',null)


Next plans are to create a generic way of exposing the dynamic methods so that any Groovy class in the app has access to them.

Thursday, March 12, 2009

Maven, Grails and Metro -- it works

I was really stoked about the maven-grails plugin when I had some time to start playing with Grails 1.1-SNAPSHOT last week. In fact, Grails has matured quite a bit since I first looked at it a little more than a year ago. Almost two years ago I wrote about an integration of NetSuite's WebServices with Ruby's SOAP4r. We've been using this integration for nearly two years and have found ways to improve our original approach. In fact, usage has significantly increased, so much so that scalability is now becoming a concern. Don't get me wrong, SOAP4r has never actually croaked on us. But it makes sense to decouple this element from the application and build a full-fledged service layer for additional in-house integrations. It's time to port the code to Java.

So, back to Grails and Maven. With over five years of Maven experience, I'm completely sold on its many benefits. I've been watching Grails and hoping that the two would integrate to a point where it's completely practical to manage a Grails project in Maven. That day is here, and it's solid. I created a prototype (love how fast it was) with Axis using Grails 1.1-SNAPSHOT. I was sold. Then, just two days ago, Grails 1.1 was finalized and became GA. Funny thing: as soon as I upgraded my project I could no longer use my prototype, because I would receive the dreaded:

java.lang.LinkageError: loader constraints violated when linking javax/xml/namespace/QName

That really made me sad, my prototype was hosed. I didn't want to start sleuthing the class dependency collision. Mike Heath suggested I check out Apache CXF and Sun's Metro, both of which appear to be more "cleanly" designed than Axis. I spent a while trying to get CXF to work, but apparently it has a bunch of jars that need to be excluded since it hosed grails:run-app (No such property: readable for class: org.springframework.core.io.Class).

Finally, I had some success with Metro in Maven and Grails. I regenerated the NetSuite classes via wsimport, and only had to add jaxws-rt and jaxws-tools to my dependencies (this was helpful, https://metro.dev.java.net/guide/Using_JAX_WS_from_Maven.html). For now, I'm up and running and looking forward to more Grails development.
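
For a feel of where that leaves the client code, here's a minimal, hypothetical sketch of a Metro/JAX-WS call. The NetSuiteService and NetSuitePortType names are assumptions modeled on typical wsimport output (the real names come from the WSDL), and the endpoint URL is a placeholder.

 import javax.xml.ws.BindingProvider;
 
 public class NetSuiteClientSketch {
 
   public static void main(String[] args) {
     // Hypothetical wsimport-generated service and port classes; the actual
     // names depend on the WSDL.
     NetSuiteService service = new NetSuiteService();
     NetSuitePortType port = service.getNetSuitePort();
 
     // Point the client at an environment-specific endpoint (placeholder URL)
     ((BindingProvider) port).getRequestContext().put(
         BindingProvider.ENDPOINT_ADDRESS_PROPERTY,
         "https://example.invalid/netsuite/endpoint");
 
     // From here, operations on 'port' are plain method calls that Metro
     // marshals into SOAP requests.
   }
 }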

Thursday, November 20, 2008

RESTlet with Portlet in Liferay

Liferay allows authentication plugins in order to flexibly accommodate implementations that need more than what comes out-of-the-box. Incidentally, a portlet we have been working on has some rules governing its render state. There are situations where it shouldn't be rendered due to a blacklisting strategy in our requirements. Unfortunately, Liferay's permissions are strictly based on whitelists.

So, how do you avoid rendering the portlet for one group of people while rendering it for others using only a whitelist? The logic for who should see it is purely internal to the portlet, and it made sense (for simplicity and to respect separation of concerns) to expose a RESTlet for consumption by our authentication module. The authentication module in turn puts people in the proper groups for various community and portlet access. Setup was incredibly simple, and it didn't take much more effort to enable JPA transaction support with this class:



// Imports assume the Restlet 1.x API and log4j, which match the usage below.
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;

import org.apache.log4j.Logger;
import org.restlet.Restlet;
import org.restlet.data.Request;
import org.restlet.data.Response;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.orm.jpa.EntityManagerHolder;
import org.springframework.transaction.support.TransactionSynchronizationManager;

/**
 * Enables JPA transactional support for subclassed Restlets.
 */
public abstract class AbstractJpaRestlet extends Restlet {

    private static final Logger logger = Logger.getLogger(AbstractJpaRestlet.class);

    @Autowired
    private EntityManagerFactory emf;

    /**
     * Handler for Restlet requests and responses. Implementing this method
     * will ensure db connectivity and transaction support with the DAOs.
     *
     * @param req  incoming Request
     * @param resp outgoing Response
     */
    public abstract void doHandle(Request req, Response resp);

    @Override
    public void handle(Request request, Response response) {
        // Bind an EntityManager to the current thread so the DAOs used by
        // doHandle() share a single persistence context for this request.
        EntityManager em = emf.createEntityManager();
        TransactionSynchronizationManager.bindResource(emf, new EntityManagerHolder(em));

        try {
            doHandle(request, response);
        } catch (Throwable t) {
            logger.error(this, t);
            // RestletException is a project-local wrapper (not shown here)
            throw new RestletException(t.getMessage());
        } finally {
            // Unbind exactly once and close the EntityManager; unbinding in both
            // the catch and finally blocks would trigger an IllegalStateException.
            TransactionSynchronizationManager.unbindResource(emf);
            em.close();
        }
    }
}
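
To give a sense of how this gets used, here's a hypothetical concrete subclass. The class name, route attribute and payload are invented for illustration and assume the Restlet 1.x API:

 import org.restlet.data.MediaType;
 import org.restlet.data.Request;
 import org.restlet.data.Response;
 
 /**
  * Hypothetical subclass: answers whether a given screen name is blacklisted
  * so the authentication module can decide which groups to assign.
  */
 public class BlacklistCheckRestlet extends AbstractJpaRestlet {
 
   @Override
   public void doHandle(Request req, Response resp) {
     // Assumes the Restlet was attached to a route like /blacklist/{screenName}
     String screenName = (String) req.getAttributes().get("screenName");
 
     // A DAO lookup would go here; handle() has already bound an EntityManager
     // to the current thread, so JPA access works inside this method.
     boolean blacklisted = false; // placeholder result
 
     resp.setEntity(Boolean.toString(blacklisted), MediaType.TEXT_PLAIN);
   }
 }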

This is my first RESTlet and I'd be interested in any feedback or pointers in this approach. I'm quite happy with how fast it was to code up. There's only one implementation of this class at this point, but the pattern is very simple and allows for quick future expansion as we need to expose more data.