Rblog: When I started this blog (2004) it was to document Gentoo Linux experiences and *nix adventures. Then it turned into posts regarding software development challenges and other findings. These days I mostly tweet (rollinsruss) and my posts are infrequent.<br />
<br />
<b>...so the Kindle just works out better for me</b> (2011-03-07)<br />
I can't believe how long this post turned out to be. Yikes. If you don't care about what shaped my evaluation criteria, feel free to skip to the <a href="#kindle_won">conclusion</a>.<br />
<br />
<h4>It's true, I owned a Casio PDA</h4>I recently decided to jump into the ebook market. I've been following it for quite some time but could never justify being an early adopter with the Sony line and figured I'd just wait it out. I have many fond memories of getting involved with Project Gutenberg back in 2002 and discovering the immense library of public domain texts. At the time I read exclusively on a <a href="http://en.wikipedia.org/wiki/Casio_Cassiopeia#Cassiopeia_E-125">Cassiopeia E-125</a> but my commitment to ebooks waned as I upgraded devices and became far more involved in pursuing my degree.<br />
<br />
<h4>Then iOS happened and Apple introduced their bookstore</h4>I was initially quite interested in the concept of reading on my iPhone, again harkening back to the Casio days when I was traveling often for work and reading quite a bit. I bought a book on a whim, being out of other reading material, when coming back from a business trip in Oct 2010. I <a href="http://www.goodreads.com/review/show/120656069">loved the book</a> I read and I loved the convenience of my iPhone. But, quite frankly, it took its toll physically on my eyes and made me wish swipe had never been invented. My lasting impression was how much of a hassle it was just to turn the pages, over and over and over. <br />
<br />
<h4>Eink contenders aplenty, turns out I want an ecosystem too</h4>For a long time I followed what Sony was doing and watched various other Chinese manufacturers make eink-based device announcements. However, I was never motivated to purchase because I realized (over several years of following) that <b>the hardware was only half the solution</b> for my ideal setup. While I loved reading on the Casio, it wasn't particularly enjoyable loading my ebooks from <a href="http://www.gutenberg.org/">Project Gutenberg</a> and it got messy managing things when my library became large (not to mention the lack of availability of new publications). <br />
Bottom line, I finally think the market is in a mature-enough place (though far from ideal maturity IMO) that I didn't consider anything other than the <a href="http://www.amazon.com/kindle">Kindle 3</a> or the <a href="http://www.barnesandnoble.com/nook/index.asp?PID=34323&cds2Pid=35700#logo">Nook</a>. <br />
<br />
<h4>Grading criteria</h4>I've been patient, for a long time, and have been mentally recording a healthy list of criteria that I wanted to measure. Yes, there are dozens of reviews on both of these devices, but I frankly wasn't satisfied with <a href="http://goo.gl/Ds5cX">what I'd read</a> because I wanted a hands-on, personal, thorough evaluation for just my list of requirements. Thus, the following opinion/analysis really only satisfies what I care about and, of course, YMMV. What I wanted to evaluate, in no particular order:<br />
<ul><li>The Feel<br />
The Kindle is lighter than the Nook, but that didn't make it an immediate winner. I liked the extra weight on the Nook that made it feel more <a href="http://goo.gl/UZqU1">handleable</a> (is that a word?). However, with the extra weight, the Nook felt slightly more fragile and I noticed I was more careful when carrying it while walking and reading or doing other activities rather than just staying put and reading. The Kindle, though lighter, also seemed more robust and resilient. <br />
<b>Winner: Kindle<br />
</b></li>
<li><a href="http://goo.gl/sxxgM">Affordance</a> and Input<br />
(in terms of how intuitive it was to use the device both visually and non-visually and also how intuitive the initial experience was)<br />
Kindle: The page turning buttons are slimmer than the Nook's but (again, IMO) better defined both physically and visually. The keyboard and 5-way button provide immediate input and tactile response. The overall experience of getting to know, and getting comfortable with, the device was very natural to me. <br />
Nook: The Nook navigation buttons have little visual separation other than the large arrows, but physically the user can feel the difference due to a raised bump. On more than one occasion I inadvertently navigated in the wrong direction. After a while I got used to it and I liked how much wider the buttons were than the Kindle's, though with the additional width the user loses grip area. I did not like the LCD navigation, and maybe that's because I tested the Kindle first, but I found the navigation slow (in response to gestures), slightly confusing (scrolling options) and generally less appealing. Plus, the backlight is a strong contrast to the placid eink display for my eyes (yes, the backlight dims very quickly, but the combination just didn't work for me). <br />
<b>Winner: Kindle</b><br />
<br />
</li>
<li>Instapaper support<br />
I like <a href="http://www.instapaper.com/">Instapaper</a> a lot. It saves me time. It removes content distractions. It lets me collect stuff that's not important to read immediately, but important enough that I want to read it when I'm ready (meaning, when I have more time). It's convenient (works in all browsers), works on multiple devices (iOS, Android, Blackberry) and is simple to use (one click). So, if I want to read my Instapaper content on the Kindle or Nook, what are my options? Just Kindle.<br />
<b>Winner: Kindle</b> <br />
</li>
<li>File archiving<br />
Both B&N and Amazon have areas where the user can download previously purchased content. There was, at one time, legitimate concern about Amazon <a href="http://en.wikipedia.org/wiki/Kindle#Remote_content_removal">exercising the remote kill switch</a> (as it did once), but under terms of the <a href="http://goo.gl/uxOyn">suit settlement</a> they agreed that it will not happen in the future. <br />
<b>Winner: Tie</b><br />
</li>
<li>Notes and highlights (I like to annotate, not all the time, but frequently)<br />
Kindle: Notes can be multi-lined and added to all content. User moves cursor and starts typing.<br />
Nook: Notes are limited to a single line and not available for PDF content. User awakens LCD, selects menu option, selects another menu option, move cursor with d-pad, selects menu option, moves cursor, selects menu option, begins typing note.<br />
<b>Winner: Kindle</b> <br />
</li>
<li>API (since I like to play with code)<br />
Kindle: KDK available <a href="https://kdk.amazon.com/">here</a><br />
Nook: No API or development kit available (though rumor has it the color version will soon be unlocked for the full Android Marketplace)<br />
<b>Winner: Kindle</b><br />
</li>
<li>Performance<br />
Performance was pretty good in terms of rendering and input responsiveness. The only time I noticed a difference was when comparing menu navigation, but since the menu approach was unique to each device, I'm not going to compare menu performance in conjunction with general use. The eink refresh was fast enough to never annoy me and they both seemed nearly identical in that regard.<br />
<b>Winner: Tie</b><br />
</li>
<li>Lending<br />
From what I understand lending is currently subject to publisher approval, so I didn't notice a unique advantage comparing one device to the other. Both vendors allow a single-use, 14-day lend on publisher-approved books. <br />
<b>Winner: Tie (user is equally hosed regardless of vendor, sigh)</b><br />
</li>
<li>PDF Support<br />
Turns out this feature became more important to me the more I thought about it. There are many times I have technical PDFs open on my desktop that I just don't get around to reading in a timely manner. After testing two of the latest (<a href="http://net.educause.edu/ir/library/pdf/ERM0951.pdf">this</a> and <a href="http://invensense.com/mems/gyro/documents/whitepapers/InvenSense-MEMSMotionProcessing-ConsumerProducts-3DUIWhitepaper-031210.pdf">this</a>), I was sold on being able to not only read these but also annotate them.<br />
Kindle: Multiple rendering options to adjust zoom and device orientation. Five contrast settings. Supports notes and highlights. Graphics properly rendered, layout preserved. Overall, very readable and usable.<br />
Nook: Supports three different text fonts and six text sizes. Mangled embedded graphics in my evaluation. Subject to layout mishandling and text injections (first PDF was very difficult to follow). No annotation support or additional rendering options. One time I got an error indicating I needed to force close the Activity Reader when I was navigating a PDF.<br />
<b>Winner: Kindle</b><br />
</li>
<li>Extensibility<br />
Kindle: No user extendable memory or replaceable battery<br />
Nook: User replaceable battery and microSD slot located underneath rear cover (thanks Cary A. and Jeff J. on this!)<br />
<b>Winner: Nook</b><br />
</li>
<li>Battery life<br />
I wasn't sure how I was going to accurately measure this. Turns out I didn't have to worry about it. I've had both of these devices for several weeks now (time of this post) and have been using the Kindle quite a bit more than the Nook at this point. I charged both devices before my evaluation. At no time during the past several weeks have I charged either device, and I spent only minimal time syncing via USB. Thus, after finishing my first <a href="http://www.goodreads.com/book/show/68427.Elantris">book</a> on the Kindle I was ready to read something of equivalent length on the Nook to test it out. Unfortunately, the battery was nearly depleted. And that was after being powered-off, nearly every day, for more than a week. I finished up the lengthy user manual and settled on a book to try, then powered down the device. I turned it on the following day to be greeted with the message that it needed to be charged (which it is doing at this time). Maybe I have a defective battery? The Kindle, under much heavier comparative use, appears to be at slightly less than 50%. Scientific approach? No, I wussed out; this is good enough for me.<br />
<b>Winner: Kindle</b> </li>
</ul><h4><a name="kindle_won"><br />
Everything said and done</a></h4>I enjoyed comparing both devices and I can clearly see advantages for both. If I had a family member who was a book-a-week reader, aware and on top of recent releases, visited the bookstore often and was happy with an eink device, I'd recommend the Nook. But, overall, I found that the Kindle was the superior choice for my needs. Quite frankly, Whispernet became a selling point for me (after I had already purchased the Kindle). I had no idea how convenient it'd be to simply email stuff away (for free) and have it arrive on my Kindle ready to go. That's turned out to be very convenient and was icing on the cake. There were sprinkles too, like the immediate dictionary, being able to tweet/share passages, openlibrary.org integration and line-spacing options...so the Kindle just works out better for me.<br />
<br />
<b>Git, Gerrit, Redmine, gitflow: An ideal software development and release management setup</b> (2011-02-15)<br />
I've been using Subversion for more than 5 years and have been following Git's development, adoption and maturation during that entire time. At work, each time we would create a new repo for a project, I cringed at the thought and questioned, "but, we could be doing this in Git..." However, given the development environment as well as business context (not a software shop), it just wasn't suitable to immediately jump into Git. So what did it take to make the jump? Some rather significant dissonant (and concurrent) development tasks, branch management issues and several deploys that cost more time (and thus, money) than they should have.<br />
<br />
Our primary requirements for the SCM migration included the following:<br />
<ol><li>Central repo to act as an on-site, internally maintained, redundant source authority with replication capabilities</li>
<li>Integration into Redmine</li>
<li>A well-defined methodology for maintaining several projects through multiple concurrent phases</li>
<li>Mature CLI toolset</li>
<li>Hudson/Jenkins support</li>
<li>IntelliJ integration</li>
<li>SCM should be open source with a vibrant community</li>
</ol>I knew most of these requirements <i>could</i> be met with Git, but didn't want to make a blind choice and simply choose Git as the new internal de-facto standard. I've had some simple experience with Mercurial in the past and even spent some time doing some projects in Darcs several years ago. I have a good friend whose company (software shop) moved to Mercurial and he offered their arguments in support of choosing Mercurial. Frankly, I liked a lot about what I saw in Mercurial and I didn't ever have any negative experience when toying around with it. Though, truth be told, the projects I conjured up for trying it out were very simplistic and never moved outside of my local development environment. During my investigation I found stackoverflow.com to be immensely helpful in identifying specific differences and perceived strengths and weaknesses.<br />
<br />
<h2>Git</h2>In the end, after all the reading, playing, comparing--I found that Git just jibed with me and it satisfied our requirements (4,5,6,7). Darcs just didn't fit, and Mercurial looks great; I have nothing negative against either project. Plus, they don't have <a href="http://code.google.com/p/gerrit/">Gerrit</a>. Kudos to the <a href="http://javaposse.com/">Java Posse</a> for a recent podcast in which Gerrit was discussed; the timing happened to be critical, as it was in the middle of my research, and it sounded like just what we needed to address a recent issue of lack of code review. Even though I was already heavily leaning toward Git at this point, Gerrit clearly brought additional benefit to the migration and methodology changes we were considering.<br />
<br />
<h2>Gerrit</h2>Gerrit is wonderfully simple to get set up and running. I very much appreciate that it's quite self-contained and offers OpenID support to avoid the grief of maintaining local user accounts. In fact, Gerrit will not only help facilitate code review and branch maintenance, it can act as our centralized repo while abstracting OS-level duties (specifically, user management) when utilizing SSH as our transport protocol. Gerrit's permissions model is more than adequate for our needs on a per-project basis, but is simple enough to set up and get going. Thus, Gerrit easily satisfied requirement #1 and gave us the extra bang-for-the-buck with its inherent review functionality.<br />
<br />
<h2>Redmine</h2>Redmine (1.0.1) was easy to modify for our situation and it made the most sense to <a href="http://www.redmine.org/projects/redmine/wiki/RedmineRepositories#Setting-up-a-mirror-repository-shortcut-tracking-branches">replicate the central repo as a read-only repository locally available to the Redmine instance.</a> Once the clone completed, the only remaining task was setting up a periodic cronjob for updating (git fetch --all) the repo.<br />
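The mirror-plus-cron setup above can be sketched end to end with plain git. Everything below is a local stand-in (temp directories instead of the real Gerrit remote and Redmine paths), so treat the paths and the cron line as hypothetical:

```shell
set -e
# Local stand-in for the Redmine mirror setup: a "central" bare repo plays the
# role of the Gerrit-hosted authority, and a --mirror clone plays the Redmine copy.
central=$(mktemp -d)/central.git
mirror=$(mktemp -d)/mirror.git
git init -q --bare "$central"

# Seed the central repo with one commit via a throwaway working clone.
work=$(mktemp -d)
git clone -q "$central" "$work" 2>/dev/null
(cd "$work" \
  && echo hello > README \
  && git add README \
  && git -c user.email=dev@example.com -c user.name=dev commit -qm "initial" \
  && git push -q origin HEAD)

# Read-only mirror for Redmine: tracks every ref in the central repo.
git clone -q --mirror "$central" "$mirror" 2>/dev/null

# This is the command the periodic cron job runs against the mirror, e.g.:
#   */15 * * * * redmine cd /var/redmine/repos/project.git && git fetch --all --quiet
git -C "$mirror" fetch --all --quiet
echo "mirrored commits: $(git -C "$mirror" rev-list --count --all)"
```

Because the clone is `--mirror` (bare, all refs), the periodic `git fetch --all` is the only maintenance the Redmine copy needs.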
<br />
<h2>gitflow</h2><a href="https://github.com/nvie/gitflow">gitflow</a> was the answer to requirement #3. One of the beauties of Git (and DVCS in general) is the fundamental capability of determining your own release cycle/phase/management process. In some cases when dealing with a DVCS setup it means there's a lot of rope to hang yourself. Our previous release practice struggled, from time to time, to stay 100% consistent. I stumbled on gitflow only after deciding on Gerrit, and it made a lot of sense to embrace it not only for what it provides out of the box (helpful bash scripts) but also for the <a href="http://nvie.com/posts/a-successful-git-branching-model/">well defined development convention</a> that it helps to enforce. Actually, it's not just that gitflow helps to enforce process; rather, it eases process implementation. Turns out that its process definition matches, and enhances, what we've already (mostly) been doing. <a href="http://nvie.com/about/">Vincent</a> deserves a heap of credit and has our gratitude.<br />
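gitflow's scripts are a thin wrapper over ordinary git commands. As a rough local sketch (hypothetical repo and branch names), one feature cycle under the develop/master model amounts to:

```shell
set -e
# Minimal local sketch of one gitflow feature cycle, approximated with plain git.
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=dev@example.com -c user.name=dev commit -q --allow-empty -m "initial"

# develop: the permanent integration branch in the gitflow model
git checkout -q -b develop

# roughly `git flow feature start login`: branch off develop
git checkout -q -b feature/login develop
git -c user.email=dev@example.com -c user.name=dev commit -q --allow-empty -m "add login"

# roughly `git flow feature finish login`: merge back with --no-ff
# (keeps the feature's commits grouped in history), then delete the branch
git checkout -q develop
git merge -q --no-ff -m "merge feature/login" feature/login
git branch -q -d feature/login
echo "develop commits: $(git rev-list --count develop)"
```

The real scripts add safety checks plus the release and hotfix flows on top of the same pattern, which is why they're worth adopting rather than reimplementing.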
<br />
<h2>CI</h2>Hudson/Jenkins (Oracle is being a hoser about this, IMO) has two plugin options:<br />
Gerrit: http://wiki.jenkins-ci.org/display/JENKINS/Gerrit+Plugin<br />
Git: http://wiki.jenkins-ci.org/display/JENKINS/Git+Plugin<br />
<br />
I really like the idea of a push to Gerrit firing off CI tasks in Jenkins, which then verifies or fails the changeset depending on the results. However, integrating that plugin during the initial migration turned out to be a lower priority. At the very least we simply swapped out the current Subversion setup for the normal Git plugin.<br />
<br />
<h2>IntelliJ</h2>While I'm in a shell nearly 100% of the time, on occasion it's convenient to have some SCM <a href="http://blogs.jetbrains.com/idea/tag/git/">support in IntelliJ 10</a>. However, I did run into some issues with merging in IntelliJ and spent some time looking into various merge tools. My emacs blood revolted when I chose the Perforce merge tool over emerge (which I liked a lot better than opendiff, Meld or diffmerge). Thanks to <a href="http://www.andymcintosh.com/?p=33">Andy McIntosh for his tips</a>.<br />
<br />
<h2>Conclusion</h2><br />
After walking through multiple discussions with several team members, the benefits and strengths this setup provides over our current Subversion-based process have been very clear to them. So as of this post, the first major project (72k LOC) has been migrated. This setup feels right, and it's good to see additional corroboration in the community (thanks, <a href="http://alblue.bandlem.com/2011/02/someday.html">AlBlue</a>).<br />
<br />
<b>Mule JMS message routing using an external ActiveMQ instance</b> (2009-12-22)<br />
I have a scenario where I'd like Mule to monitor an incoming queue, filter the messages and route to the appropriate outgoing queue--using a separate ActiveMQ instance instead of the optional embedded one. While perusing Google results I didn't find a source that explicitly showed how to accomplish this. So, using what information I did find from indirect examples and other documentation, this is what I came up with.<br />
First, the connection factory Spring bean:<br />
<pre style="background-image: URL(https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiMmkJs2ez8LGiHE1xXwkipmyQt9D03OFvnanLSsdVJBTmb8O71UBsHfEWTzpqVT2lbRFuyUD7I_R_7l0gcy7I9uWx9SOrSfgkIR8cpxGqgiTA4AucT6rWF54A_4pqoWJ6BIsQHeg/s320/codebg.gif); background: #f0f0f0; border: 1px dashed #CCCCCC; color: black; font-family: arial; font-size: 12px; height: auto; line-height: 20px; overflow: auto; padding: 0px; text-align: left; width: 99%;"><code style="color: black; word-wrap: normal;"> <spring:bean name="activeMQConnectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
<spring:property name="brokerURL" value="tcp://${esb.jms.endpoint}"/>
</spring:bean>
</code></pre><br />
Since I have Maven filtering my resources, the actual tcp URI will be replaced with the appropriate environmental property--in my case, being in an active development environment and using ActiveMQ 5.3.0, the filtered value would be "tcp://localhost:61616".<br />
<br />
Next is the connector definition:<br />
<pre style="background-image: URL(https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiMmkJs2ez8LGiHE1xXwkipmyQt9D03OFvnanLSsdVJBTmb8O71UBsHfEWTzpqVT2lbRFuyUD7I_R_7l0gcy7I9uWx9SOrSfgkIR8cpxGqgiTA4AucT6rWF54A_4pqoWJ6BIsQHeg/s320/codebg.gif); background: #f0f0f0; border: 1px dashed #CCCCCC; color: black; font-family: arial; font-size: 12px; height: auto; line-height: 20px; overflow: auto; padding: 0px; text-align: left; width: 99%;"><code style="color: black; word-wrap: normal;"> <jms:connector name="JMSConnector"
specification="1.1"
persistentDelivery="true"
                   connectionFactory-ref="activeMQConnectionFactory" />
</code></pre><br />
And finally, the endpoint:<br />
<pre style="background-image: URL(https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiMmkJs2ez8LGiHE1xXwkipmyQt9D03OFvnanLSsdVJBTmb8O71UBsHfEWTzpqVT2lbRFuyUD7I_R_7l0gcy7I9uWx9SOrSfgkIR8cpxGqgiTA4AucT6rWF54A_4pqoWJ6BIsQHeg/s320/codebg.gif); background: #f0f0f0; border: 1px dashed #CCCCCC; color: black; font-family: arial; font-size: 12px; height: auto; line-height: 20px; overflow: auto; padding: 0px; text-align: left; width: 99%;"><code style="color: black; word-wrap: normal;"> <jms:endpoint name="asynchIn" queue="asynch.in"/>
</code></pre><br />
The service definition for this simple case is:<br />
<br />
<pre style="background-image: URL(https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiMmkJs2ez8LGiHE1xXwkipmyQt9D03OFvnanLSsdVJBTmb8O71UBsHfEWTzpqVT2lbRFuyUD7I_R_7l0gcy7I9uWx9SOrSfgkIR8cpxGqgiTA4AucT6rWF54A_4pqoWJ6BIsQHeg/s320/codebg.gif); background: #f0f0f0; border: 1px dashed #CCCCCC; color: black; font-family: arial; font-size: 12px; height: auto; line-height: 20px; overflow: auto; padding: 0px; text-align: left; width: 99%;"><code style="color: black; word-wrap: normal;"> <service name="Asynchronous processing">
<inbound>
<inbound-endpoint ref="asynchIn" synchronous="false"/>
<wire-tap-router>
<stdio:outbound-endpoint system="OUT" name="debugTrace" connector-ref="SysOut"/>
</wire-tap-router>
</inbound>
<outbound>
<filtering-router>
<jms:outbound-endpoint queue="test.out" />
<message-property-filter pattern="JMSType=test"/>
</filtering-router>
<filtering-router>
<jms:outbound-endpoint queue="test2.out" />
<message-property-filter pattern="JMSType=test2"/>
</filtering-router>
</outbound>
</service>
</code></pre><br />
Notice that the inbound definition contains a wire-tap-router reference; this makes it much easier (IMO) to trace the message flow during development while defining the routing rules and generally tweaking things. Mule will send the message to sysout and also apply filter routing. <br />
<br />
The filters generally speak for themselves; in the cases above they route based on the JMS message type.<br />
<br />
To test the setup with a vanilla ActiveMQ install (stomp enabled and the stomp gem installed), this quick Ruby script works quite handily:<br />
<pre style="background-image: URL(https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiMmkJs2ez8LGiHE1xXwkipmyQt9D03OFvnanLSsdVJBTmb8O71UBsHfEWTzpqVT2lbRFuyUD7I_R_7l0gcy7I9uWx9SOrSfgkIR8cpxGqgiTA4AucT6rWF54A_4pqoWJ6BIsQHeg/s320/codebg.gif); background: #f0f0f0; border: 1px dashed #CCCCCC; color: black; font-family: arial; font-size: 12px; height: auto; line-height: 20px; overflow: auto; padding: 0px; text-align: left; width: 99%;"><code style="color: black; word-wrap: normal;"> require 'stomp'
Stomp::Client.open("stomp://localhost:61612").send("/queue/asynch.in","\n\n\n!!!!!!!!!!!!!!\ntest message\n!!!!!!!!!!!!!!!",{:persistent => true, :type => 'test'})
</code></pre><br />
Mule's wire-tap-router should dump the message:<br />
<pre style="background-image: URL(https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiMmkJs2ez8LGiHE1xXwkipmyQt9D03OFvnanLSsdVJBTmb8O71UBsHfEWTzpqVT2lbRFuyUD7I_R_7l0gcy7I9uWx9SOrSfgkIR8cpxGqgiTA4AucT6rWF54A_4pqoWJ6BIsQHeg/s320/codebg.gif); background: #f0f0f0; border: 1px dashed #CCCCCC; color: black; font-family: arial; font-size: 12px; height: auto; line-height: 20px; overflow: auto; padding: 0px; text-align: left; width: 99%;"><code style="color: black; word-wrap: normal;"> system out:ActiveMQBytesMessage {commandId = 3, responseRequired = false, messageId = ID:vsbeta-45609-1261505977249-4:104:-1:1:1, originalDestination = null, originalTransactionId = null, producerId = ID:vsbeta-45609-1261505977249-4:104:-1:1, destination = queue://asynch.in, transactionId = null, expiration = 0, timestamp = 1261518816945, arrival = 0, brokerInTime = 1261518816946, brokerOutTime = 1261518816946, correlationId = null, replyTo = null, persistent = true, type = test, priority = 0, groupID = null, groupSequence = 0, targetConsumerId = null, compressed = false, userID = null, content = org.apache.activemq.util.ByteSequence@7c66f0, marshalledProperties = org.apache.activemq.util.ByteSequence@4a4890, dataStructure = null, redeliveryCounter = 0, size = 0, properties = {content-type=text/plain; charset=UTF-8}, readOnlyProperties = true, readOnlyBody = true, droppable = false} ActiveMQBytesMessage{ bytesOut = null, dataOut = null, dataIn = null }INFO 2009-12-22 14:53:37,033 [JMSConnector.dispatcher.1] org.mule.transport.jms.JmsMessageDispatcher: Connected: endpoint.outbound.jms://test.out
</code></pre><br />
ActiveMQ's admin screen should show pending messages inside of the test.out or test2.out queues. Messages could be consumed via Stomp:<br />
<pre style="background-image: URL(https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiMmkJs2ez8LGiHE1xXwkipmyQt9D03OFvnanLSsdVJBTmb8O71UBsHfEWTzpqVT2lbRFuyUD7I_R_7l0gcy7I9uWx9SOrSfgkIR8cpxGqgiTA4AucT6rWF54A_4pqoWJ6BIsQHeg/s320/codebg.gif); background: #f0f0f0; border: 1px dashed #CCCCCC; color: black; font-family: arial; font-size: 12px; height: auto; line-height: 20px; overflow: auto; padding: 0px; text-align: left; width: 99%;"><code style="color: black; word-wrap: normal;"> require 'stomp'
client = Stomp::Client.open("stomp://localhost:61612")
client.subscribe("/queue/test.out"){|message| puts "consuming #{message.body} with properties #{message.headers.inspect}"}
</code></pre><br />
producing output:<br />
<pre style="background-image: URL(https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiMmkJs2ez8LGiHE1xXwkipmyQt9D03OFvnanLSsdVJBTmb8O71UBsHfEWTzpqVT2lbRFuyUD7I_R_7l0gcy7I9uWx9SOrSfgkIR8cpxGqgiTA4AucT6rWF54A_4pqoWJ6BIsQHeg/s320/codebg.gif); background: #f0f0f0; border: 1px dashed #CCCCCC; color: black; font-family: arial; font-size: 12px; height: auto; line-height: 20px; overflow: auto; padding: 0px; text-align: left; width: 99%;"><code style="color: black; word-wrap: normal;"> consuming
!!!!!!!!!!!!!!
test message
!!!!!!!!!!!!!!! with properties {"MULE_ORIGINATING_ENDPOINT"=>"asynchIn", "content_type"=>"text/plain; charset=UTF-8", "MULE_CORRELATION_ID"=>"0f2295b0-ef45-11de-856a-538c667e24a7", "expires"=>"0", "timestamp"=>"1261519075346", "destination"=>"/queue/test.out", "message-id"=>"ID:Rohirrim.local-51739-1261518810672-0:0:7:1:1", "priority"=>"4", "MULE_SESSION"=>"SUQ9MGYyMjk1YjEtZWY0NS0xMWRlLTg1NmEtNTM4YzY2N2UyNGE3", "content-length"=>"46", "MULE_MESSAGE_ID"=>"ID:vsbeta-4
5609-1261505977249-4:117:-1:1:1", "correlation-id"=>"0f2295b0-ef45-11de-856a-538c667e24a7", "MULE_ENCODING"=>"UTF-8", "MULE_ENDPOINT"=>"jms://test.out"}=> nil
</code></pre><br />
This approach enables a single queue to collect asynchronous message requests and leverages Mule's filtering-routers to decouple the producer and consumer. The requests go on the ESB, Mule defines where they should go, and the service components process the request independent of the requester.<br />
<br />
<b>We chose Mule for our ESB</b> (2009-12-10)<br />
After careful consideration of multiple open-source ESB products, <a href="http://mulesoft.org/">Mule</a> made the most sense for implementation into our data information infrastructure. The other major contenders were OpenESB (GlassFish v3) and FUSE (Apache stack); having an open-source solution was a very strict requirement. Mule is very component oriented and can be quickly set up for integration with existing services (of which we have several). Furthermore, it is Spring-based and supports Maven -- which fits right into our existing application development methodology.<br /><br />Mule has many strong points, the most relevant for our decision being:<br /><ul><li>very mature, with years of development and major deliveries</li><li>a solid installation base across many significant enterprises</li><li>well-documented with excellent examples and diagrams</li><li>open-source (CPAL 1.0)</li><li>a commercial support model with additional tools (service registry, monitoring)</li><li>flexible configuration and instantiation options</li><li>a wide array of built-in and downloadable modules</li><li>a top choice in the <a href="http://tinyurl.com/yhfvq2w">DTS of Utah ESB comparison</a></li><li>an excellent testing framework</li></ul><br />I read a lot on OpenESB, watched presentations and then checked out tutorials. Simply put, OpenESB appears quite overkill for our specific needs. 
Furthermore, I'm not a fan of vendor lock-in and OpenESB appears to be very heavily biased towards NetBeans (which isn't really a surprise).<br /><br />As for the Apache side of things, I spent a fair amount of time reading comparisons of ServiceMix and Mule. The favor was typically weighted in Mule's favor, and the one major viable commercial support option for ServiceMix was through Fuse. Fuse appears to have some good documentation, but it wasn't nearly as in-depth as Mule's. Also, going this route appeared to require more "gluing" using Camel, and ServiceMix is more specifically oriented to JBI--a trait it shares with OpenESB.<br /><br />Mule made it very easy to get up and running quickly. Within minutes I had implemented a REST service component using an existing endpoint. After some time reading more documentation I recognized a good case for using the template URI pattern and exposed another two REST endpoints in a separate service. Finally, with a Maven archetype, it was very easy to generate a Mule project and tweak it to support multiple environment deployments (dev, test, beta, prod, etc.). I created a simple bootloader to start up the Mule context and register a shutdown hook with the JVM. This approach leverages Maven's capabilities in property filtering and distribution assembly. Thus, we can now create standalone distributions with full Maven dependency support (avoiding the hassle of updating MULE_HOME/lib/user), integrated testing, custom property filtering, and artifact assembly for multi-environment support.<br />
<br />
<b>calling dynamic domain finders in Grails</b> (2009-03-13)<br />
Had the need today for calling a Grails domain class finder method outside of the normal artifact setup. There's a singleton I wanted to write to cache certain hunks of data from the database. 
It'd be very convenient to have access to those domain classes to save me the pain of writing boilerplate Hibernate config and EJB classes. So, here's what ended up working.<br /><br />I injected a GrailsApplication reference into my bean and created a closure that I passed to a new <a href="http://groovy.codehaus.org/groovy-jdk/java/util/Timer.html">Groovy Timer</a> instance. Inside the closure I'm able to invoke the dynamic finders on the domain classes because I can fetch a new instance this way:<br /><pre>
def person_class = grailsApplication.getArtefact("Domain","Person")
def person_instance = person_class.newInstance()
</pre><br />That's simple enough; to actually call the finders (in this case "list") the next step was:<br /><pre>
def results = person_class.metaClass.invokeStaticMethod(person_instance,'list',null)
</pre><br />Next plans are to create a generic way of exposing the ability to call the dynamic methods so that any Groovy class in the app has access to them.<br />
<br />
<b>Maven, Grails and Metro -- it works</b> (2009-03-12)<br />
I was really stoked about the maven-grails plugin when I had some time to start playing with Grails 1.1-SNAPSHOT last week. In fact, Grails has matured quite a bit since I first looked at it a little more than a year ago. Almost two years ago I wrote about an integration of NetSuite's WebServices with Ruby's SOAP4r. We've been using this integration for nearly two years and have found ways to improve our original approach. In fact, usage has significantly increased, so much so that scalability is now becoming a concern. Don't get me wrong, SOAP4r has never actually croaked on us. But it makes sense to de-couple this element from the application and make a full-fledged service layer for additional in-house integrations. 
It's time to port the code to Java.<br /><br />So, back to Grails and Maven. With over five years of Maven experience I'm completely sold on its many benefits. I've been watching Grails and hoping that the two would integrate to a point where it's completely usable to manage a Grails project in Maven. That day is here, and it's solid. I created a prototype (love how fast it was) with Axis using Grails 1.1-SNAPSHOT. I was sold. Then, just two days ago, Grails 1.1 was finalized and became GA. Funny thing: as soon as I upgraded my project I could no longer use my prototype because I would receive the dreaded:<br /><br /><span style="font-style: italic;">java.lang.LinkageError: loader constraints violated when linking javax/xml/namespace/QName</span><br /><br />That really made me sad; my prototype was hosed. I didn't want to start sleuthing the class dependency collision. <a href="http://swamp.homelinux.net/blog/">Mike Heath</a> suggested I check out Apache CXF and Sun's Metro, both of which appear to be more "cleanly" designed than Axis. I spent a while trying to get CXF to work, but apparently it has a bunch of jars that need to be excluded since it hosed grails:run-app (No such property: readable for class: org.springframework.core.io.Class).<br /><br />Finally, I had some success with Metro in Maven and Grails. I regenerated the NetSuite classes via wsimport, and only had to add jaxws-rt and jaxws-tools to my dependencies (this was helpful: <a href="https://metro.dev.java.net/guide/Using_JAX_WS_from_Maven.html">https://metro.dev.java.net/guide/Using_JAX_WS_from_Maven.html</a>). For now, I'm up and running and looking forward to more Grails development.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-7643289.post-69566514283262206622008-11-20T10:14:00.004-07:002008-11-20T10:26:35.571-07:00RESTlet with Portlet in LiferayLiferay allows authentication plugins in order to flexibly accommodate implementations that need more than what comes out-of-the-box.
Incidentally, a portlet we have been working on has some rules governing its render state. There are situations where it shouldn't be rendered due to a blacklisting strategy in our requirements. Unfortunately, Liferay's permissions are strictly based on whitelists.<br /><br />So, how to not render the portlet for a group of people and do render it for others using a whitelist? Logic for who should see it is purely internal to the portlet and it made sense (in simplicity and respecting separation of concerns) to expose a RESTlet for consumption of our authentication module. The authentication module in turn puts people in the proper groups for various community and portlet access. Setup was <a href="http://temporary.name/java/index.php/spring/restlet-spring-integration">incredibly simple</a> and it didn't take much more effort to enable JPA transaction support with this class:<br /><small><br /><pre><br /><br />/**<br />* Enables JPA transactional support for subclassed Restlets.<br />*/<br />public abstract class AbstractJpaRestlet extends Restlet {<br /> private static final Logger logger = Logger.getLogger(AbstractJpaRestlet.class);<br /><br />@Autowired<br /> private EntityManagerFactory emf;<br /><br /> /**<br /> * Handler for Restlet requests and responses. 
Implementing this method<br /> * will ensure db connectivity and transaction support with the DAOs.<br /> *<br /> * @param req incoming Request<br /> * @param resp outgoing Response<br /> */<br /> public abstract void doHandle(Request req, Response resp);<br /><br /> @Override<br /> public void handle(Request request, Response response) {<br /> EntityManager em = emf.createEntityManager();<br /> TransactionSynchronizationManager.bindResource(emf, new EntityManagerHolder(em));<br /><br /> try {<br /> doHandle(request, response);<br /> } catch (Throwable t) {<br /> logger.error(this, t);<br /> throw new RestletException(t.getMessage());<br /> } finally {<br /> // unbind exactly once here; also unbinding in the catch block<br /> // would trigger a second unbind and an IllegalStateException<br /> TransactionSynchronizationManager.unbindResource(emf);<br /> em.close();<br /> }<br /> }<br />}<br /></pre><br /></small>This is my first RESTlet and I'd be interested in any feedback or pointers on this approach. I'm quite happy with how fast it was to code up. There's only one implementation of this class at this point, but the pattern is very simple and allows for quick future expansion as we need to expose more data.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-7643289.post-70607403605101937592008-11-19T14:51:00.003-07:002008-11-19T14:59:39.540-07:00Revisited: Spring IntervalJobs and scheduling in Liferay 5Update to a better way to go about Spring scheduling: I tried simply defining a destroy method on the scheduler bean instead of relying on the extended ContextLoaderListener. This never appeared to be invoked and subsequently didn't unschedule the jobs. This is quite problematic in the long run. It's not good to have duplicate jobs running at the same time; it's sloppy and bad things could happen.
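The duplicate-job hazard just described can be sketched with a tiny stub. This is not Liferay's API: the class and method names below are illustrative stand-ins for the container-wide, Quartz-backed scheduler, which outlives portlet redeploys.

```java
import java.util.ArrayList;
import java.util.List;

public class RedeployDemo {
    /** Illustrative stand-in for a container-wide scheduler that
     *  survives portlet redeploys (as Liferay's Quartz scheduler does). */
    static class StubScheduler {
        final List<String> scheduled = new ArrayList<>();
        void schedule(String job)   { scheduled.add(job); }
        void unschedule(String job) { scheduled.remove(job); }
    }

    public static void main(String[] args) {
        // Case 1: the destroy callback never fires, so the old job is
        // never unscheduled and a redeploy registers a duplicate.
        StubScheduler leaky = new StubScheduler();
        leaky.schedule("reportJob");           // first deploy
        leaky.schedule("reportJob");           // redeploy, no cleanup
        System.out.println(leaky.scheduled.size()); // 2

        // Case 2: unschedule on context destruction, then re-register.
        StubScheduler clean = new StubScheduler();
        clean.schedule("reportJob");           // first deploy
        clean.unschedule("reportJob");         // destroy hook actually runs
        clean.schedule("reportJob");           // redeploy
        System.out.println(clean.scheduled.size()); // 1
    }
}
```

The whole point of the listener approach below is to guarantee the second case.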
So simply having the destroy method was insufficient.<br /><br />Instead of extending Liferay's JobScheduler (as I suggested in the previous post), it made more sense to create a singleton POJO that then invokes com.liferay.portal.kernel.job.JobSchedulerUtil.getJobScheduler().schedule(). That way the dependency rests entirely on Liferay's Quartz integration (not that I think any of this would ever live outside of Liferay, but in the event that it does, we should be able to propagate that dependency with minor adjustments). Everything said and done (and tested), this is the scheduler:<br /><small><br /><pre><br />public class JobScheduler {<br /> private static Log _log = LogFactory.getLog(JobScheduler.class);<br /> public Set<IntervalJob> jobs = new HashSet<IntervalJob>();<br /><br /> /**<br /> * Set all of the scheduled jobs.<br /> *<br /> * @param jobs Set of jobs to schedule.<br /> */<br /> public void setJobs(Set<IntervalJob> jobs) {<br /> this.jobs = jobs;<br /> }<br /><br /> public void init() {<br /> com.liferay.portal.kernel.job.JobScheduler scheduler = JobSchedulerUtil.getJobScheduler();<br /><br /> for (IntervalJob job : jobs) {<br /> try {<br /> _log.info("Initializing " + job);<br /> scheduler.schedule(job);<br /> }<br /> catch (Exception e) {<br /> _log.error("Initialization error scheduling " + job);<br /> _log.error(e);<br /> }<br /> }<br /> }<br /><br /> public void destroy() {<br /> com.liferay.portal.kernel.job.JobScheduler scheduler = JobSchedulerUtil.getJobScheduler();<br /><br /> for (IntervalJob job : jobs) {<br /> try {<br /> _log.info("Unscheduling " + job);<br /> scheduler.unschedule(job);<br /> }<br /> catch (Exception e) {<br /> _log.error("Unscheduling error with " + job);<br /> _log.error(e);<br /> }<br /> }<br /> }<br />}<br /></pre><br /></small>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-7643289.post-31948396023356043152008-11-19T14:09:00.002-07:002008-11-19T14:15:46.678-07:00Spring IntervalJobs and scheduling in Liferay 5If you
have a Liferay portlet that requires some scheduling you can easily use Liferay's built-in Scheduler to add an IntervalJob to the job list, <a href="http://portaldevelopment.wordpress.com/2008/05/08/how-to-create-scheduler-in-portlet/">like this</a>. However, what if your IntervalJob is a Spring bean and has dependencies on other Spring beans in the portlet? Unfortunately, at the time of this writing (Liferay 5.1.2), the hot deploy code invokes the Scheduler configuration and execution before the context is initialized--which means you're up a creek when Spring is setup to be initialized with the context (happens to be my case).<br /><br />An alternative approach is to extend com.liferay.portal.job.JobSchedulerImpl with a Spring singleton and configure the jobs via Spring. While this is very flexible, the singleton is now operating outside the Liferay Quartz realm and therefore will not be subject to the lifecycle of the portlet. That is to say, when you redeploy the portlet the jobs stay scheduled. A more annoying aspect to this is that if you try to shutdown Liferay it appears to hang. Sure the log says that Coyote is stopped, but that's not the case and the process appears to be waiting on a thread. This in turn requires manually killing every time. During development this is such a pain. 
My guess, without significant research into the bowels of the Liferay Quartz integration, is that the Spring singleton hasn't been properly disposed of.<br /><br />One solution to this situation is to extend org.springframework.web.context.ContextLoaderListener with something like this:<br /><small><br /><pre><br />public class SpringSchedulerContextLoaderListener extends org.springframework.web.context.ContextLoaderListener{<br /> private static final Logger logger = Logger.getLogger(SpringSchedulerContextLoaderListener.class);<br /> public void contextInitialized(ServletContextEvent event) {<br /> super.contextInitialized(event);<br /> }<br /><br /> public void contextDestroyed(ServletContextEvent event) {<br /><br /> JobScheduler j = (JobScheduler) StaticApplicationContextHolder.getApplicationContext().getBean("jobScheduler");<br /> j.shutdown();<br /> <br /> super.contextDestroyed(event);<br /> }<br />}<br /></pre><br /></small><br /><br />This will ensure that the Spring singleton JobScheduler will unschedule the registered IntervalJobs when the context is destroyed. You're good to go once this entry replaces org.springframework.web.context.ContextLoaderListener in web.xml. <br /><br />There may be a more efficient way to do this, but for now this works.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-7643289.post-6099214047930820652008-03-31T10:32:00.004-06:002008-03-31T10:58:55.363-06:00Alfresco content management with LiferayAt work we've been pondering a better long-term solution to our content management. A year ago we gave some thought to Liferay and in fact deployed it for one of our sites. Recently we've reconsidered our approach due to some new additional requirements accompanied with a more thoughtful perspective on leveraging internal content management with future web projects. We spent last week looking at JSR-170 alternatives/companions to Liferay, which with all due respect, only provides the implementation in their Document Library. 
We wanted something all-encompassing, that is, a repository where we could manage both web, print and other electronic content. Alfresco seemed to be the best option for us. After all of us spending a week with it and having various meetings trying to define the roles of stakeholders and functionality requirements, this was my conclusive perspective with enumerated priorities:<br /><ol><br /><li> Versioning<br />This is easily satisfied with Alfresco and I was specifically impressed with the various ways to update content. I'm particularly pleased with the multiple capabilities of creating/updating content given the built-in CIFS server, FTP server, Office plug-ins and web interface. This wide array of interfaces should enable our users to begin versioning content with a limited learning curve (especially in terms of the shared drive notion). The WebProject versioning feature is very worthwhile in that it provides us with the ability to view/rollback content at any given time for each release, very helpful for auditing and liability. Lastly, their implementation of sandboxing is especially beneficial in concurrent development as each user can submit their work to workflow after sufficient authoring and testing.<br /></li><br /><li> Document Management<br />Alfresco was written primarily to manage documents, and given the aforementioned information on versioning, I think it's very capable for our needs.<br /></li><br /><li> Integration<br />I'm very pleased and excited about the ease of creating REST endpoints using Alfresco's WebScript framework. We won't have to write any extra functionality (read: additional JARs) to work with existing APIs but can rely on implementing custom WebScripts for exposing what we want, how we want. 
This is particularly useful for rapid development at any of our potential integration points, and is specifically a boon for both integration with our Rails CRM and the custom Liferay content portlet Jeff Wilson is writing.<br /><br /></li><li> Workflow<br />I think creating workflows specific to our needs will require the most work. Granted, the WCM component ships with a very basic approval workflow; we'll still need to create custom workflows once we decide how to hone our processes (and choose our deployment strategies). Depending on our needs, we may only need to define the rules in XML and forgo additional code (I believe our definitions will need to precede the investigation of additional functionality).<br /></li><br /><li> User Experience<br />Again, referencing the Versioning info above, I think this is covered. It'd be very helpful for the users to see the state/phase of workflow that a given item is in, but that appears to be a current enhancement request (per Jared).<br /></li><br /></ol><span style="font-weight: bold;">Additional Benefits</span><br /><ul><li> Search: all meta-data (including custom aspects) is indexed; with incredible ease users will be able to find content much faster than perusing through shared drives trying to remember the location of specific files.</li><br /><br /><li> Task dashboard: users are able to see what tasks they have awaiting their action (be it approval, updates, reviews, etc.)</li><br /><br /><li> SSO options are plentiful for integrating with our ActiveDirectory: LDAP, NTLM, Kerberos</li><br /><br /><li> Simplified replication: there's already a pre-configured XML doc for repository replication</li><br /><br /><li> Space Rules: Alfresco has a great rule-engine for manipulating content based on a set of Space rules. For example, specific meta-data (via custom aspects) can be applied to certain content as defined in the rules.
Space rules have an inheritance model<br /></li><br /><li>Roles are configured per Space (and thus also subject to inheritance) enabling a very flexible detailed system of privileges. Roles can be applied to users or groups of users, per Space.<br /></li><br /><li>Content transformations: Alfresco integrates with OpenOffice to provide instant content transformations(text to PDF, PowerPoint to Flash) and can be extended to provide custom transformations.</li><br /><br /><li>Send content to Alfresco via email: The next release of Alfresco will include the ability to add content to Alfresco via email attachment. This could be a very efficient way for sales people to put quotes,proposals,contracts, etc straight into Alfresco without leaving their email client.<br /></li><br /><li>Space Templates: we can setup a space and template it to create future spaces based on that template, thereby ensuring default layouts and content are appropriately propagated.</li><br /><br /><li>Alfresco deployable run-time enables us to deploy the repository to our environments w/o the overhead and deployment of the web client (a clear separation of concerns strategy that also avoids potential content tampering).<br /></li><br /><li>Stability and product maturation: Alfresco is clearly a player in the marketplace with 400+ enterprise clients and 20k deployed instances.</li><br /><br /><li>Speed: <a href="http://www.theserverside.com/news/thread.tss?thread_id=43282">Alfresco and RedHat created a JSR-170 benchmark</a> with Optaros validating its results in a 10 million doc test exercising repository corruption avoidance and high-concurrency usage, 0.4s response time. 
<a href="http://www.alfresco.com/media/releases/2008/01/unisys-benchmark/">Updated results</a>.<br /></li><br /></ul><br /><br />I clearly believe that the Alfresco solution, coupled with our Liferay content-rendering portlet, is the best approach we could pursue in managing long-term corporate content. It enables all of our departments and users to create and manage content, whether print or web-related, in a variety of very intuitive and thoughtful interfaces. Furthermore, it satisfies multiple IT goals in terms of application integration, data replication, content authorization and workflow/process definition. To that end, and knowing more valued functional enhancements will soon be released, I strongly recommend it.<br /><br />Been a pleasure to muck with it, looking forward to future implementation (which I hope is approved).Unknownnoreply@blogger.com5tag:blogger.com,1999:blog-7643289.post-91268197891950536642008-03-20T08:12:00.002-06:002008-03-20T08:16:11.051-06:00Great look at virtualization<a href="http://www.anandtech.com/IT/showdoc.aspx?i=3263&p=1">Virtualization: Nuts and Bolts </a><br /><br />What I appreciated most about this article was the lack of fluff found in most of the VMWare or XEN docs comparing X or Y and why they're better than the other guy. Johan does a great job of providing a bit of history and background in virtualization (specifically binary translation then paravirtualization) and then explores the Intel VT-x and AMD SVM roles at the hardware level. He discusses memory and I/O challenges that can still be hindrances.
It's a long article so if you're not interested in all the gory details, at least check out page 12 for a good look at benchmarking (and what's NOT being benchmarked) and page 13 for a well-summarized conclusion.<br /><br />Coming away from reading this leaves me anxious for the next article and future enhancements at the hardware level. I'd like to find more articles similar to this one for more information and academic research. Appears there are still great strides to be made to hone efficiency. Fun stuff!Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-7643289.post-61316158968335517452007-12-18T16:12:00.000-07:002007-12-18T16:21:27.420-07:00JNI library testing on OSX 10.5Refactoring a bunch of the JNI H-ITT code that I was working on last month caused me grief when I was nailed with the presence of UnsatisfiedLinkError during my maven build. My Mac prototype code was very simplistic and I had all the libraries (and Java test driver) in the same directory and it worked flawlessly. Sheesh, LD_LIBRARY_PATH in OSX 10.5 is not the right env variable, should be DYLD_LIBRARY_PATH. Once that was added to my pom I was good to go, tests passed as expected.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-7643289.post-78966512500456671882007-12-13T21:52:00.000-07:002007-12-13T21:59:40.879-07:00Madison Summit Oxfords, second pair<a href="http://rcrblog.blogspot.com/2004/12/walmart-earth-shoegarbage-footwear.html">Three years ago I wrote about a horrible experience with junk shoes from Wal-Mart </a>. Tonight I bought my second pair of Timberland Madison Summit Oxfords. I love these shoes and forwarded my thoughts to Timberland:<br /><br />To Whom it May Concern,<br /><br />I just wanted to write a quick note and thank you for such fantastic shoes. 
Two years ago I was looking for a new pair of shoes that I could use for work (business-casual), for school (walking around college campus), for home (playing sports with my kids, shoveling the walk, and wearing around the house) and for anything else. I was talking with a fellow in my church congregation and he suggested Timberland. I headed over to a local sporting goods store and the sales guy immediately showed me your Madison Summit Oxford. <br /><br />I purchased that pair of shoes 23.5 months ago and they have served me very, very well. I have worn them exclusively, for every activity. The best part about this shoe was how well it "restored" when I regularly applied my leather-weather cream. <br /><br />Tonight I purchased my new pair, the exact same size and model, for $30 less than two years ago. I commend you on a fine product and its outstanding endurance. I appreciate your company's values and commitment to the environment. I hope this next pair will last me another two years and that I can continue on with my Madison Summit Oxford addiction in the foreseeable future (2010, 2012, 2014...).Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-7643289.post-78920635162610181182007-12-12T15:00:00.001-07:002007-12-12T15:10:23.173-07:00TextMate: Subversion Annotate commandI really like vc-annotate in emacs. Considering the other very easy command I created earlier, I thought I'd give this one a shot too (since it's not in the default Subversion bundle). Again, extremely simple and minutes to complete.
Here's the command (edited from the Info command):<br /><pre><br />require_cmd "${TM_SVN:=svn}" <br />: ${TM_RUBY:=ruby}<br />FORMAT_INFO="${TM_BUNDLE_SUPPORT}/format_annotate.rb"<br /><br />"$TM_SVN" annotate "$TM_FILEPATH" |"$TM_RUBY" -- "$FORMAT_INFO" <br /></pre><br /><br />And here's the Ruby formatter:<br /><pre><br />require ENV['TM_BUNDLE_SUPPORT']+'/svn_helper.rb'<br />include SVNHelper<br /><br />puts html_head(:window_title => "Info", :page_title => "SVN Annotation", :sub_title => 'Subversion')<br />puts '<div class="subversion">'<br />STDOUT.flush<br />@colors = ['BlanchedAlmond',<br />'BlueViolet',<br />'Brown',<br />'BurlyWood',<br />'CadetBlue',<br />'Chartreuse',<br />'Chocolate',<br />'Coral',<br />'CornflowerBlue',<br />'Crimson',<br />'Cyan',<br />'DarkBlue',<br />'DarkCyan',<br />'DarkGoldenRod',<br />'DarkGray',<br />'DarkGreen',<br />'DarkKhaki',<br />'DarkMagenta',<br />'DarkOliveGreen',<br />'Darkorange',<br />'DarkOrchid',<br />'DarkRed',<br />'DarkSalmon',<br />'DarkSeaGreen'] #see http://www.w3schools.com/html/html_colornames.asp for more pretty colors <br /><br />@color_hash = Hash.new<br />@color_ind_size = @colors.size() -1<br /><br />def color_for_rev(rev)<br /> color = @color_hash[rev]<br /> <br /> unless (color)<br /> color_index = rev % @color_ind_size<br /> color = @colors[color_index]<br /> <br /> @color_hash[rev] = color<br /> end<br /> <br /> color<br />end<br /><br />$stdin.each_line do |line|<br /> rev = line.strip.split(" ").first.to_i<br /> <br /> color = color_for_rev(rev)<br /> # close the div we open, not a span<br /> colored = "<div style='color:#{color}'>#{htmlize(line.strip)}</div>"<br /> <br /> puts(colored)<br />end<br /><br />puts("</div>")<br />html_footer()<br /></pre>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-7643289.post-24305559854180751642007-12-12T12:03:00.000-07:002007-12-12T12:14:49.690-07:00TextMate command db query promptI decided to give TextMate another whirl last night.
I thought since I gave NetBeans 6 some time recently I'd see how efficient a couple of days with TextMate would be (compared with my life-blood emacs). I was pleasantly surprised at its extendability, speed and myriad of bundle choices. While perusing the SQL bundle I noticed I couldn't see a way of directly typing in a query. I really like sql-mysql in emacs, so I was hoping I could do something similar--the workaround being typing a query into a buffer, selecting it, then invoking the command to send it. After poking around in the TextMate manual I was shocked at how easy it appeared to be to add custom commands. In less than a minute I had this (edited straight from the manual example of showing a dialog for input):<br /><pre><br />res=$(CocoaDialog inputbox --title "Send query" \<br /> --informative-text "Enter query text:" \<br /> --button1 "Submit" --button2 "Cancel")<br /><br />[[ $(head -n1 <<<"$res") == "2" ]] && exit_discard<br />res=$(tail -n1 <<<"$res")<br />db_browser.rb --query="$(tr '\n' ' ' <<< "$res")"<br /></pre><br /><br />If I can just get over my remaining habits (screen splitting, hippie-expand, and more), I may end up paying for this editor. Too bad it's not OSS :(Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-7643289.post-88225519776460485622007-11-21T12:49:00.000-07:002007-11-21T12:57:32.781-07:00Rxtx to MINA H-ITT Flash integration out of the parkThat's a lame title, but it advertises all of the relevant (okay, short of Java's role, which is inherent since <a href="http://mina.apache.org/">Apache MINA</a> is written in Java) technologies in my latest project at work. We'll be using a Flash-based application for our next training platform. One of the goals of the platform is to integrate an audience response system with the training.
We specifically chose <a href="http://www.h-itt.com">H-ITT</a> as <a href="http://www.vitalsmarts.com/default.aspx?zid=7&pg=22">our</a> provider because they offer a simple <a href="http://www.h-itt.com/developers.html">SDK</a> (cross-platform translation library), appeared to have the most developed system, and provide solid support. <br /><br />Today I finished a proof of concept and made it available to them; here are the relevant sections of my email (heading home soon, this saves me time from repeating myself):<br />--begin email chunk--<br />I wanted to let you know that I have successfully completed a workable Windows H-ITT-Java-Flash prototype. I have some of the code operational in OSX but haven't pursued it in light of our goal of a single Windows deliverable (with embedded JRE) that _doesn't_ require an installation routine. I've used Java as the communication layer, natively interfacing through your SDK, and created a rudimentary TCP protocol for communication with the Flash application. I've heavily relied on the open-source Apache MINA project in conjunction with the Rxtx libraries (which will be included in a later stable release of MINA), Simple-Log and Launch4j. <br /><br />The prototype simply exhibits the ability to auto-detect the transceiver's serial port (baud can be manually configured via a properties file, defaults at 19200), set up the appropriate serial connection, and then forward the responses to the Flash app. Java logging is written to disk unless executed from a read-only medium. The Flash prototype provides an "Acquire" button that kicks off the auto-detect, a log console (scrolling is there, but not so visible, I'm not a Flash guru), and also virtual buttons related to the remote. Thus, when configuration is successfully complete, pressing the buttons on the remote invokes the related event in the Flash app to show which button was pressed.
The whole setup serves as a basic proof-of-concept for what we're pursuing with our next-gen training platform.<br />--end email chunk--<br /><br />Thanksgiving is tomorrow, I'm feeling grateful: So I've already plugged H-ITT in the beginning of the post, but I really gotta say they've been great to work with and very prompt in answering questions. Next plug goes to the MINA team. MINA intuitively and simply (relatively, I thought the docs and examples were sufficient) provided a great framework for custom socket communication with Flash. It's elegantly architected (IMHO) and easily integrated with what I had envisioned for the Java communication layer. Props to <a href="http://swamp.homelinux.net/blog/">Mike Heath</a> for recommending MINA and assisting my approach. <a href="http://simple-log.dev.java.net/">Simple-log</a> and <a href="http://launch4j.sf.net">Launch4j</a> simplified logging and customizing a single Windows .exe file. Lastly, <a href="http://rxtx.org">Rxtx</a> provided the crux of the serial-communication and I applaud the development team in their cross-platform approach and deliverables. Finally, my thanks to Ross Asay for the JNI help, since I was intending a cross-platform delivery to correspond with H-ITT's libraries this proved to be a good challenge and learning experience.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-7643289.post-36216040193559001102007-09-12T18:18:00.000-06:002007-09-12T18:51:45.359-06:00Failed to get IPC connectionThis convenient message showed up tonight on a Windows 2003 Server VM under VMWare server. Restarting the service, no go. Restarting the VM, no go (same message). Remove devices and change the VM settings, no go. Restarting host (Suse), no go. <br /><br />A bit of looking around didn't help much, but someone mentioned a permissions issue. 
Here's the log entry:<br /><pre>Sep 12 18:13:56: vmx| CnxAcceptConnection: Could not receive fd on 187: invalid control message<br />Sep 12 18:13:56: vmx| Failed to get IPC connection</pre><br /><br />Going out to the directory of the VM, and executing a "chown root:root -R ." did the job. Restarting the VM after that brought it up nice and happy. So the question remains as to what caused this to occur.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-7643289.post-63420289241566540042007-08-15T12:18:00.000-06:002007-08-15T12:25:16.956-06:00Why I love Apache's Ldap StudioI've gotta modify some attributes on people in our ActiveDirectory. The fun Microsoft way to do this is to download ADAM-adsi management console plugin, and then go from there. It's pretty basic and is usual MS ugliness. On the other hand, <a href="http://directory.apache.org/studio/">Apache Ldap Studio</a> provides a much better user experience. Being built on top of Eclipse enables a slick interface and a very intuitive way of dealing with your directory (browsing, searching, editing). Kudos to the guys that wrote it, I can now enjoy mucking with AD from my Mac with ease and finesse.Unknownnoreply@blogger.com1tag:blogger.com,1999:blog-7643289.post-19272867109176774202007-08-09T10:20:00.001-06:002007-08-09T10:29:27.378-06:00Remote files in EmacsToday I decided I was sick of always SSHing everywhere only to open up a couple of files for editing and then saving them. It doesn't make sense to copy my .emacs everywhere either. I discovered <a href="http://jeremy.zawodny.com/blog/archives/000983.html">Tramp</a>, and it just makes me all the more happy with my Emacs zealotry. 
Even better, my <a href="http://porkrind.org/emacs/emacs-builds/Emacs-22.1-i386-10.4.9.dmg">current snapshot</a> already includes it.<br /><br />Works great with dired too:<br />/ssh:hoser@somebox:/whateverdirectoryUnknownnoreply@blogger.com0tag:blogger.com,1999:blog-7643289.post-69097156760439321642007-08-01T11:06:00.000-06:002007-08-01T11:21:13.884-06:00installing the latest nginx on OS XI've read a lot about <a href="http://nginx.net/">nginx</a> lately and wanted to test its performance for our up-and-coming Ruby on Rails CRM application. It looked very easy to setup and easily configurable (especially with <a href="http://projects.require.errtheblog.com/browser/nginx_config_generator">this</a>). My first attempt was installing it through MacPorts, but that didn't fly, as I received the unpleasant "dyld: Library not loaded: /usr/local/lib/libpcre.0.dylib". Turns out I had the same problem when trying to execute after building straight from source (which happened to be a newer version).<br /><br />So with MacPorts I installed pcre 7.2_0, deactivated 7.0_0, created the symlink as "sudo ln -s /opt/local/lib/libpcre.0.dylib /usr/local/lib/" and was then able to start up nginx from the source build. Unfortunately the generated configuration file I am using expected a user and group for "nginx". A quick work around for that was to just reference myself instead. Here's what I ended up with (awaiting tweaks and optimizations):<br /><br /><pre><br />#user and group to run as<br />user russ russ;<br /><br /># number of nginx workers<br />worker_processes 2;<br /><br /># pid of nginx master process<br />pid logs/nginx.pid;<br /><br /># Number of worker connections. 1024 is a good default<br />events {<br /> worker_connections 1024;<br />}<br /><br /># start the http module where we config http access.<br />http {<br /> # pull in mime-types. 
You can break out your config<br /> # into as many includes as you want to make it cleaner<br /> include conf/mime.types;<br /><br /> # set a default type for the rare situation that<br /> # nothing matches from the mime-type include<br /> default_type application/octet-stream;<br /><br /> # configure log format<br /> log_format main '$remote_addr - $remote_user [$time_local] $status '<br /> '"$request" $body_bytes_sent "$http_referer" '<br /> '"$http_user_agent" "$http_x_forwarded_for"';<br /><br /> # main access log<br /> access_log logs/access.log main;<br /><br /> # main error log<br /> error_log logs/error.log debug;<br /> #error_log logs/error.log debug_http;<br /><br /> # no sendfile on OSX<br /> sendfile off;<br /><br /> # These are good default values.<br /> tcp_nopush on;<br /> tcp_nodelay off;<br /> # output compression saves bandwidth<br /> gzip on;<br /> gzip_http_version 1.0;<br /> gzip_comp_level 2;<br /> gzip_proxied any;<br /> gzip_types text/plain text/html text/css application/x-javascript text/xml application/xml<br /> application/xml+rss text/javascript;<br /><br /> # this is where you define your mongrel clusters.<br /> # you need one of these blocks for each cluster<br /> # and each one needs its own name to refer to it later.<br /> upstream vscrm {<br /> server 127.0.0.1:8000;<br /> server 127.0.0.1:8001;<br /> server 127.0.0.1:8002;<br /> }<br /><br /> # the server directive is nginx's virtual host directive.<br /> server {<br /> # port to listen on. Can also be set to an IP:PORT<br /> listen 80;<br /><br /> # sets the domain[s] that this vhost server requests for<br /> server_name vscrm;<br /><br /> # doc root<br /> root /home/russ/forge/svn/vscrm/trunk/public;<br /><br /> # vhost specific access log<br /> access_log logs/vscrm.access.log main;<br /><br /> #Set the max size for file uploads to 50Mb<br /> client_max_body_size 50M;<br /><br /> # this rewrites all the requests to the maintenance.html<br /> # page if it exists in the doc root.
This is for Capistrano's<br /> # disable web task<br /> if (-f $document_root/maintenance.html){<br /> rewrite ^(.*)$ /maintenance.html last;<br /> break;<br /> }<br /><br /> if ($host ~* "www") {<br /> rewrite ^(.*)$ http://vscrm$1 redirect;<br /> break;<br /> }<br /><br /> location / {<br /><br /><br /> # needed to forward user's IP address to rails<br /> proxy_set_header X-Real-IP $remote_addr;<br /><br /> # needed for HTTPS<br /> proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;<br /> proxy_set_header Host $http_host;<br /> proxy_redirect off;<br /> proxy_max_temp_file_size 0;<br /><br /> # check for index.html for directory index<br /> # if it's there on the filesystem then rewrite<br /> # the url to add /index.html to the end of it<br /> # and then break to send it to the next config rules.<br /> if (-f $request_filename/index.html) {<br /> rewrite (.*) $1/index.html break;<br /> }<br /><br /> # this is the meat of the rails page caching config<br /> # it adds .html to the end of the url and then checks<br /> # the filesystem for that file. If it exists, then we<br /> # rewrite the url to have explicit .html on the end<br /> # and then send it on its way to the next config rule.<br /> # if there is no file on the fs then it sets all the<br /> # necessary headers and proxies to our upstream mongrels<br /> if (-f $request_filename.html) {<br /> rewrite (.*) $1.html break;<br /> }<br /><br /> if (!-f $request_filename) {<br /> proxy_pass http://vscrm;<br /> break;<br /> }<br /> }<br /><br /> error_page 500 502 503 504 /50x.html;<br /> location = /50x.html {<br /> root html;<br /> }<br /> }<br />}<br /><br /></pre>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-7643289.post-6398937593678686972007-07-31T13:18:00.000-06:002007-07-31T13:37:47.824-06:00sanitizing Rails input parametersI really like how Tapestry automagically escapes HTML input when posted from a form. In fact, it was just great never having to worry about that when coding. 
I'd like to have the same functionality in Rails, especially after reading about Rails XSS vulnerabilities and work-arounds. Since the webapp I'm writing has no requirements for allowing formatted user input, I just need something simple to clean/sanitize all the params. Here's the latest:<br /><pre><br /> # escapes all HTML from the given hash's values (recursively applied as needed)<br /> def sanitize(hash)<br /> hash.each do |key, value|<br /> if value.kind_of?(Hash)<br /> sanitize(value)<br /> elsif value.kind_of?(String)<br /> hash[key] = CGI.escapeHTML(value)<br /> end<br /> end<br /> end<br /></pre><br />This is then invoked by a before_filter. Seems to do the job; is there a better/cleaner/faster way of doing this? Let me know how it could be improved...Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-7643289.post-52650822247314276472007-07-11T09:53:00.000-06:002007-07-11T10:09:14.927-06:00Ruby: constant time for include?Sure would be nice if the Ruby docs (including <a href="http://www.rubycentral.com/pickaxe/">this book</a>) would provide more details about the implementation of the Set class. According to <a href="http://blade.nagaokaut.ac.jp/cgi-bin/scat.rb/ruby/ruby-talk/184879">this</a>, Set implements its backing collection with a Hash, which would essentially mean that it's synonymous (to some degree) with Java's HashSet, thus providing a constant-time lookup when Set#include? is invoked. 
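That constant-time claim is easy enough to sanity-check yourself. Here's a minimal benchmark sketch (my own addition, not from the original experiment; Array#include? is included for contrast, and timings will vary by machine):

```ruby
require 'set'
require 'benchmark'

# a million Fixnums, mirroring the experiment described below
array = (1..1_000_000).to_a
set = array.to_set

Benchmark.bm(16) do |x|
  # linear scan: a miss has to walk the entire array
  x.report("Array#include?") { 100.times { array.include?(-1) } }
  # hash lookup: roughly constant time regardless of collection size
  x.report("Set#include?") { 100.times { set.include?(-1) } }
end
```

The Set lookups should come back orders of magnitude faster on the misses.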
Just for grins I benchmarked this in irb with a million Fixnums and was pleased with the 15 microsecond lookups. I looked at the source of both and found a fair amount of similarity.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-7643289.post-42688993850321001492007-06-22T22:01:00.001-06:002007-06-22T22:38:40.774-06:00Sorting serialized objects from a YAML file, Ruby vs JavaI recently had a large dataset that I needed to sort: basically a bunch of objects with nine string attributes. I dumped them to a YAML file so I could benchmark various aspects of sorting (Class#to_yaml really rocks, really). As it turns out, Enumerable#sort_by was the more efficient way to go than Enumerable#sort (<a href="http://www.ruby-doc.org/core/classes/Enumerable.html#M003151">check it</a>). The dataset contained 5429 unique objects; here's the benchmarking I did in irb:<br /><pre><br />>> bm do |x|<br />?> x.report("all"){results.sort{|a,b|a.account_name <=> b.account_name}}<br />>> end<br /> user system total real<br />all 0.070000 0.000000 0.070000 ( 0.073835)<br />>> bm do |x|<br />?> x.report("all"){results.sort_by{|a|a.account_name}}<br />>> end<br /> user system total real<br />all 0.020000 0.000000 0.020000 ( 0.020745)<br /></pre><br /><br />Noticeable difference between the two methods, and quite pleasing to see how fast sort_by performed (Apple MBP, 2.33GHz, 2GB RAM, Ruby 1.8.5). Then the thought occurred to me: since I have this in a YAML file, I could write something really quick in Java and see how fast the sort would be by dumping it into a TreeSet.<br /><br />So I set off to find out how I could marshal the data into POJOs from the YAML file. 
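(For reference, the Ruby side of the comparison boils down to something like the sketch below. The real dataset isn't shown in the post, so this stands in a hypothetical Account struct with randomly generated names; the sort/sort_by timing calls are the same shape as the irb session above.)

```ruby
require 'benchmark'

# Hypothetical stand-in for the real records (which had nine string attributes)
Account = Struct.new(:account_name)
results = Array.new(5429) { Account.new(rand(100_000).to_s) }

Benchmark.bm(10) do |x|
  # sort: invokes the comparison block O(n log n) times
  x.report("sort") { results.sort { |a, b| a.account_name <=> b.account_name } }
  # sort_by: computes each sort key only once per element
  x.report("sort_by") { results.sort_by { |a| a.account_name } }
end
```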
<a href="http://jyaml.sourceforge.net/">JYaml</a> and <a href="http://jvyaml.dev.java.net/">JvYaml</a> are the only (insofar as I looked) open source Java YAML libraries. Both seem half-baked in their own right, likely containing just the functionality the respective author needed and not much more (at least that's how it appeared). I ended up using JvYaml and had to search-replace the yaml entry identifier ("tag:yaml.org,2002:map") so that JvYaml would create HashMap instances for me instead of its completely worthless PrivateType class.<br /><br />From there I iterated through the maps to create the POJOs and shoved them into a HashSet. Once that was completed, I created the comparator for my POJO, passed it into the TreeSet constructor and then timed an addAll giving it the entire HashSet:<br />#1: 55ms<br />#2: 28ms<br />#3: 27ms (pretty much constant thereafter)<br /><br />Interesting results, eh? The dataset I used was from a randomized generator and now that I have these initial numbers (Ruby appears to have won this round), I want to test a larger set. And to be really fair I should do more research on benchmarking best practices (in addition to still needing to take my stats class).Unknownnoreply@blogger.com2tag:blogger.com,1999:blog-7643289.post-2626924747426008142007-04-17T16:18:00.000-06:002007-04-17T16:20:37.570-06:00X11 tunneling via SSH in OSX-X isn't enough for the ssh args in OSX according to <a href="http://lists.apple.com/archives/x11-users/2006/Jun/msg00032.html">this post</a>. 
So, "ssh -XY hoser@remotebox" worked just great when remoting into a SuSE server and executing fvwm.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-7643289.post-19959389772211846392007-04-12T16:34:00.000-06:002007-04-12T16:57:32.554-06:001024 vs 2048 RSA encryption/decryption fun in RubyUsing <a href="http://blog.leetsoft.com/2006/03/14/simple-encryption">Tobias's handy openssl wrapper</a> I decided to run a couple of timing tests to see how well Ruby's openssl implementation performed in encrypting/decrypting a set of 1000 identical messages, with 1024 bit and 2048 bit keys. Nothing scientific about this, just three runs of the script on my MacBook Pro (2.33GHz, 1GB RAM). <br /><br />Encrypted text: "This is a much longer message since than what I intend to encrypt"<br /><br />Encryption results, avg time<br />1024: 0.000300691s<br />2048: 0.001010027s<br /><br />Decryption results, avg time<br />1024: 0.005472064s<br />2048: 0.035524023s<br /><br />Pick your poison.Unknownnoreply@blogger.com2
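For anyone wanting to repeat the RSA experiment without the wrapper, a rough equivalent using only Ruby's bundled OpenSSL bindings might look like this (a sketch under my own assumptions, not the exact script used above; key generation is excluded from the timing, and numbers will vary by machine):

```ruby
require 'openssl'
require 'benchmark'

message = "This is a much longer message since than what I intend to encrypt"

[1024, 2048].each do |bits|
  key = OpenSSL::PKey::RSA.new(bits)  # generate a fresh keypair of this size
  ciphertext = nil

  # encrypt/decrypt the same message 1000 times, then average per operation
  enc = Benchmark.realtime { 1000.times { ciphertext = key.public_encrypt(message) } }
  dec = Benchmark.realtime { 1000.times { key.private_decrypt(ciphertext) } }

  puts format("%4d-bit  encrypt avg: %.9fs  decrypt avg: %.9fs",
              bits, enc / 1000, dec / 1000)
end
```

Decryption dominates because the private-key exponentiation is the expensive half, which matches the spread in the numbers above.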