Thursday, November 20, 2008

RESTlet with Portlet in Liferay

Liferay allows authentication plugins in order to flexibly accommodate implementations that need more than what comes out-of-the-box. Incidentally, a portlet we have been working on has some rules governing its render state. There are situations where it shouldn't be rendered due to a blacklisting strategy in our requirements. Unfortunately, Liferay's permissions are strictly based on whitelists.

So how do you hide the portlet from one group of people while rendering it for everyone else, using only a whitelist? The logic for who should see it is purely internal to the portlet, so it made sense (both for simplicity and for separation of concerns) to expose a RESTlet for our authentication module to consume. The authentication module in turn puts people in the proper groups for community and portlet access. Setup was incredibly simple, and it didn't take much more effort to enable JPA transaction support with this class:

import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;

import org.apache.log4j.Logger;
import org.restlet.Restlet;
import org.restlet.data.Request;
import org.restlet.data.Response;
import org.springframework.orm.jpa.EntityManagerHolder;
import org.springframework.transaction.support.TransactionSynchronizationManager;

/**
 * Enables JPA transactional support for subclassed Restlets.
 */
public abstract class AbstractJpaRestlet extends Restlet {

    private static final Logger logger = Logger.getLogger(AbstractJpaRestlet.class);

    private EntityManagerFactory emf;

    /**
     * Handler for Restlet requests and responses. Implementing this method
     * will ensure db connectivity and transaction support with the DAOs.
     * @param req incoming Request
     * @param resp outgoing Response
     */
    public abstract void doHandle(Request req, Response resp);

    public void handle(Request request, Response response) {
        // Bind an EntityManager to the thread so Spring-managed DAOs
        // participate in the same persistence context.
        EntityManager em = emf.createEntityManager();
        TransactionSynchronizationManager.bindResource(emf, new EntityManagerHolder(em));
        try {
            doHandle(request, response);
        } catch (Throwable t) {
            logger.error(this, t);
            throw new RestletException(t.getMessage());
        } finally {
            // Always unbind and close, or the EntityManager leaks.
            TransactionSynchronizationManager.unbindResource(emf);
            em.close();
        }
    }

    // Injected via Spring.
    public void setEntityManagerFactory(EntityManagerFactory emf) {
        this.emf = emf;
    }
}
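
Wiring the EntityManagerFactory into a concrete restlet is then plain Spring configuration. A minimal sketch, assuming a hypothetical GroupMembershipRestlet subclass and invented bean ids (not our actual config):

```xml
<!-- Hypothetical wiring: bean ids and the subclass name are illustrative only. -->
<bean id="entityManagerFactory"
      class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
  <!-- persistence unit configuration elided -->
</bean>

<bean id="groupMembershipRestlet" class="com.example.GroupMembershipRestlet">
  <property name="entityManagerFactory" ref="entityManagerFactory"/>
</bean>
```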

This is my first RESTlet and I'd be interested in any feedback or pointers on this approach. I'm quite happy with how fast it was to code up. There's only one implementation of this class at this point, but the pattern is very simple and allows for quick future expansion as we need to expose more data.
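
For the record, the inversion the auth module performs is simple set arithmetic: our portlet knows a blacklist, but Liferay permissions only understand whitelists, so the viewer group is computed as the complement. A self-contained sketch (class and method names are hypothetical, not from our actual module):

```java
import java.util.HashSet;
import java.util.Set;

public class PortletAccess {

    /**
     * Liferay permissions are whitelist-based, so the auth module converts
     * our internal blacklist into the set of users that should be placed
     * in the portlet's viewer group.
     */
    public static Set<String> viewerGroup(Set<String> allUsers, Set<String> blacklist) {
        Set<String> whitelist = new HashSet<String>(allUsers);
        whitelist.removeAll(blacklist);
        return whitelist;
    }

    public static void main(String[] args) {
        Set<String> all = new HashSet<String>();
        all.add("alice");
        all.add("bob");
        all.add("carol");
        Set<String> blocked = new HashSet<String>();
        blocked.add("bob");
        // bob is blacklisted, so only alice and carol land in the viewer group
        System.out.println(viewerGroup(all, blocked));
    }
}
```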

Wednesday, November 19, 2008

Revisited: Spring IntervalJobs and scheduling in Liferay 5

Update on a better way to go about Spring scheduling: I tried simply defining a destroy method on the scheduler bean instead of relying on the extended ContextLoaderListener. It never appeared to be invoked and consequently didn't unschedule the jobs. That's quite problematic in the long run: having duplicate jobs running at the same time is sloppy, and bad things could happen. So simply having the destroy method was insufficient.

Instead of extending Liferay's JobScheduler (as I suggested in the previous post), it made more sense to create a singleton POJO that invokes com.liferay.portal.kernel.job.JobSchedulerUtil.getJobScheduler().schedule(). That way the class relies entirely on Liferay's Quartz integration (I don't expect any of this to ever live outside of Liferay, but if it does, that dependency should be easy to swap out with minor adjustments). Everything said and done (and tested), this is the scheduler:

import java.util.HashSet;
import java.util.Set;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

import com.liferay.portal.kernel.job.IntervalJob;
import com.liferay.portal.kernel.job.JobSchedulerUtil;

public class JobScheduler {
    private static Log _log = LogFactory.getLog(JobScheduler.class);
    private Set<IntervalJob> jobs = new HashSet<IntervalJob>();

    /**
     * Set all of the scheduled jobs.
     * @param jobs Set of jobs to schedule.
     */
    public void setJobs(Set<IntervalJob> jobs) {
        this.jobs = jobs;
    }

    public void init() {
        com.liferay.portal.kernel.job.JobScheduler scheduler = JobSchedulerUtil.getJobScheduler();
        for (IntervalJob job : jobs) {
            try {
                _log.info("Initializing " + job);
                scheduler.schedule(job);
            } catch (Exception e) {
                _log.error("Initialization error scheduling " + job, e);
            }
        }
    }

    public void destroy() {
        com.liferay.portal.kernel.job.JobScheduler scheduler = JobSchedulerUtil.getJobScheduler();
        for (IntervalJob job : jobs) {
            try {
                _log.info("Unscheduling " + job);
                scheduler.unschedule(job);
            } catch (Exception e) {
                _log.error("Unscheduling error with " + job, e);
            }
        }
    }
}
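
The scheduler then only needs a Spring bean definition: init() runs via init-method, while destroy() is invoked from the extended ContextLoaderListener described in the previous post (since destroy-method was never reliably called). A sketch with hypothetical class names under com.example:

```xml
<bean id="jobScheduler" class="com.example.JobScheduler" init-method="init">
  <property name="jobs">
    <set>
      <!-- Hypothetical IntervalJob implementation -->
      <bean class="com.example.ContentSyncJob"/>
    </set>
  </property>
</bean>
```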

Spring IntervalJobs and scheduling in Liferay 5

If you have a Liferay portlet that requires some scheduling you can easily use Liferay's built-in Scheduler to add an IntervalJob to the job list, like this. However, what if your IntervalJob is a Spring bean and has dependencies on other Spring beans in the portlet? Unfortunately, at the time of this writing (Liferay 5.1.2), the hot deploy code invokes the Scheduler configuration and execution before the context is initialized--which means you're up a creek when Spring is set up to be initialized with the context (as happens to be my case).

An alternative approach is to extend com.liferay.portal.job.JobSchedulerImpl with a Spring singleton and configure the jobs via Spring. While this is very flexible, the singleton is now operating outside the Liferay Quartz realm and therefore isn't subject to the lifecycle of the portlet. That is to say, when you redeploy the portlet the jobs stay scheduled. A more annoying aspect is that if you try to shut down Liferay it appears to hang. Sure, the log says that Coyote is stopped, but that's not the case: the process appears to be waiting on a thread, which in turn requires manually killing it every time. During development this is such a pain. My guess, without significant research into the bowels of the Liferay Quartz integration, is that the Spring singleton hasn't been properly disposed of.

One solution to this situation is to extend org.springframework.web.context.ContextLoaderListener with something like this:

import javax.servlet.ServletContextEvent;

import org.apache.log4j.Logger;

public class SpringSchedulerContextLoaderListener extends org.springframework.web.context.ContextLoaderListener {

    private static final Logger logger = Logger.getLogger(SpringSchedulerContextLoaderListener.class);

    public void contextInitialized(ServletContextEvent event) {
        super.contextInitialized(event);
    }

    public void contextDestroyed(ServletContextEvent event) {
        // Unschedule the Spring-configured jobs before the context goes away.
        JobScheduler j = (JobScheduler) StaticApplicationContextHolder.getApplicationContext().getBean("jobScheduler");
        j.destroy();

        super.contextDestroyed(event);
    }
}

This will ensure that the Spring singleton JobScheduler will unschedule the registered IntervalJobs when the context is destroyed. You're good to go once this entry replaces org.springframework.web.context.ContextLoaderListener in web.xml.
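
The web.xml change is just swapping the listener class (the package here is a placeholder for wherever your listener actually lives):

```xml
<listener>
  <!-- Replaces org.springframework.web.context.ContextLoaderListener -->
  <listener-class>com.example.SpringSchedulerContextLoaderListener</listener-class>
</listener>
```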

There may be a more efficient way to do this, but for now this works.

Monday, March 31, 2008

Alfresco content management with Liferay

At work we've been pondering a better long-term solution to our content management. A year ago we gave some thought to Liferay and in fact deployed it for one of our sites. Recently we've reconsidered our approach due to some new additional requirements, accompanied by a more thoughtful perspective on leveraging internal content management in future web projects. We spent last week looking at JSR-170 alternatives/companions to Liferay, which, with all due respect, only provides a JSR-170 implementation in its Document Library. We wanted something all-encompassing, that is, a repository where we could manage web, print, and other electronic content. Alfresco seemed to be the best option for us. After we had all spent a week with it and held various meetings trying to define stakeholder roles and functionality requirements, this was my conclusive perspective, with enumerated priorities:

  1. Versioning
    This is easily satisfied with Alfresco, and I was specifically impressed with the various ways to create and update content given the built-in CIFS server, FTP server, Office plug-ins, and web interface. This wide array of interfaces should enable our users to begin versioning content with a limited learning curve (especially in terms of the shared-drive notion). The WebProject versioning feature is very worthwhile in that it provides the ability to view/rollback content at any given time for each release, which is very helpful for auditing and liability. Lastly, the implementation of sandboxing is especially beneficial for concurrent development, as each user can submit their work to workflow after sufficient authoring and testing.

  2. Document Management
    Alfresco was written primarily to manage documents, and given the aforementioned information on versioning, I think it's very capable for our needs.

  3. Integration
    I'm very pleased and excited about the ease of creating REST endpoints using Alfresco's WebScript framework. We won't have to write any extra functionality (read: additional JARs) to work with existing APIs; instead we can rely on custom WebScripts to expose what we want, how we want. This is particularly useful for rapid development at any of our potential integration points, and is specifically a boon both for integration with our Rails CRM and for the custom Liferay content portlet Jeff Wilson is writing.

  4. Workflow
    I think creating workflows specific to our needs will require the most work. Granted, the WCM component ships with a very basic approval workflow, but we'll still need to create custom workflows once we decide how to hone our processes (and choose our deployment strategies). Depending on our needs, we may only need to define the rules in XML and forgo additional code (I believe our definitions will need to precede the investigation of additional functionality).

  5. User Experience
    Again, referencing the Versioning info above, I think this is covered. It'd be very helpful for users to see the state/phase of workflow a given item is in, but that appears to be a current enhancement request (per Jared).
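
As a sketch of why the WebScript framework (point 3 above) makes integration so painless: a REST endpoint is declared with a small descriptor plus a controller/template pair. A minimal, hypothetical descriptor (the URL and names are invented for illustration, not from our actual integration):

```xml
<!-- e.g. quotes.get.desc.xml -- names are illustrative only -->
<webscript>
  <shortname>Quotes</shortname>
  <description>Expose quote documents to the Rails CRM</description>
  <url>/example/quotes?customer={customerId}</url>
  <format default="json">argument</format>
  <authentication>user</authentication>
</webscript>
```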

Additional Benefits
  • Search: all meta-data (including custom aspects) is indexed, so users will be able to find content much faster than perusing shared drives trying to remember the location of specific files.

  • Task dashboard: users are able to see what tasks they have awaiting their action (be it approval, updates, reviews, etc.)

  • SSO options are plentiful for integrating with our ActiveDirectory: LDAP, NTLM, Kerberos

  • Simplified replication: there's already a pre-configured XML doc for repository replication

  • Space Rules: Alfresco has a great rule engine for manipulating content based on a set of Space rules. For example, specific meta-data (via custom aspects) can be applied to certain content as defined in the rules. Space rules have an inheritance model.

  • Roles are configured per Space (and thus also subject to inheritance) enabling a very flexible detailed system of privileges. Roles can be applied to users or groups of users, per Space.

  • Content transformations: Alfresco integrates with OpenOffice to provide instant content transformations (text to PDF, PowerPoint to Flash) and can be extended to provide custom transformations.

  • Send content to Alfresco via email: The next release of Alfresco will include the ability to add content to Alfresco via email attachment. This could be a very efficient way for sales people to put quotes, proposals, contracts, etc. straight into Alfresco without leaving their email client.

  • Space Templates: we can setup a space and template it to create future spaces based on that template, thereby ensuring default layouts and content are appropriately propagated.

  • Alfresco deployable run-time enables us to deploy the repository to our environments w/o the overhead and deployment of the web client (a clear separation of concerns strategy that also avoids potential content tampering).

  • Stability and product maturation: Alfresco is clearly a player in the marketplace with 400+ enterprise clients and 20k deployed instances.

  • Speed: Alfresco and RedHat created a JSR-170 benchmark, with Optaros validating its results in a 10-million-document test exercising repository corruption avoidance and high-concurrency usage, with 0.4s response times. Updated results.


I clearly believe that the Alfresco solution, coupled with our Liferay content-rendering portlet, is the best approach we could pursue in managing long-term corporate content. It enables all of our departments and users to create and manage content, whether print or web-related, through a variety of very intuitive and thoughtful interfaces. Furthermore, it satisfies multiple IT goals in terms of application integration, data replication, content authorization, and workflow/process definition. To that end, and knowing that more valuable functional enhancements will soon be released, I strongly recommend it.

It's been a pleasure to muck with it, and I'm looking forward to a future implementation (which I hope is approved).

Thursday, March 20, 2008

Great look at virtualization

Virtualization: Nuts and Bolts

What I appreciated most about this article was the lack of fluff found in most of the VMware or Xen docs comparing X and Y and explaining why they're better than the other guy. Johan does a great job of providing a bit of history and background on virtualization (specifically binary translation, then paravirtualization) and then explores the roles of Intel VT-x and AMD SVM at the hardware level. He discusses memory and I/O challenges that can still be hindrances. It's a long article, so if you're not interested in all the gory details, at least check out page 12 for a good look at benchmarking (and what's NOT being benchmarked) and page 13 for a well-summarized conclusion.

I came away from reading this anxious for the next article and for future enhancements at the hardware level. I'd like to find more articles similar to this one for further information and academic research. It appears there are still great strides to be made in honing efficiency. Fun stuff!