Tag Archives: solex

A Short Goodbye and Thank-you (again)

As my time with OpenText comes to an end tomorrow, I leave the company with gratitude for the 7 years I’ve been with RedDot and then OpenText and I leave with a level of excitement for the future that I haven’t felt in a long time. So whilst it is sad to be leaving, all things indicate that I’m ready for the next challenge.

I’ve written a couple of goodbye emails to my colleagues today and within those I emphasised how much I enjoyed my time establishing and building the SolutionExchange community platform. This certainly feels – in hindsight – like one of my greater achievements, and it armed me with greater knowledge around agile/lean concepts, community management, and the wise utilisation of social channels.

Therefore, to conclude my goodbyes before I get buzzed about the future, I would like to (once again) thank those Customers, Partners, and colleagues who participated in the SolutionExchange community and made it what it was, but also, more selfishly, for giving me such an enjoyable and constructive phase of my career.

As I shall now be entering the scary world of freelancing, I’m sure I shall meet or engage on-line with many of you still and I look forward to it, which is why this is only a short goodbye.

Those of you who want to get in touch will be able to find me in the usual on-line hangouts – LinkedIn & Twitter being my preferred choice. Failing that, just Google me and you’ll find me. :-)

 

Dan

A Belated Thank-you to SolutionExchange Users


This post is actually a very overdue thank-you to the user community of SolutionExchange.

The Early Days

After first raising the idea of an online community platform back at the tail end of 2009, I was fortunate enough to be part of an organisation under Jens Rabe that actively supported the idea, and that kick-started a very enjoyable and passion-filled couple of years as the Community Leader for the SolutionExchange platform.

There are many aspects of the SolutionExchange where I am particularly proud, from the pure open nature of the platform to the way in which the platform was adopted by you the users, all the way through to some little features which we pioneered on the platform within the enterprise context. On top of this, we did it all whilst showcasing our own products.

A special thanks goes to Markus Giesen during those early days of planning as Markus had already spent years building up an engaged community around the blog he started – The Unofficial RedDot CMS Blog. Markus had every right to be skeptical about what I had planned, given that previous community attempts had ended in failure, but we struck up a great relationship and he provided some very valuable early guidance that helped me justify some of the decisions of our approach to my management team.

I learnt a lot in the following couple of years. I quickly learnt and understood the philosophy of many start-ups: get to the customer as quickly as you can and iterate based on what they say and what they do. We did this.

The Launch

We launched a beta quietly back in the spring of 2010 with a simple Solution repository/App store concept but quickly iterated based on feedback to include an aggregated feed of blog posts from the broader community. I was particularly pleased with the effectiveness of this simple feature as it put users in control: they could blog outside the platform, contribute their blog URL in their SolutionExchange profile, and, by simply tagging their posts with the word ‘solex’, contribute those specific posts to the “Community Feed” feature. At the time, this was big, as not many were doing this elsewhere on the web; the preference was to encourage users to blog within a given platform, which added a small but significant barrier.

We discovered that this approach worked very well and what followed was a relatively great success story. Beyond early adopters like Markus Giesen, other partners started to contribute, then some customers, and then many of my colleagues created blogs with WordPress and Blogger and started sharing knowledge this way for the first time, specifically with the intention to share via the SolutionExchange. Kudos should once again be given to you, the user community, for leading the way here, but you can now also consume great knowledge from within OpenText that is being shared by the likes of Tim Davis, Jian Huang, Manuel Schnitger, and Dennis Reil. I cannot emphasise enough what a big deal this was, and I thank each and every contributor for making the Community Feed such a success.
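As a rough illustration of the mechanism described above, an aggregator could fetch each contributor’s RSS feed and keep only the items carrying the agreed tag. The sample feed, element layout, and helper name below are hypothetical; only the ‘solex’ tag itself comes from the platform:

```python
import xml.etree.ElementTree as ET

# Hypothetical sample of a contributor's RSS feed; only posts tagged
# 'solex' should be pulled into the Community Feed.
SAMPLE_FEED = """<rss version="2.0"><channel>
  <title>Example Contributor Blog</title>
  <item><title>RedDot tip</title><category>solex</category></item>
  <item><title>Holiday photos</title><category>personal</category></item>
</channel></rss>"""

def community_feed_items(feed_xml, tag="solex"):
    """Return titles of feed items tagged for the Community Feed."""
    root = ET.fromstring(feed_xml)
    return [
        item.findtext("title")
        for item in root.iter("item")
        if any(cat.text == tag for cat in item.findall("category"))
    ]

print(community_feed_items(SAMPLE_FEED))  # → ['RedDot tip']
```

The point of the design was exactly this simplicity: contributors only had to add one tag in their own blogging tool, and everything else happened on the platform side.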

Further to this, we introduced the forum into the platform some months later, with thanks going to netmedia for their support. This has once again been a tremendous success, and I must thank those early contributors of feedback, which meant we applied tweaks until the user experience was just how we wanted it, which in turn helped adoption. Well over 1000 posts later, the forum is still going strong and is a valuable ‘go to’ resource along with the likes of the RedDot Google Group. This forum, along with the open nature of the platform, brought with it more and more visits from search, which helped new visitors discover the platform.

We also made other subtle but significant additions to the site over time. I could name the introduction of Twitter Anywhere to help our users engage and connect with others, the Ideas feature, Group pages, the integration of OTSN, the rating features, or the external authentication proof of concept to name but a few. I could also talk about how significant some of the stats are like the almost 600 strong user base the platform has, which is relatively fantastic for a platform you do not need to sign up to in order to consume content. Although I could, I shall not, as my role has changed.

New Challenge

As of July 2012, SolutionExchange was no longer ‘my baby’. In fact, I had been re-assigned and given different responsibilities a year earlier in July 2011 and felt I had already been too ineffective with my commitments to the community platform. That guilty feeling was however put a little into perspective when I had a telephone call with Uli Weiss not that long back where he stated that he felt the platform was getting better and better. This is quite simply once more down to you, the users, as your time and contributions are what make the platform gain value day by day.

I am now part of the corporate web team at OpenText and have a very grand title as a Web Business Architect. It is very important for me (albeit somewhat overdue) to acknowledge that I have gained this opportunity in part due to the relative success of SolutionExchange, and it is for this reason that I pass my thanks on to you in the community. Without your passion, engagement, and desire to share in the ways you have, it would not have been the success it continues to be, and I may not have had the opportunity I am now working to fulfil: bringing some of those valuable concepts learnt during the past few years into the broader OpenText web experience.

Safe Hands

All that said, I’m not leaving a vacant hole. I am very enthused and pleased that Manuel Schnitger has decided to step into my shoes and take on the Community Management role as he is absolutely a perfect fit in my eyes. Manuel has many years’ worth of experience in various roles and importantly understands the challenges from a community perspective. On top of that, he is always willing to share the knowledge he has (as proven through his blog) and connect others when he is not the right person. Therefore, the community leadership is in good hands and it is pleasing to see the community still progress and build momentum over time.

Simply Thanks

I may not have named everybody who deserves a special mention but hopefully you know who you are. Let me once again thank you all as I look forward to a different challenge where I shall look to apply the lessons I’ve learnt over an enjoyable period in my career that was only made possible with your support.

Regards,

Dan

Doing Good in the Community (and raising Brand Awareness)

“We have become digital crack addicts!”

Within my role as Community Manager for the OpenText Web Site Management (WSM) product, I’m close to celebrating a great milestone – 500 registered users. This is a truly great milestone as the platform is open, meaning you don’t need to register to read the content and use the platform. You do however need to register if you would like to contribute to the aggregated feed of external blog posts – called “Community Feed”, contribute to the “Tweet Exchange” Twitter feed, or post to the Forum or Ideas feature. The fact that the vast majority of registered users do not provide their details to contribute to the Community Feed or Tweet Exchange is not necessarily surprising, but a significant number have not posted to the Forum either, which raises the question: why register?

I mention this as it could be inferred as another piece of positive qualitative data – perhaps people simply register to ‘belong’ and affiliate themselves with the community even if they are not participating pro-actively straight away. This qualitative feedback is a great complement to its sibling, quantitative data, a.k.a. metrics.

In some cases it even feels like such qualitative feedback has greater value and context. For instance, the open approach to the community platform was endorsed by some praise provided by a prospect (now a customer) who saw that open, honest, and sometimes critical discussion was ongoing in the platform’s forum. This sounds like it should have been a risk as the dreaded variant of the word criticism was used. What turned it into something positive however, was the fact that this prospect could openly see that there was activity in such discussions, and that any such criticism was used constructively and that the engaged members of the community pulled together on many occasions to share experiences or knowledge around a given point of criticism. Internal OpenText employees along with Partners and Customers have jointly played a role here. This subject of openness and transparency is perhaps a subject for another day.

What does particularly interest me in this space, is how digital marketeers and businesses with online assets in general, have become obsessed with metrics. We have become digital crack addicts!

In many cases this is completely understandable as there can often be a very tangible and clearly measurable route from visitor to lead to opportunity to closed deal within a traditional marketing focused website. But what about community platforms?

How does a conversation between peers in a community platform or a blog post by a customer sharing best-practice knowledge tangibly influence that bottom line? Let’s face it, it is that same ROI challenge around “Social Media” that has been floating around for a few years now and we all know there are no magic rules that provide the answer as the context is all so important.

As I tend to be someone who sees the application of repeatable patterns in everything I do, from Software Development code idioms to Marketing Strategies, I thought that there is sure to be a parallel to this challenge and indeed there is – in the traditional marketing world.

The Traditional Approach

This realisation came to me as I recently visited my home town in the UK and noticed that as I drove to see a friend, a local roundabout that had perfectly trimmed grass and beautiful flower beds also had a sponsor — the local Sports Centre.

This really got me thinking about why the Sports Centre decided to invest in this way and, because of my (digital) crack addiction, how they could measure the return on that investment.

It is not exactly like the UK’s Health and Safety department would allow the placement of an all so trendy QR code on the sponsorship sign situated in the middle of a roundabout on a busy junction — although that wouldn’t surprise me nowadays as I have seen a few on the back of lorries! I can see the future: “Is this van driven safely? No? Then take a picture with your mobile device whilst driving and let us know!” — I digress.

Maybe the Sports Centre simply wanted to raise a positive profile within the community where many of its clients or potential clients pass through. After all, it was a beautifully kept roundabout that many a competitive gardener would be proud of and perhaps it is that association with something well kept and maintained which inferred a well run Sports Centre.

Why Invest?

Whilst looking for an image to accompany this post, I found the image above, which was a stroke of luck. The sponsors on this road sign happen to be CDS, a long-term well-respected Partner of OpenText based in Leeds, UK and one that I’ve had the pleasure of working with on a number of occasions. Given this coincidence, I decided to reach out to Mike Collier who is CDS’ Technical Director to ask directly about this investment. Here is what he said:

“The advertising on the road sign was all about raising brand awareness and coincided with a branding refresh we undertook a few years ago. This was also coupled with advertising on the back of a bus!

The location of the sign and the bus advertising was significant as it was on one of the main routes travelled by business people, into Leeds. The bus advertising was on a route which circled Leeds and in particular the town centre and the main train station.

I am not sure that we generated any real measurable business from it but it did raise awareness of the brand with a number of our existing customers commenting on it in a good way.

We did have an unexpected piece of good fortune when the bus crashed! (no injuries thankfully) and it was featured on Look North – the local news channel!”

I found this feedback from Mike very interesting as it helps reinforce the question I’m trying to raise in this post.

The Question

Community platforms such as the Solution Exchange are, in the first instance, there to help serve the community better. Whether that is the aggregation of related articles on a shared context or the sharing and dissemination of best-practice knowledge, the focus is on generating genuine value for end users to help them get their job done without a hidden agenda of lead generation.

Given this thread of thought, is lead generation a feasible goal for such a community of tech-savvy users, who are often abstracted a level or two away from key decision makers? You could track activity at an Account/Company level instead of an individual one, but my feeling is that such tracking could come at the cost of user trust — a commodity that is hard to establish but so easy to lose.

What this boils down to, is something very simple — should such community platforms where the intention is to do something good for end users be a Brand Awareness initiative or a Lead Generation/Customer Acquisition initiative?

This question depends on many factors and in particular the context, as many “social” communities can certainly facilitate nurturing prospects to a conversion goal. A retail brand using Facebook to promote to potential customers presents a contrasting context to that of a multi-product/service enterprise providing value to an existing customer base in an open and transparent way.

Conclusion – Lay off the (digital) crack!

For me, as the Community Manager for Solution Exchange, my focus is on generating genuine value for Users (Customers, Partners, along with internal staff). It is therefore unbelievably clear to me that I am undertaking a Brand Awareness initiative primarily. Yes, lead generation through referrals and soft promotions is and will be possible but it should not take centre stage.

So maybe it is time for us to lay off the digital crack as it clouds our decision making. Balanced use of quantitative and qualitative data is what is needed here to make educated business decisions. This may not be appropriate in every “community” initiative, but it is an approach that makes a whole lot of sense to me.

What do you think?

An Open MVC Approach to OT WSM (RedDot) Delivery Server Functionality

This topic has been on my mind for some time now and inspired by a chat with Dennis Reil, I thought I would get something written down with the view to harvesting some of the views out there in the community.

The main context for this post is the enablement of Social Media features within an OT WSM project but the pattern described can be equally applied to other forms of integration through the use of the OT WSM Delivery Server.

I’ve long desired a way in which editors can be better empowered within the constraints of what the site builder/developer has allowed them to do with regards to features like commenting and tagging etc.  It turns out that the flexibility within the Management Server product provides us this very possibility.

With version 10 of the product came the possibility for a SmartEdit user to drag and drop templates into containers from the panels available within the SmartEdit view. This was initially focused on the scenario where a SmartEdit user builds up the various content parts of a page, but I’d ask why this could not also allow the same user to enable some functional elements within a page. Even without the drag and drop, the point of enabling that business user was something that I was interested in looking into.

Therefore, before I detail my proposed strawman, I think it is worthwhile to detail some of the guiding principles that have helped me shape this idea:

All Content in Management Server

This for me is a no-brainer and something that I often pass off as “best-practice”. What I mean here is that, as much as possible, everything should live in the Management Server. This means the content that is typically unique to Delivery Server (e.g. XSLT and XML files) should be within Management Server and published into Delivery Server. More specifically still, those XML and XSLT files are set up as Content Classes and instantiated within the project tree structure. This provides the following benefits:

  • Keeps all assets together in a single repository
  • Allows the utilisation of version control within the Content Classes of this content
  • Allows for the possibility to parametrise elements within the templates through placeholders
  • Allows for the ability to permeate the setting of certain placeholder values through to SmartEdit users
  • A single project that can be published to set everything up within Delivery Server (although Delivery Server project and system config needs to be managed directly within Delivery Server)

Utilise the Existing Skillsets of Site Administrators and Developers

This is another important one for me: those wishing to adopt new features shouldn’t suffer from the fear of having to learn another skill, so the rollout of these features should be facilitated through the knowledge they already have.

Adopt an MVC Approach

Why is this important? Well, this well-established, tried and tested pattern is there for a reason, and you’ll be able to search for it easily if you haven’t come across it before. It nicely separates the responsibilities within the feature “module”, and you’ll see how this separates out into different CMS pages or elements in the solution, allowing access to the various parts to be constrained if need be.

An Open Approach

This one should be obvious and is actually related to the point about skills above.  Encapsulation is a good thing when used right, but when something that can and often needs to be customised is shut away behind what appears to the user as a black box, then that task has just got harder.  Therefore, an open approach of providing access to the various parts if needed is important.

The Provisional Proposal

The essence of this proposal is the creation of a feature module made up of different Management Server components:

  • A Configuration/Controller Content Class
  • A Controller/Model Content Class
  • A number of View Content Classes

It looks like I’ve sat on the fence with the “Controller/Model” part above so I’ll explain the purpose of each of the above Content Classes:

Configuration/Controller Content Class

From a SmartEdit user’s perspective, this is the main Content Class that contains the relevant enabling code/content for the feature.  It is this Content Class that can be dragged into a container on the page for instance.

Within this Content Class, several placeholders can be exposed to the SmartEdit interface allowing control of various feature parameters to a relevant level of user.  For instance, if the feature shows a list of comments and a comment form then a parameter may allow the user to set a “refresh time” for the comments, which translates in the technical world to how long the resultant calls under the covers are cached.

In principle, this Content Class refers to two other Content Class instances (actual CMS pages) – the XML Model and the XSLT defining the View.  In the simplest case, this may just contain a single include DynaMent:

<rde-dm:include content="anc_linkToXML" stylesheet="opt_listOfViews" 
                cachingtime="stf_refreshTime" />

It can be imagined that the option list and the standard field could be exposed through SmartEdit to allow user control.  If the option of the view is not to be given to the user, then an anchor placeholder can be used and a pre-assigned reference to the chosen view instance utilised.
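To make the parametrisation concrete, here is a small hypothetical sketch (the placeholder values and helper name are invented for illustration) of how the resolved SmartEdit placeholder values could end up forming the final include tag at publication time:

```python
# Hypothetical sketch: the anchor, option list, and standard field
# placeholders resolve to concrete values, producing the DynaMent
# include that Delivery Server will execute.
def render_include(anchor, stylesheet, caching_time):
    """Substitute resolved placeholder values into the include tag."""
    return (
        f'<rde-dm:include content="{anchor}" '
        f'stylesheet="{stylesheet}" cachingtime="{caching_time}" />'
    )

# Example: a comments feature with a 300-second refresh time chosen
# by the SmartEdit user (all names here are illustrative).
print(render_include("comments_model.xml", "comments_list.xsl", "300"))
```

The key point is that the SmartEdit user only ever touches the placeholder values; the structure of the include itself stays under the developer’s control.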

Controller/Model Content Class

OK, so this Content Class is part Controller and part Model; the reason is that it contains the controlling code to invoke a given feature, and the resultant XML provides us the model, which is the input to the view.

Typically, it is this Content Class that encapsulates the Delivery Server DynaMent language functionality, and with OpenText’s Social Communities product, this will be using the HTTP DynaMent, which I have to say is a refreshing and strong addition to the product.

View Content Class

This is simply the XSLT that transforms the output XML from the feature into your resultant format. Let’s keep it simple and assume we are generating an HTML result here. One or more views can be created if you want to provide different ways of using the model data. Of course, if it is just look-and-feel changes you’re looking to let your users control, then this may be better implemented in CSS. Separate XSLT Content Classes are for cases where the results are used in fundamentally different ways. An example that I’ve often used is when a feature should return XML or JSON – that is simply achieved with a different XSLT file.
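Delivery Server does this with XSLT, but the View idea can be sketched in a few lines of Python by analogy (the model XML, field names, and render helper below are all hypothetical): the same model is rendered as either HTML or JSON depending on which view is selected.

```python
import json
import xml.etree.ElementTree as ET

# Hypothetical model XML, as the Controller/Model Content Class might emit it.
MODEL = "<comments><comment author='anna'>Nice post</comment></comments>"

def render(model_xml, view="html"):
    """Render one model through different 'views' (HTML list or JSON),
    mirroring the role of the per-format XSLT Content Classes."""
    comments = [
        {"author": c.get("author"), "text": c.text}
        for c in ET.fromstring(model_xml).iter("comment")
    ]
    if view == "json":
        return json.dumps(comments)
    return "<ul>" + "".join(
        f"<li><b>{c['author']}</b>: {c['text']}</li>" for c in comments
    ) + "</ul>"

print(render(MODEL))          # HTML list view
print(render(MODEL, "json"))  # JSON view of the same model
```

Swapping the view changes only the presentation; the model stays untouched, which is exactly the separation the MVC approach is after.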

The Value

The value of such an approach is that it enables those with the relevant knowledge to encapsulate examples for others to use.  It therefore empowers business (SmartEdit) users to be able to choose functionality within certain sections of a page – for instance, a user can drag and drop comments or ratings onto an article page.  Finally, it shows an open approach for how such features can be enabled using elements that admins are familiar with – Management Server Content Class templates.

The Next Step: Your ideas!

In the first instance, I would like to understand people’s views on this with the intention to conclude the proposal by making a suggestion to how such a module can be packaged.  I would like to somehow make it possible that an admin can import the module into the Management Server and from there, complete a couple of minor configuration steps and then the module’s feature is available to the business user wherever the admin enables it.

Therefore, leave a comment or join the conversation at http://www.solutionexchange.info/forum.

Canonical URLs and SEO

As I recently made a foolish mistake, I thought I would share it to help others avoid it in the future. It was to do with my quest to get certain pages of the Solution Exchange Community platform indexed in Google, Bing, Yahoo, etc. – specifically, the valuable forum threads.

First of all, it is worth mentioning how these threads are delivered.  The forum itself is an object of the OpenText Social Communities (OTSC) product, which interacts with the Delivery Server through the OTSC XML API.

Therefore, the forum thread pages are dynamically delivered with the shell of the page being the same physical page with the content influenced by parameters.  In this case, I’ve chosen to utilise sensible URL structures that contain the parameters for simplification and SEO.  I mention more about this in this forum post.  The use of rewrite rules in this way for SEO is one of the key values of a Front Controlling Web Server.
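As a sketch of the kind of rewrite involved (the internal page name and parameter name below are assumptions for illustration, not the actual Delivery Server configuration), the front-controlling web server maps the search-friendly URL onto the single physical forum page plus a thread parameter:

```python
import re

# Hypothetical rewrite rule: a crawler- and reader-friendly URL is
# mapped onto the one physical forum page with a thread parameter.
def rewrite(path):
    match = re.fullmatch(r"/forum/thread/(\d+)", path)
    if match:
        return f"/forum.htm?threadId={match.group(1)}"
    return path  # other paths pass through unchanged

print(rewrite("/forum/thread/42"))  # → /forum.htm?threadId=42
```

The SEO benefit comes from the left-hand form: the parameters live inside a stable, meaningful URL rather than a query string.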

As the shell of the page is the same, I initially had the same <title> tag for all threads and thought that this was the problem.  After changing to adapt the <title> value to the title of the forum thread (along with waiting for re-indexing to happen) there was no change.

Finally, through checking the index of Solution Exchange on Bing with a “site:” search, I noticed to my surprise that one of the threads was indexed but was associated with the URL http://www.solutionexchange.info/forum.htm!!! This was strange because, externally, the forum thread was only accessible through a URL like http://www.solutionexchange.info/forum/thread/{ID}, meaning that I must be explicitly telling the search engines the wrong URL.

This was the clue I needed to realise that my problem was due to something I had implemented many months before.

To address the potential SEO penalty from the home page of the community being reachable through both http://www.solutionexchange.info/ and http://www.solutionexchange.info/index.htm, I introduced the use of the following HTML header link tag – the example below is the home page value, but I included this across the whole site:

<link rel="canonical" href="http://www.solutionexchange.info/index.htm" />

You can read more about this on the Official Google Webmaster Central Blog.  In summary, it tells the search engines that this page is to be associated with the given URL and page ranking (or “Google juice”) is to be associated with that and not the entry URL that the crawler bot used.  This avoids the possibility of page ranking for the same page being split across two or more URLs or being penalised for duplicating content across multiple URLs.

With this knowledge, I was able to update the page template that houses this dynamic content to form the correct URL within this canonical link. Now it’s back to the waiting game to see if the indexes will pick up the content and forgive me for positioning different pages as one.
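The fix can be sketched as follows – a hypothetical helper that builds the canonical link per thread from the public URL pattern, rather than emitting one site-wide value as the original template did:

```python
# Hedged sketch of the template fix: form the canonical <link> per
# forum thread instead of using a single hard-coded site-wide URL.
BASE = "http://www.solutionexchange.info"

def canonical_link(thread_id=None):
    """Return the canonical link tag for a thread page, or for the
    home page when no thread id is given."""
    if thread_id is not None:
        url = f"{BASE}/forum/thread/{thread_id}"
    else:
        url = f"{BASE}/index.htm"
    return f'<link rel="canonical" href="{url}" />'

print(canonical_link(42))
print(canonical_link())
```

With this, each dynamically delivered thread tells the crawler its own public URL, so ranking accrues to the right page.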

Although a small detail, the end goal and potential gain is huge as it opens up the rich content that continues to grow within the forum for discovery via the big search engines.  This in turn will only help those within the wider community who are not aware of Solution Exchange discover the content, which may help them resolve an issue or encourage them to take part in the community platform moving forwards.

As always, leave a comment or get in touch if you have any questions.

Moving Open Text Delivery Server to Common Search

As part of the small team behind the Solution Exchange, I was somewhat dreading the day when I had to change the internal search engine over to Open Text Common Search on the Web Site Management Delivery Server.

However, in absolute honesty, this was not the issue of complex configuration that I was expecting and I will explain the steps I took.

The Delivery Server Common Search Connector

  1. The first step is to install the Open Text Common Search product.  I was fortunate enough to have this already in our infrastructure so didn’t need to do this step.
  2. Assuming the Common Search is installed, you can log into Delivery Server, navigate to connectors > Search Engines > Administer and click the import button.  Version 10.1 of Delivery Server has a pre-configured connector that you can use.  Click the OTCommonSearch link to import the connector.
  3. Change the URL of the Common Search Server to the IP of your Common Search machine.
  4. Change the “Incoming directory of indexing jobs” to a shared folder.  This is a path as seen by the Delivery Server.  I’ve chosen to place this on the local machine of the Delivery Server and share that with the Common Search machine.
  5. Change the “Incoming directory of Common Search server” to point to the same directory as above but from the perspective of the Common Search machine.  I initially had problems here as the Delivery Server and Common Search were on different Domains.  We changed this anyhow to reflect better practice in our setup.
  6. Create the shared folder if you haven’t already and make sure both the Delivery Server and Common Search have read/write access.
  7. You’re done!

It was really that easy! (well, if I discount the delays due to not being able to share directories effectively across Windows Domains at first).

Finally, it is worth pointing out the tweaks I made to my queries for the new Search Engine.

When searching specific groups, you can now use the syntax:

group:<ds_group_name>

and

attributeName:'[#request:attributeExample#]'

for attributes.
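A hypothetical helper composing such a query string from the two syntax forms above (the helper name and the group/attribute values are illustrative; only the `group:` and `attribute:'value'` syntax comes from the product):

```python
# Hypothetical sketch: compose a Common Search query string from search
# terms, an optional group restriction, and attribute filters.
def build_query(terms, group=None, **attributes):
    parts = [terms]
    if group:
        parts.append(f"group:{group}")
    parts += [f"{name}:'{value}'" for name, value in attributes.items()]
    return " ".join(parts)

print(build_query("delivery server", group="editors", category="howto"))
# → delivery server group:editors category:'howto'
```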

Admittedly, I didn’t need to do anything more complex than this so there were not a lot of queries to change.

There may be more complex examples out there, but the key message is to start planning your changeover now as it might just be easier than you think!

As always, please leave your questions or comments.

What the **** is Social Media?

The following slide deck from Marta Kagan is, in my opinion, one of the best I’ve seen to date on the subject of Social Media – partially because of its engaging format and eye-catching messages, but also because it is well researched.

After reading through this slide deck and only imagining how great the presentation would have been live, it got me thinking about a very relevant point made in Clay Shirky’s book – Here Comes Everybody. He observes how Social Media and the communities that are formed from these ‘new’ tools actually lower the cost of failure. This is particularly relevant in the context of an example Social Media campaign where those in the community are empowered to create short videos, say, of them using a product. From tens, hundreds, or even thousands of cheaply created contributions, many are going to be poor, some OK, and a minority are going to be fantastically engaging. This power law (reverse exponential/long tail) shows how Social Media lowers the cost of participation to increase contributions, and therefore increases the likelihood of discovering that golden piece of content which casts a large shadow over the rest and does far more good than all the others put together.

Naturally, there are risks involved also as the potential negativity is also large. However, I don’t fear this as I’ve come to think that the nature of Social Media is a leveler or regulator of behaviour. If you are seen to be pushing your brand unethically or are self-obsessed without desiring to understand the true value of your offering, then you’ll be found out and Social Media will provide a platform for people to call you out and damage your brand. If you’re honest about the mistakes you make and open about what you are trying to achieve, you’ll be supported – and supported in ways you never thought you would be.

The main take away point from all this for me is the affirmation that we should all be empowering the communities that exist around our brands.  Whether large or small, it is the community that contains a brand’s most powerful “brand ambassadors”.  Giving them a voice and listening to what they have to say is far more powerful than making isolated decisions.  I remember once someone stating to me “never assume you know more than your audience”.  In today’s Social online world, never has that been so true.

Enjoy the presentation!

Automatic Translation with Google Language API

Language can be a huge barrier when you need to help a multi-lingual community interact better, avoid intimidating one region or another, and generally facilitate interaction.

I’ve recently started to investigate what can be done when faced with this challenge, as it is a very real problem for me and our www.SolutionExchange.info community platform.  Aggregation of user-driven content can be a great thing, but common publication processes like editing and translation are bypassed.  The availability of tools that help an individual publish his or her thoughts and opinions is, for the most part, a good thing, as it lets people interact more quickly and easily, removing barriers that once prevented any kind of sharing or interaction (e.g. you were never able to publicly comment on a newspaper article or spread a story without significant effort and cost).

With a wide and varied community in mind, I investigated the Google Translate API, accessible via the Google AJAX Language API, and started a trial to see how this automated process can help our users gain some context about content that may not be written in their mother tongue.  What is particularly useful is that the API can detect the source language automatically, which is great when you have many languages across many sources.
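For those curious about what a call to the service involves: the AJAX Language API also exposed a simple REST endpoint alongside its JavaScript interface.  The following Python sketch builds a translate request with automatic source-language detection by leaving the source half of the `langpair` parameter empty; endpoint and parameter names reflect v1.0 of the API as I understand it, so treat the specifics as assumptions rather than gospel.

```python
from urllib.parse import urlencode

# v1.0 REST endpoint of the Google AJAX Language API (assumed)
BASE = "https://ajax.googleapis.com/ajax/services/language/translate"

def build_translate_url(text, target, source=""):
    # langpair is "<source>|<target>"; an empty source asks the
    # service to auto-detect the language of the submitted text
    query = urlencode({"v": "1.0", "q": text, "langpair": f"{source}|{target}"})
    return f"{BASE}?{query}"

url = build_translate_url("Guten Tag", "en")
print(url)
```

The response is a small JSON document containing the translated text and, when auto-detection was used, the detected source language.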

The trial starts on 6th August 2010 and I would like to run it over the course of a month to see whether this prototype evolves into something valuable for some of our users.  The feature can be seen in the footer of the site www.solutionexchange.info and must be invoked manually, as no choices are currently remembered.  This was a deliberate choice: I was keen to ensure users actively decided to try out the feature, rather than be confused by auto-translated content they had not expected.  Auto-translated content is appended with a green asterisk to indicate that the related text has gone through automatic translation.  Currently, Tweets, Solution Descriptions, and Community Feed items are just some of the sections under trial, but this can easily be extended or refined depending on feedback.

I’d like to extend and improve this trial, so I’d happily take feedback here or through the feedback form on the site at www.solutionexchange.info/feedback.htm.

If you have any questions, feel free to pop them in a comment below.

IIS7, Tomcat & Application Request Routing

Further Update: 27th June 2011

Another update on this topic. If you were using custom error pages in IIS7 and you implemented the update below, you may have noticed that the custom error commands are no longer being adhered to. To fix this, you need to set up custom error pages at the site level: choose your site, select “Error Pages”, then “Edit Feature Settings” from the action menu, and then “Custom error pages”.

Important Update: 22nd June 2011

On page 2 of this article (How To Configure IIS 7.0 and Tomcat with the IIS ARR Module), there is a key step that I failed to observe when I wrote the original post below.  The step in question is the enablement of the (reverse) proxy server after the ARR install.  By doing this, you are able to apply rewrite rules at the site level, something I wasn’t able to achieve originally, which meant that the routing rules within my server farm were somewhat overloaded.

With this setting enabled, I can leave a single delegation rewrite rule at the server farm level, telling IIS to delegate HTTP requests of a certain pattern, but keep the rewrite rules that exist for beautification at the desired site level.  This is a much tidier and more scalable approach.

One gotcha to be aware of is that the rewrites at the site level need to be absolute URLs.  You could therefore be tempted to place the host of a single Tomcat instance behind IIS directly in here, and it would work fine.  But why not allow for a little future-proofing and use localhost within all absolute URL site-level rewrites?  This isolates the rewrites used for masking ugly application URLs and leaves the job of request delegation to the server farm.  That way, the server farm config can be used to bring other Tomcat instances online, or take them offline for maintenance, without having to change the site-level configuration.  In other words, it keeps the various areas of the IIS7 interface focused on the job in hand, allowing for easier administration.
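To illustrate, a site-level beautification rule using an absolute localhost URL might look something like the following web.config fragment.  The rule name and pattern are hypothetical examples, and `<project>` is the same placeholder used throughout this article, not literal XML:

```xml
<rewrite>
  <rules>
    <!-- Hypothetical beautification rule: map *.htm/*.html requests to the
         application URL via localhost, leaving actual request routing to
         the server-farm rules. -->
    <rule name="Beautify htm" stopProcessing="true">
      <match url="^([^/]+\.html?)$" ignoreCase="true" />
      <action type="Rewrite"
              url="http://localhost/cps/rde/xchg/<project>/default.xsl/{R:1}" />
    </rule>
  </rules>
</rewrite>
```

Because the absolute URL points at localhost, swapping the Tomcat instance behind the farm requires no change to this fragment.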

Please keep this update in mind as you read the otherwise unchanged original post below.

Regards,

Dan

After many years of using the Tomcat Connector (http://tomcat.apache.org/connectors-doc/) when setting up Tomcat behind IIS, it is now time to say goodbye.

This is the conclusion I’ve come to after some particularly significant challenges using IIS7 on a 64-bit Windows 2008 machine.

The traditional approach I’ve used in the past has been to utilise the Tomcat Connector, implemented as an ISAPI filter, to delegate requests from IIS through to Tomcat.  This has worked well for me and was the subject of a previous article (http://bit.ly/lp6zW), but the 64-bit system threw in a couple of additional challenges that weren’t so easy to get around.

The problems faced led me to discover Application Request Routing (ARR), an official extension for IIS7, which allows you to define the delegation of requests to servers sitting behind the IIS instance.

What is particularly nice about this extension is the way it surfaces the configuration within the GUI, making it easier to understand what is being delegated.  The approach itself, however, is similar to the ISAPI filter approach: delegating based on URL path patterns.

The following takes you through an overview of how to set this up:

1. Install ARR

You can obtain the appropriate install for the ARR IIS7 extension at http://www.iis.net/download/applicationrequestrouting

Once installed, the ‘Server Farms’ node indicates that it has installed correctly as indicated in the picture below.

ARR Install

The Server Farms node is seen if ARR is installed correctly

A number of modules are added as part of this extension.  You can find the details of these at the same ARR link (http://www.iis.net/download/applicationrequestrouting)

2. Create Server Farm

Although the concept of a ‘farm’ of servers may be overkill for our need to delegate HTTP requests through IIS7 to Tomcat, we shall nevertheless set up a farm containing one server: our Tomcat instance.

To do this:

  1. Highlight the ‘Server Farms’ node in the left panel of the IIS7 Management Console.
  2. Choose ‘Create Server Farm’ from the right-hand action menu.
  3. You will be prompted for a name for the farm.  For my needs in setting up the Open Text Delivery Server behind IIS7, I gave the farm the name ‘Tomcat – Delivery Server’.

     ARR Server Farm Name

  4. You will then be prompted to set up a server in the farm.  In our case, we simply select the localhost instance of Tomcat running on port 8080. To specify the port, open the ‘Advanced settings’.  Strangely, there appears to be no easy way to edit a server’s port once set up, so make sure it is correct, otherwise you will have to delete the server and add a new one.

    ARR Add Server

    Make sure you open the Advanced settings to edit the port number

3. Configure the Routing Rules

Now that we have informed IIS7 about the server that sits behind it, we need to let it know how we wish to delegate HTTP requests to it.  To do this, choose the newly created Server Farm in the left-hand panel and select the Routing Rules feature.

ARR Routing Rules

Within here, we have a few options.  I’ve chosen to keep the defaults of having both checkboxes checked and have no exclusions set, as I am delegating this responsibility to the URL Rewrite rules.

From here, you can add and modify the rewrite rules defining how requests are delegated using the ‘URL Rewrite’ link in the right-hand action panel.

In my case, I chose to change the default rule that was set up for me to a regular expression, as opposed to the wildcard default; this was purely personal preference.  The pattern I used for this rule is:

cps(.+)

and I ignore the case.

Finally, I have no Conditions or Server Variables to set in my scenario (although they could easily be added here), so I conclude the rule by setting the action to ‘Route to Server Farm’, choosing my ‘Tomcat – Delivery Server’ farm with a path setting of

/{R:0}

This passes all URL path info through to Tomcat.  I also choose to stop the processing of subsequent rules.
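To make the capture semantics concrete: in IIS URL Rewrite, {R:0} refers to the entire match and {R:1} to the first capture group.  The following Python sketch mimics (rather than invokes) what the farm rule above would produce for a typical request path; the project name used is a hypothetical example.

```python
import re

# The farm-level rule: pattern "cps(.+)", action path "/{R:0}".
# {R:0} is the whole match, so the full /cps/... path reaches Tomcat.
pattern = re.compile(r"cps(.+)", re.IGNORECASE)

def route(path):
    m = pattern.search(path)
    if not m:
        return None  # rule does not apply; the request stays with IIS
    return "/" + m.group(0)  # "/{R:0}" -> slash + entire match

print(route("cps/rde/xchg/myproject/default.xsl/index.htm"))
```

Requests that don’t contain the `cps` prefix simply fall through, which is exactly the behaviour we want for static content served directly by IIS.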

4. Refine Rules for your Environment

Lastly, in my setup, I’ve added the following further rules to refine how my site is served through IIS7:

Delegate .htm and .html requests:

Pattern - ([^/]+\.html?)
Action path - /cps/rde/xchg/<project>/default.xsl/{R:1}

Delegate .xml requests:

Pattern - ([^/]+\.xml)
Action path - /cps/rde/xchg/<project>/default.xsl/{R:1}

Delegate default home page

Pattern - ^/?$
Action path - /cps/rde/xchg/<project>/default.xsl/index.htm

Summary

Although this approach of using IIS7 in a reverse proxy capacity doesn’t benefit from the efficiencies of the AJP protocol used by the Tomcat Connector, the impact on most sites will be negligible.  In exchange, you have Tomcat and IIS7 working together in a way where the GUI of the IIS7 Management Console helps admins define and understand what is happening.  The ISAPI filter approach is often far less visible, partly because of the broad nature of what ISAPI modules can provide, but also because of the configuration required outside of the IIS7 Management Console.

As always, if you have any questions, leave a comment.

Open Text Delivery Server with a Front Controlling Web Server

Overview

This post discusses the best practice of deploying the Open Text Delivery Server in an optimal way alongside a front controlling web server.

Delivery Server is a dynamic web server component whose strengths lie in coarse-grained personalisation, dynamic behaviour, and system integration.  As it is housed within a Servlet Container, it is not the ideal location from which to serve static content (unless you wish to maintain a level of access control over that content).

Leveraging a front controlling Web Server facilitates an optimised site deployment, as web servers such as Microsoft’s IIS or the Apache HTTP Server are built for delivering static content efficiently.  For example, it is easy to configure a far-future ‘Expires’ header on a given folder (and therefore its content) in either Apache or IIS, which promotes the caching of content in a user’s browser and reduces page load times.  Another example is the use of mature compression features within such web servers.  Although these things can be achieved with some Servlet Containers, it is certainly not straightforward and doesn’t necessarily make sense from an architectural perspective.
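As a concrete illustration of the Expires example, a far-future header for a static assets folder can be set in Apache with mod_expires.  A minimal sketch (the folder path is a hypothetical example):

```apache
# Requires mod_expires to be enabled
<Directory "/var/www/static">
    ExpiresActive On
    # Far-future expiry: let browsers cache static assets for one year
    ExpiresDefault "access plus 1 year"
</Directory>
```

A one-year expiry is a common choice for assets whose filenames change when their content changes; shorter periods suit assets updated in place.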

It is for this architectural reason that best practice dictates we delegate only the relevant HTTP requests to Delivery Server.  In most cases, this means Delivery Server is delegated requests for .htm and .xml resources.  The rest can be served from the front controlling web server (or, better still, a CDN).

This article provides a high-level overview of what to set up.  Depending on feedback, I may follow up with posts on the details of each step.

Delegating Requests from the Web Server to Delivery Server

This step can be easily achieved using the Tomcat Connector for both IIS and Apache. To find out more see the Tomcat Connector documentation here: http://bit.ly/at1w8G.

This connector uses the Apache JServ Protocol (AJP), which connects to port 8009 by default on Tomcat and is optimised to reuse a single connection between the Web Server and the Delivery Server across many HTTP requests.  This makes it a better option than using the reverse proxy functionality within the Web Server.

If we take a typical Delivery Server install (i.e. the reference install using Tomcat), a page can be accessed with something like the following URL:

http://<host>:8080/cps/rde/xchg/<project>/<xsl_stylesheet>/<resource>

where the resource could be any text-based file, like index.html or action.xml.

Correctly installing the Tomcat Connector means that we can access the same resource through the Web Server on port 80, rather than going direct to the Tomcat instance on port 8080:

http://<host>/cps/rde/xchg/<project>/<xsl_stylesheet>/<resource>

Many confuse this step with URL rewriting or redirecting, as the Tomcat Connector is often called the Jakarta Redirector.  I therefore choose to differentiate by saying that this delegates HTTP requests between the two systems and nothing more.

In every install, I have always used the defaults in the workers.properties file and just used the following rule in the uriworkermap.properties file:

/cps/*=wlb
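For completeness, the ‘wlb’ worker referenced in that rule would typically be defined in workers.properties along these lines.  This is a sketch of a typical load-balancer worker definition; the member name, host, and port are assumptions based on a default Tomcat install, not taken from my actual setup:

```properties
# Workers the connector exposes to the web server
worker.list=wlb

# A load-balancer worker named 'wlb' with a single member
worker.wlb.type=lb
worker.wlb.balance_workers=node1

# The member: an AJP13 connection to the local Tomcat instance
worker.node1.type=ajp13
worker.node1.host=localhost
worker.node1.port=8009
```

Even with only one Tomcat instance, defining the worker as a load balancer leaves room to add members later without touching the uriworkermap rule.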

URL Rewriting

With delegation set up, deciding which HTTP requests should be forwarded to Delivery Server is a simple matter of performing some URL rewrites.

As we have decided to use a mature Web Server, there are best-practice ways to achieve this.  For IIS6, HeliconTech (http://bit.ly/bgJEF6) created a very useful ISAPI filter that ports the widely adopted Apache mod_rewrite (http://bit.ly/cfvuLD) functionality, so the same rewrite rules can be used on both servers.  The following provides a couple of typical examples:

# Default landing page redirect
RewriteRule ^/$ /cps/rde/xchg/<project>/<xsl_stylesheet>/index.htm [L]
# Rewrite to delegate all *.html or *.htm HTTP requests to Delivery Server
RewriteRule ^/?.*/(.+\.html?)$ /cps/rde/xchg/<project>/<xsl_stylesheet>/$1 [L]
# Rewrite to delegate all *.xml HTTP requests to Delivery Server
RewriteRule ^/?.*/(.+\.xml)$ /cps/rde/xchg/<project>/<xsl_stylesheet>/$1 [L]

Those of you who are well versed in regular expressions will see that the last two rules could be combined but I tend to leave them separate to aid readability.

The beauty of using regular expressions in this way is that you can also create useful SEO benefits for your site. Take, for example, the following rule:

RewriteRule ^/?.*/([0-9a-zA-Z_]+)$ /cps/rde/xchg/<project>/<xsl_stylesheet>/$1.htm [L]

This rule maps a URL with many apparent subdirectories to a single Delivery Server file.  This means you can publish a page with a “virtual” path within the Management Server, which appears to a browser (and to search engines) as something like the following:

http://<host>/this/is/a/descriptive/directory/structure/page.htm

and yet this maps to:

/cps/rde/xchg/<project>/<xsl_stylesheet>/page.htm
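Since mod_rewrite patterns are ordinary regular expressions, mappings like these can be sanity-checked outside the web server.  The following Python sketch applies the .html rule and the extensionless “SEO” rule to sample paths; the `<project>` and `<xsl_stylesheet>` placeholders are kept verbatim from the rules above, and the sample URLs are invented for illustration:

```python
import re

# (pattern, replacement) pairs mirroring the mod_rewrite rules above
RULES = [
    # *.htm / *.html requests
    (r"^/?.*/(.+\.html?)$", r"/cps/rde/xchg/<project>/<xsl_stylesheet>/\1"),
    # extensionless "SEO" URLs, mapped onto a .htm resource
    (r"^/?.*/([0-9a-zA-Z_]+)$", r"/cps/rde/xchg/<project>/<xsl_stylesheet>/\1.htm"),
]

def rewrite(path):
    for pattern, replacement in RULES:
        if re.match(pattern, path):
            return re.sub(pattern, replacement, path)
    return path  # no rule matched: serve from the web server as-is

print(rewrite("/this/is/a/descriptive/directory/structure/page.htm"))
print(rewrite("/products/widgets/overview"))
print(rewrite("/css/site.css"))
```

Note how a static asset path falls through untouched, which is precisely the split between Delivery Server content and web-server content that this article advocates.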

IIS7

Being a Microsoft product, IIS7 (of course) has some quirks with regard to rewriting, which I explained in a previous post: http://bit.ly/lp6zW.

Summary

This approach has led to many successful installations where sites could additionally be optimised for SEO and page-load performance.