The Copeland Companies is a premier provider of retirement planning products, services, and support. Copeland specializes in assisting not-for-profit organizations in the healthcare, government, and educational markets, as well as for-profit employers, with a variety of defined contribution plans. Plans include 403(b) Tax Sheltered Annuity programs, 401(a) plans, 457(b) Deferred Compensation programs, and 401(k) plans. Copeland is focused on providing single-source access to multiple products from different financial providers, which places a complex set of technical demands on its IT staff. Copeland distinguishes itself by offering both diversification and extensive personal counseling, and providing these distinguishing offerings requires robust and complex software systems.

The Copeland Companies are wholly-owned subsidiaries of Travelers Group Inc., one of the nation’s largest diversified financial services companies. As of June 1997, Copeland Associates, Inc. services retirement savings programs through more than 7,000 employers with approximately 940,000 active participants who have accumulated almost $18.3 billion toward their retirement. In addition to supporting the record keeping and administration associated with the Copeland products, Copeland IT staff has recently been tasked with developing applications to manage Travelers-only products. For these products, administration applications are executed at Copeland’s call center facility but require access to the system of record located at Travelers in Connecticut. This arrangement has placed an additional set of complex requirements in the lap of Copeland’s IT organization.

Historically, Copeland Associates has managed its participants’ plan data using DB2 running on the AS400. Participants could access plan information or make changes to their plans via a Voice Response Unit (VRU) or by speaking directly with call center representatives, who would run AS400 applications developed in RPG. Long development cycles, high maintenance costs, and difficulty in finding and retaining AS400 and RPG talent made Copeland’s decision to migrate away from the proprietary environment an easy choice. Copeland’s senior IT management and technical architects realized that the IT organization should plan an orderly, long-term transition to component-based development and open systems, hoping that these would provide a new development platform for the organization. UML, Java, JDBC, CORBA, and NT would all play a role in the new platform.

Transition Overview

Copeland would not reinvent itself overnight, but over many years. The process would take time, but with proper planning, Copeland would achieve interim deliverables and long-term benefits. Copeland employed Genesis Development Corporation to help plan the transition to this new computing platform. The approach would be based on Genesis’ SureTrack™ transition methodology. The transition would include multiple projects and multiple phases. The initial phase took roughly one month and included a technical, organizational, and business assessment. The assessment identified existing skills and critical systems within the organization, along with key business goals and objectives. Once this information was gathered, a high-level transition plan was put in place.
The plan identified several pilot projects. Delivery of these projects would increase the component skill sets of individual developers and business analysts. Project managers would become intimately familiar with the development lifecycle of component-based projects. The Copeland organization would make improvements to critical processes such as requirements gathering and quality assurance. Within six months of initiation, tangible business benefits would result from the deployment of critical applications. This would ensure management commitment and help the transition move past the challenges that periodically result from attempting to develop applications using new technologies and approaches.

Over the first 18 months of the transition, five projects have come on-line. These projects have achieved significant business value and helped the Copeland IT organization become more effective through the use of component technology. The business projects have helped refine the development approaches and methodologies that will be used at Copeland in the years to come. The initial projects are listed below:

* A UML project related to improving requirements gathering and business analysis, critical to the success of other business-oriented projects.
* A development lifecycle refinement project related to improving and enforcing a more formal development lifecycle on other business-oriented projects.
* An Internet-based application allowing participants and plan administrators to manage their benefits programs.
* An Intranet-based application supporting customer service representatives who assist participants and plan administrators with the management of their benefits programs.
* A component infrastructure project targeted at reducing the development effort associated with the creation and management of enterprise components, which are used in or across other business-oriented projects.

Some of these projects, such as the Internet and Intranet administration applications, are directly related to specific business goals. Others are related to the overall transition of Copeland to component technology. These projects, such as the UML project, will benefit Copeland across other business projects.

Leveraging UML for requirements gathering and project estimates

The first transition project started at Copeland focused on improving the requirements gathering process. The initial assessment discovered that Copeland’s requirements gathering process was mostly ad hoc, which resulted in the delivery of applications that did not always effectively address business needs. Since Copeland was pursuing a component-based approach, UML, and particularly a Use Case approach, seemed appropriate.

Genesis delivered just-in-time training in UML and the Use Case approach. Training was only given to staff who would immediately be entering the requirements gathering phase of a real business project. The training was not a traditional course; rather, it was hands-on and focused specifically on the particular domain being addressed by the project. In addition, at the beginning of this process, select Copeland staff were identified and asked to play the role of facilitators. Genesis provided mentoring throughout the requirements gathering process to ensure that progress was being made and the proper techniques were being utilized.

At the conclusion of this project, two critical achievements had been made. First, Copeland was confident that business and user needs would be addressed accurately on these projects. Second, Copeland now had a formal approach, along with staff who were experienced and had been successful utilizing it. Not surprisingly, project management found that by utilizing the artifacts resulting from this type of requirements gathering process, they were much more accurate at estimating timeframes for the various phases of the project lifecycle.

The biggest challenges associated with leveraging this new, formal approach to requirements gathering were related to training staff and to the fact that tools supporting the approach were just being released and were not as stable as hoped. With respect to helping Copeland staff successfully attack the learning curve, it was critical that training was done just before actual real-world work began. It was also critical that mentoring was available during the real-world application. The mentoring was front-loaded, but reviews needed to occur at periodic points in time.

Copeland also utilized a variety of modeling tools. None of these tools were without problems; the closer the inspection, the more problems were found. There was often a tendency to switch from one product to another, which was an issue, and corporate- or project-wide tool decisions seemed to drag on. The important thing was that, while not all staff utilized the same tools, a common methodology unified the staff. One of the first business projects to leverage the new approach and tools for gathering requirements and performing proper analysis via UML was the Intranet-based customer service workstation.

Supporting Intranet application development with Java and CORBA

One of the most immediate business needs was the requirement to quickly release a new application to support in-house customer service representatives (CSRs). This application is known internally as the customer service workstation (CSW). CSW would allow CSRs to service the accounts of plan participants. Typical interactions might include balance inquiry, transfer of funds, allocation changes, re-balancing of accounts, loan inquiry, etc. It was decided that CSW would attempt to leverage open systems platforms, the UML approach, and component technology.

Specifically, the user interface portion of the application would be deployed on Windows NT and Windows 95. It would be developed in Java and utilize CORBA to gain access to a middle-tier application server. The application server would be developed in Java and would initially be deployed on NT. The application server would utilize JDBC to access legacy DB2 tables and stored procedures hosted on the AS400. Since the application server was developed in Java, it would be possible, at a later date, to re-deploy the middle tier on UNIX or even on the AS400. The high-level application architecture for CSW includes the following types of generic objects:

The objects listed in the table above are described generically. Based on the business requirements, a traditional business object model was created; then the associated Context objects and the supporting Source, Table, and/or Queue objects would be modeled. Once the Use Cases and Scenarios were created, the Façade and Controller objects would be modeled. The traditional business object model included objects such as Account, Product, MoneySource, Transaction, OutstandingLoan, ModeledLoan, etc. See figure 1.

These objects are all designated as Context Objects. Since Context Objects are in fact CORBA objects, they were first defined in OMG IDL and then implemented in Java. The Context Object implementations would be deployed within the application server. In an effort to keep the front-end application simpler and to reduce network traffic between the front-end and the application server, Context Objects are not utilized directly by the front-end application. Instead, Façade objects were introduced to the application architecture. This ensured that front-end applications would be shielded from some of the complexity of the business object model and that minimal network operations would be performed. Even though the Context objects are not currently accessed across a network, Copeland is comfortable knowing that their business objects can be accessed across a network via the IIOP standard.


Gamma et al. define the Façade pattern as follows: “[the façade pattern] provides a unified interface to a set of interfaces in a subsystem. Façade defines a higher-level interface that makes the subsystem easier to use.” When modeling the Façade layer, a new Façade was created for each Use Case. The Façade would support all of the scenarios associated with the Use Case. Each Façade would be implemented using the underlying set of Context Objects (Business Objects). Being CORBA objects, Façades would also be defined in OMG IDL and implemented in Java. The Façade objects would also be deployed within the application server. This ensured that all of the calls between the Façade and the Context objects would be optimized to avoid TCP/IP. Calls to the Façade would be made across IIOP (and thus TCP/IP), but these calls would be defined so that repeated distributed calls could be avoided and network performance would not be an issue. In a sense, the Context Object model is a pure object model with no specialization for a particular usage or deployment scenario. The Façade Object model is presented as the programmer’s model and is optimized for a particular usage or deployment scenario. See figure 1. Context Objects would be re-used as new Use Cases were introduced and new Façade Objects implemented.
For example, figure 1 shows an Account object, which is related to many MoneySource objects. The MoneySource object is used to differentiate employee contributions from employer contributions. Each of the valid MoneySource objects is associated with one or more Transaction objects. When the front-end application needs to access transaction history information, it deals only with the TransactionHistory Façade. By doing this, the front-end is shielded from the details associated with the Account, MoneySource, and Transaction objects.
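The Context/Façade layering described above can be sketched in plain Java. The class names (Account, MoneySource, Transaction, TransactionHistory Façade) follow the article, but the implementations below are hypothetical; in the real system these types would be defined in OMG IDL, compiled to Java, and accessed over IIOP, all of which is omitted here.

```java
import java.util.ArrayList;
import java.util.List;

// Context objects: a pure business object model, no deployment specialization.
class Transaction {
    final String date;
    final double amount;
    Transaction(String date, double amount) { this.date = date; this.amount = amount; }
}

class MoneySource {   // differentiates employee from employer contributions
    final String type;
    final List<Transaction> transactions = new ArrayList<>();
    MoneySource(String type) { this.type = type; }
}

class Account {
    final List<MoneySource> sources = new ArrayList<>();
}

// The Facade flattens the object graph into simple rows so the front-end
// never navigates Account -> MoneySource -> Transaction itself, and one
// remote call can return the entire history.
class TransactionHistoryFacade {
    private final Account account;
    TransactionHistoryFacade(Account account) { this.account = account; }

    List<String> history() {
        List<String> rows = new ArrayList<>();
        for (MoneySource ms : account.sources)
            for (Transaction t : ms.transactions)
                rows.add(ms.type + "," + t.date + "," + t.amount);
        return rows;
    }
}
```

The design payoff is that a new Use Case adds a new Façade while re-using the same Context classes unchanged.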

Front-end applications would implement a Controller object in Java for each Use Case. Each Controller object would acquire CORBA references to any Façades it needed via a local Java class known as the FacadeDispenser. The FacadeDispenser class ensures that even if two different controllers need the same type of Façade, both controllers refer to the same Façade object. When the front-end application starts up, the FacadeDispenser would ask the application server to explicitly construct a new Façade object for each Use Case. This means that the application server has one Façade, of each type, for every front-end application executing. A thread-per-object policy is used to ensure that clients receive proper server responsiveness without compromising concurrency. Since the Copeland call center has fewer than 100 CSRs, this architecture does not currently present a scalability problem.
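The sharing guarantee the FacadeDispenser provides can be sketched as a small caching class. This is a local, simplified stand-in: in the real system the factory call would be a CORBA request asking the application server to construct the Façade, and the method names here are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Hands out facade references; two controllers asking for the same facade
// type always receive the same instance.
class FacadeDispenser {
    private final Map<String, Object> cache = new HashMap<>();

    // The Supplier stands in for the remote "construct a new Facade on the
    // application server" call made at front-end startup.
    synchronized Object dispense(String facadeType, Supplier<Object> factory) {
        return cache.computeIfAbsent(facadeType, k -> factory.get());
    }
}
```

A controller would simply call `dispense("TransactionHistory", ...)` and never care whether the reference was freshly constructed or shared.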
Context objects are business objects and must interact with the legacy systems in order to support their behavior. For example, Transaction contexts, used so that CSRs can view historical participant interactions, obtain their information from a DB2 table located at Copeland in New Jersey. ModeledLoan contexts, on the other hand, obtain their information from a series of MQ interactions, which ultimately result in access to IMS, located at Travelers in Connecticut. In an effort to isolate Context objects from the specifics of calculated data and the location of raw underlying data, several other generic objects were introduced to the application architecture. The first object introduced was the Source object. Each Context object has an associated Source object. The purpose of the Source object is to provide the Context with its data, while isolating it from the details regarding that data. The Source object would in turn perform the specific calculations, conversions, and transformations, and would leverage several other architectural objects used to obtain legacy data. These additional architectural objects are known as Table and Queue objects. The Source object would utilize any underlying Table or Queue objects it requires in order to provide the Context object with its legacy data. See figure 2 and figure 3.

Note that the Account Source will leverage a number of Table objects and a Queue object. Some Source objects, such as the TransactionSource object, would only use one Table object. Table objects are developed with JDBC and are specific to the legacy DB2 tables at Copeland. Queue objects interface to legacy information via MQ. The Queue objects are developed using an infrastructure service developed at Copeland, known as the MQ Data Access Service. A more detailed discussion of this service follows later in this article.
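A minimal sketch of the Source/Table layering may help: the Context asks its Source for data, and the Source delegates to a Table object that, in the real system, would run JDBC queries against the legacy DB2 tables on the AS400. To keep the sketch self-contained, the JDBC call is replaced by an in-memory stub, and all method names are hypothetical.

```java
import java.util.Collections;
import java.util.List;

// A Table object is specific to one legacy DB2 table; the real
// implementation would wrap JDBC (Connection/PreparedStatement/ResultSet).
interface Table {
    List<String[]> rows(String key);
}

class TransactionTable implements Table {
    public List<String[]> rows(String accountId) {
        // Stand-in for something like:
        //   SELECT txn_date, amount FROM TXN WHERE acct = ?
        return Collections.singletonList(new String[] {"1998-01-15", "250.00"});
    }
}

// The Source isolates the Context from where its data lives and how it is
// obtained; calculations and conversions would also happen at this layer.
class TransactionSource {
    private final Table table;
    TransactionSource(Table table) { this.table = table; }

    List<String[]> transactionsFor(String accountId) {
        return table.rows(accountId);
    }
}
```

Swapping the Table for a Queue object (MQ-based) would leave the Context and Source interfaces untouched, which is the point of the extra layer.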



Supporting Internet application development with Java and CORBA

Along with the need to quickly release an Intranet-based application to support CSRs, Copeland wanted to deploy an Internet-based application that would allow its clients to directly make inquiries and modifications to their accounts. Java applets downloaded via the Internet and executed within browsers seemed like a perfect approach. While the actual functionality was not identical to that of the CSW application, there was a lot of overlap. More accurately, the Internet application would be a rough subset of the functionality provided by CSW. It seemed that CORBA would be a perfect mechanism to allow the Java applets to access objects located in an application server. These ideas would form the initial approach to developing applications for Internet deployment scenarios.

Use Case analysis was performed and sets of Façade objects were designed. These Façade objects would leverage the same set of Context objects utilized within the CSW application. In some cases, additional methods were added to the existing Context objects. While some Context methods were only utilized by one of the applications, a great deal of reuse was achieved at the Context/Source/Table level.
The application was developed, tested, and ultimately deployed. Once deployment began, a set of issues related to Java, CORBA, and the Internet began to bubble up. While a significant percentage of Copeland clients were very happy with the new application, many were either not happy with certain details or were unable to execute the application. Let us take a closer look at the types of issues involved:

1. Some clients were using dial-up lines and were not able to download Java applets in a reasonable amount of time.
2. Compatibility problems did not allow the Java applet to run identically on Netscape and Microsoft web browsers.
3. Some clients were not using web browsers that supported Java.
4. Some clients’ corporate Internet policies did not allow the execution of externally developed Java applets.
5. Some clients’ corporate Internet policies did not allow IIOP to be used across their firewalls.
6. Some clients’ corporate Internet policies did not allow HTTP tunneling to be used across their firewalls.

While the majority of clients were very happy with the new application, the above issues kept a significant number of clients from leveraging it. This caused Copeland to revisit alternatives to the Java applet model. Copeland decided that it needed to deploy a pure HTML-based Internet application. While the HTML-based application would not support usability features as rich, it would provide all clients with direct access to their plans. It was envisioned that the pure HTML-based Internet application would be deployed alongside the recently developed Java applet-based Internet application.

Leveraging Application Server Technology for Internet application development

Now that Copeland had decided to pursue the development of a pure HTML-based Internet application, the question was how Copeland would allow existing code to be leveraged while still providing a pure HTML delivery mechanism. Copeland had a number of options:

1. A CGI based solution
2. A Dynamic HTML based solution
3. An NSAPI based solution
4. An Application Server solution

The key to a successful approach would be one that allowed quick development of the HTML-based front-ends and also allowed the existing Façades (or perhaps Context objects) to be directly leveraged. Copeland evaluated a number of solutions and, based on its ability to directly address these requirements, decided on the NetDynamics product.

The NetDynamics product provides an environment for developing and deploying Internet-based applications. It allows components to be developed and deployed within the NetDynamics Application Server. Traditionally, components are developed within the product, relying on NetDynamics to provide native RDBMS transactional support. In Copeland’s case, this capability would only be utilized for storing temporary session-based information, since the existing Context objects already supported access to their persistent data. Copeland would leverage critical features such as automatic load balancing, security, and high availability. Front-ends can be deployed as either Java applets or pure HTML applications; Copeland is currently only utilizing the HTML-based variety.
The final critical feature is the tool’s Platform Adapter Component SDK, referred to as the PAC SDK. The PAC SDK is a server-side kit that would enable the existing Copeland Façades to be deployed as plug-and-play components in the NetDynamics Application Server. See figure 4. These server objects, known as PACs, would support browsing via the NetDynamics wizards. This would make it easy to build applications that needed access to the set of procedures offered by the PAC. Some of the benefits of the PAC SDK are listed below:


Development Benefits of the PAC
* API of external system is visible in the NetDynamics Studio
* Visual development support using wizards and editors
Management Benefits of the PAC
* Automatic integration of the PAC into the NetDynamics Command Center
* Support for real-time monitoring, statistics, logging, and parameter configuration of the PAC

While NetDynamics supported the key critical features (HTML-based applications and the PAC SDK for interfacing to existing components), there were some problems associated with its usage. Even though NetDynamics is built around the concept of components and was in fact developed on top of a CORBA-based infrastructure, the PAC SDK is not object oriented but procedurally oriented. The product comes from a database orientation, and the PAC was originally seen as a way to access databases or services not directly supported. PAC objects basically provide a set of procedures. They are inherently stateless and can only return data, as opposed to references to other PACs or components. If we look at the Context objects and the Façade objects, we see the same type of distinction. The Context objects are truly object oriented; they support methods that accept and return other Context objects. The Façade objects are more procedural; they accept and return parameters that are basic types. Based on the design of the PAC SDK, the PACs could only support Façade objects. If the PAC were object oriented, then Copeland could develop PACs for the Context objects, which would eliminate an additional layer from the architecture.
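The procedural-versus-object distinction can be illustrated with a small sketch. The actual NetDynamics PAC SDK API is not shown; the class below merely mimics its constraints (stateless procedures, basic types in and out), and all names and the stubbed balance value are hypothetical.

```java
// Facade-style object: lives in the application server, accepts and returns
// basic types, backed by the Context objects (stubbed here).
class BalanceFacade {
    double balance(String accountId) {
        return 1234.56;   // stand-in for a lookup through the Context layer
    }
}

// PAC-style wrapper: a bag of stateless procedures. It can only return data,
// never a reference to another component -- which is why only the Facade
// layer, not the Context layer, could sit behind a PAC.
class BalancePac {
    static String getBalance(String accountId) {
        return String.valueOf(new BalanceFacade().balance(accountId));
    }
}
```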

Leveraging a Distributed Component Infrastructure

While the application architecture utilized within the CSW and Internet applications was successful in meeting its goals, it really only defines a development approach or a set of best practices. Developers still have many decisions to make when designing typical application elements. Often different developers make different choices and end up re-implementing particular aspects of the system over and over again. While Context objects have been shown to be reusable, there was almost no code that is leveraged over and over. Reusable infrastructure would ideally exist for the typical developer tasks. Some examples of potential services would include the following:

1. Application Services
2. Meta-Data Services
3. Relational Data Services
4. Security Services
5. Logging Services
6. Message Queueing Data Services

Since we are operating in a distributed manner, the assumption is that all of these services would be accessible in a distributed fashion. Application services would include functionality related to starting, stopping, and managing middle-tier servers. Meta-data services are related to a repository of data used to control the behavior of middle-tier servers, objects, and other services. The meta-data (or repository) service defines two critical pieces of information: the bits of information needed by servers, objects, or services, and the format of those bits of information. Copeland has a design and proof of concept developed for both of these services.
Relational Data services are used by objects that need to access data managed by an RDBMS. Copeland is still in an exploratory stage with respect to these relational data services, which can take several different forms. Copeland is evaluating a simplified layer above JDBC, a more abstract persistent data service based on dynamic information obtained via the meta-data service, and a persistence manager approach based on the Observer pattern. While several designs and proofs of concept have been developed, these services are not yet being leveraged. Copeland has preliminary design specifications in place for both security and logging services.

The Message Queuing Data Service (MQDS) consists of several interfaces which provide a CORBA-compliant interface to MQSeries interactions. It allows two bi-directional asynchronous MQSeries interactions to be presented as a single CORBA-compliant request/response. The MQDS also supports timer-based caching of duplicate MQDS requests, ensuring that redundant MQSeries requests can be eliminated. The MQDS hides the complexities and specifics of MQSeries behind a simple interface that provides business objects with a straightforward mechanism for sending and retrieving information via MQSeries. These interfaces are MQRegistry, MQAccess, MQParser, and MQPersist. Please see figure 5.


MQAccess administers this package, providing access to the external world and coordinating the interaction between the components. MQRegistry provides a series of operations that allow for the retrieval of hierarchical information; in this initial release, the transaction variables will be created by editing a text file. MQParser is a lightweight class which uses the message structure received from the MQRegistry service to create Name Value Pair objects. MQPersist stores the Name Value Pair objects in a hash table via a transaction identifier, which is formulated at the start of the transaction by the MQAccess package. MQPersist also provides a method to remove the entry once it is no longer needed. The MQDS service is currently being used successfully by the CSW application, and additional applications are expected to leverage this service should they need access to MQSeries.
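The MQParser/MQPersist interplay described above can be sketched as follows: the parser turns a raw MQ message into name/value pairs, and MQPersist files them in a hash table under a transaction identifier so the matching response can be correlated and later discarded. The flat "NAME=VALUE;..." message format and all method signatures here are hypothetical; the real MQDS interfaces are defined in OMG IDL and driven by message structures from MQRegistry.

```java
import java.util.Hashtable;
import java.util.LinkedHashMap;
import java.util.Map;

// Parses a raw message into name/value pairs. The real MQParser would use
// the message structure retrieved from MQRegistry instead of a fixed format.
class MQParser {
    static Map<String, String> parse(String message) {
        Map<String, String> pairs = new LinkedHashMap<>();
        for (String field : message.split(";")) {
            String[] nv = field.split("=", 2);
            if (nv.length == 2) pairs.put(nv[0], nv[1]);
        }
        return pairs;
    }
}

// Stores parsed pairs keyed by the transaction identifier that MQAccess
// formulates at the start of the transaction.
class MQPersist {
    private final Hashtable<String, Map<String, String>> store = new Hashtable<>();

    void put(String txnId, Map<String, String> pairs) { store.put(txnId, pairs); }
    Map<String, String> get(String txnId) { return store.get(txnId); }
    void remove(String txnId) { store.remove(txnId); }   // entry no longer needed
}
```

Correlating the asynchronous send and receive through one transaction identifier is what lets MQDS present the pair of MQSeries interactions as a single request/response to its CORBA clients.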


Overall, the transition at Copeland has proceeded very well. Copeland has moved from an organization developing RPG-based AS400 applications to an organization capable of leveraging UML, a formal component development approach, the Java programming language, and CORBA-based communications for developing new systems. Copeland staff has grown significantly in its ability to design, develop, and manage the delivery of component-based systems. Copeland has also begun the work associated with developing reusable infrastructure elements. These elements will ensure that future development efforts can be delivered faster and more reliably by eliminating the need to redevelop functionality required by different business projects.

In terms of specific business projects, the CSW application has been deployed and extended several times. Business goals have been met, and the system is being positioned as a template for future development efforts. The Internet project has also been deployed. While its underlying design has been modified over time, it is now meeting all of the critical customer requirements. The Internet and CSW applications share a common set of business objects (Context objects). Over time, additional applications will also leverage this common set of business objects; current candidates include VRU applications. The fact that Copeland is achieving some level of reuse at the business object level indicates the success of its transition to component systems. While organizational change has been needed to successfully manage shared objects, the benefits of shorter development lifecycles are starting to be seen.

While the transition at Copeland is moving forward successfully, it is not yet complete. Copeland is still in the process of developing a reusable infrastructure. Additional Copeland staff still need to increase their component development skill sets. While many of the project managers at Copeland have become quite experienced with managing component-based projects, more need to gain skills in these areas. As additional business projects move toward a component-based approach, more staff will become exposed to the various aspects of component development. Additional mentoring will ensure that Copeland completes the transition in the shortest amount of time possible.
