Most projects must be delivered and put into operation under certain constraints. Traditionally, these constraints have been listed as "scope", "time" and "cost", frequently referred to as the "project management triangle", where each side represents one of the constraints.

The time constraint refers to the amount of time available to complete a project; the cost constraint refers to the budgeted amount available for the project; and the scope constraint refers to what must be done to produce the project's end result.

The basic idea is that one side of the triangle cannot be changed without affecting the others. These three constraints often compete: increased scope typically means increased time and increased cost; a tight time constraint can mean increased costs and reduced scope; and a tight budget can mean increased time and reduced scope.

In this respect, the project management discipline is about providing tools and techniques that enable the team to meet the constraints.

QuantPool refers to High Performance Computing (HPC) as the application of supercomputers, GPU servers and workstations, as well as distributed supercomputing (grid computing systems), to financial and scientific problems such as pricing, investment strategy, portfolio and risk management, execution and performance evaluation, and modeling and simulation.

QuantPool has extensive experience with GPU computer cluster drivers, MATLAB with Jacket, and CUDA C/C++ programming, which greatly reduces CUDA kernel overhead on Windows. In both cases, remote desktops can access services (SOA). GPU clusters nowadays also ship with monitoring software; for example, GPU temperature, fan speed and ECC information are available via the nvidia-smi tool. QuantPool works with several cluster management software and hardware providers who support GPU-based systems.

For example, NVIDIA's GPUDirect technology enables faster communication between the GPU and other devices on the PCIe bus by removing unnecessary overhead on the CPU. GPUDirect v2.0 allows third-party device drivers, e.g. for InfiniBand adaptors, to communicate directly with the CUDA driver, eliminating the overhead of copying data around on the CPU, and enables peer-to-peer (P2P) communication between GPUs in the same system, again avoiding additional CPU overhead.

Distributed supercomputing is an architecture in which a grid computing system connects many personal computers over the Internet. This is an opportunistic form of supercomputing, in which a "virtual supercomputer" of loosely coupled volunteer computing machines performs very large computing tasks. Grid computing has been applied to a number of large-scale embarrassingly parallel problems that require supercomputing performance scales. However, basic grid and cloud computing approaches that rely on volunteer computing cannot handle all supercomputing tasks.

Quasi-opportunistic supercomputing is a form of distributed computing in which a large number of networked, geographically dispersed computers perform demanding computing tasks. Quasi-opportunistic supercomputing provides a higher quality of service than opportunistic grid computing by achieving more control over the assignment of tasks to distributed resources and by using intelligence about the availability and reliability of individual systems within the supercomputing network.

However, quasi-opportunistic distributed execution of demanding parallel computing software in grids requires the implementation of grid-wise allocation agreements, co-allocation subsystems, communication topology-aware allocation mechanisms, fault-tolerant message passing libraries and data pre-conditioning.

Most QuantPool projects involve recent object-oriented programming languages, such as Visual Basic .NET (VB.NET) and C# for Microsoft's .NET platform, and Java, developed by Sun Microsystems. We find that the Microsoft and Java platforms provide OOP benefits in their own ways. VB.NET and C# support cross-language inheritance, allowing classes defined in one language to subclass classes defined in the other. Our developers usually compile Java to bytecode, allowing Java to run on any operating system for which a Java virtual machine is available, while VB.NET and C# make use of the Strategy pattern to accomplish cross-language inheritance (whereas Java makes use of the Adapter pattern).
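To make the Adapter pattern mentioned above concrete, the following is a minimal Java sketch. The class and method names (RateService, LegacyRateSource, RateAdapter) are invented for illustration and are not part of any QuantPool codebase; the point is only how an adapter lets client code written against one interface reuse a class with an incompatible one.

```java
// Target interface the client code is written against.
interface RateService {
    double rate(String pair); // rate as a decimal fraction
}

// Existing class with an incompatible interface (returns basis points).
class LegacyRateSource {
    int rateInBps(String pair) {
        return "EURUSD".equals(pair) ? 125 : 0;
    }
}

// Adapter: translates the legacy interface into the target one.
class RateAdapter implements RateService {
    private final LegacyRateSource source;

    RateAdapter(LegacyRateSource source) {
        this.source = source;
    }

    @Override
    public double rate(String pair) {
        return source.rateInBps(pair) / 10_000.0; // bps -> fraction
    }
}

public class AdapterDemo {
    public static void main(String[] args) {
        // The client only sees RateService; the legacy class is hidden.
        RateService svc = new RateAdapter(new LegacyRateSource());
        System.out.println(svc.rate("EURUSD")); // prints 0.0125
    }
}
```

The legacy class is left untouched; all translation lives in the adapter, which is what makes this pattern useful for wrapping code that cannot be modified.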

Projects that involve object-oriented programming usually contain different types of objects, each corresponding to a particular kind of complex data to be managed, or perhaps to a real-world object or concept such as a bank account. To design financial software, advanced OOP features such as data abstraction, encapsulation, messaging, modularity, polymorphism and inheritance are used.
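The bank-account example above can be sketched in Java to show three of the listed features: encapsulation (private state changed only through methods), inheritance and polymorphism. The classes and numbers are purely illustrative.

```java
// Encapsulation: the balance is private and mutated only via methods.
class BankAccount {
    private double balance;

    void deposit(double amount) {
        if (amount > 0) balance += amount;
    }

    double balance() {
        return balance;
    }
}

// Inheritance: a savings account extends the base behaviour.
class SavingsAccount extends BankAccount {
    private final double ratePerPeriod;

    SavingsAccount(double ratePerPeriod) {
        this.ratePerPeriod = ratePerPeriod;
    }

    // Adds one period of interest via the inherited deposit method.
    void accrueInterest() {
        deposit(balance() * ratePerPeriod);
    }
}

public class AccountDemo {
    public static void main(String[] args) {
        // Polymorphism: both objects are handled through the base type.
        BankAccount[] accounts = { new BankAccount(), new SavingsAccount(0.05) };
        for (BankAccount a : accounts) a.deposit(100.0);
        ((SavingsAccount) accounts[1]).accrueInterest();
        System.out.println(accounts[0].balance()); // 100.0
        System.out.println(accounts[1].balance()); // 105.0
    }
}
```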

Just as procedural programming led to refinements such as structured programming, object-oriented programming and software design methods led to refinements such as the use of design patterns, design by contract, and modeling languages (such as UML). Object-oriented features have been added to many existing languages over time, including Ada, BASIC, Fortran, MATLAB and Pascal. We find that adding these features to languages that were not initially designed for them often leads to problems with the compatibility and maintainability of code.

The early ideas of OOP influenced many later languages, including Smalltalk, derivatives of LISP (CLOS), Object Pascal, and C++. Object-oriented programming developed as the dominant programming methodology in the early and mid 1990s when programming languages supporting the techniques became widely available. These included C++ and Delphi. The dominance was further enhanced by the rising popularity of graphical user interfaces, which rely heavily upon object-oriented programming techniques. An example of a closely related dynamic GUI library and OOP language can be found in the Cocoa frameworks on Mac OS X, written in Objective-C, an object-oriented, dynamic messaging extension to C based on Smalltalk.

Service-Oriented Architecture (SOA) is a set of engineering principles and methodologies for designing and developing interoperable software services. These services are well defined in the sense that service consumers and services communicate by passing data in well-defined formats. This lets developers combine and reuse existing functions, and allows selected users to access services over a network.

In some respects, one can regard SOA as an architectural evolution rather than a revolution. It captures many of the best practices of previous software architectures. As of 2008, increasing numbers of third-party software companies offer software services for a fee. In the future, SOA systems may consist of third-party services combined with in-house services. This promotes standardization within and across industries, and spreads the cost of shared resources over several users. In this respect, SOA revives concepts like modular programming (1970s), event-oriented design (1980s) and interface/component-based design (1990s), and promotes the goal of separating users/consumers from the service implementations. Services can run on multiple distributed platforms and be accessed across networks, which also maximizes the return on investment.

For SOA to operate, no interactions must be embedded within the services themselves or hard-wired between them. Instead, humans specify the interaction of services (all of them unassociated peers) in a relatively ad hoc way, with the intent driven by newly emergent requirements. Hence the need for services to be much larger units of functionality than traditional functions or classes, lest the sheer complexity of thousands of such granular objects overwhelm the application designer. Programmers develop the services themselves using traditional languages such as Java, C, C++, C#, Visual Basic, COBOL or PHP. Services may also be wrappers around existing legacy systems, allowing the re-facing of old systems.

SOA realizes its business and IT benefits by utilizing an analysis and design methodology when creating services. This methodology ensures that services remain consistent with the architectural vision and roadmap and that they adhere to principles of service-orientation. Arguments supporting the business and management aspects of SOA are outlined in various publications.

A service comprises a stand-alone unit of functionality available only via a formally defined interface. Services can be "nano-enterprises" that are easy to produce and improve, or "mega-corporations" constructed from the coordinated work of subordinate services. Services generally adhere to the following principles of service-orientation:

  1. abstraction, autonomy, composability, discoverability, formal contract, loose coupling, reusability and statelessness.
  2. A mature rollout of SOA effectively defines the API of an organization.
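The "mega-corporation" idea above, a service constructed from the coordinated work of subordinate services, can be sketched in Java. The service names and the fee arithmetic are invented for illustration; the point is that the composite exposes one interface while delegating to smaller stand-alone services.

```java
import java.util.function.*;

// Two subordinate services, each a small stand-alone unit of functionality.
interface PricingService {
    double price(String symbol);
}

interface FeeService {
    double fee(double notional);
}

// Composite ("mega-corporation") service: coordinates its subordinates
// behind a single, larger-grained operation.
class OrderCostService {
    private final PricingService pricing;
    private final FeeService fees;

    OrderCostService(PricingService pricing, FeeService fees) {
        this.pricing = pricing;
        this.fees = fees;
    }

    // total cost = quantity * price, plus the fee on that notional
    double totalCost(String symbol, int quantity) {
        double notional = quantity * pricing.price(symbol);
        return notional + fees.fee(notional);
    }
}

public class CompositeDemo {
    public static void main(String[] args) {
        OrderCostService svc = new OrderCostService(
                symbol -> 10.0,               // stub pricing service
                notional -> notional * 0.01); // stub 1% fee service
        System.out.println(svc.totalCost("ACME", 5)); // 50 + 0.5 = 50.5
    }
}
```

Because both subordinate interfaces have a single abstract method, stubs can be supplied as lambdas, which also makes the composite easy to test in isolation.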

Reasons for treating the implementation of services as separate projects from larger projects include:

Separation promotes the concept to the business that services can be delivered quickly and independently of the larger, slower-moving projects common in the organization. The business starts to understand systems and simplified user interfaces calling on services. This promotes agility; that is, it fosters business innovation and speeds up time-to-market.

Documentation and test artifacts of the service are not embedded within the detail of the larger project. This is important when the service needs to be reused later. An indirect benefit of SOA is dramatically simplified testing: services are autonomous and stateless, with fully documented interfaces, and separate from the cross-cutting concerns of the implementation.

Straight-through processing (STP) enables an efficient use of computers for processing capital market and payment transactions without manual intervention. However, for this to be achieved, multiple market participants must realize high levels of STP, such that transaction data is made available on a just-in-time basis.

Historically, STP helped financial firms move to one-day settlement of equity trades to meet the demand resulting from the growth of online trading. Now STP is also used to minimize operational costs, to reduce systemic and operational risks, and to improve certainty of settlement. One benefit is to minimize execution settlement risk and enable market risk management practices.

Today the financial sector tends to view STP as meaning 'same-day' settlement processing, ideally within minutes or seconds. This is achieved through the emergence of business process interoperability (such as SOA). It is therefore fair to say that SOA and STP can shorten processing cycles for asset managers, brokers and dealers, custodians, banks and other financial sector participants, as well as reduce settlement risk and lower operating costs.
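As a rough sketch (not any specific firm's system), straight-through processing can be modelled as a chain of automated steps that a transaction passes through with no manual touch points. The field names and three-step pipeline here are hypothetical simplifications of real validate/enrich/settle stages.

```java
import java.util.List;
import java.util.function.UnaryOperator;

// A trade record; fields are illustrative only.
final class Trade {
    final String id;
    final boolean validated;
    final boolean enriched;
    final boolean settled;

    Trade(String id, boolean validated, boolean enriched, boolean settled) {
        this.id = id;
        this.validated = validated;
        this.enriched = enriched;
        this.settled = settled;
    }
}

class StpPipeline {
    // Each step is an automated transformation; chaining them end to end
    // with no human intervention is the "straight-through" part.
    private static final List<UnaryOperator<Trade>> STEPS = List.of(
            t -> new Trade(t.id, true, t.enriched, t.settled),  // validate
            t -> new Trade(t.id, t.validated, true, t.settled), // enrich
            t -> new Trade(t.id, t.validated, t.enriched, true) // settle
    );

    static Trade process(Trade t) {
        for (UnaryOperator<Trade> step : STEPS) t = step.apply(t);
        return t;
    }

    public static void main(String[] args) {
        Trade done = process(new Trade("T-1", false, false, false));
        System.out.println(done.settled); // true
    }
}
```

In a real system each step would be a separate service (often reached over a message bus), which is where the SOA and STP ideas meet.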

Certain capital market and payment transactions, from initiation to settlement, are complex manual processes taking several days and are subject to legal and regulatory restrictions. Therefore, 100% STP automation is not always an achievable objective for all firms. Instead, such firms can promote high levels of STP within the firm, while encouraging other firms to participate and improve the automation of transactions, either bilaterally or as a community.

Our project managers will provide clients with a work breakdown structure (WBS), which provides a framework for the planning and control of the project and serves as the basis for dividing work into definable increments, from which work statements can be developed and technical, schedule, cost and labor-hour reporting can be established.

The WBS is illustrated as a tree structure that shows the subdivisions required to achieve the project objective, such as a service contract or program routine. The WBS is usually service- or process-oriented, but can also be hardware-oriented.

The WBS is developed by starting at the end objective and subdividing the work backwards in terms of size, duration and responsibility (for example, into systems, subsystems, components, tasks, subtasks and work packages) until all steps necessary to achieve the objective have been included.
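The top-down subdivision described above can be sketched as a simple tree in Java, where the labor hours of each element roll up from its children. The project structure and hour figures are invented for illustration, not taken from a real WBS.

```java
import java.util.ArrayList;
import java.util.List;

// One WBS element: either a leaf work package with its own hours,
// or an intermediate node whose hours roll up from its children.
class WbsNode {
    final String name;
    final double ownHours; // nonzero only for leaf work packages
    final List<WbsNode> children = new ArrayList<>();

    WbsNode(String name, double ownHours) {
        this.name = name;
        this.ownHours = ownHours;
    }

    WbsNode add(WbsNode child) {
        children.add(child);
        return this;
    }

    // Total labor hours for this element's entire subtree.
    double totalHours() {
        double total = ownHours;
        for (WbsNode c : children) total += c.totalHours();
        return total;
    }
}

public class WbsDemo {
    public static void main(String[] args) {
        WbsNode project = new WbsNode("Pricing system", 0)
                .add(new WbsNode("Subsystem: data feed", 0)
                        .add(new WbsNode("Work package: parser", 80))
                        .add(new WbsNode("Work package: storage", 40)))
                .add(new WbsNode("Subsystem: pricing engine", 0)
                        .add(new WbsNode("Work package: model", 120)));
        System.out.println(project.totalHours()); // 240.0
    }
}
```

The same rollup idea extends to cost and schedule reporting: each reporting quantity is attached to the work packages and aggregated upward through the tree.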

Successful quantitative projects integrate project management and system development activities with the directly associated operational activities. QuantPool will typically describe this in an overall Project Management Framework, which also covers investment management activities and the project budget. A Project Management Framework diagram is usually used to illustrate this, using closing milestones after system deployment.

With QuantPool, project closing activities continue through system deployment activities into operational activities for the purpose of illustrating, describing and documenting the system. QuantPool finds closing milestones within the project management framework important, because they ensure that routines function appropriately and are appropriately documented. Several figures are used to illustrate the actions and artifacts of the program management process.


Huanxiao Zhang
Project Manager
+86 13826586703
hzhang@quantpool.com


Wei Ni
Project Manager
+86 18922903932
nwei@quantpool.com