- October 3
Speaker: James Skene
Title: What's going on with the MDA?
Abstract: The MDA, as anybody will tell you, represents the next evolutionary step in computer programming and software engineering, promising an elevation in the level of abstraction at which engineers work similar to that realised when moving from assembly languages to structured programming. However, several challenges remain, not the least of which is determining what the MDA actually is. In this seminar I'll try to explain this, and identify the technologies that are required to 'do MDA'. I'll discuss the state of the art, and identify the challenges, which I think are primarily to do with managing consistency in projects. I will then propose an innovation that I think could help. I'll talk about how large amounts of automated support for MDA could change the way software is developed, and I will even hint at what I think will come after the MDA, a concept that I call 'no-click software engineering'.
- October 27
Speaker: Vivien Quema
Title: DREAM: a Component Framework for the Construction of Resource-Aware, Dynamically Configurable Communication Middleware
Abstract: In this talk, we present the work we are conducting at INRIA Rhône-Alpes on the design of a component-based framework for the construction of autonomous systems. Modern distributed computing systems are becoming increasingly complex. A major trend currently is to build autonomous systems, i.e. systems that reconfigure themselves upon occurrence of events such as software and hardware faults, performance degradation, etc. Building autonomous systems requires both a software technology allowing the development of administrable systems and the ability to build control loops in charge of regulating and optimizing the behavior of the managed system.
In this talk, we will mainly focus on the first requirement, i.e. providing a software technology for the development of administrable systems. We argue that better configurability can be reached through the use of component-based software frameworks. In particular, we present DREAM, a software framework for the construction of message-oriented middleware (MOMs).
Several MOMs have been developed in the past ten years. The research work has primarily focused on the support of various non-functional properties like message ordering, reliability, security, scalability, etc. Less emphasis has been placed on MOM configurability. From the functional point of view, existing MOMs implement a fixed programming interface (API) that provides a fixed subset of asynchronous communication models (publish/subscribe, event/reaction, message queues, etc.). From the non-functional point of view, existing MOMs often provide the same non-functional properties for all message exchanges, which reduces their performance.
To overcome these limitations, we have developed DREAM (Dynamic REflective Asynchronous Middleware), a component framework for the construction of dynamically reconfigurable communication systems. The idea is to build a middleware as an assembly of interacting components, which can be statically or dynamically configured to meet different design requirements or environment constraints. DREAM provides a component library and a set of tools to build, configure and deploy middleware implementing various communication paradigms. DREAM defines abstractions and provides tools for controlling the use of resources (i.e. messages and activities) within the middleware. Moreover, it builds upon the Fractal component model, which provides support for hierarchical and dynamic composition. DREAM has been successfully used for building various forms of communication middleware: publish-subscribe (JMS), total order group communication protocols, probabilistic broadcast, asynchronous RPC, etc.
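As a rough illustration of the assembly idea, the Java sketch below composes a message channel from a replaceable handler chain and reconfigures it at run time. The Handler and Channel names are inventions for this example only; they are not the DREAM or Fractal API.

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.function.UnaryOperator;

// Hypothetical sketch of component-based middleware assembly in the
// spirit of DREAM; names are illustrative, not the real API.
public class AssemblySketch {

    /** A component that transforms or routes messages. */
    interface Handler extends UnaryOperator<String> {}

    /** A channel assembled from a replaceable chain of handlers. */
    static class Channel {
        private final Queue<String> inbox = new ArrayDeque<>();
        private volatile Handler pipeline;           // reconfigurable at run time

        Channel(Handler pipeline) { this.pipeline = pipeline; }

        void publish(String msg) { inbox.add(pipeline.apply(msg)); }

        /** Dynamic reconfiguration: swap the handler chain in place. */
        void reconfigure(Handler newPipeline) { this.pipeline = newPipeline; }

        String deliver() { return inbox.poll(); }
    }

    public static void main(String[] args) {
        // Initial assembly: a plain stamping handler.
        Channel ch = new Channel(m -> "[plain] " + m);
        ch.publish("hello");

        // Reconfigure the running channel, e.g. to add ordering metadata.
        ch.reconfigure(m -> "[seq=1] " + m);
        ch.publish("world");

        System.out.println(ch.deliver()); // [plain] hello
        System.out.println(ch.deliver()); // [seq=1] world
    }
}
```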
- Monday November 14
Speaker: Genaina Nunes Rodrigues
Title: Reliability Prediction in Model-Driven Development
Abstract: Evaluating the implications of an architecture design early in the software development lifecycle is important in order to reduce costs of development. Reliability is an important concern with regard to the correct delivery of software system service. Recently, the UML Profile for Modeling Quality of Service has defined a set of UML extensions to represent dependability concerns (including reliability) and other non-functional requirements in early stages of the software development lifecycle. Our research has shown that these extensions are not comprehensive enough to support reliability analysis for model-driven software engineering, because the description of reliability characteristics in this profile lacks support for certain dynamic aspects that are essential in modeling reliability. In this work, we define a profile for reliability analysis by extending the UML 2.0 specification to support reliability prediction based on scenario specifications. A UML model specified using the profile is translated to a labelled transition system (LTS), which is used for automated reliability prediction and identification of implied scenarios; the results of this analysis are then fed back to the UML model. The result is a comprehensive framework for addressing software reliability modeling, including analysis and evolution of reliability predictions. We exemplify our approach using the Boiler System used in previous work and demonstrate how reliability analysis results can be integrated into UML models.
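To make the prediction step concrete, here is a minimal Java sketch that computes the probability of reaching a success state in a small probabilistic transition system by fixed-point iteration. The four-state model and its probabilities are invented for illustration; they are not the Boiler System model from the talk.

```java
// Reliability as the probability of reaching an absorbing success state
// in a probability-annotated transition system. Toy model, invented numbers.
public class ReliabilitySketch {

    public static void main(String[] args) {
        // States: 0 = start, 1 = retry, 2 = success (absorbing), 3 = failure (absorbing).
        double[][] p = {
            {0.0, 0.20, 0.75, 0.05},   // from start
            {0.0, 0.10, 0.80, 0.10},   // from retry
            {0.0, 0.00, 1.00, 0.00},   // success loops
            {0.0, 0.00, 0.00, 1.00},   // failure loops
        };
        System.out.printf("Predicted reliability: %.4f%n", reachSuccess(p, 2, 3));
    }

    /** Probability of eventually reaching `success` from state 0, by iteration. */
    static double reachSuccess(double[][] p, int success, int failure) {
        int n = p.length;
        double[] x = new double[n];
        x[success] = 1.0;                        // boundary conditions
        for (int iter = 0; iter < 10_000; iter++) {
            double[] next = x.clone();
            for (int i = 0; i < n; i++) {
                if (i == success || i == failure) continue;
                double sum = 0.0;
                for (int j = 0; j < n; j++) sum += p[i][j] * x[j];
                next[i] = sum;
            }
            x = next;
        }
        return x[0];
    }
}
```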
- Monday November 14
Speaker: Leticia DuBoc
Title: Scalability - A Matter of Stakeholder's Interest
Abstract: Scalability is a widely-used term in scientific papers, technical magazines, software descriptions and so on. Its use in the most varied contexts contributes to a general confusion about what the term really means. This lack of consensus is a potential source of problems, as assumptions are made in the face of a scalability claim. A clearer and widely-accepted understanding of scalability is required to restore the usefulness of the term. Essentially, we believe it should be accepted that scalability has many dimensions and is ultimately a matter of stakeholder's interest. This paper discusses commonly found definitions of scalability and attempts to capture its essence in a systematic framework. This contribution aims to assist software developers to reason about, characterize, communicate and adjust the scalability of software systems.
- Monday November 21
Speaker: Alessandro Garcia
Title: Modularizing and Composing Design Patterns with Aspects: A Quantitative Assessment
Abstract: Design patterns offer flexible solutions to common problems in software development. Recent studies have shown that several design patterns involve crosscutting concerns. Unfortunately, object-oriented (OO) abstractions are often not able to modularize those crosscutting concerns, which in turn decreases the composability, reusability, and maintainability of pattern-specific and application-specific concerns. Hence, it is important to verify whether aspect-oriented approaches support improved modularization of crosscutting concerns relative to design patterns. Ideally, quantitative studies should be performed to compare OO and aspect-oriented implementations of classical patterns with respect to important software engineering attributes, such as coupling and cohesion. This talk will present a quantitative study that compares aspect-based and OO solutions for the 23 Gang-of-Four patterns. We have used stringent software engineering attributes as the assessment criteria, such as separation of concerns, coupling, cohesion, and size. We have also assessed the implementations with respect to external quality attributes, including reusability and composability, and discussed the definition of a predictive model for the aspectization of design patterns.
- Monday November 21
Speaker: Harold Ossher
Title: The Concern Manipulation Environment: An Open-Source Environment for Aspect-Oriented Software Development
Abstract: The Concern Manipulation Environment (CME) is an Eclipse open-source project aimed at supporting aspect-oriented software development across the software lifecycle. It supports working with concerns, including crosscutting concerns, as first-class entities that occur within and across artifacts of different kinds. Current CME tools support querying software, defining concerns based on queries, modeling concerns, navigating and visualizing software from multiple points of view based on concerns, and composing aspects and other concerns. Java and Ant artifacts are currently supported, and the architecture is designed to facilitate addition of new kinds of artifacts. A key goal of the CME is to serve as an integrating platform for multiple contributors and AOSD approaches. This will allow developers to leverage the respective benefits of various approaches, and aid researchers in developing and experimenting with new approaches. The CME includes, as one of these approaches, the next stage of research and development on multi-dimensional separation of concerns and Hyper/J. In this talk I will discuss the use of concerns across the software lifecycle, demonstrate some of the CME tools, and give a brief overview of the architecture of the CME. I will then focus on the support for composition, including recently-added support for high-level, Hyper/J-like composition.
- Monday November 28
Speaker: Richard Taylor
Title: An Informal Talk on Software Architecture
Abstract: Typically, seminars present work that is complete and polished. This talk is not of that kind. Along with two co-authors, Neno Medvidovic and Eric Dashofy, I am writing a new, major textbook on software architecture. The text takes the position that architecture is not a phase of development, but is a focus that has major implications for all aspects of development. Both technical and organizational aspects of software architecture are treated in the book, though the primary focus is technical. This talk will provide an overview of the book and will describe a variety of its "key points of departure". As a preview, the first chapter of the book, "The Big Idea," covers the architecture of the WWW, its underlying style (REST), architecture in the tiny (viz., Unix pipe and filter), and program families, as exemplified by the Koala work. The second chapter discusses how software architecture relates to, and changes, the rest of software engineering. Chapter three discusses domain-specific software architectures. 17 chapters are currently under development. The seminar should be highly interactive; come and help shape the architecture!
- Monday December 5
Speaker: Paul Brebner
Title: Two Ways to Grid: The contribution of Open Grid Services Architecture (OGSA) mechanisms to Service-centric and Resource-centric lifecycles
Abstract: Service Oriented Architectures (SOAs) support service lifecycle tasks, including Development, Deployment, Discovery and Use. We observe that there are two disparate ways to use Grid SOAs such as the Open Grid Services Architecture (OGSA) as exemplified in the Globus Toolkit (GT3/4). One is the traditional enterprise SOA use, where end-user services are developed, deployed and resourced behind firewalls for use by external consumers: a Service-centric (or "1st order") approach. The other supports end-user development, deployment, and resourcing of applications across organizations via the use of execution and resource management services: a Resource-centric (or "2nd order") approach. We analyze and compare the two approaches using a combination of empirical experiments and an architectural evaluation methodology (scenario, mechanism, and quality attributes) to reveal common and distinct strengths and weaknesses. The impact of potential improvements (which are likely to be manifested by GT4) is estimated, and opportunities for alternative architectures and technologies are explored. We conclude by investigating whether the two approaches can be converged or combined, and whether they are compatible on shared resources.
- Monday December 12
Speaker: Richard Taylor
Title: Software Patent Litigation: A View from the (Expert) Witness Stand
Abstract: While almost everyone has an opinion on the legitimacy or the wisdom of software patents, fewer people have actually read software patents, and fewer still have served as an expert witness in software patent litigation. Over the past two and a half years I have not only had the privilege of doing all that, but also of testifying before a jury in a U.S. Federal Court in a patent infringement lawsuit. This seminar will cover a few of the basics of patents and the litigation process associated with them, as practiced in the United States. Both perspectives of testifying for the plaintiff and testifying for the defense will be covered. Part of the seminar will be devoted to observations and lessons, as seen from the viewpoint of software engineering research and, especially, practice. A discussion of the history of electronic commerce on the web will also be included.
- Monday January 9
Speaker: Wolfgang Emmerich
Title: Novel programming paradigms for the global economy
Abstract: We introduce the notion of global computing that emerges as a consequence of the globalisation of the service industry. The business processes of an organisation are no longer confined to the four walls of its office building and instead involve partners, increasingly located on other continents, to perform more specialised tasks. In doing so they form global virtual organisations. The processes performed by these global virtual organisations need to be supported by appropriate IT systems, again at a global scale. The question that we explore in this keynote is: Are the programming languages and tools that are available today up to this challenge? We show why mainstream object and component-oriented programming paradigms are insufficient for programming global distributed systems and identify three areas that are in dire need of improvement: synchronisation, quality of service and trust.
- Monday January 23
Speaker: Clovis Chapman
Title: Predictive Resource Scheduling in Computational Grids
Abstract: The integration of clusters of computers into computational grids has recently gained the attention of many computational scientists. While considerable progress has been made in building middleware and workflow tools that facilitate the sharing of compute resources, little attention has been paid to grid scheduling and load balancing techniques to reduce job waiting times. Based on a detailed analysis of the usage characteristics of an existing grid that involves a large CPU cluster, we observe that grid scheduling decisions can be significantly improved if the characteristics of current usage patterns are understood and extrapolated into the future. We describe a formal framework that uses Kalman filter theory to predict future CPU resource utilisation. This ability to predict future resource utilisation forms the basis for significantly improved grid scheduling decisions. The talk describes the architecture for such a prediction and grid scheduling framework and its implementation using Condor. By way of replicated experiments we demonstrate that the prediction achieves a precision within 15-20% of the utilisation later observed and can significantly improve scheduling quality, compared to approaches that only take into account current load indicators.
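The prediction step can be pictured with a scalar Kalman filter, as in the minimal sketch below, assuming a random-walk model of utilisation; the noise parameters and the sample trace are invented for illustration and are not the Condor measurements from the talk.

```java
// A minimal scalar Kalman filter for CPU utilisation prediction under a
// random-walk state model. The noise values (q, r) and the trace are invented.
public class UtilisationPredictor {

    private double estimate = 0.5;   // current utilisation estimate (fraction of CPU)
    private double variance = 1.0;   // uncertainty of the estimate
    private final double q = 0.01;   // process noise: how fast true load drifts
    private final double r = 0.05;   // measurement noise of the monitoring probe

    /** Incorporate one observed utilisation sample and return the updated estimate. */
    double update(double observed) {
        variance += q;                               // predict step: uncertainty grows
        double gain = variance / (variance + r);     // Kalman gain
        estimate += gain * (observed - estimate);    // correct towards the observation
        variance *= (1.0 - gain);                    // shrink uncertainty
        return estimate;
    }

    public static void main(String[] args) {
        UtilisationPredictor kf = new UtilisationPredictor();
        double[] samples = {0.62, 0.58, 0.71, 0.69, 0.74, 0.73}; // synthetic trace
        for (double z : samples)
            System.out.printf("observed %.2f -> predicted %.3f%n", z, kf.update(z));
        // A scheduler could then place jobs on nodes with the lowest predicted
        // utilisation rather than the lowest instantaneous load.
    }
}
```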
- Monday February 2
Speaker: Stefanos Zachariadis
Title: The RUNES Middleware System
Abstract: Next generation embedded systems will be composed of large numbers of heterogeneous devices, including extremely resource-constrained devices such as sensor motes. These devices will use different operating systems and be connected through different network interfaces. They will also need to be adaptive to changing conditions based on context-awareness. And, as well as being connected to the fixed Internet, they may be mobile and form ad-hoc networks with their peers. Our focus is on the provision of a framework to support the development and execution of next generation embedded systems. In traditional, fixed, networked environments this is typically the role of a middleware platform such as CORBA, EJB or Web Services. However, these platforms incur unacceptably high overhead in terms of performance and memory footprint, and also have limited support for customisability and adaptivity. The approach presented leverages a small component runtime to support highly modularised and customisable middleware services that can be tailored for specific embedded environments. The approach decouples and encapsulates the functionality provided by its various constituents (components) behind well-defined interfaces. This decoupling not only enables one to deploy different variants of the same component (e.g., tailored to a specific device type), but also enables the dynamic reconfiguration of component instances and their interconnections, which in turn provides support for adaptivity.
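As a small illustration of deploying different variants of the same component behind a well-defined interface, consider the Java sketch below; the Compression interface, its two variants and the device classes are inventions for this example, not the RUNES API.

```java
import java.util.Arrays;
import java.util.Map;
import java.util.zip.Deflater;

// Variant selection behind a fixed component interface: capable devices
// get a real codec, constrained motes get a pass-through. Illustrative only.
public class VariantSketch {

    /** Well-defined interface: callers never see which variant is bound. */
    interface Compression { byte[] compress(byte[] data); }

    /** Full variant for capable devices, using the JDK's Deflater. */
    static class DeflateCompression implements Compression {
        public byte[] compress(byte[] data) {
            Deflater d = new Deflater();
            d.setInput(data);
            d.finish();
            byte[] buf = new byte[data.length + 64];
            int n = d.deflate(buf);
            d.end();
            return Arrays.copyOf(buf, n);
        }
    }

    /** Null variant for motes where CPU is scarcer than bandwidth. */
    static class PassThrough implements Compression {
        public byte[] compress(byte[] data) { return data; }
    }

    /** Bind a variant per device class at deployment time. */
    static Compression bind(String deviceClass) {
        Map<String, Compression> variants = Map.of(
            "gateway", new DeflateCompression(),
            "mote", new PassThrough());
        return variants.getOrDefault(deviceClass, new PassThrough());
    }

    public static void main(String[] args) {
        byte[] payload = "sensor reading 23.5C".getBytes();
        System.out.println("gateway: " + bind("gateway").compress(payload).length + " bytes");
        System.out.println("mote:    " + bind("mote").compress(payload).length + " bytes");
    }
}
```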
- Monday February 27
Speaker: Andy Maule
Title: Impact analysis of relational database schema changes upon applications
Abstract: Applications that need to manage large amounts of persistent data typically use databases for storage, retrieval and consistent update of those data. Applications often have to be written based upon a specific database schema, with the implicit assumption that the application is only valid if the schema remains unchanged. If the database schema requires a change then the application and the schema may become inconsistent. Maintaining the consistency between database schemas and database applications is a difficult problem: impacts of changes to the schema can be complex, far reaching and difficult to track, especially when the database schema is not within the control of the application developers.
One approach is to avoid the problem by avoiding change to the schema. Another approach is to build the application in such a way that impact is confined to specific components, minimising the impact area that has to be reconciled. We propose a technique to predict the effects of schema changes, with the goal of better informing both the developer making changes and the developer who must reconcile an application with the new schema.
The major problem we face is providing traceability from relational database artefacts to object-oriented code. Previous research has focused on using object-oriented databases to implicitly provide this kind of traceability. However, despite the emergence of object, deductive and semi-structured databases, the vast majority of databases used in practice remain relational. We therefore constrain our research to providing impact analysis for relational database schemata.
We investigate the use of Object Relational Mappings (ORM) and strongly typed queries to provide the required traceability. Having established traceability between these artefacts, we can statically extract dependency data from the application and the schema. We use the extracted data to populate a dependency model, built with object-oriented modelling techniques, that can be used to predict the impact of a change.
We shall evaluate these techniques using a real-world case study.
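The general idea of extracting dependency data from ORM metadata can be sketched in a few lines of Java: reflect over mapped fields to find the application code impacted by a schema change. The annotation and entity class below are self-contained stand-ins for real ORM metadata (e.g. JPA's @Column), not the tooling described in the talk.

```java
import java.lang.annotation.*;
import java.lang.reflect.Field;

// Column -> field dependency extraction by reflection; invented stand-in
// for real ORM metadata, used here to predict the impact of a schema change.
public class ImpactSketch {

    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    @interface Column { String table(); String name(); }

    /** An application class mapped onto the relational schema. */
    static class Customer {
        @Column(table = "customer", name = "cust_id")   long id;
        @Column(table = "customer", name = "full_name") String name;
    }

    /** Report every mapped field that depends on the changed column. */
    static void reportImpact(Class<?>[] entities, String table, String column) {
        for (Class<?> e : entities)
            for (Field f : e.getDeclaredFields()) {
                Column c = f.getAnnotation(Column.class);
                if (c != null && c.table().equals(table) && c.name().equals(column))
                    System.out.println("impacted: " + e.getSimpleName() + "." + f.getName());
            }
    }

    public static void main(String[] args) {
        // Predict the impact of renaming or dropping customer.full_name.
        reportImpact(new Class<?>[] { Customer.class }, "customer", "full_name");
    }
}
```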
- Monday April 4
Speaker: Christian Zirpins
Title: Towards Service-Oriented Design & Development of Interaction Procedures for Virtual Organizations
Abstract: The talk introduces the research training project PARIS, which is placed in the multi-disciplinary context of distributed information systems technology for virtual organizations and targets combined aspects of software engineering methodology and distributed middleware technology.
In virtual organizations, business processes are decomposed with respect to the core competencies of the partner organizations. These assets are continually re-integrated in response to the changing requirements of customers and markets. Each such constellation implies a specific interaction procedure between the partners that has to be implemented by an operative process-chain. The more dynamic the constellation, the more the virtual organization relies on integrated ICT. Service-oriented Grid architectures promise effective means to share functions and processes. Yet, gaps remain between the technology and its application: the abstractions (e.g. services) are too low-level to be efficiently applied by business engineers, and software engineering methodology to develop mission-critical systems on top is usually missing. Hence, PARIS aims at two research goals: (i) high-level abstractions for organizational interaction procedures that enable the efficient realization of operational process-chains and (ii) a software engineering methodology to develop information systems that enforce and control organizational constellations. The envisioned approach builds on two software engineering concepts: patterns and frameworks. Patterns are intended to abstract interaction procedures. For implementation, they are to be plugged into a rule-driven and policy-controlled framework on top of conventional Grid Computing middleware.
- Friday April 21
Speaker: Adam Porter
Title: Exploring Tools and Techniques for Distributed Continuous Quality Assurance
Abstract: Dynamic analyses, such as testing and profiling, play a key role in state-of-the-art approaches to software quality assurance (QA). With a few rare (but notable) exceptions, these analyses are performed in-house, on developer platforms, using developer-provided input workloads. The shortcomings of focusing on in-house QA efforts alone include increased cost and schedule for extensive QA activities, and misleading results when the input test cases and workloads differ from actual workloads, or when the in-house system or execution environment differs from that found in the field. To improve this situation we are developing tools and techniques to support a new approach to dynamic analysis called Distributed Continuous Quality Assurance (DCQA), in which analyses execute around the world and around the clock on a virtual computing pool made up of numerous end-user machines. Our approach divides QA processes into multiple subtasks that are intelligently distributed to client machines around the world and executed by them, with the results returned to central collection sites where they are fused together to complete the overall QA process.
In this talk we will describe our approach and present the results of several feasibility studies.
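A toy sketch of the divide-distribute-fuse shape described above, with a local thread pool standing in for remote end-user machines; the subtask names and their outcomes are fabricated for the example.

```java
import java.util.List;
import java.util.concurrent.*;

// DCQA shape only: split a QA run into subtasks, "distribute" them to a
// pool (standing in for volunteer client machines), and fuse the results
// at a central collection point. All names and outcomes are fake.
public class DcqaSketch {

    record Result(String subtask, boolean passed) {}

    /** Pretend to run one slice of the QA process on a volunteer machine. */
    static Result runSubtask(String subtask) {
        return new Result(subtask, !subtask.contains("gcc-2.95")); // fake outcome
    }

    public static void main(String[] args) throws Exception {
        List<String> subtasks = List.of(
            "linux/gcc-4.0/feature-A", "linux/gcc-2.95/feature-A",
            "win32/msvc/feature-A", "win32/msvc/feature-B");

        ExecutorService pool = Executors.newFixedThreadPool(4); // stand-in clients
        List<Future<Result>> futures = subtasks.stream()
            .map(s -> pool.submit(() -> runSubtask(s)))
            .toList();

        // Central collection site: fuse partial results into one verdict.
        for (Future<Result> f : futures) {
            Result r = f.get();
            System.out.println((r.passed() ? "PASS " : "FAIL ") + r.subtask());
        }
        pool.shutdown();
    }
}
```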
- Monday August 14
Speaker: Costin Raiciu
Title: Enabling Confidentiality in Content-Based Publish/Subscribe
Abstract: We try to answer the following question: can a third party (e.g. a router) take content-based forwarding decisions (i.e. should message X go to subscriber Y?) while learning nothing more than necessary about the contents of message X or the criteria imposed by Y? We formally define what security means in this setting and propose a series of secure, practical solutions, some novel and some adapted from existing work. We present performance evaluation results for our proposals, showing that this type of filtering is feasible in practice. Our findings can be applied in at least two scenarios: forwarding in a content-based publish/subscribe network, and generalized searches on encrypted data (e.g. remote file servers).
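One of the simpler building blocks in this space is equality matching on deterministically keyed hashes: publisher and subscriber share a key the broker never sees, and the broker compares opaque tokens. The Java sketch below illustrates that idea only; it is not the scheme from the talk, supports only equality filters, and by itself leaks which messages match the same value.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.util.Arrays;

// Blind equality matching: the broker forwards iff two HMAC tokens are
// equal, without learning the plaintext attribute value. Illustration only.
public class BlindMatchSketch {

    static byte[] token(byte[] key, String attribute, String value) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key, "HmacSHA256"));
        return mac.doFinal((attribute + "=" + value).getBytes("UTF-8"));
    }

    public static void main(String[] args) throws Exception {
        byte[] sharedKey = "demo-key-not-for-production".getBytes("UTF-8");

        // Subscriber Y registers an encrypted filter: topic == "storage".
        byte[] filter = token(sharedKey, "topic", "storage");

        // Publisher attaches a token derived from message X's actual topic.
        byte[] msgToken = token(sharedKey, "topic", "storage");

        // The broker compares opaque tokens and learns nothing else.
        System.out.println("forward to Y? " + Arrays.equals(filter, msgToken));
    }
}
```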
- Monday August 21
Speaker: Slinger Jansen
Title: Improving the Customer Configuration Update Process by Explicitly Managing Software Knowledge
Abstract: The implementation and continuous support of a software product at a customer with evolving requirements is a complex task for a product software vendor. There are many customers for the vendor to serve, all of whom might require their own version or variant of the application. Furthermore, the software application itself will consist of many (software) components that depend on each other to function correctly. On top of that, these components will evolve over time to meet the changing needs of customers. To address this problem we propose to reduce the software release and deployment effort and the risks associated with it. This will be achieved by explicitly managing typical knowledge about the software product, such as configuration and dependency information, thereby allowing software vendors to improve the customer configuration update process. The proposed solution of knowledge management at both the customer and vendor sites is validated through industrial case studies.
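One way to picture the role of explicit dependency knowledge is the following Java sketch, which checks a customer's installed configuration against the declared dependencies of a new release before shipping an update; the component names and version scheme are invented for illustration.

```java
import java.util.List;
import java.util.Map;

// Using explicit configuration/dependency knowledge at the vendor site:
// only offer an update to customers whose installed components satisfy
// its declared dependencies. Names and versions are invented.
public class UpdateCheckSketch {

    record Dependency(String component, int minVersion) {}

    /** Dependencies the vendor records for the new release. */
    static final List<Dependency> REPORT_ENGINE_V2 = List.of(
        new Dependency("core", 5),
        new Dependency("pdf-export", 3));

    static boolean safeToUpdate(Map<String, Integer> installed, List<Dependency> deps) {
        return deps.stream().allMatch(d ->
            installed.getOrDefault(d.component(), 0) >= d.minVersion());
    }

    public static void main(String[] args) {
        // One customer's configuration, as known at the vendor site.
        Map<String, Integer> customerA = Map.of("core", 5, "pdf-export", 2);
        System.out.println("customer A can take report-engine v2: "
            + safeToUpdate(customerA, REPORT_ENGINE_V2));   // false: pdf-export too old
    }
}
```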
- Monday September 4
Speaker: Jidtima Sunkhamani
Title: Stakeholder Mapping for Stakeholder Identification
Abstract: The key success factor of system development is the extent to which the system meets its intended purposes. Requirements engineers have to determine all the system requirements. As stakeholders are the principal source of system requirements, relevant stakeholders must be identified. A stakeholder modelling technique is needed to support requirements engineers in analysing requirement sources and their important aspects affecting system requirements. An organisation analysis is introduced as an approach to stakeholder mapping for stakeholder identification.
- Monday September 11
Speaker: Vito Perrone
Title: Developing an Integrative Platform for Cancer Research: a Requirements Engineering Perspective
Abstract: The NCRI Informatics Initiative has been established with the goal of using informatics to maximise the impact of cancer research. A clear foundation for achieving this goal is to enable the development of an informatics platform in the UK that facilitates access to, and movement of, data generated from research funded by NCRI Partner organisations, across the spectrum from genomics to clinical trials. To assure the success of such a system, an initial project has been defined to establish and document the requirements for the platform and to construct and validate the key information models around which the platform will be built. The platform will need to leverage many projects, tools and resources, including those generated by many e-Science projects. The project is also required to contribute to the development of a global platform through close interaction with similar efforts by the NCI in the USA. This paper recounts our experience in analysing the requirements for the platform, and explains the customised analysis approach and techniques utilised in the project.
- Monday September 18
Speaker: Franco Raimondi
Title: (1) Meta-models to reason about planning domains + (2) Testing and verifying planning domains
Abstract: This talk presents the outcomes of my summer internship. It is composed of two parts. In the first part I present a technique to reason about complex planning scenarios using the UCL MDA tools and an Eclipse editor. In particular, I introduce a methodology to translate planning models into MOF models, on which different kinds of analysis can be performed. I provide a medium-sized example (an autonomous Rover) to clarify the approach. Discussion of this approach is welcome.
In the second part I propose a methodology for testing and verifying (where possible) flight rules of planning domains. The methodology is self-contained, in the sense that the flight rules are verified or tested using the planner itself (a sketch of this idea follows the list below). In more detail:
- flight rules are characterised using patterns and encoded using LTL
- coverage conditions for specification-based testing are reviewed
- a translation of LTL formulae into planning goals is presented
- case study: verification of flight rules for the Rover scenario (see above)
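A toy illustration of the translation idea, under invented assumptions: a safety rule G !(drilling && moving) holds exactly when no plan reaches a state satisfying its negation, so the negated body is handed to the planner as a goal and any plan found is a counterexample. The two-flag Rover state space and the BFS stand-in planner below are fabrications for the example.

```java
import java.util.*;

// Turning an LTL safety rule into a planning goal: plan for the negation;
// a plan that reaches it witnesses a rule violation. Toy model only.
public class FlightRuleSketch {

    record State(boolean drilling, boolean moving) {}

    /** Toy successor relation: each action toggles one flag. */
    static List<State> successors(State s) {
        return List.of(new State(!s.drilling(), s.moving()),
                       new State(s.drilling(), !s.moving()));
    }

    /** Stand-in planner: BFS for any state satisfying the goal. */
    static boolean planExistsFor(State init, java.util.function.Predicate<State> goal) {
        Set<State> seen = new HashSet<>();
        Deque<State> frontier = new ArrayDeque<>(List.of(init));
        while (!frontier.isEmpty()) {
            State s = frontier.poll();
            if (goal.test(s)) return true;
            if (seen.add(s)) frontier.addAll(successors(s));
        }
        return false;
    }

    public static void main(String[] args) {
        // Goal = negation of the safety rule's body.
        boolean violable = planExistsFor(new State(false, false),
                                         s -> s.drilling() && s.moving());
        System.out.println("rule G !(drilling && moving) can be violated: " + violable);
    }
}
```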