2006/07 Archive

    October 2006


  • October 2
    Speaker: Liam McNamara
    Title: Trust and Proximity aware Pervasive Service Discovery and Selection
    Abstract: In a world where people often move with an average of 2-3 personal networked devices (such as PDAs, portable music players and mobile phones), the concept of service provision can be extended to the ability to create ad hoc exchanges whenever in reach of someone with the right information or service. In these dynamic pervasive settings, however, the probability that an exchange will fail is rather high, for example, because the devices involved in the service provision may move away from each other before the service completes, or because the selected provider may not deliver the service as expected.

    We present an approach to pervasive service discovery and selection based on past colocation, mobility prediction and reputation reasoning to try to reduce the number of service failures.
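
    A rough illustration of this kind of selection logic follows: a minimal Python sketch that ranks candidate providers by a weighted score over colocation history, predicted remaining contact time and reputation. The weights, fields and numbers are illustrative assumptions, not necessarily the speaker's actual model.

      # Hypothetical provider-selection sketch: rank candidate providers by a
      # weighted combination of past colocation, predicted contact time and
      # reputation. All weights and fields below are illustrative assumptions.
      from dataclasses import dataclass

      @dataclass
      class Provider:
          name: str
          colocation_hours: float     # how long we have been near this device before
          predicted_contact_s: float  # predicted remaining time in radio range
          reputation: float           # 0.0 (untrusted) .. 1.0 (fully trusted)

      def score(p: Provider, service_duration_s: float) -> float:
          # Penalise providers unlikely to stay in range for the whole exchange.
          completion_likelihood = min(1.0, p.predicted_contact_s / service_duration_s)
          familiarity = min(1.0, p.colocation_hours / 10.0)
          return 0.5 * completion_likelihood + 0.3 * p.reputation + 0.2 * familiarity

      candidates = [
          Provider("pda-17", colocation_hours=12.0, predicted_contact_s=40.0, reputation=0.9),
          Provider("phone-03", colocation_hours=0.5, predicted_contact_s=300.0, reputation=0.6),
      ]
      best = max(candidates, key=lambda p: score(p, service_duration_s=120.0))
      print("selected provider:", best.name)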

  • October 9
    Speaker: Panuchart Bunyakiati
    Title: A Compliance Test Suite for UML Tools
    Abstract: Software engineering standards underpin delivery of projects by defining baseline good practices that software processes shall follow. Software tools enact software processes, automate activities, and support the production of artefacts required by the standards. An outstanding issue today in the software tools industry is the interoperability of tools across vendors and across the steps defined in software processes. It is, however, difficult to establish the compliance of these complex pieces of software to those standards. It has been suggested that many existing tools that advertise standard compliance fail to live up to their claims. The objective of this work is to propose an approach to systematically and formally assess the compliance of these tools to the standards and to diagnose the causes of non-compliance.

  • October 16
    Speaker: Lemai Nguyen
    Abstract: The emergence and advancement of new information and communication technologies (ICT), such as the Internet, mobile and ubiquitous computing, create new forms of organisations and dramatically change the way people live and work. Organisations and people eagerly seek innovative ways, through smart use of ICT, to maximize potential benefits and create value for business. The requirements process (also known as requirements engineering - RE) can be seen as a key enabler in the exploration, discovery and specification of the business visions and requirements for the new system to be developed. Creativity has been increasingly recognised as playing an important role in the requirements process.

  • October 30
    Speaker: Miroslaw Milewski
    Title: Better MDA through AOP?
    Abstract: Model Driven Architecture (MDA) and Aspect Oriented Programming (AOP) entered the collective consciousness of software engineers at more or less the same time. However, these approaches come from completely different backgrounds and their adoption is radically different, both in scope and method.

    This seminar will aim to provide insight into the similarities between Model Driven Architecture and Aspect Oriented Programming and how these similarities could be of benefit to both of these approaches. It will also address a number of well-known issues with MDA and present a way of avoiding them.

    November 2006


  • November 6
    Speaker: Ben Tagger
    Title: Managing Change in Experimental Environments
    Abstract: Some years ago, as we entered the genomic era of biology, there occurred an explosion of publicly available biological data. Many problems arise when attempting to deal with data of this magnitude; some have received more attention than others. One significant problem is that data within these publicly available databases are subject to change in an unannounced and unpredictable manner. Consider an e-scientist conducting experiments with such data. In the event that the data used for experimentation has changed, it may be important that the e-scientist be made aware of this change in order to establish the impact on their previously attained results. Coupled with the experimental, protocol and other changes that can occur, it is clear that there exists a complex environment of possible change that can affect the scientist's results.

    The problem is how to manage this state of experimental change in a way that can benefit the experimenter. Some results may be more important than others, may require more effort to repeat or may be less affected by certain changes than other sets of results. Managing this environment of experimental change is a complex issue and, to date, has not been fully addressed.
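
    A very small illustration of the bookkeeping this implies: a Python sketch that links results to content hashes of the data they used and flags results whose inputs have since changed. The names and storage scheme are invented; this is not the mechanism proposed in the talk.

      # Hypothetical change-detection sketch: record a content hash for each
      # external dataset a result depends on, then flag results whose inputs
      # have since changed. Dataset and result names are made up.
      import hashlib

      def fingerprint(data: bytes) -> str:
          return hashlib.sha256(data).hexdigest()

      # result name -> {dataset name: hash recorded when the experiment ran}
      recorded = {"expt-42": {"sequence-db-entry": fingerprint(b"MKTAYIAK")}}

      def affected_results(current_data: dict) -> list:
          stale = []
          for result, deps in recorded.items():
              if any(fingerprint(current_data[d]) != h for d, h in deps.items()):
                  stale.append(result)
          return stale

      # The upstream database has silently changed this entry since the experiment.
      print(affected_results({"sequence-db-entry": b"MKTAYIAR"}))  # ['expt-42']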

  • November 13
    Speaker: Arun Mukhija
    Title: SOC: Beyond B2B Application Integration
    Abstract: Service-oriented computing (SOC) provides a high level of abstraction (higher than anything before it) for reasoning about, and enabling collaboration between, distributed autonomous services. Even though SOC has the potential to offer high flexibility and scalability, the current applications of SOC are largely limited to B2B application integration. In this seminar, I will talk about the usefulness of the SOC paradigm for interactions in dynamic environments, the new challenges it raises that are not currently being addressed, and the conceptual design of our approach (Dino), which we have been developing with the aim of addressing these challenges.

  • November 20
    Speaker: Bruno Wassermann
    Title: Reliable Scientific Service Compositions
    Abstract: Distributed service-oriented architectures (SOAs) are increasingly used by people who are insufficiently skilled in the art of distributed system programming. A good example is computational scientists, who build large-scale distributed systems using service-oriented Grid computing infrastructures. Computational scientists use these infrastructures to build scientific applications, which are composed from basic Web services into larger orchestrations using workflow languages, such as the Business Process Execution Language. For these users, reliability of the infrastructure is of significant importance, and it has to be provided in the presence of hardware or operational failures. The primitives available to achieve such reliability currently leave much to be desired by users who do not necessarily have a strong education in distributed system construction. We characterise scientific service compositions and the environment they operate in by introducing the notion of global scientific BPEL workflows. We outline the threats to the reliability of such workflows and discuss the limited support that available specifications and mechanisms provide to achieve reliability. Furthermore, we propose a line of research to address the identified issues by investigating autonomic mechanisms that assist computational scientists in building, executing and maintaining reliable workflows.

  • November 27
    Speaker: Slinger Jansen
    Title: Software Supply Networks
    Abstract: One of the most significant paradigm shifts of software business management is that individual organizations no longer compete as single entities but as complex dynamic supply networks of interrelated participants. Understanding these intricate software supply networks is a difficult task for decision makers in these networks. This presentation outlines a modelling technique for representing and reasoning about software supply networks. We show, by way of worked case studies, how modelling software supply networks might allow managers to identify new business opportunities, how it visualizes liability and responsibilities in a supply network, and how it can be used as a planning tool for product software distribution.

    December 2006


  • December 4
    Speaker: Clovis Chapman
    Title: An Adaptive and Cooperative Scheduling Framework for Best-effort Computational Grids
    Abstract: To ensure efficient resource usage in a grid environment, we require means of aggregating resources providing similar or inter-related services and sharing workloads amongst these resources in a way that is scalable, administratively manageable and seamless to both users and applications. In order to achieve this, we must define and embed within the middleware that interconnects these resources federation mechanisms that will facilitate the distribution of requests and the coordination of policies across sites whilst respecting the decentralized nature of the grid.

    The autonomous operation of individual resource management systems relied upon in a grid environment requires us to adopt novel scheduling approaches by which these can be made to provide a single integrated service. Principles and mechanisms that allow for coordination between underlying systems whilst still taking into account their potentially divergent behaviour, such as prediction and reputation-based allocation, should be an integral part of the design of grid middleware.

    In this talk, I will provide a high-level overview of the grid scheduling problem and define the various requirements for a meta-scheduling framework that we aim to deploy on the eMinerals mini-grid. Looking specifically at best-effort computing resources, I will also provide an overview of the solutions we have looked at to address the various aspects of this problem.
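
    By way of illustration only, here is one possible site-selection rule of the kind such a meta-scheduler might apply, sketched in Python. The prediction and reliability inputs are assumed to come from monitoring; this is not the eMinerals framework itself.

      # Hypothetical meta-scheduling sketch: pick the site with the best trade-off
      # between predicted queue wait and historical reliability. In a real system
      # the predictions and reliability figures would come from monitoring data.
      from dataclasses import dataclass

      @dataclass
      class Site:
          name: str
          predicted_wait_s: float  # e.g. estimated from recent queue statistics
          reliability: float       # fraction of past jobs completed successfully

      def choose_site(sites, job_runtime_s: float) -> Site:
          def expected_turnaround(s: Site) -> float:
              # Unreliable sites are penalised by the expected cost of resubmission.
              return (s.predicted_wait_s + job_runtime_s) / max(s.reliability, 0.01)
          return min(sites, key=expected_turnaround)

      sites = [Site("cluster-a", 600.0, 0.99), Site("condor-pool", 30.0, 0.80)]
      print(choose_site(sites, job_runtime_s=3600.0).name)  # cluster-a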

    January 2007


  • January 8
    Speaker: James Skene
    Title: The Monitorability of Service-Level Agreements for Application-Service Provision
    Abstract: Service-Level Agreements (SLAs) mitigate the risks of a service-provision scenario by associating financial penalties with aberrant service behaviour. SLAs are useless if their provisions can be unilaterally ignored by a party without incurring any liability. To avoid this, it is necessary to ensure that each party's conformance to its obligations can be monitored by the other parties. We introduce a technique for analysing systems of SLAs to determine the degree of monitorability possible. We apply this technique to identify the most monitorable system of SLAs including timeliness constraints for a three-role Application-Service Provision (ASP) scenario. The system contains SLAs that are at best mutually monitorable, implying the requirement for reconciliation of monitoring data between the parties, and hence the need to constrain the parties to report honestly while accommodating unavoidable measurement error. We describe the design of a fair constraint on the precision and accuracy of reported measurements, and its approximate monitorability using a statistical hypothesis test.
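
    To give a concrete flavour of policing reported measurements with a hypothesis test, here is a Python sketch with an invented tolerance, significance level and data; the actual constraint and test described in the talk are more carefully designed than this.

      # Hypothetical monitoring sketch: test whether a provider's self-reported
      # latencies understate our own observations by more than an agreed
      # tolerance, using a one-sided z-test on the paired differences.
      import math
      import statistics

      def reports_look_dishonest(reported_ms, observed_ms, tolerance_ms=5.0, alpha=0.01):
          # Positive differences mean the provider claims lower latency than we saw.
          diffs = [o - r for o, r in zip(observed_ms, reported_ms)]
          mean, sd, n = statistics.mean(diffs), statistics.stdev(diffs), len(diffs)
          # H0: mean under-reporting <= tolerance (honest within measurement error).
          z = (mean - tolerance_ms) / (sd / math.sqrt(n))
          p_value = 0.5 * math.erfc(z / math.sqrt(2))  # one-sided upper tail
          return p_value < alpha

      observed = [102.0, 98.5, 110.2, 95.0, 101.3, 99.9, 104.4, 97.2]
      reported = [89.1, 87.0, 96.5, 84.2, 90.0, 86.8, 93.1, 85.5]
      print(reports_look_dishonest(reported, observed))  # True: deviation exceeds tolerance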

  • January 22
    Speaker: Wolfgang Emmerich
    Title: The Impact of Research on Middleware Technology - A history lesson in technology transfer
    Abstract: The middleware market represents a sizable segment of the overall Information and Communication Technology market. In 2005, the annual middleware license revenue was reported by Gartner to be in the region of 8.5 billion US Dollars. In this talk we address the question of whether research had any involvement in the creation of the technology that is being sold in this market. We attempt a scholarly discourse. We present the research method that we have applied to answer this question. We then present a brief introduction to the key middleware concepts that provide the foundation for this market. It would not be feasible to investigate every possible impact that research might have had. Instead we select a few very successful technologies that are representative of the middleware market as a whole and show the existence of impact of research results in the creation of these technologies. We investigate the origins of web services middleware, distributed transaction processing middleware, message oriented middleware, distributed object middleware and remote procedure call systems. For each of these technologies we are able to show ample influence of research and conclude that without the research conducted by PhD students and researchers in university computer science labs at Brown, CMU, Cambridge, Newcastle, MIT, Vrije, and the University of Washington as well as research in industrial labs at APM, AT&T Bell Labs, DEC Systems Research, HP Labs, IBM Research and Xerox PARC we would not have middleware technology in its current form. We summarise the article by distilling lessons that can be learnt from this evidenced impact for future technology transfer undertakings.

  • January 29
    Speaker: Liang Chen
    Title: OMII-BPEL: Scientific workflow orchestration with Business Process Execution Language (BPEL)
    Abstract: Grid Services have attracted growing interest in Grid computing. The Business Process Execution Language (BPEL) is the de facto industrial standard for service orchestration in a Service Oriented Architecture. OMII-BPEL focused on the expressiveness of BPEL as a scientific workflow language and investigated the sufficiency of an open source BPEL engine for orchestrating Grid Services, whose characteristics are distinct from those of business processes. In particular, we did this through extensive experiments on a chemical application that searches for polymorphs of crystal structures. The case study helped us address the common challenges of Grid applications and how BPEL, and a BPEL engine, can cope with them in the Grid environment. We looked at issues such as scalability, concurrency control, performance, reliability and security. With the tools and integrated BPEL environment that we released through OMII, we have successfully provided a live system that is regularly used by our chemists. We look forward to introducing the experience of Grid service orchestration with BPEL to the broader community.

    February 2007


  • February 26
    Speaker: Thomas Alspaugh
    Title: Scenarios read by people and software
    Abstract: Scenarios are widespread in software requirements practice, where they are written almost exclusively for human readers. As a result, tool support for scenarios remains weak, and software development does not receive the full benefit of the work put into them. Despite the informal prose form of scenarios, people interpret and use them in consistent patterns that follow relationships embodied in the text. ScenarioML is a markup language with which scenario authors can make these relationships explicit, so that software tools can give effective support for working with scenarios, and programs can read scenarios in order to use them for more purposes. ScenarioML's semantics are defined in terms of how scenarios describe the world, resulting in equivalences and specializations between structurally-related events that can be exploited for scenario refactoring, event recognition, and other software processing. These well-defined semantics combined with tools for presenting scenarios effectively show promise for a representation of requirements that is clearer and more effective both for nontechnical stakeholders and for developers. We discuss three recent and current applications of ScenarioML: scenario tool support, automated multimedia presentations of scenarios, and requirements-based testing.

    March 2007


  • March 5
    Speaker: Lawrence Tratt
    Title: Implementing Domain Specific Languages
    Abstract: As the concept of Domain Specific Languages (DSLs) grows in popularity and importance, it is apparent that current implementation techniques often don't reflect the way that we wish to use them. DSLs tend to start small, yet the tools we use to implement them often lead to surprisingly large and cumbersome implementations. DSLs tend to evolve in unforeseen ways, yet our implementations often have a "hackish" feel that makes change difficult.

    In this talk I will introduce a different technology for tackling the problem of DSL implementation. The Converge programming language, which I have developed, adds a powerful macro system to a modern dynamically typed language. A simple facility then allows DSLs of arbitrary syntaxes to be embedded within the language and transparently compiled at compile-time along with the rest of the program. Converge offers a number of subtle features which make implementing rich, powerful DSLs easy for the DSL implementer, but which also make life pleasant for the DSLs' end users.

    In this talk I will introduce Converge itself, show how different types of DSLs can be developed in it, and also offer some more general insights that I and others have gained about DSL development and use. Although Converge is a relatively early stage technology, it has already seen use in industry.

  • March 19
    Speaker: Graham Roberts
    Title: Groovy
    Abstract: Groovy is a dynamic language designed to run on the Java Virtual Machine (JVM) and closely integrate with Java. This seminar will give an overview of the Groovy programming language and why you should consider making use of it. The seminar will also include some thoughts on the future of Java and the influence of the dynamic scripting languages.

  • March 19
    Speaker: Laura K. Dillon
    Title: A Compositional Contract Model for Safe Multi-threaded Applications
    Abstract: It is well known that the expressive power afforded by the use of concurrency comes at the expense of increased complexity. Without proper synchronization, concurrent access to shared objects can lead to race conditions, and incorrect synchronization logic can lead to starvation or deadlock. Moreover, concurrency confounds the development of reusable software modules because synchronization policies and decisions are difficult to localize into a single software module.

    Szumo (Synchronization Units Model) extends an object-oriented language with a notion of synchronization contracts to address these concerns. In lieu of writing low-level code to acquire and release shared objects, programmers declare synchronization contracts in a module's interface. A distributed run-time scheduler negotiates the contracts on behalf of processes, ensuring that the contracts of all modules are met while simultaneously guarding against data races and avoidable deadlocks.

    This talk provides an introduction to Szumo and describes a case study to validate the efficacy of Szumo on a realistic design problem: the component-based design of a multi-threaded web server.
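
    To give a flavour of the contrast being drawn, here is a loose analogy in Python: methods declare the shared units they need, and a small helper acquires the corresponding locks in one global order before the body runs. This is an illustration of the idea only, not Szumo's contract notation or its negotiating scheduler.

      # Hypothetical contract-style sketch: a method declares which shared units it
      # needs; a tiny helper acquires the corresponding locks in a fixed global
      # order before the body runs, avoiding the lock-ordering deadlocks that
      # hand-written acquire/release code is prone to. Not actual Szumo syntax.
      import threading
      from functools import wraps

      _locks = {"account": threading.Lock(), "audit_log": threading.Lock()}

      def needs(*units):
          """Declare the synchronization units a method requires."""
          def decorator(fn):
              @wraps(fn)
              def wrapper(*args, **kwargs):
                  ordered = sorted(units)  # one global acquisition order
                  for u in ordered:
                      _locks[u].acquire()
                  try:
                      return fn(*args, **kwargs)
                  finally:
                      for u in reversed(ordered):
                          _locks[u].release()
              return wrapper
          return decorator

      @needs("account", "audit_log")
      def transfer(amount):
          # Both units are held for the duration of the call; no explicit locking here.
          print(f"transferred {amount}")

      threads = [threading.Thread(target=transfer, args=(10,)) for _ in range(4)]
      for t in threads:
          t.start()
      for t in threads:
          t.join()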

    April 2007


  • April 30
    Speaker: Leticia Duboc
    Title: A Framework for Characterization and Analysis of Software System Scalability
    Abstract: The term scalability appears frequently in computing literature, but it is a term that is poorly defined and poorly understood. The lack of a clear, consistent and systematic treatment of scalability makes it difficult to evaluate claims of scalability and to compare claims from different sources. In this talk we will present a framework for precisely characterizing and analyzing the scalability of a software system. The framework treats scalability as a multi-criteria optimization problem and captures the dependency relationships that underlie typical notions of scalability. We will also present the results of a case study in which the framework and analysis method were applied to a real-world system, demonstrating that it is possible to develop a precise, systematic characterization of scalability and to use the characterization to compare the scalability of alternative system designs.
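
    A toy example of treating scalability this way follows, as a Python sketch with invented cost models, goals and numbers; it does not reproduce the framework's notation or analysis method.

      # Hypothetical scalability-analysis sketch: evaluate two candidate designs
      # over a range of a scaling variable (number of users) against two criteria
      # (a response-time bound and a hardware budget). The models are invented.

      def design_a(users):  # single server: cheap, but response time grows steadily
          return {"response_ms": 5 + 0.001 * users, "cost_gbp": 1_000}

      def design_b(users):  # sharded: buys hardware to keep the response curve flat
          shards = max(1, users // 50_000)
          return {"response_ms": 5 + 0.001 * users / shards, "cost_gbp": 1_000 * shards}

      GOALS = {"response_ms": 200.0, "cost_gbp": 20_000.0}

      def max_supported_users(design, user_counts):
          ok = [u for u in user_counts if all(design(u)[k] <= v for k, v in GOALS.items())]
          return max(ok, default=None)

      users = range(10_000, 1_000_001, 10_000)
      print("design A meets its goals up to", max_supported_users(design_a, users), "users")
      print("design B meets its goals up to", max_supported_users(design_b, users), "users")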

    May 2007


  • May 14
    Speaker: Lee Osterweil
    Title: Precise Process Definition
    Abstract: Software engineering has come to recognize the value of effective development processes as vehicles for addressing such goals as improved efficiency in software development, the achievement of improved product quality, and better coordination and communication of the members of software teams. Greater clarity, completeness, and precision in defining these processes seem to lead to greater effectiveness in pursuing these goals. We have explored the use of languages and notations that are strongly based upon traditional applications programming languages to support the precise definition of processes, and the analysis of such process definitions. Such languages show considerable promise, but more progress towards better languages and tools still seems indicated.

    This talk suggests that processes are also central to the effective pursuit of essential goals in a broad spectrum of such other domains of human endeavor as medical services, e-government, dispute resolution, manufacturing, business, and even the conduct of scientific research itself. The talk summarizes research in the precise definition of processes in these other domains, using our Little-JIL process definition language as a vehicle for being specific about the processes we have defined and studied in these domains. This vehicle also forms the basis for observations that will be made about the language features, and tools, that seem important in supporting the definition and analysis of processes in all domains.

  • May 21
    Speaker: Andy Dingwall-Smith
    Title: Checking Complex Compositions of Web Services Against Policy Constraints
    Abstract: Research in web services has enabled reusable, distributed, loosely coupled components that can easily be composed to build systems or to produce more complex services. Composition of these components is generally done in an ad hoc manner. As compositions of services become more widely used and, inevitably, more complex, there is a need to ensure that compositions of services obey constraints. In this paper, we consider the need to provide policy constraints on service compositions that define how services can be composed in a particular business setting. We describe compositions using WS-CDL and we use xlinkit to express policy constraints as consistency rules over XML documents.
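
    To illustrate the general idea of a consistency rule across documents, here is a Python sketch over invented XML; it is neither WS-CDL nor xlinkit's rule language.

      # Hypothetical consistency-rule sketch: check that every service invoked in a
      # made-up choreography document appears on a separately maintained list of
      # approved partners. This mimics the idea of a policy rule over XML
      # documents, not any particular rule language.
      import xml.etree.ElementTree as ET

      choreography = ET.fromstring("""
      <choreography>
        <interaction service="PaymentService"/>
        <interaction service="UnvettedAnalytics"/>
      </choreography>""")

      policy = ET.fromstring("""
      <approved-partners>
        <service name="PaymentService"/>
        <service name="ShippingService"/>
      </approved-partners>""")

      approved = {s.get("name") for s in policy.findall("service")}
      violations = [i.get("service") for i in choreography.findall("interaction")
                    if i.get("service") not in approved]
      print("policy violations:", violations)  # ['UnvettedAnalytics']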

    June 2007


  • June 4
    Speaker: Emmanuel Letier
    Title: Deriving Software Behavioural and Quality Requirements from Stakeholders' Goals
    Abstract: Goal-oriented methods are increasingly popular for elaborating software requirements. They offer systematic support for incrementally building intentional, structural and operational models of the software and its environment. They also provide various techniques for early analysis, notably, to manage conflicting goals, to anticipate exceptional behaviors that prevent goals from being achieved, and to reason about the impact of alternative decisions on the degrees of goal satisfaction.

    The first part of the talk will present a goal-oriented requirements elaboration process in action. We will illustrate how to identify and structure goals for a safety-critical system for proton therapy treatments, and how to systematically derive software requirements from such goals. The second part will present ongoing work concerned with automating the derivation of precise specifications of both functional and non-functional software requirements from stakeholders' goals and known domain properties.

  • June 11
    Speaker: Panu Bunyakiati
    Title: The Certification of Software Tools with respect to Software Engineering Standards
    Abstract: Software development standards such as the UML provide complex modeling languages for specifying, visualizing, constructing, and documenting the artifacts of software systems. Software tools support the production of these artifacts according to the model elements, relationships, well-formedness rules and semantics defined in the standards. Due to the complexities of both standards and software tools, it is difficult to establish the compliance of the software tools to the standards. It has been suggested that many existing tools that advertise standard compliance fail to live up to their claims. The objective of this work is to propose a framework for developing systematic, disciplined, and quantifiable certification schemes to assess the compliance of these tools to standards and to diagnose the causes of non-compliance.

  • June 18
    Speaker: Wolfgang Emmerich
    Title: Managing Web Service Quality
    Abstract: The IT industry is beginning to mirror a trend towards specialization and outsourcing that has been used successfully in other industries. Organizations are using web services for the integration of their IT systems with those of specialist service providers, such as CRM services and market places, payment and settlement services, bill presentment services and many others. Unlike in traditional supply chain management, however, the IT industry does not yet have an agreed way to manage web service quality. In this talk, we will focus on managing the service quality of web services that are used across organizational boundaries. We discuss the systematic definition of formal service level agreement languages that support the precise definition of service quality, such as latency, throughput, availability and reliability. We present how service level agreements written in these languages can be used and describe how service level agreements can be policed. We conclude the talk by sketching further research that is necessary before we have the same grip on quality as other engineering disciplines.
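
    As a small example of the kind of check an SLA monitor might perform, here is a Python sketch with invented terms and data; real SLA languages define such terms and their measurement far more precisely.

      # Hypothetical SLA-policing sketch: given a window of call records, check two
      # invented terms (a 95th-percentile latency bound and a minimum availability)
      # and report any breaches.
      def police_sla(calls, p95_latency_ms=500.0, min_availability=0.999):
          latencies = sorted(c["latency_ms"] for c in calls if c["ok"])
          availability = sum(c["ok"] for c in calls) / len(calls)
          p95 = latencies[int(0.95 * (len(latencies) - 1))]
          breaches = []
          if p95 > p95_latency_ms:
              breaches.append(f"95th-percentile latency {p95:.0f} ms exceeds {p95_latency_ms} ms")
          if availability < min_availability:
              breaches.append(f"availability {availability:.4f} below {min_availability}")
          return breaches

      calls = ([{"ok": True, "latency_ms": 150.0}] * 990
               + [{"ok": False, "latency_ms": 0.0}] * 10)
      print(police_sla(calls))  # ['availability 0.9900 below 0.999']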

  • June 25
    Speaker: Lori Clarke
    Title: Using Software Engineering Technology to Reduce Medical Errors
    Abstract: It has been estimated that there are approximately 98,000 deaths per year in the United States resulting from medical errors, and many of these are attributed to error-prone processes. In the University of Massachusetts Medical Safety Project, we are investigating whether software engineering technologies can help reduce medical errors. Specifically, we are modeling medical processes with a process definition language and then analyzing these processes using finite-state verification and other analysis techniques. Working with the School of Nursing and the Baystate Medical Center, we are undertaking in-depth case studies on error-prone and life-critical medical processes. In many ways, these processes are similar to complex, distributed systems; they have many concurrent threads that often need to communicate and coordinate with each other, and exceptional conditions frequently arise and must be handled before normal execution can continue.

    Although our results are preliminary, we have been able to develop detailed process models, specify important safety properties, and detect vulnerabilities. This talk describes the technologies we are using, discusses the case studies, and presents our initial observations and findings.
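
    For readers unfamiliar with this style of analysis, here is a deliberately tiny illustration: a Python sketch that exhaustively interleaves two concurrent strands of a made-up process and checks an invented property. Real finite-state verification tools work on far richer models and properties.

      # Hypothetical finite-state-verification sketch: enumerate every interleaving
      # of two concurrent strands of a toy medication process and check an invented
      # safety property. The process and the property are illustrative only.
      def interleavings(a, b):
          if not a or not b:
              yield a + b
              return
          for rest in interleavings(a[1:], b):
              yield [a[0]] + rest
          for rest in interleavings(a, b[1:]):
              yield [b[0]] + rest

      nurse = ["verify_identity", "administer_medication"]
      pharmacy = ["check_prescription", "dispense_medication"]

      def violates(trace):
          # Property: medication must not be administered before the prescription check.
          return trace.index("administer_medication") < trace.index("check_prescription")

      bad = [t for t in interleavings(nurse, pharmacy) if violates(t)]
      total = len(list(interleavings(nurse, pharmacy)))
      print(f"{len(bad)} of {total} interleavings violate the property")  # 1 of 6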

    August 2007


  • August 9
    Speaker: Howard Foster
    Title: Model Checking Service Compositions under Resource Constraints
    Abstract: When enacting a web service orchestration defined using the Business Process Execution Language (BPEL) we observed various safety property violations. This surprised us considerably as we had previously established that the orchestration was free of such property violations using existing BPEL model checking techniques. In this talk, we describe the origins of these violations. They result from a combination of design and deployment decisions, which include the distribution of services across hosts, the choice of synchronisation primitives in the process and the threading configuration of the servlet container that hosts the orchestrated web services. This leads us to conclude that model checking approaches that ignore resource constraints of the deployment environment are insufficient to establish safety and liveness properties of service orchestrations specifically, and distributed systems more generally. We show how model checking can take execution resource constraints into account. We evaluate the approach by applying it to the above application and are able to demonstrate that a change in allocation of services to hosts is indeed safe, a result that we are able to confirm experimentally in the deployed system. The approach is supported by a tool suite, known as WS-Engineer, providing automated process translation, architecture and model-checking views.
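
    As a back-of-the-envelope version of the kind of resource-aware reasoning being motivated, here is a Python sketch with an invented topology and numbers; it is not the WS-Engineer analysis.

      # Hypothetical resource-constraint check: can N concurrent requests block every
      # container thread on one host while the synchronous reply they are waiting for
      # must be served by the same host? If so, no thread is left to produce the
      # reply and none of the calls can complete. Topology and numbers are invented.
      def can_exhaust_threads(allocation, pool_size, concurrent_requests, caller, callee):
          if allocation[caller] != allocation[callee]:
              return False  # callee runs elsewhere, so blocked callers cannot starve it
          return concurrent_requests >= pool_size

      co_located = {"OrderProcess": "host-a", "StockService": "host-a"}
      separated = {"OrderProcess": "host-a", "StockService": "host-b"}

      print(can_exhaust_threads(co_located, pool_size=4, concurrent_requests=4,
                                caller="OrderProcess", callee="StockService"))  # True
      print(can_exhaust_threads(separated, pool_size=4, concurrent_requests=4,
                                caller="OrderProcess", callee="StockService"))  # False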
