October 2002

  • October 2
    Speaker: Christian Nentwich
    Title: Consistency Management with Repair Actions
    Abstract: In previous work, we have developed xlinkit, which can check distributed and heterogeneous documents for consistency. This talk provides a logical continuation - repairing documents once the inconsistent elements are identified. We present a new semantics for first order logic constraints that produces a complete set of interactive repair actions for any formula, such that no actions other than those generated can fix the problem (completeness) and any of the generated actions does fix the problem (correctness).

    We will further discuss architectural issues for putting repair systems into practice, and present examples of repair in the context of UML-based software development, including the repair of EJB deployment descriptors following cross-notational ("inter-viewpoint") inconsistencies.
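
    As a hedged illustration of the repair semantics (not an excerpt from the talk), consider a constraint in the spirit of the EJB example, stating that every bean class in the design model has a matching entry in the deployment descriptor:

        \forall c \in \mathit{Beans} \;.\; \exists d \in \mathit{Descriptors} \;.\; \mathit{name}(c) = \mathit{name}(d)

    If some bean b violates this constraint, the generated repair set would contain, roughly, the actions "delete b", "rename b to match an existing descriptor entry" and "add or rename a descriptor entry so that its name equals name(b)"; completeness and correctness then say that these are exactly the edits that restore consistency for that witness.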

  • October 9
    Speaker: Wolfgang Emmerich
    Title: What is Software Engineering Research?
    Abstract: At the beginning of the term, with many new members joining the group, it might be appropriate to reflect on the nature of software engineering research. We will be reviewing the different forms that software engineering research can take. We will investigate various ways of generating new research questions and review techniques by which these questions can be answered analytically, experimentally and empirically.

    We will discuss the activities involved in the "software engineering research lifecycle", including writing research grant proposals, hiring researchers, conducting the research, writing up and publishing the results and transferring them into industrial practice.

  • October 16
    Speaker: Andrew Dingwall-Smith
    Title: From Requirements to Monitors
    Abstract: Software systems are built on assumptions about the environment in which they will operate. Changes in the environment can therefore result in system failures that cannot easily be anticipated. Requirements monitoring, as part of the normal operation of a system, is necessary to identify these failures and to drive system evolution.

    In this seminar we describe our work on a framework for monitoring goal-oriented requirements specifications. The goals are specified formally using temporal logic. We will discuss how we automatically generate monitors from the temporal logic specifications to monitor the satisfaction or failure of goals. We will go on to discuss how we insert instrumentation into the monitored system to emit events to the monitors. This is done using AspectJ, an aspect-oriented extension to Java, which we will briefly describe.
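
    As a hedged sketch (class, method and event names here are illustrative, not taken from the framework): a goal such as "every request is answered within 100 ms" can be checked at runtime by a small event-driven monitor. In the approach described above, the calls that emit events would be woven into the monitored system by AspectJ, and the checking logic would be generated from the temporal logic goal rather than written by hand.

        import java.util.ArrayDeque;
        import java.util.Deque;

        // Toy runtime monitor for the goal "every REQUEST is followed by a
        // RESPONSE within 100 ms". Illustrative only.
        public class ResponseGoalMonitor {
            private static final long BOUND_MS = 100;
            private final Deque<Long> pendingRequests = new ArrayDeque<Long>();

            // Called by the instrumented system whenever an event occurs.
            public synchronized void emit(String event, long timestampMs) {
                if ("REQUEST".equals(event)) {
                    pendingRequests.addLast(timestampMs);
                } else if ("RESPONSE".equals(event) && !pendingRequests.isEmpty()) {
                    long issued = pendingRequests.removeFirst();
                    if (timestampMs - issued > BOUND_MS) {
                        System.err.println("Goal violated: response took "
                                + (timestampMs - issued) + " ms");
                    }
                }
            }

            public static void main(String[] args) {
                ResponseGoalMonitor monitor = new ResponseGoalMonitor();
                monitor.emit("REQUEST", 0);
                monitor.emit("RESPONSE", 250); // reported as a violation of the 100 ms bound
            }
        }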

  • October 23 (Room 209, 1pm)
    Speaker: Prof. Marta Kwiatkowska, University of Birmingham.
    Title: PRISM - Probabilistic Symbolic Model Checker
    Abstract: Probability is widely used in the design and analysis of software and hardware systems: as a means to derive efficient algorithms (e.g. the use of coin flipping and randomness in decision making); as a model for unreliable or unpredictable behavior (e.g. fault-tolerant systems, computer networks); and as a tool to analyse system performance (e.g. the use of steady-state probabilities in the calculation of throughput and mean waiting time). Probabilistic verification (or probabilistic model checking) refers to a range of techniques for calculating the likelihood of the occurrence of certain events during the execution of the system, and can be useful to establish properties such as "shutdown occurs with probability at most 0.01" and "video frame will be delivered within 5ms with probability at least 0.97".

    This talk introduces the probabilistic model checker PRISM being developed at the University of Birmingham. PRISM can be used to build a variety of probabilistic models (discrete- and continuous-time Markov chains and Markov decision processes). Models are specified using a simple state-based language, an extension of Reactive Modules. Two property specification languages are supported, PCTL (probabilistic computation tree logic) and CSL (continuous stochastic logic). PRISM is a symbolic model checker - its basic underlying data structures are BDDs and MTBDDs (multi-terminal BDDs), though for numerical computations three engines (sparse, MTBDD and hybrid) are supported.

    In the talk I will give a brief overview of issues in probabilistic model checking, outline the symbolic model construction together with selected algorithms, and finish with some examples. More information can be found at http://www.cs.bham.ac.uk/~dxp/prism/.
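
    For concreteness, the two example properties quoted above can be written in PCTL and CSL respectively, in roughly the following form (one plausible rendering, not taken from the talk):

        \mathrm{P}_{\le 0.01}\,[\, \mathrm{F}\ \mathit{shutdown} \,]
        \qquad
        \mathrm{P}_{\ge 0.97}\,[\, \mathrm{F}^{\le 5\,\mathrm{ms}}\ \mathit{frame\_delivered} \,]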

  • October 29 (Tuesday 13.00-14.00, room 203)
    Speakers: Gordon Blair and Paul Grace, Lancaster University
    Title: Reflective Middleware and its Application to Mobile Computing
    Abstract: Middleware has emerged as an important architectural component in modern distributed systems. However, it is now recognised that established middleware platforms such as CORBA, DCOM and .NET are not flexible enough to meet the needs of emerging distributed applications, featuring for example access to multimedia services and also support for mobile users. In particular, they are not sufficiently configurable and they do not support reconfiguration or longer-term evolution of architectures.

    Recently, a number of reflective middleware platforms have emerged in an attempt to overcome such problems. This talk will report on the Open ORB reflective middleware architecture developed at Lancaster University. This architecture features the use of component technologies, component frameworks and both structural and behavioural reflection.

    This architecture is currently being applied in a variety of application domains including mobile computing, programmable networks and the GRID. In the second part of the talk we will focus on the use of the technology in supporting mobile devices, in particular overcoming the high levels of heterogeneity that can be experienced in mobile and pervasive environments.

  • October 30
    Speaker: Dave Martin
    Title: Investigations into EJB-Based Middleware
    Abstract: Searchspace is a software company that has an enterprise-level product called the Intelligent Enterprise Framework. The middleware tier of this product consists of a Java-based application with CORBA as the underlying middleware used for client connectivity. This talk will cover the current state of our investigations into how EJB can be used to replace the existing middleware of this enterprise system.
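
    As a hedged, minimal sketch of the technology under investigation (the service name is hypothetical and this is not Searchspace code): in EJB 2.x a stateless session bean exposes its operations to remote clients through a remote interface and a home interface, which would take over the connectivity role currently played by the CORBA interfaces.

        import java.rmi.RemoteException;
        import javax.ejb.CreateException;
        import javax.ejb.EJBHome;
        import javax.ejb.EJBObject;

        // Remote interface of a hypothetical analysis service (EJB 2.x style).
        public interface AnalysisService extends EJBObject {
            String runCheck(String accountId) throws RemoteException;
        }

        // Home interface used by clients to obtain a reference to the bean.
        interface AnalysisServiceHome extends EJBHome {
            AnalysisService create() throws CreateException, RemoteException;
        }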

November 2002

  • November 13
    Speaker: Emil C. Lupu, Imperial College.
    Title: Policy-based Network and Security Management
    Abstract: Policy appears in many areas of today's research. From access control to network Quality of Service management, from virtual enterprises to ubiquitous systems, numerous academic and industrial efforts have sought or provided policy-based solutions in a variety of application areas. This talk will provide an overview of the Ponder policy framework developed at Imperial College over a number of years. Particular emphasis will be placed on the fundamental concepts which are common across policy-based solutions, on the significant research challenges which remain to be addressed and on the emerging application areas for policy-based systems.

    Dr. Emil Lupu is a lecturer in the Department of Computing at Imperial College. He has worked on policy-based network and security management for a number of years and is a co-founder of the Policy Workshop. His research interests include network and distributed systems management, distributed access control, security management and ubiquitous systems. Dr. Lupu was program co-chair of the Policy Workshop (2001) and of the IEEE Enterprise Distributed Object Computing Conference (2001). In addition, he has served on numerous program committees for conferences in network operations and management, access control and enterprise distributed systems.

  • November 20
    Speaker: Will Heaven
    Title: Using UML to support requirements engineering
    Abstract: The seminar will present two UML profiles for supporting goal-oriented requirements engineering. One profile is for modelling the lightweight approach to requirements developed by UCL for the Ubiquitous Web Applications (UWA) project, the other is for modelling requirements under the KAOS framework. Requirements models represented in the UML allow for effective integration - even interleaving - of high-level models and lower level design models, potentially leading to better specification documents. With KAOS, the option of UML support also opens up the approach to those unfamiliar with the non-standard KAOS notation, thereby increasing its usefulness. KAOS can be truly successful only if a large number of professionals are sufficiently convinced of its potential to use it in industrial cases. Use of the UML to support KAOS could help achieve this end.

  • November 27
    Speaker: Mirco Musolesi
    Title: Data replication and synchronization in a mobile environment
    Abstract: A fundamental issue in distributed systems is data replication; this represents a widely used technique to enhance service provision in several application scenarios. For example, data may be replicated in order to share workload or to increase availability, especially in a mobile context, where disconnections are probable and, in some cases, frequent. On the other hand, distributed data are not necessarily consistent, because, for instance, they may be out of date: for this reason, disconnected operations are only feasible if users (or applications) can cope with stale data and can resolve any conflicts that arise.

    The aim of the XMIDDLE project is the design and the implementation of a data sharing middleware for ad-hoc networks. The data structures that are stored by XMIDDLE are XML documents, which can be semantically associated with trees. The reconciliation process that we implement is executed transparently to users and can be easily adapted by developers according to their application requirements. In this seminar we will describe the data replication and synchronization techniques used to deal with possible data conflicts, using XML technologies, such as XML Schema.

December 2002

  • December 4
    Speaker: David Bush, Senior System Engineer at NATS and Engineering Doctorate student at UCL
    Title: Assessing Long Term Requirement Stability
    Abstract: NATS plans, provides and operates a safe, integrated Air Traffic Management Service for the United Kingdom. The domain is one of impending rapid change, and in this uncertain environment NATS finds itself with its technical infrastructure representing about 70% of its fixed assets. This infrastructure is expensive to procure and has a very long expected lifetime in service. Developing new infrastructure that is robust against possible future changes represents a significant challenge. This goal, developing robust requirements leading to robust systems (E-Type Systems), needs a clear method for assessing how stable the requirements are likely to be in the very long term. This need is the motivation for the work described in the presentation.

    There is a clear gap in research and practice in addressing the stability (or volatility) of the requirements of E-Type systems - at least in assessing this stability early in the requirements lifecycle. This presentation describes a new approach for early assessment of the stability of system requirements. The approach fuses two established techniques; the novel contribution lies in bringing together existing work in Scenario Planning and Goal Directed Requirements Engineering. The presentation will describe its view of the problem of Requirements Stability, introduce Environmental Scenarios and illustrate how they can be linked with Goal Based Requirements approaches to address the problem. Using a NATS case study, the presentation will walk through the Goal Stability Assessment process using the Goal Stability Assessment Tools developed to support the process.

  • December 11
    Speaker: Rami Bahsoon
    Title: ArchOptions: A Real Options-Based Model for Predicting the Stability of Software Architectures
    Abstract: Architectural stability refers to the extent to which an architecture is flexible enough to endure evolutionary changes in stakeholders' requirements and the environment while leaving the architecture intact. Approaches to evaluating software architectures for stability can be retrospective or predictive. Both approaches start with the assumption that the software architecture's primary goal is to guide the system's evolution. Retrospective evaluation looks at successive releases of the software system to analyse how smoothly the evolution took place. Predictive evaluation examines a set of likely changes and the extent to which the architecture can endure these changes. Our work takes a predictive approach to the evaluation of software architectures for stability; it uses value-based reasoning (EDSER1-4) and exploits real options theory for prediction.

    In this talk, we explain why real options theory appears to be well suited to predicting architectural stability. We present ArchOptions, a novel model that exploits options theory to predict architectural stability. We show how we have derived the model from Black and Scholes options theory (Nobel Prize winning). We discuss the analogy and assumptions made to reach the model, its formulation, possible interpretations, and usages. We discuss related and future work.
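
    For reference, the Black-Scholes value of a European call option, from which the ArchOptions analogy is drawn, is:

        C = S_0\, N(d_1) - X e^{-rT} N(d_2), \qquad
        d_1 = \frac{\ln(S_0/X) + (r + \sigma^2/2)\,T}{\sigma\sqrt{T}}, \qquad
        d_2 = d_1 - \sigma\sqrt{T}

    where S_0 is the current value of the underlying asset, X the exercise price, r the risk-free interest rate, \sigma the volatility, T the time to expiry and N the standard normal cumulative distribution function.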

January 2003

  • January 29
    Speaker: James Skene
    Title: Model Driven Performance Analysis
    Abstract: The Model Driven Architecture (MDA) is a software development methodology endorsed by the Object Management Group, in which UML models are the central development artefact. The MDA was conceived to address the problem of architectural tie-in during design. Designs are first expressed in technology-neutral Platform Independent Models (PIMs), which capture the structure and semantics of particular business domains. These are then refined to Platform Specific Models (PSMs), which incorporate architectural detail.

    The key research challenge associated with the MDA is finding ways to transform models in a semantics preserving manner, for example from a PIM to a PSM. In my talk I will outline some ideas as to how these transformations can be described using the Object Constraint Language (OCL) at the UML meta-model level using the UML's lightweight extension mechanisms. I will also describe how some formal methods, particularly performance analysis, can be straightforwardly integrated into CASE tools supporting the MDA, by defining a semantic domain for the formalism, and describing mappings from designs to that domain.

February 2003

  • February 5
    Speaker: Ben Butchart
    Title: Harnessing the Grid with OGSA
    Abstract: The Open Grid Services Architecture is an attempt to integrate Web service technologies with existing Grid standards so that developers of grid applications can benefit from the broad commercial support for Web service standards. The OGSA specification borrows from the Web Service toolkit mechanisms for service description and discovery (WSDL, WSIL), automatic generation of client and server code from descriptions, binding to network protocols and support for other emerging higher level standards built on core Web service technologies (BPEL). At the same time OGSA leverages experience from the Grid community in supporting resource and job management, information discovery, security and notification.

    We illustrate OGSA design features with reference to a fully productive e-Science application that we have modified to operate in a Grid environment using early manifestations of the Globus Toolkit 3.0. Computational chemists developed the original application combining functionality from several Fortran programs to create a process to predict stable crystal forms from an initial molecular structure. The process is extremely hungry for CPU time and even a simple analysis currently takes six to eight weeks to run. Apart from the limitation of running code on a single processor, scientists have to manually copy and reformat files so different parts of the process can run on platforms most suitable for a particular calculation. This lack of automation and resource coordination frequently results in errors forcing a total or partial rerun of the analysis. We show how an OGSA implementation facilitates the distribution of such applications across a large network of heterogeneous platforms radically improving performance of the system through parallel CPU capacity, coordinated resource management and automation of the business process.

  • February 12
    Speaker: Andy Hughes
    Title: The policy configuration language for a software system designed to improve the engineering of router software components
    Abstract: Over the last decade, the area of programmable networking seems to have matured significantly; it now attracts interest from both academic and industrial institutions. Many projects have either focused on or made use of work done in this area; however, until now, programmable networks have not been used to improve the process of engineering router software. This project does just this. Our previous publications outline a software system that makes the experimental analysis of routers---and hence router software---more accessible to network researchers. The software system provides a configuration system designed to manage a testbed consisting of multiple programmable routers. It allows administrators to quickly and easily configure the testbed using a high level policy language. This presentation discusses this language and presents rationale for design choices.

  • February 19
    Speaker: Davide Lamanna
    Title: SLAng: A language for defining Service Level Agreements
    Abstract: Application or web services are increasingly being used across organisational boundaries. Moreover, new services are being introduced at the network and storage level. Languages to specify interfaces for such services have been researched and transferred into industrial practice.

    We investigate end-to-end quality of service (QoS) and highlight that QoS provision has multiple facets and requires complex agreements between network services, storage services and middleware services. We introduce SLAng, a language for defining service level agreements that accommodates these needs.

    We illustrate how SLAng is used to specify QoS in a case study that uses a web services specification to support the processing of images across multiple domains in a quality of service aware manner. We evaluate SLAng based on the experience gained from this case study.

  • February 26
    Speaker: Joe Lewis-Bowen
    Title: Modelling EGSO Architecture
    Abstract: Grid software systems should use architectural styles that support their non-functional requirements. However, many projects seem to arrive at their architecture in an uninformed way. A dynamic FSP model may demonstrate that an architecture meets operational requirements by animating core behavioural scenarios. This technique has been applied to EGSO, an astronomy data-grid, complementing its static UML architectural description.

March 2003

  • March 5
    Speaker: Torsten Ackemann
    Title: Incentives for Cooperation in Peer-to-Peer Networks
    Abstract: In recent years, a new paradigm for networking architectures has been found, or rather re-discovered. This moves away from the dominant client-server architecture, where one or more centralised servers serve a number of clients, to a system where any node can take either role. This model is referred to as peer-to-peer.

    Peer-to-peer networks became popular quickly due to file sharing applications such as Napster or Gnutella. In these applications, people would let their computers cooperate, so that every user could download the other user's files, and vice versa.

    Despite this unquestionable success, the actual service quality in file sharing networks still leaves much to be desired. The download speed for a given file is usually far from the line maximum, and when it is, there is nothing a node can do to speed it up. The quality attributes of downloaded files, such as the level of compression or even just the completeness of the content, are also almost entirely unpredictable.

    To stimulate cooperation in peer-to-peer networks, and to improve both service performance and service reliability, we suggest a bartering-based approach to provide incentives for nodes to offer services to other nodes. For this, we adopt a service-oriented view of the peer-to-peer network. This allows the inclusion of other peer-oriented networks and services, such as Grid networks and message forwarding.

  • March 12
    Speaker: Thomas Alspaugh (UCI)
    Title: Validating an integrated scenario strategy
    Abstract: The use of scenarios or use cases to specify a system's requirements involves a number of challenges in practice. Integrated Syntactic Analysis is a strategy for supporting this use of scenarios. ISA uses a collection of interrelated syntactic analyses and techniques to guide the creation and management of a collection of scenarios. Automated support is necessary for ISA and is provided by the software tool "SMaRT". The value of such a technique depends on its effectiveness in practice, and this effectiveness is examined using a comparison between the results of three case studies.

  • March 20 - Thursday, 2-3pm.
    Speaker: Giovanni Denaro
    Title: Formal Verification of Safety Critical Software
    Abstract: Safety critical systems are required to be highly reliable, and thus special care is taken when verifying their software. This seminar discusses alternatives for verification in this field and presents a verification technique based on symbolic execution. Symbolic execution allows for computing the reachable states of programs with unbounded input variables. Thus, symbolic execution may provide execution models which are suitable for checking relevant safety properties. A case study conducted on a software component of the traffic alert and collision avoidance system (TCAS) exemplifies the technique and provides empirical evidence of its practical applicability.
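
    As a hedged toy example (this is not TCAS code): symbolic execution runs the fragment below on a symbolic altitude difference D = ownAlt - otherAlt instead of concrete numbers, and characterises each outcome by a path condition over D, against which a safety property such as "no advisory is issued while |D| <= 300" can then be checked.

        // Toy advisory logic over an unbounded integer input (illustrative only).
        public class ToyAdvisory {
            // Symbolic execution with D = ownAlt - otherAlt yields three paths:
            //   D > 300           -> returns  1
            //   D < -300          -> returns -1
            //   -300 <= D <= 300  -> returns  0
            static int advisory(int ownAlt, int otherAlt) {
                int diff = ownAlt - otherAlt;
                if (diff > 300) {
                    return 1;   // climb-side advisory
                }
                if (diff < -300) {
                    return -1;  // descend-side advisory
                }
                return 0;       // no advisory
            }

            public static void main(String[] args) {
                System.out.println(advisory(1200, 800)); // prints 1 (diff = 400 > 300)
            }
        }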

  • March 26 - Library (room 108).
    Speaker: Stefanos Zachariadis
    Title: Software in Mobility - Exploiting Logical Mobility Techniques in Mobile Computing Middleware
    Abstract: Recent trends in computing show an increased decentralisation of computing environments, with users starting to own mobile devices (PDAs, mobile phones, laptops, etc.) of increased power. This seminar discusses the advantages of using logical mobility (code mobility) techniques in a mobile environment, the principles of doing so, as well as the implementation of a particular system that allows this.

April 2003

  • April 2
    Speaker: Malika Boulkenafed, INRIA
    Title: AdHocFS: Ad hoc Distributed File System for Mobile Users
    Abstract: Among other features, pervasive computing aims at offering access to users' data, anytime, anywhere, from whatever available terminal is most convenient in a given situation. However, achieving such a goal requires a number of improvements in the way servers and users' terminals interact. In particular, users' terminals should not greatly rely on an information server, which may be temporarily unavailable in a mobile situation. Rather, they should exploit all the information servers available in a given context through loose coupling with both fixed servers and peer mobile terminals.

    We will present the AdHocFS file system for mobile users, which allows collaborative data sharing among ad hoc groups that are dynamically formed according to the connectivity achieved by the ad hoc WLAN. It enhances, in particular, data availability within mobile ad hoc collaborative groups, and integrates a new adaptive data replication protocol for mobile terminals, combining both optimistic and conservative schemes in a way that minimizes communication, and hence energy consumption, while not restricting the nodes' autonomy.

  • April 9
    Speaker: Prof. Paola Inverardi, University of L'Aquila
    Title: A Declarative Framework for Adaptable Applications in Heterogeneous Environments
    Abstract: In the near future applications will run on a variety of devices (laptops, personal digital assistants, cellular phones, communicators, etc.). The same communication infrastructure will be accessed by devices which are different with respect to quantitative and qualitative characteristics (e.g., memory size, computational power, display capabilities, supported protocols, etc.). Moreover, what today is seen as a discrete set of well-characterized different types of devices is going to become a virtually infinite range of heterogeneous devices, each one with its own set of capabilities. In order to prevent execution failures, applications have to be aware of this potentially infinite diversity. This paper presents a framework to develop applications which can be correctly adapted to a dynamically provided context. We have chosen to attack this problem by using a declarative and deductive approach. Inspired by Proof Carrying Code techniques, we use first-order logic formulas to model both the behavior of the code, with respect to the properties of interest, and the execution context. The adaptation process is carried out by using theorem proving techniques and, in particular, the proof assistant HOL4. The aim is to derive a formal proof that the behavior of the code can be correctly adapted to the given context. By construction, the proof, if it exists, gives information on how the adaptation has to be done.

  • April 30
    Speaker: Daniel Dui
    Title: Compatibility of XML Language Versions
    Abstract: Individual organisations as well as industry consortia are currently defining application and domain-specific languages using the eXtensible Markup Language (XML) standard of the World Wide Web Consortium (W3C). The paper shows that XML languages differ in significant aspects from generic software engineering artifacts and that they therefore require a specific approach to version and configuration management. When an XML language evolves, consistency between the language and its instance documents needs to be preserved in addition to the internal consistency of the language itself. We propose a definition for compatibility between versions of XML languages that takes this additional need into account. Compatibility between XML languages in general is undecidable. We argue that the problem can become tractable using heuristic methods if the two languages are related in a version history. We propose to evaluate the method by using different versions of the Financial products Markup Language (FpML), in the definition of which we participate.
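
    As a hedged sketch of the naive, instance-level check that motivates the problem (file names are hypothetical, and this is not the heuristic method proposed in the talk): one can test whether a document written against an old version of a language still validates against the new version, but checking document by document cannot establish compatibility of the languages themselves.

        import java.io.File;
        import javax.xml.XMLConstants;
        import javax.xml.transform.stream.StreamSource;
        import javax.xml.validation.Schema;
        import javax.xml.validation.SchemaFactory;
        import javax.xml.validation.Validator;

        // Validates one existing instance document against a newer schema version.
        public class InstanceCompatibilityCheck {
            public static void main(String[] args) throws Exception {
                SchemaFactory factory =
                        SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
                Schema newVersion = factory.newSchema(new File("fpml-new-version.xsd"));
                Validator validator = newVersion.newValidator();
                try {
                    validator.validate(new StreamSource(new File("old-trade-document.xml")));
                    System.out.println("Old instance still valid against the new version");
                } catch (org.xml.sax.SAXException e) {
                    System.out.println("Incompatibility witnessed: " + e.getMessage());
                }
            }
        }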

May 2003

  • May 14
    Speaker: Ben Butchart
    Title: Cargo Cult Design: A case study of good intentions gone wrong
    Abstract: In the South Seas there is a "Cargo Cult" people. These people got used to aeroplanes landing during the War bringing lots of good things. When the war was over they wanted it to continue. So they prepared a flat strip of land for planes to land on and lined the strip with torches at night. They built a small wooden hut with wooden sticks on the top that looked a bit like radio antennae. They even had a man in a hut wearing wooden headphones. But the planes didn't come anymore. Although the islanders were doing everything right, they were missing something essential. I think it is often the same in software design. Although we have the form right, we miss the essential thing that makes the planes land.

    In this seminar I'd like to share my experience working on a project for a large German bank where we had all the things in place that we associate with a state-of-the-art (then, at least) object-oriented software project. We used an object-oriented programming language (Java), we did modelling using UML tools, we used an expensive Application Server with Enterprise Java Beans. Everything looked right, but in truth, we did not have a 3-tier information system any more than the Cargo Cult islanders had an airport. So I'd like to explain how the architecture failed to deliver re-usability, performance and maintainability despite all the Java technology, UML diagrams and middleware. This is not an academic paper and this isn't my area of research so I don't really have a research agenda for this "Cargo Cult" phenomenon. I simply intend to use the spare seminar slot to share my own experience with you and discuss the problems of maintaining the principles of good design in a real project.

  • May 21
    Speaker: Peter Saffrey and Anthony Finkelstein
    Title: Meta-modelling: An approach to managing and integrating diverse biological models
    Abstract: One of the major challenges of contemporary science is to 'scale-up' our knowledge of micro-level phenomena to yield an understanding of macro-level phenomena. This challenge is particularly evident in biology where our growing knowledge of molecular and cell biology has still to be harnessed in such a way as to give a better understanding of gross physiological issues such as the behaviour of organs.

    Low-level biological behaviour is now being explored using 'biological modelling', a diverse family of techniques for representing and testing our understanding of biological systems and their emergent behaviour. Scaling up these models, through integration and combination, is a huge challenge, since each model may address a different aspect of a system, use different assumptions, or be based upon an entirely separate modelling paradigm.

    The integration of models requires a detailed understanding of modelling itself. Therefore, in order to understand modelling we have constructed a meta-model, which we will eventually use as a means to integrate and scale-up models. In this talk, we will discuss our progress and preliminary findings.

  • May 28 - room 214.
    Speaker: Clare Gryce
    Title: A View from the Top - Requirements and Architecture in the EGSO Project
    Abstract: The requirements and architecture of any complex software system are intimately related. The 'Twin Peaks' model suggests a development process that explicitly focuses on these two concerns, promoting their concurrent and independent evolution. In this presentation, we discuss how we are negotiating these 'Twin Peaks' in a real world project, EGSO (the European Grid of Solar Observations). In EGSO, the system requirements and architectural definition have formed two distinct areas of activity. As is typical for any non-trivial development project, different stakeholder groups came to the EGSO project with a diversity of opinions, experiences and expectations. This resulted in a number of specific difficulties for the development of both the requirements and architecture. In this presentation, we informally review how software engineering approaches and methodologies have been used in the evolution of the requirements and architecture to date. We note the issues they were intended to address, and the ways in which they succeeded and failed. We also consider the broader context of the project, and directions for future progress suggested by the Twin Peaks model.

June 2003

  • June 4
    Speaker: Javier Morillo
    Title: Improve xlinkit caching (precache using state machines)
    Abstract: xlinkit is a flexible application service that checks the integrity of distributed documents, databases and web content. It provides powerful diagnostics that generate hyperlinks between elements that violate data integrity. This project determines how the checking phase can be improved by precaching some XPath expressions from the rule set. In order to recognise precacheable XPath expressions, state machines and graphs are created. An algorithm is used to obtain an NFA graph which represents an XPath expression within each group of state machines.
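
    As a hedged illustration of the general idea only (the state-machine/NFA analysis that decides which expressions are precacheable is not shown, and the class name is hypothetical): repeated rule checking can avoid redundant work by compiling each XPath expression once and caching its results for a given document.

        import java.util.HashMap;
        import java.util.Map;
        import javax.xml.xpath.XPath;
        import javax.xml.xpath.XPathConstants;
        import javax.xml.xpath.XPathExpression;
        import javax.xml.xpath.XPathFactory;
        import org.w3c.dom.Document;
        import org.w3c.dom.NodeList;

        // Caches compiled XPath expressions and their results for one document.
        public class XPathCache {
            private final XPath xpath = XPathFactory.newInstance().newXPath();
            private final Map<String, XPathExpression> compiled = new HashMap<String, XPathExpression>();
            private final Map<String, NodeList> results = new HashMap<String, NodeList>();

            public NodeList select(Document doc, String expression) throws Exception {
                NodeList cached = results.get(expression);
                if (cached != null) {
                    return cached; // reuse the earlier evaluation
                }
                XPathExpression expr = compiled.get(expression);
                if (expr == null) {
                    expr = xpath.compile(expression); // compile once per expression
                    compiled.put(expression, expr);
                }
                NodeList result = (NodeList) expr.evaluate(doc, XPathConstants.NODESET);
                results.put(expression, result);
                return result;
            }
        }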

This page was last modified on 18 Oct 2013.