2004/05 Archive

  • October 2004

  • October 4
    Speaker: Alessio Lomuscio
    Title: Specification and verification of multiagent systems.
    Abstract: In this talk I shall argue that techniques based on modal-logic formalisms provide a powerful tool for the specification and verification of multiagent systems. The talk will consist of two parts. In the first part, the idea of specifying multiagent systems by means of temporal, epistemic and deontic logics will be put forward and the main results presented. In the second, attention will be given to the problem of verifying that a multiagent system follows such specifications; in particular, techniques for verification by model checking via OBDDs and SAT will be introduced and, time permitting, demonstrated.

  • October 18
    Speaker: Ivan Flechais
    Title: AEGIS - a methodology for the development of secure and usable systems
    Abstract: Security is a complex and important non-functional requirement of software systems. According to Ross Anderson, "Many systems fail because their designers protect the wrong things, or protect the right things in the wrong way". Surveys also show that security incidents in industry are rising, which highlights the difficulty of designing good security. Some recent approaches have targeted security from the technological perspective, others from the human computer interaction angle, offering better user interfaces for the definition of high level user security. No research has yet studied the relationship between security and usability requirements. In this seminar I will describe AEGIS, a methodology for the development of secure and usable systems. AEGIS defines a development process and a UML meta-model of the definition and the reasoning over the system's assets. AEGIS has been applied to case studies in the area of Grid computing and I will present some of the results.

  • October 25
    Speaker: Licia Capra
    Title: Engineering Human Trust in Mobile System Collaborations
    Abstract: Rapid advances in wireless networking technologies have enabled mobile devices to be connected anywhere and anytime. While roaming, applications on these devices dynamically discover hosts and services with whom interactions can be started. However, the fear of exposure to risky transactions with potentially unknown entities may seriously hinder collaboration. To minimise this risk, an engineering approach to the development of trust-based collaborations is necessary. In this talk we present hTrust, a human trust management model and framework that facilitates the construction of trust-aware mobile systems and applications. In particular, hTrust supports: reasoning about trust (trust formation), dissemination of trust information in the network (trust dissemination), and derivation of new trust relationships from previously formed ones (trust evolution). The framework views each mobile host as a self-contained unit, carrying along a portfolio of credentials that are used to prove its trustworthiness to other hosts in an ad-hoc mobile environment. Customising functions are defined to capture the natural disposition to trust of the user of the device inside our trust management framework.

  • November 2004

  • November 1
    Speaker: Paul Brebner
    Title: Grid Middleware - Principles, Practice, and Potential
    Abstract: What are the principles of Grid middleware? We explore the intended use and architecture of OGSA/OGSI-based Grid middleware, and compare it with a typical enterprise technology, J2EE.

    How easy is it to use in practice? What are the pitfalls? We then reveal the initial results of a project to explore the issues surrounding installing, configuring and securing an exemplar OGSA Middleware (GT3) across organisations, and give some preliminary performance indications.

    What potential does Grid middleware have to: (1) provide insight into different ways of using Service Oriented Architectures, and (2) support automatic deployment and debugging? We present a more sophisticated approach for evaluating GT3, using an architectural/role/scenario based comparison of two alternative designs for a Grid application, implemented using different GT3 mechanisms.

    Finally we ponder the possibilities for automatic Grid deployment, and an approach for enhancing Grid debugging support aided by knowledge of deployment context.

  • November 15
    Speaker: Costin Raiciu
    Title: Code Collection to Support Large Applications on Mobile Devices
    Abstract: The progress of mobile device technology unfolds a new spectrum of applications that challenges conventional infrastructure models. Most of these devices are perceived by their users as "appliances" rather than computers and accordingly the application management should be done transparently by the underlying system unlike classic applications managed explicitly by the user. Memory management on such devices should consider new types of mobile applications involving code mobility such as mobile agents, active networks and context aware applications. This paper describes a new code management technique, called "code collection" and proposes a specific code collection algorithm, the Adaptive Code Collection Algorithm (ACCAL). Code collection is a mechanism for transparently loading and discarding application components on mobile devices at runtime that is designed to permit very low memory usage and at the same time good performance by focusing memory usage on the hotspots of the application. To achieve these goals, ACCAL uses properties specific to executable code and enhances conventional data management methods such as garbage collection and caching. The results show that fine-grained code collection allows large applications to execute by using significantly less memory while inducing small execution time overhead.
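
    The idea of transparently loading and discarding code components can be approximated with a least-recently-used cache. The sketch below is purely illustrative: ACCAL's adaptive policy is replaced by plain LRU, and the `loader` callback is a hypothetical stand-in for the platform's component loader.

```python
from collections import OrderedDict

class CodeCache:
    """Illustrative LRU cache of loaded code components. The real ACCAL
    policy is adaptive; plain LRU is used here only as a sketch."""

    def __init__(self, capacity, loader):
        self.capacity = capacity       # max number of resident components
        self.loader = loader           # callback that (re)loads a component
        self.resident = OrderedDict()  # name -> component, in LRU order

    def get(self, name):
        if name in self.resident:
            self.resident.move_to_end(name)  # mark as recently used
        else:
            if len(self.resident) >= self.capacity:
                self.resident.popitem(last=False)  # discard coldest component
            self.resident[name] = self.loader(name)
        return self.resident[name]
```

    A real implementation would additionally pin components that are currently on the call stack, since discarding executing code is unsafe.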

  • November 29 - Room 203
    Speaker: Gunnar Schroeter
    Title: The abstract concepts behind modern modeling techniques (from the viewpoint of formal modeling theory)
    Abstract: This talk presents the abstract (formal) ideas behind composition and decomposition, abstraction and refinement, and separation and integration, and shows how these concepts interact.

  • December 2004

  • December 13
    Speaker: Genaina Rodrigues
    Title: An Automated Approach for Predicting Software System Reliability
    Abstract: Scenarios are a popular means for capturing behavioural requirements of software systems early in the lifecycle. Scenarios show how components interact to provide system level functionality. If component reliability information is available, scenarios can be used to perform early system reliability assessment. In this paper we present a novel automated approach for predicting software system reliability. The approach involves extending a scenario specification to model (1) the probability of component failure, and (2) scenario transition probabilities derived from an operational profile of the system. From the extended scenario specification, probabilistic behaviour models are synthesized for each component and are then composed in parallel into a model for the system. Finally, a user-oriented reliability model described by Cheung is used to compute a reliability prediction from the system behaviour model. The contribution of this paper is a reliability prediction technique that takes into account the component structure exhibited in the scenarios and the concurrent nature of component-based systems. We also show how implied scenarios induced by the component structure and system behaviour described in the scenarios can be used to evolve the reliability prediction.
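
    Cheung's user-oriented model computes system reliability from per-component reliabilities and a control-transfer probability matrix, via the standard absorbing-Markov-chain formulation. The sketch below is a minimal pure-Python rendering of that textbook formula; the numbers in the usage note are invented for illustration, not taken from the talk.

```python
def solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination (small dense systems)."""
    n = len(b)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0.0:
                f = M[r][col] / M[col][col]
                for c in range(col, n + 1):
                    M[r][c] -= f * M[col][c]
    return [M[i][n] / M[i][i] for i in range(n)]

def cheung_reliability(P, r):
    """Cheung-style reliability prediction (sketch).
    P[i][j]: probability that control transfers from component i to j.
    r[i]: reliability of component i. Component 0 is the entry point,
    the last component is the exit. Returns predicted system reliability."""
    n = len(r)
    # Q[i][j]: component i executes correctly AND transfers control to j.
    Q = [[r[i] * P[i][j] for j in range(n)] for i in range(n)]
    # System reliability = S[0][n-1] * r[n-1], where S = (I - Q)^{-1}.
    A = [[(1.0 if i == j else 0.0) - Q[i][j] for j in range(n)]
         for i in range(n)]
    e = [0.0] * n
    e[-1] = 1.0
    s0 = solve(A, e)[0]   # entry of (I - Q)^{-1} via one linear solve
    return s0 * r[-1]     # the final component must also execute correctly
```

    For a two-component pipeline with r = [0.9, 0.95] and certain transfer from the first component to the second, the prediction is 0.9 × 0.95 = 0.855; loops in P correctly inflate the failure exposure of the looped components.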

  • January 2005

  • January 10
    Speaker: Alistair Harris
    Title: Strategies for Selecting Repair Actions
    Abstract: Consistency management of distributed documents in computer-supported cooperative work is critical in most organisations for three reasons. First, actors and their participants may work in a computer-supported, distributed fashion. Second, several of these actors, located at different sites, may collaboratively author artefacts. Third, these actors and their participants may be physically located or collocated at different sites.

    Consistency checking of documents encompasses the writing of constraints between related documents and the identification, location and repair of inconsistencies. After a consistency check is performed, a number of inconsistencies may be identified and located. A tool is then used to analyse constraints and generate repair actions. Consequently, a set of at least two repair actions is generated to remove an inconsistency. There are two reasons for this: first, an inconsistency occurs between at least two elements; second, the mechanism for generating repair actions is based on first-order logic, which is undecidable. Moreover, all generated repair actions for a particular constraint may equally repair that inconsistency. So which repair action should the user select? This is a conflict.

    In this seminar I discuss the use of strategies (heuristics) for supporting the user in selecting a repair action. This selected repair action should give the user a level of confidence.

  • January 17
    Speaker: Leticia Duboc
    Title: Impact Analysis of Database Schema Changes
    Abstract: Many Enterprise Systems deployments comprise a base software product associated with a set of extensions to meet specific business requirements for a customer. This collection of systems, similar in some important aspects yet different in others, is known as a Product Family.

    Experience shows that when products of a family are used in similar domains, extensions present commonalities, which should be re-integrated to enhance the features of the base framework. This is not a straightforward process, as candidate changes may cause unexpected errors if applied to existing deployments. In particular, when considering database schema variations, the impact may occur not only in static SQL queries embedded in the code, but also in dynamically generated SQL queries and associated code.

    This seminar will argue that this is a real problem facing the software industry across a variety of domains. This will be reinforced by two examples: a commercial product family in the financial area and a set of related systems in bioinformatics. In addition, I will discuss research that attempts to partially solve the problem, and its results when applied to one of the case studies above.

  • January 20
    Speaker: Andrea Bracciali
    Title: A Coordination-based Methodology for Security Protocol Verification
    Abstract: The quest for the formal certification of properties of systems is one of the most challenging research issues in the field of formal methods. It requires the development of formal models together with effective verification techniques. We describe a formal methodology for verifying security protocols based on ideas borrowed from the analysis of open systems, where applications interact with one another by dynamically sharing common resources and services in a not fully trusted environment. The methodology is supported by ASPASyA, a tool based on symbolic model checking techniques.

  • January 24
    Speaker: Paolo Costa
    Title: Semi-probabilistic Content-based Publish-Subscribe
    Abstract: Mainstream approaches to content-based distributed publish-subscribe typically route events deterministically based on information collected from subscribers, and do so by relying on a tree-shaped overlay network. While this solution achieves scalability in fixed, large-scale settings, it is less appealing in scenarios characterized by high dynamicity, e.g., mobile ad hoc networks or peer-to-peer systems. At the other extreme, researchers in the related fields of multicast and group communication have successfully exploited probabilistic techniques that provide increased fault tolerance, resilience to changes, and yet are scalable.

    In this talk, we will introduce a novel approach where event routing relies on deterministic decisions driven by a limited view of the subscription information and, when this is not sufficient, resorts to probabilistic decisions made by selecting links at random. A description of the algorithm will be followed by a brief overview of simulation results, aimed at showing that the particular mix of deterministic and probabilistic decisions we put forth in this work is very effective at providing high event delivery and low overhead in highly dynamic scenarios, without sacrificing scalability.
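
    The hybrid routing decision described above can be sketched in a few lines. This is an illustrative reduction, not the speaker's algorithm: the subscription table, link names and `fanout` parameter are invented for the example.

```python
import random

def route_event(event_topic, links, known_subs, fanout=2, rng=random):
    """Forward an event deterministically to links whose known subscription
    information matches the topic; when no match is known, fall back to a
    probabilistic decision over a random subset of links (sketch only)."""
    matching = [l for l in links if event_topic in known_subs.get(l, set())]
    if matching:
        return matching                  # deterministic decision
    k = min(fanout, len(links))
    return rng.sample(links, k)          # probabilistic fallback
```

    The attraction of the mix is that the deterministic branch keeps overhead low when subscription views are accurate, while the random branch preserves delivery when the view is stale, as it often is in mobile ad hoc settings.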

  • January 31
    Speaker: Franco Raimondi
    Title: Model checking multi-agent systems
    Abstract: This talk presents a methodology for the formal verification of temporal, epistemic, deontic, and cooperation modalities in multi-agent systems via model checking. This methodology extends standard OBDD-based techniques for temporal logic model checking. Various examples will be introduced to motivate this research, and an implementation of the algorithms will be shown.

  • February 2005

  • February 14
    Reading week - no seminar.

  • February 21
    Speaker: Ben Butchart
    Title: Sedna - A Graphical BPEL Editor for Scientific Workflows
    Abstract: We outline the design and implementation of a graphical editor for modelling scientific workflows as orchestrated Web Service interactions using the Business Process Execution Language. The early design envisaged a generic workflow editor based loosely on UML activity diagrams with a level of abstraction roughly corresponding to that of the BPEL specification. Usability tests reveal that this level of abstraction is not intuitive to scientists who have no knowledge of distributed systems. We show that extending a generic editor with domain- and application-specific components radically improves the usability of the software for this user group. We discuss the role of the Eclipse Plugin Development Framework as a mechanism for rapid development of high-level components targeting specific user groups. We consider the feasibility of a tool that would enable users with little or no knowledge of distribution technologies to add domain-specific editing tools without any support from engineers. We conclude that the ability to author tools for designing workflows is crucial to the acceptance of service-oriented systems.

  • February 28
    Speaker: Wolfgang Emmerich
    Title: Validation with Experiments
    Abstract: Once a research hypothesis has been developed it needs to be validated. We present a brief overview of principled approaches for validation, including proof, simulation, experimental and empirical methods. A lot of systems research calls for experimental evaluation. We review the literature on experiment design and survey approaches that can be used in software engineering and distributed systems design.

  • March 2005

  • March 7
    Speaker: Andrew Dingwall-Smith
    Title: Building Run-Time Monitors from Goal-Oriented Requirements
    Abstract: This talk describes work in run-time monitoring of goal-oriented requirements specifications which are defined using temporal logic. This is achieved by instrumenting the monitored system (using AspectJ) so that events are emitted which can be used to determine whether the behaviour of the system complies with the requirements. The instrumentation must bridge the gap between the implementation of the system and the requirements model. The talk will discuss how instrumentation to achieve this objective can be built using AspectJ.
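
    AspectJ weaves event-emitting advice into the monitored system at method boundaries. As a language-neutral analogue (a sketch, not the speaker's implementation), a Python decorator can emit comparable entry/exit events to a monitor; the `EVENTS` channel and `withdraw` function here are invented for illustration.

```python
import functools

EVENTS = []  # stand-in for the event channel feeding the requirements monitor

def monitored(fn):
    """Emit entry/exit events around each call, analogous to before/after
    advice woven by AspectJ (illustrative sketch only)."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        EVENTS.append(("enter", fn.__name__, args))
        result = fn(*args, **kwargs)
        EVENTS.append(("exit", fn.__name__, result))
        return result
    return wrapper

@monitored
def withdraw(balance, amount):
    return balance - amount
```

    A separate monitor process would then check the emitted event stream against the temporal-logic formulas derived from the goal model, which is where the gap between implementation events and requirements-level terms must be bridged.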

  • March 7
    Speaker: Jinshan Liu
    Title: Towards supporting QoS-aware service composition in mobile ad hoc networks
    Abstract: The ease of deployment makes MANETs an attractive choice in a variety of environments, e.g., pervasive computing. However, service deployment and composition on such a distributed system face challenges including: (1) the limited computation power and communication capacity of thin devices, (2) the lack of infrastructure, and (3) the nodes' mobility and transient nature of the wireless connection. In this context, we are devising base middleware functionalities towards supporting the dynamic composition of mobile services over MANETs.

    Our proposition starts with a service specification that takes into account not only the quality of service provided to the user, but also the resource consumption introduced on the service host. With this service specification, we introduce an incentive-compatible mechanism for service allocation and an associated distributed reputation mechanism to stimulate and facilitate service provision/consumption in selfish mobile ad hoc networks. Additionally, to cope with the network's dynamic topology, a middleware group service is designed, based on various attributes of group membership, for handling group initialization and dynamics.

  • March 14
    Speaker: Genaina Rodrigues
    Title: Using Scenarios to Predict the Reliability of Concurrent Component-Based Software System
    Abstract: Scenarios are a popular means for capturing behavioural requirements of software systems early in the lifecycle. Scenarios show how components interact to provide system level functionality. If component reliability information is available, scenarios can be used to perform early system reliability assessment. In this paper we present a novel automated approach for predicting software system reliability. The approach involves extending a scenario specification to model (1) the probability of component failure, and (2) scenario transition probabilities derived from an operational profile of the system. From the extended scenario specification, probabilistic behaviour models are synthesized for each component and are then composed in parallel into a model for the system. Finally, a user-oriented reliability model described by Cheung is used to compute a reliability prediction from the system behaviour model. The contribution of this paper is a reliability prediction technique that takes into account the component structure exhibited in the scenarios and the concurrent nature of component-based systems. We also show how implied scenarios induced by the component structure and system behaviour described in the scenarios can be used to evolve the reliability prediction.

  • March 21
    Speaker: Torsten Ackermann
    Title: A Lightweight Incentives Mechanism for Peer-to-Peer Networks
    Abstract: In recent years, we have seen a shift from the traditional client/server service provisioning paradigm to a more distributed approach, where each node can take the role of both client and server simultaneously. While peer-to-peer networks are certainly the most prominent example of this, there are other types of networks with a similar structure, for example Grids and mobile ad-hoc networks.

    Lack of cooperation seems to be one of the biggest obstacles to the success of these networks. Without incentives for cooperation, most participating nodes will act as freeloaders: they will not provide services to others but will only consume services, and so the overall service quality is only a fraction of what it could be.

    In the talk, we will focus on peer-to-peer file-sharing networks and look at some of the existing incentive schemes; we will analyse the problem using game theory; and we will look at a possible solution that will hopefully be more lightweight and flexible than existing approaches.
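
    The freeloading problem is commonly modelled as an iterated prisoner's dilemma. The sketch below (payoff values and strategies are textbook illustrations, not the speaker's mechanism) shows why a reciprocal strategy punishes defectors while still cooperating with cooperators, which is the intuition behind most incentive schemes.

```python
# Payoff to the first player for (my_move, their_move); C = cooperate, D = defect.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Return total payoffs of two strategies in an iterated game.
    Each strategy is shown only its opponent's move history."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strategy_a(hist_b), strategy_b(hist_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b
```

    Against a persistent defector, tit-for-tat concedes only the first round; two tit-for-tat players cooperate throughout and each earn the maximal mutual payoff, which is the outcome an incentive scheme tries to make individually rational.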

  • March 21
    Speaker: James Skene
    Title: Engineering Runtime Requirements-Monitoring Systems using MDA Technologies
    Abstract: The Model-Driven Architecture (MDA) technology toolset includes a language for describing the structure of meta-data, the MOF, and a language for describing consistency properties that data must exhibit, the OCL. Off-the-shelf tools can generate meta-data repositories and perform consistency checking over the data they contain. We describe how these tools can be used to implement runtime requirements monitoring of systems by modelling the required behaviour of the system, implementing a meta-data repository to collect system data, and consistency checking the repository to discover violations. We evaluate the approach by implementing a contract checker for the SLAng service-level agreement language, a language defined using a MOF metamodel, and integrating the checker into an Enterprise JavaBeans application. We discuss scalability issues resulting from immaturities in the applied technologies, leading to recommendations for their future development.

  • April 2005

  • April 25
    Speaker: Michael Kay
    Title: Engineering an Open-Source XSLT Processor
    Abstract: XSLT is a language for transforming XML documents. The author started development of the Saxon XSLT processor in 1999, and development is still continuing today: there have been a quarter of a million downloads, and the product is now recognized as a leading implementation of not only XSLT but also XQuery and XML Schema. The author is now running his own company to exploit the technology. The purpose of this talk is to try and answer the question: why has it been so successful?

    The question can be answered at two levels. Firstly, what are the technical qualities of the product, what is it about Saxon as a piece of software that makes it tick? Secondly, what was it about the project that enabled these qualities to be delivered successfully: are there any lessons one can learn about engineering processes, project management, or marketing? As the author will explain, it certainly hasn't been a conventional project.

    Michael Kay started the development of Saxon while at ICL, where he had worked for 24 years, designing a series of software products and reaching the rank of ICL Fellow. He had previously gained a Ph.D for work on database management systems at the University of Cambridge. After ICL he joined Software AG for three years, spending most of his time on W3C standards work (and Saxon!) and then left at the start of 2004 to create his own company, Saxonica.

  • May 2005

  • May 9
    Speaker: Antinisca Di Marco
    Title: Performance Analysis of Software Architecture
    Abstract: Early performance analysis based on Queueing Network (QN) models has often been proposed to support software designers during the software development process. These approaches aim to address performance issues as early as possible in order to reduce design failures. All of them try to adapt a system performance analysis methodology to software systems, and they assume that additional information about performance aspects is available at design time.

    The seminar introduces the topic of early performance validation of software systems and presents our methodology that permits quantitative reasoning on performance aspects at the software architecture level. This approach, differently from other methodologies, does not require the specification of the hardware platform aspects. By filling the knowledge gap between the software and the performance worlds, the approach systematically generates a QN model representing the software architecture ready to be evaluated by performance solvers.
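
    As a toy illustration of the kind of quantitative answer a QN model yields once generated, consider the textbook formulas for a single M/M/1 queue (this is standard queueing theory, not the methodology of the talk):

```python
def mm1_metrics(arrival_rate, service_rate):
    """Steady-state metrics of an M/M/1 queue (textbook formulas):
    returns (utilisation, mean jobs in system, mean response time)."""
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable: arrival rate >= service rate")
    rho = arrival_rate / service_rate          # utilisation
    n = rho / (1 - rho)                        # mean number of jobs in system
    r = 1.0 / (service_rate - arrival_rate)    # mean response time
    return rho, n, r
```

    For example, a server handling 8 requests/s with a capacity of 10 requests/s runs at 80% utilisation with a mean response time of 0.5 s; the point of architecture-level performance analysis is to obtain such figures before the hardware platform is even chosen.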

  • May 11
    Speaker: Robert Chatley, Imperial College London
    Title: Predictable Dynamic Plugin Architectures
    Abstract: Modern software systems are often assembled from collections of components. Ideally it should be possible to construct correctly functioning systems by simply deploying sets of independent components. It should also be straightforward to effect upgrades or reconfigurations after the application has been deployed. The notion of a self-organising system aims to remove as much of the configuration management effort as possible from the user or developer when working with such systems, passing the responsibility to the system itself. Unfortunately systems without explicitly defined architectures, and those subject to evolution, are prone to behaving in a surprising manner, as components from different sources are combined in different configurations. It is desirable to be confident that the system realised as the result of an upgrade or reconfiguration will behave correctly before a change is made.

    We present an approach to dynamically extending applications based on "plugin" components. Plugins are optional components which can be used to enable the dynamic construction of flexible and complex systems. We present a model of plugin systems and a prototype implementation of a framework for managing them. We show how our model integrates closely with an object-oriented programming language, and requires minimal effort on behalf of the developer to create components that will work with the plugin framework.

    In order to ensure the correctness of dynamic systems, some techniques for modelling and analysis are required. We generate models combining the structural and behavioural aspects of prospective system configurations and use model checking to discard those configurations that violate desired behavioural properties. In this way behavioural concerns can be used in choosing between potential configurations. By integrating our modelling and analysis techniques into the reconfiguration process, we can use the analysis that our technique provides to guide self-organisation and produce systems that behave in a predictable way.

  • May 13
    Speaker: John Grundy, University of Auckland
    Title: Building domain-specific, visual language software engineering tools
    Abstract: Domain-specific software engineering tools focus on supporting a subset of software construction and often a subset of the software lifecycle. In this talk I will present a motivation for such tools and overview some examples from our own research. These include data mapping, user interface design, performance engineering, event-based system configuration, project planning, workflow and aspect-oriented development tools. I will focus on support for visual description of domain concepts and code generation from these tools. I will provide an overview of architectures and meta-tools we have developed for engineering such tools and our experiences using them for both academic and industrial problems.

  • May 23
    Speaker: Ofer Margoninski
    Title: A framework for the specification and execution of heterogeneous, composite models
    Abstract: When modeling complex systems, it is often desirable to create composite models by integrating various sub-models, created in a variety of different languages and schemes using a variety of different tools. In this work, we propose an XML-based language that can be used to specify composite models. The language supports concise description of the functionality and interfaces provided by each model, as well as specification of the way in which a composite model is built and executed. We also describe a computational framework which can be used for the execution of such composite models. Unlike other suggested approaches to model integration, our approach does not impose a single modeling scheme or composition algorithm, and supports a wide range/ecology of different models, executing on their native tools, along with a variety of different connectors used to link them together.

  • June 2005

  • June 6
    Speaker: Francois Taiani, Lancaster University
    Title: A Multi-Level Meta-Object Protocol for Fault-Tolerance in Complex Architectures
    Abstract: The past decade has seen an increasing use of complex computer systems made of third party components to develop mission critical applications. To ensure the dependability of those systems in a sound and maintainable manner, technologies are needed to add fault-tolerance mechanisms transparently, while maintaining efficiency, high coverage, and evolvability. In this presentation, we present a generic framework that addresses this problem and can be used within current industrial software. Our proposal is based on a limited set of core concepts inspired by plant biology and meta-object protocols. It provides separation of concerns for the implementation of adaptive fault tolerance strategies, while maintaining a global inter-level perception of the system runtime behavior. We demonstrate its practicality by using it to control the non-determinism of a CORBA/UNIX system.

  • June 20
    Speaker: Annika Hinze, University of Waikato, New Zealand
    Title: Fine-grained Event Notification in Information Delivery Applications
    Abstract: Event Notification Services (ENS) inform users about the occurrences of events that are of special interest to them, e.g. the publication of new data in a network. ENS are also called publish/subscribe services, since users subscribe to certain data published by event sources. Typical systems for event notification are news or stock tickers and remote monitoring of commercial buildings. The main focus of ENS research in recent years has been on efficient event filtering in wide area networks.

    Currently, the pertinence of event-based technology to a wider range of applications is becoming increasingly evident. Upcoming applications for distributed event notifications are mobile location-based services (e.g. traveller information systems), publish/subscribe for semantic web content, and integration into meta-software that supports information document repositories. The main challenge in these new applications is the need to flexibly support a variety of non-regular network setups, new information models, and integration with external services.

    This talk will introduce our work towards meeting these challenges. We will start from the example of a generic publish/subscribe service for an open meta-software for digital libraries. New ways had to be found for supporting a fragmented network of distributed digital library servers. We will describe the design and usage of a distributed Directory Service and introduce our hybrid approach using two networks and a combination of different distributed routing strategies for event filtering. We will also touch upon the issue of approximate filtering of XML documents and the integration of event-based notification with location-based services for context-based information delivery.

    This talk explores the use of ENS for upcoming applications for information delivery. We discuss the transfer of event notification concepts into new applications, such as the Semantic Web, change management, and mobile information delivery. This talk also briefly introduces the research carried out in the 'Information Systems and Databases' research group at the University of Waikato, the top-ranked university for computer science research in New Zealand.

  • July 2005

  • July 14
    Speaker: Arosha Bandara
    Title: Policy Refinement for DiffServ QoS Management
    Abstract: Quality of Service (QoS) management aims to satisfy the Service Level Agreements (SLAs) contracted by the provider and therefore QoS policies are derived from SLA specifications and the provider's business goals. This policy refinement is usually performed manually with no means of verifying that the policies written are supported by the network devices and actually achieve the desired QoS goals. Tool support is lacking and policy refinement has rarely been addressed in the literature.

    In this presentation we show how the policy refinement approach we have developed can be applied to the domain of DiffServ QoS management. We make use of goal elaboration and abductive reasoning to derive strategies that will achieve a given high-level goal. By combining these strategies with events and constraints, we show how policies can be refined, and what tool support can be provided for the refinement process using examples from the QoS management domain.

  • July 18
    Speaker: Thomas Heinz
    Title: HiPAC - High Performance Packet Classification
    Abstract: Since the late 90s, the so-called packet classification problem has drawn considerable attention from the academic networking community. Packet classification is a fundamental operation in many networking devices such as switches, routers and firewalls. The talk is about HiPAC (http://www.hipac.org/), a high performance packet filter for Linux 2.4, which is, from a user's point of view, a feature-complete clone of iptables (the standard packet filter of Linux 2.4) but implements a more advanced O(d log n) classification algorithm. I will give an overview of the classification algorithm and the design of HiPAC. Moreover, I would like to discuss some novel, geometry-based strategies to get space complexity under control without sacrificing lookup time too much.
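
    To give a feel for how an O(d log n)-style lookup can work, here is a sketch of d-dimensional classification by per-dimension interval lookup and bitmap intersection (in the spirit of the well-known bit-vector scheme): one binary search per dimension, then a set intersection. This is not HiPAC's actual algorithm; all names and the sample rules are illustrative.

```python
import bisect

def build_dimension(rule_ranges):
    """rule_ranges: one (lo, hi) per rule for a single packet field.
    Returns (boundaries, bitmaps): for each elementary interval, the
    set of rule indices whose range covers it."""
    points = sorted({p for lo, hi in rule_ranges for p in (lo, hi + 1)})
    bitmaps = []
    for left in points:
        covering = {i for i, (lo, hi) in enumerate(rule_ranges)
                    if lo <= left <= hi}
        bitmaps.append(covering)
    return points, bitmaps

def classify(packet, dims):
    """packet: tuple of field values; dims: per-dimension (points, bitmaps).
    Returns the lowest-numbered (highest-priority) matching rule, or None."""
    matching = None
    for value, (points, bitmaps) in zip(packet, dims):
        idx = bisect.bisect_right(points, value) - 1   # binary search: O(log n)
        rules = bitmaps[idx] if 0 <= idx < len(bitmaps) else set()
        matching = rules if matching is None else matching & rules
    return min(matching) if matching else None

# Two rules over (src_port, dst_port): rule 0 is more specific than rule 1.
rules_src = [(1000, 2000), (0, 65535)]
rules_dst = [(80, 80), (0, 65535)]
dims = [build_dimension(rules_src), build_dimension(rules_dst)]

print(classify((1500, 80), dims))   # 0 (specific rule wins)
print(classify((5000, 443), dims))  # 1 (default rule)
```

    The space blow-up of precomputed per-interval bitmaps is exactly the kind of cost the geometry-based strategies mentioned in the abstract aim to control.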

  • July 21
    Speaker: Harald Gall, University of Zurich, Switzerland
    Title: Software Evolution Analysis and Visualization
    Abstract: Gaining higher level evolutionary information about large software systems is a key challenge in dealing with increasing complexity and architectural deterioration. Modification reports and problem reports taken from systems such as CVS and Bugzilla contain an overwhelming amount of information about the reasons and effects of particular changes. Such reports can be analyzed to provide a clearer picture about the problems concerning a particular feature or a set of features. Hidden dependencies of structurally unrelated but over time logically coupled files exhibit a high potential to illustrate software evolution and possible architectural deterioration.

    In this talk, we describe the visualization of software evolution by taking advantage of this logical coupling introduced by modifications and bug fixes over time. We show different views on the evolution of a software system: (a) views based on quantitative analysis of growth and change rates; (b) dependencies introduced by logical couplings and their visualization; (c) feature evolution views; and (d) integrated views that combine several evolution metrics.

    As a result, our approach helps to uncover hidden dependencies between software parts and presents them in easy-to-assess visual form. Such visualizations can indicate locations of design erosion in the architectural evolution of a software system. We have applied our approach to several large software systems including Mozilla and its CVS and Bugzilla data to show the effectiveness of our approach.
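
    The logical coupling the talk builds on can be mined with a very simple count: files that repeatedly change in the same commit are logically coupled even when structurally unrelated. A minimal sketch, with invented sample commit data (the real analysis would read CVS/Bugzilla history):

```python
from collections import Counter
from itertools import combinations

def logical_coupling(commits):
    """commits: list of sets of file names changed together.
    Returns a Counter of co-change frequency per file pair."""
    pairs = Counter()
    for files in commits:
        for a, b in combinations(sorted(files), 2):
            pairs[(a, b)] += 1
    return pairs

# Invented history: parser.c and parser.h always change together.
commits = [
    {"parser.c", "parser.h", "lexer.c"},
    {"parser.c", "parser.h"},
    {"parser.c", "parser.h", "codegen.c"},
    {"lexer.c"},
]
top_pair, n = logical_coupling(commits).most_common(1)[0]
print(top_pair, n)   # ('parser.c', 'parser.h') 3
```

    Pairs with high co-change counts but no structural dependency are the candidates for hidden dependencies and design erosion that the visualizations highlight.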

  • September 2005

  • September 26
    Speaker: Panu Bunyakiati
    Title: The certification of software tools with respect to software engineering standards
    Abstract: Software engineering standards underpin delivery of projects by defining baseline good practices that software processes shall follow. Software tools enact software processes, automate activities, and support the production of artefacts required by the standards. An outstanding issue today in the software tools industry is the interoperability of tools across vendors and across the steps defined in software processes. It is, however, difficult to establish the compliance of these complex pieces of software to those standards. It has been suggested that many existing tools that advertise standard compliance fail to live up to their claims. The objective of this work is to propose an approach to systematically and formally assess the compliance of these tools to the standards and to diagnose the causes of non-compliance.

  • September 29
    Speaker: T.Y. Chen
    Title: Adaptive Random Testing (ART)
    Abstract: Recently, we have proposed to improve the fault-detection capability of random testing by enforcing a more even, well-spread distribution of test cases over the input domain. This approach is called adaptive random testing. In this seminar, we will cover:
    1. the motivation;
    2. failure-pattern-based testing;
    3. various principles that could enforce an even spread of test cases, and the advantages and disadvantages of their corresponding ART implementations; and
    4. comparison of random testing and adaptive random testing with respect to various testing effectiveness measures.
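
    One of the best-known ART implementations is the Fixed-Size-Candidate-Set variant (FSCS-ART): each new test case is the random candidate farthest from all previously executed test cases, which enforces the even spread described above. A minimal sketch over a 1-D numeric domain; the faulty program under test is invented for illustration.

```python
import random

def fscs_art(domain, is_failure, k=10, max_tests=1000, seed=0):
    """Run FSCS-ART until a failure is found; return (#tests, failing input)."""
    rng = random.Random(seed)
    lo, hi = domain
    executed = [rng.uniform(lo, hi)]          # first test case: pure random
    if is_failure(executed[0]):
        return 1, executed[0]
    for n in range(2, max_tests + 1):
        candidates = [rng.uniform(lo, hi) for _ in range(k)]
        # Pick the candidate whose nearest executed test case is farthest away.
        best = max(candidates,
                   key=lambda c: min(abs(c - e) for e in executed))
        executed.append(best)
        if is_failure(best):
            return n, best
    return max_tests, None

# Hypothetical fault: the program fails on a contiguous region of the domain,
# the "block" failure pattern that ART is particularly effective against.
fails = lambda x: 42.0 <= x <= 43.5
n, x = fscs_art((0.0, 100.0), fails)
print(n, x)
```

    Against block failure patterns like this one, the even spread lets ART reach the failure region in noticeably fewer test cases on average than pure random testing.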

This page was last modified on 18 Oct 2013.