My research interests focus mainly on debugging, and in particular on:
Other interests include program transformations, profiling, partial evaluation, refactoring, concurrent programming (in particular CSP and Petri nets), and information retrieval.
The following list contains former and current PhD students:
The following list contains webpages with information on some tools that I developed:
You can check the list of projects in which I have participated (outdated since 2011).
Recent advances in Computerized Numeric Control (CNC) have allowed the manufacturing of products with high-quality standards. Since CNC programs consist of series of assembler-like instructions, several high-level languages, such as AutoLISP, have been proposed to raise the programming abstraction level. Unfortunately, the lack of formality in these languages prevents the verification of programs. In this work, we propose the use of Haskell in CNC programming. Haskell is a modern pure functional language that permits formal verification and offers advanced features such as lazy evaluation and higher-order functions. These features allow us to cope with infinite data structures, apply heuristics to optimize CNC programs, etc.
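The role of laziness here can be illustrated with a small sketch. The `Instr` type and the toolpath below are hypothetical, not part of any actual CNC library: they only show how an unbounded machining pattern can be described as an infinite list and demanded piecewise.

```haskell
-- Hypothetical CNC instruction set: move the tool head or drill at a point.
data Instr = MoveTo Double Double | Drill
  deriving (Show, Eq)

-- An infinite toolpath: drill a hole every 5.0 units along the X axis.
-- Thanks to lazy evaluation this definition is fine as a whole program
-- description; only the demanded prefix is ever built.
holeRow :: [Instr]
holeRow = concatMap (\x -> [MoveTo x 0.0, Drill]) [0.0, 5.0 ..]

-- A finite CNC program is just a demanded prefix of the infinite description
-- (two instructions per hole).
firstHoles :: Int -> [Instr]
firstHoles n = take (2 * n) holeRow
```

Heuristics that optimize a program (e.g. reordering moves) then become ordinary functions over such lists.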
Modern routers require significant effort to design, build, and verify. While hardware routers are faster, they are difficult to configure and maintain. Software routers, on the other hand, are slower but can easily be configured and maintained. A modular architecture and software toolkit for building routers and other packet processors has been introduced: the Click system. It includes a specification language with features for declaring and connecting router elements and for designing abstractions. We propose the use of a declarative multi-paradigm language, Curry, to specify routers. Since Curry is a fully declarative language, router specifications can be transformed, optimized, verified, etc., using several existing formal techniques. Moreover, we show that Curry's features are particularly useful for designing router components with a high level of abstraction. Our first experiments indicate that the proposed methodology is both useful and practical.
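Although the paper uses Curry, the flavor of Click-style element-based specification can be sketched in Haskell (a close relative of Curry). The types and element names below are hypothetical illustrations, not Click's or Curry's actual API.

```haskell
-- A packet with a time-to-live and an opaque payload (hypothetical type).
data Packet = Packet { ttl :: Int, payload :: String }
  deriving (Show, Eq)

-- A router element transforms a packet or drops it (Nothing = dropped).
type Element = Packet -> Maybe Packet

-- Decrement the TTL, dropping expired packets.
decTTL :: Element
decTTL p | ttl p > 1 = Just p { ttl = ttl p - 1 }
         | otherwise = Nothing

-- A classifier element keeps only packets satisfying a predicate.
classify :: (Packet -> Bool) -> Element
classify keep p = if keep p then Just p else Nothing

-- Connecting elements is Kleisli composition in the Maybe monad,
-- mirroring how Click wires elements together.
(>->) :: Element -> Element -> Element
(f >-> g) p = f p >>= g

-- A tiny "router": decrement TTL, then discard empty packets.
router :: Element
router = decTTL >-> classify (not . null . payload)
```

Because elements are plain functions, a specification like `router` can be equationally transformed and verified.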
There are very few approaches to measuring the execution costs of lazy functional (logic) programs. Under a lazy execution mechanism, the complexity of an evaluation depends on its context, i.e., the evaluation of the same expression may have different costs depending on the degree of evaluation required by the different contexts where it appears. We have developed a novel approach to the complexity analysis of functional (logic) programs. We focus on constructing equations that compute the time complexity of expressions. In contrast to previous approaches, it is simple, precise (it computes exact costs rather than upper/lower bounds), and fully automatic.
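As a rough illustration of context-dependent costs, assume a hypothetical unit-cost model where every list unfolding and function application counts as one step. The equations below are illustrative only, not the ones computed by the actual analysis.

```haskell
-- Illustrative cost equations under a hypothetical unit-cost model;
-- n is the length of the input list, and f is assumed to cost one step.

-- Fully evaluating `length xs` walks the whole list:
-- one step per cons cell plus one for the final [].
costLength :: Int -> Int
costLength n = n + 1

-- Context-dependence under laziness: the same expression `map f xs`
-- has different costs depending on what the context demands.

-- head (map f xs): one unfolding of map and one application of f,
-- regardless of the list's length.
costHeadMap :: Int -> Int
costHeadMap _ = 2

-- sum (map f xs): n unfoldings, n applications of f, n+1 steps to add up.
costSumMap :: Int -> Int
costSumMap n = 2 * n + (n + 1)
```

The point of the example is that `costHeadMap` is constant while `costSumMap` is linear, even though both contexts contain the same expression `map f xs`.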
Program slicing is a well-known technique to extract the program statements that (potentially) affect the values computed at some point of interest. In this work, we introduce a novel slicing method for XML documents. Essentially, given an XML document (which is valid w.r.t. some DTD), we produce a new XML document (a slice) that contains the relevant information in the original XML document according to some criterion. Furthermore, we also output a new DTD such that the computed slice is valid w.r.t. this DTD. A prototype implementation of the XML slicer has been developed.
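The idea of keeping only the relevant elements (plus the elements needed to reach them) can be sketched on a simplified tree representation. `Elem` and the tag-based criterion below are hypothetical simplifications — real XML also has attributes, text nodes, and a DTD to update.

```haskell
-- A drastically simplified XML element: a tag and its children.
data Elem = Elem { tag :: String, children :: [Elem] }
  deriving (Show, Eq)

-- Slice a document w.r.t. a criterion on tags: keep every element that
-- matches the criterion or has a matching descendant; everything else
-- is removed from the slice.
slice :: (String -> Bool) -> Elem -> Maybe Elem
slice keep (Elem t cs)
  | keep t || not (null cs') = Just (Elem t cs')
  | otherwise                = Nothing
  where cs' = [ c' | Just c' <- map (slice keep) cs ]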
Filtering HTML Documents
Nowadays, the Internet is the main source of information for millions of people and enterprises. However, information on the Internet has not yet been classified, and consequently, searching for information is one of the most important tasks performed by users and systems. In particular, for human WWW users, searching for information is the main (time-consuming) task. To face this problem, both the industrial and the academic communities have developed many methods and tools to index and search web pages. The most widespread solution is the use of search engines such as Google and Yahoo; however, while current search engines can be a suitable solution for finding a particular webpage, they are of little help in finding the relevant information on such a page. Hence, once a web page is found, the user must search it to verify whether the needed information is there. This problem has not been satisfactorily solved, and thus no widespread solution exists. In this project, we develop a tool able to automatically extract from a webpage the information (text, images, etc.) related to a filtering criterion, without the use of semantic specifications or lexicons and without the need for offline parsing or compilation processes.
More Information...
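A drastically simplified sketch of criterion-based filtering without lexicons: rank the text blocks of a page by plain word overlap with the filtering criterion. This only illustrates the idea of syntactic, specification-free filtering; it is not the tool's actual algorithm.

```haskell
import Data.Char (toLower)

-- How many words of the block also occur in the query, case-insensitively.
-- A purely syntactic measure: no lexicon or semantic specification needed.
score :: String -> String -> Int
score query block =
  length [ w | w <- wordsOf block, w `elem` wordsOf query ]
  where wordsOf = words . map toLower

-- Keep only the blocks of a page that share at least one word
-- with the filtering criterion.
relevantBlocks :: String -> [String] -> [String]
relevantBlocks query = filter ((> 0) . score query)
```

A real system would need smarter tokenization and ranking, but the example shows why no offline compilation step is required: filtering is a pure function of the page and the criterion.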
Algorithmic debugging is a semi-automatic debugging technique based on the answers of an oracle (usually the programmer) to a series of questions generated automatically by the algorithmic debugger. The technique typically traverses a record of the execution---the so-called execution tree---which only captures the declarative aspects of the execution and hides operational details. This work overviews and compares the most important algorithmic debuggers of different programming paradigms. In the study, we analyze the most important features incorporated by current algorithmic debuggers, and we identify some features not yet supported by any of them. We then compare all the debuggers, giving rise to a map of the state of the practice in algorithmic debugging.
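The core procedure can be sketched as follows: the debugger walks the execution tree, asks the oracle whether each node's result is correct, and reports a node whose result is wrong while all of its children's results are right. The types and the top-down strategy below are a minimal sketch, not any particular debugger's implementation.

```haskell
-- A node of the execution tree: a textual rendering of a call and its
-- result, plus the subcomputations it triggered (hypothetical type).
data ExecTree = Node { call :: String, kids :: [ExecTree] }

-- The oracle answers whether the computation at a node produced the
-- intended result (in practice, the programmer answers these questions).
type Oracle = String -> Bool

-- Top-down search: a correct node exonerates its whole subtree;
-- a wrong node whose children are all correct contains the bug.
findBug :: Oracle -> ExecTree -> Maybe String
findBug correct (Node c ks)
  | correct c = Nothing
  | otherwise = case [ b | Just b <- map (findBug correct) ks ] of
      (b:_) -> Just b          -- a deeper wrong node exists: descend
      []    -> Just c          -- all children right: c itself is buggy
```

Other strategies (e.g. divide and query) differ only in the order in which questions are asked, not in this correctness criterion.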
This work presents the nofib-buggy suite: a collection of benchmarks written in Haskell. The suite extends the well-known nofib suite, with the peculiarity that all the programs in the new suite are buggy. This characteristic makes the benchmarks useful for testing and comparing debugging methods. We describe the kinds of bugs used in the suite and how they are distributed, and we also explain how to download, use, and contribute to the nofib-buggy suite.
SOC is a program slicer for CSP specifications. To increase the precision of program slicing, SOC uses a new data structure called a Context-sensitive Synchronized Control Flow Graph (CSCFG). Given a CSP specification, SOC generates its associated CSCFG and produces two different kinds of slices, which correspond to two different static analyses.
CSP2PN is a tool that automatically builds a Petri net producing the same sequences of observable events as a given CSP specification. In CSP terminology, these sequences are the so-called traces; in Petri nets, they correspond to transition-firing sequences. The transformation algorithm uses an instrumentation of CSP's standard operational semantics, as defined by A.W. Roscoe, to explore all possible computations of a specification. The instrumented semantics is deterministic because the initial configuration predetermines the rule applied in every step. Therefore, the algorithm can execute the semantics several times to iteratively explore all computations and, hence, generate the whole Petri net.
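The exploration idea can be sketched on a tiny CSP fragment: a one-step transition relation (a fragment of the operational semantics) is run exhaustively so that every computation, and hence every trace, is enumerated. The datatype and function names below are hypothetical simplifications; the real tool handles full CSP and builds a Petri net from the explored transitions.

```haskell
-- A tiny CSP fragment: deadlock, event prefixing, and external choice.
data Proc = Stop
          | Prefix String Proc   -- a -> P
          | Choice Proc Proc     -- P [] Q

-- One-step transitions of the (simplified) operational semantics:
-- each pair is an observable event and the resulting process.
step :: Proc -> [(String, Proc)]
step Stop         = []
step (Prefix a p) = [(a, p)]
step (Choice p q) = step p ++ step q

-- Exhaustively explore every computation, collecting the maximal traces.
traces :: Proc -> [[String]]
traces p = case step p of
  [] -> [[]]
  ts -> [ a : tr | (a, p') <- ts, tr <- traces p' ]
```

For example, `a -> STOP [] b -> c -> STOP` yields the two maximal traces `["a"]` and `["b","c"]`; in the real transformation each explored transition would become a Petri net transition.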
Copyrights are held variously by the authors or publishers. In general, these papers are provided for direct use in scholarship and education. If you are contemplating other uses, such as republication, you must obtain appropriate permissions. If you wish to cite any of the published work, please look up the formal publication and use that version. The preprints in this area are as accurate as I can make them, but they are not authoritative copies of the published versions.