[1] D. Duval, R. Echahed, F. Prost, and L. Ribeiro. Transformation of attributed structures with cloning. In Proceedings of the 17th International Conference on Fundamental Approaches to Software Engineering (FASE 2014), 2014. To be published.
[ bib ]
[2] Mehdi Mhalla and Frédéric Prost. Gardner's minichess variant is solved. CoRR, abs/1307.7118, 2013. To be published in ICGA Journal.
[ bib ]
[3] F. Prost and L. Terras. Privacy and nomadic computing - a public-key cryptosystem based on passwords. Technical report, LIG, 2012. Submitted to PECCS 2013.
[ bib | .pdf ]
The use of public-key cryptography is complicated in the nomadic computing era. Private keys are typically huge numbers that are impossible to memorize or even to write down, and have to be stored electronically. Therefore a mobile user somehow has to keep his private key with him at all times (it is senseless to imagine that the private key is downloaded through a public network). This is neither realistic nor safe. Indeed, the mobile device must be protected, which is usually done through password mechanisms. At the end of the day the whole cryptosystem relies on this password. In this paper we propose a generic way to produce keys from a password using secure hash functions. We have implemented a nomadic PGP as a proof of concept: the software is a Java applet, thus platform independent, providing a complete solution for mail encryption based on public-key cryptography. In the end the user just has to remember his password, and no longer has to rely on specific software/operating system/hardware settings.
[4] D. Duval, R. Echahed, and F. Prost. Graph transformation with focus on incident edges. In Graph Transformations - 6th International Conference, ICGT 2012, Bremen, Germany, September 24-29, 2012. Proceedings, volume 7562 of Lecture Notes in Computer Science, pages 156-171. Springer, 2012.
[ bib | .pdf ]
We tackle the problem of graph transformation with a particular focus on node cloning. We propose a new approach to graph rewriting, called polarized node cloning, where a node may be cloned together with all its incident edges, with only its outgoing edges, with only its incoming edges, or with none of its incident edges. We thus subsume previous works such as the sesqui-pushout, the heterogeneous pushout and the adaptive star grammars approaches. We first define polarized node cloning algorithmically, then we propose an algebraic definition. We use polarization annotations to declare how a node must be cloned. For this purpose, we introduce the notion of polarized graphs as graphs endowed with some annotations on nodes, and we define graph transformations with polarized node cloning by means of sesqui-pushouts in the category of polarized graphs.
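For intuition only, one of the four cloning modes described above (copying a node together with only its outgoing edges) can be sketched on a plain adjacency-list digraph. This is a hypothetical illustration, not the paper's sesqui-pushout construction; all names are invented:

```python
# Hypothetical sketch: clone node v, keeping only its *outgoing* edges.
# A graph is a dict mapping each node to the list of its successors.

def clone_outgoing(graph, v, clone_name):
    """Add a copy of v that shares v's outgoing edges but receives
    none of v's incoming edges (one polarized cloning mode)."""
    new_graph = {n: list(succs) for n, succs in graph.items()}
    new_graph[clone_name] = list(graph[v])  # same successors as v
    # Incoming edges are deliberately not duplicated: nodes that
    # pointed to v still point only to v, not to the clone.
    return new_graph

g = {"a": ["b"], "b": ["c"], "c": []}
g2 = clone_outgoing(g, "b", "b'")
# g2["b'"] == ["c"]  and  g2["a"] == ["b"]
```

Cloning with only incoming edges (or with both, or with neither) would vary which edge lists mention the clone; in the formal setting this choice is exactly what the polarization annotations control.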
[5] Frédéric Prost. On the impact of information technologies on society: an historical perspective through the game of chess. In Andrei Voronkov, editor, Turing-100, volume 10 of EPiC Series, pages 268-277. EasyChair, 2012.
[ bib | http ]
The game of chess has always been viewed as an iconic representation of intellectual prowess. Since the very beginning of computer science, the challenge of programming a computer capable of playing chess and beating humans has been alive, used both as a benchmark of hardware/software progress and as an ongoing programming challenge leading to numerous discoveries. In the early days of computer science it was a topic for specialists. But as computers were democratized and the strength of chess engines began to increase, chess players started to appropriate these new tools. We show how these interactions between the world of chess and information technologies have heralded broader social impacts of information technologies. The game of chess, and more broadly the world of chess (chess players, literature, computer software and websites dedicated to chess, etc.), turns out to be a surprisingly sharp indicator of the changes induced in our everyday life by information technologies. Moreover, in the same way that chess is a model of war that captures the raw features of strategic thinking, the chess world can be seen as a small society in which the impact of information technologies is easier to analyze and to grasp.
[6] F. Prost. Enforcing dynamic interference policy. In Proceedings of the third IEEE International Conference on Privacy, Security, Risk and Trust (PASSAT-11), 2011.
[ bib | http ]
Noninterference is the mathematical basis for confidentiality analyses. The idea is to ensure that private data will not be observable at a public level. Understood strictly, however, noninterference is too strong a property: standard everyday examples like password checks or message encryption formally break it. In this paper we propose a framework in which it is possible to define an interference policy that allows safe data declassification. Moreover this policy is dynamic, i.e. the confidentiality level of data may evolve during computation: think of policies expressing that a user has a limited number of guesses, or of the sending of pay-per-view information. We develop a notion of program safety with respect to a dynamic interference policy and give an algorithm (in the form of an abstract evaluation of the program) to check that a program is safe with respect to such a policy.
[7] D. Duval, R. Echahed, and F. Prost. Categorical abstract rewriting systems and functoriality of graph transformation. volume 41: Graph-Transformation and Visual Modeling Techniques 2011, 2011.
[ bib | http ]
[8] F. Prost. Calcul et Dépendances. Habilitation à diriger des recherches, Université de Grenoble, November 2010.
[ bib | .pdf ]
[9] D. Duval, R. Echahed, and F. Prost. Graph rewriting with polarized cloning. Technical report, LIG, 2009.
[ bib | http ]
We tackle the problem of graph transformation with a particular focus on node cloning. We propose a graph rewriting framework where nodes can be cloned zero, one or more times. A node can be cloned together with all its incident edges, with only the outgoing edges, with only the incoming edges or without any of the incident edges. We thus subsume previous works such as the sesqui-pushout, the heterogeneous pushout and the adaptive star grammars approaches. A rule is defined as a span L <-l- K -r-> R where the right-hand side R is a multigraph, and the left-hand side L and the interface K are polarized multigraphs. A polarized multigraph is a multigraph endowed with some cloning annotations on nodes and edges. We introduce the notion of polarized multigraphs and define a rewriting step as a pullback followed by a pushout, in the same way as in the sesqui-pushout approach.
[10] F. Prost. Twilight chess - a chess variant designed to rehabilitate human vs. computer challenge. International Computer Games Association Journal, 32(2), 2009.
[ bib | .pdf ]
[11] F. Prost and C. Zerrari. Reasoning about entanglement and separability in quantum higher-order functions. In Proceedings of Unconventional Computation 2009 (UC'09), Lecture Notes in Computer Science. Springer, 2009.
[ bib | .pdf ]
[12] D. Duval, R. Echahed, and F. Prost. A heterogeneous pushout approach to term-graph transformation. In Proceedings of Rewriting Techniques and Application 2009 (RTA'09), 2009.
[ bib | .pdf ]
We address the problem of cyclic term-graph rewriting. We propose a new framework where rewrite rules are tuples of the form (L, R, τ, σ) such that L and R are term graphs representing the left-hand and the right-hand sides of the rule, τ is a mapping from the nodes of L to those of R, and σ is a partial function from nodes of R to nodes of L. The mapping σ describes how incident edges of the nodes in L are connected in R; it is not required to be a graph morphism, as in classical algebraic approaches to graph transformation. The role of τ is to indicate the parts of L to be cloned (copied). Furthermore, we introduce a notion of heterogeneous pushout and define rewrite steps as heterogeneous pushouts in a given category. Among the features of the proposed rewrite systems, we quote the ability to perform local and global redirection of pointers, addition and deletion of nodes, as well as cloning and collapsing of substructures.
[13] Dominique Duval, Rachid Echahed, and Frédéric Prost. Modeling pointer redirection as cyclic term-graph rewriting. Electr. Notes Theor. Comput. Sci., 176(1):65-84, 2007.
[ bib ]
[14] F. Prost and C. Zerrari. A logical analysis of entanglement and separability in quantum higher-order functions. Technical report, LIG, 2008.
[ bib | http ]
We present a logical separability analysis for a functional quantum computation language. This logic is inspired by previous work on the logical analysis of aliasing for imperative functional programs. Both analyses share similarities, notably because they are highly non-compositional. The quantum setting is harder to deal with since it introduces nondeterminism and thus considerably modifies the semantics and validity of logical assertions. This logic is the first proposal of an entanglement/separability analysis dealing with a functional quantum programming language with higher-order functions.
[15] F. Prost. Taming non-compositionality using new binders. In Proceedings of Unconventional Computation 2007 (UC'07), volume 4618 of Lecture Notes in Computer Science. Springer, 2007.
[ bib | .pdf ]
We propose an extension of the traditional λ-calculus in which terms are used to control an outside computing device (quantum computer, DNA computer, ...). We introduce two new binders: ν and ρ. In ν x.M, x denotes an abstract resource of the outside computing device, whereas in ρ x.M, x denotes a concrete resource. These two binders have different properties (in terms of α-conversion, scope extrusion and convertibility) than those of the standard λ-binder. We illustrate the potential benefits of our approach with a study of a quantum computing language in which these new binders prove meaningful. We introduce a typing system for this quantum computing framework in which linearity is only required for concrete quantum bits, offering greater expressiveness than previous proposals.
[16] D. Duval, R. Echahed, and F. Prost. Adjunction for garbage collection with application to graph rewriting. In Proceedings of Rewriting Techniques and Application 2007 (RTA'07), volume 4533 of Lecture Notes in Computer Science. Springer, 2007.
[ bib | .pdf ]
We investigate garbage collection of unreachable parts of rooted graphs from a categorical point of view. First, we define this task as the right adjoint of an inclusion functor. We also show that garbage collection may be stated via a left adjoint, hence preserving colimits, followed by two right adjoints. These three adjoints cope well with the different phases of a traditional garbage collector. Consequently, our results should naturally help to better formulate graph transformation steps in order to get rid of garbage (unwanted nodes). We illustrate this point on a particular class of graph rewriting systems based on a double pushout approach and featuring edge redirection. Our approach gives a neat rewriting step akin to the one on terms, where garbage never appears in the reduced term.
[17] D. Duval, R. Echahed, and F. Prost. Modeling pointer redirection as cyclic term-graph rewriting. In Proceedings of the Third International Workshop on Term Graph Rewriting (TERMGRAPH'06), Vienna, Austria, April 2006.
[ bib | .pdf ]
[18] F. Prost. Sort abstraction for static analyses of mobile processes. In Sixth Symposium on Trends in Functional Programming (TFP 2005), Tallinn, Estonia, September 2005.
[ bib | .pdf ]
We introduce a new calculus combining functional programming (λ-calculus) and mobile processes, inspired by the ``blue-calculus''. We give a type system for it that, in some sense, represents the structure of terms. This type system can be seen as an extension of traditional functional types to process algebras. We show basic properties of this type system, including subject reduction. We show how this simple type system can be extended in order to achieve static analyses (non-interference property) and give a non-interference result for this calculus.
[19] R. Echahed and F. Prost. Security policy in a declarative style. In Proceedings of the 7th International Conference on Principles and Practice of Declarative Programming (PPDP '05), Lisboa, Portugal, July 2005.
[ bib | .pdf ]
We address the problem of controlling information leakage in a concurrent declarative programming setting. Our aim is to define verification tools in order to distinguish between authorized, or declared, information flows such as password testing (e.g., ATM, login processes, etc.) and non-authorized ones. In this paper, we first propose a way to define security policies as confluent and terminating rewrite systems. Such policies define how the privacy levels of information evolve. Then, we provide a formal definition of secure processes with respect to a given security policy. We also define an actual verification algorithm of secure processes based on constraint solving.
[20] D. Duval, R. Echahed, and F. Prost. Data-structure rewriting. Technical report, March 2005.
[ bib | http ]
We tackle the problem of data-structure rewriting, including pointer redirections. We propose two basic rewrite steps: (i) Local Redirection and Replacement steps, whose aim is to redirect specific pointers determined by means of a pattern, as well as to add new information to existing data; and (ii) Global Redirection steps, which are aimed at redirecting all pointers targeting a node towards another one. We define these two rewriting steps following the double pushout approach. We first define the category of graphs we consider, and then define rewrite rules as pairs of graph homomorphisms of the form L <-- K --> R. Unfortunately, inverse pushouts (complement pushouts) are not unique in our setting and pushouts do not always exist. Therefore, we define rewriting steps so that a rewrite rule can always be performed once a matching is found.
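As a loose operational analogy (the paper defines these steps categorically via double pushouts; the representation below is invented for illustration), a Global Redirection step behaves like retargeting every pointer aimed at one node toward another:

```python
# Hypothetical sketch of a Global Redirection step: every edge that
# targets node `old` is redirected to target node `new` instead.
# A graph is a dict mapping each node to the list of its successors.

def global_redirect(graph, old, new):
    return {n: [new if t == old else t for t in succs]
            for n, succs in graph.items()}

g = {"p": ["x"], "q": ["x", "y"], "x": [], "y": []}
h = global_redirect(g, "x", "y")
# h["p"] == ["y"]  and  h["q"] == ["y", "y"]
```

A Local Redirection step would instead retarget only the pointers matched by a pattern, rather than all pointers to the node.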
[21] R. Echahed and F. Prost. Handling declared information leakage. In Proceedings of Workshop on Issues in the Theory of Security (WITS'05), Long Beach, California, January 2005.
[ bib | .pdf ]
[22] N. Brauner, R. Echahed, G. Finke, H. Gregor, and F. Prost. Specializing narrowing for timetable generation: A case study. In Proceedings of Practical Aspects of Declarative Languages (PADL'05), Long Beach, California, January 2005.
[ bib | http ]
An important property of answer generation strategies for functional logic programming (FLP) languages is the complete exploration of the solution space. Integrating constraints into FLP proves to be useful in many cases, as the resulting constraint functional logic programming (CFLP) offers more facilities and more efficient operational semantics. CFLP can be achieved using conditional rewrite systems with a narrowing-based operational semantics. A common idea to improve the efficiency of such operational semantics is to use specific algorithms from operations research as constraint solvers. If the algorithm does not return a complete set of solutions, the property of completeness might be lost. We present a real-world timetabling problem illustrating this approach. We propose an algorithm, obtained as an integration of three known optimization algorithms for the linear assignment problem (LAP), enumerating solutions to the LAP in order of increasing weight, such that the answer generation is complete again. We show how the narrowing process can be tailored to use this algorithm and provide an efficient way to solve the timetable generation problem.
[23] R. Echahed, F. Prost, and W. Serwe. Statically assuring secrecy for dynamic concurrent processes. In Proceedings of the 5th International Conference on Principles and Practice of Declarative Programming (PPDP '03), Uppsala, Sweden, August 2003.
[ bib | .ps ]
We propose a new algorithm for secrecy analysis in a framework integrating declarative programming and concurrency. The analysis of a program ensures that information can only flow from less sensitive levels toward more sensitive ones. Our algorithm uses a terminating abstract operational semantics which reduces the problem of secrecy to constraint solving within finite lattices. In this it departs from previous works, which are essentially based on type systems. Furthermore, our proposal is general and tackles a very large class of programs, featuring dynamic process creation, general sequential composition, recursive process calls and high-level synchronization.
[24] R. Echahed and F. Prost. Handling harmless interference. Technical report, Leibniz-IMAG, 46 av. Félix Viallet, 38031 Grenoble, France, June 2003.
[ bib | .pdf ]
We address the issue of confidentiality of information in a context where some authorized functions, called conversion functions, are allowed to downgrade/upgrade the confidentiality level of information. This allows the formalization of several real-world applications, notably applications based on password technology (ATMs, login processes, etc.), which are impossible to deal with under a strict non-interference approach. In this paper we develop a notion of harmless interference: interference that, even if it formally makes it possible to infer private information from public information, has no impact in practice. To our knowledge this notion has not been investigated so far. We propose a formal definition of harmless interference in a concurrent declarative programming setting and define a corresponding verification algorithm based on constraint solving.
[25] N. Brauner, R. Echahed, G. Finke, F. Prost, and W. Serwe. Intégration des méthodes de réécriture et de recherche opérationnelle pour la modélisation et la résolution de contraintes. In 1ère conférence francophone en gestion et ingénierie des systèmes hospitaliers (GISEH 2003), pages 333-340, January 2003.
[ bib | .pdf ]
[26] R. Echahed, F. Prost, and W. Serwe. Assuring secrecy for concurrent declarative programs. Technical report, Leibniz-IMAG, 46 av. Félix Viallet, 38031 Grenoble, France, January 2002.
[ bib | .pdf ]
We propose a secrecy analysis in a framework integrating declarative programming and concurrency. The analysis uses an abstract operational semantics that always terminates. In line with a recent paper by Boudol and Castellani, our analysis ensures that information can only flow from low privacy levels towards higher privacy levels. Our proposal is set in a computational model that is more complex and closer to reality (a prototype has been implemented); notably, it can deal with the dynamic creation of parallel processes.
[27] F. Prost. Types for static analysis of mobile processes. Technical report, Leibniz-IMAG, 46 av. Félix Viallet, 38031 Grenoble, France, July 2001.
[ bib | .pdf ]
We introduce a new calculus inspired by the ``blue-calculus''. We give a type system for it that, in some sense, represents the structure of terms. This type system can be seen as an extension of traditional functional types to process algebras. We show basic properties of this type system, including subject reduction. We show how this simple type system can be extended in order to achieve static analyses (non-interference property) and give a non-interference result for this calculus.
[28] F. Prost. Interprétation de l'analyse statique en théorie des types. PhD thesis, École Normale Supérieure de Lyon, December 1999.
[ bib | .ps.gz ]
Static analyses of functional programs, such as dead code or binding time, are important for performing valuable program optimizations. Many type-inference-based systems have been proposed to perform those analyses. The idea is to use types as skeletons to which annotations are added. Although all these systems share common features, there is not yet a generic framework able to give a uniform approach to this field. This lack of understanding leads to problems, due to the fact that typing constraints hamper accurate analyses. Moreover, only simply typed calculi are considered: this severely restricts the range of these techniques.

This thesis is a theoretical study of the relations between typing and program analysis. We show how to formalize program dependencies in type theory. We propose a generic system that can be instantiated into several analyses. Our type system is more powerful and more accurate than previous ones. Moreover, by means of Pure Type Systems, we give a canonical formalization of the techniques used. This allows a generalization of our results to the λ-cube.

[29] F. Prost. On the semantics of non-interference type-based analyses. In JFLA'01, Journées Francophones des Langages Applicatifs, January 2001.
[ bib | .ps ]
Type-based analyses are widely used. In this setting, types are decorated with annotations. We address the problem of the semantical meaning of these annotations. We give an extension of Girard's System F for non-interference analysis. We give a semantics based on partial equivalence relations (PERs) for this extension and show that annotations have to be understood as classes of PERs (i.e. trivial and diagonal PERs are two different classes of PERs). We also show that the analysis of a program by type-based systems amounts to abstracting over PER classes.
[30] F. Prost. A static calculus of dependencies for the λ-cube. In Proc. of IEEE 15th Ann. Symp. on Logic in Computer Science (LICS'2000). IEEE Computer Society Press, 2000.
[ bib | .ps.gz ]
Dependency analysis consists in finding how different parts of a program depend on each other. It is the root of many program analyses, such as dead code, binding time, strictness, program slicing, etc. We address the problem of dependency analysis in the context of the typed λ-calculus. We consider all systems of the λ-cube and extend them conservatively in order to determine which parts of a λ-term may contribute to its evaluation. We show how typing information can be used to statically identify dependencies.
[31] F. Prost. A formalization of static analyses in system F. In H. Ganzinger, editor, Automated Deduction - CADE-16, 16th International Conference on Automated Deduction, LNAI 1632, Trento, Italy, July 1999. Springer-Verlag.
[ bib | .ps.gz ]
In this paper, we propose a common theoretical framework for type-based static analyses. The aim is the study of the relationships between typing and program analysis. We present a variant of Girard's System F. We prove standard properties of this system and show how it can be used to formalize various program analyses, like binding time and dead code. We relate our work to previous analyses in terms of expressiveness (often only simply typed calculi are considered) and power (more information can be inferred). The system features polymorphism as well as subtyping at the level of universes, extending previous work by the author where only universe polymorphism (on a simply typed calculus) was considered.
[32] F. Damiani and F. Prost. Detecting and removing dead code using rank-2 intersection. In International Workshop: TYPES'96, selected papers, volume 1512 of Lecture Notes in Computer Science. Springer-Verlag, 1998.
[ bib ]
In this paper we extend, by allowing the use of rank-2 intersection, the non-standard type assignment system for the detection and elimination of dead code in typed functional programs presented by Coppo et al. at the Static Analysis Symposium '96. The main application of this method is the optimization of programs extracted from proofs in logical frameworks, but it could be used as well in the elimination of dead code determined by program specialization. The use of non-standard types (also called annotated types) makes it possible to exploit the type structure of the language for investigating program properties. Dead code is detected via annotated-type inference, which can be performed in a complete way by reducing it to the solution of a system of inequalities between annotation variables. Even though the language considered in the paper is the simply typed λ-calculus with cartesian product, if-then-else, fixpoint, and arithmetic constants, we can generalize our approach to polymorphic languages like Miranda, Haskell, and CAML.
[33] F. Prost. Using ML type inference for dead code analysis. Research Report RR97-09, LIP, ENS Lyon, France, May 1997.
[ bib | .ps.Z ]
We propose to adapt the ML type inference algorithm to find and erase dead code in simply typed λ-terms. We prove the correctness of our optimization: the optimized program is observationally equivalent to the original one. This paper also sheds new light on the links between typing and static analyses, in the particular case of dead-code analysis. Indeed, the same algorithm can be used both for typing and for dead-code analysis. Moreover, to the best of the author's knowledge, it is new that ML type inference can be used to represent links as encountered in data-flow analyses.
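The core idea, greatly simplified (this toy sketch is not the paper's ML-type-inference algorithm, and its term encoding is invented), is that a binding whose variable never occurs in the live part of a term is dead code and can be erased:

```python
# Toy illustration of dead-code elimination on a tiny term language:
#   ("var", x) | ("app", f, a) | ("let", x, bound, body)
# A let-binding is dead when its variable is unused in the body.

def free_vars(t):
    if t[0] == "var":
        return {t[1]}
    if t[0] == "app":
        return free_vars(t[1]) | free_vars(t[2])
    _, x, bound, body = t  # "let"
    return free_vars(bound) | (free_vars(body) - {x})

def erase_dead_lets(t):
    if t[0] == "let":
        _, x, bound, body = t
        body = erase_dead_lets(body)
        if x not in free_vars(body):
            return body  # the binding is dead: drop it entirely
        return ("let", x, erase_dead_lets(bound), body)
    if t[0] == "app":
        return ("app", erase_dead_lets(t[1]), erase_dead_lets(t[2]))
    return t

term = ("let", "u", ("var", "big"), ("var", "y"))
pruned = erase_dead_lets(term)
# pruned == ("var", "y"): the unused binding of u has been erased
```

Dropping a dead binding preserves observations here because the toy language is pure; in the paper, the analogous guarantee is precisely the observational-equivalence result.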
[34] F. Prost. Marking techniques for extraction. Research Report RR95-47, LIP, ENS Lyon, France, December 1995.
[ bib | .ps.Z ]
Constructive logic can be used to consider program specifications as logical formulas. The advantage of this approach is that it generates programs which are certified with respect to given specifications. The programs created in this way are not efficient, because they may contain large parts with no computational meaning. The elimination of these parts, which we call the extraction procedure, is an important issue, and many attempts to solve this problem have already been made. In this work we present a new way to understand the extraction problem: the marking technique. This new point of view enables us, thanks to a high level of abstraction, to unify what was previously done on the subject. It also enables us to extend to higher-order languages some pruning techniques developed by Berardi and Boerio, which were previously only used in first- and second-order languages.
[35] F. Prost. A formalization of static analyses in system F. Research Report RR1999-07, LIP, ENS Lyon, France, January 1999.
[ bib | .ps.Z ]
We propose a variant of Girard's System F. The aim of this new system is to provide a common framework for various type-based analyses. We modify F by introducing two different universes from which types may be built. Following C. Paulin, those two universes may be seen as Prop and Spec. The originality of our system lies in two points: 1) we define an inclusion relation between the two universes, from which we derive a subtyping relation on types; 2) we introduce a notion of universe variable and develop polymorphism à la ML on universes. We show that this system has the standard properties (confluence, strong normalization, etc.) and how it can be used to formalize various program analyses, like binding time and dead code. We relate our work to previous analyses in terms of expressiveness (often only simply typed calculi are considered) and power (more information can be inferred).
[36] P. Orponen and F. Prost. Parallel programming on Hopfield nets. In Timo Honkela, Jarmo Alander, and Matti Jakobsson, editors, STEP'96 - Genes, Nets and Symbols, pages 5-12. Publication of the Finnish Artificial Intelligence Society, 1996.
[ bib ]

This file has been generated by bibtex2html 1.74