Tim Muller

I am an Assistant Professor in the School of Computer Science at the University of Nottingham. Previously, I was a Departmental Lecturer in Security on the Software Engineering Programme in the Department of Computer Science at the University of Oxford, and a Research Fellow at Nanyang Technological University in Singapore. I have a broad range of interests, including formal methods, security and trust. A summary of some of my research is given below. I teach Mathematics for Computer Science to first-year students.

In my spare time, I enjoy playing poker, futsal and squash. You can always ask me to make (and eat!) a tasty, spicy hotpot.

Tim Muller

School of Computer Science

University of Nottingham, CS Building, Office B50a

+447947657949

t.j.c.muller@gmail.com

Research

Trust and security are intertwined concepts. When security guarantees are unobtainable, users may be forced to trust others. There is an active field of research into designing mechanisms that help users avoid trusting the wrong people (trust systems). Trust systems themselves, however, are also susceptible to attacks; attacks identified in the literature have subsequently been observed on existing systems. Typical attacks on trust systems manipulate trust scores. My primary research interest is where trust and security meet: how secure are you when you trust a certain person, especially given that they may have manipulated you into trusting them?

Fig. 1. Only when sufficiently trusted will the wolf reveal himself.

Fig. 2. p is the proportion of attackers, q is the probability that a lie will be identified in hindsight, and a determines whether an attacker will actually lie.

Ultimately, when we want to reason about trust in a security system, we have to acknowledge that attackers will not behave as we want them to; even the incentives of the attacker are unknown. What is the worst an attacker can do? The idea behind our information-theoretic approach is that trust ratings contain some amount of information, depending on the probability that the source of the rating is honest. Attackers lower the information content by introducing noise. Increasingly expressive and nuanced models have been published. In "Is It Harmful When Advisors Only Pretend to Be Honest?", we study dynamic behaviour by attackers. We were the first to formally analyse how attackers should change their behaviour to frustrate a trust system, and we found some surprising results. In particular, analysing the robustness of a system only under the "camouflage attack" (attackers are honest until trusted) is too weak to be meaningful: far more powerful attacks exist.
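To make the idea concrete, here is a minimal sketch (my own illustration, not one of the published models): a single binary rating is treated as the output of a noisy channel, where p is the proportion of attackers and a is the probability that an attacker actually lies, as in Fig. 2. The mutual information between the target's true quality and the received rating then quantifies how much the attackers' noise erodes the value of the rating; the prior on the target's quality is an assumption of the sketch.

# Minimal sketch: information carried by one binary rating under unfair rating attacks.
import math

def binary_entropy(x: float) -> float:
    """H(x) in bits; H(0) = H(1) = 0."""
    if x in (0.0, 1.0):
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def rating_information(p: float, a: float, prior_good: float = 0.5) -> float:
    """Mutual information (bits) between the target's true quality and one rating.

    p          -- proportion of attackers among advisors (Fig. 2)
    a          -- probability that an attacker actually lies (Fig. 2)
    prior_good -- assumed prior probability that the target is good
    """
    flip = p * a                      # overall probability the rating is inverted
    # P(rating says "good") under the prior
    r_good = prior_good * (1 - flip) + (1 - prior_good) * flip
    # I(T; R) = H(R) - H(R | T); for this symmetric channel H(R | T) = H(flip)
    return binary_entropy(r_good) - binary_entropy(flip)

if __name__ == "__main__":
    print(rating_information(p=0.0, a=1.0))   # no attackers: the rating carries 1 bit
    print(rating_information(p=0.3, a=1.0))   # attackers always lie: information drops
    print(rating_information(p=0.5, a=1.0))   # half the advisors lie: 0 bits left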

If we are to apply trust to security systems, then it becomes crucial that the decisions made using ratings are robust. The actual probability that a decision is correct depends on the behaviour of attackers, which is unknown. A decision is robust if, for all behaviours, the probability of the decision being correct remains above some (high) threshold. The project "Provably Secure Decisions Based on Potentially Malicious Trust Ratings" aims to construct such a solution for existing systems that require robust decisions. The underlying technique is based on Shannon's theorem, using the fact that the information in a rating is non-zero in most cases.

A major difference from other approaches is that we accept that some users are probably attackers, and build a provably secure system using these potentially malicious users' ratings.
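As a rough illustration of what such a robustness guarantee looks like (my own simplification, not the construction used in the project), consider a plain majority vote over n binary ratings when at most a fraction p of the sources may be malicious and honest ratings are correct with probability q. For majority voting, the worst an attacker can do is make every malicious rating wrong, so a lower bound on the probability of a correct decision can be computed and compared against the required threshold.

# Minimal sketch: worst-case correctness of a majority-vote decision.
from math import comb

def worst_case_correct(n: int, p: float, q: float) -> float:
    """Lower bound on P(majority decision is correct).

    n -- number of ratings collected
    p -- maximum fraction of malicious sources (their ratings assumed wrong)
    q -- probability that an honest rating is correct
    """
    # Worst-case probability that a single rating is correct
    r = (1 - p) * q
    # Majority vote over n ratings is correct if more than n/2 ratings are correct
    return sum(comb(n, k) * r**k * (1 - r)**(n - k) for k in range(n // 2 + 1, n + 1))

def ratings_needed(p: float, q: float, threshold: float = 0.99) -> int:
    """Smallest odd n whose worst-case success probability exceeds the threshold."""
    n = 1
    while worst_case_correct(n, p, q) < threshold:
        n += 2
    return n

if __name__ == "__main__":
    # e.g. how many ratings to collect before deciding, under these assumed parameters
    print(ratings_needed(p=0.2, q=0.9))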

Besides faking ratings, there are other ways to attack a trust and reputation system. One example that I find particularly interesting is the reputation lag attack. It takes time for a reputation to spread, and an attacker can exploit this by continuing to act before its reputation has caught up with its misbehaviour. Any naive discretisation of time might not accurately capture the subtleties of the reputation lag attack. As a continuation of "On Robustness of Trust Systems", we are currently developing a simple tool that can handle a minimal extension of both continuous-time Markov chains and partially observable Markov processes. The idea is that undesirable states should be sufficiently improbable, even with an attacker that can act in any (legal) way, at any time.
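A toy continuous-time illustration of the lag (my own, far simpler than the tool under development): the attacker starts cheating at time 0, the report of the first cheat reaches a potential victim after an exponentially distributed delay, and the victim initiates transactions at exponentially distributed intervals. The undesirable event is a transaction that happens before the bad reputation arrives; all rates below are assumed parameters.

# Toy Monte Carlo estimate of the reputation lag window.
import random

def p_victim_hit(report_rate: float, transact_rate: float, trials: int = 100_000) -> float:
    """Estimate P(victim transacts with the attacker before the report arrives).

    report_rate   -- rate at which the cheating report reaches the victim
    transact_rate -- rate at which the victim initiates transactions
    """
    hits = 0
    for _ in range(trials):
        lag = random.expovariate(report_rate)        # time until reputation catches up
        next_tx = random.expovariate(transact_rate)  # time until the next transaction
        if next_tx < lag:
            hits += 1
    return hits / trials

if __name__ == "__main__":
    # For two independent exponential clocks the exact answer is
    # transact_rate / (transact_rate + report_rate), so this should be close to 2/3.
    print(p_victim_hit(report_rate=1.0, transact_rate=2.0))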

Publications

Semantics of Trust, T Muller, Formal Aspects of Security and Trust, 141-156 (2010).

A Formal Derivation of Composite Trust, T Muller, P Schweitzer, Foundations and Practice of Security, 132-148 (2012).

On Beta Models with Trust Chains, T Muller, P Schweitzer, Trust Management VII, 49-65 (2013).

On Robustness of Trust Systems, T Muller, Y Liu, S Mauw, J Zhang, Trust Management VIII, 44-60 (2014).

Towards Robust and Effective Trust Management for Security: A Survey, D Wang, T Muller, Y Liu, J Zhang, Trust, Security and Privacy in Computing and Communications (TrustCom) (2014).

Expressiveness Modulo Bisimilarity of Regular Expressions with Parallel Composition, JCM Baeten, B Luttik, T Muller, P van Tilburg, Mathematical Structures in Computer Science, 1-36 (2015).

Using Information Theory to Improve the Robustness of Trust Systems, D Wang, T Muller, AA Irissappane, J Zhang, Y Liu, International Conference on Autonomous Agents and Multiagent Systems, 791-799 (2015).

The Fallacy of Endogenous Discounting of Trust Recommendations, T Muller, Y Liu, J Zhang, International Conference on Autonomous Agents and Multiagent Systems, 563-572 (2015).

Quantifying Robustness of Trust Systems against Collusive Unfair Rating Attacks using Information Theory, D Wang, T Muller, J Zhang, Y Liu, International Joint Conference on Artificial Intelligence (2015).

Trust Revision for Conflicting Sources, A Jøsang, M Ivanovska, T Muller, International Conference on Information Fusion (Fusion), 550-557 (2015).

Information Theory for Subjective Logic, T Muller, D Wang, A Jøsang, Modeling Decisions for Artificial Intelligence, 230-242 (2015).

Is It Harmful When Advisors Only Pretend to Be Honest?, D Wang, T Muller, J Zhang, Y Liu, AAAI Conference on Artificial Intelligence (AAAI) (2016).

A Language for Trust Modelling, T Muller, J Zhang, Y Liu, International Workshop on Trust in Agent Societies (2016).

Limitations on Robust Ratings and Predictions, T Muller, J Zhang, Y Liu, IFIP International Conference on Trust Management, 113-126 (2016).

How to Use Information Theory to Mitigate Unfair Rating Attacks, T Muller, D Wang, J Zhang, Y Liu, IFIP International Conference on Trust Management, 17-32 (2016). [best paper award]

The Foul Adversary: Formal Models, N Dong, T Muller, The 20th International Conference on Formal Engineering Methods (2018).

Information Theoretical Analysis of Unfair Rating Attacks Under Subjectivity, D Wang, T Muller, J Zhang, Y Liu, IEEE Transactions on Information Forensics and Security 15, 816-828 (2019).

An Unforeseen Equivalence Between Uncertainty and Entropy, T Muller, IFIP International Conference on Trust Management, 57-72 (2019).

The Reputation Lag Attack, S Sirur, T Muller, IFIP International Conference on Trust Management, 39-56 (2019).

Provably Robust Decisions based on Potentially Malicious Sources of Information, T Muller, D Wang, J Sun, IEEE 33rd Computer Security Foundations Symposium (CSF), 411-424 (2020).