At the Synchronous Languages workshop I attended a very interesting and amusing talk given by Timothy Bourke and Marc Pouzet regarding real-time real-number computation and simulation. It’s a take on the classic puzzle: suppose that two cars *A* and *B* are 100 km apart and drive towards each other at 50 km/hr each. A fly starts at *A* and flies towards *B* at 80 km/hr; whenever it reaches a car it turns around and flies towards the other one. The usual question is “*how many km does the fly travel before it is crushed between the two cars?*”. The question with a twist is: “*how many turns does the fly make before it is crushed between the two cars?*”.

This is of course a Zeno-like situation and, as the problem is stated, the fly must make an infinite number of turns even though it travels a finite distance. So far so good. The amusing turn of the presentation is that there exist commercial (and expensive) software packages out there which can simulate this scenario. The market leader, Simulink, calculates the number of turns very efficiently: **42**. Of course, 42 is an unacceptably poor approximation of infinity. Or maybe it is.
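The effect does not depend on Simulink’s internals: it is easy to reproduce with ordinary double-precision arithmetic. The sketch below (my own, not the presenters’ model) simulates the puzzle leg by leg; round-off eventually makes the car positions stop changing and the loop halts after a finite number of turns.

```python
def count_turns():
    """Simulate the fly puzzle in IEEE double precision.

    Cars A and B approach at 50 km/h each; the fly travels at 80 km/h,
    so the fly and the oncoming car close at 130 km/h.
    """
    a, b = 0.0, 100.0            # car positions (km)
    fly, toward_b = 0.0, True    # fly position and direction
    turns = 0
    while b - a > 0:
        gap = (b - fly) if toward_b else (fly - a)
        dt = gap / 130.0                 # time until the fly meets the car
        new_a, new_b = a + 50.0 * dt, b - 50.0 * dt
        if (new_a, new_b) == (a, b):     # positions stuck below one ulp: stop
            break
        a, b = new_a, new_b
        fly = b if toward_b else a       # the fly is now at the car it just met
        toward_b = not toward_b
        turns += 1
    return turns

print(count_turns())
```

The exact count depends on rounding details, but it comes out as a few dozen turns: the same phenomenon as Simulink’s 42, a finite, confident answer to a Zeno question.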

The first observation is that, of course, *Simulink* uses not real numbers but floating-point representations, which introduce round-off errors. The second observation is that arbitrary-precision representations for real numbers are possible and libraries are actually available. How would *Simulink* work if it used arbitrary precision? For now let us ignore issues of efficiency: producing an answer as hilariously wrong as 42 cannot be excused by the fact that it was produced “efficiently”. It would be preferable that an answer is **not** produced at all! Even a Zeno-like computation that diverged without producing any digits of output would be preferable to a misleadingly confident *42*.

I have not tried to set up this puzzle using arbitrary-precision arithmetic, so I am not sure how it works out. If we calculated not in ℝ but in the unit interval, would the computation produce no output digits, or would it produce a stream of digits converging towards 1? I might just try it, but as interesting as the result would be, it wouldn’t settle the discussion.
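For a rough sense of how it might work out, here is a sketch using Python’s exact `fractions` — a stand-in for, not an instance of, a genuine arbitrary-precision real-number library. With exact rationals the gap between the cars shrinks by a factor of exactly 3/13 per turn, so it stays positive after any finite number of turns, while the elapsed time converges towards the 1-hour crash time. This at least suggests that a digit-stream implementation would emit digits converging towards 1 without ever finishing.

```python
from fractions import Fraction

def after_turns(n):
    """Run n legs of the fly's flight with exact rational arithmetic.

    Returns (gap between the cars in km, elapsed time in hours).
    """
    a, b = Fraction(0), Fraction(100)   # car positions (km)
    fly, toward_b = Fraction(0), True
    t = Fraction(0)
    for _ in range(n):
        gap = (b - fly) if toward_b else (fly - a)
        dt = gap / 130                  # fly and oncoming car close at 130 km/h
        t += dt
        a += 50 * dt
        b -= 50 * dt
        fly = b if toward_b else a
        toward_b = not toward_b
    return b - a, t

# After n turns the gap is exactly 100 * (3/13)**n km (always positive)
# and the elapsed time is exactly 1 - (3/13)**n hours (converging to 1).
gap, t = after_turns(50)
print(gap > 0, float(t))
```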

Marc Pouzet explained to me very convincingly the difference between **simulation** and **computation** in a hybrid continuous-discrete system. The continuous values, at least in this case, are **outside** the computation. The computation **samples** continuous values, converts them to floating point and operates with them. So there are not one but **two** sources of possible errors in the simulation. The first one is, as discussed, the use of floating point, which introduces **round-off errors**. The second one is the **sampling error**, introduced when the external real value is first converted into a floating-point representation. One of the most interesting sampling constructs is a time-sampling construct (*up*) which monitors an (external) continuous function and issues a (boolean) signal when the function changes sign.

How would both these sources of error be affected by switching to arbitrary precision? Programmatic numerical errors would be eliminated, in the usual sense: all digits of a value that are calculated are correctly calculated. But what about sampling errors? In arbitrary-precision arithmetic, equality is not decidable because, simplistically put, you don’t know how many digits to look at before you can figure out whether 0.999999… and 1.00000… are equal or not.
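A toy illustration of the point (my own sketch, not any particular library’s representation): take a real in [0, 1] as a stream of decimal digits. After reading n digits you only know the number to within an interval of width 10⁻ⁿ, and two numbers can be ordered only once their intervals separate. For 0.5000… versus 0.4999… — the same number written two ways — the intervals never separate, so a comparison can never terminate with an answer.

```python
from fractions import Fraction
from itertools import chain, repeat

def try_compare(xs, ys, max_digits):
    """Compare two reals in [0, 1] given as decimal digit streams.

    Returns 1 if x > y, -1 if x < y, or None if still undecided after
    max_digits digits.  After n digits, x lies in [x_lo, x_lo + w] with
    w = 10**-n, so x > y is only certain once x_lo > y_lo + w.
    """
    x_lo = y_lo = Fraction(0)
    w = Fraction(1)
    for _ in range(max_digits):
        w /= 10
        x_lo += next(xs) * w
        y_lo += next(ys) * w
        if x_lo > y_lo + w:
            return 1
        if y_lo > x_lo + w:
            return -1
    return None   # the intervals still overlap: keep reading digits

half     = lambda: chain([5], repeat(0))   # 0.5000...
half_too = lambda: chain([4], repeat(9))   # 0.4999... = 0.5
smaller  = lambda: chain([4], repeat(0))   # 0.4000...

print(try_compare(half(), smaller(), 100))   # 1: decided after two digits
print(try_compare(half(), half_too(), 100))  # None: undecidable, no matter how far we read
```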

So what would happen to *Simulink* if we wanted to use arbitrary precision both in the computation and in the simulation? Something beautiful would happen! *Simulink* would no longer allow you to describe physically unrealisable models. Physical functions must be computable. The fix is very simple: whenever you sample a continuous value, the margin of error of the sampling process must be specified. The margin of error tells you that you don’t need to bother looking at the digits below the sampling error, because they are irrelevant.
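Concretely, on the fly puzzle (again a sketch of my own, with the sampling margin `eps` as an explicit parameter): stop as soon as the remaining gap falls within the margin of error, and the turn count becomes finite, computable, and honestly dependent on the instrument’s precision.

```python
from fractions import Fraction

def turns_until(eps):
    """Count the fly's turns until the gap between the cars falls within
    the sampling margin eps (in km) -- i.e. until the fly can no longer
    tell which car it is on.  Exact rational arithmetic throughout."""
    a, b = Fraction(0), Fraction(100)   # car positions (km)
    fly, toward_b, turns = Fraction(0), True, 0
    while b - a > eps:
        gap = (b - fly) if toward_b else (fly - a)
        dt = gap / 130                  # fly and oncoming car close at 130 km/h
        a += 50 * dt
        b -= 50 * dt
        fly = b if toward_b else a
        toward_b = not toward_b
        turns += 1
    return turns

print(turns_until(Fraction(1, 10**3)))   # 8 turns with a 1 m margin
print(turns_until(Fraction(1, 10**6)))   # 13 turns with a 1 mm margin
```

Since the gap after n turns is exactly 100 · (3/13)ⁿ km, the count grows only logarithmically as the margin shrinks: each thousandfold improvement in the instrument buys the fly about five more turns.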

This is a somewhat subtle point: even if you sample the continuous external value into a value represented with arbitrary precision, the sampling still needs to be made **computable** through the specification of a sampling error. The arbitrary-precision mechanism used in the computation only guarantees that no additional errors are introduced; it cannot compute the uncomputable! And sampling a real number with infinite precision is uncomputable because it requires an infinite amount of information to be transmitted in a finite amount of time.

What about the *up* construct? It is external and it only needs to communicate a finite amount of information to the program (1 bit), so why is it unrealistic? The uncomputability of *up* comes from the infinitely precise **timing** of the signal. In other words, the amount of information is still infinite because, in addition to the bit representing the signal, we have the potentially infinite precision of the moment at which it is received.

Of course, specifying a sampling error is perfectly reasonable: any instrument that can be produced must have finite precision, therefore must introduce a sampling error. The use of arbitrary precision arithmetic only prevents you from writing simulations that are nonsensical anyway. This is a good thing!

In the case of our initial puzzle, the introduction of a sampling error would quickly lead to the identification of a special case in the simulation: the cars *A* and *B* becoming so close that the fly cannot tell whether it is on *A* or on *B*. At that point the simulation could stop, and the answer could be *42*! But now the answer would be correct, because this is the correct behaviour for a system in which the sensors have a (known) sampling error. The greater the precision of the instruments, the more turns the fly will be able to make before it enters the area of uncertainty.

As my colleague and area expert Martín Escardó can attest, I was quite skeptical regarding the usefulness of arbitrary-precision real numbers. I was clinging to the faulty intuition that the **real** real numbers are things that you can grasp and work with. They are more complicated than that, it turns out. And the fact that the arbitrary-precision representation forces you to face their subtlety, and thus avoid unrealistic computation (and simulation) scenarios, is to me a strong indication that arbitrary-precision real numbers could be quite useful!

Facebook discussion: https://www.facebook.com/dan.ghica/posts/10152092417363092

Thanks for the great post, Dan. The technique presented in the paper addresses the first question (distance) but not the second (number of turns). The first is quite important for practical problems (such as in robotics) because things are often well-behaved past the Zeno point. If we focus on the Zeno point, there will always be questions not so easy (computationally) to answer. For example, did the ball bounce an odd or an even number of times?

Walid Taha pointed out his recent paper which is exactly on this topic: finding Zeno.

The criticism of Simulink is unwarranted — 42 is the correct answer — indeed it is the Answer to the Ultimate Question of Life, the Universe, and Everything…

Indeed. I have updated the post to reflect this.