
Olga Holtz

Full Professor, Department of Mathematics, University of California, Berkeley; Member, Institute for Advanced Study, Princeton; Professor of Applied Mathematics, Technische Universität Berlin; Matheon; Berlin Mathematical School

"Many challenging industrial tasks can only be solved with the help of math”: Russian-born leading math researcher Olga Holtz has very clear ideas on the importance of applied mathematics in our lives. Having developed her studies between Russia, the United States and Europe, she is still an intercontinental scientist directing high budget research projects and teaching both at the Technische Universität in Berlin and at the University of California in Berkeley. Awarded in 2006 with the Alexander von Humboldt Foundation‘s Sofia Kovalevskaja Award, one of the highest endowed research awards in the world, in 2008, with the European Mathematical Society Award, and in 2010 with a Starting Grant from the European Research Council (ERC), Olga Holtz has been able to put gather outstanding teams of pure and applied mathematicians and theoretical computer scientists. At the Falling Walls Conference, Olga Holtz will explain how reliable and quick mathematical methods can accelerate computer calculations, and break the walls of complexity connected to large amounts of data, for example, in the fields of airplane and pharmaceutical development, in the aerospace industry, in architecture and in the establishment of mobile networks.

Breaking the Walls of Complexity. How New Mathematical Methods Allow for New Solutions to Everyday Problems

Transcription

When the wall fell I was a schoolgirl in Chelyabinsk in the Ural Mountains. It was an event that changed the world and my life.

I should thank the organisers for putting together such a great event. It is a pleasure to be here, as always. It is a pleasure to be in Berlin. I will try to introduce a mathematical area that many of you may have heard about, or may even know about in some detail. I will try to make precise the notion of complexity, which again should be quite intuitive: we all talk about the complexity of our everyday problems. Some things are complex, some things are complicated, and that is what we intuitively understand by complexity. In fact, the same occurs in mathematics: some problems are more complicated than others. Mathematicians are these pedantic people who want to quantify everything, and they want to get hold of and understand things in really great technical detail.

So how do you understand complexity? You want to solve a problem, which typically is a question awaiting an answer. Typically it is a general, abstract question, not something concrete like: how do I find my way from my home to the university? But something more general: how do I connect two points, A and B, say, on a network in the most efficient way? This can be quantified completely precisely, and this becomes a problem. There are typically infinitely many instances of a problem, and each has its own solution. So, that is what we call the problem.
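
To make the "connect A and B" problem concrete, here is a minimal sketch in Python (my illustration, not part of the talk): Dijkstra's algorithm finds the cheapest connection between two points on a small network; the function name, the nodes and the edge costs are all invented for the example.

import heapq

def cheapest_connection(graph, start, goal):
    # graph: node -> list of (neighbour, edge cost)
    # Dijkstra's algorithm: repeatedly settle the cheapest reachable node.
    queue = [(0, start)]
    best = {start: 0}
    while queue:
        cost, node = heapq.heappop(queue)
        if node == goal:
            return cost
        if cost > best.get(node, float("inf")):
            continue
        for neighbour, weight in graph.get(node, []):
            new_cost = cost + weight
            if new_cost < best.get(neighbour, float("inf")):
                best[neighbour] = new_cost
                heapq.heappush(queue, (new_cost, neighbour))
    return None  # no connection exists

# One concrete instance of the abstract problem (costs are invented):
network = {"A": [("B", 4), ("C", 1)], "C": [("B", 2)], "B": []}
print(cheapest_connection(network, "A", "B"))  # -> 3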

An instance can be thought of as an input to an algorithm. Again, if we talk about algorithms, we intuitively know what that means. There is a formalised notion for that, which I will tell you about in a second. The solution is the output of an algorithm. So, what could be a problem in mathematics? Primality testing: is a given positive integer prime? You may know what prime means; primes are the numbers that are divisible only by one and themselves. These are incredibly important building blocks in number theory and in mathematics as a whole. So that is a general problem. This happens to be solvable in polynomial time: polynomial in the input size, which again has to be quantified correctly. It was a great achievement when that result was established, about eight years ago.
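
A hedged illustration of the decision problem itself (mine, not the speaker's). The polynomial-time result the talk alludes to is presumably the AKS primality test from 2002; the trial-division check below is not that algorithm and is in fact exponential in the number of digits of the input, but it shows what the yes/no question looks like.

def is_prime(n):
    # Decision problem: is the positive integer n prime?
    # Trial division up to sqrt(n); simple, but NOT polynomial in the
    # input size (the number of digits of n), unlike the AKS test.
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

print([p for p in range(2, 30) if is_prime(p)])  # 2, 3, 5, 7, 11, ...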

Another problem that I sort of mentioned already is the travelling salesman problem: how do you connect several points, let's say on a network, where you want to visit each point exactly once and be as economical as possible about it? So you want to minimise the total length. It can be measured in many different ways: you want to minimise the total cost, and it is the same problem. The beauty of mathematics is that with one tool, one mechanism, and one model, you can understand a whole huge area, a whole class of problems, as we say. So, some problems are decision problems, where the answer is yes or no, to be contrasted with problems that have a numerical answer, a concrete answer like 42, or a path in a graph, something like that.
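
As a rough sketch (my own, not the speaker's), here is the most naive way to solve a tiny travelling salesman instance in Python: try every order of visiting the cities. The distance matrix is invented; the point is that the number of orders grows factorially with the number of cities, which is why this only works for very small instances.

from itertools import permutations

def tsp_brute_force(dist):
    # dist[i][j] = cost of travelling from city i to city j (invented numbers).
    # Enumerate every tour that starts and ends at city 0 and visits each
    # other city exactly once, keeping the cheapest one found.
    n = len(dist)
    best_cost, best_tour = float("inf"), None
    for order in permutations(range(1, n)):
        tour = (0,) + order + (0,)
        cost = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if cost < best_cost:
            best_cost, best_tour = cost, tour
    return best_cost, best_tour

distances = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 8],
    [10, 4, 8, 0],
]
print(tsp_brute_force(distances))  # -> (23, (0, 1, 3, 2, 0))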

So, I did mention how we measure the instance size. Typically, we encode our inputs; they can be encoded as bit strings, and then the typical size is in bits. You don't have to do that. You can measure it in many, many different ways. A number can be measured by the number of digits in its representation, binary or otherwise, and so forth. So there are many, many ways to do that. In terms of complexity, you can also understand different things. What is typically thought of as complexity is the worst-case complexity, where you want to see how an algorithm, your procedure for finding a solution, behaves under the worst possible scenario. This can also be compared with the average-case scenario or the best-case scenario. Usually the best-case scenario is really not challenging, so you don't necessarily want to analyse that.
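
A tiny sketch of "instance size" (my illustration, not the speaker's): the same integer has different sizes depending on the chosen encoding, for example bits versus decimal digits.

def input_sizes(n):
    # Size of the integer n under two different encodings:
    # length of its binary representation vs. number of decimal digits.
    return n.bit_length(), len(str(n))

print(input_sizes(1_000_003))  # -> (20, 7): 20 bits, 7 decimal digits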

So what is polynomial time? Polynomial time typically means worst-case complexity: given an input of size N, some function of N tells you how much computational time you will spend to eventually arrive at an answer. If this function is a polynomial, then we say that the algorithm works in polynomial time.
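
In symbols (my notation, not the speaker's): the worst-case running time over all inputs of size $N$ is $T(N) = \max_{|x| = N} \mathrm{steps}(x)$, and the algorithm runs in polynomial time if there exist constants $C$ and $k$ such that $T(N) \le C \cdot N^{k}$ for all $N$.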

These notions go back even before Turing, but definitely to the work of Alan Turing: a very profound contribution to theoretical computer science. Turing was the first to formulate this very detailed notion of an idealised machine, which actually models all our everyday computers today. Every computer you use is effectively an instance of a universal Turing machine. So a Turing machine is something that can be understood, in fact, very simply as a mechanical device. It is an idealised device. It has an infinitely long tape. So there is a tape, and there is a head that reads the tape. The tape consists of segments; it is divided into segments. A series of commands is executed. The commands are very simple: you move along the tape, you read or write a symbol, and you record the state of the machine. Such a simple, idealised device is actually equivalent to the machines of today with all these millions of microchips that we use.
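
A minimal sketch of such a machine in Python (mine, not the speaker's): the tape is modelled as a dictionary so it is "infinite" in both directions, and the rules table, the toy machine and the blank symbol '_' are all invented for the example.

def run_turing_machine(rules, tape, state="start", steps=100):
    # rules: (state, symbol) -> (symbol to write, move 'L' or 'R', new state)
    cells = {i: s for i, s in enumerate(tape)}
    head = 0
    for _ in range(steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")          # '_' is the blank symbol
        write, move, state = rules[(state, symbol)]
        cells[head] = write                    # write a symbol
        head += 1 if move == "R" else -1       # move the head one cell
    return "".join(cells[i] for i in sorted(cells))

# A toy machine that flips every bit on the tape and then halts.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_turing_machine(flip, "10110"))  # -> 01001_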

There are different subcategories: deterministic, non-deterministic, quantum Turing machines, lots of different flavours, and likewise there are lots of different flavours of complexity measures. As I said, you typically measure the time required or the number of arithmetic operations required. This is the complexity of an algorithm. It is, in a very precise sense, a measure of how complex a problem is. A problem, remember, is not an instance, right; it is a whole class of instances, all answering the same kind of question.

As I mentioned already, what we see today started with the work of Turing and led very quickly, in fact, in the 40s and early 50s, to the development of electronic computers. This is one of my favourite photographs, one of my heroes: Robert Oppenheimer standing with another very, very remarkable scientist, John von Neumann, a darker figure, if you will. They are standing in front of ENIAC, one of the first electronic machines, built according to the von Neumann architecture. This was all at the IAS in Princeton, in a very special place and with a very special couple of people.

What does a complexity theorist do? A complexity theorist, or a theoretical computer scientist, tries to provide upper and lower bounds on complexity, that is, tries to analyse how much time or how many arithmetic operations are actually required to solve a problem, and this is quantified. Upper bounds just require you to produce an algorithm that performs with that speed. Lower bounds are much trickier. They require a very careful analysis of the problem.

There is a whole hierarchy of complexity classes. The most famous are P and NP. P is what is doable in polynomial time on a deterministic Turing machine; NP is the same for non-deterministic Turing machines. So it is a technical distinction, which is a little tricky to explain. This is one of the most outstanding problems in theoretical computer science: is P the same class of problems as NP? The majority of theoretical computer scientists believe that this is not the case. There is a great deal of indirect evidence that this is not the case. However, the question is still open, and this question is worth a million dollars if you manage to solve it.

As I said, NP means, roughly, checkable in polynomial time; this deserves a long discussion, but I will not go into it. As I said, there are many important, everyday problems that are, in fact, NP-complete, which means that each of them is a representative of the entire class NP: if such a problem happens to be solvable in polynomial time, then you solve all NP problems in polynomial time, which would be a very stunning result. I am not sure we are ready to see that. As I said, the majority believes that is not going to be the case.
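
A minimal sketch of "checkable in polynomial time" (mine, not the speaker's): given a proposed travelling salesman tour and a cost budget, the check below runs in polynomial time, even though finding a good tour appears hard. The distance matrix is the same invented one as in the earlier sketch.

def verify_tour(dist, tour, budget):
    # The certificate is a proposed order of cities. Checking that it visits
    # every city exactly once and stays within the budget is fast; that is
    # the sense in which the problem is in NP.
    n = len(dist)
    if sorted(tour) != list(range(n)):
        return False
    closed = list(tour) + [tour[0]]
    cost = sum(dist[a][b] for a, b in zip(closed, closed[1:]))
    return cost <= budget

distances = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 8],
    [10, 4, 8, 0],
]
print(verify_tour(distances, [0, 1, 3, 2], 25))  # True: 2 + 4 + 8 + 9 = 23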

This small picture shows one particular solution of the travelling salesman problem for the 15 biggest German cities.

Now, finally, what do I do with my collaborators? We are analysing a slightly different flavour of algorithm performance, namely complexity in the sense of communication, for problems so large that many computers or many processors have to deal with them. In that case we want to analyse how much communication is required: how much information the different processors have to send to and receive from each other. We call this the communication complexity of algorithms. Our goal, of course, is to minimise energy and time costs. This is becoming the next bottleneck: not so much the arithmetic complexity that I just told you about, but the communication complexity that we are now trying to analyse.
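
A toy sketch of the idea (my own illustration, not the authors' framework): count the words of data one processor must receive when an n-by-n matrix multiplication is split across p processors by rows. The naive scheme below ships almost the whole second matrix to every processor, which is exactly the kind of cost that communication-optimal algorithms are designed to avoid.

def words_received_row_split(n, p):
    # Toy model: each processor owns n/p rows of A and n/p rows of B.
    # To compute its n/p rows of the product it needs ALL of B, so it
    # must receive the remaining (p-1)/p of B from the other processors.
    return n * n * (p - 1) // p   # words received by one processor

# Per-processor traffic barely drops as p grows in this naive scheme,
# which is why communication, not arithmetic, becomes the bottleneck.
for p in (2, 4, 8, 16):
    print(p, words_received_row_split(1024, p))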

There are many different computer architectures, and in all these instances one has to somehow figure out what is possible in principle and what is intrinsically not possible, just like in the arithmetic set-up. That is very recent work. My co-authors are mostly in Berkeley: G. Ballard, J. Demmel, and also O. Schwartz, who is currently at the Weizmann Institute in Israel but coming to work with us very soon. We have been building a framework to make this analysis possible, much in the spirit of arithmetic complexity. We are quite happy that we were able to analyse, in fact, most of the linear algebra algorithms that are used everywhere: for problems of control theory, computerised geometric design, you name it, scheduling, logistics, and so forth.

What we could do: we could show that many things are optimal already, and many things can be optimised. We also established intrinsic bounds as to how far you can go, limits that you cannot go beyond, in the sense of lower complexity bounds.

Another exciting thing, which I unfortunately cannot explain in detail to a general audience: we were using lots of different tools, some from analysis, some from algebra, and some from combinatorics. In particular, we borrowed a very important notion from graph theory that has to do with graph expansion. It is a notion about local properties of a graph, about the connectivity of a graph. We analyse massive computation graphs, which are like a scheme of an algorithm, of how the algorithm is actually going to be performed. These can all be analysed through graphs as well. Graphs can be analysed using algorithms, whereas algorithms can be analysed using graphs.
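
A hedged sketch of the notion (mine, not the speaker's): one common way to quantify expansion is edge expansion, the minimum, over small vertex sets S, of the number of edges leaving S divided by the size of S. The brute-force computation below only works for tiny graphs; the graph is invented for the example.

from itertools import combinations

def edge_expansion(edges, nodes):
    # Over all subsets S containing at most half the nodes, take the
    # minimum of (edges leaving S) / |S|. Well-connected graphs have
    # large expansion; a graph with a bottleneck has small expansion.
    nodes = list(nodes)
    best = float("inf")
    for k in range(1, len(nodes) // 2 + 1):
        for subset in combinations(nodes, k):
            s = set(subset)
            boundary = sum(1 for a, b in edges if (a in s) != (b in s))
            best = min(best, boundary / len(s))
    return best

# A 4-cycle: every pair of adjacent vertices has exactly 2 edges leaving it.
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(edge_expansion(cycle, range(4)))  # -> 1.0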

Combining all these techniques, we have been able to do some of the analysis that is becoming quite important now. There are lots of questions that remain. This is all related not only to theoretical computer science and to mathematics; it is also related to engineering and to problems of design, as we have seen before, as in the travelling salesman problem. So, lots of challenges remain, and I will be happy to discuss this in detail if someone is interested. Thank you.

 
