The United States Presidential Election – The hope for political changes in favor of the 98%

I am following with great interest the recent developments in the campaigns for the 2016 United States presidential election. In recent years I have lost hope in political change, mainly due to the extreme and ever-increasing concentration of power and wealth around the world. We are living in a system in which power and wealth are effectively synonymous and concentrated in the hands of a small minority, a system built on deep and complex interdependencies among power structures such as politics, the media, the police, the army, and the intelligence agencies. Even for an optimist it is difficult to envision how this situation could change for the better. I personally thought that we had already missed the point at which such a change would have been possible, and that some kind of revolution in the near future seemed unavoidable. But then I heard about Bernie Sanders, one of the candidates for the Democratic Party's nomination in the 2016 United States presidential election. Suddenly, change seems possible again, and I hope with all my heart that the voters in the United States make use of this historic opportunity. Let me be clear:

If your annual income is less than $200,000, there is no rational alternative to voting for Bernie Sanders, regardless of your ethnic background, your religious beliefs, your favorite party, your gender, or your age.

This may seem like a strong statement, but it is exactly what I believe to be rational. Let me briefly explain why. As mentioned before, power and wealth are effectively synonymous and concentrated in the hands of a small minority. This means that those in power have a strong self-interest in maintaining, or even increasing, this predominant inequality. The campaigns of almost all candidates are for the most part funded by wealthy individuals and big corporations. This is deeply troubling for a democracy in which power is supposed to be vested in the people, and this fact alone should raise your suspicion. There is an exception, though: Bernie Sanders.

Currently, the majority of the population is being systematically exploited by a small minority. Most hard-working people do not get their fair share of the pie, and families and children are living in poverty, while in the US more than 50% of all income goes to the top 10% and almost all of the income growth in recent years has gone to the top 1%. In addition, the current system fosters an irreversible destruction of the environment. These issues are unacceptable and extremely urgent. Let me give you a few facts to think about (together with their sources).

  • The wealthiest 85 people on the planet have more money than the poorest 3.5 billion people combined. [1]
  • The amount of money paid out in Wall Street bonuses last year is twice what all minimum-wage workers in the country earned combined. [1]
  • Since 1990, CEO compensation has increased by 300%. Corporate profits have doubled. The average worker’s salary has increased 4%. Adjusted for inflation, the minimum wage has actually decreased. [1]
  • 110 million Americans live amid levels of air pollution that the federal government considers harmful to their health. [4]
  • The United States has 5% of the world’s population, but 25% of the world’s prisoners. [3]
  • Each year, humankind adds six to eight billion tons of carbon to the atmosphere by burning fossil fuels and destroying forests, pumping up the concentration of greenhouse gases responsible for global warming – an effect that could raise temperatures by three to ten degrees by the year 2050. [5]
  • In 2013, 45.3 million people lived in poverty in the USA. [2]
  • The poorest half of the US owns 2.5% of the country’s wealth. The top 1% owns 35% of it. [1]
  • The top 1% of America owns 50% of investment assets (stocks, bonds, mutual funds). The poorest half of America owns just 0.5% of the investments. [1]
  • Up to 400,000 people are killed each year in the US due to preventable medical errors. [6]
  • $765 billion, or 30% of all US healthcare costs, is wasted each year. [6]
  • The US spent $80 billion on incarceration in 2010 alone. [3]
  • Three out of four young black men in Washington, D.C., can expect to serve time behind bars. This is despite the fact that people of all races use and sell drugs at the same rate. [3]
  • More than 96% of convictions in the federal system result from guilty pleas rather than decisions by juries. [3]
  • A large study has found that up to one half of all plant and animal species on dry land could face extinction by the year 2050 due to global warming. According to the World Resources Institute, 100 species die each day due to tropical deforestation. [4]
  • 92% of US physicians admitted to making some medical decisions based on avoiding lawsuits, as opposed to the best interest of their patients. [6]

This is definitely not a world in which I want to live, and I am sure that most people would agree with me. We cannot expect candidates like Hillary Clinton or Donald Trump to change things for the better for the majority of the population, the not super wealthy, or in other words: the 98% (sometimes also called the 99%). The campaigns of those candidates are funded by the establishment, and those campaign contributions are not just donations, they are investments. Therefore, my hopes are with Bernie Sanders, and I hope yours are too. If we want to live in peace with respect for the environment and its limits, if we want social security, a working healthcare system, and a legal system based on justice, there is simply no alternative to Bernie Sanders.

I am not an American citizen myself. I was born in Germany and I am currently living in Switzerland. The problems described here are global problems and not only an American issue, although many of them are more extreme in the US than in other countries. I will probably take a closer look at some of these issues in future blog posts. For the moment I am just excited about the mere possibility of political change in the US, which might even spread to the rest of the world.



Inherent Limitations for Benchmarking Problems beyond NP

Today I would like to talk about some difficulties in benchmarking hard problems. By hard problems I mean problems believed to be harder than NP, or related classes, from a computational complexity point of view. This is a very interesting topic on which I am currently working in collaboration with Cassio P. de Campos from Queen’s University Belfast. The motivation for this work comes from the fact that we are both confronted with very difficult computational problems in our research. In the past I have worked a lot with stochastic combinatorial optimization problems, which can be extremely difficult to solve or even to approximate. For example, the Probabilistic Traveling Salesman Problem with Deadlines is #P-hard, which implies NP-hardness and is in fact a much stronger hardness result. Indeed, already the evaluation of the objective function for this problem is #P-hard. The question is: how can we empirically assess the performance of algorithms for difficult problems such as the Probabilistic Traveling Salesman Problem with Deadlines?
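To make the evaluation difficulty concrete: since computing the expected cost of a tour exactly is #P-hard, a common practical workaround is Monte Carlo estimation. The sketch below uses a deliberately simplified model (customers are independently present with given probabilities, absent customers are skipped, and deadlines are ignored); the function names and the cost model are my own illustrative assumptions, not the actual problem definition.

```python
import random

# Hedged sketch: Monte Carlo estimation of the objective of a simplified
# probabilistic TSP. Exact evaluation is #P-hard, so one samples customer
# realizations instead. This is an illustration, not the full
# Probabilistic Traveling Salesman Problem with Deadlines.

def tour_cost(tour, present, dist):
    # Skip absent customers; travel directly between consecutive present ones.
    visited = [c for c in tour if present[c]]
    if len(visited) < 2:
        return 0.0
    return sum(dist[visited[i]][visited[i + 1]] for i in range(len(visited) - 1))

def estimate_objective(tour, probs, dist, n_samples=10000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        present = [rng.random() < p for p in probs]  # sample one realization
        total += tour_cost(tour, present, dist)
    return total / n_samples  # unbiased estimate of the expected tour cost
```

Note that this only sidesteps the hardness: the estimate comes with sampling error, and for rare but expensive realizations the variance can be substantial.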

The situation is not too bad if we are able to compute the value of an optimal solution, and to evaluate candidate solutions, in a reasonable amount of time. But since the problem is #P-hard, this is only possible for very small instances. Instances of practical relevance are unlikely to belong to that category. What can we do in that case?

If we create a benchmark by sampling from the set of relevant instances in a fair way, we run into the following two problems: in general, we are neither able to compute an optimal solution (or the value of an optimal solution) for the benchmark instances, nor able to evaluate the solutions that algorithms return on them (in many cases not even approximately). How can we empirically assess the performance of algorithms in such a setting? It seems that we cannot do a lot here: an algorithm returns a solution, and we cannot evaluate, or even estimate, its quality.


There are at least two ways to address this problem: we could sample and select instances of relevant size for which the above problems do not apply for some reason, or we could use an instance generator to create instances together with certain additional information, for example the value of an optimal solution. Both approaches are commonly used, so I will discuss each of them in the following.

The problem with the first approach is that the resulting benchmark is not representative of the set of relevant instances. Let me give you an example. Consider the following procedure: we sample a certain number of instances of relevant size and apply an exact solver to them with a generous time limit. We then select for further use only those instances that the exact solver managed to solve within the time limit. The problem is that we are no longer sampling from the set of relevant instances, but from the set of instances that can be solved by an exact solver within a given time limit. This introduces a bias, and there is a serious threat (in fact it is quite likely) that the resulting instances are not representative of the set of relevant instances.
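The selection procedure just described can be sketched in a few lines of Python. Everything here is a hypothetical placeholder (`sample_instance`, the solver, and the "easiness" criterion are invented for illustration); the point is only to show where the bias enters.

```python
import random

# Illustrative sketch of the biased benchmark construction described above.
# sample_instance and exact_solver_succeeds are hypothetical stand-ins.

def sample_instance(rng, size):
    # Draw a random instance of the given size (here: just a list of weights).
    return [rng.random() for _ in range(size)]

def exact_solver_succeeds(instance, time_limit):
    # Placeholder: pretend that "light" instances finish within the time limit.
    # A real solver would run an exhaustive search and check the wall clock.
    return sum(instance) < time_limit

def build_benchmark(n_samples, size, time_limit, seed=0):
    rng = random.Random(seed)
    benchmark = []
    for _ in range(n_samples):
        inst = sample_instance(rng, size)
        if exact_solver_succeeds(inst, time_limit):
            benchmark.append(inst)  # kept only if the solver succeeded
    return benchmark

# The kept instances are drawn from "instances solvable within the limit",
# not from the original distribution of relevant instances -- that is the bias.
```

In this toy model the benchmark systematically contains only "light" instances, which is exactly the kind of unrepresentativeness argued above.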

The second approach runs into theoretical limitations. Whenever we use an instance generator to create instances together with certain additional information about them, that additional information can also be retrieved by an algorithm in NP or a related complexity class. If this information is useful for the benchmarking process, then it is very likely also useful for the computational task itself. This means that for problems beyond NP such an instance generator cannot exist, unless the corresponding complexity classes collapse. In other words: any attempt to generate instances in this way introduces a bias into the benchmark, because it is not possible to create instances from the set of relevant instances together with such additional information without introducing some kind of bias.
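To illustrate how planted information leaks into the instance itself, here is a hypothetical "planted solution" generator for a deterministic TSP: it makes a randomly chosen tour provably optimal by giving its edges strictly smaller weights than all other edges. The construction and all names are my own illustrative assumptions.

```python
import random

# Hypothetical sketch: generate a TSP instance with a planted optimal tour.
# Planted edges get weights in [1, 2], all others in [10, 20]. Any other
# Hamiltonian cycle must swap at least two cheap edges for expensive ones,
# so the planted tour is provably optimal -- but its structure is exposed.

def generate_planted_tsp(n, seed=None):
    rng = random.Random(seed)
    tour = list(range(n))
    rng.shuffle(tour)  # the planted optimal tour
    dist = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            dist[i][j] = dist[j][i] = rng.uniform(10.0, 20.0)
    # Force the planted tour's edges to be cheaper than every other edge.
    for k in range(n):
        a, b = tour[k], tour[(k + 1) % n]
        dist[a][b] = dist[b][a] = rng.uniform(1.0, 2.0)
    optimal_value = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
    return dist, tour, optimal_value

# The catch: a greedy nearest-neighbour heuristic recovers the planted tour
# simply by following the cheap edges. The "hidden" optimum is visible in the
# instance structure, so such instances are not representative -- exactly the
# bias discussed above.
```

This is of course only a caricature; the theoretical argument above says that any such generator, however cleverly disguised, must restrict itself to a biased subset of instances.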


Both approaches are thus unsatisfactory and introduce a certain kind of bias into the benchmarking process. It seems that we are unable to empirically assess the performance of algorithms for hard problems on instances that are representative of the set of relevant instances. We can only hope that the results on our biased benchmark instances generalize to the set of relevant instances, but that does not seem very scientific to me. Computational problems are becoming more and more complex, and there is a huge number of problems with a computational complexity beyond NP. We are not able to solve them exactly, and we are also not able to empirically compare algorithms for them. I think what we ultimately can do is attack these difficult problems from a more theoretical point of view, for example by finding approximation algorithms for related computational tasks or by identifying important special cases that are tractable.