2019-10-17

When Results Are All That Matters: Consequences

by Andreas Zeller and Sascha Just; with Kai Greshake

The Case

In our previous post "When Results Are All That Matters: The Case of the Angora Fuzzer", we reported our findings from investigating the Angora fuzzer [1]. If you have not read that post yet, you should stop here and read our write-up first. There, we focus on the findings and problems that surprised us when experimenting with Angora.
In this article, we have collected some suggestions to advance the field of fuzzing and have a long-term impact on the reliability of software.

1. Science is about insights, not products.

To ensure scientific progress, we need to know which technique works, how, and under which circumstances. We write papers to document such insights so that the next generation of researchers, as well as the non-scientific world, can build on them.  The value of a paper comes from the impact of its insights.

2. Scientists and companies can create tools.

It is fun to build a tool, and if it works well, all the better.  Typically, this will involve not one single magical technique, but a multitude of techniques working together.  Tools will have to succeed on the market, though, and will be evaluated not on their insights, but on their effectiveness.

Evaluating tools for their effectiveness can be part of a scientific approach. However, evaluation settings should
  1. be fair and thus not be defined by tool authors; and
  2. avoid overspecialization and thus involve tests not previously known to tool authors.
In other words, the only way to obtain reliable performance comparisons is by independent assessment.  Other communities do this through dedicated tool contests that operate on secret benchmarks created for this very purpose.  And of course, tools need to be available for evaluation in the first place.  It is nice to see the security community adopting such practices, such as artifact evaluation.

3. Combinations of techniques must be assessed individually.

If results depend on a larger set of novel processing steps, the contribution of each must be assessed individually – for instance, by replacing each processing step by a naive approach and assessing the impact of the change.  All decisions affecting performance must be well motivated and documented.

Without assessing the impact of each step individually, one can still have a great tool, but the insight on what makes it great will be very limited.  As an analogy: We know that Usain Bolt is a record-shattering sprinter; the scientific insight is to find out why.
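
As a concrete illustration, here is a minimal sketch of such an ablation study in Python, against a purely hypothetical run_fuzzer() interface; the component names, numbers, and results are placeholders rather than measurements from any actual tool.

    # Ablation sketch: run the full tool and variants with one component
    # disabled (or replaced by a naive baseline) and compare repeated runs.
    import random
    import statistics

    COMPONENTS = ["taint_tracking", "gradient_descent", "length_exploration"]

    def run_fuzzer(disabled, seed):
        # Placeholder: pretend to run a campaign and return branch coverage.
        random.seed(hash((tuple(sorted(disabled)), seed)))
        return random.uniform(0.4, 0.9)

    def ablation(repetitions=10):
        results = {}
        for disabled in [[]] + [[c] for c in COMPONENTS]:
            name = "full tool" if not disabled else "without " + disabled[0]
            runs = [run_fuzzer(disabled, seed) for seed in range(repetitions)]
            results[name] = (statistics.mean(runs), statistics.stdev(runs))
        return results

    for name, (mean, dev) in ablation().items():
        print(f"{name:28} coverage = {mean:.2f} +/- {dev:.2f}")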

4. Document your hypotheses, experiments, and results.

Good scientific practice mandates that experiments and their results be carefully documented.  This helps others (but also yourself!) in assessing and understanding the decisions in the course of your project.  If you make some design decision, such as a parameter setting, after examining how your software runs on some example, it is important that the motivation for this design decision can be traced back to the experiment and its result.

If this sounds like a lot of work, that's because it is.  We're talking about the scientific method, not some fiddling around with parameters until we reach the desired result on a benchmark.  Fortunately, there are great means to help you with these tasks.  Jupyter Notebooks [8], for instance, allow you to collect your hypotheses (in natural language), your experiment design, its results (in beautiful and interactive graphs, among others), and your next refinement step – allowing anyone (as well as yourself) to understand how a specific result came to be.  Be sure to place your notebooks (and code) under version control from day one, and throw in some tests and assertions for quality assurance.  Control your environment carefully to make results reproducible for anyone.
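
To make this concrete, here is a sketch of how a single notebook cell might record an experiment; the hypothesis, parameters, and file name are made up for the example.

    # Sketch of a documented experiment record, as one might keep it in a
    # notebook cell; all names and values here are illustrative only.
    import json
    import random
    import time

    experiment = {
        "hypothesis": "Doubling the mutation budget increases branch coverage.",
        "design": "10 runs per setting, 1 hour each, same seed corpus.",
        "parameters": {"mutation_budget": [50, 100], "timeout_s": 3600},
        "random_seeds": [random.randrange(2**32) for _ in range(10)],
        "started": time.strftime("%Y-%m-%d %H:%M:%S"),
        "results": {},  # to be filled in by the actual runs
    }

    # A simple assertion guards against reporting inconsistent data later on.
    assert len(experiment["random_seeds"]) == len(set(experiment["random_seeds"]))

    with open("experiment-log.json", "w") as f:
        json.dump(experiment, f, indent=2)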

5. Having benchmarks to compare tools and approaches is helpful, but brings risks.

Benchmarks are helpful means to assess the performance of tools.  However, they bring two risks.
  1. First, there is the risk of having researchers focus on the benchmark rather than on insights.  It is nice to have a well-performing tool, but its scientific value comes from the insights that explain its performance.
  2. Second, benchmarks bring the risk of researchers knowingly or unknowingly optimizing their tools for this very benchmark. We have seen this with compilers, databases, mobile phones, fault localization, machine learning, and now fuzzing.  To mitigate the risk of overspecialization, tool performance should be compared on programs they have not seen before.
A benchmark like LAVA-M is representative of detecting buffer overflows during input processing, but of very little else.  As the LAVA creators state themselves, "LAVA currently injects only buffer overflows into programs" and "A significant chunk of future work for LAVA involves making the generated corpora look more like the bugs that are found in real programs." [3].

It has been shown that optimizing against the artificial LAVA bugs, such as 4-byte string triggers, allows even very naive approaches to yield impressive results [2].  The conceptual match between the features injected by LAVA and the features exploited by fuzzers such as Angora is striking.
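
To illustrate the pattern, here is a sketch of a LAVA-style check; the magic constant and input layout are made up, but the structure – a crash guarded by a comparison against a fixed 32-bit value – is what makes knowledge of such constants so valuable.

    # Illustrative LAVA-style check (constants are made up): the injected bug
    # triggers only if four input bytes equal a fixed 32-bit "magic" value.
    import struct

    MAGIC = 0x6C6175AB  # hypothetical injected trigger value

    def process(data: bytes) -> None:
        if len(data) >= 8:
            (value,) = struct.unpack_from("<I", data, 4)
            if value == MAGIC:
                raise RuntimeError("injected bug reached")  # stand-in for a crash

    # A naive strategy that knows candidate constants (say, extracted from the
    # binary) needs no search at all: it simply splices the constant into the input.
    try:
        process(b"HEAD" + struct.pack("<I", MAGIC))
    except RuntimeError as e:
        print("triggered:", e)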

The question of what makes a good benchmark for fuzzers and test generation is still open.  One possible alternative to LAVA-M is Google's fuzzing test suite which contains a diverse set of programs with real bugs [5].  Michael Hicks has compiled excellent guidelines on how to evaluate fuzzers [4, 6].

6. Researchers must resist the temptation of optimizing their tools towards a specific benchmark.

While developing an approach, it is only natural to try it out on some examples to assess its performance, such that results may guide further refinement. The risk of such guidance, however, is that development may result in overspecialization – i.e., an approach that works well on a benchmark, but not on other programs.  As a result, one will get a paper without impact and a tool that nobody uses.

Every implementation choice has to be questioned: "Will this solve a general problem that goes way beyond my example?"  One should take that choice only with a positive, well-motivated answer, possibly involving other experts who are asked in the abstract.  We recommend that during implementation, only a very small set of examples be used for guidance; the evaluation should later be run on the full benchmark.

Good scientific practice mandates to create a research and evaluation plan with a clear hypothesis well before the evaluation, and possibly even before the implementation.  This helps to avoid being too biased towards one's own approach. Note that the point of the evaluation is not to show that an approach works, but to precisely identify the circumstances under which it works and the circumstances under which it does not work.

Papers should investigate those situations and clearly report them. Again, papers are about insights, not competition.

7. It is nice to have tools discovering vulnerabilities...

...especially as these vulnerabilities have a value of their own. However, vulnerabilities do not follow statistical distribution rules (hint: otherwise it would be easier to find them).  Having a tool find a number of vulnerabilities in one program is therefore not necessarily a good predictor of its ability to find bugs in another program.

In any case, the process through which vulnerabilities were found must be carefully documented and made fully reproducible; for random-driven approaches such as fuzzers, one thus needs to log and report random seeds.  Obviously, one must take care not to optimize tools towards the given vulnerabilities.
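
A minimal sketch of such logging could look as follows; the fuzzer binary and its command-line options are hypothetical placeholders, and the point is simply that seed, command line, and timestamp end up in a log that ships with the artifact.

    # Sketch: record everything needed to reproduce a randomized run.
    # "./my-fuzzer" and its options are hypothetical placeholders.
    import json
    import random
    import subprocess
    import time

    seed = random.SystemRandom().randrange(2**32)
    cmd = ["./my-fuzzer", "--seed", str(seed), "--timeout", "3600", "target"]

    with open("runs.jsonl", "a") as log:
        log.write(json.dumps({
            "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
            "seed": seed,
            "cmd": cmd,
        }) + "\n")

    subprocess.run(cmd, check=False)  # launch the (placeholder) campaign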

For fuzzing tools, the technical challenge is to find inputs that cover a wide range of behavior across the program, not only during input processing and error handling.  Let us remind you that during testing, executing a location is a necessary condition for finding a bug in that very location.  Since we are still far from reaching satisfying results in covering functionality, improvements in code coverage are important achievements, regardless of whether bugs are found.

8. What does this mean for reviewers and authors?

Papers must clearly show how the insights of the paper contribute to the result, both in terms of motivation as well as in evaluation.

In many cases, it will be hard to describe all the details of all the necessary steps in the paper. Therefore, it will be necessary to supply an artifact that allows not only for reproducing the results but also for applying the approach to subjects not seen before; again, all design decisions in the code must be motivated and documented. This is tedious; this is rigorous; this is how science works.

Reviewers should be aware that an approach is not simply "better" because it performs well on a benchmark or because it found new bugs.  Approaches have a long-term impact not only through performance, but also through innovation, generality, and simplicity. Researchers are selected as reviewers because the community trusts them to assess such qualities. Tool performance that is achieved through whatever means has little scientific value.

Having said that, conference organizers should create forums for tool builders and tool users to discuss lessons learned.  Such exchanges can be extremely fruitful for scientific progress, even if they may not be subject to rigorous scientific assessment.  Tool contests with clear and fair rules would allow assessing the benefits and drawbacks of current approaches, and again foster and guide discussions on where the field should be going.  A contest like Rode0day [7] could serve as a starting point.

Conclusion

Having tools is good, and having tools that solve problems is even better.  As scientists, however, we must also understand what works, what does not, and why.  As tools and vulnerabilities come and go, it is these insights that have the longest-lasting impact.  Our papers, our code, and our processes, therefore, must all be shaped to produce, enable, assess, and welcome such insights.  This is the long-term path of how we as scientists can help to make software more reliable and more secure.

Acknowledgments.  Marcel Böhme, Cas Cremers, Thorsten Holz, Mathias Payer, and Ben Stock provided helpful feedback on earlier revisions of this post.  Thanks a lot!

References

[1] P. Chen and H. Chen, "Angora: Efficient Fuzzing by Principled Search." 2018 IEEE Symposium on Security and Privacy (IEEE S&P), San Francisco, CA, 2018, pp. 711-725.
[2] Of bugs and baselines
[3] B. Dolan-Gavitt et al., "LAVA: Large-Scale Automated Vulnerability Addition," 2016 IEEE Symposium on Security and Privacy (S&P), San Jose, CA, 2016, pp. 110-121.
[4] George Klees, Andrew Ruef, Benji Cooper, Shiyi Wei, and Michael Hicks. 2018. Evaluating Fuzz Testing. In Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security (CCS '18). ACM, New York, NY, USA, 2123-2138.
[5] Google's fuzzing test suite
[6] Michael Hicks, Evaluating Empirical Evaluations (for Fuzz Testing)
[7] Rode0day
[8] Project Jupyter

2019-10-10

When Results Are All That Matters: The Case of the Angora Fuzzer

by Andreas Zeller and Sascha Just; with Kai Greshake

The Case

"Fuzzers" are programs that generate random inputs to trigger failures in tested programs. They are easily deployed and have found numerous vulnerabilities in various programs. The last decade has seen a tremendous increase in fuzzing research [4].

In 2018, Chen and Chen published the paper "Angora: Efficient Fuzzing by Principled Search" [1], featuring a new gray-box, mutation-based fuzzer called Angora. Angora reported extraordinary effectiveness and shines with its stellar performance on the LAVA-M benchmark [3], dramatically outperforming all competition.

The reason behind this breakthrough and leap in effectiveness towards deeper penetration of target programs was cited as the combination of four techniques: scalable byte-level taint tracking, context-sensitive branch count, input length exploration, and search based on gradient descent. The first three techniques had already been explored in earlier work; the novel key contribution, as also indicated by the paper title, was the Gradient Descent approach, which solves constraints by modeling branch conditions as functions and approximating their gradients.
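
To give an intuition for the general idea (this is a sketch of the concept, not Angora's actual code), a branch condition can be treated as a black-box "branch distance" over the input bytes that is zero exactly when the branch is taken; its gradient is approximated by finite differences over program executions, and the input is updated accordingly:

    # Conceptual sketch of gradient descent on a branch condition (not Angora's
    # code): minimize a distance that is zero exactly when the branch is taken.
    def branch_distance(x):
        # Stand-in for running the program and observing how far a condition
        # such as 3*a + b == 500 is from being satisfied.
        return abs(3 * x[0] + x[1] - 500)

    def estimate_gradient(f, x, delta=1):
        # Finite-difference approximation: one extra execution per dimension.
        return [(f(x[:i] + [x[i] + delta] + x[i+1:]) - f(x)) / delta
                for i in range(len(x))]

    def descend(f, x, steps=200, lr=1.0):
        for _ in range(steps):
            if f(x) == 0:
                return x              # branch condition satisfied
            g = estimate_gradient(f, x)
            x = [xi - lr * gi for xi, gi in zip(x, g)]
        return None                   # give up once the budget is exhausted

    print(descend(branch_distance, [0, 0]))  # finds an input that takes the branch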

One of our students was intrigued by the outstanding performance and wanted to investigate why and how optimization strategies affect fuzzing performance so much – with the initial goal of actually further improving on Angora. To properly measure the effect of his work, he worked directly on the Angora source code. During his study [2], he came across some confusing findings, which we summarize below.

The Findings

Angora's Gradient Descent performs similar to Random Search on LAVA-M

Contrary to what is stated in the Angora paper, we could not find real differences between using a random search algorithm and using Gradient Descent to solve constraints. After further investigation, it turns out that in almost all optimization cycles, the constraint is solved immediately when the first input is executed. That first input is generated by magic byte extraction – and only when it is replaced by a random starting point does the performance of Random Search drop off. Since 32-bit magic byte values guard all of the bugs artificially injected into the LAVA benchmark, it is not surprising that this feature, rather than Gradient Descent, dominates the performance.

Of course, this raises the question of why Angora is so successful. Magic byte extraction is not a new technique, but in the case of this benchmark, in combination with taint tracking, good heuristics, and other improvements, it leads to spectacular results on LAVA-M. However, Gradient Descent contributed little to that success.

These findings are confirmed by Angora's authors [5]: "If the fuzzer can extract the magic bytes and copy them to the correct location in the input, then obviously the fuzzer can solve the constraint immediately." They point out that Gradient Descent will show and did show advantages outside of solving magic byte constraints, even if this is not their evaluation setting. And indeed, our findings do not imply that Gradient Descent would not work; on the contrary, we still find it a highly promising idea. However, whether and when it can make a difference is an open question.

The Angora Evaluation Methodology is Inadequate

In the Angora paper, the authors claimed to have investigated the efficacy of each contribution individually and proved the effectiveness of Gradient Descent. Unfortunately, the comparison they present to support this claim is biased towards Gradient Descent. They tried to compare the constraint-solving capabilities of random search, random search with magic byte extraction, and Gradient Descent. They collected an input corpus using AFL and then started Angora with each algorithm and the same input corpus. This way, all algorithms start with the same set of constraints to solve, and the authors reported the corresponding solve rate for each algorithm.
 
Since AFL uses random mutations to solve constraints, it is not surprising that all constraints that can be easily solved with random mutations have already been solved in the input corpus. This gives random search an immediate disadvantage.
 
While inspecting the code of Angora, we also found another issue. The evaluation ([1, Section 5.4]) states that Gradient Descent is compared to random search with Magic Byte extraction. However, the Angora Gradient Descent implementation itself makes use of Magic Byte Extraction, thus guaranteeing that it always performs at least as well as its closest competitor in the comparison. This methodology is inadequate and the original conclusions about the efficacy of Gradient Descent are not supported by our research. 
 
The Angora authors state that their "exploitation routines are a part of the enhancements aforementioned. During gradient descent, Angora first descends by the entire gradient. Only when this fails to solve the constraint does Angora try to descend by each partial derivative as a crude attempt of salvation." [5] According to our findings, however, these heuristics dominate the search of Gradient Descent for constraints with a large number of input dimensions – a fact that should have been discovered in an adequate evaluation setting.

Odd Heuristics and Parameters

The Angora code contains a number of optimizations and behaviors that are not documented in the paper. While the amount of information that can be published in a paper is limited, many of these had a significant impact on performance; yet they are neither documented nor motivated in the released source code. These are some examples:
  • The Gradient Descent implementation goes beyond what is described in the paper. It contains special exploitation routines that inject fixed values known to trigger overflows and numerical bugs. The algorithm also does not only descend by the entire gradient but additionally descends by each partial derivative.
  • Parameters like the optimization budget are set to fixed, arbitrary values, without justification or documentation:
    // SEARCH
    pub const ENABLE_DET_MUTATION: bool = true;
    pub const MAX_SEARCH_EXEC_NUM: usize = 376;
    pub const MAX_EXPLOIT_EXEC_NUM: usize = 66;
    pub const MAX_NUM_MINIMAL_OPTIMA_ROUND: usize = 8;
    pub const MAX_RANDOM_SAMPLE_NUM: usize = 10;
    pub const GD_MOMENTUM_BETA: f64 = 0.0;
    pub const GD_ESCAPE_RATIO: f64 = 1.0;
    pub const BONUS_EXEC_NUM: usize = 66;
    

    Values like MAX_SEARCH_EXEC_NUM (376; the number of iterations to solve a constraint) or BONUS_EXEC_NUM (66; the number of additional iterations added under certain conditions) would be odd choices in an experiment design; computer scientists would normally prefer powers of 10, 2, or 5, or multiples thereof.  These choices are neither justified nor documented; and in any case, the impact of these choices (compared to others) would have to be determined.
    The value of 376 for MAX_SEARCH_EXEC_NUM is a particularly odd choice. Since the few constraints that Gradient Descent operated on were actually either solved within a maximum of 50 iterations or not at all, setting the iteration budget to at most 50 should actually increase the performance on LAVA-M; see the sketch after this list for how such a budget change could be assessed.
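
Here is a minimal sketch of how such a budget change could be assessed from solver logs; the per-constraint iteration counts below are made up for illustration.

    # Given per-constraint iteration counts from solver logs (made-up numbers;
    # None means the constraint was never solved), check how a smaller budget
    # would have affected the solve rate.
    iterations_to_solve = [3, 1, 48, 12, None, 7, None, 33, 2, 49]

    for budget in (50, 376):
        solved = sum(1 for it in iterations_to_solve
                     if it is not None and it <= budget)
        print(f"budget {budget:3}: {solved}/{len(iterations_to_solve)} constraints solved")
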
The Angora authors confirm that the "current values seem to work well on most programs, but we did not rigorously evaluate them or search for their optimal values." [5] This implies that the authors themselves do not know which factors contribute to Angora's evaluation performance, or why their choices would be particularly well suited to settings outside of the evaluation.

Reproducibility

All of the above findings refer to the published Angora code, which may differ from the one used in the paper evaluation. We have therefore asked the Angora authors to provide us with the version used for the experiments described in the paper. They said that they "have been steadily maintaining and improving the Angora repository over the past year, with numerous bug fixes and enhancements. Its current performance is similar to, if not better than, the version on which our paper's evaluation was based." Unfortunately, the history of the public repository does not go back to the time the paper was published, and our request for the original version remains unfulfilled. We do not know whether the Angora authors would be able to reproduce their published results.

Summary

The performance and effectiveness of the Angora tool are uncontested, notably on the LAVA-M benchmark. If you want to fuzz a program that is part of the LAVA-M benchmark or very similar, Angora is likely to give you good results. Finally, the authors are to be commended for making their code available, and for providing detailed and timely answers to our questions. However,
  1. Angora's novel Gradient Descent approach has little to no impact on its performance on LAVA-M;
  2. The performance of Angora is impacted by several decisions that are not documented, motivated, assessed, or evaluated individually or sufficiently; and
  3. The evaluation methodology is inadequate to support the paper's conclusions on Angora's performance.
Guidelines for good scientific practice mandate that all information and decisions that helped to achieve a specific result be well documented. This is not the case here – neither in the paper nor in the code.

It would be a mistake, however, to point at the Angora authors only. Chen and Chen did make their code available, allowing for independent assessment. Several recent fuzzing papers use similar techniques and benchmarks as Angora, yet only a few make their code available. Can we trust their results?

This calls for the community to discuss and question its procedures and criteria on what makes good research. Reviewers and researchers must not only focus on great results, but also assess how these results have been obtained, whether they have been rigorously evaluated, and how they translate into insights that will stand the test of time. How this can be achieved for programs with thousands of lines of undocumented code is a pressing question.

For now, the Angora case tells us how much we do not know, and how much further insights into what works and what does not will be needed. The good news is that this opens lots of opportunities for discussion – and future research.


Acknowledgments.  Marcel Böhme, Cas Cremers, Thorsten Holz, Mathias Payer, and Ben Stock provided helpful feedback on earlier revisions of this post.  Thanks a lot!

If you liked this post, also see our follow-up post with consequences.

You can contact Kai Greshake at development@kai-greshake.de.


References






2018-02-01

Where your conference fees go to

Fees for scientific conferences can be a cause for concern. Since most researchers are paid with public money, the question is whether this money is put to good use.  So where do your conference fees go?  Let me illustrate this with an example – the ISSTA conference I organized in 2016.

ISSTA is a nonprofit event, organized by dozens of volunteers who do so as part of their paid job (or throw in extra hours for fun and honor); none of the researchers presenting, reviewing, or organizing make any profit from this.  ISSTA is run by ACM, which means that ACM covers losses but also gets any surplus; again, ACM is a nonprofit organization.

ISSTA 2016 attracted 113 visitors to its main conference, and 48 visitors to its workshops.  As with most conferences in our domain, students make up the majority of visitors to the main conference, followed by ACM members.  (As an ACM member, you get a discount that is higher than the annual ACM membership fee, so it pays off to join ACM even if just for one year.)



Registration fees for the main conference vary between 300€ for students registering early and 750€ for non-members registering late; workshop fees were between 125€ and 300€.  On top of these fees comes a VAT of 19%.  These attendees make the conference, and their fees make the conference income – in the case of ISSTA 2016, exactly 70,460€.
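
As a rough illustration of the underlying arithmetic, a conservative income estimate might be computed as in the sketch below; the attendee counts per category and the blended member fee are made up, and only the 19% VAT and the student/non-member fees are taken from the actual figures above.

    # Back-of-the-envelope income estimate; category counts are illustrative,
    # the student/non-member fees and the 19% VAT rate follow the figures above.
    VAT = 0.19
    categories = {                    # (attendees, net fee in EUR)
        "student early":    (50, 300),
        "ACM member early": (30, 450),  # illustrative blended member fee
        "non-member late":  (20, 750),
    }

    income_net = sum(n * fee for n, fee in categories.values())
    income_gross = income_net * (1 + VAT)   # what attendees actually pay

    print(f"net income (conference budget): {income_net:,.0f} EUR")
    print(f"gross income (incl. 19% VAT):   {income_gross:,.0f} EUR")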

When setting up the budget, you make non-students subsidize students: At ISSTA 2016, an additional student would actually incur a small cost; but this would easily be covered by additional non-students coming in (especially when showing up on-site!).  We also got a small amount of donations; other conferences and conference chairs are much better at getting these.

Of course, you do not know how many people will attend, so you calculate conservatively – I estimated 100+ visitors for the main conference, and planned accordingly.  The key point in planning, though, is the expenses.  This is what the expenses for ISSTA 2016 eventually broke down to:



Starting clockwise at the top, the biggest category is food and drinks, including social events like reception and dinner.  We spent about 50€ per person and day, which is pretty reasonable for two full meals, coffee, drinks, snacks, and all.  We ran ISSTA at my university, meaning that we could buy drinks and snacks at retail price; if you run a conference at a hotel, you would be charged hotel prices, say $3 or more for each and every can of soft drink, and you don't want to know about beer or wine. Workshops at ISSTA 2016 had relatively high fees, but little cost (essentially coffee, cookies, and a light lunch), and thus brought in revenue.  All expenses also incur taxes – in our case, again 19% VAT.

The next big block is registration, which covers most of actually organizing and running the conference.  Most of this is the fee charged by the conference organization agency of my university. They run a few dozen conferences every year and are very well organized: their fee includes organizing costs (the folks who answer your queries and write visa letters), university overhead, and more. This item also includes the staff on site – in our case, student aides who gave out bags, served drinks, handled your queries, or oversaw the AV – as well as free bus tickets for all attendees (rides from the hotel to the conference venue and back).  Overall, I estimate that the agency saved me two months of full-time planning and organizing, so contracting them was a great return on investment.

Financial refers to 2% credit card fees.  Depending on your country and your financial provider, these may vary considerably.

ISSTA features a physical meeting of the program committee – that is, all reviewers meet for two days to discuss which papers should be presented.  This contributes to the quality of the conference, but is also very expensive.  The reviewers have to cover their own travel, and the conference pays for meeting rooms, day catering, and a dinner.  We co-located the PC meeting with the ICST 2016 conference in Chicago.  This attracted more attendees to ICST; in return, ICST subsidized the PC meeting from the extra revenue, keeping the cost low.  This item also includes invited speakers, for whom ISSTA 2016 covered travel expenses as well as free conference and workshop tickets.

On-site logistics primarily refers to meeting rooms. These can be expensive, depending on the venue.  My university has reasonable prices for scientific meetings, including all technical equipment. Hotels often offer meeting rooms for free if you bring in sufficiently many guests, but then they may charge you for extras such as AV or Wi-Fi – and the rental of a single AV system can easily exceed the cost of an entire meeting room at the university.

The program needs to be printed; for publicity, we distributed leaflets at conferences and otherwise used free Facebook and Twitter channels. Other conferences would pay for advertising and throw in extras such as the proceedings on a USB stick.  ISSTA gave you a linen bag and an online access code for the proceedings.

Conference management refers to services around the conference, notably Proceedings, the cost of editing the accepted papers and making sure they all end up in the digital library, all properly tagged; as well as the Conference Management System, which handles submissions and the review process.  We used specialized services for both, again at reasonable cost.

Low cost and high attendance, especially at workshops, gave ISSTA 2016 a high surplus of 25,000€ – which all went to ACM so that ACM and SIGSOFT can keep on organizing and supporting research in our field.  Note, though, that such a surplus is not the rule.  You are supposed to plan for some contingency and overhead, but most conferences have a much smaller surplus. Despite the best efforts of conference organizers, some conferences end up with a loss that must then be covered by the sponsoring society.  With ISSTA 2016, we were lucky – and if you ever go and organize a conference, I wish you the same luck, too!

2018-01-25

I am leaving Saarland University, and the call for my successor is open

This year, I will leave Saarland University, where I have been a professor for software engineering for 16 years.  Our CISPA Center for IT Security is about to separate from Saarland University to become a Helmholtz Center, and I will be moving with it.

With base funding for 500 researchers, the new Helmholtz Center is set to become one of the largest research institutions in IT security – and hopefully one of the most renowned, too.  The recent research of my group in program analysis and test generation (watch out for papers this year!) positions us right at the intersection of software engineering and security, and I will be happy to contribute both to the research agenda and to the management of CISPA.  I will retain a footprint at Saarland University, though, keeping my professor title and reduced teaching obligations.

While the label on my office will change, my loyalty to the Saarland Informatics Campus will not falter in the slightest.  I am grateful to work in one of the greatest research environments that could possibly exist, surrounded by colleagues who have proven their incredible excellence again and again, and who all work together every day for the excellence of the site as a whole.  When I started 16 years ago, we were maybe 15 professors in Computer Science; when I retire, there will be ten times as many.

This great environment can be your environment, too.  If your work is related to IT security, please check out our tenure-track and tenured career options at CISPA.  If your work is in Software Engineering, though, the call for my successor (German / English) is open as well.  It has been one of the best positions I could ever think of, and I am sure it could be the same for you!

2017-11-02

The Rejection Song

To be performed by a scientific program committee (choir) and its chair (solo voice) to the tunes of the song "Go West" (Village People / Pet Shop Boys).  A program committee decides which scientific works get published.

(Together) We’re the committee
(Together) We’re the experts, see?
(Together) We will read your work
(Together) We will make us heard

Re-ject! This is our song
Re-ject! cos the data’s wrong
Re-ject! it’s been done before
Re-ject! send it out the door

(Together) We select the best
(Together) We reject the rest
(Together) and the best is us
(Together) so why make a fuzz?

Re-ject! This is out of scope
Re-ject! it’s a slipp’ry slope
Re-ject! It’s irrelevant
Re-ject! I don’t understand!

(Together) We define the field
(Together) We do form a shield
(Together) We are the elite
(Together) Life can be so sweet

Re-ject! no related work
Re-ject! you are such a dork
Re-ject! not my expertise
Re-ject! would you fix that, please

Re-ject! cos it’s really bad
Re-ject! cos I’m getting mad
Re-ject! there’s a missing bit
Re-ject! it’s a piece of (fade out)


Music: Victor Willis, Henri Belolo and Jacques Morali
Above lyrics: Andreas Zeller

2017-04-26

By bike from downtown Saarbrücken to the university

On most days of the year, I ride my bike from downtown Saarbrücken to the university and back.  For everyone who would also like to cycle to the university, I have put together my three favorite routes on a Google map; details follow below.



And here are the three routes, sorted from north to south:

The fast one: Along the Meerwiesertalweg

A useful route, on a separate path next to heavy car traffic.

The route: From the main station, take the Bormannpfad (alternative: push your bike through the Parkhaus Hela), cross Dudweilerstraße, and continue onto the Meerwiesertalweg.  There, a path open to cyclists leads straight ahead with an initially moderate incline all the way up to the university.  The last stretch between the university parking garage and the gate gets steep.

A route for: everyday riders who want to get from A to B quickly.

Watch out: At the lower end of the Meerwiesertalweg, there are numerous junctions where drivers are surprised especially by cyclists riding into town on the left-hand side; only from the youth hostel onwards does it get relaxed.  The path along the Meerwiesertalweg is a sidewalk open to cyclists, so take special care around the (few) pedestrians.  (Cyclists may also ride on the Meerwiesertalweg roadway, but with heavy car traffic on a narrow road, that is no fun.)

Best moment: When the return trip drops steeply along the parking garage – the plunge back to Earth.

Alternatives: Leave the city center via Scheidter Straße (see below), then swing via Ilseplatz and Waldhausweg onto the Meerwiesertalweg.  If you do not like the climb at the parking garage, you can also explore the grounds of the Sporthochschule.

The sporty one: Through the Stadtwald to the university

My morning route to the university – steep and beautiful.

The route: Leave the city center via Beethovenstraße/Blumenstraße (a quiet stretch along one-way streets that, according to the traffic development plan, is to be upgraded to a bicycle street at some point), switch to Scheidter Straße, and keep climbing until its end.  Behind the turning loop, the route enters the forest, where we immediately pass the barrier on the left and switch to a paved forest path that leads all the way to the university.  The path climbs steeply through the quiet Stadtwald.  About halfway, it turns downhill; you roll down to the university at ease and leave the sweat behind in the cooling headwind.

A route for: mountain bikers, e-bikes, nature lovers.

Watch out: The bike path along Scheidter Straße is separated from the roadway by parked cars; drivers pulling out or parking easily overlook cyclists here.  The forest path is unlit, and snow is cleared late.  On the way back, Scheidter Straße descends steeply, so better stay on the roadway.

Best moment: Climbing the hill through the forest in the morning.  Nothing to hear but birdsong.

Connections: Want to go even higher?  Head up the Schwarzenbergturm for a small workout and enjoy the view from the top.

The flat one: Via Schafbrücke and Scheidt

An after-work route with only moderate climbs, half of it on quiet paths.

The route: Leave the city center along the Saar; turn left at the Ostspange at the latest and then right onto the B51.  The bike path along the B51 continues straight ahead with few problems.  After the Römerkastell, a bike lane climbs gently; then turn left into Breslauer Straße, and immediately right to the other side of the railway line.  Now comes the beautiful part: quiet paths lead through Schafbrücke along the railway until, in Scheidt, we turn left after the overpass and ride up a gentle climb to the east entrance of the university.

A route for: relaxed distance riders.  People headed to the eastern part of the university.

Watch out: In the city center, many pedestrians are out along Mainzer Straße as well as along the Saar, so be considerate.  The Römerkastell–Breslauer Straße section is busy; when turning left into Breslauer Straße, wait for a red phase, then use the extra waiting area for cyclists to get into the left lane.  From Scheidt to the university, the bike path runs separately on the left-hand side; beware of junctions.

Best moment: Riding briskly through Schafbrücke on quiet paths.

Alternatives: Between the Römerkastell and the city, Preußenstraße and/or Halbergstraße are low-traffic cycling alternatives to the B51.  If you want to shop at the Saar-Basar, you can reach the grounds from the west through a gate via Eschberger Weg and Im Heimerswald.  Instead of the paved bike path from Scheidt to the university, you can also turn left at the retention basin and take an unpaved forest path.

Connections: There are very nice bike paths along the Saar towards Saarlouis or Sarreguemines.  The bike path to Scheidt continues on quiet paths along the railway towards St. Ingbert.  All bike paths are signposted.

Note: More attractive as a return route (i.e., from the university to the city center), since (a) it is a gentle downhill, (b) there are fewer crossings at the Römerkastell, and (c) the busy section between Breslauer Straße and the Römerkastell is passed quickly.

2017-01-13

Twelve LaTeX packages to get your paper accepted

(with Abhik Roychoudhury and Aditya Kanade)

Why do some people get all their papers accepted, while others do not?  You may already know that in many disciplines, using the LaTeX typesetting system correlates with having your paper accepted (in contrast to, say, Word).  What you may not know is that there are a number of LaTeX packages whose usage may be crucial for success.  Here we go:
  1. The pagefit package.  This immensely useful package makes your paper exactly fit within a given page limit, applying a genetic search algorithm to reduce baseline distances, white space, font sizes, or bibliographic references until it exactly fits.  Just write \usepackage[pages=12,includingbibliography]{pagefit} and enjoy.  
  2. The autocite package. Cites all relevant work that needs to be cited.  The "citepc" option additionally cites the entire program committee, whether their work is relevant or not.
  3. The translate package.  Auto-translates your paper into a given target language (default is English).  Just type