Carbon nanotubes (CNTs) have received much attention since their first description in 1991. In principle, a CNT is a rolled-up honeycomb lattice of carbon atoms. The synthesis of branched tubes makes it possible to connect straight CNTs into nanotube networks, with the branched tubes acting as junctions. One such network is the super carbon nanotube (SCNT), in which each carbon-carbon bond of the lattice is replaced by a CNT and each carbon atom by a regularly Y-branched tube.

SCNTs exhibit attractive mechanical properties such as high strength, low weight and density, and even greater flexibility than CNTs. However, the number of atoms in an SCNT grows very quickly with tube size, making atomistic simulations infeasible with state-of-the-art methodology.

This talk summarizes a modeling approach for SCNTs based on a specific graph algebra and describes the construction process starting from a single atom. It also presents a novel data structure, called Compressed Symmetric Graphs (CSGs), which exploits the inherent symmetry and hierarchy of SCNTs and considerably reduces the memory required for SCNT models.
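The abstract does not spell out the CSG layout; purely as a loose illustration of the underlying idea, the following Python sketch stores one representative motif together with a multiplicity instead of materializing every symmetric copy (the class and its fields are hypothetical, not the talk's actual data structure):

```python
class CompressedSymmetricGraph:
    """Hypothetical sketch: one representative substructure plus a copy count."""

    def __init__(self, motif_edges, copies):
        self.motif_edges = motif_edges  # edges within one representative motif
        self.copies = copies            # number of symmetric copies

    def edge_count(self):
        # Total edges across all copies (a real SCNT model would also
        # have to track the junction edges between motifs).
        return len(self.motif_edges) * self.copies

    def expand(self):
        # Materialize all copies only when explicitly needed,
        # offsetting node ids per copy.
        n = 1 + max(max(e) for e in self.motif_edges)
        return [(u + c * n, v + c * n)
                for c in range(self.copies)
                for (u, v) in self.motif_edges]


csg = CompressedSymmetricGraph([(0, 1), (1, 2), (2, 0)], copies=1000)
print(csg.edge_count())   # 3000, while only 3 edges are stored
print(len(csg.expand()))  # 3000
```

The memory saving comes from storing the motif once: queries such as `edge_count` work on the compressed form, and expansion is deferred until a consumer truly needs explicit edges.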

Furthermore, a novel matrix-free solver is explained that is aware of the SCNT structure and is efficiently parallelized and vectorized. It iteratively solves the arising linear systems of equations while making optimal use of the hardware at hand: it is faster than a CRS-based reference solver by a factor of up to 2 while requiring only half the memory.
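The structure-aware solver itself is not detailed in this abstract; as a generic illustration of the matrix-free idea, the sketch below runs conjugate gradient with only a matvec callback, so the matrix is never assembled. A 1-D Laplacian stencil stands in for the actual SCNT operator:

```python
def laplacian_matvec(x):
    # Apply the 1-D Laplacian stencil (2 on the diagonal, -1 off-diagonal)
    # on the fly; the matrix itself is never stored.
    n = len(x)
    return [2.0 * x[i]
            - (x[i - 1] if i > 0 else 0.0)
            - (x[i + 1] if i < n - 1 else 0.0)
            for i in range(n)]

def matrix_free_cg(matvec, b, tol=1e-10, max_iter=1000):
    # Conjugate gradient that only needs a matrix-vector callback.
    x = [0.0] * len(b)
    r = [bi - yi for bi, yi in zip(b, matvec(x))]
    p = list(r)
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new ** 0.5 < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x


b = [1.0] * 50
x = matrix_free_cg(laplacian_matvec, b)
residual = max(abs(bi - yi) for bi, yi in zip(b, laplacian_matvec(x)))
print(residual < 1e-6)  # True
```

Because only the action of the matrix is needed, memory scales with the vector size rather than with the number of nonzeros, which is what makes caching all matrix contributions for very large systems feasible.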

The combination of the CSGs with the caching solver approach enables the solution of equation systems of order 6 × 10^7 while fully caching all contributions of the matrix.

In his seminal paper from 1994, Peter Shor demonstrated that current cryptographic schemes are insecure in the presence of general-purpose quantum computers, since these can solve the underlying mathematical problems in polynomial time. Hence, alternatives to state-of-the-art schemes such as RSA are required.

One promising candidate is lattice-based cryptography, in which the hardness of the cryptographic schemes derives from various computational problems on an n-dimensional lattice. Examples are finding the shortest non-zero vector in a lattice (the shortest vector problem, SVP) or finding the lattice vector closest to a given target vector (the closest vector problem, CVP).
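As a toy illustration of SVP, the brute-force sketch below enumerates small integer combinations of a deliberately "bad" 2-D basis to uncover a short hidden vector. Real attacks use lattice reduction plus pruned enumeration or sieving; exhaustive search like this is exponential and only feasible in tiny dimension:

```python
from itertools import product

def lattice_points(basis, bound):
    # Enumerate all integer combinations c1*b1 + ... + cd*bd with |ci| <= bound.
    dim = len(basis[0])
    for coeffs in product(range(-bound, bound + 1), repeat=len(basis)):
        yield tuple(sum(c * b[k] for c, b in zip(coeffs, basis))
                    for k in range(dim))

def shortest_vector(basis, bound=50):
    # Brute-force SVP: keep the shortest non-zero vector seen.
    best, best_norm = None, float("inf")
    for v in lattice_points(basis, bound):
        norm = sum(x * x for x in v)
        if 0 < norm < best_norm:
            best, best_norm = v, norm
    return best


# A "skewed" basis: both vectors are long, yet the lattice they
# generate contains the much shorter vector +/-(1, 32).
basis = [(201, 37), (1648, 297)]
print(shortest_vector(basis))
```

For this basis the search returns (1, 32) or its negation (squared norm 1025), depending on enumeration order; that a short vector hides behind a long, skewed basis is exactly what lattice reduction is meant to expose.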

To assess the security of a cryptographic scheme based on some computational problem, one has to estimate the hardness of the underlying problem. To that end, cryptanalysts estimate the complexity of known algorithms and attacks for solving these problems.

This talk gives a short introduction to lattices, lattice problems, and the challenge of lattice reduction. Moreover, it presents a novel methodology for creating runtime models of algorithms that aim to solve these lattice problems as fast as possible. The models are based on profiled training measurements, and the final runtime performance models are constructed, via parallelized simulated annealing, by an extended version of the open-source tool Extra-P.
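The talk's actual tooling (Extra-P with parallelized simulated annealing) is not reproduced here; as a rough sketch of the general approach, the following fits a single c·n^a·log^b(n) term to synthetic "measurements" with a plain simulated-annealing loop. The data, model form, and all parameters are illustrative assumptions, not values from the talk:

```python
import math
import random

random.seed(0)

# Synthetic "measurements": problem size n vs. runtime of a
# hypothetical n^2 * log n algorithm (stand-in for profiled data).
data = [(n, 3e-9 * n ** 2 * math.log2(n))
        for n in (2 ** k for k in range(8, 14))]

def model(n, params):
    # Extra-P-style single term: c * n^a * log2(n)^b
    c, a, b = params
    return c * n ** a * math.log2(n) ** b

def cost(params):
    # Relative squared error over all measurements.
    return sum(((model(n, params) - t) / t) ** 2 for n, t in data)

def anneal(start, steps=10000, temp=1.0, cooling=0.999):
    # Plain simulated annealing with multiplicative perturbations,
    # which keeps all parameters positive.
    cur, cur_cost = start, cost(start)
    best, best_cost = cur, cur_cost
    for _ in range(steps):
        cand = tuple(p * math.exp(random.gauss(0.0, 0.05)) for p in cur)
        cand_cost = cost(cand)
        # Always accept improvements; accept worse moves with a
        # probability that shrinks as the temperature cools.
        if cand_cost < cur_cost or \
                random.random() < math.exp((cur_cost - cand_cost) / temp):
            cur, cur_cost = cand, cand_cost
            if cand_cost < best_cost:
                best, best_cost = cand, cand_cost
        temp *= cooling
    return best


params = anneal((1e-9, 1.0, 1.0))
print(cost(params) <= cost((1e-9, 1.0, 1.0)))  # True
```

Simulated annealing suits this search because the exponents a and b interact strongly with the constant c, producing a non-convex cost landscape where pure gradient descent easily stalls.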

The derived performance models show a very good fit to the experimental data and a wide range of complexities; they are compared to predictions by theoretical upper bounds and previous average-case estimates. The common approach of estimating the runtime from the number of floating-point or bit operations executed within an algorithm, combined with theoretical assumptions about the executing processor (clock rate, operations per cycle), is also evaluated. The experiments show that this approach leads to unreliable runtime estimates.
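The operation-counting approach being evaluated amounts to a one-line calculation; the concrete numbers below (clock rate, operations per cycle) are illustrative assumptions, not figures from the talk:

```python
def naive_runtime_estimate(op_count, clock_hz, ops_per_cycle):
    # Textbook-style estimate: operation count divided by assumed peak
    # throughput. It ignores memory traffic, pipeline stalls, and actual
    # vectorization efficiency, which is why it can be far off in practice.
    return op_count / (clock_hz * ops_per_cycle)


# e.g. 1e12 floating-point operations on a 3 GHz core
# assumed to retire 8 operations per cycle:
print(naive_runtime_estimate(1e12, 3e9, 8))  # ~41.7 seconds
```

Measured runtimes routinely deviate from such peak-throughput estimates by large factors, which is the motivation for building empirical performance models from profiled measurements instead.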