At the pan.talk on June 24, 2025, Dr. Imanol Schlag, AI Research Scientist and Tech Lead at the ETH AI Center and Co-Lead of the LLM division at the Swiss AI Initiative, answered the question of how a small country like Switzerland can build its own internationally competitive AI infrastructure.
To further emphasize the importance of the Swiss AI Initiative, the Swiss {ai} Weeks were launched – a series of events taking place from September 1 to October 5, 2025, that brings together experts, companies and the public to strengthen Switzerland’s position in the field of AI.
As co-initiators of the Swiss {ai} Weeks, we are working with other partners, including the ETH AI Center, to contribute our expertise and drive forward the development of AI solutions. In our blog post, you can find out how we not only want to observe the change, but actively help shape it.
What actually makes a language model “intelligent”?
Before talking about the Swiss AI Initiative and a Swiss LLM, Imanol first took us back to the roots of machine learning. He explained what distinguishes discriminative from generative models, what neural generative language models are, and what role the concept of scaling laws plays. Simply put, scaling laws are empirical observations that say: the more data, the more parameters and the more computing power you give a language model, the more capable it becomes – and often in a predictable way.
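The "predictable way" can be illustrated with a few lines of Python. The sketch below uses a Chinchilla-style power law, with constants roughly as fitted by Hoffmann et al. (2022); the exact functional form and values are an illustrative assumption, not something presented in the talk.

```python
# Illustrative sketch of a scaling law: predicted training loss falls as a
# power law in model size (parameters) and dataset size (tokens).
# Constants roughly follow the Chinchilla fit (Hoffmann et al., 2022) and
# are used here purely for illustration.
def predicted_loss(n_params: float, n_tokens: float) -> float:
    """Predicted loss = irreducible term + model-size term + data-size term."""
    E = 1.69                  # irreducible loss
    A, alpha = 406.4, 0.34    # model-size coefficient and exponent
    B, beta = 410.7, 0.28     # data-size coefficient and exponent
    return E + A / n_params**alpha + B / n_tokens**beta

# Scaling up parameters and data predictably lowers the expected loss:
small = predicted_loss(1e9, 20e9)     # ~1B parameters, 20B tokens
large = predicted_loss(70e9, 1.4e12)  # ~70B parameters, 1.4T tokens
assert large < small
```

The practical point Imanol made is exactly this predictability: before spending millions of GPU hours, labs use fits like the one above to decide how large a model and how much data a given compute budget justifies.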
Reasoning models on the rise
Since the launch of ChatGPT in November 2022, the development of neural chatbots has accelerated rapidly – from text-only models to multimodal, memory-enabled assistants with strong reasoning capabilities, extended context windows and tool usage. The year 2024 saw the transition to a new generation of general-purpose assistants such as Microsoft Copilot or Gemini 2.0, which can process not only text but also images, documents and code, and interact with the real world. In 2025, the focus is increasingly on so-called reasoning models such as OpenAI o1, DeepSeek R1 and Grok. With their deeper understanding, more precise conclusions and even greater integration into work processes, they are setting new standards.
Why Switzerland is building its own AI infrastructure
The Swiss AI Initiative was launched in October 2023 by ETH Zurich and EPFL as a national research initiative to strengthen Switzerland’s technological sovereignty in the field of artificial intelligence. As part of the initiative, over 10 academic institutions (see graphic), more than 70 professors and around 800 researchers are involved in various research projects on generative AI, including the development and training of a Large Language Model (LLM) built in Switzerland. A central concern is transparency: disclosing data sources and making model decisions comprehensible in order to strengthen trust in the technology.
“Switzerland is deliberately setting a counterpoint to the dominant, mostly US-influenced AI models such as ChatGPT or Claude,” says Imanol. These models are not only linguistically and culturally strongly geared towards English and the US context. They are usually proprietary, access is restricted and there is no open data. There is also little transparency about the training methods or content. In addition, many models have stored training data from the internet without disclosing its origin or quality. Imanol considers the fact that references to such “memorized training data” are sometimes no longer documented in models such as Llama to be particularly problematic.
Alps Supercomputer: The backbone of the Swiss AI initiative
When Imanol talks about Alps, the supercomputer officially inaugurated in Lugano in September 2024, he can’t stop raving about it. With over 10,000 GH200 GPUs optimized for training large language models, it is a key component of the Swiss AI Initiative and one of the most powerful supercomputers in the world.
Further special features are:
🌍 The geodistributed architecture connects several Swiss research locations for flexible and redundant use.
💧 Cooling is provided sustainably by cold lake water from Lake Lugano.
🤝 Alps promotes open source AI models and international research cooperation.
Where does the Swiss language model stand?
According to Imanol, the Swiss language model is ready to launch and differs from models developed abroad in its clear focus on compliance with the Swiss Data Protection Act (DPA) and the EU Artificial Intelligence Act (“AI Act”). As already mentioned, the Swiss AI Initiative aims to provide a transparent and responsibly trained Large Language Model (LLM) that offers equivalent performance and usage costs across all supported languages.
The model is characterized by strong performance and is trained on a large corpus of public data. The source code, the model weights and the license are openly accessible. In addition, special attention is paid to legal compliance: the project promotes legal dialogue, respects opt-out requests and prevents the unwanted memorization of sensitive information.
Pioneering AI sovereignty: Switzerland is leading the way
The pan.talk with Imanol impressively demonstrated how a small country like Switzerland can strengthen its AI sovereignty through research, infrastructure and clear values. But why does this matter? Because models such as Llama 4 are partly restricted by their licenses in Europe, transparency is often lacking, and there are dependencies on foreign infrastructure.
The Swiss AI Initiative would like to counter this with a transparent language model developed in Switzerland. The audience showed great interest and asked some critical questions, particularly about ethical aspects and social responsibility.
The next opportunity to discuss this topic is the Swiss {ai} Weeks, which run from September 1 to October 5. There you can network and make your own contribution to the future of AI.
Talk to us if you want to start your own AI project – we are happy to help.
