08 May 2026
Innovating for Safety: SciamLab Presents LEWS at the Liquid AI Hackathon
We are thrilled to announce that the SciamLab team took part in the latest Liquid AI Hackathon, an event that pushed us to explore new frontiers in applying artificial intelligence to land protection.
For the occasion we developed LEWS (Landslide Early Warning System), an early warning system for landslides built on latest-generation Vision-Language models.
The Challenge: Preventing Hydrogeological Risk
Landslides and mudslides are among the most devastating natural threats, causing significant damage to infrastructure and loss of human life every year. The main challenge in managing hydrogeological risk is time: identifying early warning signs before disaster strikes is extremely complex and requires constant, accurate visual monitoring.
We asked ourselves: can we train an AI to recognize the multispectral patterns that precede a landslide, automating and accelerating alert systems?
Our Solution: Fine-Tuning LFM-2.5-VL
Inspired by recent proofs of concept using AI for wildfire prevention, we decided to apply a similar approach to hydrogeological risk.
The core of our project is the fine-tuning of the 450M-parameter Liquid Foundation Model (LFM) 2.5 Vision-Language (lfm2.5-vl-450m). We collected and labeled a specialized dataset of satellite images, drone surveys, and fixed-camera footage, training the model to classify and report terrain anomalies: cracks, vegetation changes, and suspicious accumulations of debris.
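To make the data collection pipeline more concrete, here is a minimal, hypothetical sketch of how labeled imagery could be serialized into a conversation-style JSONL file for VLM fine-tuning. The label names, record layout, and helper functions are illustrative assumptions, not the exact format used in our pipeline.

```python
# Hypothetical sketch: serialize labeled imagery into a JSONL training file
# for conversation-style VLM fine-tuning. Label names, record layout, and
# helper names are illustrative assumptions, not our exact pipeline format.
import json

LABELS = ["terrain_crack", "vegetation_change", "debris_accumulation", "no_anomaly"]

def build_record(image_path: str, label: str, source: str) -> dict:
    """One training example: an image, an inspection prompt, and the expected label."""
    if label not in LABELS:
        raise ValueError(f"unknown label: {label}")
    return {
        "images": [image_path],
        "conversations": [
            {"role": "user",
             "content": "Inspect this image for landslide precursors and answer with a single label."},
            {"role": "assistant", "content": label},
        ],
        "metadata": {"source": source},  # e.g. "satellite", "drone", "fixed_camera"
    }

def write_dataset(records: list[dict], out_file: str = "landslide_train.jsonl") -> None:
    """Write one JSON object per line, a layout most fine-tuning tools accept."""
    with open(out_file, "w", encoding="utf-8") as f:
        for rec in records:
            f.write(json.dumps(rec) + "\n")

if __name__ == "__main__":
    write_dataset([build_record("tiles/0412.png", "terrain_crack", "drone")])
```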
The result is a highly specialized, lightweight model, publicly released on our HuggingFace page: Sciamlab/LFM2.5-VL-450M-landslide-GGUF. The GGUF format enables fast, efficient inference even on edge devices (Edge AI), a fundamental requirement for field monitoring systems.
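As an illustration of what edge deployment can look like, the sketch below downloads the GGUF weights from HuggingFace and loads them with llama-cpp-python. The quantized filename is an assumption (check the repository for the files it actually ships), and image inputs additionally require the model's vision projector and a multimodal-capable llama.cpp build.

```python
# Minimal sketch: fetch the GGUF weights and load them for on-device inference
# with llama-cpp-python. The filename below is an assumption; check the
# HuggingFace repository for the actual quantization variants it ships.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="Sciamlab/LFM2.5-VL-450M-landslide-GGUF",
    filename="lfm2.5-vl-450m-landslide-Q4_K_M.gguf",  # assumed filename
)

llm = Llama(
    model_path=model_path,
    n_ctx=2048,   # short alert prompts do not need a large context window
    n_threads=4,  # conservative setting for a low-power edge CPU
)
```

How the image side of the model is wired up at inference time depends on how the release is packaged; the repository documentation is the reference for that.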
Transparency and Innovation: Try LEWS
At SciamLab, we strongly believe in open source and knowledge sharing. For this reason, we have made both the source code and a working demo of our system available.
- Explore the Code: The entire system architecture, fine-tuning details, data collection pipeline, and evaluation metric results (F1-Score, Recall, Precision) are documented in our public GitLab repository: https://gitlab.com/sciamlab/ai/lews (a short sketch of how these metrics are computed follows this list).
- Test the Live Pilot: Want to see the model in action? We have implemented a working pilot where you can test the system's inference capabilities in real time. Try it here: https://lews.sciamlab.com
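For readers who want a quick mental model of the metrics named above, this toy sketch computes precision, recall, and F1-score from per-image predictions with scikit-learn. The label vectors are made up; the real evaluation protocol and numbers are documented in the repository.

```python
# Toy sketch of the evaluation metrics reported in the repository, computed
# over a held-out set of per-image labels (1 = landslide precursor, 0 = normal).
# The label vectors below are illustrative, not real results.
from sklearn.metrics import precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]  # ground-truth annotations
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]  # model predictions on the same images

print(f"Precision: {precision_score(y_true, y_pred):.2f}")
print(f"Recall:    {recall_score(y_true, y_pred):.2f}")
print(f"F1-Score:  {f1_score(y_true, y_pred):.2f}")
```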
Next Steps
The Liquid AI Hackathon was a fantastic testbed for our team. Developing LEWS in such a short time demonstrates the enormous potential of multimodal SLMs (Small Language Models) applied to Tech for Good.
We will continue to improve the model by incorporating new temporal data and refining its integration with IoT sensors in the field.
Let us know what you think by testing the pilot or opening an issue on our repository. Good luck to all the hackathon participants and... happy coding!