
Cloud Hosting for IoT: Flexibility Meets Reliability

Jean-Luc Dubouchet, January 9, 2021

Modern IoT solutions make our environment smarter and more responsive by connecting the digital and physical things around us. Naturally, such complex systems require highly flexible, secure and performant hosting for smooth, reliable and fast operation.

We spoke with Nicolas Ziegle, R&D Lead Tech at Logifleet, to understand how this Swiss company found the right cloud hosting solution for its IoT product.


Logifleet's expertise

Logifleet is a Swiss company with offices in Lausanne and St. Gallen. They provide software solutions specifically developed for fleet and resource management, primarily focused on Swiss businesses.

Their flagship product is Logifleet 360, an IoT solution built to manage and optimize the use of vehicles, machines and tools to improve logistics for any company. The target is broad: from construction companies that use the product for automatic billing, to food or pharmaceutical delivery companies that need temperature monitoring and alerts.

Scale varies significantly from client to client. Some need real-time tracking for 4 or 5 vehicles, while others manage over 1,500 connected objects on a single account with detailed monthly reports. Each installed device reports anywhere from 3 messages per day up to 1,500 in the most extreme cases.

Logifleet architecture

The tech stack

Logifleet uses exclusively open-source tools:

  • Elasticsearch as the main database
  • Kafka for data processing
  • Spring Boot to serve APIs to different front-ends and as a Kafka producer/consumer
  • Redis for caching
  • Gisgraphy for geocoding
  • Kibana for Elasticsearch cluster monitoring

Kafka is the most recent addition to the architecture and is perfectly suited to Logifleet's needs, which involve processing thousands of messages per minute. Elasticsearch is fully capable of indexing and searching quickly through this volume of data.

The unique hosting challenges of IoT workloads

IoT platforms like Logifleet face hosting requirements that differ fundamentally from typical web applications. Understanding these differences is essential for choosing the right infrastructure.

Constant data ingestion. Unlike a web application where traffic follows human browsing patterns, IoT devices send data around the clock. A fleet of 1,500 vehicles reporting GPS positions every 30 seconds generates over 4 million messages per day. The hosting infrastructure must handle this sustained write load without degradation, even during batch processing or reporting windows that add read pressure on top.
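The figure above is easy to verify with a back-of-envelope calculation. The sketch below uses the fleet size and reporting interval from the example; it is illustrative arithmetic, not Logifleet's actual capacity planning:

```python
# Back-of-envelope ingestion estimate for a fleet reporting on a fixed interval.
# Figures (1,500 vehicles, 30-second interval) come from the example above.

SECONDS_PER_DAY = 24 * 60 * 60

def messages_per_day(vehicles: int, report_interval_s: int) -> int:
    """Total messages the platform must ingest per day."""
    return vehicles * (SECONDS_PER_DAY // report_interval_s)

total = messages_per_day(vehicles=1_500, report_interval_s=30)
print(f"{total:,} messages/day")                      # 4,320,000 messages/day
print(f"~{total / SECONDS_PER_DAY:.0f} messages/second sustained")  # ~50 messages/second sustained
```

A sustained 50 messages per second sounds modest, but it never stops: there is no quiet period in which the infrastructure can fall behind and catch up.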

Variable payload sizes. A simple GPS coordinate message might be 200 bytes, while a comprehensive vehicle diagnostic report with sensor data, error codes and telemetry can exceed 10 KB. The ingestion pipeline must handle this variability gracefully, without choking on large payloads or wasting resources on trivially small ones.
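One common way to handle this spread is to treat payloads differently by size. The sketch below is a hypothetical routing policy, not Logifleet's implementation; the thresholds are invented for illustration:

```python
# Hypothetical size-aware handling of IoT payloads: tiny GPS pings are batched
# to amortize per-request overhead, large diagnostic reports are processed
# individually, and oversized messages are rejected early.

SMALL_PAYLOAD_BYTES = 1_024      # assumed cutoff for "batchable" messages
MAX_PAYLOAD_BYTES = 64 * 1_024   # assumed hard limit to protect the pipeline

def classify(payload: bytes) -> str:
    size = len(payload)
    if size > MAX_PAYLOAD_BYTES:
        return "reject"   # oversized: fail fast instead of choking downstream
    if size <= SMALL_PAYLOAD_BYTES:
        return "batch"    # tiny GPS ping: group with others before indexing
    return "direct"       # diagnostic report: index individually

print(classify(b"x" * 200))     # batch
print(classify(b"x" * 10_240))  # direct
```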

Real-time processing requirements. When a delivery truck's refrigeration unit reports a temperature anomaly, the alert must reach the dispatcher within seconds, not minutes. This means the hosting infrastructure needs low-latency message processing, which rules out architectures that batch-process incoming data on long intervals.

Data retention at scale. Fleet management clients often need historical data going back months or years for compliance reporting, route optimization analysis and dispute resolution. This creates significant storage requirements that grow linearly with the number of connected devices.
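The linear growth is easy to quantify. The sketch below is a rough sizing formula with illustrative figures (per-message indexed size is an assumption, not a measured value):

```python
# Rough storage growth estimate: retained bytes scale linearly with devices,
# message rate and retention window. All figures are illustrative.

def retained_gib(devices: int, msgs_per_day: int, bytes_per_msg: int, days: int) -> float:
    """Approximate retained data volume in GiB."""
    return devices * msgs_per_day * bytes_per_msg * days / 2**30

# e.g. 1,500 devices, 2,880 msgs/day each, ~500 bytes indexed per message, 1 year
print(f"{retained_gib(1_500, 2_880, 500, 365):.0f} GiB/year")  # 734 GiB/year
```

Replication and index overhead in Elasticsearch would multiply this further, which is why retention policy is a first-order cost driver for IoT platforms.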

These constraints make IoT one of the most demanding use cases for cloud hosting, and one where a platform's flexibility and scaling capabilities are tested daily.

Hosting requirements

From Logifleet's R&D perspective, three requirements carry equal weight:

  1. Service availability. For an IoT solution, every minute of downtime means lost data or missed alerts.
  2. Data security. Geolocation data associated with timestamps and identifiers is inherently sensitive.
  3. Price. As an SME handling big data with a complete infrastructure, the value-for-money ratio is crucial.

Logifleet's clients share these same requirements, and the fact that their data is hosted in Switzerland is a significant competitive advantage.

Hosting history

At the very beginning of the project, Logifleet hosted data and applications in-house. But the team couldn't dedicate enough time to server maintenance. Issues related to air conditioning and summer heat highlighted the limits of self-hosting.

The company then moved to another Swiss cloud provider. Despite being one of the largest in the industry in Switzerland, this provider was quite behind technologically. Customer service was lacking. The triggering event was a night spent working on a disaster recovery situation, not because of an actual disaster, but simply due to the loss of a single server. This experience convinced the team to move to a more modern provider.

Choosing Hidora and Jelastic

Logifleet discovered Hidora in 2017 during their search for a new cloud provider. Several factors were decisive:

  • The graphical interface. Being able to control nodes through a simple GUI was a major change from the old way of working with servers.
  • Dynamic billing. Paying based on actual consumption, with configurable thresholds, perfectly matches the IoT model where load varies with working hours.
  • French-speaking support. Having human support available in your language, accessible and responsive, makes a real difference in daily operations.

After 3 years of use, Nicolas Ziegle confirms the team is fully satisfied with this choice. The gradual shift toward a Docker-oriented model for some components, and toward native Jelastic containers for others (Spring Boot being the perfect example), happens naturally on a platform that already handles these technologies.

The transition to containers

The team was initially skeptical about containerization, more out of habit than conviction. But after testing the approach on a few servers (first Jenkins, then Redis and Gisgraphy), the advantages became clear.

Nicolas admits he was afraid of "losing control" of the infrastructure at the machine level. But after years of maintaining VM-based servers, the verdict is clear: containerization simplifies the overall architecture, integrates easily into a complete CI/CD environment and hides unnecessary complexity.

The transition followed a pragmatic, incremental approach. Rather than attempting a big-bang migration of the entire stack, the team containerized non-critical services first to build confidence and expertise. Jenkins was the natural starting point: if the CI server has a problem, it doesn't affect production users. Redis and Gisgraphy followed, both stateless or easily rebuildable services. Only after several months of running these services successfully in containers did the team tackle the core application and data layers. This progressive strategy minimized risk and allowed the team to learn at a comfortable pace.

Cost optimization through pay-per-use

The usage-based billing model is particularly well-suited to Logifleet's case. Traffic peaks always occur at the same times of day (during working hours), since clients are primarily based in Switzerland. This allows server performance to be scaled down during weekends and nights, generating significant savings.
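The scheduling logic behind such savings can be very simple. The sketch below is a hypothetical policy mirroring the pattern described above; the cloudlet counts and business-hours window are invented for illustration:

```python
# Hypothetical scaling schedule: full capacity during Swiss working hours,
# reduced capacity at night and on weekends. Numbers are made up.

from datetime import datetime

PEAK_CLOUDLETS = 32
OFF_PEAK_CLOUDLETS = 8

def target_cloudlets(now: datetime) -> int:
    """Desired resource allocation for a given moment."""
    is_weekday = now.weekday() < 5        # Monday..Friday
    working_hours = 6 <= now.hour < 20    # assumed business window
    return PEAK_CLOUDLETS if (is_weekday and working_hours) else OFF_PEAK_CLOUDLETS

print(target_cloudlets(datetime(2021, 1, 5, 10, 0)))  # Tuesday 10:00 -> 32
print(target_cloudlets(datetime(2021, 1, 9, 10, 0)))  # Saturday 10:00 -> 8
```

With hourly billing based on actual consumption, a policy like this directly translates idle nights and weekends into a smaller invoice.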

This type of optimization is a topic we regularly discuss with our clients as part of our managed services.

The complete architecture

Logifleet's infrastructure is impressive in its complexity, with millions of messages processed daily:

  • 2 back-end and front-end servers for the main Java (Spring) application
  • A complete Elasticsearch cluster: 7 "data" nodes, 3 "master" nodes, 4 "client" nodes and 1 Kibana node
  • 3 Kafka servers + 1 Zookeeper server
  • 1 Redis cluster composed of 3 servers
  • 2 Gisgraphy geocoding servers
  • Several Kafka-consuming Spring Boot applications on individual nodes

How Kafka and Elasticsearch work together at scale

The interplay between Kafka and Elasticsearch is central to Logifleet's ability to handle peak ingestion without data loss. Kafka acts as a durable buffer between the IoT devices and the Elasticsearch indexing layer. When a burst of messages arrives — for example, at 7 AM when hundreds of vehicles start their routes simultaneously — Kafka absorbs the spike into its partitioned log while Elasticsearch consumers process records at a sustainable rate. This decoupling means that even if Elasticsearch indexing slows temporarily due to a heavy reporting query, no incoming device messages are dropped. The Kafka retention window is set to 48 hours, providing ample time for recovery in the event of a downstream processing delay.
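The buffering behaviour described above can be illustrated with a toy simulation: a morning burst arrives faster than the indexer drains, the backlog grows, then clears with no message lost. The rates below are invented for illustration; no real Kafka or Elasticsearch is involved:

```python
# Toy simulation of a durable buffer between bursty producers and a
# rate-limited consumer. Rates are illustrative, not Logifleet's figures.

from collections import deque

def simulate(burst_per_s: int, burst_s: int, drain_per_s: int, total_s: int):
    """Return (produced, consumed, peak_backlog) after total_s seconds."""
    log = deque()                 # stands in for the partitioned Kafka log
    produced = consumed = peak_backlog = 0
    for t in range(total_s):
        arriving = burst_per_s if t < burst_s else 0
        for _ in range(arriving):
            log.append(t)
        produced += arriving
        for _ in range(min(drain_per_s, len(log))):  # consumer's sustainable rate
            log.popleft()
            consumed += 1
        peak_backlog = max(peak_backlog, len(log))
    return produced, consumed, peak_backlog

# 500 msg/s burst for 60 s, drained at 200 msg/s over 5 minutes
print(simulate(500, 60, 200, 300))  # (30000, 30000, 18000): nothing dropped
```

The backlog peaks at 18,000 messages yet every one of the 30,000 burst messages is eventually consumed, which is exactly the property the 48-hour retention window guarantees at much larger scale.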

Logifleet's three key advantages

Nicolas Ziegle summarizes the three main platform advantages:

  1. Security. Two-factor authentication was a non-negotiable requirement. Jelastic makes ISO 27001 certification feasible, which would have been impossible with the former provider.
  2. Dynamic resource management. Being able to adjust cloudlets (resources) on the fly, without restarting, was a game changer. Temporarily oversizing a server and scaling it back down, without days of advance planning, is invaluable flexibility.
  3. Interface and direct access. A well-designed interface that provides a quick overview of the complete infrastructure state. Direct access to logs from servers in the same container is a small win that adds up to days saved over the year.

Lessons for IoT companies choosing cloud hosting

Logifleet's journey from self-hosting to a modern cloud platform offers practical lessons for other Swiss IoT companies evaluating their hosting options:

  • Start with your data model, not the provider's feature list. Understanding your ingestion rates, retention requirements and query patterns should drive the architecture, which in turn determines the right hosting platform.
  • Prioritize operational simplicity. IoT teams are typically small and focused on product development. A hosting platform that requires a dedicated infrastructure engineer is a hidden cost that doesn't appear on the invoice.
  • Plan for 10x scale from day one. IoT growth is often non-linear. A single enterprise client can double your device count overnight. Your hosting must absorb this growth without re-architecture.
  • Don't underestimate the value of local support. When your fleet tracking system goes down at 7 AM and delivery trucks are waiting for route assignments, reaching a support engineer who understands your setup and speaks your language is worth more than any SLA document.

Logifleet's experience shows that even complex IoT solutions with massive data volumes can find hosting at Hidora that meets their demands. To learn more about our solutions for IoT projects, explore our consulting and managed services offerings.

Jean-Luc Dubouchet

Systems & Cloud DevOps Engineer

Systems & Cloud DevOps Engineer at Hidora for 8 years. Kubernetes and cloud infrastructure specialist.

Certifications: CKA, CKAD, CKS, CCNA

Need support?

Let's talk about your project. 30 minutes, no strings attached.