Case Studies
19,090 case studies
Med Helper Now: Revolutionizing Home Healthcare Services with IoT
Med Helper Now, a platform designed to connect home healthcare service providers with those in need, faced the challenge of building a large-scale platform with a small team. The co-founders, Moazzam Khan and Nicole Harrison, had coding experience, but the scale of the project was beyond their capacity. They needed a solution that could provide professional components, data integration, and security while also being cost-effective and time-efficient. The platform needed to cater to two types of users (service providers and service users) and provide each with different functionality. For service providers, the platform needed to verify licenses, manage bookings, and handle payments. For service users, it needed to facilitate bookings, manage payment settings, and enable communication with service providers.
DENIC Enhances Query Times by 10x Leveraging ClickHouse
DENIC eG, the administrator and operator of the German namespace on the Internet, was facing challenges in improving the user experience of the internet community due to limitations in data analytics. The data relevant for their analytics was distributed among relational databases, server log data, and various other sources. These sources were already used for monitoring and system improvements, but their analytical features were limited, and cross-evaluations across a wide range of sources were costly or not feasible. The initial steps of developing the data science platform involved using a database based on a relational DBMS. The data from different sources was consolidated by Python agents in containers on Kubernetes, and the results were written to target tables in the database. This approach resulted in a considerable number of target tables and containers, which were difficult to administer and became somewhat overcomplicated. Furthermore, relational databases scaled poorly to larger data volumes: a single query could take anywhere from several minutes to several hours.
High-Speed Content Distribution Analytics for Disney+ with ClickHouse
Disney+'s Observability team was faced with the challenge of processing and analyzing access logs for their content distribution system. The team had to deal with a massive amount of data generated by the users of Disney+, which required a highly scaled and distributed database system. The existing solutions, such as Elasticsearch, Hadoop, and Flink, were not able to handle the volume of data efficiently. Elasticsearch, for instance, required a lot of rebalancing and used a Java virtual machine, adding an unnecessary layer of virtualization. The team was struggling to ingest all the logs due to the size of the data.
Boosting Game Performance: ExitLag's Transition from MySQL to ClickHouse
ExitLag, a tool that optimizes the gaming experience for over 1,700 games on more than 900 servers worldwide, was facing performance issues with MySQL. They were encountering bottlenecks and slowdowns with specific analytical queries for user behavior analysis and network route mapping, especially as their data volume increased. In their continuous effort to resolve common connection problems for gamers, ExitLag developed a sophisticated method for sending connection packets from users: packets are sent simultaneously through different routes, increasing the likelihood that at least one copy is delivered. However, the growing data volume was causing performance issues with their existing MySQL system.
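The redundant-routing idea described above can be sketched as a small simulation. This is a conceptual illustration, not ExitLag's actual implementation; the route names, loss rates, and data structures are invented for the example. The key mechanics are duplicating each packet across routes and deduplicating by packet ID at the receiver:

```python
import random

def send_redundant(packet_id: int, payload: str, routes: list) -> list:
    """Send the same packet over every route; each copy may be lost in transit."""
    delivered = []
    for route in routes:
        if random.random() > route["loss_rate"]:
            delivered.append((packet_id, payload, route["name"]))
    return delivered

class Receiver:
    """Accept only the first copy of each packet; drop duplicates from slower routes."""
    def __init__(self):
        self.seen = set()
        self.accepted = []

    def on_packet(self, packet_id, payload, route_name):
        if packet_id in self.seen:
            return False  # duplicate copy that arrived via another route
        self.seen.add(packet_id)
        self.accepted.append((packet_id, payload))
        return True
```

The tradeoff is bandwidth for reliability: every extra route multiplies traffic, but a packet is lost only if all copies are lost.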
Accelerating GraphQL Hive Performance: Migration from Elasticsearch to ClickHouse
GraphQL Hive, an open-source tool for monitoring and analyzing GraphQL APIs, was facing significant scaling issues. The tool, which tracks the history of changes, prevents API breakage, and analyzes API traffic, was initially using Elasticsearch for data storage. However, as the volume of data increased, the average response time began to slow down significantly. Additionally, the indexing process was problematic, with larger users affecting the query performance of smaller users. Despite attempts to improve performance by creating an index per user, the overall speed of Elasticsearch was still below expectations. The team at The Guild, the company behind GraphQL Hive, also found the JSON-based query language of Elasticsearch challenging, as they were more familiar with SQL.
HIFI's Transition from BigQuery to ClickHouse for Enhanced Music Royalty Data Management
HIFI, a company providing financial and business insights to music creators, was facing challenges with its data management system. The company ingests a significant amount of royalty data, with a single HIFI Enterprise account having over half a gigabyte of associated royalty data representing over 25 million rows of streaming and other transaction data. This data needs to load into the user interface as soon as a customer logs in, and there can be multiple customers logging in simultaneously. Previously, it could take up to 30 seconds to load the data, and sometimes it would not load at all due to timeouts. HIFI was using Google Cloud's BigQuery (BQ) to store royalty data, but the pricing structure of BQ was a major challenge. It discouraged data usage and contradicted HIFI's data-driven values. Google's solution to purchase BQ slots ahead of time was not feasible for HIFI as a startup, as usage patterns could change dramatically week to week.
Highlight.io's Observability Solution Powered by ClickHouse: A Comprehensive Case Study
Highlight.io, an open-source observability platform, initially focused on session replay and frontend web development features. However, as the need for full-stack observability grew, the platform needed to expand its offerings. This expansion was necessary to enable developers to track user experiences within web apps, identify backend errors, and analyze associated logs across their infrastructure. The challenge was to integrate these features into a single-pane view to streamline the troubleshooting process. Furthermore, the platform aimed to add logging capabilities to its stack, powered by ClickHouse, to provide deeper insights into applications by capturing and analyzing server-side logs. The goal was to handle high data ingestion rates and ensure that developers could access up-to-date information in real-time.
Harnessing ClickHouse and Materialized Views for High-Performance Analytics: A Case Study of Inigo
Inigo, a pioneering company in the GraphQL API management industry, was in search of a database solution that could handle a high volume of raw data for analytics. They explored various alternatives, including SQLite, Snowflake, and PostgreSQL, but none of these options met their needs. Snowflake was too slow and costly for their needs, especially when handling real-time customer data within a product. PostgreSQL, while an excellent transactional database, was unsuitable for large analytic workloads. The company was able to get it to work with around 100K rows, but past that, the indexes were growing out of control and the cost of running a PostgreSQL cluster didn’t make sense. There was significant performance degradation once they hit the 100K - 1M rows mark. Inigo needed a solution that could handle billions of GraphQL API requests, create aggregated views on high cardinality data, generate alerts, and create dashboards based on extensive data.
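The materialized-view pattern referenced in the title — pre-aggregating on insert so that reads never scan raw rows — can be sketched in plain Python. This is a conceptual analog only; in ClickHouse the same idea is expressed declaratively with `CREATE MATERIALIZED VIEW`, and the operation names here are invented:

```python
from collections import defaultdict

class AggregatedView:
    """Incrementally maintained rollup, analogous to a materialized view:
    each insert updates per-key counters, so queries read tiny aggregate
    tables instead of billions of raw request rows."""
    def __init__(self):
        self.counts = defaultdict(int)
        self.errors = defaultdict(int)

    def insert(self, operation: str, is_error: bool):
        # Work is done once, at write time.
        self.counts[operation] += 1
        if is_error:
            self.errors[operation] += 1

    def error_rate(self, operation: str) -> float:
        # Reads are O(1) regardless of how many rows were ingested.
        total = self.counts[operation]
        return self.errors[operation] / total if total else 0.0
```

The design choice this illustrates is shifting aggregation cost from query time to ingest time, which is what keeps dashboards fast on high-cardinality data.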
Instabug's Successful Migration to ClickHouse for Enhanced APM Performance
Instabug, an SDK that provides a suite of products for monitoring and debugging performance issues throughout the mobile app development lifecycle, faced significant challenges with performance metrics. These metrics heavily relied on frequent and vast events, posing a challenge in receiving and efficiently storing these events. Additionally, the raw format of performance events was not useful for users, requiring heavy business logic for querying and data visualization. Instabug's backend is large scale, with APIs averaging approximately 2 million requests per minute and terabytes of data going in and out of their services daily. When building their Application Performance Monitoring (APM), they realized it would be their largest scale product in terms of data. They were storing approximately 3 billion events per day at a rate of approximately 2 million events per minute. They also had to serve complex data visualizations that depended heavily on filtering large amounts of data and calculating complex aggregations quickly for user experience. Initially, they designed APM like their other products, but faced performance issues with Elasticsearch, especially for reads, and writes were also not fast enough to handle their load.
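Ingesting millions of events per minute usually means batching: analytical stores prefer a few large inserts over many tiny ones. A minimal sketch of that buffering pattern follows — the flush size and the `sink` callback are illustrative, not Instabug's actual pipeline:

```python
class EventBatcher:
    """Buffer high-frequency events and flush them in large batches,
    the standard pattern for write-heavy analytical ingestion."""
    def __init__(self, flush_size: int, sink):
        self.flush_size = flush_size
        self.sink = sink      # callable that receives a list of events
        self.buffer = []

    def add(self, event):
        self.buffer.append(event)
        if len(self.buffer) >= self.flush_size:
            self.flush()

    def flush(self):
        # Hand off a copy so the sink can keep it; clear for the next batch.
        if self.buffer:
            self.sink(list(self.buffer))
            self.buffer.clear()
```

A production version would also flush on a timer and handle sink failures with retries; both are omitted here for brevity.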
Juspay's Real-Time Transaction Analysis and Cost Reduction with ClickHouse
Juspay, an Indian fintech company, is responsible for over 50 million daily transactions for clients such as Amazon, Google, and Vodafone. As a pioneer in the payments industry, Juspay's mission is to streamline online payments for merchants, acting as an intermediary between payment providers and merchant systems. To ensure a seamless transaction environment, Juspay needed to provide monitoring and analytics services to guarantee the performance of the payment system. With a diverse array of merchants and their ever-evolving needs, Juspay had to keep up with frequent releases, often multiple releases daily. They needed a monitoring solution that would enable them to maintain their rapid release pace while ensuring that the latest release did not impact the running payment systems. Furthermore, Juspay was facing high operating costs with their previous solution, BigQuery from GCP, which was costing over a thousand dollars per day.
MessageBird's Transformation with ClickHouse: A Case Study on Enhanced Performance and Cost Efficiency
MessageBird, a cloud communications platform, processes billions of messages, calls, and emails for over 29,000 customers. The company heavily relies on data-driven insights for efficient operations, with ClickHouse, an analytical backend, playing a crucial role since 2017. However, MessageBird faced challenges with its initial setup on MySQL due to scalability and latency issues. The company needed a solution that could handle high-volume data ingestion, provide low response times, and support real-time analytics for customer-facing dashboards and APIs. Additionally, the company required a system that could monitor the delivery performance of SMS messages and promptly identify anomalies. The challenge was to find a solution that could meet these needs while also being cost-effective.
Driving Sustainable Data Management with ClickHouse: Introducing Stash by Modeo
Modeo, a French data engineering firm, faced the challenge of managing increasing data volumes while accurately measuring the real-time carbon emissions generated by data usage and storage. This was part of their Corporate Social Responsibility initiative focusing on climate change. The company needed a solution that would allow their customers to monitor and optimize their data platform's cost, carbon footprint, and usage. The challenge was to balance the growing data volumes with the need to minimize environmental impact, a complex issue in the field of data engineering.
Supercolumn: NANO Corp.'s Journey from Experimentation to Production with ClickHouse
NANO Corp., a French startup founded in 2019, was on a mission to revolutionize network probes. They aimed to create versatile, lightweight probes capable of handling bandwidths up to 100 Gbit/s on commodity hardware. Their vision was to offer a new kind of observability, one that combined network performance and cybersecurity. However, to fully utilize the potential of their network probes, they needed a robust database. The database had to handle fast, constant inserts; run periodic queries for alerting alongside custom queries launched by multiple users; and manage large volumes of data efficiently. It also needed a hot/cold data buffering system, had to be easy to maintain and deploy, and had to be efficient in its RAM usage. Traditional RDBMSs, which their main engineers had used in their previous careers, were not up to the task: they depended too heavily on update speed and required clustering as soon as overall performance became an issue. NANO Corp. needed a database as groundbreaking as their probe.
OONI's Transformation: Enhancing Internet Censorship Measurement with ClickHouse
The Open Observatory of Network Interference (OONI) is a non-profit organization that provides free software tools to document internet censorship worldwide. Their tools allow users to test their internet connection quality, detect censorship, and measure network interference. However, OONI faced significant challenges in handling the vast amounts of data generated from these tests. They initially used flat files, MongoDB, and PostgreSQL to store metadata from measurement experiments. As the dataset grew into hundreds of millions of rows, performance issues arose, requiring a shift from an OLTP database to an OLAP one. OONI needed a solution that could simplify their architecture while handling complex data visualizations and enabling searches and aggregations on their 1B+ row dataset.
Opensee: Harnessing Financial Big Data with ClickHouse
Opensee, a financial technology company, was founded by a team of financial industry and technology experts who were frustrated by the lack of simple big data analytics solutions that could efficiently handle their vast amounts of data. Financial institutions have always stored large amounts of data for decision-making processes and regulatory reasons. However, since the financial crisis, regulators worldwide have significantly increased reporting requirements, insisting on longer historical ranges and deeper granularity. This has led to an exponential increase in data, forcing financial institutions to review and upgrade their infrastructure. Unfortunately, many of the storage solutions, such as data lakes built on a Hadoop stack, were too slow for at-scale analytics. Other solutions like in-memory computing solutions and query accelerators presented issues with scalability, high hardware costs, and loss of granularity. Financial institutions were thus forced into a series of compromises.
Plausible Analytics Leverages ClickHouse for Privacy-Friendly Web Analytics
Plausible Analytics, a privacy-friendly alternative to Google Analytics, faced a significant challenge as it scaled its services. Since its launch in April 2019, the platform had grown to service over 5000 paying subscribers, tracking 28,000 different websites and more than 1 billion page views per month. However, the original architecture using Postgres to store analytics data was unable to handle the platform’s future growth. The loading speed of their dashboards was slow, taking up to 5 seconds, which was not conducive to a good user experience. The team realized that to continue their growth trajectory and maintain customer satisfaction, they needed a more efficient solution.
QuickCheck's Transformation of Unbanked Financial Services Using ClickHouse
QuickCheck, a Fintech startup based in Lagos, Nigeria, is on a mission to provide financial services to over 60 million Nigerian adults who are excluded from banking services and 100 million who do not have access to credit. The QuickCheck mobile app, which has been downloaded by more than 2 million people and has processed over 4.5 million micro-credit applications, leverages artificial intelligence to offer app-based neo-banking products. However, the company faced challenges in analyzing the vast amount of financial data, fraud analysis, and monitoring data. They needed a solution that could handle hundreds of thousands of rows of data loaded daily for portfolio risk analysis and financial metrics building.
Leveraging ClickHouse Kafka Engine for Enhanced Data Collection and Analysis: A Case Study of Superology
Superology, a leading product tech company in the sports betting industry, was faced with the challenge of effectively collecting and analyzing quantitative data to improve customer experience and business operations. The company needed to gather metrics such as app or site visits, customer clicks on specific pages, number of comments and followers in their social section, and various conversion events and bounce rates. The data collected varied in structure, requiring a dynamic approach to data collection and analysis. Superology was using Google Protocol Buffers (Protobuf) to collect this data, but needed a more efficient and scalable solution to handle the large volume of data and its dynamic nature.
Building a Unified Data Platform with ClickHouse: A Case Study on Synq
Synq, a data observability platform, faced the challenge of managing the complexity, variety, and increasing volumes of data that powered their software system. The company needed to merge operational and analytical needs into a unified data platform. They were dealing with a continuous stream of data from dozens of systems, with frequent bursts of volume when customers ran large batch processing jobs or when new customers were onboarded. The company had set ambitious performance goals for backfilling data and wanted to provide immediate value to customers as they onboarded their product. They also wanted an infrastructure that could serve their first set of defined use cases and provide functionality to support new use cases quickly. Lastly, they aimed to build a single platform that could store their raw log data and act as a serving layer for most data use cases needed by their applications and APIs.
TrillaBit Leverages ClickHouse for Enhanced Analytics and Reporting
TrillaBit, a dynamic SaaS platform for reporting and business intelligence, initially used Apache Solr as its data backend. However, they soon encountered several challenges. Solr, being a key-value store, was more suited to search than high-volume non-linear aggregation or data compression for performance. Its query language wasn’t as mature as SQL and it didn’t handle joins effectively. When implementing real company data from various sources, TrillaBit found that more flexibility was required in different scenarios. They needed a solution that could be managed at a low cost and could be implemented within their environment for hands-on experience and understanding. However, popular contenders like Snowflake were too expensive and didn’t allow for full on-prem implementation.
Leveraging Zing Data and ChatGPT for Mobile Querying and Real-Time Alerts in ClickHouse
Many companies use ClickHouse for its ability to power fast queries. However, the process of having an analyst write a query, create a dashboard, and share it throughout the organization can add significant delay to getting questions answered. This challenge is compounded by the fact that many business intelligence (BI) tools require someone at a computer to pre-create dashboards or limit users to certain filters. Furthermore, the need for real-time alerts and the ability to query based on a user's current location are increasingly important in today's fast-paced business environment.
Revolutionizing In-App Analytics Experience with IoT: A Case Study
The client, a market research tech company, provides interactive analytics to marketing teams, enabling them to research their competitors’ brand and marketing activities as well as their own. The platform relies on massive volumes of web-traffic data and other sources, which are continuously streamed into S3 and used for various use cases and features within the customer-facing analytics platform. A key component of delivering a great user experience is ensuring that users don’t have to wait long to get answers to their questions; in practice, anything more than a couple of seconds was considered unacceptable. This led the company to make several painful tradeoffs. They had to limit their analysis to a month of data (~10TB) instead of a quarter (~40TB), due to the decrease in performance when analyzing a larger data set. They also had to pre-aggregate data for full-year analysis features to maintain performance, sacrificing the ability to drill into the granular data. Furthermore, they had to limit the types of data that could be analyzed, as many semi-structured sources could not be analyzed quickly enough compared to structured data.
TrueCar Employs Imply Cloud for Enhanced Self-Service Analytics
TrueCar, a leading automotive digital marketplace, was facing challenges in analyzing real-time clickstream data to detect anomalies in user activity. The latency from its existing data warehouse and business intelligence stack was higher than desired, and the cost of scaling to support analytics on large and growing amounts of streaming data was a concern. TrueCar wanted to make analytics available not just to analysts, but also to business users in diverse functions such as marketing and finance. They sought to achieve this without the time and risk associated with building an end-to-end analytics capability from scratch.
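Anomaly detection on clickstream volumes can be as simple as flagging intervals whose event count deviates from the mean by several standard deviations. The z-score sketch below is a toy illustration of the concept, not TrueCar's or Imply's actual method; the threshold value is arbitrary:

```python
from statistics import mean, stdev

def detect_anomalies(counts, threshold=3.0):
    """Return indices of intervals whose count deviates from the mean
    by more than `threshold` standard deviations."""
    mu = mean(counts)
    sigma = stdev(counts)
    return [i for i, c in enumerate(counts)
            if sigma and abs(c - mu) / sigma > threshold]
```

Real clickstream monitoring typically computes such statistics over a sliding window and accounts for seasonality (time of day, day of week), which this sketch ignores.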
Optimizing Database Performance and Cost with Redis on Flash: A Case Study on Ekata
Ekata, a company that provides digital identity verification services, was facing a significant challenge in managing its vast database. The company's proprietary Identity Graph™ solution was making an average of 150,000 to 200,000 calls per second to its 3TB database, a number that could climb even higher during peak hours. The challenge was to handle this massive load without impacting the performance of the system. As Ekata expanded its identity dataset globally, the need to manage this load without affecting performance, while keeping operational costs low, became increasingly apparent.
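The economics of Redis on Flash come from tiering: frequently accessed keys stay in RAM while colder values spill to cheaper flash storage. The toy two-tier store below illustrates the promotion and eviction mechanics only — the capacity, the plain dict standing in for flash, and the LRU policy are all simplifications for the example:

```python
from collections import OrderedDict

class TieredStore:
    """Two-tier key-value store: a small, LRU-ordered RAM tier backed by
    a larger (slower, cheaper) cold tier. Accessing a cold key promotes
    it to RAM, evicting the least recently used RAM entry."""
    def __init__(self, ram_capacity: int):
        self.ram_capacity = ram_capacity
        self.ram = OrderedDict()   # hot tier, most recent at the end
        self.flash = {}            # cold tier stand-in

    def put(self, key, value):
        self.ram[key] = value
        self.ram.move_to_end(key)
        self._evict()

    def get(self, key):
        if key in self.ram:
            self.ram.move_to_end(key)   # refresh recency
            return self.ram[key]
        value = self.flash.pop(key)     # promote on access
        self.put(key, value)
        return value

    def _evict(self):
        while len(self.ram) > self.ram_capacity:
            cold_key, cold_value = self.ram.popitem(last=False)
            self.flash[cold_key] = cold_value
```

The point of the pattern is that only the working set pays RAM prices; the long tail of rarely touched keys lives on flash.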
Active-Active Redis Enhances Resiliency and High-Availability for Flowdesk’s Cryptocurrency Trading Platform
Flowdesk, a financial technology services platform for digital asset issuers, faced the challenge of facilitating sub-second access to order books that store financial data across the globe. This required a high-availability, low-maintenance database service that could integrate with the Google Cloud ecosystem, support Terraform, and enable VPC peering. As Flowdesk's trading service grew, the infrastructure team realized they needed a more robust database and cache system to support real-time trading and market-making activity. They sought a cloud-based database with a global footprint that could easily synchronize data among multiple international regions. The team also needed to maintain rigorous standards for data availability among regions with the expectation of 99.999 percent uptime.
Download PDF
Redis Enterprise: A PCI-Compliant Solution for Credit Card Security Management
A Fortune 500 company in the travel and hospitality industry, serving 40 million customers annually, was faced with the challenge of managing its customers’ credit card information in a highly secure and compliant manner. The company's compliance team was in need of a highly available, PCI-compliant solution for managing credit card security codes, also known as CVV or CSC codes. The solution had to meet stringent requirements such as running in-memory, expiring stored information after a short period, and encrypting payment data to ensure minimal correlation between CVV codes and other cardholder information. The need for this solution was further accelerated by the COVID-19 pandemic, which pushed the company to modernize its applications and infrastructure, particularly its e-commerce payments platform.
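The requirements described above (in-memory storage, short-lived expiry, and encryption so CVV codes cannot be correlated with other cardholder data) amount to a time-to-live store for encrypted secrets. The following is a minimal pure-Python sketch of that pattern, not the company's actual implementation; the XOR "cipher" is a placeholder for real encryption, and in production this role would be played by Redis Enterprise with proper key management:

```python
import time

# Sketch of the pattern: keep a security code only in memory, stored in
# encrypted form, and forget it automatically after a short time-to-live.
# The XOR step below is a stand-in, NOT real encryption.

class ShortLivedStore:
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._data = {}  # key -> (expiry_timestamp, ciphertext)

    @staticmethod
    def _xor(value: bytes, key: bytes) -> bytes:
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(value))

    def put(self, key: str, plaintext: bytes, secret: bytes) -> None:
        expiry = time.monotonic() + self.ttl
        self._data[key] = (expiry, self._xor(plaintext, secret))

    def get(self, key: str, secret: bytes):
        entry = self._data.get(key)
        if entry is None:
            return None
        expiry, ciphertext = entry
        if time.monotonic() > expiry:
            del self._data[key]  # expired: the code is gone for good
            return None
        return self._xor(ciphertext, secret)

store = ShortLivedStore(ttl_seconds=0.05)
store.put("txn-123", b"123", secret=b"k")
assert store.get("txn-123", secret=b"k") == b"123"  # within TTL: retrievable
time.sleep(0.06)
assert store.get("txn-123", secret=b"k") is None    # after TTL: expired
```

In Redis terms, the expiry step corresponds to setting a key with a TTL so the server deletes it automatically, satisfying the requirement that stored codes vanish after a short period.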
Download PDF
Redis Enterprise on AWS: A Scalable Solution for HackerRank's Data Layer Needs
HackerRank, a leading platform for pre-screening, technical assessments, and remote interview solutions for hiring developers, needed a fast, scalable, and reliable data platform requiring minimal maintenance and configuration. This was crucial for the company to focus on innovation and to fulfill its mission of becoming the single source of truth for every engineer’s technical ability. Additionally, HackerRank needed a real-time leaderboard to showcase top developers. The company had been cobbling together a data layer from multiple solutions, which was neither efficient nor sustainable for its growing needs.
Download PDF
HealthStream Enhances SaaS Platform Performance with Redis Enterprise
HealthStream, a provider of SaaS-based software solutions for healthcare organizations, faced a significant challenge in ensuring optimal performance for its customers. The company's SaaS model, which leverages microservices and cloud components, needed to minimize server processing time due to the geographic distance and network challenges faced by some of its customers. The need to deliver high performance was paramount, given that its platform, hStream™, is used daily by several hundred thousand healthcare professionals. As utilization of HealthStream’s platform grew over time, the company's product architects had to decide between adding more hardware to meet scalability needs, which would only be a temporary solution, and rethinking the architecture and design of the platform.
Download PDF
iFood's Utilization of Redis Cloud for Enhanced Machine Learning Operations
iFood, a popular food ordering and delivery service in Brazil and Colombia, faced a significant challenge in maintaining the performance of its machine learning (ML) models. The company's success was directly tied to the performance of these models, which needed to process data quickly to reduce costs, increase revenue, and influence user behavior during real-time interactions. The COVID-19 pandemic presented unique opportunities for e-commerce firms, especially online delivery services that were prepared to handle an escalating volume of orders. At iFood, the technology team had to manage millions of new users and thousands of new restaurants joining its platform. Despite the surge in business volume, iFood remained committed to providing an optimal experience for its customers.
Download PDF
Performance Enhancement and Cost Reduction in Database Servers for Kicker
The German football publication kicker faced a significant challenge as its website and mobile platforms continued to attract new readers. The kicker.de website generates more than two billion page impressions per month, growing at over 15% per year, and the mobile version of the site along with kicker’s app experienced similarly phenomenal growth. As the audience grew, the company's existing tech stack reached its performance limits, and the tech team realized it needed an intelligent caching solution to maintain performance and user satisfaction.
Download PDF