Hadoop, Kafka, Storm
Big data is not a single product or component – it’s an umbrella of technologies and products. You can't really harness the power of big data with a single product. You need a solution that encompasses multiple technologies, and a toolbox to integrate them.
While every big data solution is intrinsically different, the requirements are largely the same: a) ingest high-velocity data, b) store large volumes of it, and c) extract information from it. Depending on the solution, low latency, high performance, and high throughput can also be key requirements.
The most innovative big data solutions use streaming to move operational data between ingestion points, storage systems and analytical platforms. Any big data solution will need a scalable, high performance database. What else will be required? That's up to you.
Accelerate your Spark workloads and publish results using Couchbase Server. Add ETL, analytics, and machine learning to your Couchbase applications with full support for Spark Core, Spark SQL, and Spark Streaming. Now available with support for Spark 2.1, including the Structured Streaming API.
Datasheet | White Paper | Presentation | Getting Started | What's New in Spark Connector 2.1
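As a rough illustration, the snippet below sketches how the Couchbase Spark Connector (2.x) can expose bucket contents as a Spark DataFrame for Spark SQL queries. The bucket name (`travel-sample`), the node address, and the `type = "airline"` filter are placeholder assumptions, and the exact API surface may vary between connector versions, so treat this as a sketch rather than a drop-in program.

```scala
// Sketch: reading Couchbase documents into a Spark DataFrame via the
// Couchbase Spark Connector 2.x. Requires a running Couchbase cluster;
// bucket name, node address, and filter below are illustrative only.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.sources.EqualTo
import com.couchbase.spark.sql._

object CouchbaseSparkSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("couchbase-spark-sketch")
      .config("spark.couchbase.nodes", "127.0.0.1")       // cluster address (assumed)
      .config("spark.couchbase.bucket.travel-sample", "") // bucket name / password
      .getOrCreate()

    // Infer a schema from documents matching the predicate,
    // then run an ordinary Spark SQL projection over them.
    val airlines = spark.read.couchbase(EqualTo("type", "airline"))
    airlines.select("name", "callsign").show(10)

    spark.stop()
  }
}
```

The same session configuration also drives the Spark Streaming and Structured Streaming paths; only the read/write calls differ.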
Use Couchbase as either a consumer or producer with Kafka message queues. Continuously stream data between Couchbase and Kafka as it is generated. Now available with support for Kafka Connect, which standardizes management, enables end-to-end monitoring, and supports dashboard tools such as Confluent Control Center.
Solution Brief | Presentation | Getting Started | What's New in Kafka Connector 3.1
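Because the connector is built on Kafka Connect, streaming from Couchbase into Kafka is configured rather than coded. The fragment below sketches a source-connector properties file; the property names follow the 3.x connector as best recalled, and the bucket, topic, and address values are placeholders, so check them against the connector documentation before use.

```properties
# Sketch of a Kafka Connect source connector that streams Couchbase
# change events into a Kafka topic. Values below are illustrative.
name=couchbase-source
connector.class=com.couchbase.connect.kafka.CouchbaseSourceConnector
tasks.max=2
connection.cluster_address=127.0.0.1
connection.bucket=travel-sample
topic.name=couchbase-events
```

Loaded through the Connect REST API or a standalone worker, this runs inside the standard Kafka Connect framework, which is what enables the centralized management and monitoring described above.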
- Transaction Alerts with Kafka
- Market Basket Analysis with Spark
- Loading CSV files with Spark
- Creating a Zeppelin Interpreter
- Getting Started: Zeppelin and Spark
- Getting Started with Kafka
- ETL w/ Spark (DataFrames & Streams)
- Running a Sample Kafka Consumer
- Spark Connector 2.0.0 Released
- Announcing Elasticsearch Plugin 2.1
- Apache NiFi Processors