Kafka Series, Part 2: Setting Up the Environment

Tags: kafka
Created: Jul 29, 2022 08:05 AM
In the previous post, "Kafka Series, Part 1: Background", we covered a fair amount of Kafka background. From here on we will learn the underlying principles through hands-on practice, and this post starts with setting up the environment.

1. Development Environment

Option 1: Confluent Cloud
Confluent is a company founded by several of Kafka's original creators. It focuses on commercial Kafka solutions and is currently a main contributor to the Apache Kafka project. Confluent Cloud is their hosted offering: you can spin up a Kafka cluster quickly without deploying anything yourself, and a management console helps with monitoring and operations.
💡 At the time of writing there is a 2-month free trial with $400 of credit, no credit card required.
  1. First, create an account on the official website.
  2. Create a cluster:
    • Choose the Hong Kong region to reduce latency (currently only AWS offers a Hong Kong option).
    • No payment method is needed; just click the skip button at the bottom right.
  3. Once the cluster is created, you can view its details on the management page.
Option 2: Docker (recommended)
Copy one of the two docker-compose.yml files below. The first one is recommended, because later posts in this series assume the ZooKeeper-based setup.
# In the directory containing docker-compose.yml, run: docker-compose up -d
  1. Kafka + ZooKeeper

docker-compose.yml:

```yaml
version: '3'
services:
  zookeeper-1:
    image: confluentinc/cp-zookeeper:latest
    hostname: zookeeper-1
    container_name: zookeeper-1
    ports:
      - "22181:22181"
    environment:
      ZOOKEEPER_SERVER_ID: 1
      ZOOKEEPER_CLIENT_PORT: 22181
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_INIT_LIMIT: 5
      ZOOKEEPER_SYNC_LIMIT: 2
      ZOOKEEPER_SERVERS: zookeeper-1:22888:23888;zookeeper-2:22888:23888;zookeeper-3:22888:23888
  zookeeper-2:
    image: confluentinc/cp-zookeeper:latest
    hostname: zookeeper-2
    container_name: zookeeper-2
    ports:
      - "32181:22181"
    environment:
      ZOOKEEPER_SERVER_ID: 2
      ZOOKEEPER_CLIENT_PORT: 22181
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_INIT_LIMIT: 5
      ZOOKEEPER_SYNC_LIMIT: 2
      ZOOKEEPER_SERVERS: zookeeper-1:22888:23888;zookeeper-2:22888:23888;zookeeper-3:22888:23888
  zookeeper-3:
    image: confluentinc/cp-zookeeper:latest
    hostname: zookeeper-3
    container_name: zookeeper-3
    ports:
      - "42181:22181"
    environment:
      ZOOKEEPER_SERVER_ID: 3
      ZOOKEEPER_MY_ID: 3
      ZOOKEEPER_CLIENT_PORT: 22181
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_INIT_LIMIT: 5
      ZOOKEEPER_SYNC_LIMIT: 2
      ZOOKEEPER_SERVERS: zookeeper-1:22888:23888;zookeeper-2:22888:23888;zookeeper-3:22888:23888
  kafka-1:
    image: confluentinc/cp-kafka:latest
    hostname: kafka-1
    container_name: kafka-1
    ports:
      - "9091:9091"
    depends_on:
      - zookeeper-1
      - zookeeper-2
      - zookeeper-3
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:22181,zookeeper-2:22181,zookeeper-3:22181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka-1:19092,PLAINTEXT_HOST://localhost:9091
  kafka-2:
    image: confluentinc/cp-kafka:latest
    hostname: kafka-2
    container_name: kafka-2
    ports:
      - "9092:9092"
    depends_on:
      - zookeeper-1
      - zookeeper-2
      - zookeeper-3
    environment:
      KAFKA_BROKER_ID: 2
      KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:22181,zookeeper-2:22181,zookeeper-3:22181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka-2:29092,PLAINTEXT_HOST://localhost:9092
  kafka-3:
    image: confluentinc/cp-kafka:latest
    hostname: kafka-3
    container_name: kafka-3
    ports:
      - "9093:9093"
    depends_on:
      - zookeeper-1
      - zookeeper-2
      - zookeeper-3
    environment:
      KAFKA_BROKER_ID: 3
      KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:22181,zookeeper-2:22181,zookeeper-3:22181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka-3:39092,PLAINTEXT_HOST://localhost:9093
```
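A note on why each broker advertises two listeners: KAFKA_ADVERTISED_LISTENERS publishes one address for clients inside the Docker network (e.g. kafka-1:19092) and one for clients on the host machine (localhost:9091). A client contacts a bootstrap server first, and the metadata it gets back contains the advertised address for whichever listener it connected through. A minimal pure-Python sketch of that resolution rule (the dictionary mirrors the compose file above; the function name is ours, not a Kafka API):

```python
# Advertised listeners per broker, as declared in the compose file above.
ADVERTISED = {
    "kafka-1": {"PLAINTEXT": "kafka-1:19092", "PLAINTEXT_HOST": "localhost:9091"},
    "kafka-2": {"PLAINTEXT": "kafka-2:29092", "PLAINTEXT_HOST": "localhost:9092"},
    "kafka-3": {"PLAINTEXT": "kafka-3:39092", "PLAINTEXT_HOST": "localhost:9093"},
}

def address_for(broker: str, listener: str) -> str:
    """Return the address a client should use for `broker`, given the
    listener name it originally connected through."""
    return ADVERTISED[broker][listener]

# A client on the host bootstraps via localhost:9091 (the PLAINTEXT_HOST
# listener), so every broker address it receives is host-reachable:
host_view = [address_for(b, "PLAINTEXT_HOST") for b in ADVERTISED]
print(host_view)  # ['localhost:9091', 'localhost:9092', 'localhost:9093']

# Another container on the Docker network uses the internal listener:
print(address_for("kafka-2", "PLAINTEXT"))  # kafka-2:29092
```

This is why connecting to localhost:9091 works from your laptop but a container on the compose network must use kafka-1:19092 instead.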
  2. KRaft (Kafka without ZooKeeper)

💡 In the latest Kafka release at the time of writing (3.2.1), KRaft is still an early-access preview and not suitable for production. Once it stabilizes, Kafka will no longer depend on ZooKeeper. In short, KRaft replaces ZooKeeper by using the Raft consensus algorithm so that Kafka maintains its own metadata. We may dig into this later in the series (if we ever get that far 😅).

docker-compose.yml:

```yaml
version: "3"
services:
  kafka:
    image: 'bitnami/kafka:latest'
    ports:
      - '9092:9092'
    environment:
      - KAFKA_ENABLE_KRAFT=yes
      - KAFKA_CFG_PROCESS_ROLES=broker,controller
      - KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER
      - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,CONTROLLER://:9093
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://127.0.0.1:9092
      - KAFKA_BROKER_ID=1
      - KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=1@127.0.0.1:9093
      - ALLOW_PLAINTEXT_LISTENER=yes
```
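For reference, the same single-node KRaft setup expressed directly as Kafka broker properties. This is a sketch closely following the sample config/kraft/server.properties that ships with Kafka 3.2 (the log.dirs path is that sample's default, not something from this compose file):

```properties
# This process acts as both broker and controller
process.roles=broker,controller
node.id=1
# Controller quorum: a single voter, node 1 at localhost:9093
controller.quorum.voters=1@localhost:9093
listeners=PLAINTEXT://:9092,CONTROLLER://:9093
inter.broker.listener.name=PLAINTEXT
controller.listener.names=CONTROLLER
listener.security.protocol.map=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT
advertised.listeners=PLAINTEXT://127.0.0.1:9092
log.dirs=/tmp/kraft-combined-logs
# Unlike the ZooKeeper mode, KRaft requires formatting the storage
# directory before first start:
#   bin/kafka-storage.sh random-uuid
#   bin/kafka-storage.sh format -t <uuid> -c config/kraft/server.properties
```

The bitnami image performs the storage-format step automatically, which is why the compose file above does not mention it.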
       
Option 3: Local jar packages
  1. Make sure Java 8+ is already installed locally; if not, download and install it first (not covered here).
    💡 A handy tool for this is SDKMAN (Home - SDKMAN! the Software Development Kit Manager). It makes installing and switching JDK versions easy, a bit like nvm, and it supports other SDKs as well.
  2. Download Kafka from the official website.
  3. Extract the downloaded archive (e.g. kafka_2.13-3.2.1.tgz) and enter the kafka_2.13-3.2.1 directory:
      tar -xzf kafka_2.13-3.2.1.tgz
      cd kafka_2.13-3.2.1
  4. Start the ZooKeeper service in a terminal:
      bin/zookeeper-server-start.sh config/zookeeper.properties
  5. Start the Kafka broker in another terminal:
      bin/kafka-server-start.sh config/server.properties
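Once both processes are up, you can sanity-check the installation with the console tools shipped in the same bin/ directory. These commands assume a broker listening on localhost:9092 and are run from the kafka_2.13-3.2.1 directory; the topic name quickstart is just an example:

```
# Create a test topic on the local broker
bin/kafka-topics.sh --create --topic quickstart \
  --bootstrap-server localhost:9092

# Type a few messages, one per line (Ctrl-C to exit)
bin/kafka-console-producer.sh --topic quickstart \
  --bootstrap-server localhost:9092

# Read them back from the beginning (Ctrl-C to exit)
bin/kafka-console-consumer.sh --topic quickstart --from-beginning \
  --bootstrap-server localhost:9092
```

If the consumer prints back what the producer sent, the broker is working.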

2. Visualization

With the environment up, let's try connecting to it with a client.
  • Confluent Cloud
    • If you chose this option, the Confluent web console already covers most viewing and management needs. If you would still rather connect with a desktop client, here is how to do it with Offset Explorer 2.
    • Step 1: Get the connection parameters from the Confluent console. Log in and navigate to Data Integration → Clients → Java → Create Kafka cluster API key, then copy the connection address (bootstrap server) and the JAAS configuration shown on the right.
    • Step 2: In Offset Explorer 2, create a new connection:
      • On the Security tab, select SASL SSL.
      • On the Advanced tab, enter the connection address from Step 1 and set SASL Mechanism to PLAIN.
      • On the JAAS Config tab, paste the JAAS configuration from Step 1, then click Add to finish.
      • Open the connection from the menu; once connected you should see multiple brokers.
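The values copied in Step 1 map onto the standard Kafka client settings below. This is an illustrative sketch: the hostname, API key, and secret are placeholders, and your console shows the real values:

```properties
bootstrap.servers=pkc-xxxxx.ap-east-1.aws.confluent.cloud:9092
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
# The JAAS config embeds the API key/secret as username/password
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username='<API_KEY>' \
  password='<API_SECRET>';
```

Offset Explorer's Security/Advanced/JAAS Config tabs are just a UI for these same properties, which is why the steps above spread the copied values across three tabs.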
  • Docker
    • Step 1: Run docker ps and confirm that the Kafka and ZooKeeper containers are up.
    • Step 2: Open Offset Explorer 2 and create a new connection as above. No SASL settings are needed for the local cluster; point the client at the ports exposed in docker-compose.yml (e.g. ZooKeeper at localhost:22181, or bootstrap servers localhost:9091,localhost:9092,localhost:9093). After connecting you should see the three brokers.

3. Why Does Kafka Need ZooKeeper? 🤔️

Apache Kafka depends on Apache ZooKeeper for cluster coordination: electing the controller, tracking which brokers are alive (cluster membership, via ephemeral nodes), and storing cluster metadata such as topic configurations and ACLs.
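One concrete example is controller election. In ZooKeeper-mode Kafka, the first broker to create the ephemeral /controller znode becomes the controller; when its session dies, ZooKeeper deletes the znode and the remaining brokers race to re-create it. A toy pure-Python simulation of that rule (an illustrative in-memory stand-in, no real ZooKeeper involved):

```python
class ToyZooKeeper:
    """Tiny in-memory stand-in for the ephemeral-znode behaviour Kafka
    relies on for controller election."""

    def __init__(self):
        self.ephemeral = {}  # znode path -> owning session

    def try_create(self, path: str, owner: str) -> bool:
        """Create an ephemeral znode; fails if it already exists."""
        if path in self.ephemeral:
            return False
        self.ephemeral[path] = owner
        return True

    def session_expired(self, owner: str) -> None:
        """Ephemeral znodes of a dead session are removed automatically."""
        self.ephemeral = {p: o for p, o in self.ephemeral.items() if o != owner}


zk = ToyZooKeeper()
assert zk.try_create("/controller", "broker-1")      # broker-1 wins the race
assert not zk.try_create("/controller", "broker-2")  # broker-2 loses
zk.session_expired("broker-1")                       # controller crashes
assert zk.try_create("/controller", "broker-2")      # re-election: broker-2 wins
```

KRaft replaces exactly this kind of coordination with a Raft quorum inside Kafka itself.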
 
