What is a simple, effective way to debug custom Kafka connectors?
I'm working on a couple of Kafka connectors, and I don't see any errors in their creation/deployment in the console output; however, I am not getting the results I'm looking for (no results whatsoever, desired or otherwise). I based these connectors on Kafka's example FileStream connectors, so my debugging technique was based on the SLF4J Logger used in the examples. I've searched the console output for the log messages I expected, but to no avail. Am I looking in the wrong place for these messages? Or is there perhaps a better way to debug these connectors?



Example uses of the SLF4J Logger that I referenced for my implementation:



Kafka FileStreamSinkTask



Kafka FileStreamSourceTask
      java debugging apache-kafka slf4j apache-kafka-connect
      edited Nov 17 '18 at 5:04
      cricket_007
      asked Aug 16 '17 at 15:32
      C. Ommen

      3618




      3618
























          1 Answer
          I will try to reply to your question in a broad way. A simple way to do Connector development could be as follows:




          • Structure and build your connector source code by looking at one of the many Kafka Connectors available publicly (an extensive list is available here: https://www.confluent.io/product/connectors/ )

          • Download the latest Confluent Open Source edition (>= 3.3.0) from https://www.confluent.io/download/


          • Make your connector package available to Kafka Connect in one of the following ways:




            1. Store all your connector jar files (the connector jar plus dependency jars, excluding the Connect API jars) in a location on your filesystem, and enable plugin isolation by adding this location to the
              plugin.path property in the Connect worker properties. For instance, if your connector jars are stored in /opt/connectors/my-first-connector, set plugin.path=/opt/connectors in your worker's properties (see below).

            2. Store all your connector jar files in a folder under ${CONFLUENT_HOME}/share/java. For example: ${CONFLUENT_HOME}/share/java/kafka-connect-my-first-connector. (The folder name needs to start with the kafka-connect- prefix to be picked up by the startup scripts.) $CONFLUENT_HOME is where you've installed Confluent Platform.
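The first option above can be sketched as follows. The paths and jar name here are hypothetical placeholders, and a scratch worker properties file stands in for your real one:

```shell
# Hypothetical layout: one folder per connector under a common plugins directory
mkdir -p /tmp/connectors/my-first-connector
touch /tmp/connectors/my-first-connector/my-first-connector-1.0.jar

# plugin.path points at the PARENT directory, not the connector's own folder
echo "plugin.path=/tmp/connectors" >> /tmp/worker.properties
```

With this layout the Connect worker scans /tmp/connectors at startup and gives each connector folder its own isolated classloader.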



          • Optionally, increase your logging by changing the log level for Connect in ${CONFLUENT_HOME}/etc/kafka/connect-log4j.properties to DEBUG or even TRACE.
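As a rough sketch of that log-level change, assuming the stock file contains the default "log4j.rootLogger=INFO, stdout" line (shown here against a scratch copy rather than the real file):

```shell
# Recreate the assumed default line in a scratch copy so the edit is reproducible here
printf 'log4j.rootLogger=INFO, stdout\n' > /tmp/connect-log4j.properties

# Raise the root level from INFO to DEBUG (use TRACE for even more detail)
sed -i 's/^log4j.rootLogger=INFO/log4j.rootLogger=DEBUG/' /tmp/connect-log4j.properties
```

Apply the same substitution to ${CONFLUENT_HOME}/etc/kafka/connect-log4j.properties and restart Connect for it to take effect.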



          • Use Confluent CLI to start all the services, including Kafka Connect. Details here: http://docs.confluent.io/current/connect/quickstart.html



            Briefly: confluent start





          Note: The Connect worker's properties file currently loaded by the CLI is ${CONFLUENT_HOME}/etc/schema-registry/connect-avro-distributed.properties. That's the file you should edit if you choose to enable classloading isolation, and also if you need to change any other Connect worker properties.






          • Once you have the Connect worker running, start your connector by running:



            confluent load <connector_name> -d <connector_config.properties>



            or



            confluent load <connector_name> -d <connector_config.json>



            The connector configuration can be in either Java properties or JSON format.
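For instance, a minimal properties file for the stock FileStreamSource example connector could look like the following. The connector class and keys come from the Kafka file connector examples; the name, file, and topic values are placeholders, and the file is written to a scratch location here:

```shell
# Hypothetical config for the example FileStreamSource connector
cat > /tmp/connect-file-source.properties <<'EOF'
name=my-file-source
connector.class=org.apache.kafka.connect.file.FileStreamSourceConnector
tasks.max=1
file=/tmp/input.txt
topic=my-topic
EOF
```

You would then load it with: confluent load my-file-source -d /tmp/connect-file-source.properties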




          • Run
            confluent log connect to open the Connect worker's log file, or navigate directly to where your logs and data are stored by running



            cd "$( confluent current )"





          Note: you can change where your logs and data are stored during a Confluent CLI session by setting the environment variable CONFLUENT_CURRENT appropriately. E.g., given that /opt/confluent exists and is where you want to store your data, run:



          export CONFLUENT_CURRENT=/opt/confluent
          confluent current






          • Finally, to debug your connector interactively, one approach is to set the following before starting Connect with the Confluent CLI:



            confluent stop connect
            export CONNECT_DEBUG=y; export DEBUG_SUSPEND_FLAG=y;
            confluent start connect



            and then attach your debugger (for instance, remotely to the Connect worker on the default port, 5005). To stop running Connect in debug mode, just run unset CONNECT_DEBUG; unset DEBUG_SUSPEND_FLAG; when you are done.




          I hope the above will make your connector development easier and ... more fun!
          • Hey @Konstantine Karantasis, thank you for the answer. I've gotten a chance to try out Confluent, but I've found myself stuck trying to load my connectors. Is there any more detail you could provide on that step? My connectors are packaged in a jar in the location you instructed: neo4k.filestream.source.Neo4jFileStreamSourceConnector and neo4k.sink.Neo4jSinkConnector, with the following config files located at ${CONFLUENT_HOME}/etc/kafka-connect-neo4j: connect-neo4j-file-source.properties and connect-neo4j-sink.properties, respectively.

            – C. Ommen
            Aug 22 '17 at 14:01
            Never mind, I got the load command to find the .properties file. It was just a matter of using the correct file path.

            – C. Ommen
            Aug 23 '17 at 15:17
          edited Sep 18 '18 at 18:11
          answered Aug 16 '17 at 17:25
          Konstantine Karantasis