Consuming Platform Events on Microservices





I am having a few issues designing an architecture that uses Platform Events together with microservices.



Following common microservices practice (we run on Kubernetes), there should be at least three instances (an odd number) of a service running so it can handle the load and survive node failures. That works fine for callouts, but for Platform Events it gets a bit tricky.



Here is our architecture:



Whenever certain conditions are met, we need to send an uploaded file to a third party for processing and get the resulting data back into Salesforce. The flow goes like this (a rough sketch of step 3 follows the list):




  1. Person X uploads a file (the file can be big, so we can't make an HTTP callout to the third party with the file data itself)

  2. We fire a platform event carrying the fileId (the ContentVersion Id)

  3. The third-party microservice listens for the event, fetches the file from the Salesforce REST endpoint, processes it, and updates the resulting data back in Salesforce
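
For context, here is a minimal sketch of what step 3 looks like on the microservice side, assuming the instance has already received the event (e.g. over CometD/Streaming API) and holds an OAuth access token. The API version, the ProcessingResult__c object, and its fields are hypothetical placeholders, not part of our actual org.

    import requests

    API_VERSION = "v44.0"  # assumption: use whatever API version your org supports

    def handle_file_event(instance_url, access_token, content_version_id):
        """Fetch the uploaded file from Salesforce, process it, and write results back."""
        headers = {"Authorization": f"Bearer {access_token}"}

        # Step 3a: download the binary file data for the ContentVersion
        data_url = (f"{instance_url}/services/data/{API_VERSION}"
                    f"/sobjects/ContentVersion/{content_version_id}/VersionData")
        resp = requests.get(data_url, headers=headers)
        resp.raise_for_status()
        file_bytes = resp.content  # large files may warrant streaming to disk instead

        # Step 3b: process the file (stand-in for the real third-party processing)
        result_summary = f"processed {len(file_bytes)} bytes"

        # Step 3c: write the result back to Salesforce (hypothetical custom object)
        create_url = f"{instance_url}/services/data/{API_VERSION}/sobjects/ProcessingResult__c/"
        resp = requests.post(create_url, headers=headers, json={
            "ContentVersionId__c": content_version_id,
            "Summary__c": result_summary,
        })
        resp.raise_for_status()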


Now the problem is on the microservice side. There can be multiple instances listening, and each instance processes independently, without knowing what is going on in the other instances.



So, to avoid a single point of failure, there will be at least three instances running (even more at peak load), all listening to the same event and acting independently. This is problematic because all three (or more) will process the event and push the same data back to Salesforce.



We can't have a single microservice listen to all events, as that would again be a single point of failure. We also cannot pop an event off the event bus, because it is an event bus, not an event queue.



Has anyone faced this, and what possible solutions can be built around it?
Tags: platform-event microservices
asked Nov 5 at 14:22 (edited Nov 5 at 14:29) · Pranay Jaiswal
2 Answers
While I haven't run into exactly this design issue myself, I do see a way to overcome the situation you are facing in this part:




    "The third-party microservice listens for the event, fetches the file from the Salesforce REST endpoint, processes it, and updates the resulting data back in Salesforce."




If you can break this into separate pieces, you should be able to resolve it. This is still a very high-level view, though; you will definitely need to think about parallel processing and deadlock scenarios and how to avoid them.





The approach I can think of is (a sketch of the replayId check follows the list):




1. Once your microservice (MS1) receives the event from Salesforce, capture its replayId and store it in a table in your local database (or elsewhere).

2. Make the callout to Salesforce only if that replayId does not already exist in the database.

3. If in the meantime another instance (MS2) receives the same replayId, it does not call Salesforce, because it sees the value already in the database.

4. If MS1's callout fails, simply delete the replayId from the database, so that any other instance that receives that replayId can take over and make the updates.
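
A minimal sketch of that check, assuming a database every instance can reach (SQLite here purely for illustration; the table and function names are made up). The unique constraint makes the claim atomic, so even two instances reacting at exactly the same moment cannot both win:

    import sqlite3

    # Assumption: in production this would be a shared database (e.g. Postgres),
    # not a local SQLite file.
    conn = sqlite3.connect("processed_events.db")
    conn.execute("""
        CREATE TABLE IF NOT EXISTS processed_events (
            replay_id INTEGER PRIMARY KEY  -- uniqueness is what makes the claim atomic
        )
    """)
    conn.commit()

    def try_claim(replay_id: int) -> bool:
        """Return True only for the single instance that claims this replayId first."""
        try:
            conn.execute("INSERT INTO processed_events (replay_id) VALUES (?)", (replay_id,))
            conn.commit()
            return True                   # we won: go ahead and call Salesforce
        except sqlite3.IntegrityError:
            return False                  # another instance already claimed it: skip

    def release(replay_id: int) -> None:
        """Undo the claim after a failed callout so another instance can retry."""
        conn.execute("DELETE FROM processed_events WHERE replay_id = ?", (replay_id,))
        conn.commit()

The same pattern works with a conditional insert (INSERT ... ON CONFLICT DO NOTHING, a unique index, or a conditional write) in whatever shared store you already run.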




Another approach I can think of here is to use a queue (say, Azure Service Bus or MuleSoft) along with the one mentioned above. In this approach, the services subscribed to the events in Salesforce publish those events as messages onto the queue. You will, though, need to make sure that the replayId being added to the queue is unique.



Then you have another service reading from the queue, which takes care of calling out to Salesforce based on the replayId. Because it is a queue, the moment a message is read it is locked and becomes unavailable to any other consumer, so concurrency issues are handled for you.
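
As a rough illustration only (the queue name, connection string, and message shape are assumptions, and the same idea works with any broker that offers peek-lock or visibility-timeout semantics), a consumer built on the azure-servicebus Python SDK might look like this:

    import json
    from azure.servicebus import ServiceBusClient

    CONN_STR = "<service-bus-connection-string>"  # assumption: injected via config
    QUEUE_NAME = "sf-file-events"                 # assumption: queue fed by the event subscribers

    def process(event):
        """Hypothetical: call back into Salesforce for this replayId/file."""
        ...

    with ServiceBusClient.from_connection_string(CONN_STR) as client:
        with client.get_queue_receiver(queue_name=QUEUE_NAME) as receiver:
            for msg in receiver.receive_messages(max_message_count=10, max_wait_time=5):
                event = json.loads(str(msg))  # e.g. {"replayId": 42, "contentVersionId": "068..."}
                try:
                    process(event)
                    receiver.complete_message(msg)  # done: remove the message from the queue
                except Exception:
                    receiver.abandon_message(msg)   # release the lock so another consumer can retry

While a message is locked, no other instance receives it, which is exactly the de-duplication you are after.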





In summary, you will need to make sure your microservices coordinate with each other (through a shared store or a queue) to avoid deadlock and concurrency issues.






answered Nov 5 at 15:12 (edited Nov 5 at 15:39) · Jayant Das (accepted answer)























• This would work when the microservices react a few microseconds apart, but if both act at exactly the same time it would probably still cause the issue I mentioned. Now it looks more like a concurrency problem.
  – Pranay Jaiswal, Nov 5 at 15:16










• Let me add some more details as I think it through.
  – Jayant Das, Nov 5 at 15:21










• Thanks, I will have a look at Azure Service Bus; it looks interesting.
  – Pranay Jaiswal, Nov 5 at 15:31































Ultimately, you need some cooperation. You can't keep multiple independent nodes on an event bus, none of which can communicate with each other, from each processing every event. It's impossible. At some point, you need a common point of communication. Either the microservices need to talk to each other, or they need to take turns in some coordinated manner, or you need a front end to delegate the events evenly, or you need a central queue to place the events in, or something else that will ultimately lead to a break from what you're defining as a microservice architecture. The microservices architecture is not suitable for all possible applications. This is one of those applications.
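
As one concrete (and deliberately simple) example of "taking turns in a coordinated manner": each instance can try to grab a shared advisory lock and only the winner subscribes to the event bus, with another instance taking over if the winner dies. This is a sketch, assuming a shared PostgreSQL database and the psycopg2 driver; the lock key and function names are arbitrary:

    import time
    import psycopg2

    SUBSCRIBER_LOCK_KEY = 42  # arbitrary application-defined lock id

    def run_if_leader(dsn, subscribe_and_process):
        """Only the instance holding the advisory lock subscribes to the event bus."""
        conn = psycopg2.connect(dsn)
        conn.autocommit = True
        cur = conn.cursor()
        while True:
            # pg_try_advisory_lock succeeds for exactly one session at a time;
            # the lock is released automatically if that session's connection drops.
            cur.execute("SELECT pg_try_advisory_lock(%s)", (SUBSCRIBER_LOCK_KEY,))
            if cur.fetchone()[0]:
                subscribe_and_process()  # hypothetical: the actual subscription/processing loop
            else:
                time.sleep(5)            # stand by and retry in case the leader goes away

Note that the shared database is still the central coordination point, which is the answer's point: some common piece is unavoidable.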






answered Nov 5 at 15:02 · sfdcfox





















• I kind of had that thought in my mind, but I wanted to believe it wasn't true. Thanks for the wise words; I will look at how to make them communicate with each other, or see whether there is an already-built API for a central queue.
  – Pranay Jaiswal, Nov 5 at 15:18








• @PranayJaiswal You may also want to read en.wikipedia.org/wiki/Microservices. Microservices are allowed to talk to each other; that's not what defines a microservice. The underlying structure would probably involve shared memory/IPC to keep track of already-used replay Id values, but you're still going to have a central storage area somewhere. These microservices do not exist in total isolation.
  – sfdcfox, Nov 5 at 15:28










