<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Rendy's Dev Journal]]></title><description><![CDATA[Indonesian 🇮🇩 | Software Engineer at Bytedance | Tinkerer | Loves computer, GNU Linux and Networking.]]></description><link>https://rendyananta.dev</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1709374502494/bd79f336-c6f2-4dae-9e5c-03930358c71e.png</url><title>Rendy&apos;s Dev Journal</title><link>https://rendyananta.dev</link></image><generator>RSS for Node</generator><lastBuildDate>Sun, 19 Apr 2026 10:33:33 GMT</lastBuildDate><atom:link href="https://rendyananta.dev/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Effective System Design for Multi-Parties Integration with Composable Components]]></title><description><![CDATA[Disclaimer, this article is written on behalf of myself, not the company I worked on. Because of that, I will not share the detailed business impact results that refers to some confidential data and overall architecture inside Tokopedia. The purpose ...]]></description><link>https://rendyananta.dev/effective-system-design-for-multi-parties-integration-with-composable-components</link><guid isPermaLink="true">https://rendyananta.dev/effective-system-design-for-multi-parties-integration-with-composable-components</guid><category><![CDATA[software architecture]]></category><category><![CDATA[Go Language]]></category><dc:creator><![CDATA[Rendy Ananta]]></dc:creator><pubDate>Sun, 09 Jun 2024 15:56:09 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/3_I4NVI9d1k/upload/2ecfb90938fb30463accb2beec89e08b.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Disclaimer, this article is written on behalf of myself, not the company I worked on. 
Because of that, I will not share detailed business impact results that refer to confidential data, nor the overall architecture inside Tokopedia. My purpose in writing this article is to share what I did and how I think it should be done. This article is opinionated and may not align with everyone's views. That's okay, because I just want to share the values and principles I hold in software engineering, and in problem solving in general. Your context may differ in terms of the problem, app scale, and company size, but I believe the experience I describe in this article is worth reading.</p>
<p>Before going into the details: Tokopedia is one of the largest e-commerce companies in Indonesia, serving more than 100 million users per month. I am one of the people in charge of the business process in my service, which handles all of the shipping fee and free shipping (known as Bebas Ongkir in Indonesia) courier allocation in the Tokopedia system. The shipping fee is used to charge the buyer when they order a package, and is then handed over for settlement with the respective logistics partners.</p>
<h1 id="heading-existing-condition">Existing Condition</h1>
<p>The service we maintain has been live for approximately eight years and is written in Go. There is nothing wrong with the software's age itself; the problem comes from the features that have been introduced day by day over the years and are now deeply rooted in the code, making them very hard to remove. These features are tightly coupled to the current business logic, even though some of them have been turned off for a couple of years.</p>
<p>As the e-commerce industry evolved, the logistics partners also started adding features to improve their pricing in order to stay competitive with other logistics providers. This directly affects the shipping fee calculation, because we store the partner pricing in our cache and calculate fees on the fly to reduce API calls and network hops to the logistics partners' systems. <strong>This was handled nicely in the current shipping fee service, but the cache logic kept getting more complex because of the many ifs inside the method: different schemes for migration, different inputs, different handling for some regions, and many more.</strong> Beyond that, in logistics we also have multiple package-handling features that depend on the shipping fee, such as insurance or the cash-on-delivery payment method. Let me show you a picture to visualize the shipping fee service's responsibility in Tokopedia.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1717920206227/4bc8eb6a-d0d3-4631-9ef4-dcf02c737122.png" alt class="image--center mx-auto" /></p>
<p><em>User interface served by my service.</em></p>
<p>Before calculating the shipping fee, we perform multiple validations, such as checking weight, distance, routes, and many feature-related things. After that we check whether a cache entry for the request exists; if it doesn't, we send the request to the partner and save the result to the cache once the request finishes. Then we apply text manipulation, formatting, and package-handling modifiers, as shown in the app screenshot above.</p>
<p>Most importantly, the current technical solution inside the shipping fee service uses functional programming. I marked the things that need to be decoupled through refactoring with a star sign, as in the image shown below.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1717918886264/0d8b0d5d-bea0-464b-9a9c-0f1c45495570.png" alt class="image--center mx-auto" /></p>
<p><em>Existing condition inside the service.</em></p>
<p>Why are all of the functions marked as things that need to be decoupled? Am I anti functional programming? A big no. Functional programming is great for its simplicity, but it also prefers things that are composable and can be simplified using commonly known patterns. It means that whether you use object-oriented programming or functional programming, both highly encourage that a function is not only meant to solve one problem; if possible, the function should be generic enough to solve other problems easily.</p>
<h1 id="heading-why-keep-refactoring">Why Keep Refactoring?</h1>
<p>As I described above, it is hard to refactor, so why keep going even though the code still works correctly? The code is correct, yes, but sometimes we see issues whose origin we cannot trace (they are hard to debug), which reduces the service's reliability. There are also signs that our service has slowed down a bit because of unnecessary repeated validations and highly coupled turned-off features (which can also be the culprit of some issues). Some of the leaders suggested that we start the service fresh; it would absolutely run faster. That is a valid argument, but I think it would just become another story like the one we have been living through these years. If we start fresh, the same thing will happen again in the future. So I proposed my architecture design to make the service more maintainable and efficient while still keeping it fast enough. That is the idea behind this initiative.</p>
<h1 id="heading-the-thought-process">The Thought Process</h1>
<p>The principle I always follow when designing software is to <strong>divide the complex problem into three phases: pre-processing (input), processing (where the actual core business logic lives), and post-processing (output)</strong>. By dividing into phases, we can easily identify what needs to change when there is a feature update: the input, the process, or the output.</p>
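<p>To make the three phases concrete, here is a minimal Go sketch. The type and function names are my own illustration, not Tokopedia's actual code.</p>

```go
package main

import "fmt"

// Request, Input, and Result are hypothetical types for illustration.
type Request struct{ Weight int }
type Input struct{ Weight int }
type Result struct{ Fee int }

// preprocess validates and normalizes the incoming request (phase 1).
func preprocess(r Request) (Input, error) {
	if r.Weight <= 0 {
		return Input{}, fmt.Errorf("invalid weight: %d", r.Weight)
	}
	return Input{Weight: r.Weight}, nil
}

// process runs the core business logic, the shipping fee calculation (phase 2).
func process(in Input) Result {
	return Result{Fee: in.Weight * 1000} // e.g. a flat rate per kg
}

// present formats the result for the caller (phase 3).
func present(res Result) string {
	return fmt.Sprintf("shipping fee: %d", res.Fee)
}

func main() {
	in, err := preprocess(Request{Weight: 2})
	if err != nil {
		panic(err)
	}
	fmt.Println(present(process(in)))
}
```

<p>A feature change then maps to exactly one phase: a new validation touches only <code>preprocess</code>, a new pricing rule only <code>process</code>, a new response format only <code>present</code>.</p>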
<blockquote>
<p>The key principle I always use in any problem solving is to decompose things to be as small as possible. The keyword "small" here does not imply that a function should contain only one piece of logic, but rather one responsibility. It can have more than one piece of logic, but as long as it handles one responsibility, that is enough to break down complex logic. Some would say that ~30 lines of code is the sweet spot for what one method should handle. I think that is a common mistake: code responsibility cannot be measured by a total line count. As long as the function is clear, you are good to go. By keeping everything small, we can easily replace and remove features that are no longer needed. We also don't have to hold a thought process that takes up so much space in our heads while debugging the software.</p>
</blockquote>
<p>The principle above is somewhat abstract and difficult to grasp because it has no concrete example yet. That's okay. As you solve more problems, you will get a sense that "smaller is simpler". Your gut and intuition will teach you as the days go by.</p>
<p>Below is the grand design to solve the problem above. It is a bit different from the actual design implemented inside Tokopedia, but that is fine; we can still learn from it.</p>
<h2 id="heading-preprocessing-the-request">Preprocessing the Request</h2>
<p>Let's start with pre-processing the request. This happens when the incoming request is first handled: we decide whether it needs to be overridden or kept as is, because certain features require overriding the request payload. This phase is also where request validation happens, allowing us to do generic validation such as payload data type checks, weight, distances, features, and the many other things that can break the shipping and order fulfillment process. It is a really simple process, but it involves many things to validate.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1717941315052/198d5fc5-d9d5-4db8-b081-5769df8e1928.png" alt class="image--center mx-auto" /></p>
<p><em>System Design of the validation logic</em></p>
<p>The image above shows the flow of the validation process. It is really simple, so we only need procedural and functional calls, defining functions for each of the simple tasks above: start with a generic validator, then call validation A, override the payload with A's result, do validation B, and so on.</p>
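<p>As a sketch, the validation chain can be expressed as a slice of functions, each of which validates and may override the payload. The names here are illustrative, not the real service's.</p>

```go
package main

import (
	"errors"
	"fmt"
)

// Payload is a hypothetical request payload.
type Payload struct {
	Weight   int
	Distance int
}

// Validator validates the payload and may return an overridden copy.
type Validator func(Payload) (Payload, error)

// validateWeight is a generic validation step.
func validateWeight(p Payload) (Payload, error) {
	if p.Weight <= 0 {
		return p, errors.New("weight must be positive")
	}
	return p, nil
}

// overrideDistance caps the distance, an example of a feature override.
func overrideDistance(p Payload) (Payload, error) {
	if p.Distance > 100 {
		p.Distance = 100
	}
	return p, nil
}

// runValidators applies each validator in order, threading the payload through.
func runValidators(p Payload, vs []Validator) (Payload, error) {
	var err error
	for _, v := range vs {
		if p, err = v(p); err != nil {
			return p, err
		}
	}
	return p, nil
}

func main() {
	p, err := runValidators(Payload{Weight: 2, Distance: 250},
		[]Validator{validateWeight, overrideDistance})
	fmt.Println(p, err)
}
```

<p>Adding or removing a validation is then a one-line change to the slice, which is exactly the "keep everything small" property we are after.</p>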
<h2 id="heading-process-the-request">Process the Request</h2>
<p>Continuing to where the core business logic lies: this is where we do the shipping fee calculation. There are a bunch of ways to do it, and not every region can use the same calculation. Based on that, I concluded that requests can be handled by several different operations depending on the buyer who wants to place an order, the seller who will fulfill it, the logistics partners, and even the service level used to ship the package from the seller to the buyer. There are rules that must be satisfied before deciding who is going to do what, and where. Based on those conditions, we can narrow it down to the requirements below.</p>
<ol>
<li><p>System can decide where it should look up the shipping fee (the data source);</p>
</li>
<li><p>System can handle different calculation and caching mechanisms; and</p>
</li>
<li><p>System produces the exact same output shape even for different calculations and data sources.</p>
</li>
</ol>
<p>The three requirements above represent things that already exist in payment gateway systems. I was highly inspired by the agnostic payment gateway built by the community in PHP, <a target="_blank" href="https://github.com/thephpleague/omnipay">omnipay</a>. That framework takes an agnostic approach to supporting different payment gateways and lets users decide how to use the multiple payment gateways registered in the system. This concept helps us solve requirement number two.</p>
<p>However, omnipay does not have the capability to decide what goes where; in the payment gateway context, the user is the one who decides which payment gateway the request goes to. So I needed to build something that can decide where a request should be routed. I was also inspired by how routers work, so I came up with the idea of implementing the <a target="_blank" href="https://datatracker.ietf.org/doc/html/rfc1812#section-7.4">static routing and dynamic routing concepts</a>. With that knowledge, we can solve requirement number one.</p>
<p>Fulfilling requirement number three is the simplest of all. All of the gateways should produce the exact same output shape: we define an interface and use polymorphism, since Go has the capability to define interfaces.</p>
<p>Let's see the image below: the final architecture concept we need to solve this problem.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1717942761958/634e29c4-fced-4966-b5c8-ed187f68f32b.png" alt class="image--center mx-auto" /></p>
<p><em>System Design of the core process of shipping fee calculation</em></p>
<p>In the image shown above, the shapes have three different colors, as defined in the legend: blue for the polymorphic approach, light purple for functional code, and light green for a separate struct that simply organizes our code structure.</p>
<p>Starting from the top, we have the routers: the components capable of applying the routing rules, whether static or dynamic. A static rule means that if the request is A, then we should go to gateway A. Dynamic rules handle all the more complex routing, such as migrations, experiments, feature rollouts, or region whitelists, as long as the rule implements the <code>DynamicRuleInterface</code> interface. Both kinds of rules are preloaded when the app starts, to reduce per-request overhead. The router ends up knowing whether the payload should go to Gateway A, Gateway B, or Gateway C, and simply forwards the request to its destination gateways, concurrently.</p>
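<p>A minimal Go sketch of this routing idea. The interface name follows the article's <code>DynamicRuleInterface</code>; everything else (types, rule, gateway names) is my own illustration.</p>

```go
package main

import "fmt"

// ShippingRequest is a hypothetical routing input.
type ShippingRequest struct {
	Region  string
	Partner string
}

// DynamicRuleInterface decides at runtime whether a request matches a gateway.
type DynamicRuleInterface interface {
	Match(ShippingRequest) (gateway string, ok bool)
}

// regionWhitelistRule routes whitelisted regions to a specific gateway.
type regionWhitelistRule struct {
	regions map[string]bool
	gateway string
}

func (r regionWhitelistRule) Match(req ShippingRequest) (string, bool) {
	return r.gateway, r.regions[req.Region]
}

// Router holds static rules (partner name → gateway) and dynamic rules,
// both preloaded at startup.
type Router struct {
	static  map[string]string
	dynamic []DynamicRuleInterface
}

// Route resolves the destination gateway: dynamic rules first, then static.
func (rt Router) Route(req ShippingRequest) string {
	for _, rule := range rt.dynamic {
		if gw, ok := rule.Match(req); ok {
			return gw
		}
	}
	return rt.static[req.Partner]
}

func main() {
	rt := Router{
		static: map[string]string{"partner-a": "gateway-a"},
		dynamic: []DynamicRuleInterface{
			regionWhitelistRule{regions: map[string]bool{"jakarta": true}, gateway: "gateway-b"},
		},
	}
	fmt.Println(rt.Route(ShippingRequest{Region: "jakarta", Partner: "partner-a"}))
	fmt.Println(rt.Route(ShippingRequest{Region: "bali", Partner: "partner-a"}))
}
```

<p>When a rollout or whitelist is finished, its dynamic rule can be deleted and the mapping demoted to the static table, which is exactly the simplification path described later in this article.</p>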
<p>In the image, a gateway is represented in blue, meaning it must comply with the <code>GatewayInterface</code>. Just like a payment gateway, each implementation should define all of the <code>GatewayInterface</code> methods separately. This polymorphic design doesn't really change from the existing condition. This time, though, the gateway also has to know where to look and how to calculate the shipping fee. Since we don't want to add more complexity to the gateway directly, I decided to break it down into smaller pieces by separating the calculation logic and the cache-layer handling into different structs. <strong>Remember the philosophy: keep everything small</strong>. In these structs we can perform the cache save asynchronously to make the software faster, because this is a data-read service that doesn't need a transactional consistency level. It is a different story with payments, where we have to wait until the data is successfully written. Concurrently, we can calculate the shipping fee results, return the response to the router, and combine them as a whole.</p>
<p>After all of the gateways successfully return their results, the router receives and combines them into one slice of data, and then its job is done.</p>
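<p>A hedged sketch of the gateway side: the interface name <code>GatewayInterface</code> comes from the article, while the method set, gateway structs, and fan-out code are my own illustration.</p>

```go
package main

import (
	"fmt"
	"sync"
)

// Fee is a hypothetical calculation result.
type Fee struct {
	Gateway string
	Amount  int
}

// GatewayInterface is the common contract: every gateway returns the same shape.
type GatewayInterface interface {
	CalculateFee(weight int) Fee
}

type gatewayA struct{}

func (gatewayA) CalculateFee(w int) Fee { return Fee{Gateway: "A", Amount: w * 1000} }

type gatewayB struct{}

func (gatewayB) CalculateFee(w int) Fee { return Fee{Gateway: "B", Amount: w * 1200} }

// fanOut calls every gateway concurrently and combines the results into one slice,
// mirroring the router's "forward concurrently, then combine" step.
func fanOut(gws []GatewayInterface, weight int) []Fee {
	results := make([]Fee, len(gws))
	var wg sync.WaitGroup
	for i, gw := range gws {
		wg.Add(1)
		go func(i int, gw GatewayInterface) {
			defer wg.Done()
			results[i] = gw.CalculateFee(weight)
		}(i, gw)
	}
	wg.Wait()
	return results
}

func main() {
	fees := fanOut([]GatewayInterface{gatewayA{}, gatewayB{}}, 2)
	fmt.Println(fees)
}
```

<p>Writing each result to its own slice index keeps the fan-out free of locks on the result set, and the caller still gets one combined slice regardless of which gateways were routed to.</p>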
<p>The approach I chose above significantly reduces complexity, because we can remove the dynamic routing rules and move to static ones once rollout- or whitelist-style features are no longer needed. We can split off a new gateway if something more complex comes along; we cannot predict the future, but there is room spared for it. That is the point of decoupling and breaking the software down into several components. Again, keep everything small.</p>
<h2 id="heading-present-the-output">Present the Output</h2>
<p>After we receive the results, the structs and format are different from the legacy system's. First, we need to introduce a simple layer that translates the structs into the desired outcome, in this example the legacy response. Simple and easy.</p>
<p>Second, we need to separate the component handling shown in the first UI screenshot. Defining a <code>Decorator</code> interface to modify the response is all we need: we can apply discounts, price aggregation, labeling, and extra wording when the conditions apply. Let's see the diagram below.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1717945632352/4d5d02df-f545-4dc3-822e-c78c679e323d.png" alt class="image--center mx-auto" /></p>
<p><em>Presenter system design as an adapter or decorator.</em></p>
<p>In the presenter, we can have as many structs as we need to modify, manipulate, and map the data. In this case there are two: one responsible for maintaining the legacy flow, and a newer one that implements a proper decorator, the decorator pattern I mentioned above. This can easily make the program run faster in several cases. Let's say we have multiple APIs that serve different purposes.</p>
<ol>
<li><p>API to render the UI to user</p>
</li>
<li><p>API to calculate something and get the results to be used in another service</p>
</li>
<li><p>API to serve the original shipping fee without any component handling</p>
</li>
</ol>
<p>By leveraging the decorator pattern, we can have different strategies to implement the three different API above in a resource efficient manner.</p>
<ol>
<li><p>In the first API, since we need to render the available information to the real user, we need to use many components such as:</p>
<ol>
<li><p>Labeling;</p>
</li>
<li><p>Text manipulation (price aggregation);</p>
</li>
<li><p>Applying discounts;</p>
</li>
<li><p>Rendering UI components (e.g. set disabled and show errors); and</p>
</li>
<li><p>Additional package handling information.</p>
</li>
</ol>
</li>
<li><p>In the second API, the calculation involves components whose results are consumed by another service, such as:</p>
<ol>
<li><p>Applying discounts; and</p>
</li>
<li><p>Calculating insurances for package handling</p>
</li>
</ol>
</li>
</ol>
<p>    Of course, we don't need the other components, such as labeling, text manipulation, and UI rendering, in this API. Boom: compute resource savings.</p>
<ol start="3">
<li>The third API is the simplest one: just do the shipping fee calculation and it's done. We can create an adapter to produce the desired response.</li>
</ol>
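<p>The decorator idea can be sketched like this in Go. The <code>Decorator</code> interface name is from the article; the concrete decorators, the response type, and the per-API chains are illustrative.</p>

```go
package main

import "fmt"

// Response is a hypothetical shipping fee response.
type Response struct {
	Fee   int
	Label string
}

// Decorator modifies a response; each implementation has one responsibility.
type Decorator interface {
	Decorate(Response) Response
}

// discountDecorator applies a flat discount.
type discountDecorator struct{ amount int }

func (d discountDecorator) Decorate(r Response) Response {
	r.Fee -= d.amount
	return r
}

// labelDecorator adds a display label for the UI-facing API.
type labelDecorator struct{}

func (labelDecorator) Decorate(r Response) Response {
	r.Label = fmt.Sprintf("Rp%d", r.Fee)
	return r
}

// apply runs a per-API chain of decorators over the response.
func apply(r Response, ds []Decorator) Response {
	for _, d := range ds {
		r = d.Decorate(r)
	}
	return r
}

func main() {
	base := Response{Fee: 10000}
	// UI API: discount + label; calculation API: discount only; raw API: no decorators.
	fmt.Println(apply(base, []Decorator{discountDecorator{amount: 2000}, labelDecorator{}}))
	fmt.Println(apply(base, []Decorator{discountDecorator{amount: 2000}}))
	fmt.Println(apply(base, nil))
}
```

<p>Each API simply assembles its own chain, so the raw API pays for no decorator work at all.</p>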
<h1 id="heading-some-caveats-from-the-internet-seniors">Some Caveats from the Internet Seniors</h1>
<p>Some people caution against abstraction and prefer concrete types. That is okay, because the decision is theirs, and there are real pitfalls if abstraction is done wrong. I think either choice can lead to mistakes if the option is not chosen deliberately, whether you use abstraction or not. A decision is a decision; if it turns out to be a mistake, then so be it. Decisions are temporary: we can still change them and learn from them. Do the things that matter. That said, here are some arguments for why I keep breaking things down and using the polymorphic approach to this day.</p>
<h3 id="heading-using-interface-dynamic-dispatch-is-slower-than-concrete-static-dispatch">Using Interface (Dynamic Dispatch) is Slower than Concrete (Static Dispatch)</h3>
<p>There is plenty of input and proof that interface-type calls are slower than concrete-type calls. Using an interface means the compiler cannot resolve the call target at compile time; the target method is looked up at runtime. Because the concrete type behind an interface is only known at runtime, a concrete-type call, resolved at compile time, is expected to be faster.</p>
<p><a target="_blank" href="https://medium.com/@sanjayshiradwade/understanding-dynamic-vs-static-dispatch-in-go-a5319fcdddec">https://medium.com/@sanjayshiradwade/understanding-dynamic-vs-static-dispatch-in-go-a5319fcdddec</a></p>
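<p>The difference can be seen in a toy example (illustrative, not a rigorous benchmark): both calls return the same result, but the concrete call can be resolved, and even inlined, at compile time, while the interface call goes through the interface's method table at runtime.</p>

```go
package main

import "fmt"

// Adder is the behavior both call styles share.
type Adder interface {
	Add(a, b int) int
}

// IntAdder is the concrete implementation.
type IntAdder struct{}

func (IntAdder) Add(a, b int) int { return a + b }

func main() {
	concrete := IntAdder{}         // static dispatch: target known at compile time
	var dynamic Adder = IntAdder{} // dynamic dispatch: target resolved at runtime

	fmt.Println(concrete.Add(2, 3)) // same result either way,
	fmt.Println(dynamic.Add(2, 3))  // just slightly more call overhead via the interface
}
```

<p>The overhead is per call and measured in nanoseconds, which is why, as argued below, it rarely outweighs the decoupling benefit.</p>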
<p>To me personally, <strong>it doesn't matter, as long as the value of decoupling the software is much higher than the performance drawback</strong>. The performance drawback is actually not that big; the code still runs fast enough, as I will show at the end of the article. <strong>I am the one who prefers to organize code using interfaces and structs rather than splitting the app into microservices. That would add far bigger performance drawbacks: network failures, unknown errors, and other uncertainties.</strong> Service separation is really not worth it, because it adds unneeded complexity. If the engineering team and traffic are not at hyperscale, then to me it is not worth doing.</p>
<h3 id="heading-abstraction-can-make-the-code-worse">Abstraction can make the code worse</h3>
<p>There is so much content on YouTube, Stack Overflow, Twitter, and elsewhere from people who hate the use of abstraction. I also hate abstraction when it comes to inheritance, because it can make our code much uglier just to avoid repetition. This is well explained in this YouTube video.</p>
<p><a target="_blank" href="https://www.youtube.com/watch?v=rQlMtztiAoA">https://www.youtube.com/watch?v=rQlMtztiAoA</a></p>
<p>TLDR;</p>
<blockquote>
<p>Abstraction using inheritance is awful if not well managed. Instead, use the polymorphic approach (defining interface behavior), which can simplify, isolate, and decouple things more clearly. It is okay to have redundant and repeated code.</p>
</blockquote>
<h1 id="heading-summary">Summary</h1>
<p>That is the strategy I used to redesign and refactor the system I am in charge of. Thankfully, I implemented the design with incredible, clever colleagues who gave me advice along the way. The development was done in 2023, and the rollout to all endpoints will finish in 2024. This initiative has shown great metrics, and the effort has paid off. In the middle of the rollout, we can already see that <strong>our service can receive almost 290% more throughput, with 30% lower average latency and a roughly fivefold reduction in max latency, after implementing the system design changes.</strong></p>
<p>Simplifying things doesn't just make our code more organized. Because everything is clearer and more concise, we can manage the code and remove unused modules so they won't burden the system's performance. <strong>That's all. Keep everything small to manage simplicity.</strong></p>
]]></content:encoded></item><item><title><![CDATA[Installing TimescaleDB PostgreSQL Extension On Apple Silicon]]></title><description><![CDATA[Timescale DB is a PostgreSQL extension that suitable for the time-series, event storing and data-aggregation workloads. This enable the PostgresSQL weaknesses on heavy query data read like an aggregation such as sum, avg, min, and max. Time-series da...]]></description><link>https://rendyananta.dev/installing-timescaledb-postgresql-extension-on-apple-silicon</link><guid isPermaLink="true">https://rendyananta.dev/installing-timescaledb-postgresql-extension-on-apple-silicon</guid><category><![CDATA[database]]></category><category><![CDATA[PostgreSQL]]></category><category><![CDATA[timescaledb]]></category><dc:creator><![CDATA[Rendy Ananta]]></dc:creator><pubDate>Fri, 12 Apr 2024 23:12:23 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/GNyjCePVRs8/upload/484d9094ee060322afc31cb53cefc262.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><a target="_blank" href="https://www.timescale.com/">Timescale DB</a> is a PostgreSQL extension suitable for time-series, event-storing, and data-aggregation workloads. It addresses PostgreSQL's weaknesses in heavy read queries involving aggregations such as sum, avg, min, and max. A time-series database should be able to store all of the information under frequent data updates, for example sensor data over a certain time range, where the frequency can be every minute, hour, day, week, etc. In this article we will cover installing the TimescaleDB extension into PostgreSQL.</p>
<p>Keep in mind that <a target="_blank" href="https://docs.timescale.com/self-hosted/latest/install/installation-macos/">the official guide can be accessed here</a>. On a Mac with an Apple Silicon CPU, the official guide does not work as intended, and this article will show a workaround for installing the TimescaleDB extension.</p>
<h1 id="heading-installing-homebrew">Installing Homebrew</h1>
<p>If you have already installed <a target="_blank" href="https://brew.sh/">Homebrew</a>, you can skip this step. To install the latest Homebrew, run the script from their homepage.</p>
<pre><code class="lang-bash">/bin/bash -c <span class="hljs-string">"<span class="hljs-subst">$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)</span>"</span>
</code></pre>
<p>After installing Homebrew, you can proceed directly to the TimescaleDB installation.</p>
<h1 id="heading-installing-timescaledb">Installing TimescaleDB</h1>
<p>Installing TimescaleDB requires a custom third-party repository. To add the TimescaleDB third-party repository (a Tap) to Homebrew, use the command below.</p>
<pre><code class="lang-bash">brew tap timescale/tap
</code></pre>
<p>After adding the repository, we can continue to install TimescaleDB directly using the <code>brew install</code> command.</p>
<pre><code class="lang-bash">brew install timescaledb
</code></pre>
<p>Installing TimescaleDB will take some time, since it installs two packages: PostgreSQL and TimescaleDB.</p>
<h1 id="heading-setting-up-timescaledb">Setting Up TimescaleDB</h1>
<p>By default, TimescaleDB ships binaries that are used to configure the PostgreSQL extension installation. <strong>This is where the issue happens, in the configuration stage.</strong> To configure the TimescaleDB extension for a PostgreSQL app, we need to use the <code>timescaledb-tune</code> command.</p>
<pre><code class="lang-bash">timescaledb-tune --quiet --yes
</code></pre>
<p>Running the command above results in an error on Apple Silicon Macs. This is expected, because the binary shipped from the TimescaleDB tap is compiled for the amd64 instruction set, while Apple Silicon Macs expect arm64 instructions.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1712961106801/3937091e-1797-4376-bab4-f7418d0e4ca3.png" alt class="image--center mx-auto" /></p>
<p>To fix the error above, we need to recompile the binary for the arm64 architecture. Since TimescaleDB uses Go and Bash scripts to set up the database app instance, we can recompile the installation tooling to make it work on an arm64 machine. Both languages are compatible with the arm64 architecture.</p>
<h2 id="heading-installing-golang">Installing Golang</h2>
<p>Golang can be installed using this bash command.</p>
<pre><code class="lang-bash">brew install go
</code></pre>
<p>After Go is installed, we can proceed to install the <code>timescaledb-tune</code> application as an arm64-compatible binary. Thanks to Go's simple on-the-fly binary compilation from a repository, the install command is as follows.</p>
<pre><code class="lang-bash">go install github.com/timescale/timescaledb-tune/cmd/timescaledb-tune@main
</code></pre>
<p>The command above installs the <code>timescaledb-tune</code> app into your <code>$GOPATH/bin</code> directory. In our case, since <code>GOPATH</code> is left at its default, the expected binary directory is under <code>~/go</code>. We will use this recent from-source installation of <code>timescaledb-tune</code>.</p>
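<p>Optionally (my suggestion, not part of the official guide), you can add <code>$GOPATH/bin</code> to your <code>PATH</code> so the freshly built binary can be run from anywhere:</p>

```shell
# Add Go's binary directory to PATH for this shell session;
# append the same line to ~/.zshrc to make it permanent.
export PATH="$HOME/go/bin:$PATH"
```
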
<h2 id="heading-configuring-timescaledb">Configuring TimescaleDB</h2>
<p>Now we will see what <code>timescaledb-tune</code> brings to our existing PostgreSQL and TimescaleDB installation. Before running our newly installed <code>timescaledb-tune</code>, we need to go into the GOPATH binary directory, in our case <code>~/go/bin</code>. Use the commands below.</p>
<pre><code class="lang-bash"><span class="hljs-built_in">cd</span> ~/go/bin
./timescaledb-tune --quiet --yes
</code></pre>
<p>The complete installation output can be seen in the picture below.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1712961840856/564c0f06-1cf3-4904-88dd-22c0b6b4baf7.png" alt /></p>
<p>The result above indicates that our TimescaleDB and PostgreSQL are both installed and successfully configured.</p>
<h2 id="heading-copying-timescaledb-lib-into-the-postgresql-installation">Copying TimescaleDB lib into the PostgreSQL Installation</h2>
<p>PostgreSQL and TimescaleDB are now successfully installed, but the extension is not yet enabled in Postgres. To load it, we need to execute the command below. Remember that the <code>&lt;VERSION&gt;</code> tag should be replaced with the version of the TimescaleDB extension you have installed.</p>
<pre><code class="lang-bash"><span class="hljs-built_in">cd</span> /opt/homebrew/Cellar/timescaledb/&lt;VERSION&gt;/bin/
</code></pre>
<p>In our case, the <code>&lt;VERSION&gt;</code> tag would be <code>2.14.2</code>, and the command should be like this.</p>
<pre><code class="lang-bash"><span class="hljs-built_in">cd</span> /opt/homebrew/Cellar/timescaledb/2.14.2/bin/
</code></pre>
<p>After changing directory, we can proceed to move the extension and library files from TimescaleDB into the main PostgreSQL installation using the bundled bash script named <code>timescaledb_move.sh</code> in that directory.</p>
<pre><code class="lang-bash">./timescaledb_move.sh
</code></pre>
<p>Let's run the script above.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1712962450583/97982ed1-2149-44e3-a958-3f674c225e09.png" alt class="image--center mx-auto" /></p>
<p>The result shown in the picture above indicates that the script ran without any failures this time. At this point, we can create a PostgreSQL database with the TimescaleDB extension enabled.</p>
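<p>As a final check (my example, not from the official guide; the database name <code>tsdemo</code> is hypothetical), you can create a database and enable the extension in it:</p>

```shell
# Create a database and enable TimescaleDB in it
createdb tsdemo
psql -d tsdemo -c "CREATE EXTENSION IF NOT EXISTS timescaledb;"
# Verify the extension is listed
psql -d tsdemo -c "\dx timescaledb"
```
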
]]></content:encoded></item><item><title><![CDATA[Productivity Tips: Understanding and Make Use of Unix Standard Streams]]></title><description><![CDATA[Unix standard streams, a basic unix process that always tied with (stdin, stdout, and stderr). Probably you may use it in your software without even knowing it. Let's understand deeper and how you can make use of the standard streams and apply into y...]]></description><link>https://rendyananta.dev/productivity-tips-understand-and-make-use-of-unix-standard-streams</link><guid isPermaLink="true">https://rendyananta.dev/productivity-tips-understand-and-make-use-of-unix-standard-streams</guid><category><![CDATA[Linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[Bash]]></category><dc:creator><![CDATA[Rendy Ananta]]></dc:creator><pubDate>Sun, 24 Mar 2024 15:37:28 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/XU1L22IUKnc/upload/31a3bf49c467e68bff7d835f8175618c.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Unix standard streams (<code>stdin</code>, <code>stdout</code>, and <code>stderr</code>) are a fundamental part of every unix process. You probably use them in your software without even knowing it. Let's understand them more deeply and see how you can make use of the standard streams in your workflow to improve development productivity. This article won't cover the standard streams in detail; <strong>its main aim is to focus on using the three standard streams and the basic unix standard I/O (input/output) operators, like the pipeline and redirection.</strong></p>
<h2 id="heading-standard-streams">Standard Streams</h2>
<p>Starting with the standard streams: they are among the most basic fundamentals of Unix and Unix-like operating systems such as Linux, BSD, and macOS. By default, they consist of three main process files:</p>
<ol>
<li><p><code>stdin</code>, which stands for standard input,</p>
</li>
<li><p><code>stdout</code>, which stands for standard output, and</p>
</li>
<li><p><code>stderr</code>, which stands for standard error.</p>
</li>
</ol>
<p>Every application must use those three standard I/O streams to process input and output. As we know, an application may have many processes. Each process uses the three standard file descriptors of the Unix-like operating system: <code>stdin</code> as the input, <code>stdout</code> as the basic output, and <code>stderr</code> as the error output. To visualize what these look like on our computer, see the picture below.</p>
<h3 id="heading-file-descriptor-in-standard-streams">File Descriptor in standard streams</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1710342040967/0515e411-3988-41aa-b219-739cf9877297.png" alt class="image--center mx-auto" /></p>
<p>In the picture above, a process interacts with three numbers: 0 belongs to <code>stdin</code>, 1 to <code>stdout</code>, and 2 to <code>stderr</code>. Each number is the file descriptor opened in the process. Since Unix systems follow the philosophy of "everything is a file", the standard streams are accessible from <code>/dev/stdin</code> as the input stream file, <code>/dev/stdout</code> as the standard output stream, and <code>/dev/stderr</code> as the standard error output stream. Digging deeper into the Go standard library, the <code>Stdin</code>, <code>Stdout</code>, and <code>Stderr</code> files are predefined with the paths explained before.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1711188117045/5e13a5b2-b880-4196-a878-5190d1d6e7be.png" alt class="image--center mx-auto" /></p>
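<p>We can observe the "everything is a file" idea directly in the shell. As a small sketch using only standard tools:</p>

```shell
# fd 0 and /dev/stdin are the same stream, so reading the
# special file is equivalent to reading standard input
echo 'hello' | cat /dev/stdin   # prints "hello"
```

<p>Here <code>cat</code> opens <code>/dev/stdin</code> as an ordinary file, yet it receives exactly what was piped into file descriptor 0.</p>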
<p>Let's write a simple demo program in Go that uses <code>stdin</code>, <code>stdout</code>, and <code>stderr</code>.</p>
<pre><code class="lang-go"><span class="hljs-keyword">package</span> main

<span class="hljs-keyword">import</span> (
    <span class="hljs-string">"fmt"</span>
    <span class="hljs-string">"io"</span>
    <span class="hljs-string">"os"</span>
    <span class="hljs-string">"strings"</span>
)

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">main</span><span class="hljs-params">()</span></span> {
    stdin, err := io.ReadAll(os.Stdin)

    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        <span class="hljs-built_in">panic</span>(err)
    }
    str := <span class="hljs-keyword">string</span>(stdin)

    lowercased := strings.ToLower(str)

    fmt.Fprintf(os.Stdout, <span class="hljs-string">"Hi %s"</span>, strings.ReplaceAll(lowercased, <span class="hljs-string">"\n"</span>, <span class="hljs-string">", "</span>))
    fmt.Fprintf(os.Stdout, <span class="hljs-string">"\n"</span>)
    fmt.Fprintf(os.Stderr, <span class="hljs-string">"There is no error, this is just an example\n"</span>)
}
</code></pre>
<p>The simple go code above would do the following actions:</p>
<ol>
<li><p>Read all input given in the standard input.</p>
</li>
<li><p>Convert the input string to all lowercase and concatenate the lines if the input has more than one line.</p>
</li>
<li><p>Print the lowercased string into standard output.</p>
</li>
<li><p>Print something into standard error just for example.</p>
</li>
</ol>
<p>When running this program, we get the input from <code>os.Stdin</code> (in Go, it is the standard input stream file). The input is then buffered into the <code>stdin</code> variable, which is turned into a lowercased string later.</p>
<p>To print an output to the terminal, we prefer the <code>fmt.Fprintf()</code> function over <code>fmt.Println()</code>, even though under the hood the <code>fmt.Println</code> function also writes bytes to <code>os.Stdout</code> (the Go standard output file). The result is the same; the explicit form just simplifies our learning points about the basic Unix standard streams. To print to the standard error output, we also use the <code>fmt.Fprintf()</code> function, writing the error message to <code>os.Stderr</code>, which will be explained later in this post.</p>
<blockquote>
<p>In this post, to make sure that you learn something, screenshots are used for the commands in the examples. This is intended to encourage you to try and type the commands by yourself.</p>
</blockquote>
<h2 id="heading-standard-input-stdin">Standard Input (stdin)</h2>
<p>First things first, let's build the program above using the following command.</p>
<pre><code class="lang-bash">go build main.go
</code></pre>
<p>The output binary file should be named <code>main</code> and placed in the same directory. Before executing it, we will explain several standard input operators: the pipe operator, denoted with <code>|</code>, and the <code>&lt;</code> (less than) operator.</p>
<h3 id="heading-input-redirection-lt-operator">Input Redirection &lt; Operator</h3>
<p>Input redirection means that we can replace the input source with a file or any custom input we decide to use. In this example we use a separate file named <code>names.txt</code> that contains three names.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1711190175032/cd0bd38b-8e42-414d-8e52-1cbee997c346.png" alt class="image--center mx-auto" /></p>
<p>To redirect the standard input, we can use the <code>&lt;</code> (less than) operator. The general form of the command is <code>binary &lt; input</code>, as shown in the example below.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1711190269290/d4cbb25d-a778-4cdb-b260-b1465b2cae84.png" alt class="image--center mx-auto" /></p>
<p>The picture above shows that we can redirect the input using the <code>&lt;</code> operator, replacing the input data stream from <code>/dev/stdin</code> with <code>names.txt</code>.</p>
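<p>If you would rather start from a typed sketch than the screenshot: the names in <code>names.txt</code> below are illustrative, and <code>tr</code> stands in for our <code>main</code> binary as the lowercasing command.</p>

```shell
# create a sample input file (illustrative names)
printf 'Rendy\nGeorge\nAnna\n' > names.txt

# feed the file to a command's stdin with the < operator;
# tr lowercases its input, standing in for the main binary
tr '[:upper:]' '[:lower:]' < names.txt
```

<p>The command reads the file contents through file descriptor 0, exactly as if you had typed the names interactively.</p>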
<h3 id="heading-pipe-operator">Pipe | Operator</h3>
<p>The pipe is useful for combining two commands: it <strong>takes the output of the first command as the input of the second command</strong>. Let's take the example of the pipe operator usage in the terminal history below.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1711189604105/e2f6f3a9-d192-4538-8c26-ccd6f78e4e9f.png" alt class="image--center mx-auto" /></p>
<p>The commands used in the picture above are <code>echo</code>, which prints something to the standard output, and the <code>main</code> command (our previous simple application), which takes the echo output as its input.</p>
<blockquote>
<p>Hi <strong><mark>hai</mark></strong>,</p>
<p>There is no error, this is just an example</p>
</blockquote>
<p>In the first run, the output "Hai" is used by the <code>main</code> command as its standard input. Our <code>main</code> command lowercases the "Hai" input and prints it out as "hai", together with the additional output in the terminal.</p>
<p>In the second run, the output from the echo command is "Rendy", which is lowercased by the <code>main</code> command, giving the output below.</p>
<blockquote>
<p>Hi <strong><mark>rendy,</mark></strong></p>
<p>There is no error, this is just an example</p>
</blockquote>
<p>Let's take another look with a different command, <code>ls</code>, which lists the files in the current directory. Combining it with the pipe operator and the <code>main</code> command gives us similar output.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1711189992077/63fb0582-d104-431e-b940-a909fb8e8e12.png" alt class="image--center mx-auto" /></p>
<p>The <code>ls</code> command outputs the files inside the directory and is piped into the <code>main</code> command. All of the file names are nicely concatenated with commas.</p>
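<p>The same pipe behaviour can be sketched with standard tools alone, again using <code>tr</code> as a stand-in for the <code>main</code> binary:</p>

```shell
# the stdout of echo becomes the stdin of tr
echo 'Hai' | tr '[:upper:]' '[:lower:]'   # prints "hai"

# any command's output can be piped the same way
ls | head -n 3
```
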
<h2 id="heading-standard-output-stdout">Standard Output (stdout)</h2>
<p>Standard output has two operators: output redirection that replaces a file, and output redirection that appends to a file. The replace operator uses the <code>&gt;</code> (greater than) symbol, and the append operator uses <code>&gt;&gt;</code> (double greater than).</p>
<h3 id="heading-redirect-output-gt-operator">Redirect Output &gt; Operator</h3>
<p>As a first example, we will use a standard Unix command, <code>echo</code>, to redirect the output into a file.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1711272747972/a290056c-fdf7-4422-a1c5-1055e9f9a03d.png" alt class="image--center mx-auto" /></p>
<p>In the example above, we can see that the string "Hello" is written to the file named <code>hello.txt</code>. If we do the same thing with a different string, the contents of <code>hello.txt</code> will be replaced with the new string, as shown in the example below.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1711272944131/bc37fbf8-d3ba-4121-bde9-11cb30f63f2b.png" alt class="image--center mx-auto" /></p>
<p>As the example above shows, we did not intentionally delete the file, but the old <code>hello.txt</code> file that contained "Hello" got replaced by the new string "Rendy". If we don't want to replace the contents, we should use the append output redirection operator <code>&gt;&gt;</code>.</p>
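<p>A minimal sketch of the replacing behaviour, using an illustrative file name:</p>

```shell
echo 'Hello' > hello.txt   # creates hello.txt containing "Hello"
echo 'Rendy' > hello.txt   # > truncates first, so "Hello" is gone
cat hello.txt              # prints "Rendy"
```
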
<h3 id="heading-append-output-gtgt-operator">Append Output &gt;&gt; Operator</h3>
<p>In the second attempt, we will use the same <code>hello.txt</code> file and write another string to it, this time using the append <code>&gt;&gt;</code> operator.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1711273280711/d406fa87-5d43-462c-836f-6213e7c695dc.png" alt class="image--center mx-auto" /></p>
<p>In the example above, we can see that the <code>&gt;&gt;</code> operator did not replace the contents of <code>hello.txt</code>. Instead, it appended the "George" string below the existing file contents.</p>
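<p>The appending behaviour, sketched the same way:</p>

```shell
echo 'Rendy' > hello.txt     # start from a known content
echo 'George' >> hello.txt   # >> appends instead of truncating
cat hello.txt                # prints "Rendy" then "George"
```
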
<p>The standard output and standard input redirection operators can be used together with the <code>main</code> program we wrote before.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1711273604988/1727120a-eb4b-42a4-80ed-4c902946ce03.png" alt class="image--center mx-auto" /></p>
<p>With the command above, you can see that the <code>Hi rendy</code> string is not printed in the terminal; instead it is written to the <code>main.log</code> file. However, the example error line <mark>"There is no error, this is just an example"</mark> that is <strong>printed to the standard error is still shown in the terminal output, because we only redirected the standard output.</strong></p>
<h2 id="heading-standard-error-stderr">Standard Error (stderr)</h2>
<p>Basically, redirecting the standard error follows the same intuition as the standard output. Remember that the standard error uses <code>2</code> as its file descriptor number in <a target="_blank" href="https://rendyananta.my.id/productivity-tips-understanding-and-make-use-of-unix-standard-streams#heading-file-descriptor-in-standard-streams">this picture</a>? We only need to prefix each operator with that number. For instance, to redirect the standard error we can use <code>2&gt;</code>, and to append the standard error output to a file we can use <code>2&gt;&gt;</code>. Let's try that using the <code>main</code> command.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1711274364216/d41bcfd3-c2ff-4ec5-a53f-a2cdd55a5b30.png" alt class="image--center mx-auto" /></p>
<p>The picture above shows that we have successfully redirected the standard error to the <code>error.log</code> file. Using the append operator, we get a similar result, as the picture below shows.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1711274573713/adbe1acf-f718-49ab-a100-9abe76b1f18f.png" alt class="image--center mx-auto" /></p>
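<p>To keep the sketch self-contained without the <code>main</code> binary, a one-liner that writes to both streams can stand in for it:</p>

```shell
# sh -c '…' stands in for main: one line to stdout, one to stderr
sh -c 'echo normal output; echo example error >&2' 2> error.log
cat error.log   # prints "example error"; stdout still hit the terminal
```
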
<p>That is the basic usage of the standard input, standard output, and standard error operators in a bash system.</p>
<h1 id="heading-advanced-use-cases">Advanced Use Cases</h1>
<p>After understanding their function and the available operators of the standard streams, there are a bunch of other use-case combinations that may help improve our productivity. Utilizing the <code>main</code> program we wrote before, we can redirect the standard output to one file and the standard error to another file, as in the picture below.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1711275147823/db0943c6-e7bd-47c8-8209-5c9625fbf1a9.png" alt class="image--center mx-auto" /></p>
<p>In some cases, a bigger input is needed, usually from a file. This is possible by leveraging the <code>&lt;</code> operator, as in the example below.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1711275266255/06b68381-f6bb-4107-84d2-9faa8876523d.png" alt class="image--center mx-auto" /></p>
<p>Another use case needs to combine the standard error and standard output into a single file.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1711275340992/61a43b2d-90ee-436a-be1e-28cc5370cd46.png" alt class="image--center mx-auto" /></p>
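<p>Both cases can be sketched with the same stand-in command that writes to both streams:</p>

```shell
# stdout to one file, stderr to another
sh -c 'echo ok; echo oops >&2' > out.log 2> err.log

# combine both streams into one file: 2>&1 sends stderr to
# wherever stdout currently points (order matters)
sh -c 'echo ok; echo oops >&2' > all.log 2>&1
```
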
<p>Usually we have a bunch of files in the same directory (often logs). To query for some string, we can use the <code>cat</code> command to concatenate all of the files that match a pattern, and find the relevant information using the <code>grep</code> command.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1711275562042/bdce1b67-1ab1-4325-ab43-c4b6159ef049.png" alt class="image--center mx-auto" /></p>
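<p>As a sketch with illustrative file names and contents:</p>

```shell
# two sample "log" files
echo 'request handled' > app-1.log
echo 'connection error' > app-2.log

# concatenate everything that matches the glob, then filter
cat app-*.log | grep 'error'   # prints "connection error"
```
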
<p>That's all. Understanding the basic standard input and output streams of the Unix system is worth it. We can improve our productivity because we don't need to open an IDE, which may be heavy, or even impossible on a production server.</p>
]]></content:encoded></item><item><title><![CDATA[Simplify a Complex Networking in a Nested Virtualization Using LXD Fan Overlay]]></title><description><![CDATA[Network is a substantial component and key support in the virtualization environment. In the modern application deployment, we use virtualization to manage resources as well as scaling them. However, there are certain cases that we might need a neste...]]></description><link>https://rendyananta.dev/simplify-a-complex-networking-in-a-nested-virtualization-and-containerization-using-lxd-fan-overlay</link><guid isPermaLink="true">https://rendyananta.dev/simplify-a-complex-networking-in-a-nested-virtualization-and-containerization-using-lxd-fan-overlay</guid><category><![CDATA[lxd]]></category><category><![CDATA[networking]]></category><category><![CDATA[Docker]]></category><category><![CDATA[virtual machine]]></category><dc:creator><![CDATA[Rendy Ananta]]></dc:creator><pubDate>Sun, 10 Mar 2024 16:32:46 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1710128419610/04527769-54f7-4273-9597-38c04788c2cc.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1710128495788/bf85b98e-10d9-4d66-b1f7-07a48bd5bd2a.jpeg" alt class="image--center mx-auto" /></p>
<p>Network is a substantial component and key support in a virtualization environment. In modern application deployment, we use virtualization to manage resources as well as to scale them. However, there are certain cases where we might need nested virtualization and containerization to manage more complex app and service deployments, like running containers inside virtual machines that run on a single host or more.</p>
<p>In this article, we will cover the issues of nested networking and how to manage LXD networking in a nested virtualization setup. This approach is still applicable outside an LXD cluster, but this article covers the LXD cluster environment specifically.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1709403457100/2e81d219-f9c5-4c4d-8ea0-b7fdd268db5f.png" alt class="image--center mx-auto" /></p>
<p>The LXD virtualization architecture in the picture above shows that we have a single host spawning 4 virtual machines, which consist of:</p>
<ol>
<li><p><code>lxd-1</code> as the LXD cluster node</p>
</li>
<li><p><code>lxd-2</code> as the LXD cluster node</p>
</li>
<li><p><code>lxd-3</code> as the LXD cluster node</p>
</li>
<li><p><code>load-balancer</code> as the load balancer that needs to be accessible through the internet.</p>
</li>
</ol>
<p>They were made to manage the resources that can be used by each workload. In this setup, we will use three LXD cluster nodes: the <code>lxd-1</code>, <code>lxd-2</code>, and <code>lxd-3</code> VMs. An LXD cluster can contain several containers running apps and services inside, which need to be visible to the host's neighbor network and to the internet through the edge router. This is roughly what a basic data center architecture looks like.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1709403980897/6a746e6d-ddf8-4ea6-851b-20b1fccb9d44.png" alt class="image--center mx-auto" /></p>
<p>Let's restate the objective of this article: <strong>to simplify the networking for inter-container and virtual machine communications</strong>. However, forwarding the load balancer traffic to the internet through the edge router will not be covered in this post. Based on the objective, these are the deliverables that need to be done:</p>
<ol>
<li><p>Each container inside an LXD VM should be able to communicate with the others.</p>
</li>
<li><p>Each container inside an LXD VM should be able to communicate with containers in another LXD VM, e.g. container-3 can reach container-1.</p>
</li>
<li><p>Each running container should be reachable from the load balancer VM.</p>
</li>
<li><p>Each running container should be discoverable from the host machine.</p>
</li>
</ol>
<p>If you already have an LXD cluster installed, you can skip this step and go to <a target="_blank" href="https://rendyananta.my.id/simplify-a-complex-networking-in-a-nested-virtualization-and-containerization-using-lxd-fan-overlay#heading-the-problem">the Problem</a>.</p>
<h2 id="heading-preparation">Preparation</h2>
<p>In this post, we are using the Ubuntu 22.04 distribution, as it is a common operating system on servers. The test laptop used is a <strong>Lenovo Yoga Slim 7 Pro</strong>, with the detailed specifications shown here.</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Operating System</strong></td><td>Ubuntu 22.04.4 LTS x86_64</td></tr>
</thead>
<tbody>
<tr>
<td><strong>Kernel</strong></td><td>6.5.0-21-generic</td></tr>
<tr>
<td><strong>CPU</strong></td><td>AMD Ryzen 7 5800HS Creator Edition (16) @ 4.463GHz</td></tr>
<tr>
<td><strong>GPU</strong></td><td>NVIDIA GeForce MX450</td></tr>
<tr>
<td><strong>Memory</strong></td><td>16GB</td></tr>
</tbody>
</table>
</div><h3 id="heading-spinning-up-virtual-machines">Spinning Up Virtual Machines</h3>
<p>Ubuntu Multipass is used to create the <code>lxd-1</code>, <code>lxd-2</code>, <code>lxd-3</code>, and <code>load-balancer</code> VM instances. Each of the instances will have a tiny specification: <strong>1 CPU core, 1GB of RAM, and 10GB of storage</strong>. Of course, other VM tools on the market like VirtualBox, VMware, Proxmox, Vagrant, cloud VPSes, or even LXC containers can be used as well; it doesn't matter.</p>
<pre><code class="lang-bash">multipass launch --cpus 1 --memory 1G --disk 10G --name lxd-1
multipass launch --cpus 1 --memory 1G --disk 10G --name lxd-2
multipass launch --cpus 1 --memory 1G --disk 10G --name lxd-3
multipass launch --cpus 1 --memory 1G --disk 10G --name load-balancer
</code></pre>
<p>The commands above will spawn four virtual machines, each with 1 CPU core, 1 GB of memory, and 10 GB of storage. To ensure that all of the machines are already live, running the <code>multipass list</code> command will show all the instances on the host machine.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1709477768341/0ac2c95a-519a-4de9-be50-044b65c7ec01.png" alt class="image--center mx-auto" /></p>
<p>The terminal above shows that all of the instances are already live and ready to be used in the next step.</p>
<h3 id="heading-forming-a-lxd-cluster">Forming a LXD Cluster</h3>
<p>Starting from <code>lxd-1</code>, we can start configuring LXD with this command, using its interactive shell.</p>
<pre><code class="lang-bash">lxd init
</code></pre>
<p>Running the command above will show several questions, like the image shown here.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1709478815000/a1094909-fdae-4a6c-bfc0-dd8b402eab5c.png" alt class="image--center mx-auto" /></p>
<p>Take a careful look: we only need the question below to be answered with <strong>yes</strong>.</p>
<blockquote>
<p>Would you like to use LXD clustering? (yes/no) [default=no]:</p>
</blockquote>
<p>Afterwards, we can leave all other questions at the LXD default configuration. In its default cluster configuration, LXD will automatically create a new <strong>fan overlay network</strong>. This fan network will help us solve the problem later on.</p>
<p>Before initializing <code>lxd-2</code> and <code>lxd-3</code>, we need to generate the join tokens that will be used when configuring the rest of the nodes. The commands to generate the tokens are as follows.</p>
<pre><code class="lang-bash">lxc cluster add lxd-2
lxc cluster add lxd-3
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1709479250204/d66f10b5-e41b-4291-8d19-b0b97e23d3b6.png" alt class="image--center mx-auto" /></p>
<p>Those tokens are used as authentication: <code>lxd-1</code> (as the LXD cluster leader) permits the new cluster members to join the cluster.</p>
<p>A joining member uses the same command as the LXD cluster leader initialization, but mandates <code>sudo</code> because it will modify the system.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1709479451462/f048a319-5aca-49fe-92de-65d7be6f33a7.png" alt class="image--center mx-auto" /></p>
<p>While joining the cluster, there are interactive shell questions to be answered before leaving the rest at the default configuration.</p>
<blockquote>
<p>Would you like to use LXD clustering? (yes/no) [default=no]: <code>yes</code></p>
<p>What IP address or DNS name should be used to reach this server? [default=10.79.89.132]: <code>(leave it blank)</code></p>
<p>Are you joining an existing cluster? (yes/no) [default=no]: <code>yes</code></p>
<p>Do you have a join token? (yes/no/[token]) [default=no]: <code>yes</code></p>
<p>Please provide join token: <code>(enter the acquired token from lxd-1)</code></p>
<p>All existing data is lost when joining a cluster, continue? (yes/no) [default=no]: <code>yes</code></p>
</blockquote>
<p>Repeat the command on the <code>lxd-3</code> host. Finally, the LXD cluster is successfully set up. Check whether the LXD cluster was correctly formed using this command:</p>
<pre><code class="lang-bash">lxc cluster list
</code></pre>
<p>The command can be called anywhere, either on <code>lxd-1</code>, <code>lxd-2</code>, or <code>lxd-3</code>; the result should show that all of those hosts are in the same cluster, like the picture below.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1709479816145/8d1729e6-57b1-4107-ae34-7e0df0ccfb4f.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-the-problem">The Problem</h2>
<p>The problem that we are going to solve is in the networking part, so we may need to revisit the current networking topology after installing the cluster.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1709910096334/833bd37d-b638-4540-b505-dff9e122ee1e.png" alt class="image--center mx-auto" /></p>
<p>The image above shows that each created VM has two IP addresses, except for the <code>load-balancer</code>. That is because each LXD cluster node assigns a new interface for its <a target="_blank" href="https://wiki.ubuntu.com/FanNetworking">fan overlay network</a>.</p>
<blockquote>
<p>TL;DR of fan networking:</p>
<p>Fan overlay networking maps the overlay network (where the containers live) on top of the underlay network (the hosts' network). As long as we are connected to the overlay network, we can reach any address available on it.</p>
</blockquote>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1709910273918/77668808-0b51-4046-8b65-5650c8db8365.png" alt class="image--center mx-auto" /></p>
<p>The IP address assigned on the <code>lxd-1</code> instance comes from the <code>lxdfan0</code> interface, which uses the <code>240.0.0.0/8</code> netmask, just like on <code>lxd-2</code> and <code>lxd-3</code>. While initializing the cluster, LXD set up the networking inside the LXD cluster so that the containers can communicate with and discover the other containers within the cluster. This is the default network setting for an LXD cluster. If you are not using the LXD cluster setup, LXD will use the default bridge network instead.</p>
<p>Let's try to spawn new containers on any node in the cluster using the commands below.</p>
<pre><code class="lang-bash">lxc launch ubuntu:22.04 container-1
lxc launch ubuntu:22.04 container-2
lxc launch ubuntu:22.04 container-3
</code></pre>
<p>LXD is smart enough to spread the workloads, so the created containers will be balanced across the nodes based on which node is free.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1709911823908/4b0158e4-1ffa-4f88-925f-253413abb21a.png" alt class="image--center mx-auto" /></p>
<p>The three containers are up and running, evenly distributed across the LXD cluster nodes. Now, our network topology with its IP addresses is as follows.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1710065683079/aa4d02d4-e4d3-4d6d-a26b-7cc4332aad42.png" alt class="image--center mx-auto" /></p>
<p>With the given conditions, we have different networks for the containers and the load balancer. Here is the proof that the load balancer is not able to discover the containers inside the LXD cluster.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1709985573517/415725d5-8016-444f-bf93-f0a584279e66.png" alt class="image--center mx-auto" /></p>
<p>The ping test above shows that the load balancer does not know where the IP <code>240.228.0.217</code> is located. In order to make the container available to the load balancer, we need to route between the load balancer and the containers. The common way to do this is to append an entry to the routing table of the <code>load-balancer</code> instance pointing at one of the LXD cluster nodes, because at least one of the instances knows how to route to the respective container. <mark>The problem with a manually defined route is that we need to configure both the </mark> <code>load-balancer</code> <mark> and the </mark> <code>lxd-*</code> <mark> instances to be able to communicate in both directions.</mark></p>
<h2 id="heading-solve-using-the-lxd-cluster-overlay-network">Solve Using the LXD Cluster Overlay Network</h2>
<p>Ubuntu fan networking can easily solve these issues, because adding a new node to the network is fairly simple and straightforward. The <code>fanctl</code> tool can be installed via the <code>ubuntu-fan</code> package on the <code>load-balancer</code> instance.</p>
<pre><code class="lang-bash">sudo apt install ubuntu-fan
</code></pre>
<p>After the package has been installed, joining a fan network can be done using <code>fanctl</code> as shown in the picture below.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1709997759558/62b0264f-f816-46ff-8e96-ba03853fde50.png" alt class="image--center mx-auto" /></p>
<p>The <code>fanctl up</code> command registers the underlay <code>10.79.89.225/24</code> network with the overlay <code>240.0.0.0/8</code> network. The command creates two new interfaces: a bridge interface for the overlay network (<code>fan-240</code>) and its tunnel interface (<code>ftun0</code>) to route the packets.</p>
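<p>For reference, the command shown in the screenshot is, as a sketch (the underlay address <code>10.79.89.225/24</code> comes from this example environment; substitute your own underlay and overlay subnets):</p>

```shell
# map the host's underlay /24 into the 240.0.0.0/8 overlay
sudo fanctl up -u 10.79.89.225/24 -o 240.0.0.0/8
```
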
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1709998657674/a1b6967b-5a14-4e56-845a-c08f817fdfb2.png" alt class="image--center mx-auto" /></p>
<p>Let's try to ping the container again.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1709998964723/c46cec84-734b-4968-8d1f-74d16ca53743.png" alt class="image--center mx-auto" /></p>
<p>It works! Let's try to reverse the other way around.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1709999115193/2de13d02-caa5-465e-9b82-0d1849fcca71.png" alt class="image--center mx-auto" /></p>
<p><strong>Voila~ it works 🪄️!!</strong></p>
<p>Apparently, the command above does not persist the <code>fan-240</code> interface configuration. If the system still uses the legacy <code>/etc/network/interfaces</code> config model, we can use the <code>ifupdown</code> method as explained in the documentation example. However, Ubuntu 22.04 uses <code>systemd-networkd</code> to configure its host interfaces, and bringing up a <code>fan-*</code> interface requires executing the <code>fanctl</code> command, which by default is not supported by <code>systemd-networkd</code>. Another tool that can handle this is <code>networkd-dispatcher</code>. The complete docs can be accessed in the <a target="_blank" href="https://gitlab.com/craftyguy/networkd-dispatcher">owner repository</a>.</p>
<p>First, install the <code>networkd-dispatcher</code> package using the command below.</p>
<pre><code class="lang-bash">sudo apt install networkd-dispatcher
</code></pre>
<p>After completing the installation, by default there will be pre-created directories under <code>/etc/networkd-dispatcher</code>. If the directories don't exist, we can create them manually using this command.</p>
<pre><code class="lang-bash">sudo mkdir -p /etc/networkd-dispatcher/{routable,dormant,no-carrier,off,carrier,degraded,configuring,configured}.d
</code></pre>
<p>Next, create a hook script that will be executed when the <code>ens3</code> interface is up on the <code>load-balancer</code> instance. Create the script in the <code>routable.d</code> directory and mark it as executable. The commands go as follows.</p>
<pre><code class="lang-bash">sudo touch /etc/networkd-dispatcher/routable.d/fan.sh
sudo chmod +x /etc/networkd-dispatcher/routable.d/fan.sh
</code></pre>
<p>Let's modify the content of <code>fan.sh</code> using your favorite text editor. Here, we will use vim.</p>
<pre><code class="lang-bash">sudo vim /etc/networkd-dispatcher/routable.d/fan.sh
</code></pre>
<p>The bash script contents can be seen here. Obviously, you need to modify the interface name, as my machine uses <code>ens3</code>, which can be different on yours. The <code>fanctl</code> command should be customized as well with the values of your underlay and overlay networks.</p>
<pre><code class="lang-bash"><span class="hljs-meta">#!/bin/bash</span>

<span class="hljs-keyword">if</span> [ <span class="hljs-string">"<span class="hljs-variable">$IFACE</span>"</span> != <span class="hljs-string">"ens3"</span> ];
<span class="hljs-keyword">then</span>
    <span class="hljs-built_in">echo</span> <span class="hljs-string">"<span class="hljs-variable">$0</span>: not the interface target, ignoring"</span>
    <span class="hljs-built_in">exit</span> 0
<span class="hljs-keyword">fi</span>

<span class="hljs-keyword">case</span> <span class="hljs-string">"<span class="hljs-variable">$STATE</span>"</span> <span class="hljs-keyword">in</span>
    routable)
        <span class="hljs-built_in">echo</span> <span class="hljs-string">"<span class="hljs-variable">$0</span>: configuring fan network for interface: <span class="hljs-variable">$IFACE</span>"</span>
        fanctl up -u 10.79.89.225/24 -o 240.0.0.0/8
        ;;
    *)
        <span class="hljs-built_in">echo</span> <span class="hljs-string">"<span class="hljs-variable">$0</span>: nothing to do with <span class="hljs-variable">$IFACE</span> for \`<span class="hljs-variable">$STATE</span>'"</span>
        ;;
<span class="hljs-keyword">esac</span>
</code></pre>
<p>Restart the instance, and the <code>fan-240</code> interface should be created once <code>ens3</code> has been configured. To make the host available in the overlay network as well, repeat the same steps on the <code>load-balancer</code> instance, restart it, and check that the fan interface comes up alongside <code>ens3</code>.</p>
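<p>As a quick sanity check (a sketch that assumes the values used in this walkthrough: <code>ens3</code> as the underlay interface and <code>240.0.0.0/8</code> as the overlay), verify that the fan bridge exists and got an address from the overlay range:</p>
<pre><code class="lang-bash"># the fan-240 bridge should hold an address inside 240.0.0.0/8
ip addr show fan-240

# list the fan mappings currently configured
fanctl show
</code></pre>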
<p>This is a checkpoint for what we have done so far. With the overlay network set up, we can compare against the complexity problem that existed before.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1710065683079/aa4d02d4-e4d3-4d6d-a26b-7cc4332aad42.png" alt class="image--center mx-auto" /></p>
<p>The picture above shows that both the host and the load balancer do not know where to find a specific container inside the LXD cluster. Registering a new routing table entry seems to solve the problem, but the direction is one-way only, not a round trip. For instance, after registering a route on the <code>load-balancer</code> to a container inside the LXD cluster through <code>lxd-3</code>, <code>lxd-2</code>, or <code>lxd-1</code>, <code>container-1</code> still won't be able to reach the load balancer. It becomes even more complex and harder to maintain if we want to add more nodes to the LXD cluster, or introduce a new workload model like a docker container.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1710085734962/93b0941e-8ac9-4e90-a934-f3d0f9bf3986.png" alt class="image--center mx-auto" /></p>
<p>The picture above shows that the network topology has become flat, significantly reducing the network complexity. Each instance can communicate with the others easily; even the host can reach the LXD containers and vice versa. Now, using the current network topology, we will try to add the docker engine to the overlay network.</p>
<h2 id="heading-connecting-docker-to-the-overlay-network">Connecting Docker To The Overlay Network</h2>
<p>From the previous solution we know that fan networking creates a new bridge interface. With the fan network interface attached, we can utilize it even further, such as by adding docker to the overlay network.</p>
<p>We will install docker on the load balancer. We won't cover the installation process; you can follow the official docker installation instructions <a target="_blank" href="https://docs.docker.com/engine/install/ubuntu/#install-using-the-repository">here</a>. After the installation succeeds, launch a new nginx container named <code>docker-container-1</code>.</p>
<pre><code class="lang-bash">sudo docker container run -d --name docker-container-1 nginx
</code></pre>
<p>By default, <code>docker-container-1</code> will use the default bridge network, and no instance, not even the (baremetal) host, can reach it except the <code>load-balancer</code> VM acting as the docker host. To utilize fan networking, we can create a new bridge network using the <code>fan-240</code> interface. In the command below, we set the subnet to the fan network subnet, the IP range to this host's slice of it as a /24, and the gateway to the <code>load-balancer</code>'s fan IP address. Finally, we exclude the addresses that must not be allocated, such as <code>240.225.0.0</code>.</p>
<pre><code class="lang-bash">sudo docker network create --driver bridge \
                      -o <span class="hljs-string">"com.docker.network.bridge.name=fan-240"</span> \
                      -o <span class="hljs-string">"com.docker.network.driver.mtu=1450"</span> \
                      --subnet=240.0.0.0/8 \
                      --ip-range=240.225.0.0/24 \
                      --gateway=240.225.0.1 \
                      --aux-address=<span class="hljs-string">"net=240.225.0.0"</span> \
                      fanbr0
</code></pre>
<p>After running the command above, we can directly attach the new bridge network to the docker container using the command below.</p>
<pre><code class="lang-bash">sudo docker network connect fanbr0 docker-container-1
</code></pre>
<p>After running those commands, every member of the overlay network can communicate with the new nginx container inside the <code>load-balancer</code> VM. The IP address of <code>docker-container-1</code> can be queried using the <code>docker inspect</code> command, filtering the result with <code>jq</code> (installation required).</p>
<pre><code class="lang-bash">sudo docker inspect docker-container-1 | jq <span class="hljs-string">'.[0] | .NetworkSettings | .Networks | .fanbr0 | .IPAddress'</span>
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1710087192323/0aec27c5-3d48-41fa-89b9-269f9de2c9ab.png" alt class="image--center mx-auto" /></p>
<p>As the image above shows, the IP address of the nginx container is <code>240.225.0.2</code>. Here are the results of running ping and curl from <code>container-2</code>, inside the LXD cluster, to <code>docker-container-1</code> inside the docker host <code>load-balancer</code>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1710087343348/6994bccc-bc82-4dd6-8ced-56d3d1406ef5.png" alt class="image--right mx-auto mr-0" /></p>
<p>That's all: nice and clean results. Now, let's draw the final network topology again, after applying the overlay network and connecting the docker network.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1710087603500/1d59919e-ef35-45cd-8920-6e3d597b8255.png" alt class="image--center mx-auto" /></p>
<p>As the picture above shows, the new <code>docker-container-1</code> simply extends the flat structure we already built. Anything can connect to anything, as long as it knows the destination IP address inside the overlay network.</p>
<h2 id="heading-wrap-up">Wrap Up</h2>
<p>Network management is one of the key pillars of cloud, virtualization, and containerization environments, and it plays a big role. Misconfiguration and mismanagement can add complexity in the long run. The test environment consisted of three virtual machine nodes forming an LXD cluster, plus one virtual machine running another containerization workload model, docker. The test results demonstrate that using an overlay network can significantly reduce configuration management complexity in a nested and complex virtualization and containerization environment.</p>
]]></content:encoded></item><item><title><![CDATA[Linux Cap: Elevating Privileges Without Becoming Root]]></title><description><![CDATA[So yesterday I tried building an application that has to listen on a restricted port. Restricted ports are the ones usually in the range 1 - 1024. Above that range a regular user can listen on a port; say you want to run an application on port 8000, there's no ...]]></description><link>https://rendyananta.dev/linux-cap-cara-elevasi-privilege-tanpa-menjadi-root</link><guid isPermaLink="true">https://rendyananta.dev/linux-cap-cara-elevasi-privilege-tanpa-menjadi-root</guid><dc:creator><![CDATA[Rendy Ananta]]></dc:creator><pubDate>Sat, 29 Jul 2023 13:54:32 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/4Mw7nkQDByk/upload/9080f5adbe032065d98ad46304f3876b.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>So yesterday I tried building an application that has to <em>listen</em> on a <em>restricted port</em>. <em>Restricted ports</em> are the ones usually in the range 1 - 1024. Above that range a regular <em>user</em> can <em>listen on a port</em>; running an application on port 8000, for example, is no problem. As an example, I wrote a simple program to make this concrete.</p>
<pre><code class="lang-go"><span class="hljs-keyword">package</span> main

<span class="hljs-keyword">import</span> (
    <span class="hljs-string">"log"</span>
    <span class="hljs-string">"net/http"</span>
)

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">main</span><span class="hljs-params">()</span></span> {
    handler := http.NewServeMux()

    handler.HandleFunc(<span class="hljs-string">"/"</span>, HandleHealthCheck)

    log.Printf(<span class="hljs-string">"listening app in localhost:80"</span>)
    <span class="hljs-keyword">if</span> err := http.ListenAndServe(<span class="hljs-string">"localhost:80"</span>, handler); err != <span class="hljs-literal">nil</span> {
        log.Panic(err)
    }
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">HandleHealthCheck</span><span class="hljs-params">(rw http.ResponseWriter, r *http.Request)</span></span> {
    rw.Write([]<span class="hljs-keyword">byte</span>(<span class="hljs-string">"service is healthy"</span>))
}
</code></pre>
<p>The code snippet above is a simple web application that runs on port 80. Port 80 is one of the <em>restricted</em> ports, so it needs a <em>privilege</em> to run. One way to run this application is to become root first.</p>
<p>To run it, I also wrote a bash script that <em>builds</em> and runs the resulting <em>binary</em>.</p>
<pre><code class="lang-sh"><span class="hljs-meta">#!/bin/bash</span>
go build -o main app.go

<span class="hljs-keyword">if</span> [[ <span class="hljs-variable">$1</span> == <span class="hljs-string">'--run'</span> ]]; <span class="hljs-keyword">then</span>
  ./main
<span class="hljs-keyword">fi</span>
</code></pre>
<p>To run it, just call the <em>script</em> like this</p>
<pre><code class="lang-plaintext">➜  example-linux-cap$ sh build.sh --run
</code></pre>
<p>The output will look roughly like the following.</p>
<pre><code class="lang-plaintext">2023/07/29 22:10:23 listening app in localhost:80
2023/07/29 22:10:23 listen tcp 127.0.0.1:80: bind: permission denied
panic: listen tcp 127.0.0.1:80: bind: permission denied

goroutine 1 [running]:
log.Panic({0xc0000bff50?, 0x67e1bb?, 0x0?})
        /usr/lib/go/src/log/log.go:384 +0x65
main.main()
        /home/rendy/Workspace/private/example-linux-cap/app.go:15 +0x105
</code></pre>
<h2 id="heading-menjadi-root">Becoming Root 🥚</h2>
<p>The easiest way is to become root: running the app just needs a <code>sudo</code> prefix. To start, let's use the naive way of running the application. Why naive? Because if an attacker finds a bug in the application, they can exploit it with full privileges. But that's fine, since this is part of learning; later we will look at a better way.</p>
<p>First of all, switch the <em>user</em> to root.</p>
<pre><code class="lang-plaintext">➜  example-linux-cap$ sudo su
➜  example-linux-cap sudo su
[sudo] password for rendy: 
[root@canvas-mobile example-linux-cap]# whoami
root
</code></pre>
<p>After becoming root, let's try running the application again.</p>
<pre><code class="lang-plaintext">[root@canvas-mobile example-linux-cap]# sh build.sh --run
2023/07/29 22:21:38 listening app in localhost:80
</code></pre>
<p>The application runs successfully. To make sure, you can curl localhost:80. The result will look like the following.</p>
<pre><code class="lang-plaintext">➜  example-linux-cap curl localhost:80
service is healthy%
</code></pre>
<p>The same result can also be achieved with sudo, without switching the user to root, like this.</p>
<pre><code class="lang-plaintext">[root@canvas-mobile example-linux-cap]$ sudo sh build.sh --run
2023/07/29 22:21:38 listening app in localhost:80
</code></pre>
<p>To verify, use the same curl command as before.</p>
<pre><code class="lang-plaintext">➜  example-linux-cap curl localhost:80
service is healthy%
</code></pre>
<p>This approach works, but it is strongly discouraged for <em>server</em> security: once a single <em>exploit</em> attack lands, <strong>the attacker (<em>hacker</em>) can take over full access to the <em>server</em></strong>. The impact is fatal.</p>
<h2 id="heading-menggunakan-linux-cap">Using Linux Cap 🎩</h2>
<p>Now, on to the main course: Linux <em>capabilities</em>. Capabilities were actually released long ago, in kernel version 2.2, but the documentation is fairly sparse. Their use is also quite low level, so they are rarely used by <em>end-users</em>. Still, this feature is widely used to run <em>rootless</em> applications, such as <a target="_blank" href="https://podman.io/"><strong>podman</strong></a>, a container management <em>tool</em> that can run <em>rootless</em>.</p>
<p>Back to <em>linux capabilities</em>: according to the linux man-pages, <em>capabilities</em> are broken down into smaller pieces matching the action you want to perform. For the full list, see the <a target="_blank" href="https://man7.org/linux/man-pages/man7/capabilities.7.html">documentation page</a>; as an example, I've attached a short excerpt below.</p>
<pre><code class="lang-plaintext">CAP_NET_BIND_SERVICE
    Bind a socket to Internet domain privileged ports (port
    numbers less than 1024).

CAP_NET_BROADCAST
    (Unused)  Make socket broadcasts, and listen to
    multicasts.

CAP_NET_RAW
    •  Use RAW and PACKET sockets;
    •  bind to any address for transparent proxying.
</code></pre>
<p>In this case, what is needed to <em>listen</em> on a <em>restricted port</em> is the <code>CAP_NET_BIND_SERVICE</code> <em>capability</em>. To grant capabilities to a <em>file</em> or <em>binary</em>, you also need to know the <em>capability</em> set it will be attached to. Quoting the linux man page, there are 3 available <em>capability</em> sets.</p>
<pre><code class="lang-plaintext">Permitted (formerly known as forced):
    These capabilities are automatically permitted to the
    thread, regardless of the thread's inheritable
    capabilities.

Inheritable (formerly known as allowed):
    This set is ANDed with the thread's inheritable set to
    determine which inheritable capabilities are enabled in
    the permitted set of the thread after the execve(2).

Effective:
    This is not a set, but rather just a single bit.  If this
    bit is set, then during an execve(2) all of the new
    permitted capabilities for the thread are also raised in
    the effective set.  If this bit is not set, then after an
    execve(2), none of the new permitted capabilities is in
    the new effective set.
</code></pre>
<p>Based on that explanation, we cannot rely on the <code>Inheritable</code> set, because it only takes effect through inheritance: the <em>parent process thread</em> that launches our binary would not yet have the <code>CAP_NET_BIND_SERVICE</code> <em>capability</em>. So we need to add the <code>CAP_NET_BIND_SERVICE</code> capability with the <strong><em>Permitted</em></strong> set (to make sure) and the <strong><em>Effective</em></strong> set. Knowing that, the <em>build script</em> we made earlier needs an addition. To grant a capability on a file, there is the <code>setcap</code> program.</p>
<p>Using it requires at least 2 arguments: first the <em>capability</em> in string form, then the target file.</p>
<pre><code class="lang-plaintext"># setcap &lt;capabilities&gt; &lt;target-file&gt;
</code></pre>
<p>According to the documentation, the <em>capabilities</em> string takes the form <code>&lt;capability&gt;=type</code>. The <em>capability</em> we need is <code>CAP_NET_BIND_SERVICE</code>, and the set is abbreviated: <code>e</code> for <em>effective</em> and <code>p</code> for <em>permitted</em>. So the string becomes <code>cap_net_bind_service=ep</code>. With that, the <em>build script</em> from before is changed to the following.</p>
<pre><code class="lang-bash"><span class="hljs-meta">#!/bin/bash</span>
go build -o main app.go
sudo <span class="hljs-built_in">setcap</span> <span class="hljs-string">'cap_net_bind_service=ep'</span> main

<span class="hljs-keyword">if</span> [[ <span class="hljs-variable">$1</span> == <span class="hljs-string">'--run'</span> ]]; <span class="hljs-keyword">then</span>
  ./main
<span class="hljs-keyword">fi</span>
</code></pre>
<h3 id="heading-the-moment-of-truth">The Moment of Truth 🥁</h3>
<p>Now it's time to prove whether linux <em>capabilities</em> actually work.</p>
<pre><code class="lang-plaintext">➜  example-linux-cap sh build.sh 
➜  example-linux-cap ./main 
2023/07/29 22:55:19 listening app in localhost:80
</code></pre>
<p>The terminal output above shows that, without the root <em>user</em>, running a web application on a <em>restricted port</em> can still be achieved using linux <em>capabilities</em>.</p>
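<p>To double-check which capabilities are attached to the binary, you can inspect it with <code>getcap</code>, which ships with the same <code>libcap</code> tooling as <code>setcap</code>. A small sketch:</p>
<pre><code class="lang-bash"># print the file capabilities attached to the binary
getcap ./main

# the output should list cap_net_bind_service for ./main,
# with the e (effective) and p (permitted) flags set
</code></pre>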
<p>So what can we conclude from this experiment? Not everything has to run as root. Linux <em>capabilities</em> offer a convenient way to manage and narrow down <em>permissions</em> in finer detail.</p>
<p>To use linux caps safely, the expectation is to grant <em>capabilities</em> to a <em>binary</em> at <em>installation</em> time, because that is the moment root access is needed to add the <em>capability</em>. Beyond that, the linux kernel performs the <em>capability</em> check on the <em>binary</em>. The scope is therefore really small: at the level of a <em>capability</em> on a <em>thread process</em>, not at the <em>user</em> level. With such limited <em>permissions</em> set on a <em>binary</em>, we can shrink the chances of being hacked. The cool term for this is <strong><em>Hardening</em></strong> -- 🚧.</p>
]]></content:encoded></item><item><title><![CDATA[Turning a Broken Laptop into a Server]]></title><description><![CDATA[During my fourth-semester break, I was decluttering a lot of stuff at home. Slowly going minimalist, throwing away things that hadn't been used in a long time, cleaning up their spots, until I finally found this laptop that hadn't been used for about 3 yea...]]></description><link>https://rendyananta.dev/utilisasi-laptop-rusak-jadi-server</link><guid isPermaLink="true">https://rendyananta.dev/utilisasi-laptop-rusak-jadi-server</guid><category><![CDATA[networking]]></category><dc:creator><![CDATA[Rendy Ananta]]></dc:creator><pubDate>Sun, 08 Mar 2020 13:54:32 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/0TCUOrQ00gg/upload/0f01c8e9dc788bfd5ce6028a7fa224b4.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>During my fourth-semester break, I was <em>decluttering</em> a lot of stuff at home. Slowly going minimalist, throwing away things that hadn't been used in a long time and cleaning up their spots, until I finally found this laptop, unused for about 3 years. That's when I thought of a way to put it to work: just install Linux on it, then build a simple home network with this laptop as the server.</p>
<p>This laptop has a broken screen and keyboard. Selling it would be a shame, because it carries many memories and did me great service 😚; among other things, this laptop got me excited about <em>tinkering</em> back when I was still in junior high.</p>
<h2 id="heading-fitur-server-impian">Dream Server Features</h2>
<h3 id="heading-berbagi-koneksi-internet">Internet Connection Sharing</h3>
<ul>
<li>Sharing the internet connection. I usually <em>share</em> internet via my <em>smartphone</em>'s hotspot, which makes the phone hot and drains its battery quickly; this could be an effective solution.</li>
</ul>
<h3 id="heading-web-server">Web Server</h3>
<ul>
<li>When an IT (or IT-related) project comes along, this server can be a place to host my own websites.</li>
</ul>
<h3 id="heading-dns-server">DNS Server</h3>
<ul>
<li><p>So I can open reddit and netflix, which are blocked by the ISP. <em>hehe</em>.</p>
</li>
<li><p>After hosting a website on the <em>local network</em>, it can be assigned a specific domain so I don't have to memorize its IP address when I want to access it.</p>
</li>
<li><p>Developing android apps gets easier when the server <em>hosts</em> the API for <em>development</em>, since there's no need to change the <code>/etc/hosts</code> file on the android device to resolve local domains.</p>
</li>
</ul>
<h3 id="heading-cloud-server">Cloud Server</h3>
<ul>
<li>A Google Drive x Google Photos / iCloud x iCloud Photos replacement. Personal data becomes safer, and it's free.</li>
</ul>
<h3 id="heading-print-server">Print Server</h3>
<ul>
<li>When I want to <em>print</em>, I can do it from anywhere as long as I'm connected to the home network. No more plugging in the printer's USB cable.</li>
</ul>
<h3 id="heading-deployment-sandbox">Deployment Sandbox</h3>
<ul>
<li>Will come in handy if a kubernetes cluster and docker get installed on the server later, for learning DevOps.</li>
</ul>
<p>See how many uses it has! Back to the original topic: this laptop will be used as a <em>home server</em> with the features I just listed.</p>
<h2 id="heading-perlengkapan-dan-bahan">Tools and Materials</h2>
<h3 id="heading-pc-sebagai-server">A PC as the Server</h3>
<p>For the PC, I used my broken laptop. The specs are ordinary, though they were pretty decent back in the day. Here they are:</p>
<blockquote>
<p>Core i3 Sandy Bridge 2.2 GHz dual core, 4GB RAM, 1TB HDD.</p>
<p>This server can be replaced with a Raspberry Pi or a desktop PC. Up to you.</p>
</blockquote>
<p>I actually wanted to use a Raspberry Pi, because it's small and power-efficient; doing the math, its electricity consumption would be about 4 times more efficient (15 watts) than the laptop (65 watts). But if you use a Raspberry Pi, don't install a kubernetes cluster on it; I still don't know whether it can cope. 😆</p>
<h3 id="heading-router">Router</h3>
<p>For the router, I used the cheapest router that ships with RouterOS, the MikroTik HAP Lite 2, because my needs are, in my view, very light, so I don't need many features.</p>
<h3 id="heading-koneksi-internet">Internet Connection</h3>
<p>An old 3g modem left idle since junior high is enough. Why 3g? Well, because 3g from this ISP is already reasonably fast. It will be <em>upgraded</em> later to a 4g modem or wired internet; the important thing is that the goals above get achieved first. Any ISP works if you use a modem; I use Telkomsel By.U.</p>
<h3 id="heading-kabel-lan">LAN Cable</h3>
<p>Just as important: don't forget the LAN cable! Cat 5 / Cat 5e, either is fine. But if your internet speed is above 100Mbps, you need a Cat 5e or Cat 6 LAN cable. For devices that have no ethernet port, buy a USB to Ethernet converter as well.</p>
<h3 id="heading-printer">Printer</h3>
<p>This one is not essential if you don't want a print server. I do have a printer, but it's broken, and I don't know yet whether I'll repair it or buy a new one 😩. This feature will be postponed, who knows until when.</p>
<h2 id="heading-total-pengeluaran">Total Spending</h2>
<div class="hn-table">
<table>
<thead>
<tr>
<td>#</td><td>Item</td><td>Price</td><td>Qty</td><td>Total</td></tr>
</thead>
<tbody>
<tr>
<td>1</td><td>Laptop</td><td>Rp. 0</td><td>1</td><td>Rp. 0</td></tr>
<tr>
<td>2</td><td>Router Mikrotik RB941D</td><td>Rp. 304.000</td><td>1</td><td>Rp. 304.000</td></tr>
<tr>
<td>3</td><td>Modem 3G Huawei E173</td><td>Rp. 0</td><td>1</td><td>Rp. 0</td></tr>
<tr>
<td>4</td><td>Lan Straight Cat 5e 1.5m</td><td>Rp. 4.000</td><td>2</td><td>Rp. 8.000</td></tr>
<tr>
<td>5</td><td>USB to LAN Converter</td><td>Rp. 22.000</td><td>1</td><td>Rp. 22.000</td></tr>
<tr>
<td></td><td><strong>Grand Total</strong></td><td></td><td></td><td><strong>Rp. 342.000</strong></td></tr>
</tbody>
</table>
</div><p><em>The total is only Rp. 342.000 if you make use of second-hand stuff like me, haha.</em></p>
<h2 id="heading-topologi">Topology</h2>
<p>Before executing, it's best to plan first whether the idea is <em>feasible</em> and possible to realize. The topology I'm going to use looks like this.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708873528025/5b883ad7-027d-4aac-9899-8dc4aa85eb1d.jpeg" alt class="image--center mx-auto" /></p>
<blockquote>
<p>So it will be built like this: since the connection uses a modem, the server will act as the gateway, because the modem dial-up is done on the server.</p>
<p>Then, the internet connection from the server is forwarded to the router and shared with the other hosts over wireless or ethernet.</p>
</blockquote>
<h2 id="heading-perangkat-lunak">Software</h2>
<ol>
<li><p>Server OS: <a target="_blank" href="https://ubuntu.com/download/server">Ubuntu Server 18.04 LTS</a></p>
</li>
<li><p>Firewall protection using <a target="_blank" href="https://help.ubuntu.com/community/UFW"><code>ufw</code></a></p>
</li>
<li><p>Modem dial-up using <a target="_blank" href="https://github.com/wlach/wvdial"><code>wvdial</code></a></p>
</li>
<li><p>Masquerading, to share the internet connection out through the ethernet port, using <a target="_blank" href="https://help.ubuntu.com/community/IptablesHowTo"><code>iptables</code></a></p>
</li>
<li><p>DNS server using <a target="_blank" href="https://github.com/DNSCrypt/dnscrypt-proxy"><code>dnscrypt-proxy</code></a></p>
</li>
<li><p>Web server using <a target="_blank" href="https://nginx.org"><code>nginx</code></a></p>
</li>
<li><p>Cloud server using <a target="_blank" href="https://nextcloud.com">Nextcloud</a></p>
</li>
<li><p>Containerization using <a target="_blank" href="https://docker.com">Docker</a></p>
</li>
<li><p>Router OS: <a target="_blank" href="https://mikrotik.com/">MikroTIK</a></p>
</li>
<li><p><a target="_blank" href="https://mikrotik.com/download/archive">Winbox</a> for configuring the MikroTIK</p>
</li>
<li><p>To run the modem dial-up in the background, and automatically, use <a target="_blank" href="https://supervisord.org">Supervisor</a></p>
</li>
</ol>
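<p>The masquerading item above can be sketched with a few <code>iptables</code> rules. This is only an illustration: <code>ppp0</code> (the modem dial-up interface) and <code>eth0</code> (the LAN-facing interface) are assumed names, so adjust them to your machine.</p>
<pre><code class="lang-bash"># enable packet forwarding between interfaces
sudo sysctl -w net.ipv4.ip_forward=1

# rewrite the source address of LAN traffic leaving through the modem
sudo iptables -t nat -A POSTROUTING -o ppp0 -j MASQUERADE

# forward LAN traffic out to the modem, and let replies back in
sudo iptables -A FORWARD -i eth0 -o ppp0 -j ACCEPT
sudo iptables -A FORWARD -i ppp0 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
</code></pre>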
<h2 id="heading-ilustrasi">Illustration</h2>
<p>After about 2 weeks of use, here are some problems I frequently ran into:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708873535652/9b5e4d6c-6bf0-452f-b4a9-0a4d661e0ea2.jpeg" alt class="image--center mx-auto" /></p>
<ol>
<li>The connection becomes unusable when any one client is downloading, especially when the server itself is updating. <em>The other clients get no bandwidth and effectively seem to have no internet connection at all</em>. I still can't tell whether it's the modem or some configuration that isn't quite right.</li>
</ol>
<p><strong>UPDATE</strong></p>
<ol>
<li><p>So, I now use a wired internet provider, because I was spending too much on mobile data, hehe.</p>
</li>
<li><p>I now use a raspberrypi, because that laptop is <strong>completely dead</strong> after I played around with its DVD ROM (no idea what the connection is between the DVD ROM and its death). Now it's genuinely power-efficient.</p>
</li>
<li><p>Because the mikrotik had an issue, I switched to xiaomi's 5Ghz router.</p>
</li>
<li><p>With the wired provider I can only access reddit, because netflix is blocked so its host can't be reached.</p>
</li>
</ol>
<p>The point is: starting with whatever you have is fine, then <em>upgrade</em> slowly. Not everything is great right away; everything takes a process. That's enough of my story for now. Stay tuned for the next one~~ 🙈</p>
]]></content:encoded></item></channel></rss>