YouTube extension update log #1
Yesterday, at least on my side, YouTube made some changes to how it handles requests. While updating the extension to work with the new request handling, I thought I'd share the changes I've had to make because of YouTube.

## Update 1: Using the title instead of the href to categorize

It all started last week, around Thursday, May 15th, 2025. <br>
I was working on the extension when my folders suddenly turned up empty. I soon found out that YouTube had changed where the channels in the subscriptions pane redirect to. I guess they wanted to align the logic with the mobile side: instead of going straight to the channel page, a channel now redirects to a page with that channel's recent videos, which has another button that leads to the channel itself.

The problem: previously, channels had anchors that redirected to the channel itself, and that href was used to filter and refill the folders. Now the href has changed. I was left with 2 choices:

1. use the new href as the id for the folders, or
2. use the channel title.

Choice number 1 wasn't really viable, because the items in the content don't have that href inside them. I could have taken the href and created a map linking it to the channel href, like `new href : channel href`, but I didn't want to do that, and I am not sure the href that YouTube provides for the new page is permanent (it looks ). So I went with choice 2. <br>
Of course, choice 2 isn't all that reliable either, since a channel owner can change the name on a whim (or if they get hacked by Tesla news). I will deal with those problems when they come.

## Update 2: Filtering the contents when a folder is active

After that issue was resolved, I continued working on the filtering function of the extension. How it works:

1. Click on the folder.
2. Hide all items that are not part of the folder.
3. Add the `first-column` attribute accordingly, based on order.

This was simple enough to do, but I also wanted to make sure that items get filtered as they are lazily loaded. So I injected a js file that made lazy-loading requests send a message to the extension and trigger filtering once the new items were loaded. It worked... until yesterday.

## Update 3: Dealing with a change in the way YouTube handles requests

Today, I was working on the extension, making sure that resizing the browser re-added the margin for the left-most items. I saw that it worked, then filtered to a folder that triggers loading right away due to its small size. New items loaded, and they did not get filtered. I debugged the injected code and found that it had stopped detecting the loading requests. That's when I knew they had moved on from the regular fetch API to some kind of custom one.

So my plan is to use a MutationObserver instead to detect content loading and trigger filtration. The plan (sketched below) is:

1. Observe the subscription contents container.
2. If the children of the container increase in number, and
3. if a folder is active,
4. trigger filtration.

Hopefully this works and turns out better than before. I'll write more once this gets done.
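To make the plan concrete, here's a minimal sketch of what that observer could look like. `GRID_SELECTOR`, `isFolderActive`, and `filterActiveFolder` are hypothetical placeholders for the extension's real selector and existing filtering routines, not names from the actual codebase:

```
// Sketch: re-run folder filtering whenever lazily-loaded items land in the grid.
// GRID_SELECTOR, isFolderActive, and filterActiveFolder are placeholders.
const GRID_SELECTOR = '#contents'; // hypothetical selector for the subscriptions container

function watchSubscriptionsGrid(isFolderActive, filterActiveFolder) {
  const grid = document.querySelector(GRID_SELECTOR);
  if (!grid) return;

  let lastCount = grid.children.length;
  const observer = new MutationObserver(() => {
    // 2. the children increased in number, and 3. a folder is active
    if (grid.children.length > lastCount && isFolderActive()) {
      filterActiveFolder(); // 4. trigger filtration
    }
    lastCount = grid.children.length;
  });

  // 1. observe the subscription contents container (direct children only)
  observer.observe(grid, { childList: true });
}
```

One nice property of this approach: it doesn't care how YouTube fetches the data (fetch, XHR, or a custom wrapper), since it only reacts to the DOM actually changing.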
Wed May 21 2025
Self-Hosting #4: "DNS stands for Domain, eNcrypt and Security"
> A Sonnet on the Folly of Ignoring Web Security

```
Oh, careless souls who mock encryption’s art,
And scoff at firewalls, built to guard your gate,
You jest at risks while hackers ply their part,
Unveiling data, sealing others' fate.
What mirth is found in passwords weak and frail,
In leaving doors unlatched to thieves unseen?
The careless click, the unprotected trail,
Breeds chaos vast within the cyber scene.
A fortress digital must firm endure,
To shield the secrets of both heart and trade.
Neglect invites the breach, a fate unsure,
Where trust dissolves, and havoc is displayed.
So heed this verse, and guard what’s held most dear,
For webs unkept invite a world of fear.
```

*by ChatGPT, with prompt `write me a sonnet that ridicules people for downplaying web security and explains why web security matters`*

Of course, DNS doesn't actually stand for what's written in the title. It's the **Domain Name System**. It gives a "human-readable" address to the IP addresses that point to the services. For example, go ahead and type `dns.google` in the address bar and check which IP addresses the different URLs hide.

Anyways, it's time to get a domain name from a registrar. For us, there's one thing that we need to check:<br>
Does the registrar support dynamic DNS?

## Step 1. Getting a D from a DNS registrar that handles DDNS

So, what is dynamic DNS and why does it matter? Well, unless you have a static public IP (which is mostly reserved for business ISP clients), your IP can be changed regularly by the ISP. <br>
Dynamic DNS is a service a DNS registrar can provide that lets users update the IP address in their records if and when the public IP address changes.

I first checked GoDaddy, but there were some reviews saying that DDNS is locked behind a higher tier or something, so that was too bad. <br>
Then there was NameCheap, which had good enough documentation on DDNS. They also had a sale going, so I chose them as my registrar.

The whole process is pretty easy:

1. Search for the domain name you want to purchase.
2. Buy it.
3. Set it up.

In step 2, there's something called `domain privacy` that NameCheap provides for free, so that your information does not go public in the WHOIS database.<br>
Being completely unaware of how many scam calls I'd be getting if I had my info in the WHOIS database, I went without it, thinking to myself, "I want to own this domain, and if I don't have my name on it, well, do I really own it?"

Well... in the first few weeks after purchasing the domain, I received about 4 to 7 spam calls every day (work days; I didn't get as many on weekends). Now, I get one every morning, around 7:30 AM. Yeah, choose wisely.

I also had to go into the NameCheap dashboard and activate the domain by having a verification email sent to me and clicking the links.

## Step 2. Connecting your IP to the domain name

NameCheap will initially have your domain pointed at their placeholder page. To connect it to your IP:

1. In your NameCheap dashboard, click manage for your domain.
2. Go to the advanced DNS tab.
3. Edit your host records so that you have 2 `A+ dynamic DNS record`s, with hosts `@` and `www`, and fill in the value with your public IP address.
4. Done!

It's that simple!
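Those `A+ dynamic DNS` records are what the registrar updates when your public IP changes, but something on your side has to tell it the new IP. Below is a minimal sketch of an update client in Node (18+, for the global `fetch`), assuming NameCheap's documented DDNS update endpoint and the Dynamic DNS password from the same Advanced DNS page; double-check the URL and parameters against their docs:

```
// Sketch: push the current public IP to NameCheap's DDNS update endpoint.
// DOMAIN and DDNS_PASSWORD are placeholders; the password comes from the
// Dynamic DNS section of the Advanced DNS page, not your account password.
const DOMAIN = 'your.domain.name';
const DDNS_PASSWORD = 'your-ddns-password';
const HOSTS = ['@', 'www']; // the two A+ records set up above

async function updateDdns() {
  for (const host of HOSTS) {
    // Leaving out the `ip` parameter lets the endpoint use the request's source IP.
    const url = 'https://dynamicdns.park-your-domain.com/update' +
      `?host=${host}&domain=${DOMAIN}&password=${DDNS_PASSWORD}`;
    const res = await fetch(url);
    console.log(host, res.status, await res.text());
  }
}

updateDdns().catch(console.error);
```

You'd run this on a schedule (cron in termux, or a simple loop), or use one of the ready-made DDNS clients registrars list in their docs.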
Too bad the next part wasn't (which, along with Advent of Code 2024, explains the long gap between the previous post and this one).

## Step 3. SSL certificate through Let's Encrypt (using nginx and certbot)

With the domain connected, I navigate to my web app that is open on the machine.<br>
However, there's something that bothers me in the address bar.

**NOT SECURE!**

Well, the modern web standardized on HTTPS, which is a good thing. It encrypts the communications over the wide, wild web using Transport Layer Security (TLS). Further, it makes sure that the website is validated by a trusted third party. This doesn't mean that it magically makes everything secure. Just a lot more than plain HTTP.

Anyways, to get HTTPS to work, we need to get an SSL (Secure Sockets Layer) certificate, and we will get it from **Let's Encrypt**. For the technical background on how the certificate process works, please check [how it works on the Let's Encrypt website](https://letsencrypt.org/how-it-works/). Honestly, I don't know it well enough to give an explanation.

Normally, this step would be a breeze, because the whole thing was built for ease of use. But remember, we are dealing with an android device. Nothing works out of the box. So I looked high and low for a solution and found 2 GitHub guides that let me actually make this work.

- Link with information about setting up certbot in termux: [synapse-termux/GUIDE.md](https://github.com/medanisjbara/synapse-termux/blob/main/GUIDE.md) by Med Anis Jbara
- Link on setting up NextJS with nginx: [by Jakir Hussain](https://gist.github.com/iam-hussain/2ecdb934a7362e979e3aa5a92b181153)

Here's my summary of what to do.

1. Install nginx, python, and virtualenv for python

```
$ pkg install nginx python
$ pip install virtualenv
```

2. Set up a virtual environment for certbot ($PREFIX = the usr directory)

```
$ python -m venv $PREFIX/opt/certbot // The directory can be set elsewhere, but this should be the default
$ $PREFIX/opt/certbot/bin/pip install --upgrade pip // updating pip of the virtual environment
$ $PREFIX/opt/certbot/bin/pip install certbot certbot-nginx
// if the above step results in error, reset the cache of pip using "pip cache purge"
// it's "pip cache purge", not "$PREFIX/../pip cache purge"
```

3. For convenience, link certbot in the virtual environment to the bin folder

`$ ln -s $PREFIX/opt/certbot/bin/certbot $PREFIX/bin/certbot` <br>
This lets you use the `certbot` command instead of `$PREFIX/opt/certbot/bin/certbot`.

4. Initialize nginx (if you don't already have nginx in your system)

```
$ curl "https://raw.githubusercontent.com/medanisjbara/synapse-termux/main/nginx.conf" -o $PREFIX/etc/nginx/nginx.conf
```

and the nginx.conf file should look like the below:

```
worker_processes auto; # bind worker processes to available cpu
include /data/data/com.termux/files/usr/etc/nginx/modules-enabled/*.conf; # for separate conf files

events {
    worker_connections 768; # number of connections per process. (10k for production)
}

http {
    # Basic Settings
    sendfile on; # use linux sendfile() for I/O, for efficiency (default chunk: 2MB)
    tcp_nopush on; # allow sending files in full packets
    types_hash_max_size 2048; # hash table size
    include /data/data/com.termux/files/usr/etc/nginx/mime.types;
    default_type application/octet-stream; # stream of bytes (least specific)

    # SSL Settings
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; # The default since 1.23.4
    ssl_prefer_server_ciphers on; # server decides the cipher suite

    # Logging Settings
    access_log /data/data/com.termux/files/usr/var/log/nginx/access.log;
    error_log /data/data/com.termux/files/usr/var/log/nginx/error.log;

    # Gzip Settings
    gzip on; # gzip html files to client

    # Virtual Host Configs
    include /data/data/com.termux/files/usr/etc/nginx/conf.d/*.conf;
    include /data/data/com.termux/files/usr/etc/nginx/sites-enabled/*;
}
```
5. Make a sites-available directory and create a configuration file in it.

```
$ mkdir $PREFIX/etc/nginx/sites-available
$ touch $PREFIX/etc/nginx/sites-available/next
```

Write the below to the `next` file:

```
server {
    server_name your.domain.name;

    location /_next/static {
        alias /data/data/com.termux/files/home/portfolio/.next/static; # directory to your next project's static folder
        add_header Cache-Control "public, max-age=3600, immutable";
    }

    location / {
        try_files $uri.html $uri/index.html @public @nextjs;
        add_header Cache-Control "public, max-age=3600";
    }

    location @public {
        add_header Cache-Control "public, max-age=3600";
    }

    location @nextjs { # the proxy to the next server
        proxy_pass http://localhost:8000; # The port that your NEXTJS app uses
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade; # upgrade protocol based on client header
        proxy_set_header Connection 'upgrade'; # match the above upgrade
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade; # upgraded http header value will not be cached
    }
}
```

6. Make a sites-enabled directory and link the conf from sites-available inside it

```
$ mkdir $PREFIX/etc/nginx/sites-enabled
$ ln -s $PREFIX/etc/nginx/sites-available/next $PREFIX/etc/nginx/sites-enabled/next
```

7. Run nginx and execute the certbot command

```
$ nginx
$ certbot --work-dir $PREFIX/var/lib/letsencrypt --logs-dir $PREFIX/var/log/letsencrypt \
  --config-dir $PREFIX/etc/letsencrypt --nginx-server-root $PREFIX/etc/nginx \
  --http-01-port 8080 --https-port 8443 -v --nginx -d your.domain.name
```

Make sure that you have forwarded port 80 to 8080 and 443 to 8443 on the machine using `iptables`.

8. Replace `listen 443 ssl` in `$PREFIX/etc/nginx/sites-available/next` with `listen 8443 ssl`.

And now, after running the web app on port 8000 and re-running nginx, we finally have SSL working.

## Step 4. A bit of firewall security

I cleared iptables and made the minimal rewiring for forwarding ports. <br>
I don't want people exploiting the small machine I have (and my network), so I want to implement some security.

Basic rule for security: open what you use, close everything else.

Luckily, there are a bunch of suggestions online for setting up iptables for a webserver.<br>
I'll list a few below, but beware, and don't run any scripts you find online without checking that they are safe.

1. [Serverfault post](https://serverfault.com/questions/184646/a-secure-standard-iptables-rule-set-for-a-basic-https-webserver)
2. [Reddit post](https://www.reddit.com/r/debian/comments/9kpts8/best_security_iptables_rules_for_web_server/)
3. [Cyberciti post](https://bash.cyberciti.biz/firewall/linux-iptables-firewall-shell-script-for-standalone-server/)
4. [Systutorials post](https://www.systutorials.com/how-to-use-iptables-to-limit-rates-new-ssh-incoming-connections-from-each-ip-on-linux/)

And we are finally done!

---

## Closing remarks

This took longer than I expected, but through it, I learned a lot about DNS, ports, Linux, nginx, and firewalls. I'll probably make a video tutorial summarizing the series, and will be using the server to host some tools/pages.

Good bye, and the next post will be about the v2 update of this site (nextjs 15, blog writing update, new tools, database, etc).

Sincerely, Jin Byun
Wed Jan 15 2025
Self-Hosting #3: "Hello Web, Good bye warranty!"
## RANT

It's been a while. I was busy with work at Sharing Life Society, sick with stomach cramps and a bit of a cold, and heavily stuck on networking stuff (the solution, like always, was too simple). Man, I almost lost it and wanted to quit so badly. But I didn't, and here is the fruit of my labour. (with a 'u' cause I'm Canadian)

Oh yeah, and this is gonna be super long, so if you want the meat of it, I'd read it in the order 3, 5/third try, 2, and 1.

------------

> The following are the steps I took in temporal order. The correct order to approach this is vastly different, so don't follow my footsteps.

## 1. Local IP, remote IP

Up to now, I've been connecting to the machine using a local IP address.<br>
Your PC, phone, TV, and any other IoT devices each have a unique IP address that the router uses to distinguish the machines.<br>
It's what we use to connect the local PC to the machine (using ssh).

But to expose the machine to the net, we need to know what the remote/public IP is. For the local IP, we used ifconfig, which gives us the local IP address of the machine. However, your own machine/router cannot know the remote IP on its own. That's because we, the client, have no control over that IP address, just as our PC doesn't decide its own local IP address.<br>
Fortunately, there are many services that let you check your remote IP. So google "what is my IP" and use any free option. <br>
You could even host your own IP checker!

Once you have your public IP, record it somewhere privately.

## 2. Port forwarding

Now, if you try to access the machine through the remote IP, it will not work. There are a few reasons. To name a few:

1. You have the same remote IP for all devices connected to your router. So if you use the remote IP, the router will not know where to direct the request.
2. Which is why most consumer routers start off blocking all requests that are not mandatory.

That's where port forwarding comes into play. Just like how we set the local IP of the tablet static, we:

1. Log in to the router,
2. Go to the advanced settings,
3. Set up port forwarding (I did it for http:80 and https:443) with the local IP address of the tablet.

This opens up ports 80 and 443 at the IP of your machine so that people can access the server with http/s requests. And now, try to start the next web app with the port set to 80 and...

```
~ npm run start

> next start -p 80

⨯ Failed to start server
Error: listen EACCES: permission denied 0.0.0.0:80
```

When we set up ssh in self-hosting #1, we found that termux, without root access, could not use the default ssh port of 22.<br>
Similarly, we can't use port 80 without root access. All ports below 1024 are locked away from non-root users.

So we now need to take out the big gun and root the machine.

## 3. Rooting the SM-P550 (All data will be removed)

### Links

- [Magisk for rooting](https://topjohnwu.github.io/Magisk/)
- [bifrost for getting the firmware](https://github.com/zacharee/SamloaderKotlin)
- [odin for flashing the machine](https://odindownload.com/download/clean/)

### Steps

1. Make sure you have enough space on the machine, about **6 GB of free space**. If you don't, get an **sd card**. Also, make sure **dev mode** is enabled, along with **usb debugging**.
2. Get the **Magisk apk** from the link. Make sure to get a version that supports your version of the OS (27 was the most recent that supported Android 7, Nougat).
3. Get the most recent firmware for your machine from the **bifrost** application. In my case, the model number was sm-p550 and the region code was XAC (Canada, WIFI-only machine).
4. Copy the **AP** file from the firmware and the **magisk apk** onto the machine.
5. Install Magisk from the apk and run it to create the modified AP file. (instructions in the magisk link)
6. Transfer the new modified AP file back to the PC. Use **ADB** or the **sd card** to make that transfer.
7. Download **odin**, version 3.13.1, and run it.
8. Reboot the machine in download mode (press and hold <kbd>home + vol down + power</kbd>, and when prompted, press <kbd>vol up</kbd>).
9. Connect the machine to the PC through usb and check that odin detects the machine. If not, install the usb driver. Put the files from the firmware (step 3) into the appropriate slots, replacing the AP with the modified file from the magisk app.
10. Press start and check that the installation has passed.
11. The machine will reboot and try to update. Do not worry even if it fails midway (32% for me). Wait until the machine goes into **android recovery mode**.
12. Use <kbd>vol up</kbd> and <kbd>down</kbd> to select factory reset and <kbd>power</kbd> to execute.
13. Reboot the system. The machine can stay on the samsung logo for a few minutes, so do something else and return to find the machine on the initial setup page.
14. Do the initial setup (wifi, data permission for google and samsung, etc.) and reinstall **magisk**.
15. Let magisk do its thing, reboot, and now you are rooted.
16. Reinstall **Termux**, set it up, and type `su` in the terminal. **It will not go through**, but by doing so, Magisk will detect that termux is requesting super user permission.
17. Go to **Magisk**, click on the <kbd>super user</kbd> tab in the bottom nav bar, and give termux super user permission.
18. Go back to termux and type `su`. If you have super user permission, the prompt will change. Type `exit` to exit the super user shell.

### Some key problems I ran into while rooting

1. If you don't have enough storage, Magisk will not be able to create the modified AP file from the original. The factory reset flushes all data anyway, so **back up your data** and erase all apps. If that's not enough, use an sd card. I used an sd card.
2. Canada has quite a few region codes: BMC, TLS, RGS, XAC, etc. If your model is wifi-only like mine, XAC should work. For cellular, check with your provider.
3. I started with Odin 3.10.6 to flash the firmware. It resulted in an error in the modem section and gave me a **Fail**. Use an Odin version > 3.10.6 to prevent such an error. If you did face this error, press and hold <kbd>vol down + power</kbd> to turn the machine off and reboot back into download mode. Then, get a more recent version of odin and repeat the steps.
4. The Magisk instructions talk of **oem unlocking**. Some machines don't have such a feature at all, making them naturally oem unlocked. That was the case with the sm-p550. So check the **upper left corner** of download mode to see if your machine has oem unlocking. If it does, follow the steps in the Magisk link.
5. Your PC will have stored the machine's identity inside a file called `known_hosts` in the `$User/.ssh` folder. Remove it so that you can re-connect to the machine through ssh.

## 4. Re-cloning the repo, but with SSH instead of HTTPS

I had some problems with storage while rooting the machine. Because of that, I wanted to install as few packages as possible in order to save some space. So when reinstalling git for the web app repo, I didn't want to install anything extra, such as `gh`. <br>
So I decided to use ssh. I mean, I'm already using dropbear for the PC connection, so why not for github?
```
~ pkg install git # this automatically installs openssh and deletes dropbear
~ pkg install dropbear # So install dropbear again, which will remove openssh
```

Dropbear, the ssh client/server of my choice, made this a bit complicated.

### First, create keys

Use `dropbearkey -t ed25519 -f id_ed25519 -C "your email address"` to generate a private/public key pair. Copy the content of the public key and use it to create a new ssh key in your `github/setting/access/ssh` section. <br>
Move the keys from the `home` directory to the `.ssh` directory for cleanliness.

### Don't use `ssh`!

Of course, I followed the tutorial in the github documentation and tested the connection with `ssh -T git@github.com`, and it caused an error, stating that no auth method could be used. Searching high and low, I found [this thread](https://groups.google.com/g/beagleboard/c/h6XiKjT9-ZI/m/xgA0kIGViKgJ), letting me know that using `dbclient`, with the options `-y` (accept hostkey) and `-i` (idfile location), would work.

And it did.

```
dbclient -yi ~/.ssh/id_ed25519 git@github.com
```

After finding that this worked, I suddenly got curious.

> Does chatgpt have the answer?

I asked GPT, "how do I access ssh server of github using dropbear?" GPT gave me the steps and I followed. And of course, it was all wrong. But do take this with a grain of salt, as I used the free tier.<br>
Your mileage might vary if you use the paid tier.

### Override the GIT_SSH_COMMAND

So can I `git clone` any repo I have on github now? The answer is no. `git clone` still uses `ssh` for its connection. So you need to override it using **git config**.

1. Create a bash file, with the script below:

```
#!/data/data/com.termux/files/usr/bin/bash
dbclient -y -i ~/.ssh/id_ed25519 $*
```

2. In the terminal, type `git config --global core.sshCommand ~/path/to/the/bash/file`

You've successfully overridden the ssh command, and now git clone, pull, push, etc. will use dbclient!

## 5. Network mishaps (iptables, nginx)

Wouldn't it be nice if we could just write `npm run start --port 80` and be done with it? But life is constantly giving me lemons. Time to make some lemonade.

### First try: using NAT redirection through iptables

I saw that instead of using port 80 directly, we can actually intercept requests to port 80 and redirect them to another port. With root access, we can now install iptables with the following lines:

```
pkg install root-repo // for root related libraries and packages
pkg install iptables
```

Why did I install **iptables**? **Iptables** lets the user set chained rules for controlling in/out network traffic.<br>
One can define specific network packets to allow/block, etc. <br>
Basically, it lets you modify the firewall. It is a very powerful tool, and with power comes great complexity. Here's the small part of iptables that we will be using.

#### Tables: Group the rules based on function

1. Filter: Determines whether a packet can go through the network. It is the default table.
2. Network Address Translation (NAT): Reroutes packets to a different Network/IP/PORT.
3. Mangle, Raw and Security: Not used for this application.

#### Chains: The sequence of packets. Contain lists of rules

1. Input: Handles incoming packets directed to a local application/service. In the Filter table.
2. Output: Manages packets from a local app/service. In all tables.
3. Prerouting: Alters packets before any routing decisions are made. In the NAT table.
4. Others: Not used for this application.

#### Target/Jump: The response to a rule match

1. Accept: Allow the packet to pass.
2. REDIRECT: Change the destination port.

Now, without context, most suggestions for NAT redirection go:

```
sudo iptables -A INPUT -i eth0 -p tcp --dport 80 -j ACCEPT
sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 8080
```

`sudo` for super user permission<br>
The `-t` option lets you specify the table; leaving it out defaults to the filter table.<br>
`-A` is for appending, meaning that the new rule will be the last to be evaluated in the chain.<br>
After `-A`, INPUT and PREROUTING indicate which chain the rule appends to.<br>
`-i` is the interface, or what is connected to the network. Your machine's interface is `eth0`.<br>
`-p` sets the protocol, which in our case is TCP, used for http.<br>
`--dport` is the destination port of the incoming packet. We want incoming packets for port 80.<br>
`-j` lets you specify what iptables should do with a packet that satisfies the match (`-i`, `-p`, `--dport`).<br>
`--to-port` redirects the packet to the specified port.

Now, on a normal system, this should work. But are we working with a normal system? No. So I just dropped iptables and started looking for another solution.

### Second try: NGINX and failure with SELinux

Saying that NGINX is a pretty popular reverse proxy would be a huge understatement. It's powerful, and just like iptables, it is very confusing. Luckily, there was a template written by **Jakir Hussain**, which I [link right here](https://gist.github.com/iam-hussain/2ecdb934a7362e979e3aa5a92b181153).

Here's what I did:

```
~ pkg install nginx
~ nano ../usr/etc/nginx/nginx.conf
# open the editor and replace the server block with what was in the link
~ sudo nginx
~ cd portfolio
~ npm run start
```

This didn't work. I was stumped by error 500 when I tried to access the web app through port 80. (By the way, you don't have to specify port 80.)

I checked the error.log file in the `usr/var/log/nginx` directory:<br>
`socket() (13: permission denied) while connecting to upstream.`

With that as the keyword, I searched google, and the problem seemed to lie in SELinux. What is SELinux? Well, it's a module called "Security-Enhanced" Linux that makes things more secure by enforcing some rules. And the solutions on the internet were all about relaxing the rules. The options were:

1. Set SELinux to permissive, using the command `sudo setenforce 0`.
2. If you can't do that through termux, try a Magisk module that does it at boot.
3. Since not enforcing SELinux is bad for security, instead just bypass what's necessary using `setsebool -P httpd_can_network_connect 1`.
4. For more safety, instead of `httpd_can_network_connect`, set `httpd_can_network_relay` to true.

I tried all of them. Well, at least I tried to try them all, because the sm-p550 didn't have `setsebool`. Samsung locks SELinux to "enforcing", even if you are rooted. The enforcement is baked into the kernel, which double-checks that SELinux stays enforcing (which is why option 2 didn't work).<br>
To find all of that out, I looked at the system log, I looked at the kernel log, I tried to find a config file for SELinux (didn't exist) to change it the hard-coded way, tried changing permissions of the files and directories in play, etc, etc, etc!

Things were out of my hands. At some point I even questioned whether it was a problem with my ISP. I was **FRUSTRATED**.

### Third try: Back to iptables

Now, I didn't show you people before, but with iptables, you can actually list the rules in a table: `sudo iptables -L`<br>
And you know what? This tablet is filled with rules.
So I thought, "Maybe I'll take a look at all the rules there are on this machine. Who knows, maybe something in there is causing all of this."

```
~ touch iptableRules.txt
~ sudo iptables -nL >> ./iptableRules.txt
~ sudo iptables -t nat -nL >> ./iptableRules.txt
~ sudo iptables -t mangle -nL >> ./iptableRules.txt
~ sudo iptables -t raw -nL >> ./iptableRules.txt

# back on the PC, to copy the txt file over
> scp -P 8022 ip address:/data/data/com.termux/files/home/iptableRules.txt Desktop
```

I couldn't understand more than half of them.<br>
There were custom chains, custom targets/jumps, a chain calling another chain...<br>
I even forgot to check yet another table that exists, called "security", which no one on the internet talks about (unless you ask for it specifically).

So I said, "-F it"<br>
and I flushed the iptables.

```
~ sudo iptables -F

# Writing the redirection rules again
# For the NGINX part, I tried with and without these rules, so don't think that this was why NGINX didn't work
~ sudo iptables -A INPUT -i eth0 -p tcp --dport 80 -j ACCEPT
~ sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 8080
```

> And it worked.

----

## The moral of the story + what's next?

There are reasons why people use a machine/virtual machine with a real OS. <br>
Android and termux come very close, but it's not the real thing. I made it work, but at what cost? Turmoil, time spent, security concerns, etc. But anyway, a W is a W.

The next step will be to connect this remote IP to a domain, get an SSL certificate so that we can use https, and future-proof by setting up a system for dynamic DNS. After that, I think we can focus on security (the minimum) and retrying nginx.

Thanks for reading. **BYE!**
Fri Nov 22 2024
Self-Hosting #2.5: Next Lives!
## Previously, on Self-Hosting...

I mentioned how I couldn't build the next web app in the termux of the SM-P550,<br>
what with the problems with SWC, the segfaults, the way babel is not supported in termux, etc.

And at the end, I mentioned that I tried to use the build from my PC, copied it over to the machine, and tried to run it.

```
// from the directory that contains the repo
scp -P 8022 -r .next ip address:/data/data/com.termux/files/home/portfolio

// connecting to the machine with ssh
dropbear

// in the machine
cd portfolio
npm run start
```

This resulted in an error, and being tired and frustrated, I called it quits and posted the previous blog. However, I just couldn't walk away, and I looked at the error message one more time.

## Steps taken to find the way

The error, in short, said that it was missing something from `node_module/next/dist/compiled/next-server`.<br>
So, just like the `.next` folder, I copied that from the PC to the machine as well.

```
scp -P 8022 -r node_modules/next/dist/compiled/next-server `
ip address:/data/data/com.termux/files/home/portfolio/node_modules/next/dist/compiled
```

And when I did that and ran `npm run start`, **it actually ran through**, and only crashed when I tried to access the web app from a browser.

This was a eureka moment for me.<br>
The next thing I tried was copying the entire `node_modules` folder to the machine.

> And magically, it worked.

---

Here's what you can do to run a next build on a machine like the sm-p550.

> Lines that start with "~" are run in termux, and lines that start with ">" are run in powershell.

```
// 1. Get the repo from github
~ git clone repoAddress.git
~ cd repo

// 2. run npm install (clean install w/out package-lock.json to make sure)
// to install the baseline of required packages
~ npm install

// 3. build the next web app on your pc and copy the resulting .next and node_modules over to the machine
> npm run build
> scp -P 8022 -r .next ip address:/repoAddress
> scp -P 8022 -r node_modules ip address:/repoAddress

// 4. After that, install the sharp npm package if not yet installed (for image optimization)
~ npm i sharp

// Now you can run the next app!
~ npm run start
```

So I don't have to create a web app from scratch now! (Will be doing that in another series.)

---

## ps.

It has been some time since the last post, and the fix mentioned above was done last week. <br>
However, there were some hardships connecting the local server to the web. The next blog will cover them, but as a heads up, it involves **rooting** (voiding the warranty, if it exists) the sm-p550.

Once again, this blog series is not a step-by-step guide. It's a progress record. <br>
I'll probably make a summary blog once this is all done. That, you can follow.
Fri Nov 08 2024
Self-Hosting #2: Git, Next, and failure
### Ingredients

- The machine from [step 1](https://jinbyun.vercel.app/blogs/15)
- A repo containing a Next.js web application
- A GitHub account

### Preparation

```
PS C:\Users\JinByun> dropbear

Welcome to Termux!

Community forum: https://termux.com/community
Gitter chat: https://gitter.im/termux/termux
IRC channel: #termux on libera.chat

Working with packages:
* Search packages: pkg search <query>
* Install a package: pkg install <package>
* Upgrade packages: pkg upgrade

Subscribing to additional repositories:
* Root: pkg install root-repo
* X11: pkg install x11-repo

Report issues at https://termux.com/issues

~ $ pkg install git
~ $ pkg install gh
```

Preparation is done.

## Step 1: Authenticating GitHub

Now, with git and gh installed, run `gh auth login`; it will guide you through some prompts. <br>
This is what I did:

```
~ $ gh auth login
? Where do you use GitHub? GitHub.com
? What is your preferred protocol for Git operations on this host? HTTPS
? How would you like to authenticate GitHub CLI? Paste an authentication token
Tip: you can generate a Personal Access Token here https://github.com/settings/tokens
The minimum required scopes are 'repo', 'read:org', 'workflow'.
? Paste your authentication token: **********************
```

Let's break down what each prompt is asking for.

### 1. Where do you use GitHub?

Where else can you use GitHub? Is GitHub open source? No, GitHub is not open source. However, GitHub has a product called **GitHub Enterprise**. <br>
GitHub Enterprise lets the user (an enterprise) self-host GitHub, and that's what makes this question valid. <br>
You do use GitHub, but not at https://github.com. So for a personal project, just go with `GitHub.com`.

### 2. What is your preferred protocol for Git operations?

Here we have 2 choices. Let's start with SSH, since we went over it in the last blog.

#### SSH

Similar to how we set up the ssh server on the machine, we are now using the machine as the ssh client and GitHub as the ssh server. It requires the user to generate an ssh key, add the key to your gh account, set up ssh-agent forwarding, and more.

#### HTTPS

Authenticate in the browser (log in) or with tokens. That's it.

#### Why SSH, through all the complications?

Well, in short: security, and skill issues. With HTTPS, if a perpetrator has access to your machine, your GitHub repos are very likely to be completely compromised. Also, the communication between client and server is hidden behind a black box. So if it fails, you won't know why.

With SSH, you are given more power. You are the one who creates the passphrase and passes it on to GitHub. You can make your PC more secure by utilizing hardware keys. Without that key, perpetrators cannot try brute-forcing the passphrase.

Me, I just want to connect and get it over with, so I chose HTTPS.

### 3. How would you like to authenticate gh?

I normally go with browser authentication, since it's quick and easy. <br>
But in this case, I have to use the token, because I can't use a browser in termux (as it is).

> Surprisingly, you can use a GUI and install a browser in termux. However, I don't think it would be feasible on my SM-P550.<br> *If you are interested, check [this link](https://wiki.termux.com/wiki/Graphical_Environment) on how, and [this link to a video example](https://youtu.be/H63LtxFyIuc?si=rD8QD5B5tK3pTks6&t=292)*

To get the token, go to the [settings/tokens](https://github.com/settings/tokens) page and generate a new token. I used the classic token.
Give it a memorable name, set the expiration date, and remember to give it the **read:org, repo and workflow** permissions. Once you generate the token, copy and paste it into termux, and you are authenticated!

## Step 2: Clone, run and ...

With access to the GitHub repo, I cloned my web app (this page):

```
git clone https://thisrepo.git
```

and once that was done, I `cd`ed into the project. Before anything else, I remembered that I needed my .env file, so I copied my .env file from the PC to termux.

```
// PC powershell
type ./.env | ssh ipaddress -p 8022 "cat >> portfolio/.env"
```

And with the .env file in place, I tried

```
npm i
```

And hell started.

## Step 3: A too-long record of failures

### First err: node-gyp (source code builder for node)

`npm i` threw `gyp: Undefined variable android_ndk_path in binding.gyp while trying to load binding.gyp`.

Apparently, this happens because termux is on android. The package node-gyp expects `android_ndk_path` to be set, but it isn't. A workaround was found [here](https://github.com/termux/termux-packages/issues/20717#issuecomment-2196523557). <br>
I followed the steps to edit the `configure.js` file, and gyp ran clean! Problem solved!

### Second err: Sentry (error logging service) compatibility

Sentry CLI does not support Android. Who knew?! <br>
Luckily, I don't use much of it, so it was easy to create a new branch called "termux" and remove sentry from the code. <br>
With the new branch, now:

```
git pull
git checkout termux
npm i // was successful with sentry removed
npm run build

> next build

▲ Next.js 14.2.16
- Environments: .env

Creating an optimized production build ...
Downloading swc package @next/swc-android-arm-eabi...
⨯ Failed to download swc package 14.2.16
```

### Third err: SWC, or the lack of it

SWC, or Speedy Web Compiler, is what compiles next code into production-ready builds. However, production of the android version of SWC halted at version 13.2.4. So when I try to build or even dev the web app, the lack of a valid swc just breaks the app.

Below is a list of things I tried to fix it:

- Install swc version 13.2.4 and wing it.
- Try to use babel instead of swc for the build (for some reason, Next doesn't let this happen).
- Use next version 13 (results in a segfault).
- Use the build from the PC and try to run it in termux.

I kinda expected next to fail in termux, but not in this way. I honestly expected it to just use up too much memory and have the process killed, but failing at build, this is tough.

In the next blog, I will be trying to get a website actually running, and I'll go over dynamic IP management (DDNS). I'll probably go with a Golang backend. Stay tuned!
Fri Nov 01 2024
Self-Hosting #1: setting up the machine
> The reason why we can't have good things in life is that we use the wrong tools and force them to work.

## Preamble

Normally, servers are built on PC hardware, be it an old PC or a dedicated raspberry-pi-esque low-power machine.

<figure><img src="https://upload.wikimedia.org/wikipedia/commons/3/38/Inside_and_Rear_of_Webserver.jpg" alt="example of server" width=500><figcaption>Rodzilla at English Wikipedia, CC BY-SA 3.0 <http://creativecommons.org/licenses/by-sa/3.0/>, via Wikimedia Commons</figcaption></figure>

Such machines are easily upgradeable,<br>
either by adding more memory and storage (RAM and HDD/SSD)<br>
or upgrading the processing power (CPU and GPU).

In my home, there are a few spare machines. To list a few:

1. A PC with an Intel Core 2 CPU, 6GB of DDR3 RAM, and a GTX 1050
2. A laptop with 8GB of RAM and a 4000-series Intel Core i5 CPU
3. An old Samsung Galaxy Tab A

I chose the tablet.

> There are many downsides to this path, so I don't recommend you follow my steps. I'm just recording my progress.

---

## Setting up the tablet for development

### Stage 1. Thank Termux (for both the good and the bad)

The SM-P550 is an **Android** tablet, and Android, although Linux-based, doesn't really let you run it like Linux out of the box.<br>
So, you either need to go down the interweb spiral to get a Linux OS for your hardware, <br>
or get [*termux*, a terminal app with a Linux environment](https://termux.dev/en/).

Now you have a perfectly fine linux machine. (At least, for now.)

In termux, you can download the available packages for your dev environment, be it JS, Go, or Elixir.<br>
Check [the wiki](https://wiki.termux.com/wiki/Software) to learn what you can do (if you are following along).

### Stage 2. ssh (dropbear)

Termux is fascinating... until you try to use the virtual keyboard for navigation and actual coding. <br>
So let's bring the development to my actual PC with the help of the Secure Shell protocol (ssh).

SSH, if you are not familiar, is a protocol that helps a user securely connect one machine to another through an unsecured network.

The good people maintaining the wiki have given us a guideline on how we can do that: [Remote Access](https://wiki.termux.com/wiki/Remote_Access)

In the guide, they give us two choices for SSH. One is OpenSSH (more popular, more features) and the other is Dropbear (fewer features, lighter).

If my tablet were more recent, I would have used OpenSSH, since that is what the guide recommends. However, my tablet only has 2 GB of RAM, and only about half of it is accessible. I need all the savings I can get, so Dropbear it is.

So I followed the instructions:

```
pkg upgrade
pkg install dropbear
passwd // setting up the password
dropbear
```

and to check whether it worked, I went and tested it by typing `ssh 127.0.0.1` in termux.

```
ssh: Connection to user@127.0.0.1:22 exited: Connect failed: Connection refused
```

> ? say what now?

I'll cut to the chase. <br>
*Termux*, although very close to actual Linux, is an emulation layer over the android system. That limits the ports that termux can access. ([source](https://blog.geggus.net/2018/06/a-sshd-on-port-22-hack-for-termux-on-android/)) <br>
Normally, SSH defaults to port 22. Termux, however, defaults the ssh server port to 8022.

It was written right at the top of the guide, but I skimmed through and missed it. So when I tried SSH without a port specified, what I was doing was knocking on port 22 while the ssh server was waiting for me at port 8022.
After learning more about the limitations, I was finally able to access the ssh server in termux from the powershell of my PC:

`ssh ${lan ip of tablet} -p 8022`

### Stage 3: Key instead of password (Windows strikes again!)

Did you know that you can set up pubkey auth for SSH? Well, you can, with a simple script like the one below:

```
ssh-keygen -t rsa -b 2048 -f id_rsa
// prompt for passphrase
// key generated
ssh-copy-id -p 8022 -i id_rsa IP_ADDRESS
```

But wait, what's this?!

```
ssh-copy-id : The term 'ssh-copy-id' is not recognized as the name of a cmdlet, function, script file, or operable program.
Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
At line:1 char:1
+ ssh-copy-id
+ ~~~~~~~~~~~
    + CategoryInfo          : ObjectNotFound: (ssh-copy-id:String) [], CommandNotFoundException
    + FullyQualifiedErrorId : CommandNotFoundException
```

"Aw, you can't ssh-copy-id in powershell! That's a bummer." is a euphemism for how I felt when I saw that.

So, I went online, found [this post](https://chrisjhart.com/Windows-10-ssh-copy-id/), and tried it:

```
type $env:USERPROFILE\.ssh\id_rsa.pub | ssh {IP-ADDRESS-OR-FQDN} "cat >> .ssh/authorized_keys"

// resulted in error, so I checked the user/username folder and found id_rsa.pub in there.
// So I changed the above to:

type $env:USERPROFILE\id_rsa.pub | ssh {IP-ADDRESS-OR-FQDN} "cat >> .ssh/authorized_keys"
// prompt for ssh password
```

and it worked. The public key was found in the .ssh/authorized_keys file.

I tried to ssh into the server, but then it asked for my password again. Can you guess why?

`ssh-keygen` is supposed to save the keys inside the .ssh folder. But on my PC, it saved them outside of it (with a relative `-f id_rsa`, the keys are written to the current directory, which for me wasn't `.ssh`).<br>
When using pubkey auth in Windows for ssh, Windows checks for the private key inside the `.ssh` folder. In my case, there was none, because the keys were in the `user/username` folder, the parent folder of `.ssh`.

After moving the keys into the `.ssh` folder, I finally accessed the ssh server without a password.

### Last stage: Make life easier by fixing things

I'm a British Columbian and I use Shaw Internet, so accessing the router may be different if those two things don't apply to you.

#### Getting a static LAN IP for your machine

1. Open up a browser, type 10.0.0.1 into the address bar, and press enter.
2. If it's your first time accessing the router, the credentials are as follows: <br> Username: admin <br> Password: password
3. Click on `Connected Devices` in the left pane.
4. Find the device that hosts the ssh server and click edit.
5. Change the configuration to Reserved IP and set up a reserved IP.

Now you don't have to worry about the IP changing.

#### Making an alias for accessing the machine (powershell)

It's a hassle to write out `ssh ip address -p 8022` every time you try to access the machine. Instead, save it in your profile and give it a short but memorable alias.

1. Open up powershell.
2. `code $PROFILE` to open up VS Code and edit the profile. (Replace `code` with the text editor of your choice.)
3. Add `function dropbear { ssh ip address -p 8022 }` and save.

Now you can use `dropbear` to access the machine!

---

### Postamble

Now I have a machine that I can readily access and manipulate at will. <br>
The next step is to get a website running on the machine. I'm thinking of getting a copy of this website (built with next) to run on it.

However, one worry I have is the lack of memory on the machine. <br>
Will a Next web app run on a memory-confined machine? Find out next time, next blog.
Mon Oct 28 2024
Blog Demo, and trying out things.
* This is all about the twentieth-century poet, Kimberley Kleinstein. (not)

I will be writing down ideas, progress, and stories of my endeavors here.

This blog doesn't have an image section yet and has to rely on urls from other sources, but I will add an image-adding function soon.

The blogs to come, starting with the next one, will be about self-hosting. <br>
Now, when you think about (or search for) self-hosting, it's either going to be using a VPS or hosting on a server that you use at home.

### I'm not going to do that.

I'll host a website that:

1. is connected to the internet,
2. uses a machine that exists in my home,
3. without getting a static IP address.

Stay tuned!
Thu Oct 24 2024