<rss version="2.0">
  <channel>
    <title>Meet Gor</title>
    <link>https://www.meetgor.com</link>
    <description>Tech blog by Meet Gor</description>
    <language>en-us</language>
    <pubDate>Sat, 09 May 2026 05:38:52 UTC</pubDate>
    <item>
      <title>Flight Observatory Mumbai Airspace Case Study</title>
      <link>https://www.meetgor.com/posts/flight-observatory-mumbai-airspace-case-study</link>
      <description>A closer analysis of the airspace around Mumbai&#39;s BOM/VABB (CSMIA) airport: questioning intuitions built from memory, then answering and reasoning through the data.</description>
      <pubDate>Thu, 02 Apr 2026 00:00:00 UTC</pubDate>
      <content>## Introduction&#xA;&#xA;I write blogs and newsletters, but this is the wildest thing I have written in a while. I have always been curious about the flights flying over Mumbai; I still look up to spot the airline whenever a plane appears overhead, in front or off to the side. But I had never actually read or discovered anything true or factual about them. As a software developer, that is very unsettling: I had ideas and observations, but I had never tested them. I could excuse myself by saying I was busy and this was a little toy project I could put off. But now we have LLMs, and no excuses. I didn&#39;t need a second invitation to explore my curiosity, so off I went.&#xA;&#xA;I came back after a week or two of digging into data, downloading the ADS-B stream of the airspace and crunching the numbers for BOM (VABB), Mumbai&#39;s Chhatrapati Shivaji Maharaj International Airport (CSMIA), and its vicinity. The numbers, I must tell you, were staggering, and I couldn&#39;t stop staring at them. I felt relieved, and I felt my childhood was worthwhile.&#xA;&#xA;I am writing this blog post to explain to myself how I got here. It was chaos at the start, a total misery of data. But after a few attempts at successfully accumulating data only for the BOM airport&#39;s vicinity of 300 kilometers, I was finally able to make sense of the pile (at one point it looked like a garbled mess).&#xA;&#xA;This will be a two-part blog. This one is non-technical (that is, no grilling from peer developers) and explains the why and what of the Mumbai Airport case study. The next part (out in a couple of days from now) will be a treat for developers (or curse words for myself; ok, it won&#39;t be that bad), and will explain the how.&#xA;&#xA;This part is about my intuitions and the memories I have had since growing up near the Mumbai Airport. 
I explain what dataset I used (it was not a readily available public dataset, mind you), how I structured it, what questions I asked, how I formed the queries (not in SQL terms but in human, logical terms), and how I reasoned through the results. And trust me, everything made sense.&#xA;&#xA;So, if you have been in Mumbai and wondered how many planes flew above your head in a day, you are in for a treat! Buckle up! Because Mumbai Airport never stands still.&#xA;&#xA;Check out the full case study here -&gt; [Flight Observatory - Mumbai Airspace Case Study](https://dev.meetgor.com/flight-observatory/case-study/mumbai-airport/)&#xA;&#xA;## Flashback of Mumbai from 2010-2018&#xA;&#xA;I have spent my life in Mumbai, and over the two decades that I have lived here, I often feel nostalgic looking back at the memories. The one that stands out even today is plane spotting. Since childhood, I have seen, heard, and observed planes flying above me, beyond the buildings, buzzing and slashing across the concrete jungles and flat blue slums. I distinctly remember the days from my school, the 7th A classroom, located at the top of the school, the highest room in the building. I remember sharing plane sightings with my friends through the periods.&#xA;&#xA;It was a nostalgic experience to see &#xA;- the regular &#34;Air India&#34; on time, &#xA;- &#34;Lufthansa&#34; with its guzzling sound, &#xA;- the occasional &#34;UPS&#34; with its jumbo cargo,&#xA;- and the millions of &#34;Jet Air&#34; and &#34;Spicejet&#34; among them. &#xA;&#xA;There was a different joy in just seeing and observing them, admiring the beauty of the Boeing 787s, 777s, and 747s, the Airbus A330, and the occasional, very rare Airbus A380, the king. &#xA;Those memories are still fresh in my mind. I was lucky, to be honest, to have airplane-enthusiast friends with me. One of them was notoriously a die-hard fan of British Airways, though. 
The 11:45 British Airways was a treat for him, with me hurling curses at the British and hoping the plane would crash (just teasing him, of course). &#xA;&#xA;We almost always anticipated the next plane to come, and over the years we had a schedule of which one would come next depending on the period of the day.&#xA;We had formulated that the BOM airport&#39;s frequency was one flight every 2:30 minutes (that&#39;s 24 flights in an hour, 576 in a day). That is a lot of flights landing over our heads. I used to brag that BOM was (is?) the busiest single-runway airport, and take a strange pride in defaming British Airways. Those were the times :)&#xA;&#xA;And now, I don&#39;t have the luxury of sitting with that friend and gazing at the sky (if I did, I think I might get an existential crisis). I wanted those moments back, and while I can&#39;t sit with that friend anymore, I realized I have another friend, a tool of sorts: LLMs.&#xA;&#xA;I realized I am a programmer now; whatever I want to do, I can. It&#39;s just a wish away.&#xA;&#xA;## The Idea&#xA;&#xA;And so I began making this project. To observe the sky. Not with my eyes, but by tracking, analyzing, and crunching data from APIs.&#xA;&#xA;Nothing comes for free. Constraints start piling up, but there is no engineer without constraints. I didn&#39;t want to set up cloud infrastructure (not that I can&#39;t; I just don&#39;t have the budget for it) and do a bunch of credit card swipes, and I didn&#39;t have the luxury of a heavy $200 subscription to those AI tools either. So what? &#xA;&#xA;&gt; Engineers aren&#39;t made from freedom. They&#39;re shaped by constraints.&#xA;&#xA;I explored every API out there to see which I could use best. And lo and behold, I found some. Not everything was great: there were samples, just one day per month, limited data, but I kept digging and digging. I found what I needed, data points for Mumbai (or any airport, for that matter) across a decade. 
I thought I might get full-day, full-year data, a decade&#39;s worth of it. What I got was indeed a decade&#39;s worth of data, for all months, but only for the first day of each month. Sigh!&#xA;&#xA;But I kept moving. I don&#39;t care how little the data is. It is real. It is roughly 12 days of the year for 8 years. That is not a lot, but enough to find patterns. And I can surely say Mumbai airport hasn&#39;t changed drastically over the decade.&#xA;&#xA;Still, that was not enough! I wanted more, I wanted live data. Could I get it live? I thought I could, and I did, sort of. It&#39;s not the most accurate representation of flights, but a good guess, a practical one at least.&#xA;&#xA;So, I built [Flight Observatory](https://dev.meetgor.com/flight-observatory). A live and historical data analysis tool, or a webapp of sorts, for viewing flights captured by ADS-B receivers around the world. I kept storing the live data and folded it into a historical section. From the other dataset and some live observations (which turned out to be a disappointment), I also made a case study for Mumbai Airport.&#xA;&#xA;This part won&#39;t be too nerdy and technical; even if you are not a technical person, strap in. It&#39;s going to be a fun read.&#xA;The next part will be a treat for the technical people reading this, so save your glazing glasses and bickering comments for that post later.&#xA;&#xA;## What I found (TLDR)&#xA;&#xA;Mumbai&#39;s airspace is very consistent. It is busy in a very Mumbai way: a lot of movement, but with a clear pattern. The traffic is strongest in the early morning and evening. &#xA;&#xA;The daily activity spreads across the early morning, the workday, and the evening. 
In the airport-proximate analysis:&#xA;- The strongest hourly movement shows up around 07:00 IST&#xA;- Other heavy hours sit around 06:00, 11:00, 17:00, 20:00, 21:00, and 22:00.&#xA;&#xA;On the sampled days, the data produced:&#xA;- `21,923` airport-proximate events&#xA;- Split into:&#xA;    - `12,723` landings &#xA;    - `9,200` takeoffs &#xA;- With a median landing-to-next-takeoff gap of `1.5` minutes. &#xA;&#xA;That last number matters. It is the airport&#39;s strength and its health-check number. It says the airport is turning movements over very quickly, which matches the reputation Mumbai has for being tightly packed.&#xA;&#xA;The airspace itself stays occupied for a while:&#xA;- Median landing run: `34.5 minutes`&#xA;- Median takeoff run: `25.1 minutes`&#xA;&#xA;So landings stay visible longer than takeoffs. That makes sense operationally: arrivals spend more time in descent and approach, while departures clear out faster once they lift off.&#xA;&#xA;The same aircraft also comes back around fairly quickly:&#xA;- Median same-aircraft ground time: `96.5 minutes`&#xA;&#xA;That means the aircraft are not sitting idle for long. The fleet is being reused in a steady loop, which fits a high-frequency airport like Mumbai.&#xA;&#xA;And the shape of the traffic is not random:&#xA;- Saturday is the busiest weekday in the sample&#xA;- the landing and takeoff corridors split cleanly across the airport area&#xA;- the dominant heading clusters show the same directional axis being used again and again&#xA;&#xA;Across the full analysis set, I ended up with about:&#xA;- ~17 million observations (that looks like a gigantic lot, but it isn&#39;t)&#xA;- ~151,000 inferred flight runs&#xA;- Spread across ~4,900 unique aircraft hex codes&#xA;- ~4,800 registrations&#xA;- ~140 aircraft types.&#xA;&#xA;That feels right for Mumbai. It is a city that never really stops moving (it does sleep, but it stays awake internationally), and the airspace behaves the same way. 
The airport-proximate movement counts are also not tiny; the traffic is structured, dense, and repetitive. The airport works like a tight system, not a loose one.&#xA;&#xA;## Notes and Clarifications&#xA;&#xA;The data itself was not clean in the neat, airline-report sense. It is raw observation data: points every few seconds, over and over again, with altitude, speed, heading, position, and callsign changing as aircraft move across the sky, sampled every 5 or 60 seconds (depending on the year). I had to group those observations into inferred flight runs before it became useful. Once I did that, the picture got much clearer. &#xA;&#xA;Most of the useful traffic is post-COVID traffic too. The analysis is heavily weighted toward 2022 to 2025. &#xA;&#xA;&gt; ADS-B tracking for BOM (VABB) actually started around late 2017 or early 2018, and even that was per minute. So it was very sparse and not necessarily capturing the full spectrum.&#xA;&gt; This data is from 2018 to 2026, and not for every day of each month. It is sampled as the first full day of each month (i.e. 1st April 2024, 1st May 2024, and so on). Also, the data points from 2018 to mid-2020 are snapshots every 60 seconds, and from late 2020 through 2026 (till March) snapshots every 5 seconds.&#xA;&gt; The data covers only ~78 days (not continuous; it is the 1st day of each month). 
2022-2025 are full years (12 days each); there are only 3 days in 2018, since ADS-B became active only after mid-2018, and the dip in 2020-21 is due to COVID, with little or no traffic observations.&#xA;```&#xA;----------------------------&#xA;|  Year  |  Number of days |&#xA;|--------------------------|&#xA;|  2018  |      3          |&#xA;|  2019  |      11         |&#xA;|  2020  |      5          |&#xA;|  2021  |      9          |&#xA;|  2022  |      12         |&#xA;|  2023  |      12         |&#xA;|  2024  |      12         |&#xA;|  2025  |      12         |&#xA;|  2026  |      3          |&#xA;----------------------------&#xA;```&#xA;&#xA;&gt; The numbers are real, but they are still a proxy. They are not ATC logs. They are not airport ops records. They are ADS-B-derived movement patterns around Mumbai. That means the data is powerful for showing pattern, density, clustering, and dominance, but it should not be mistaken for official runway statistics. This data doesn&#39;t capture the airport&#39;s activity exactly as an official ATC record would, but it does tell you something real about how packed and continuous the traffic feels once you look at it through ADS-B data.&#xA;&#xA;&gt; ADS-B, short for Automatic Dependent Surveillance-Broadcast, lets an aircraft determine its own position and other details (altitude, GPS-based location, direction, speed, hex code, i.e. a unique aircraft code) and broadcast them, sending them in the open as a radio signal. These get captured by listening Air Traffic Control (ATC) stations, airports, satellites, networks, etc., which then track and use them for internal purposes: tracking at the airport, communicating, averting accidents, and so on.&#xA;&#xA;## Initial Questions and Exploration&#xA;&#xA;I was looking for a historical dataset that could show me flight activity over Mumbai. I did not expect it to be perfect, but I did expect it to be workable. The first thing to understand is that this is not a clean flight log. 
It is a stream of observations. Each aircraft keeps broadcasting its position, altitude, speed, heading, and identity over time. That means the raw data is not `one row per flight`. It is more like `one row per movement in time for a given aircraft`. To make it useful, I had to turn those repeated observations into inferred flight runs. That is what makes the analysis readable. Instead of staring at thousands of raw position points, I can ask more human questions:&#xA;&#xA;- When is Mumbai busiest?&#xA;- Which airlines show up the most?&#xA;- Which aircraft dominate?&#xA;- Where do planes start descending?&#xA;- Where do they take off from?&#xA;- How long is the gap between one landing and the next takeoff?&#xA;&#xA;That shift from raw observations to inferred runs is what made the rest of the project possible.&#xA;I landed on the ADS-B Exchange historical data because it looked promising. At first glance, it seemed like I had found a long historical archive. But there was a catch I did not fully appreciate at the beginning: much of the data was sampled by month, and a lot of it was only the first day of each month. That was smaller than I hoped, but still useful enough to extract patterns from.&#xA;&#xA;That was the point where I stopped thinking about the data as a raw archive and started thinking about it as something I could actually analyze.&#xA;&#xA;## Making &#34;sense&#34; of the data&#xA;&#xA;So, I had millions of observations. A single minute has 12 snapshots, and if you do the math (not asking you to, but in case you are curious), we end up with around 17,280 snapshots for a single day (12 snapshots per minute * 60 minutes * 24 hours). If you stack that up for around 12 months, it really grows big.&#xA;&#xA;So, what exactly were these snapshots? Could I use them directly for querying in a database?&#xA;&#xA;No, not quite. 
It actually depends on what questions we need answered.&#xA;&#xA;### What is a Snapshot here&#xA;&#xA;A snapshot is basically a list of observations (or a set of statuses) for the aircraft currently in the airspace. The initial snapshots that I download from [ADS-B Exchange](https://samples.adsbexchange.com/index.html#readsb-hist) cover the full world&#39;s air traffic, and I never want all of that; I just want the tiny airspace around Mumbai. So we filter by latitude and longitude around Mumbai, and each snapshot finally becomes the aircraft flying within a radius (30 km, to give a number) at that time. We take those snapshots at a 5-second interval.&#xA;&#xA;So let&#39;s suppose there are 2 flights around Mumbai currently, one landing and one taking off.&#xA;&#xA;This would be a snapshot:&#xA;```json&#xA;[&#xA;    {&#xA;        &#34;icao24&#34;: &#34;123456&#34;,&#xA;        &#34;latitude&#34;: &#34;19.123&#34;,&#xA;        &#34;longitude&#34;: &#34;73.123&#34;,&#xA;        &#34;altitude&#34;: 500,&#xA;        &#34;velocity&#34;: -100,&#xA;        &#34;heading&#34;: 90,&#xA;        &#34;timestamp&#34;: &#34;2022-01-01T00:00:00Z&#34;&#xA;    },&#xA;    {&#xA;        &#34;icao24&#34;: &#34;8901&#34;,&#xA;        &#34;latitude&#34;: &#34;19.083&#34;,&#xA;        &#34;longitude&#34;: &#34;72.123&#34;,&#xA;        &#34;altitude&#34;: 1000,&#xA;        &#34;velocity&#34;: 300,&#xA;        &#34;heading&#34;: 90,&#xA;        &#34;timestamp&#34;: &#34;2022-01-01T00:00:00Z&#34;&#xA;    }&#xA;]&#xA;```&#xA;After 5 seconds you might see a change in the latitude and longitude (position), the speed, and the altitude (and the timestamp, of course). So we get another snapshot showing the status of the aircraft currently within the Mumbai radius; maybe in the next snapshot another aircraft is about to land, so it appears there too. And so on, you get the point, right? 
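
To make that filtering step concrete, here is a minimal sketch in Python. Treat it as an illustration under stated assumptions, not the actual pipeline code: the field names mirror the toy JSON above, and the reference coordinates and 30 km radius are rough placeholders.

```python
import math

# Reference point for BOM (CSMIA); coordinates are approximate.
BOM_LAT, BOM_LON = 19.0887, 72.8679

def distance_km(lat1, lon1, lat2, lon2):
    # Haversine great-circle distance in kilometres.
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def filter_snapshot(snapshot, radius_km=30.0):
    # Keep only the observations inside the Mumbai radius.
    return [
        obs for obs in snapshot
        if radius_km > distance_km(BOM_LAT, BOM_LON,
                                   float(obs["latitude"]), float(obs["longitude"]))
    ]
```

Run over the toy snapshot above, this keeps the first aircraft (about 27 km out) and drops the second (about 78 km west): exactly the trimming that turns a worldwide snapshot into a Mumbai-only one.
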
If an aircraft stays within the Mumbai radius for more than a minute (which almost every aircraft does, often 10-20 minutes), then we might have 12 observations for that aircraft across those snapshots. We basically have a record, a progression of the aircraft over time. That is, in essence, time-series data: we are measuring the change of an entity over a period of time.&#xA;&#xA;So, this is what I get from the downloaded snapshots, which number in the millions.&#xA;&#xA;How do I make sense of them?&#xA;&#xA;Well, let&#39;s ask a question; then the scope can be reduced, or we can move in a specific direction.&#xA;&#xA;## Questions on the data&#xA;&#xA;&gt; What is the total number of flights over Mumbai in 2026 so far?&#xA;&#xA;Now you can see we are talking about flights, or distinct aircraft, over a period of time. That is a distinct thing from the observations we have. We first have to lock in the time window, but the deeper question is: how do we group these observations to get flights?&#xA;&#xA;That is why the question needs to change a little before it can be answered properly. I am asking how many distinct flight runs exist in the data, how long they last, when they start, when they end, and whether they are landing, taking off, or just passing through the Mumbai airspace. Once I do that, the raw snapshots stop being noise and start becoming something I can actually reason about.&#xA;&#xA;So, what constitutes a flight run? That is a question we need to answer before proceeding.&#xA;&#xA;#### What is a Flight Run&#xA;&#xA;A flight run is my way of saying: this group of observations belongs to the same aircraft movement. It is not one row, and it is not one timestamp. 
It is the whole stretch of time where the same aircraft keeps showing up in the snapshots and its position, altitude, speed, and heading keep changing in a way that looks like one continuous movement.&#xA;&#xA;Now, how do we distinguish an aircraft?&#xA;&#xA;A unique aircraft means one distinct plane in the dataset, usually identified by its unique aircraft code, like its hex or registration.&#xA;&#xA;This matters because the same aircraft can appear in many snapshots while it is in Mumbai&#39;s airspace. If I only counted rows, I would be counting the same aircraft again and again. But if I group those rows into a single run, then I can treat it more like one flight-shaped event. That gives me something much closer to what I actually want to study.&#xA;&#xA;### Observations to Flight Runs&#xA;&#xA;So, I loaded the raw snapshots/observations, after converting them from JSON to CSV to Parquet files (don&#39;t worry about the details), into a DuckDB session.&#xA;&#xA;I had a table schema like:&#xA;- `observations` is just all the monthly Parquet files unioned together.&#xA;- `observations_clean` cleans the fields and turns text like altitude, speed, lat/lon, etc. into usable numeric columns.&#xA;- `observations_enriched` adds convenience fields like registration, aircraft type, airline_code, is_indian_registration, etc.&#xA;&#xA;From these, we need to group observations into flight runs that can track each flight. So I went and created a view that does exactly that.&#xA;&#xA;As we know, we have a `hex` field in each observation that can be used to identify unique aircraft in the airspace; however, there is a catch.&#xA;&#xA;Each aircraft in ADS-B data is identified by a unique hex code, which represents the physical plane. But planes don&#39;t operate just one journey; they fly multiple routes throughout the day. This is where the callsign becomes important. 
A callsign represents the flight the aircraft is currently operating (for example, AI101 or 6E203).&#xA;&#xA;So, we need a combination of `hex` and `callsign` to identify a flight run.&#xA;I sorted the observations by aircraft and time, and then compared each row with the one just before it. If the same aircraft disappeared for too long, I treated that as a break. If the flight code changed, I treated that as a new run too. That gave me a cleaner way to read the data. The same aircraft could still appear many times, but those repeated sightings would stay inside one continuous stretch instead of being counted as separate things. In other words, I was no longer looking at a pile of random snapshots. I was looking at movement. Once that rule was in place, the rest of the questions started making sense. I could count runs instead of rows, compare landings and takeoffs, look at how long an aircraft stayed in the airspace, and see when the traffic was building up or dropping off. That was the point where the data stopped feeling messy and started feeling usable.&#xA;&#xA;That is the first step to answering the question: how many flights are there in Mumbai over 2026?&#xA;&#xA;### How many flights are there in Mumbai over 2026?&#xA;&#xA;Once I started grouping those repeated sightings into flight runs, the question became much clearer. Instead of asking how many rows exist, I could ask how many distinct aircraft movements exist across the 1st of January, February, and March of 2026. That is the number I actually care about. It tells me how much traffic was visible, when it showed up, and how those movements were spread across the day.&#xA;&#xA;There are approximately `7,237` inferred flight runs for the 3 full days of 2026.&#xA;&#xA;For the 12 full days of 2024, there were approximately `51,441` inferred flight runs, and for 2025 there were approximately `43,324`. 
For 2026, it is just the first quarter (3 days, compared to the 12 days of those earlier years, so compare accordingly).&#xA;&#xA;Now we can move on to the interesting questions.&#xA;&#xA;### When is Mumbai most busy?&#xA;&#xA;Once I had the raw snapshots grouped into flight runs and filtered down to the airport-proximate ones, the first thing I wanted to see was simple: when does Mumbai actually get busy?&#xA;&#xA;For a &#34;24 hour day&#34; chart, I used:&#xA;- one row per hour&#xA;- a count of movements for each sampled day + hour&#xA;- then an average of those hourly counts across sampled days&#xA;&#xA;That gives us a number representing, on a typical sampled day, how many movements happen at 07:00, 08:00, and so on.&#xA;&#xA;![Landing Takeoff Hourly](https://meetgor-cdn.pages.dev/flight-observatory-mumbai/landing_takeoff_hourly.png)&#xA;&#xA;The answer is pretty clear. In this sample, the busiest hour is 07:00 IST, with 1,086 movements. Right behind it are 20:00 (1,074), 17:00 (1,073), 18:00 (1,069), 21:00 (1,044), 11:00 (1,043), 19:00 (1,038), and 22:00 (1,032). The quietest hour is 03:00, with just 246 movements. So the traffic is not flat at all. 
It has a shape, and that shape is very visible once you count the flights as runs instead of raw rows.&#xA;&#xA;And if we take the average across all the sampled days, we get the same picture.&#xA;&#xA;To construct the query for this, we want to know when the airport is most active during the day, so the query:&#xA;- Keeps only airport-proximate events (by lat and lon)&#xA;- Labels each event as a landing or a takeoff (based on speed and altitude)&#xA;- Converts the time to IST&#xA;- Groups by hour of day&#xA;- Counts movements in each hour bucket&#xA;&#xA;![Movement Hourly Average](https://meetgor-cdn.pages.dev/flight-observatory-mumbai/hourly_avg.png)&#xA;&#xA;If we take the median instead, the busiest hours are 18:00 IST with 30.54 movements, 17:00 IST with 30.46, and 07:00 IST with 30.17, which makes it very consistent with the average. The quietest hour is still 03:00, with just 7.69 movements.&#xA;&#xA;That gives us a good answer, and the difference is honestly not that big. The busiest hour is 07:00 in the summed view and 17:00-18:00 in the median view, and the quietest is 03:00.&#xA;The summed view and the averaged view tell the same story, but from two slightly different angles. In the total sample, 07:00 is the busiest hour, followed closely by 20:00 and 17:00. In the average-per-day view, the evening shifts slightly ahead, with 17:00 and 18:00 taking the top two spots and 07:00 still staying near the top. The quietest hour is the same in both views: 03:00. So the exact peak changes a little, but the shape does not. Mumbai is still busiest in the morning and evening, and quietest late at night.&#xA;&#xA;### Which Airlines Show Up Most&#xA;&#xA;I wanted to know which airlines dominate the traffic. &#xA;&#xA;Once the rows were grouped into flight runs, I could stop looking at raw sightings and just count which airline codes kept appearing. 
I used the first part of the flight code as a rough airline label and counted the runs for each one. That makes the dominant carriers obvious instead of buried inside repeated observations.&#xA;&#xA;What I did was write a query that:&#xA;- Takes the grouped flight runs&#xA;- Pulls the airline prefix from the flight code&#xA;- Counts how many runs each prefix appears in&#xA;&#xA;What we get:&#xA;- A dominance chart&#xA;- It shows which carriers keep repeating in the sample&#xA;- IndiGo and Air India should stand out immediately&#xA;&#xA;This was quite evident, as I have observed Mumbai&#39;s sky for a long time. IndiGo and Air India are firmly dominant across the board. However, there are some other interesting appearances, like Vistara, Akasa Air, and Emirates. In the pre-COVID era it used to be Jet Airways, but now IndiGo has taken the lead.&#xA;&#xA;The gap between the first 3 is quite a big one, and I think it&#39;s right! IndiGo and Air India, and not only in Mumbai, are preferred over the other domestic airlines.&#xA;&#xA;Some other notable ones down the order are `SIA` for Singapore Airlines, `DLH` for Lufthansa, and `BAW` for British Airways. 
They don&#39;t appear as heavily as the domestic ones, but they are regulars on a routine, each and every day.&#xA;&#xA;```&#xA;┌──────────────┬───────────┐&#xA;│ airline_code │ movements │&#xA;│   varchar    │   int64   │&#xA;├──────────────┼───────────┤&#xA;│ IGO          │     55050 │&#xA;│ AIC          │     20892 │&#xA;│ VTI          │      9789 │&#xA;│ AKJ          │      6923 │&#xA;│ UAE          │      5057 │&#xA;│ SEJ          │      4869 │&#xA;│ AXB          │      4514 │&#xA;│ QTR          │      3728 │&#xA;│ GOW          │      2191 │&#xA;│ ETD          │      2186 │&#xA;│ IAD          │      2151 │&#xA;│ @@@          │      1708 │&#xA;│ SVA          │      1660 │&#xA;│ ABY          │      1542 │&#xA;│ ETH          │      1504 │&#xA;│ OMA          │      1479 │&#xA;│ SDG          │      1448 │&#xA;│ GFA          │      1214 │&#xA;│ SIA          │       979 │&#xA;│ KAC          │       825 │&#xA;│ DLH          │       771 │&#xA;│ THY          │       754 │&#xA;│ MAS          │       745 │&#xA;│ BAW          │       721 │&#xA;│ FDB          │       705 │&#xA;└──────────────┴───────────┘&#xA;```&#xA;&#xA;![Airline Breakdown](https://meetgor-cdn.pages.dev/flight-observatory-mumbai/airline_routine.png)&#xA;&#xA;A good validation query! It seems the data was analysed correctly.&#xA;&#xA;### Which Aircraft Type Dominates&#xA;&#xA;This was something I didn&#39;t have much of an idea about; I was not well versed in aircraft models and their names. 
So I let the data shape what we could find.&#xA;&#xA;```&#xA;┌────────────────────────────────────────────────┬───────────┐&#xA;│ aircraft_type                                  │ movements │&#xA;│    varchar                                     │   int64   │&#xA;├────────────────────────────────────────────────┼───────────┤&#xA;│ A20N (Airbus A320neo)                          │     49269 │&#xA;│ A21N (Airbus A321neo)                          │     27739 │&#xA;│ B38M (Boeing 737 MAX 8)                        │     13138 │&#xA;│ A320 (Airbus A320)                             │     12587 │&#xA;│ B77W (Boeing 777-300ER)                        │      7543 │&#xA;│ B738 (Boeing 737-800)                          │      5643 │&#xA;│ B789 (Boeing 787-9 Dreamliner)                 │      3162 │&#xA;│ A321 (Airbus A321)                             │      2761 │&#xA;│ B788 (Boeing 787-8 Dreamliner)                 │      2482 │&#xA;│ B77L (Boeing 777-200LR / 777 freighter family) │      2428 │&#xA;│ A359 (Airbus A350-900)                         │      2385 │&#xA;│ A333 (Airbus A330-300)                         │      2067 │&#xA;│ AT76 (ATR 72-600)                              │      1862 │&#xA;│ A319 (Airbus A319)                             │      1736 │&#xA;│ A388 (Airbus A380-800)                         │      1648 │&#xA;└────────────────────────────────────────────────┴───────────┘&#xA;```&#xA;&#xA;![Aircraft type breakdown](https://meetgor-cdn.pages.dev/flight-observatory-mumbai/aircraft_types.png)&#xA;&#xA;### Where Landings Start&#xA;&#xA;The question here is not where the aircraft lands on the runway. It is where the aircraft starts coming down. That is a different thing. If I want to find the landing corridor, I should not look for the final stop point. I should look for the first clear point in each run where the altitude starts dropping in a way that looks like descent. 
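
That descent-start rule can be sketched in a few lines of Python. This is a simplified stand-in for the real query, and the 300 ft threshold is an illustrative assumption:

```python
def descent_start(run, drop_ft=300):
    # run: time-ordered observations of one flight run, each a dict with an
    # "altitude" in feet. Return the first observation after which the
    # altitude never comes back within drop_ft of it, i.e. the point where
    # the descent visibly begins. Returns None if no such point exists.
    for i in range(len(run) - 1):
        later = [obs["altitude"] for obs in run[i + 1:]]
        if run[i]["altitude"] - max(later) >= drop_ft:
            return run[i]
    return None
```
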
That gives me the start of the landing path, which is what matters for the map.&#xA;&#xA;Once I have those descent-start points, the next step is simple: group them by location and altitude. If the same area keeps showing up, that means aircraft are repeatedly beginning their descent there. That is how the east-side corridor and the Ghatkopar-area low altitudes start making sense. The map is not showing a single landing dot. It is showing the place where arrival begins to feel like arrival.&#xA;&#xA;A sharp dip shows up only near the airport, while a more gradual descent is seen on the Navi Mumbai side, which matches what you see from the ground: aircraft gradually lowering their speed and altitude as they descend into the runway strip.&#xA;&#xA;- 19.09,73.02 is east of the airport, out toward the eastern Mumbai / Thane creek / Navi Mumbai corridor.&#xA;- 19.09,73.08 and 19.09,73.10 are farther east, so they sit deeper into the Navi Mumbai / Airoli / Ghansoli / Thane-side approach corridor.&#xA;&#xA;```&#xA;┌────────────┬────────────┬────────────────┬────────────────────┬───────────────┐&#xA;│ lat_bucket │ lon_bucket │ descent_points │     avg_alt_ft     │ median_alt_ft │&#xA;│   double   │   double   │     int64      │       double       │    double     │&#xA;├────────────┼────────────┼────────────────┼────────────────────┼───────────────┤&#xA;│      19.09 │      73.08 │            142 │ 3405.6338028169016 │        3400.0 │&#xA;│      19.09 │       73.1 │            135 │  3652.962962962963 │        3600.0 │&#xA;│      19.09 │      72.86 │            133 │   36.2406015037594 │           0.0 │&#xA;│      19.09 │      73.02 │            123 │  2638.008130081301 │        2625.0 │&#xA;│      18.58 │      73.94 │            122 │ 2115.1639344262294 │        2175.0 │&#xA;│      19.09 │      73.01 │            119 │  2439.285714285714 │        2425.0 │&#xA;│      19.09 │      73.11 │            118 │ 3747.4576271186443 │        3750.0 │&#xA;│      19.09 │      73.09 │            116 │ 3565.0862068965516 │        3525.0 │&#xA;│      19.09 │      72.94 │            113 │ 1342.9203539823009 │        1350.0 │&#xA;│      19.09 │      73.12 │            113 │ 3951.7699115044247 │        3925.0 │&#xA;│      19.09 │      73.23 │            108 │ 5678.7037037037035 │        5675.0 │&#xA;│      19.09 │      73.24 │            102 │  5867.156862745098 │        5850.0 │&#xA;│      19.09 │      73.13 │            101 │  4157.425742574257 │        4175.0 │&#xA;│      19.09 │      73.07 │             99 │ 3325.5050505050503 │        3325.0 │&#xA;│      19.09 │      73.18 │             97 │  4990.721649484536 │        5025.0 │&#xA;│      19.09 │       73.2 │             97 │  5232.216494845361 │        5175.0 │&#xA;│      19.09 │      73.14 │             96 │  4284.635416666667 │        4275.0 │&#xA;│      19.09 │      72.91 │             94 │  819.4148936170212 │         862.5 │&#xA;│      18.58 │      74.01 │             92 │ 3408.4239130434785 │        3400.0 │&#xA;│      19.09 │       73.0 │             92 │  2288.586956521739 │        2300.0 │&#xA;...&#xA;...&#xA;&#xA;```&#xA;![Landing Density](https://meetgor-cdn.pages.dev/flight-observatory-mumbai/descent_density_east_bw.png)&#xA;&#xA;A clear strip can be seen, with the aircraft descending into the runway strip. The dip is also more pronounced in the Navi Mumbai / Airoli / Ghansoli / Thane-side approach corridor.&#xA;&#xA;I also ran a query for the Ghatkopar region, and it was amazing to see how near the aircraft really are. Around `19.08, 72.94` is Ghatkopar, and aircraft there average roughly 800 feet. 90% of them are below 1,100 feet, that is crazy low. 
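
To make the grouping concrete, here is a minimal stdlib-Python sketch of the idea; the sample points and field layout are made up, and the real pipeline ran as queries over the ADS-B dump:

```python
import statistics
from collections import defaultdict

# Hypothetical descent-start points: (lat, lon, altitude_ft).
points = [
    (19.0912, 73.0843, 3400.0),
    (19.0877, 73.0791, 3450.0),
    (19.0901, 72.9412, 825.0),
    (19.0934, 72.9388, 950.0),
]

# Bucket to two decimal places (~1.1 km cells) and collect altitudes per cell.
buckets = defaultdict(list)
for lat, lon, alt in points:
    buckets[(round(lat, 2), round(lon, 2))].append(alt)

# Busiest cells first, with average and median altitude per cell.
for cell, alts in sorted(buckets.items(), key=lambda kv: -len(kv[1])):
    print(cell, len(alts), statistics.mean(alts), statistics.median(alts))
```

Counting and averaging per cell is the same shape of aggregation as the table above, just at a much larger scale.
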
And that clearly explains why the strip along the East-West line of the airport (which some part of Ghatkopar falls into) doesn&#39;t have any tall towers.&#xA;&#xA;```&#xA;┌────────────────┬───────────────────┬───────────────┬────────────┬────────────────────┐&#xA;│ descent_points │    avg_alt_ft     │ median_alt_ft │ p90_alt_ft │     p95_alt_ft     │&#xA;│     int64      │      double       │    double     │   double   │       double       │&#xA;├────────────────┼───────────────────┼───────────────┼────────────┼────────────────────┤&#xA;│            395 │ 859.7544303797469 │         775.0 │     1175.0 │ 1332.4999999999989 │&#xA;└────────────────┴───────────────────┴───────────────┴────────────┴────────────────────┘&#xA;```&#xA;&#xA;Another good metric showing that the dataset is holding up.&#xA;&#xA;### Where Takeoffs Start&#xA;&#xA;Takeoffs are the same idea in reverse. The question is not where the plane ends up in the sky. The question is where it starts climbing away from the airport. So instead of looking for a finished takeoff, I look for the first point in each run where altitude clearly begins to rise. That gives me the start of the departure path.&#xA;&#xA;Then I group those start points by location. If the same area keeps lighting up, that tells me where departures are really getting going. That is why the takeoff map is tighter and closer to the airport than the landing map. It is not trying to show the whole flight. It is only trying to show the first step out of the airport, which is the part that makes the pattern visible.&#xA;&#xA;The sharp climb starts very close to the airport itself, and that is what the takeoff pattern shows clearly. 
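
That first-rise detection can be sketched like this; the run layout and the 200 ft threshold are assumptions for illustration, not the exact rule used in the study:

```python
# Hypothetical run: (minutes_since_start, altitude_ft) samples for one aircraft.
run = [(0, 0), (1, 0), (2, 0), (3, 250), (4, 900), (5, 1800)]

CLIMB_FT = 200  # assumed threshold for a "clear" rise between samples

def climb_start(samples):
    """Return the last sample before altitude clearly begins to rise."""
    for prev, cur in zip(samples, samples[1:]):
        if cur[1] - prev[1] > CLIMB_FT:
            return prev
    return None

print(climb_start(run))
```

Grouping those start points into the same lat/lon buckets as before is what produces the takeoff table below.
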
The aircraft begin to gain altitude almost immediately after leaving the runway, so the takeoff strip stays tight around the airport side instead of spreading out far into the city.&#xA;&#xA;- `19.09,72.81` and `19.09,72.82` sit just west of the airport and catch the early climb corridor right after liftoff. This is the sea side, beyond the Juhu aerodrome, perfect. `19.09,72.78` to `19.09,72.80` are a little farther out, but they still belong to the same western departure corridor, showing that the climb pattern stays compact near the field.&#xA;- `19.09,72.84`, `19.09,72.85`, `19.09,72.86`, and `19.09,72.87` are even closer to the airport edge, so they sit in the immediate departure strip where the climb becomes visible first.&#xA;&#xA;```&#xA;┌────────────────────┬───────────────────┬────────────────┬────────────────────┬───────────────┐&#xA;│     lat_bucket     │    lon_bucket     │ takeoff_points │     avg_alt_ft     │ median_alt_ft │&#xA;│       double       │      double       │     int64      │       double       │    double     │&#xA;├────────────────────┼───────────────────┼────────────────┼────────────────────┼───────────────┤&#xA;│              19.09 │             72.81 │           1473 │ 2092.6171079429737 │        2050.0 │&#xA;│              19.09 │ 72.82000000000001 │           1446 │ 1889.3672199170123 │        1825.0 │&#xA;│              19.09 │             72.84 │           1374 │ 1094.5778748180494 │        1075.0 │&#xA;│              19.09 │ 72.85000000000001 │           1329 │  764.8796087283672 │         725.0 │&#xA;│              19.09 │             72.83 │           1208 │ 1518.4395695364237 │        1450.0 │&#xA;│              19.09 │             72.87 │           1079 │  465.8016682113068 │         200.0 │&#xA;│              19.09 │              72.8 │            994 │ 2233.6770623742455 │        2150.0 │&#xA;│              19.09 │             72.86 │            848 │  698.1816037735849 │         400.0 │&#xA;│              19.09 │             72.79 │            538 │  2579.460966542751 │        2525.0 │&#xA;│              19.09 │             72.78 │            486 │  2790.792181069959 │        2750.0 │&#xA;│              19.09 │             72.77 │            418 │ 3224.5215311004786 │        3000.0 │&#xA;│              19.09 │             72.76 │            331 │ 3665.7099697885196 │        3475.0 │&#xA;│              19.09 │             72.75 │            234 │ 3924.5726495726494 │        3662.5 │&#xA;│              19.09 │             72.88 │            206 │  976.2135922330098 │         200.0 │&#xA;│              19.09 │             72.74 │            137 │  4216.605839416058 │        3875.0 │&#xA;│              19.09 │             72.73 │            129 │  4780.232558139535 │        4925.0 │&#xA;│              19.09 │             72.72 │             68 │  4781.985294117647 │        4837.5 │&#xA;│              19.18 │             72.74 │             62 │  7492.741935483871 │        7325.0 │&#xA;│              19.19 │             72.75 │             62 │  7558.870967741936 │        7525.0 │&#xA;│ 19.080000000000002 │             72.73 │             54 │  5360.185185185185 │        5162.5 │&#xA;...&#xA;...&#xA;```&#xA;![Takeoff Density](https://meetgor-cdn.pages.dev/flight-observatory-mumbai/takeoff_density_west_bw.png)&#xA;&#xA;There is no major analysis here: with plain ocean ahead of the runway, we can&#39;t really track the locations much farther out. We can only use the direction, which we&#39;ll check separately.&#xA;&#xA;### Frequency of landings and takeoffs&#xA;&#xA;I wanted the spacing between landings on their own, and takeoffs on their own. So I grouped the airport events by type, sorted them in time order, and measured the gap between one landing and the next landing, and one takeoff and the next takeoff. The median is the number to trust most here, because a few long gaps pull the average up. 
&#xA;&#xA;The process was simple:&#xA;- Classify events as landing or takeoff&#xA;- Sort by time inside each sampled day&#xA;- Compare each event to the next event of the same type&#xA;- Measure the minutes between them&#xA;&#xA;![Landing and Takeoff Frequency](https://meetgor-cdn.pages.dev/flight-observatory-mumbai/landing_takeoff_frequency.png)&#xA;&#xA;In this sample, landings are spaced about 2.58 minutes apart at the median, and takeoffs about 3.0 minutes apart.&#xA;&#xA;What can we observe from this?&#xA;- Landings are spaced a bit more tightly than takeoffs&#xA;- Takeoffs have a slightly longer pause between consecutive events&#xA;- The median is the number to trust most here, because a few long gaps pull the average up&#xA;&#xA;This is the answer I have been chasing all along, and it truly feels great to see it this close. As children we had observed the landing frequency to be around 2 minutes 30 seconds, and the data couldn&#39;t match that more accurately. The great thing about the BOM/VABB Airport is that it still holds true even after a decade. It truly is the busiest single-runway airport.&#xA;&#xA;&#xA;### Landing-to-Takeoff Gaps&#xA;&#xA;I want to measure the time between one flight landing and another taking off (not the same aircraft): just after a flight has landed, how much time does it take for another to take off? I think it has to be in the range of 1-2 minutes. That is the reputation BOM/VABB Airport holds as the busiest single-runway airport. Let&#39;s check and validate it with data.&#xA;&#xA;To answer that, I look at the airport events in time order and take each landing as a starting point. Then I find the next takeoff that follows it and measure the time gap between the two. If that gap is small, it means the airport is keeping the movement going very quickly. If it is large, it means there was a longer pause before the next departure. 
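
A rough sketch of that pairing logic, with toy event data standing in for the real classified events:

```python
import statistics

# Hypothetical classified airport events, sorted by time: (minute, kind).
events = [
    (0, "landing"), (1, "takeoff"), (3, "landing"),
    (7, "takeoff"), (8, "landing"), (9, "takeoff"),
]

# For each landing, find the next takeoff that follows it and measure the gap.
gaps = []
for i, (t, kind) in enumerate(events):
    if kind == "landing":
        nxt = next((u for u, k in events[i:] if k == "takeoff"), None)
        if nxt is not None:
            gaps.append(nxt - t)

print(gaps, statistics.median(gaps))
```

The real analysis does the same thing over thousands of events, then looks at the distribution of the gaps rather than a single number.
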
The interesting part is not the average alone, but the shape of the gaps across the sample.&#xA;&#xA;In this sample, that gap is very short most of the time. The median is 1.5 minutes, and the average is about 2.1 minutes, across 5,297 landing-to-takeoff intervals. Most of the bars are packed into the first few minutes, which tells me the airport is turning movements over very quickly. The long tail is there too, but it is small. So the real story is not &#34;there are a few huge delays&#34;; it is that the typical landing is followed by a takeoff almost immediately.&#xA;&#xA;![Landing-to-Takeoff Turnaround Gap](https://meetgor-cdn.pages.dev/flight-observatory-mumbai/turnaround_gap_refresh_clean_horizontal.png)&#xA;&#xA;And another question with intuition validated with data.&#xA;&#xA;If you think about it for a while, a median of 1.5 minutes works out to 60/1.5 = 40 movements per hour. That aligns well with the hourly average. It even sits close to Mumbai&#39;s historical peak of 51 movements in an hour and 1,036 daily movements; that number is not far off.&#xA;&#xA;The airport&#39;s fame comes from how little space there is between operations. The chart makes that visible: most of the time, the next takeoff follows a landing almost immediately, and that is exactly the kind of traffic that built BOM&#39;s reputation.&#xA;&#xA;### Weekday Pattern&#xA;&#xA;This is a rather interesting question, since I honestly had no clue about the answer. I thought every day would have the same pattern. I was wrong.&#xA;&#xA;Getting this was straightforward.&#xA;I took the airport events I had already classified as landings and takeoffs, grouped them by weekday in Mumbai time, and counted how many movements landed in each bucket. That gives a simple answer to a simple question: which day of the week gets the most airport activity? 
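
The one subtle step is the timezone: the raw timestamps (typically UTC in ADS-B feeds) have to be converted to Mumbai time before bucketing, otherwise late-evening flights land in the wrong weekday. A small sketch of that conversion, with made-up timestamps:

```python
from collections import Counter
from datetime import datetime, timedelta, timezone

IST = timezone(timedelta(hours=5, minutes=30))  # Mumbai time

# Hypothetical movement timestamps, recorded in UTC.
movements = [
    datetime(2025, 11, 1, 10, 0, tzinfo=timezone.utc),   # 15:30 IST, Saturday
    datetime(2025, 11, 1, 20, 30, tzinfo=timezone.utc),  # 02:00 IST, already Sunday
]

# Convert to Mumbai time first, then bucket by weekday name.
by_weekday = Counter(t.astimezone(IST).strftime("%A") for t in movements)
print(by_weekday)
```

Note how the second movement belongs to Sunday in Mumbai time even though its UTC date is still Saturday.
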
This question is asking for the breakdown of flights across the days of the week.&#xA;&#xA;```&#xA;┌─────────────┬──────────────┬──────────┬──────────┬───────────┐&#xA;│ weekday_num │ weekday_name │ landings │ takeoffs │ movements │&#xA;│    int64    │   varchar    │  int128  │  int128  │   int64   │&#xA;├─────────────┼──────────────┼──────────┼──────────┼───────────┤&#xA;│           0 │ Sunday       │     1444 │     1239 │      2683 │&#xA;│           1 │ Monday       │     1383 │      942 │      2325 │&#xA;│           2 │ Tuesday      │     1695 │     1337 │      3032 │&#xA;│           3 │ Wednesday    │     1748 │     1053 │      2801 │&#xA;│           4 │ Thursday     │     1805 │     1236 │      3041 │&#xA;│           5 │ Friday       │     1859 │     1323 │      3182 │&#xA;│           6 │ Saturday     │     2789 │     2070 │      4859 │&#xA;└─────────────┴──────────────┴──────────┴──────────┴───────────┘&#xA;```&#xA;&#xA;![Weekday Takeoff-Landing Flights Breakdown](https://meetgor-cdn.pages.dev/flight-observatory-mumbai/weekday_breakdown.png)&#xA;&#xA;The result is pretty clear in the sample. Saturday stands out as the busiest day by a wide margin, with 4,859 movements, followed by Friday at 3,182 and Thursday at 3,041. Monday is the quietest at 2,325.&#xA;&#xA;&gt; Landings are higher than takeoffs on every weekday, but the weekend spike is the real signal here. So the sample does not look evenly spread across the week; it has a strong Saturday peak and a weaker Monday trough.&#xA;&#xA;I then looked into the dataset and tried to find the count of each weekday in the sample, since I had around ~70 days of data (1 day * 12 months * 6 years):&#xA;&#xA;- Saturday 12&#xA;- Sunday 11&#xA;- Wednesday 11&#xA;- Friday 11&#xA;- Tuesday 10&#xA;- Thursday 10&#xA;- Monday 8&#xA;&#xA;And that can partly explain why Saturday is the busiest day. 
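
One quick way to weigh that sampling skew is to normalize by the number of sampled days, using the numbers from the two tables above:

```python
# Movements and sampled-day counts, taken from the tables above.
movements = {"Saturday": 4859, "Friday": 3182, "Monday": 2325}
days_sampled = {"Saturday": 12, "Friday": 11, "Monday": 8}

# Average movements per sampled day of each weekday.
per_day = {d: round(movements[d] / days_sampled[d], 1) for d in movements}
print(per_day)
```

Per sampled day, Saturday still comes out ahead in this sample, though the gap to the other days narrows noticeably.
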
The sample is not perfectly balanced, but it is more or less evenly spread across the calendar.&#xA;&#xA;&#xA;### Hotspots / Direction&#xA;&#xA;The question here is: when aircraft start landing or taking off, what direction are they actually pointing? That sounds quite obvious, right? But I want to identify a deeper pattern here: where they are coming from and where they are heading.&#xA;&#xA;That matters because it tells you whether Mumbai&#39;s traffic is spread randomly across the compass or funneled through one stable corridor. If the airport is using the same flow again and again, the headings should cluster into a small set of directions instead of being evenly scattered.&#xA;&#xA;To get this, we can:&#xA;- take each inferred flight run&#xA;- classify it as a landing or takeoff&#xA;- capture the heading at the relevant point in the run&#xA;&#xA;For landings, that is the heading at the end of the run.&#xA;For takeoffs, it is the heading at the start.&#xA;&#xA;![Landing Direction Map](https://meetgor-cdn.pages.dev/flight-observatory-mumbai/landing_direction_map.png)&#xA;&#xA;![Takeoff Direction Map](https://meetgor-cdn.pages.dev/flight-observatory-mumbai/takeoff_direction_map.png)&#xA;&#xA;Those headings are then grouped into direction sectors and counted, which is what produces the direction chart.&#xA;&#xA;![Landing Heading Histogram](https://meetgor-cdn.pages.dev/flight-observatory-mumbai/landing_heading_hist_vertical.png)&#xA;![Takeoff Heading Histogram](https://meetgor-cdn.pages.dev/flight-observatory-mumbai/takeoff_heading_hist_vertical.png)&#xA;&#xA;The result is very strong and very repetitive. Most landings cluster around 270°, with a smaller 90° cluster, and takeoffs are also dominated by 270°. In the heading data, the top landing sector is 270° with 14,316 points, and the top takeoff sector is also 270° with 10,848 points. That tells me the airport is not behaving like a random cloud of movement. 
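
The sector grouping itself is just snapping each heading to the nearest sector centre; here is a sketch with made-up headings and an assumed 45° sector width:

```python
from collections import Counter

# Hypothetical headings (degrees) captured at the end of landing runs.
headings = [268.0, 271.5, 274.0, 92.0, 88.5, 265.0]

def sector(deg, width=45):
    """Snap a heading to the centre of its direction sector."""
    return (round(deg / width) * width) % 360

counts = Counter(sector(h) for h in headings)
print(counts.most_common())
```

With real data, this is what turns a cloud of raw headings into the 270°-dominated histograms above.
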
The airport keeps using the same directional axis again and again, which is exactly why the hotspot map looks like a corridor instead of a scatterplot.&#xA;&#xA;### Time in Airspace&#xA;&#xA;The question here is whether the aircraft are just passing through, or whether they stay around Mumbai for a while. The idea is simple: if landings take longer than takeoffs, that usually means approach and descent are keeping the aircraft in the airspace longer.&#xA;&#xA;One thing I wanted to know was how long aircraft actually stay around Mumbai once they enter the airport flow.&#xA;&#xA;![Time in Airspace](https://meetgor-cdn.pages.dev/flight-observatory-mumbai/special_time_in_airspace.png)&#xA;&#xA;So I looked at the duration of each inferred run and compared landings, takeoffs, and the full set together. The pattern is pretty clear: landings stay in the airspace longer than takeoffs. The median landing run is 34.5 minutes, while the median takeoff run is 25.1 minutes. Across all airport events, the median is 27.3 minutes. That tells me the traffic is not just passing through fast. A lot of it is lingering long enough for the airspace to feel full, especially on the landing side.&#xA;&#xA;The interesting part here is not just that landings last longer than takeoffs. It is what that tells me about the way Mumbai moves aircraft through its airspace. A landing is not a single movement. It is a long approach, an ever-slower descent, and then the final stretch into the runway, so it naturally stays visible in the data for longer. Takeoffs are different. Once an aircraft leaves the runway and starts climbing out, it clears the airport zone much faster.&#xA;&#xA;That is why the landing median is 34.5 minutes, while takeoffs sit closer to 25.1 minutes. This does not mean the aircraft are simply flying slower in the sky. It means the arrival side of the airport keeps aircraft in the flow for longer, which is exactly what you would expect at a busy airport like Mumbai. 
It also hints at sequencing and approach management: arrivals are being held in the airport&#39;s pattern longer than departures, which is the kind of behavior that makes the airspace feel crowded even before you count the total number of flights.&#xA;&#xA;So the real takeaway is this: arrivals spend more time inside the airport&#39;s operating envelope, while departures clear out faster. That difference is part of the airport&#39;s style, and it helps explain why the airspace feels full even when the hourly flow looks ordinary on paper.&#xA;&#xA;&#xA;### Aircraft Turnaround Time&#xA;&#xA;&gt; After this aircraft lands, how long does it stay on the ground before it takes off again?&#xA;&#xA;This is a good question, because it gives a deeper look at the activity at the airport. It doesn&#39;t necessarily reveal much on its own, but it is still a valid metric to observe.&#xA;&#xA;So I matched each landing with the next takeoff for the same aircraft on the same day. That gives a proper aircraft-level ground-time measure, instead of a flight-level handoff.&#xA;&#xA;![Aircraft Turnaround Time](https://meetgor-cdn.pages.dev/flight-observatory-mumbai/special_same_aircraft_ground_gap.png)&#xA;&#xA;What I observed:&#xA;The turnaround time is steady, but not extremely fast. The median ground time is 96.5 minutes, which is about 1 hour 37 minutes. The middle half of the aircraft stay between 77.5 and 130.5 minutes, and the 90th percentile goes up to 185.3 minutes. Most of the pairs sit in the 60-180 minute range, which suggests a working fleet that is being reused at a regular pace rather than left idle for long stretches.&#xA;&#xA;That also hints at the kind of airport Mumbai is. This looks much more like a high-frequency rotation airport than a place where aircraft disappear for half a day between uses. 
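
The same-aircraft matching described above can be sketched like this; the registrations and times are invented, and the real version also restricts pairs to the same day:

```python
import statistics

# Hypothetical events: (registration, minute_of_day, kind), sorted by time.
events = [
    ("VT-ABC", 600, "landing"), ("VT-XYZ", 615, "landing"),
    ("VT-ABC", 700, "takeoff"), ("VT-XYZ", 745, "takeoff"),
]

# Pair each landing with the next takeoff of the SAME aircraft.
ground_times = []
for i, (reg, t, kind) in enumerate(events):
    if kind == "landing":
        for reg2, t2, kind2 in events[i:]:
            if reg2 == reg and kind2 == "takeoff":
                ground_times.append(t2 - t)
                break

print(ground_times, statistics.median(ground_times))
```

Matching on the registration is what makes this an aircraft-level measure rather than the flight-level landing-to-takeoff gap from earlier.
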
A lot of the traffic is probably made up of domestic and short-to-medium-haul operations, where the same aircraft can land, sit for a bit, and then go back out again within a couple of hours. So the takeaway is not just &#34;there is a gap&#34;. It is that Mumbai keeps the aircraft moving in a fairly tight loop, which fits the airport&#39;s reputation for being heavily used and constantly in motion.&#xA;&#xA;### International vs Domestic Flights&#xA;&#xA;I also wanted to see how much of Mumbai&#39;s traffic is Indian-registered versus foreign-registered. I could not see the route directly in the ADS-B feed, so I used aircraft registration as a proxy: `VT-` for Indian-registered aircraft and everything else as foreign-registered. That is not the same thing as domestic vs international routes, but it gives a useful high-level split.&#xA;&#xA;```&#xA;┌────────────────────────┬───────────┐&#xA;│         bucket         │ movements │&#xA;│        varchar         │   int64   │&#xA;├────────────────────────┼───────────┤&#xA;│ Unknown                │      3399 │&#xA;│ Indian-registered (VT) │    110492 │&#xA;│ Foreign-registered     │     37589 │&#xA;└────────────────────────┴───────────┘&#xA;```&#xA;&#xA;![Domestic vs International Flight Aircraft Carrier Split](https://meetgor-cdn.pages.dev/flight-observatory-mumbai/special_domestic_international_proxy.png)&#xA;&#xA;The result is clear: about `72.9%` of movements are Indian-registered, while `24.8%` are foreign-registered.&#xA;So the airport is dominated by Indian traffic by count, but foreign aircraft are not a mere side note. They are a substantial and regular part of the flow.&#xA;&#xA;&#xA;## What Surprised Me&#xA;&#xA;What surprised me most was not that Mumbai is busy. I already expected that. What surprised me was how tight and consistent the whole system is. The airport does not just have a lot of movement, it has a deadly consistency to it. 
A landing is followed by a takeoff almost immediately, the same aircraft comes back into service in about an hour and a half, and the busiest hours are not random bursts but repeatable peaks that keep showing up in the same parts of the day.&#xA;&#xA;The other thing that stood out was how directional the traffic is. Once I started looking at landings and takeoffs on the map, the pattern stopped looking like scattered noise and started looking like a corridor. The airport is not using the full sky evenly. It keeps leaning on the same approach and departure axis, which makes the movement feel much more concentrated than a simple flight count would suggest. In a way that is a no-brainer: scattering planes unpredictably over the city would disrupt residential life with noise, and that is not something airports should be doing. Mumbai handles it very neatly.&#xA;&#xA;I also did not expect the weekday pattern to be so uneven. I assumed the traffic would be roughly similar across the week, but Saturday clearly pulled ahead in the sample. That was a good reminder that airports are not just shaped by the clock, they are also shaped by the calendar. Even with a dataset this limited, the weekly pattern still shows through.&#xA;&#xA;There is a big myth that gets debunked here: &#34;Mumbai Airport is busiest in the late night&#34;. The opposite is true. The airport is busiest in the morning and evening, and quietest in the late night. Even in the quiet hours, the airport is not empty for long. There are international flights in the late night, but they don&#39;t land and take off anywhere near as frequently as IndiGo does during the day, so Mumbaikars can finally sleep.&#xA;&#xA;Another small but satisfying surprise was that the numbers matched the memory. As a child, I had the feeling that Mumbai&#39;s sky was never empty for long. The data backed that up. 
It felt satisfying to see the numbers in front of me for the landing frequency, the landing-to-takeoff gaps, the dominance of one airline, and to map them to intuition and memories. The frequency is crazy consistent, the turnaround period is really tight, people really do prefer IndiGo, and everything feels calm while actually being quick and consistent. The airport really does live up to its name as the &#34;busiest single-runway airport&#34;.&#xA;&#xA;## Conclusion&#xA;&#xA;Well! That was a lot, I admit. But it was fun. At least to me (I did say it was a post for me!). I hope you got a little smile out of it, or at least felt good reading about how Mumbai&#39;s airport operates.&#xA;This is still ADS-B data, so it is a proxy, not an official operational log. But for the question I started with, it is enough. The feeling I had as a kid was real: Mumbai&#39;s sky was never empty for long. The landing frequency really is roughly one every 2 minutes 30 seconds, which just makes me feel joyous, and the airport turnaround time is a sharp and tight one, making it a truly Mumbai-like busy runway.&#xA;But that is just part 1, buddy! I haven&#39;t yet nerded out about the complications of ADS-B data and my struggles with it, fighting with limited disk space to compress the data and juggernauting queries to get results. That is the next part.&#xA;&#xA;&#xA;Check out the full case study here -&gt; [Flight Observatory - Mumbai Airspace Case Study](https://dev.meetgor.com/flight-observatory/case-study/mumbai-airport/)</content>
      <type></type>
    </item>
    <item>
      <title>2025: Year Review</title>
      <link>https://www.meetgor.com/posts/2025-review</link>
      <description>I had focused on tech on this reviews. I would keep it same this year, I might write a separate non-tech, a human yearly review on my separate blog. Gist I move</description>
      <pubDate>Wed, 31 Dec 2025 00:00:00 UTC</pubDate>
      <content>I had focused on tech on this reviews. I would keep it same this year, I might write a separate non-tech, a human yearly review on my separate blog.&#xA;&#xA;## Gist&#xA;&#xA;I moved from a software developer to a reliable and responsible developer. I took ownership and sat on problems more than the level of my comfort zone. It made me a better developer, it helped me sympathize the people around me, and connect more. These moments might create life long relations, but from the macro perspective they looked frustrating and annoying at times. Here I am still finding peace and standing and walking through what has not been a smooth journey both at work and life. I was thinking work and life could mean same thing once I got out of college, but those kept diverging and I had to keep both my feet at par with each one&#39;s pace.&#xA;&#xA;I hit you with quotes and I will again&#xA;&#xA;&gt; &#34;Its easy to judge anyone, its harder to be someone&#34;&#xA;&#xA;I have read a lot of books, and one quote just hits me is this, I will explain this in my non-tech review post. Its quite easy to point out flaws and judge the other person from outside, but the same person if you were in his or her shoes, its quite hard to the otherwise.&#xA;&#xA;It&#39;s easy to judge a developer, but people slowly realized vibe coding hits a halt at some point, you are not replacing a developer, you are trying to replace someone who cares deeply about the problem and not someone who just can write code.&#xA;&#xA;Vibe coding rose and fall with the same speed.&#xA;&#xA;## Key Achievements&#xA;&#xA;1. Got promoted as a Software Developer from Junior (completed 2 years at Docsumo 8 month internship + 16 month full time)&#xA;2. 
Wrote 52 newsletter editions consistently reflecting on weekly learning&#xA;&#xA;&#xA;I didn&#39;t finish any of the goals I wrote in the 2024 review:&#xA;- Write blogs for FreeCodeCamp (at least 3) (none written)&#xA;- Learn Kubernetes in detail (not yet)&#xA;- Write 25 articles on 100-days-of-Golang (wrote 3)&#xA;- Learn Rust or Zig (Nope)&#xA;- Get 2K subscribers on YouTube (Nope, not even 100)&#xA;&#xA;I want to be straight honest here: things changed rapidly in 2025, and I barely had things in control. I was consistent, I had time, yes! But I had to pivot to other goals to keep myself sane.&#xA;&#xA;Not clearing those 5 goals doesn&#39;t mean I cleared none.&#xA;&#xA;I did:&#xA;- Understand AI Agents before the hype (early January)&#xA;- Learn in depth about SQL and SQLite with a learning log&#xA;- Complete Advent of SQL (15 blog posts)&#xA;&#xA;Also some broader or more abstract goals like:&#xA;- Adapt LLMs into the development process&#xA;- Reflect more than consume&#xA;- Understand one technology in depth (SQLite)&#xA;- Complete things rather than shove them off after a while (project graveyard, or irresponsible employee)&#xA;&#xA;&#xA;Let&#39;s break it down into 3 things:&#xA;1. Work&#xA;2. Side hustles&#xA;3. Learning&#xA;&#xA;## Work&#xA;&#xA;I work at Docsumo, so I work with documents. The field of intelligent document processing has been taken by storm by LLMs. Prior to LLMs, there were ML and rule-based systems to extract data. Now with LLMs, we not only can extract structured data, but can have a conversation with the document.&#xA;&#xA;It has changed the way documents are treated. We decided to go all in on LLMs and use the latest models for extracting data. It was tough; there were issues, weird behaviors and of course hallucinations. 
Most of them were related to the prompt, but also to how we call the LLM: through a plain API call, with structured output, or as an agent.&#xA;&#xA;We started with the first simple API call and sat on it for quite some months, but external factors pushed me to use other tools to make the extraction more accurate. I started experimenting with LLMs more than ever from the 2nd quarter of 2025. LLMs were released every week with minor improvements or major shifts in tool calling. Everything had to go through the LLM: writing and reviewing code with coding agents, using LLMs to extract data, debugging with LLMs.&#xA;&#xA;There was a tremendous shift in how we develop software, and I was blessed and lucky to be in a company that helps developers stay up to date with cutting-edge tools like Cursor, Windsurf, CodeRabbit, Gemini, the OpenAI playground and what not. I was given these tools as a new toolkit. No one actually knew how to use them; they had a general sense of intuition, but it was like an alien tool. Everyone gave their own shot at using it, and most of them hated it at first, but then they understood the pattern, changed the way they (and I) used it, and almost in no time it became the default editor and tool that we use now.&#xA;&#xA;I never knew change in software could be that rapid! Maybe this is a special time, but we say that every time.&#xA;&#xA;I honestly was reluctant to use AI tools as a daily driver at first, but I saw the speed of others, and it bugged me that I was missing out on something. And that was code review and a hands-off understanding of the code.&#xA;&#xA;It was very hard for me to understand and debug code that I hadn&#39;t written. I just accepted it without understanding, because it feels daunting to review code. Especially when you haven&#39;t written and reviewed a lot of code, human code. It&#39;s almost like reviewing the classics (Shakespeare and Tolstoy) after reading Harry Potter. 
You don&#39;t have the vocabulary to understand those books; you can&#39;t review things when you don&#39;t know the patterns and the nuts and bolts of them. It is possible, but it&#39;s quite tedious and overwhelming.&#xA;&#xA;&#34;Overwhelming&#34; is almost an understatement for how I felt working throughout the year. Coding was supposed to be hands-on, but being handed a magical bot that can write and churn out hundreds and thousands of changes in a few minutes is just not processable by my tiny brain.&#xA;&#xA;I failed badly at using these tools at first, full of skepticism. But then slowly, over a couple of months, with the right tasks coming to hand and trying to throw anything at it, I understood what can and can&#39;t work when a task is given to an LLM. I realized the hard way that LLMs are good for throw-away code: code you use to produce something where you don&#39;t care about how it did it. Log analysis, scripts for downloading data, filtering files based on names, analysing and combining a bunch of CSVs; I don&#39;t care if you use proper variables and correct abstractions to do those things. These are the things where I never second-guessed myself before throwing them at an LLM, taking whatever it spat out with a grain of salt and modifying it till it made sense.&#xA;&#xA;I still find it hard to hand everything of that sort to an LLM, as then nothing is left to do! I feel bored! Coding was supposed to be fun, a bit time consuming, but a good exercise for the brain. But people have started preferring velocity over craft. I am not against it, but sometimes we get too carried away with human ambition.&#xA;&#xA;I hope we slow down in 2026, though I am skeptical about it; it might just get worse.&#xA;&#xA;We need time to understand and explore what can be done with the current set of tools and technologies that these AI labs and all of us have created and released just in the last year. 
There is an absurd amount of things that we need to build with them; in fact, I think we could stay as-is without new models or breakthroughs for decades with just these tools. But damn! We are not stopping here, are we?&#xA;&#xA;I think in 2026 I need to build more and get a taste of, and expertise in, systems. I am an engineer, and merely writing code is already obsolete. The only thing that can stand out is understanding how things can fit and change in existing or new software systems.&#xA;&#xA;Just technical metrics; I don&#39;t care much, but they give me a sense of wonder: wow, that was quite a lot!&#xA;- 336 commits on core services that I maintain&#xA;- 101 PRs closed&#xA;- Favorite model? GPT 5 (I am surprised, Cursor)&#xA;- 708 M tokens burnt (geez)&#xA;&#xA;&#xA;## Side Hustle&#xA;&#xA;Side projects, of course!&#xA;&#xA;I had 100+ ideas but never picked up any one of them seriously. What is wrong with me?&#xA;&#xA;Is it due to AI? No. It was the same before AI too. AI just made it easier to reach that point.&#xA;&#xA;With LLMs and these CLI coding agents, I can scaffold projects in seconds rather than minutes. I initially thought it was cool, so that I could actually build the core thing once the boilerplate code was created by the LLM. But boy! I was wrong. It creates a layer of understanding that I then have to wade through; then, with the half-mine, half-LLM-baked system, I have to steer myself or the LLM into completing the project. And that&#39;s the bottleneck I am not able to solve yet. I am writing this and realizing the problem only now. Reflection is key to understanding problems. 
Writing is the easiest way to do it.&#xA;&#xA;I don&#39;t have much to say. I tried building CLIs, TUIs, and web apps; all failed because I didn&#39;t think through the parts, and I was either too lazy to write them myself or the LLM was not able to understand me. My prompting skills have gotten better over time, but I am bad at expressing things in plain English quickly. I need time, but the excitement of seeing the result dampens it.&#xA;&#xA;The side project graveyard is ever expanding, and we need something to cure it! LLMs have made it easier to expand, but they also have the power to complete those projects; skill is the gap preventing me from achieving just that.&#xA;&#xA;2026, fingers crossed. I can solve at least one problem that people can relate to and love.&#xA;&#xA;## Learning&#xA;&#xA;This, I think, is the least interesting thing for people right now. LLMs have made it look obsolete to learn what&#39;s new in SQLite 3.51, or how to use f-strings and t-strings in Python 3.14. But I think they have given us ample reasons to learn anything. Why? Developers are paid to fix problems, not write code. If you know certain things, you always will have an advantage over LLMs. You can be smart by leveraging LLMs to learn: in thinking mode, one can spit out legit facts yet do the opposite of what it intended; you as a developer can see and do things better than LLMs. This is still a problem in most of the overthinking LLMs. They are getting better, but human thinking will always out-power an LLM with fake thinking.&#xA;&#xA;I decided to learn SQLite in 2025, and will continue learning it in 2026; it&#39;s something that can never be completed. I might also start learning Kubernetes the same way, though I have wanted to learn it for 2 years but never started.&#xA;&#xA;People will soon realize that learning is something they can&#39;t outsource to an LLM, nor the thinking. 
Because humans at their core are curious creatures; if you think you can live without learning anything, then you are not a software developer. Simple. Learning is a lifelong process; nothing can stop it, and it can never be completed. There will always be something to learn. For instance, have you ever wondered how LLMs can read files? Like PDFs and Word documents or images: since the data is not clean text, how does an LLM parse it? Well, that is a rabbit hole for learning which could benefit you over just using an LLM to churn through trial and error for the correct result. If you have a tool, you need to understand it to the finest detail to hone and master it.&#xA;&#xA;&gt; Learning to learn is going to be more important than building in the future&#xA;&#xA;That&#39;s just my opinion! Who knows, humans may just take AI slop and feed it to the next AI until they have something they like, and never learn! That would be the worst of times for humans like me to live in.&#xA;&#xA;- 52 newsletters published, every Saturday at midnight IST&#xA;- 53 SQLogs written&#xA;- 3 in-depth posts about Golang&#xA;- 458 commits&#xA;- 209-day reading streak on daily.dev&#xA;&#xA;&#xA;&#xA;## Lessons Learnt&#xA;&#xA;1. Never outsource thinking&#xA;2. Be with people and speak out&#xA;3. Habits are actually easier to build; the temptation to skip one for today is what gets the better of me&#xA;4. Developers were never there to write code; they were there to care about the problem in the deepest way&#xA;5. I can learn anything; the only things in my way are my mindset and distractions&#xA;&#xA;Keeping my mind peaceful on the outside, though a lot is burning in the house, is something I juggled with.&#xA;&#xA;## Off to 2026&#xA;&#xA;There are surely certain things I want to do explicitly in 2026, like:&#xA;1. Understanding databases on a deeper level&#xA;2. Understanding LLMs on a deeper level&#xA;3. Building tools with LLMs for fun&#xA;4. 
Writing on my blog with less friction, to publish thoughts, journals, devlogs, or TILs&#xA;5. Building educational material related to databases and SQL&#xA;&#xA;Yes, these are very specific, but I don&#39;t like lame, broad goals like &#34;Be a better developer&#34;, &#34;Understand systems&#34;, and whatnot. I want to dive deep; I want to replace scrolling with action.&#xA;&#xA;2025 was wild; it felt slow, yet models kept dropping one after another!&#xA;&#xA;I don&#39;t expect LLMs to go away in 2026; that is the last thing that could happen. But I can see things starting to be built around them: tooling for understanding them, helping with the smallest things humans care about. It&#39;s already in all the things we use, yet people don&#39;t use these as reliable tools. The ones who do might soon realize what kind of fire they are playing with.&#xA;&#xA;Anyways! I wish everyone a Happy New Year, and may 2026 be kinder and more fulfilling to you.&#xA;&#xA;Keep Thinking!</content>
      <type></type>
    </item>
    <item>
      <title>Golang Web: DELETE Method</title>
      <link>https://www.meetgor.com/posts/golang-web-delete-method</link>
      <description>Introduction We have explored GET, POST, PUT, and PATCH methods in our previous entries for this series. It is the final entry for all the HTTP methods which is</description>
      <pubDate>Sun, 12 Jan 2025 00:00:00 UTC</pubDate>
      <content>## Introduction&#xA;&#xA;We have explored GET, POST, PUT, and PATCH methods in our previous entries for this series. It is the final entry for all the HTTP methods which is going to be the `DELETE` method. In this entry of the series, we will take a view on how to construct and request an HTTP DELETE Method to an API Endpoint.&#xA;&#xA;The DELETE Method is quite simple. You just provide the URI of the resource. Most often, the request body is not needed. That request will simply delete the request entity from the server (in the database or wherever the resource is stored).&#xA;&#xA;Let’s understand it in a more detailed way.&#xA;&#xA;## What is a DELETE Method&#xA;&#xA;The DELETE method requests that the server remove the association between the target resource and its URI (Uniform Resource Identifier). This doesn&#39;t necessarily mean the underlying data is physically deleted; it means the resource is no longer accessible through that specific URL. DELETE can also be used to remove relationships between resources, effectively &#34;delinking&#34; them.&#xA;&#xA;According to the RFC:&#xA;&#xA;&gt; The DELETE method requests that the origin server remove the association between the target resource and its current functionality.&#xA;&#xA;Examples:&#xA;&#xA;* **Social Media** (Deleting a Tweet): When you delete a tweet, you&#39;re sending a DELETE request to the server. This removes the tweet from your timeline and makes it inaccessible via its URL. While the data might be archived or retained for a period, the key action is removing the public association between the tweet and its online presence. This is closer to a true deletion than the cart example.&#xA;    &#xA;* **E-Commerce** (Removing an Item from a Cart): When you remove an item from your online shopping cart, you&#39;re sending a request (often a DELETE) to remove the item from your cart. The actual product remains available in the store&#39;s inventory. This is a clear example of delinking. 
You&#39;re deleting the link between your cart and the product, not the product itself.&#xA;    &#xA;&#xA;Let’s start constructing a simple DELETE Request in Golang.&#xA;&#xA;## A Simple DELETE Request&#xA;&#xA;We don’t have a specific method for `DELETE` in `net/http` as we have for `GET` and `POST`, so we need to create a request and use a client to send the request.&#xA;&#xA;### Constructing the URL&#xA;&#xA;We would need to define the endpoint that we are hitting. We can directly use the API URL or construct the API URL on the fly, depending on the ID and dynamic parameters. DELETE requests usually delete a particular entity. We would generally have some form of identifier for that entity/object in the database, etc. So, in this case, it is the user&#39;s ID, which we can pass in the URL.&#xA;&#xA;```go&#xA;// define URL to hit the API&#xA;apiURL := &#34;https://reqres.in/api/users/4&#34;&#xA;&#xA;// OR&#xA;&#xA;baseURL := &#34;https://reqres.in&#34;&#xA;userID := 4&#xA;apiURL := fmt.Sprintf(&#34;%s/api/users/%d&#34;, baseURL, userID)&#xA;```&#xA;&#xA;We can either directly define the URL or dynamically construct it, which is quite straightforward. The latter is the one we usually use and design for.&#xA;&#xA;The DELETE Request doesn’t usually require a request body; however, if your server requires some specifics, you can construct the body as we did with the previous examples in the POST, PUT, or PATCH method requests.&#xA;&#xA;### Constructing and sending the DELETE Request&#xA;&#xA;We can simply construct the request by specifying http.MethodDelete as the request method, the URL to hit, and a body (optional), just like a `GET` request. 
Once we have the request, we can create the default client and send the request.&#xA;&#xA;```go&#xA;// create a DELETE request&#xA;req, err := http.NewRequest(http.MethodDelete, apiURL, nil)&#xA;if err != nil {&#xA;&#x9;log.Fatal(err)&#xA;}&#xA;&#xA;// construct the default http client and send the request&#xA;client := &amp;http.Client{}&#xA;resp, err := client.Do(req)&#xA;if err != nil {&#xA;&#x9;log.Fatal(err)&#xA;}&#xA;```&#xA;&#xA;This is the usual code for constructing an HTTP request in Golang: we create a request using the NewRequest function, which takes the method type, the URL to send the request to, and the body, if any. Then we need an http.Client to send the request; we usually create a client with default values and send the request using the Do method on that client, passing the request we constructed earlier.&#xA;&#xA;### Fetching the Response&#xA;&#xA;Once the request is sent, we can fetch the response, read the body as bytes, and check the status to see whether the request succeeded or failed.&#xA;&#xA;```go&#xA;fmt.Println(&#34;Response Status:&#34;, resp.Status)&#xA;respBody, err := io.ReadAll(resp.Body)&#xA;if err != nil {&#xA;&#x9;log.Fatal(err)&#xA;}&#xA;fmt.Println(&#34;Response Body:&#34;, string(respBody))&#xA;```&#xA;&#xA;We can grab the Status field to check the status code and message for the request. Usually, the body will be empty, since there is no resource we are expecting after deletion of the object. 
However, if the server is implemented in a way that returns the deleted object, you can read the bytes of the body and unmarshal them into the desired struct.&#xA;&#xA;So, that is the entire code to create a simple DELETE request with Go: construct the URL with the identifier of the resource to be deleted, create the request, and send it; if the status code is 204 (usually), we can assume it succeeded.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;io&#34;&#xA;&#x9;&#34;net/http&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;baseURL := &#34;https://reqres.in&#34;&#xA;&#x9;userID := 2&#xA;&#x9;apiURL := fmt.Sprintf(&#34;%s/api/users/%d&#34;, baseURL, userID)&#xA;&#xA;&#x9;req, err := http.NewRequest(http.MethodDelete, apiURL, nil)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;fmt.Println(err)&#xA;&#x9;&#x9;return&#xA;&#x9;}&#xA;&#xA;&#x9;client := &amp;http.Client{}&#xA;&#x9;resp, err := client.Do(req)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;fmt.Println(err)&#xA;&#x9;&#x9;return&#xA;&#x9;}&#xA;&#x9;defer resp.Body.Close()&#xA;&#xA;&#x9;fmt.Println(&#34;Response Status:&#34;, resp.Status)&#xA;&#x9;respBody, err := io.ReadAll(resp.Body)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;fmt.Println(err)&#xA;&#x9;&#x9;return&#xA;&#x9;}&#xA;&#x9;fmt.Println(&#34;Response Body:&#34;, string(respBody))&#xA;}&#xA;```&#xA;&#xA;## Facts about the DELETE Method&#xA;&#xA;* DELETE Method is idempotent: Sending the same DELETE request multiple times has the same effect on the server; once the resource is deleted, it no longer exists, so repeating the request changes nothing further (although the response status may differ on subsequent calls, for example 404 instead of 204).&#xA;    &#xA;* DELETE Method is not safe: The operation is quite dangerous as it could literally remove a resource from a database/storage. 
Hence it is called unsafe, as it makes changes on the server.&#xA;    &#xA;&#xA;I have also included some more examples of DELETE requests [**here**](https://github.com/Mr-Destructive/100-days-of-golang/blob/main/web/methods/delete/).&#xA;&#xA;That&#39;s it for the 36th part of the series; all the source code for the examples is linked on GitHub in the [**100 days of Golang**](https://github.com/Mr-Destructive/100-days-of-golang/tree/main/web/methods/delete/) repository.&#xA;&#xA;[**100-days-of-golang**](https://github.com/Mr-Destructive/100-days-of-golang)&#xA;&#xA;## Conclusion&#xA;&#xA;That would be it for the DELETE Method in Golang. We can use this method just like a normal `GET` request, just a bit more carefully.&#xA;&#xA;Hope you found this article helpful; leave some feedback or suggestions if you have any. Thank you for reading.&#xA;&#xA;Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>Golang Web: PATCH Method</title>
      <link>https://www.meetgor.com/posts/golang-web-patch-method</link>
      <description>Introduction In previous sections of this series, we&#39;ve covered the GET, POST, and PUT methods. Now, we will explore the PATCH method, which differs from the ot</description>
      <pubDate>Sat, 11 Jan 2025 00:00:00 UTC</pubDate>
      <content>## Introduction&#xA;&#xA;In previous sections of this series, we&#39;ve covered the GET, POST, and PUT methods. Now, we will explore the PATCH method, which differs from the others in several key ways. The PATCH method is somewhat more flexible and depends on how the server or API you&#39;re working with is designed.&#xA;&#xA;In this section, we&#39;ll focus on understanding what the PATCH method is and how to use it. While we will dive deeper into building and structuring a full CRUD API later in the series, the focus here will be on the what and why of the PATCH method, not the how.&#xA;&#xA;## What is the PATCH Method?&#xA;&#xA;The PATCH method is often compared to the PUT method, but with one important distinction: PATCH is used to perform partial updates on a resource. Unlike PUT, which typically requires you to send the entire resource to update it, PATCH allows you to send only the fields that need to be updated. This makes it a more efficient option when updating a subset of a resource.&#xA;&#xA;In a PATCH request, the body usually contains instructions in a format like JSON, which specifies the fields to update. These instructions define the changes to be applied to the resource. For example, you may only want to change one field of a user&#39;s profile, such as their email address, while leaving the rest of the data untouched.&#xA;&#xA;## PATCH vs. PUT&#xA;&#xA;Key Differences While both PATCH and PUT are used to modify resources, there are significant differences in their behavior:&#xA;&#xA;* PUT replaces the entire resource. When you send a PUT request, you must include the full representation of the resource, even if you&#39;re only changing a small part of it.&#xA;    &#xA;* PATCH, on the other hand, is for partial updates. You only need to include the fields that are changing, not the entire resource.&#xA;    &#xA;&#xA;If the update involves more fields than just the ones you&#39;re changing, PUT may be the better choice. 
However, the scope of this article is to focus solely on the PATCH method.&#xA;&#xA;## How Does a PATCH Request Work?&#xA;&#xA;In the simplest terms, a PATCH request allows you to perform a partial update on a resource. It is similar to a PUT request, but more specific in how it updates the resource. According to the HTTP specification, an ideal PATCH request should:&#xA;&#xA;* Accept a &#34;patch document&#34; in the request body, which contains the list of operations to apply (e.g., &#34;replace&#34;, &#34;add&#34;, &#34;remove&#34;).&#xA;    &#xA;* Apply these updates to the target resource.&#xA;    &#xA;* If the update cannot be applied correctly, the operation should fail without applying any of the changes.&#xA;    &#xA;&#xA;This ensures that no partial or inconsistent updates are left behind.&#xA;&#xA;For example, if you&#39;re updating a user&#39;s email address and something goes wrong in the middle of the operation, the PATCH request should ensure that the email isn&#39;t updated partially. If there’s an error, none of the updates should be applied, ensuring data consistency.&#xA;&#xA;Also, the PATCH method is not necessarily idempotent, meaning that if you send the same request twice, it need not produce the same result. 
Because we are not sending the full original entity, only the partial set of fields that need to be updated, subsequent requests might keep modifying the entity; the request only identifies the resource from the URI and the fields to update from the request body.&#xA;&#xA;Now, let’s sum up the patch request in a few words:&#xA;&#xA;* Updates specific fields mentioned in the patch document&#xA;    &#xA;* Can be partial (only the fields that need to be updated are sent, unlike PUT, which typically replaces the entire resource)&#xA;    &#xA;* Not necessarily idempotent (depends on the implementation)&#xA;    &#xA;* Not Safe (since resources will be updated on the server side)&#xA;    &#xA;&#xA;## Basic PATCH request&#xA;&#xA;Let’s start with the basic PATCH request that we can create in Golang. The [net/http](https://pkg.go.dev/net/http) package will be used to construct the request, and we will be using `encoding/json` and some other utilities for string and byte parsing.&#xA;&#xA;So, first we will construct an HTTP request using [http.NewRequest](https://pkg.go.dev/net/http#NewRequest) with parameters like the HTTP method to use, the URL to hit, and the request body, if any. We will then need to send the JSON body, which will consist of the fields to be updated.&#xA;&#xA;### Defining the API/Server Endpoint URL&#xA;&#xA;We would need to define the endpoint that we are hitting; we can directly use the API URL or construct it on the fly depending on the id and other dynamic parameters. As PATCH requests usually modify a particular entity, we would generally have some form of identifier for that entity/object in the database, etc. 
So in this case, it is the `id` of the post, which we can pass in the URL.&#xA;&#xA;```go&#xA;// define URL to hit the API&#xA;// apiURL := &#34;https://jsonplaceholder.typicode.com/posts/4&#34;&#xA;&#xA;// OR construct it dynamically&#xA;baseURL := &#34;https://jsonplaceholder.typicode.com&#34;&#xA;postId := 4&#xA;postURL := fmt.Sprintf(&#34;%s/posts/%d&#34;, baseURL, postId)&#xA;```&#xA;&#xA;We can either directly define the URL or dynamically construct it, which is quite straightforward. The latter is the one we usually use and design for; it is also what the following snippets assume, since they refer to `postURL`.&#xA;&#xA;### Constructing the JSON Body&#xA;&#xA;This section is a little dependent on the context, as you might have a direct json string that you can pass to the API, or you might have a golang object that you need to marshal in order to convert it into a string/bytes.&#xA;&#xA;1. Direct JSON String&#xA;    &#xA;    So, there is nothing to do here, since the object is already in the form of a json string.&#xA;    &#xA;    ```go&#xA;    reqBody := `{&#34;body&#34;: &#34;new body&#34;}`&#xA;    ```&#xA;    &#xA;    However, if you have certain fields that you need to exclude or omit, you have to construct a struct and then marshal it.&#xA;    &#xA;2. Marshalling (converting an object into bytes/string)&#xA;    &#xA;    We need to convert the Golang native object into some form of a json string or bytes that can be sent over the network. 
That process is called [marshalling](https://en.wikipedia.org/wiki/Marshalling_\(computer_science\)) or serialisation.&#xA;    &#xA;&#xA;```go&#xA;type Post struct {&#xA;&#x9;ID     int    `json:&#34;id,omitempty&#34;`&#xA;&#x9;Title  string `json:&#34;title,omitempty&#34;`&#xA;&#x9;Body   string `json:&#34;body,omitempty&#34;`&#xA;&#x9;UserId int    `json:&#34;userId,omitempty&#34;`&#xA;}&#xA;&#xA;userObj := Post{&#xA;&#x9;Body: &#34;New Body&#34;,&#xA;}&#xA;&#xA;var reqBody []byte&#xA;reqBody, err := json.Marshal(userObj)&#xA;if err != nil {&#xA;&#x9;log.Fatal(err)&#xA;}&#xA;&#xA;log.Println(&#34;New body:&#34;, string(reqBody))&#xA;// New body: {&#34;body&#34;:&#34;New Body&#34;}&#xA;```&#xA;&#xA;In the above snippet, we have defined a `Post` struct with fields like `ID`, `Title`, `Body`, and `UserId`, and each has the `omitempty` tag along with the json field name that we want to marshal into. The omitempty option will omit the fields that are empty or not set on the object/instance of this structure. So in the example, `userObj` is an instance of the `Post` struct and only has `Body` populated, so `reqBody` will only contain the one field `body` in the json representation. [json.Marshal](https://pkg.go.dev/encoding/json#Marshal) is the function that we use to convert the object/instance into byte form.&#xA;&#xA;This `reqBody` will serve as the request body for the request, which will be a `PATCH` method to the mentioned endpoint / API URL.&#xA;&#xA;### Constructing the HTTP PATCH Request&#xA;&#xA;Now that we have the parts we need to construct the request, we can combine them and hit the endpoint. 
However, it is a bit different compared to the `GET` and `POST` requests that we make in Golang. The HTTP package has built-in helpers for the `GET` and `POST` methods; however, for methods like `PUT`, `PATCH`, `DELETE`, and others, we need to construct a [Request](https://pkg.go.dev/net/http#Request) object and then send that request.&#xA;&#xA;```go&#xA;req, err := http.NewRequest(&#34;PATCH&#34;, postURL, strings.NewReader(reqBody))&#xA;if err != nil {&#xA;&#x9;log.Fatal(err)&#xA;}&#xA;req.Header.Set(&#34;Content-Type&#34;, &#34;application/json&#34;)&#xA;&#xA;// in case weird utf-8 characters appear in the body&#xA;//req.Header.Set(&#34;Content-Type&#34;, &#34;application/json; charset=utf-8&#34;)&#xA;```&#xA;&#xA;To do that, we call the [NewRequest](https://pkg.go.dev/net/http#NewRequest) function with parameters like the HTTP method, the URL, and the request body, all of which we have at the moment.&#xA;&#xA;* The method is `PATCH`&#xA;    &#xA;* The URL is `postURL`&#xA;    &#xA;* The body is `strings.NewReader(reqBody)`, as we need an `io.Reader` object instead of a string or byte slice&#xA;    &#xA;&#xA;So, once we have that, we also set the `Header` with the field `Content-Type` and the value `application/json`, since the request body is the json representation of the patch document that will be sent.&#xA;&#xA;### Sending the Request&#xA;&#xA;Once the `req` object is created, we also need a [Client](https://pkg.go.dev/net/http#Client) to send the request, so we create a default http.Client object and call the [Do](https://pkg.go.dev/net/http#Client.Do) method with `req` as the request parameter in order to send the request with the default client.&#xA;&#xA;This method returns the response object, and an error if any.&#xA;&#xA;We also add `defer resp.Body.Close()` in order to avoid leaks and safely access the response body.&#xA;&#xA;```go&#xA;client := &amp;http.Client{}&#xA;resp, err := client.Do(req)&#xA;if err != nil 
{&#xA;&#x9;log.Fatal(err)&#xA;}&#xA;defer resp.Body.Close()&#xA;```&#xA;&#xA;At this point, we can start consuming the response and use it for further processing as per our needs.&#xA;&#xA;### Unmarshalling the Response&#xA;&#xA;We first read the response into a byte representation using the io.ReadAll method, and then use json.Unmarshal to convert the bytes into a golang object/instance.&#xA;&#xA;```go&#xA;var updatedPost Post&#xA;&#xA;respBody, err := io.ReadAll(resp.Body)&#xA;if err != nil {&#xA;&#x9;log.Fatal(err)&#xA;}&#xA;&#xA;// convert the response json bytes to Post object in golang&#xA;err = json.Unmarshal(respBody, &amp;updatedPost)&#xA;if err != nil {&#xA;&#x9;log.Fatal(err)&#xA;}&#xA;&#xA;fmt.Println(updatedPost)&#xA;fmt.Println(updatedPost.Title)&#xA;```&#xA;&#xA;In the above example, we have read the response body, which can be accessed as the `Body` field of the [Response](https://pkg.go.dev/net/http#Response) object via the `resp` variable. The io.ReadAll function returns `respBody` as a byte slice, or an error if any. Then, using these bytes, we call the json.Unmarshal function to populate the `updatedPost` object of the Post struct. This function mutates the object because we pass a pointer to it, indicated by `&amp;updatedPost`. 
So, this will do two things: update/mutate the `updatedPost` instance from the `respBody`, and report any error that arises during the [deserialisation](https://developer.mozilla.org/en-US/docs/Glossary/Deserialization) of the response.&#xA;&#xA;Now that we have the object in golang from the response bytes, we can use it as per our requirements.&#xA;&#xA;So, that is the example in its entirety.&#xA;&#xA;Let’s simplify the steps, which are similar to the POST/PUT methods as well:&#xA;&#xA;* Define/construct the URL&#xA;    &#xA;* Marshal the object into a JSON string as the request body&#xA;    &#xA;* Construct the request object (method, URL, and the body)&#xA;    &#xA;* Send the request with the default client&#xA;    &#xA;* Read the response and deserialise/unmarshal it&#xA;    &#xA;* Access the object in golang&#xA;    &#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;encoding/json&#34;&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;io&#34;&#xA;&#x9;&#34;log&#34;&#xA;&#x9;&#34;net/http&#34;&#xA;&#x9;&#34;strings&#34;&#xA;)&#xA;&#xA;type Post struct {&#xA;&#x9;ID     int    `json:&#34;id,omitempty&#34;`&#xA;&#x9;Title  string `json:&#34;title,omitempty&#34;`&#xA;&#x9;Body   string `json:&#34;body,omitempty&#34;`&#xA;&#x9;UserId int    `json:&#34;userId,omitempty&#34;`&#xA;}&#xA;&#xA;func main() {&#xA;&#xA;&#x9;// define URL to hit the API&#xA;&#x9;baseURL := &#34;https://jsonplaceholder.typicode.com&#34;&#xA;&#x9;postId := 4&#xA;&#x9;postURL := fmt.Sprintf(&#34;%s/posts/%d&#34;, baseURL, postId)&#xA;&#xA;&#x9;// define the body -&gt; with the field to update&#xA;&#x9;reqBody := `{&#34;body&#34;: &#34;new body&#34;}`&#xA;&#x9;fmt.Println(&#34;New body:&#34;, reqBody)&#xA;&#xA;&#x9;// send a new request, with the `PATCH` method, url and the body&#xA;&#x9;req, err := http.NewRequest(&#34;PATCH&#34;, postURL, strings.NewReader(reqBody))&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;log.Fatal(err)&#xA;&#x9;}&#xA;&#x9;// set the header content type to 
json&#xA;&#x9;req.Header.Set(&#34;Content-Type&#34;, &#34;application/json&#34;)&#xA;&#xA;&#x9;client := &amp;http.Client{}&#xA;&#x9;resp, err := client.Do(req)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;log.Fatal(err)&#xA;&#x9;}&#xA;&#x9;defer resp.Body.Close()&#xA;&#x9;fmt.Println(&#34;Response status code:&#34;, resp.StatusCode)&#xA;&#x9;fmt.Println(&#34;Response Status:&#34;, resp.Status)&#xA;&#xA;&#x9;var updatedPost Post&#xA;&#xA;&#x9;respBody, err := io.ReadAll(resp.Body)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;log.Fatal(err)&#xA;&#x9;}&#xA;&#xA;&#x9;// convert the response json bytes to Post object in golang&#xA;&#x9;err = json.Unmarshal(respBody, &amp;updatedPost)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;log.Fatal(err)&#xA;&#x9;}&#xA;&#xA;&#x9;fmt.Println(updatedPost)&#xA;&#x9;fmt.Println(updatedPost.Title)&#xA;&#xA;}&#xA;```&#xA;&#xA;```plaintext&#xA;New body: {&#34;body&#34;: &#34;new body&#34;}&#xA;Response status code: 200&#xA;Response Status: 200 OK&#xA;{4 eum et est occaecati new body 1}&#xA;eum et est occaecati&#xA;```&#xA;&#xA;As you can see, it has only updated the `body` and has not touched the other fields.&#xA;&#xA;If you had sent a similar body with a `PUT` method, the results would have been different. That would depend on the implementation of the API, of course, but if there are only a few fields in the request body for a PUT method, it would have replaced the missing fields with empty values, since they are not present in the request body.&#xA;&#xA;That is the difference between a `PUT` and a `PATCH` method: the `PATCH` method, ideally, should only update the fields of the entity which are mentioned in the request body, whereas the `PUT` method has to update the entire resource whether the fields are provided or not. 
Again, the implementation of these APIs on the server plays a vital role in how the behaviour differs and how the method itself performs.&#xA;&#xA;This is also called a `JSON Merge Patch`.&#xA;&#xA;## JSON Merge PATCH&#xA;&#xA;The above API implements a [Merge PATCH](https://datatracker.ietf.org/doc/html/rfc7386), which is to say, it merges the changes into the actual entity.&#xA;&#xA;Let’s say there is a blog post entity on a server, a post that you are writing as an author. The post has an id of `4`, let’s say, and you are constantly changing the body of the post.&#xA;&#xA;So, you don’t want to send the `title` or `author_id` or any field that is not changing from the client again and again while saving, so the `MERGE PATCH` endpoint will be helpful in that case, where the client only sends the fields that need to be updated.&#xA;&#xA;In this example, the client would only send the `body` of the post to the API every time it makes changes or saves the draft. In some cases, it might also want to change the title, so it will include the title, but not all the fields. The API knows it is a `PATCH` request and the content type is `json`, so it will only change or update the fields that are provided in the request body on the actual entity in the database or wherever it is stored on the server.&#xA;&#xA;So, that is the JSON Merge PATCH, or Merge PATCH in general. JSON Merge PATCH is specific to JSON-based document APIs.&#xA;&#xA;Below is the example, the same steps but a different endpoint. 
A user API that I have specifically created for demonstrating the difference between PUT, Merge PATCH, and JSON PATCH requests.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;encoding/json&#34;&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;io&#34;&#xA;&#x9;&#34;net/http&#34;&#xA;&#x9;&#34;strings&#34;&#xA;)&#xA;&#xA;type User struct {&#xA;&#x9;ID    int    `json:&#34;id,omitempty&#34;`&#xA;&#x9;Name  string `json:&#34;name,omitempty&#34;`&#xA;&#x9;Email string `json:&#34;email,omitempty&#34;`&#xA;&#x9;Bio   string `json:&#34;bio,omitempty&#34;`&#xA;&#x9;Roles string `json:&#34;roles,omitempty&#34;`&#xA;}&#xA;&#xA;func main() {&#xA;&#x9;baseURL := &#34;https://dummy-json-patch.netlify.app/.netlify/functions&#34;&#xA;&#x9;userID := 2&#xA;&#x9;apiURL := fmt.Sprintf(&#34;%s/users/?id=%d&#34;, baseURL, userID)&#xA;&#xA;&#x9;userObj := User{&#xA;&#x9;&#x9;Name:  &#34;dummy name&#34;,&#xA;&#x9;&#x9;Roles: &#34;dummy role&#34;,&#xA;&#x9;}&#xA;&#xA;&#x9;var jsonPatchBody []byte&#xA;&#x9;jsonPatchBody, err := json.Marshal(userObj)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;&#x9;fmt.Println(&#34;Request Body:&#34;, string(jsonPatchBody))&#xA;&#xA;&#x9;// OR directly define the json as a string&#xA;&#x9;//jsonPatchBody := []byte(`{&#xA;&#x9;//    &#34;name&#34;: &#34;new dummy name&#34;,&#xA;&#x9;//    &#34;roles&#34;: &#34;new dummy role&#34;&#xA;&#x9;//}`)&#xA;&#xA;&#x9;req, err := http.NewRequest(&#34;PATCH&#34;, apiURL, strings.NewReader(string(jsonPatchBody)))&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;&#x9;req.Header.Set(&#34;Content-Type&#34;, &#34;application/json&#34;)&#xA;&#xA;&#x9;client := &amp;http.Client{}&#xA;&#x9;resp, err := client.Do(req)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;&#x9;defer resp.Body.Close()&#xA;&#xA;&#x9;var user User&#xA;&#xA;&#x9;resBody, err := io.ReadAll(resp.Body)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;&#x9;fmt.Println(string(resBody))&#xA;&#x9;err = json.Unmarshal(resBody, &amp;user)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;&#xA;&#x9;fmt.Println(&#34;Updated/Patched User&#34;, 
user)&#xA;&#x9;fmt.Println(&#34;Id:&#34;, user.ID)&#xA;&#x9;fmt.Println(&#34;Name:&#34;, user.Name)&#xA;&#x9;fmt.Println(&#34;Email:&#34;, user.Email)&#xA;&#x9;fmt.Println(&#34;Bio:&#34;, user.Bio)&#xA;&#x9;fmt.Println(&#34;Roles:&#34;, user.Roles)&#xA;}&#xA;```&#xA;&#xA;Original user with id 2:&#xA;&#xA;```plaintext&#xA;{&#34;id&#34;:2,&#34;name&#34;:&#34;dummy&#34;,&#34;email&#34;:&#34;dummyyummy@user.com&#34;,&#34;bio&#34;:&#34;empty bio&#34;,&#34;roles&#34;:&#34;read&#34;}&#xA;&#xA;Id: 2&#xA;Name: dummy&#xA;Email: dummyyummy@user.com&#xA;Bio: empty bio&#xA;Roles: read&#xA;```&#xA;&#xA;Output of the program:&#xA;&#xA;```plaintext&#xA;Request Body: {&#34;name&#34;:&#34;dummy name&#34;,&#34;roles&#34;:&#34;dummy role&#34;}&#xA;&#xA;{&#34;id&#34;:2,&#34;name&#34;:&#34;dummy name&#34;,&#34;email&#34;:&#34;dummyyummy@user.com&#34;,&#34;bio&#34;:&#34;empty bio&#34;,&#34;roles&#34;:&#34;dummy role&#34;}&#xA;&#xA;Updated/Patched User {2 dummy name dummyyummy@user.com empty bio dummy role}&#xA;&#xA;Id: 2&#xA;Name: dummy name&#xA;Email: dummyyummy@user.com&#xA;Bio: empty bio&#xA;Roles: dummy role&#xA;```&#xA;&#xA;In the above example, the only fields that are updated are `name` and `roles`, since the API is implemented to only update the fields provided in the JSON merge patch document (the request body). The name changed from `dummy` to `dummy name`, and the role changed from `read` to `dummy role`.&#xA;&#xA;Now, let’s see the same request but with the PUT method.&#xA;&#xA;Before we hit this API, however, let’s note what the user with id 2 currently looks like:&#xA;&#xA;```json&#xA;{&#xA;  &#34;id&#34;: 2,&#xA;  &#34;name&#34;: &#34;dummy name&#34;,&#xA;  &#34;email&#34;: &#34;dummyyummy@user.com&#34;,&#xA;  &#34;bio&#34;: &#34;empty bio&#34;,&#xA;  &#34;roles&#34;: &#34;dummy role&#34;&#xA;}&#xA;```&#xA;&#xA;This is the result of our recent patch request. 
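Conceptually, the server side of a merge patch boils down to: copy the stored entity, then overwrite only the keys present in the patch document. Here is a minimal, hypothetical sketch of that merge step (the `mergePatch` function and the map-based entity are illustrative assumptions, not the demo API's actual code; a full RFC 7386 implementation would additionally merge nested objects recursively and treat `null` patch values as deletions):

```go
package main

import "fmt"

// mergePatch copies the stored entity and overwrites only the keys that
// are present in the patch; every other field stays untouched.
// (Nested-object merging and null-as-deletion from RFC 7386 are omitted.)
func mergePatch(entity, patch map[string]any) map[string]any {
	merged := map[string]any{}
	for k, v := range entity {
		merged[k] = v
	}
	for k, v := range patch {
		merged[k] = v
	}
	return merged
}

func main() {
	entity := map[string]any{
		"id": 2, "name": "dummy",
		"email": "dummyyummy@user.com", "roles": "read",
	}
	patch := map[string]any{"name": "dummy name", "roles": "dummy role"}
	merged := mergePatch(entity, patch)
	fmt.Println(merged["name"], merged["email"], merged["roles"])
	// dummy name dummyyummy@user.com dummy role
}
```

The `email` and `id` keys survive simply because the patch never mentions them, which is exactly the behaviour the demo output above shows.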
Now, we will send a PUT request to the same user with a different body.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;encoding/json&#34;&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;io&#34;&#xA;&#x9;&#34;net/http&#34;&#xA;&#x9;&#34;strings&#34;&#xA;)&#xA;&#xA;type User struct {&#xA;&#x9;ID    int    `json:&#34;id,omitempty&#34;`&#xA;&#x9;Name  string `json:&#34;name,omitempty&#34;`&#xA;&#x9;Email string `json:&#34;email,omitempty&#34;`&#xA;&#x9;Bio   string `json:&#34;bio,omitempty&#34;`&#xA;&#x9;Roles string `json:&#34;roles,omitempty&#34;`&#xA;}&#xA;&#xA;func main() {&#xA;&#x9;baseURL := &#34;https://dummy-json-patch.netlify.app/.netlify/functions&#34;&#xA;&#x9;userID := 2&#xA;&#x9;apiURL := fmt.Sprintf(&#34;%s/users/?id=%d&#34;, baseURL, userID)&#xA;&#xA;&#x9;userObj := User{&#xA;&#x9;&#x9;Name:  &#34;not a dummy name&#34;,&#xA;&#x9;&#x9;Roles: &#34;not a dummy role&#34;,&#xA;&#x9;}&#xA;&#xA;&#x9;putBody, err := json.Marshal(userObj)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;&#x9;fmt.Println(&#34;Request Body:&#34;, string(putBody))&#xA;&#xA;&#x9;// OR directly define the json as a string&#xA;&#x9;//putBody := []byte(`{&#xA;&#x9;//    &#34;name&#34;: &#34;not a dummy name&#34;,&#xA;&#x9;//    &#34;roles&#34;: &#34;not a dummy role&#34;&#xA;&#x9;//}`)&#xA;&#xA;&#x9;req, err := http.NewRequest(&#34;PUT&#34;, apiURL, strings.NewReader(string(putBody)))&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;&#x9;req.Header.Set(&#34;Content-Type&#34;, &#34;application/json&#34;)&#xA;&#xA;&#x9;client := &amp;http.Client{}&#xA;&#x9;resp, err := client.Do(req)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;&#x9;defer resp.Body.Close()&#xA;&#xA;&#x9;var user User&#xA;&#xA;&#x9;resBody, err := io.ReadAll(resp.Body)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;&#x9;fmt.Println(string(resBody))&#xA;&#x9;err = json.Unmarshal(resBody, &amp;user)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;&#xA;&#x9;fmt.Println(&#34;Updated/Patched 
User&#34;, user)&#xA;&#x9;fmt.Println(&#34;Id:&#34;, user.ID)&#xA;&#x9;fmt.Println(&#34;Name:&#34;, user.Name)&#xA;&#x9;fmt.Println(&#34;Bio:&#34;, user.Bio)&#xA;&#x9;fmt.Println(&#34;Email:&#34;, user.Email)&#xA;&#x9;fmt.Println(&#34;Roles:&#34;, user.Roles)&#xA;}&#xA;```&#xA;&#xA;Output:&#xA;&#xA;```plaintext&#xA;Request Body: {&#34;name&#34;:&#34;not a dummy name&#34;,&#34;roles&#34;:&#34;not a dummy role&#34;}&#xA;&#xA;{&#34;id&#34;:2,&#34;name&#34;:&#34;not a dummy name&#34;,&#34;email&#34;:&#34;&#34;,&#34;bio&#34;:&#34;&#34;,&#34;roles&#34;:&#34;not a dummy role&#34;}&#xA;&#xA;Updated/Patched User {2 not a dummy name   not a dummy role}&#xA;&#xA;Id: 2&#xA;Name: not a dummy name&#xA;Bio:&#xA;Email:&#xA;Roles: not a dummy role&#xA;```&#xA;&#xA;As you can see, the `name` and `roles` are updated; however, the `bio` and `email` fields are now empty. We only sent the `name` and `roles` fields, but since it was a `PUT` request, the server expects all the fields; any missing field is treated as empty and overwritten as well.&#xA;&#xA;So, the difference should be crystal clear now: when to use `PATCH` and when to avoid `PUT`.&#xA;&#xA;* When you are replacing most or all of the fields, preference could be given to PUT&#xA;    &#xA;* If you have only a few specific fields to update, PATCH is recommended&#xA;    &#xA;&#xA;There is another type of PATCH specifically designed for JSON APIs, or I should say JSON document APIs.&#xA;&#xA;## JSON PATCH&#xA;&#xA;[JSON PATCH](https://datatracker.ietf.org/doc/html/rfc6902/) is a specification in which we define which operations to perform on which fields, addressed by paths, to add, replace, move, or copy values.&#xA;&#xA;&gt; A JSON Patch document is a JSON document that represents an array of objects. 
Each object represents a single operation to be applied to the target JSON document.&#xA;&#xA;As it takes these operations, it applies them sequentially, and hence it won’t replace all the fields of the entity, which is the expected behavior of the PATCH method. In other words, it only applies changes to the fields (and related fields) provided in the JSON patch document (request body).&#xA;&#xA;There are a few operations that you can perform with the JSON patch method, with the instructions provided as individual operations in the JSON PATCH document.&#xA;&#xA;Operations:&#xA;&#xA;* add&#xA;    &#xA;* remove&#xA;    &#xA;* replace&#xA;    &#xA;* move&#xA;    &#xA;* copy&#xA;    &#xA;* test&#xA;    &#xA;&#xA;For each of the operations, a high-level definition can be given as:&#xA;&#xA;* To add a field, you specify the operation as `add`, the path as the field to be added, and the value as the actual value to be added&#xA;    &#xA;* To remove a field, you specify the operation as `remove`, and the path as the field to remove&#xA;    &#xA;* To replace a field, you specify the operation as `replace`, the path as the field to be updated/replaced, and the value as the actual value to be set&#xA;    &#xA;* To move a field, you specify the operation as `move`, the **from** as the field to be moved from, and the path as the field the value should be moved to.&#xA;    &#xA;* To copy a field, you specify the operation as `copy`, the **from** as the field to be copied from, and the path as the field to which the value should be copied.&#xA;    &#xA;* The test operation is a bit different, as it is used to compare a `path` value to the actual value specified in the object. 
It evaluates to true or false; rather than returning a value, it is used as a checkpoint, and if the test fails, the patch document as a whole is considered failed and is not applied.&#xA;    &#xA;&#xA;In this example, we are creating a similar patch request, but using this JSON patch document structure.&#xA;&#xA;### Construct the json-patch document and send the request&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;encoding/json&#34;&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;io&#34;&#xA;&#x9;&#34;net/http&#34;&#xA;&#x9;&#34;strings&#34;&#xA;)&#xA;&#xA;type User struct {&#xA;&#x9;ID    int    `json:&#34;id&#34;`&#xA;&#x9;Name  string `json:&#34;name&#34;`&#xA;&#x9;Email string `json:&#34;email&#34;`&#xA;&#x9;Bio   string `json:&#34;bio&#34;`&#xA;&#x9;Roles string `json:&#34;roles&#34;`&#xA;}&#xA;&#xA;func main() {&#xA;&#x9;baseURL := &#34;https://dummy-json-patch.netlify.app/.netlify/functions&#34;&#xA;&#x9;userID := 2&#xA;&#x9;apiURL := fmt.Sprintf(&#34;%s/users/?id=%d&#34;, baseURL, userID)&#xA;&#xA;&#x9;jsonPatchBody := `[&#xA;        {&#xA;            &#34;op&#34;: &#34;replace&#34;,&#xA;            &#34;path&#34;: &#34;/name&#34;,&#xA;            &#34;value&#34;: &#34;new dummy name&#34;&#xA;        },&#xA;        {&#xA;            &#34;op&#34;: &#34;replace&#34;,&#xA;            &#34;path&#34;: &#34;/roles&#34;,&#xA;            &#34;value&#34;: &#34;new dummy role&#34;&#xA;        }&#xA;    ]`&#xA;&#xA;&#x9;req, err := http.NewRequest(&#34;PATCH&#34;, apiURL, strings.NewReader(jsonPatchBody))&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;&#x9;req.Header.Set(&#34;Content-Type&#34;, &#34;application/json-patch+json&#34;)&#xA;&#xA;&#x9;client := &amp;http.Client{}&#xA;&#x9;resp, err := client.Do(req)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;&#x9;defer resp.Body.Close()&#xA;&#xA;&#x9;var user User&#xA;&#xA;&#x9;resBody, err := io.ReadAll(resp.Body)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;&#x9;fmt.Println(string(resBody))&#xA;&#x9;err = json.Unmarshal(resBody, 
&amp;user)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;&#xA;&#x9;fmt.Println(&#34;Updated/Patched User&#34;, user)&#xA;&#x9;fmt.Println(&#34;Id:&#34;, user.ID)&#xA;&#x9;fmt.Println(&#34;Name:&#34;, user.Name)&#xA;&#x9;fmt.Println(&#34;Bio:&#34;, user.Bio)&#xA;&#x9;fmt.Println(&#34;Email:&#34;, user.Email)&#xA;&#x9;fmt.Println(&#34;Roles:&#34;, user.Roles)&#xA;}&#xA;```&#xA;&#xA;Original user (id=2) before the request:&#xA;&#xA;```json&#xA;{&#xA;  &#34;id&#34;: 2,&#xA;  &#34;name&#34;: &#34;dummy name&#34;,&#xA;  &#34;email&#34;: &#34;dummyyummy@user.com&#34;,&#xA;  &#34;bio&#34;: &#34;empty bio&#34;,&#xA;  &#34;roles&#34;: &#34;dummy role&#34;&#xA;}&#xA;```&#xA;&#xA;Output of the program (JSON PATCH request):&#xA;&#xA;```plaintext&#xA;{&#34;id&#34;:2,&#34;name&#34;:&#34;new dummy name&#34;,&#34;email&#34;:&#34;dummyyummy@user.com&#34;,&#34;bio&#34;:&#34;empty bio&#34;,&#34;roles&#34;:&#34;new dummy role&#34;}&#xA;&#xA;Updated/Patched User {2 new dummy name dummyyummy@user.com empty bio new dummy role}&#xA;&#xA;Id: 2&#xA;Name: new dummy name&#xA;Bio: empty bio&#xA;Email: dummyyummy@user.com&#xA;Roles: new dummy role&#xA;```&#xA;&#xA;As expected, only the fields targeted by the `replace` operations (`name` and `roles`) changed; `email` and `bio` are untouched.&#xA;&#xA;### References:&#xA;&#xA;* [Wikipedia: HTTP Request Methods](https://en.wikipedia.org/wiki/HTTP#Request_methods)&#xA;    &#xA;* [Ruby on Rails: Patch is the new primary HTTP method for updates](https://rubyonrails.org/2012/2/26/edge-rails-patch-is-the-new-primary-http-method-for-updates)&#xA;    &#xA;* [RFC 5789](https://datatracker.ietf.org/doc/html/rfc5789)</content>
      <type></type>
    </item>
    <item>
      <title>Understanding Go&#39;s Weak Package</title>
      <link>https://www.meetgor.com/posts/understanding-go-s-weak-package</link>
      <description>Go&#39;s weak package provides weak references for garbage collection.</description>
      <pubDate>Thu, 02 Jan 2025 00:00:00 UTC</pubDate>
      <content>&#xA;Go&#39;s weak package provides weak references for garbage collection.&#xA;</content>
      <type></type>
    </item>
    <item>
      <title>Golang weak package</title>
      <link>https://www.meetgor.com/posts/golang-weak-package</link>
      <description>Introduction What is weak package about? What are weak pointers Difference of weak and strong references Context of Garbage collector and unique pointers Mechan</description>
      <pubDate>Thu, 02 Jan 2025 00:00:00 UTC</pubDate>
      <content>## Introduction&#xA;&#xA;## What is weak package about?&#xA;&#xA;## What are weak pointers&#xA;&#xA;## Difference of weak and strong references&#xA;&#xA;## Context of Garbage collector and unique pointers&#xA;&#xA;## Mechanism of weak pointers&#xA;&#xA;## Usage of weak pointers&#xA;&#xA;## Examples&#xA;&#xA;## Quirks and Scenarios&#xA;&#xA;## Conclusion</content>
      <type></type>
    </item>
    <item>
      <title>2024: Year Review</title>
      <link>https://www.meetgor.com/posts/2024-review</link>
      <description>Introduction It is the end of the year, and I write this yearly as I have been writing it since 2022. I get a lot of insights on the things after writing them. Let’</description>
      <pubDate>Tue, 31 Dec 2024 00:00:00 UTC</pubDate>
<content>## Introduction&#xA;&#xA;It is the end of the year, and I write this review yearly, as I have been doing since [2022](https://www.meetgor.com/tag/developer-life/). I get a lot of insight into things after writing them down. Let’s dive into my journey from intern to full-time backend developer.&#xA;&#xA;&gt; “2024, You had me in the first half”&#xA;&#xA;No jokes here, I was scared as shit till May 31st; something special happened on May 31st, 2024 in my life!&#xA;&#xA;Well, I got a full-time offer letter and the results of my bachelor’s degree, which I passed with an 8.9 CGPA (not great but not bad either). If someone told me an hour could change your life, this would be the day: exactly between 5 and 6 in the evening, I went from dropped shoulders to being the happiest person in the world.&#xA;&#xA;I can’t describe that feeling; it has a special place in my heart. From June 2023 to May 2024, I faced rejections left and right, but there was hope, a flicker of fire left that carried me here. I went from having nothing to being overwhelmed by so many great things that happened to me.&#xA;&#xA;I am grateful to my parents, family, manager, friends, and Mother Nature for blessing me with whatever I received in 2024 🥹&#xA;&#xA;### Gist&#xA;&#xA;It was a good year, and I learned a lot. I can say confidently I am more mature than I was a year ago. The year 2023 was a bit harsh for me, especially at the end, and its effects carried into 2024 for some time as well. However, as I said, that single hour changed my life in a good way.&#xA;&#xA;I became more of a problem thinker than just a software developer; I was solving real-life problems at work. I faced challenges and failed at times, but I reflected, learned from them, and moved ahead. That is the aim of this post as well: I want to learn from this year. 
It feels overwhelming at first, but if I put it down word by word, sentence by sentence, then it becomes crystal clear where I shined, where I fumbled, and where the holes in the wall are.&#xA;&#xA;I feel like the first half of 2024 was a test of my patience and consistency. I was interning at Docsumo; I was assigned one specific feature, then got a few more tasks simultaneously, and kept chipping in bug fixes while building my understanding of the codebase, so that by the end of one quarter I was given sole responsibility for maintenance. That was not as boring as you might expect; it was challenging, it was about thinking through edge cases and solving the issues that users were facing. This helped me develop my problem-solving skills and improve my communication skills.&#xA;&#xA;&#xA;## Lessons learned&#xA;&#xA;- Hard work pays off; it might take time, but it does, and patience is the key&#xA;- Things can change in an instant, so don&#39;t wait for the right/perfect time, make every second count&#xA;- People are going to judge anyway, so don&#39;t hold back on the things that you wish to do, and don&#39;t let that fear hold you back&#xA;- Writing is one of the things that can make your thoughts and impact immortal (to some extent)&#xA;&#xA;I won&#39;t leave this article without a quote or two; I really think quotes can change your perspective on life (not all obviously, but some)&#xA;&#xA;&gt; &#34;Do one thing every day that scares you&#34;&#xA;&gt; -- Eleanor Roosevelt&#xA;&#xA;I started recording videos, recording the voice-over (yikes, I hated my voice for the initial few videos, I still hate it, but have gotten used to it).&#xA;I don&#39;t just say, I do! Or I do and then say, like with this article 😎&#xA;&#xA;One more, please?&#xA;&#xA;&gt; &#34;It&#39;s not the years in your life that count. 
It&#39;s the life in your years.&#34; &#xA;&gt; -- Abraham Lincoln&#xA;&#xA;This is how I feel right now: I overcame one obstacle in my life, and the next is right here waiting for me to shoot down. You can&#39;t wait for life to ease up for you; you have to find happiness in the hardships and the highs and lows of your life.&#xA;&#xA;## Achievements&#xA;&#xA;* Landed a full-time offer as a junior backend developer at [Docsumo](https://www.docsumo.com/)&#xA;    &#xA;* Started a YouTube channel, despite fear and doubts&#xA;    &#xA;* Started a [Tech Newsletter](https://techstructively.substack.com/) (a summary of my learnings) with 23 editions written in 23 weeks&#xA;    &#xA;* Created a writing routine for 50 days (wrote some literature/philosophy-related articles)&#xA;    &#xA;&#xA;I feel good, but there is a mountain to climb. It is a marathon, not a sprint. I am excited about the year ahead: all the things I will learn, what responsibilities I will be given, and whether I will make new connections in tech.&#xA;&#xA;## Technical Retrospective&#xA;&#xA;I would like to view the year through the things I created, namely the blogs and the videos that I made.&#xA;&#xA;### Articles Written&#xA;&#xA;This could have been better, but I still managed to write around 13 articles (excluding this one), about Cody setup in Neovim, Nginx basics, Golang HTTP POST and PUT methods, and Advent of Code approaches and solutions in Golang.&#xA;&#xA;* [Configuring Sourcegraph’s Cody in Neovim](https://www.meetgor.com/neovim-sourcegraph-cody/)&#xA;    &#xA;* [NGINX Survival Guide](https://www.meetgor.com/series/nginx-survival-guide)&#xA;    &#xA;* [100 Days of Golang](https://www.meetgor.com/series/100-days-of-golang): the 34th and 35th articles, about the [POST](https://www.meetgor.com/golang-web-post-method/) and [PUT](https://www.meetgor.com/golang-web-put-method/) methods&#xA;    &#xA;* Using LibSQL Database in Golang: [Connection](https://www.meetgor.com/turso-libsql-db-golang/) and using [Embedded 
Replicas](https://www.meetgor.com/turso-libsql-embedded-replicas-golang/)&#xA;    &#xA;* [Safely using Maps in Golang: The difference between declaration and initialization](https://www.meetgor.com/golang-safely-using-maps/)&#xA;    &#xA;* [Advent of Code, 2024 in Golang](https://www.meetgor.com/series/advent-of-code-2024): Approach and Solution Explanation (till day 5; might write a few more, have some rough drafts to be completed)&#xA;    &#xA;&#xA;### Youtube Channel&#xA;&#xA;On September 9th I published my first YouTube video; I always wanted to create tutorials about the things I know or want to learn. On that day, I set my doubts and fears aside and hit the record button. It went badly, but I published the video. It took me roughly 5 hours to record the voice-over for a 6-minute video, and I was in shambles after that. I then decided to double down on creating YouTube videos, to remove my fear of communicating, speaking, and expressing ideas and emotions through YouTube. And after 4 months, I have published around 14 videos. Not only that, I started doing live streams on weekends as well. I did around 16 live streams on Twitch and YouTube, and I feel a lot better and a lot more confident.&#xA;&#xA;And yes, the time I need to create a video, record the voice-over, and do the editing is improving significantly with each video: the last video I posted was 50 minutes long, I recorded it within an hour, in one shot, and within the next hour the video was published. 
If I am the only person to benefit from my videos, then so be it.&#xA;&#xA;&#xA;Here is the summary of the videos I made from September to December:&#xA;&#xA;* Using Turso LibSQL Database in Golang \[[remote-connection](https://youtu.be/vBrvX0X0phw?si=eEDV_qa9uXqI_vyK)\] \[[embedded replicas](https://youtu.be/BitxB40rdVw?si=E3KK1hoGf1hY0klq)\] \[[libsql local server](https://youtu.be/rIXePHhdUSI?si=IYk9A4zpBvr9pILf)\]&#xA;    &#xA;* Using [Goose](https://youtu.be/9kMeb7d2id8?si=FHyFKcXme8WyFij_) and [SQLC](https://youtu.be/mv1vCqUy_5o?si=dG25tC1U_zsoAhPd) with LibSQL in a Golang Application&#xA;    &#xA;* [Deploying Netlify cloud functions in Golang](https://youtu.be/BY2Z2Em7OdA?si=u2zjCWoOOI4wRivC)&#xA;    &#xA;* What’s new in Golang 1.24 \[[SplitSeq](https://youtu.be/hee9KUhvQsY?si=FqhGRXbhKOgPrVLj)\] \[[Omit Zero JSON Tag](https://youtu.be/RFUz4_axOZg?si=88iUnjGBKCx_fFiv)\]&#xA;    &#xA;* Advent of Code \[[day-1](https://youtu.be/4U97gLyz0Ss?si=vxfmbQxaaUf1GRmk)\] \[[day-2](https://youtu.be/jGrPkldp4rY?si=Brdb1ulBYwUfXr7F)\] \[[day-3](https://youtu.be/nhueQbOKdTs?si=MjHPXbKoCml5x-Ut)\] \[[day-4](https://youtu.be/029R_DKLqko?si=Iv5rVD_meKlXbMJs)\] \[[day-5](https://youtu.be/7pl7WRmfepQ?si=QdGigfhn04rSUEp4)\] … ran out of time&#xA;&#xA;I have only 50-odd subscribers and a couple of thousand views; it is just the start, and I am not looking at the metrics right now. Surely I am trying to improve and build a community, but the numbers don&#39;t mean anything to me, at least right now. I am creating videos that I would have liked to watch myself back when I didn&#39;t know these concepts.&#xA;    &#xA;&#xA;### Open Source Contributions&#xA;&#xA;I wanted to do many more open-source contributions, but since I worked on creating YouTube videos, I didn’t get much headspace to work on them. Next year I want to form a habit of reading some code; that way I know I will bang my head fixing issues or discover something new. 
Creating a chain of exploring, learning, and contributing.&#xA;&#xA;* Minor additions to the [steampipe](https://github.com/turbot/steampipe/pull/4315) and [turso-cli](https://github.com/tursodatabase/turso-cli/pull/815) repositories&#xA;    - Added a CSV flag for the separator in the db create command&#xA;    - Fixed a bug to only show timing when the verbose flag is set&#xA;&#xA;### Projects Created&#xA;&#xA;I created a few projects, some of them just little MVPs to explore technologies. One project that I am open to showing is the [Outsmarty](https://github.com/Mr-Destructive/outsmarty) project. This project is like a Psyche Clone in Golang; it’s not a mobile application, it is a web app. As usual, I have not completed it, but it is a good concept and I built most of the heavy lifting; I usually quit at the frontend interfaces.&#xA;&#xA;## Non-Technical Stuff&#xA;&#xA;I was really into writing and reading in the second half of the year. I decided to create a morning routine for a couple of weeks, but with Indian parents, that routine had to go into the trash can; still, it was a good experience while it lasted.&#xA;&#xA;* Read 6 books \[[GoodReads profile](https://www.goodreads.com/user/show/82596574-meet)\]&#xA;    &#xA;* Wrote [50 Posts](https://mythstomeaning.substack.com/) about mythology and philosophy&#xA;    &#xA;&#xA;If I carry forward this momentum and energy, I will likely have one novella completed this year. I am not just hoping to do that; I am planning it now.&#xA;&#xA;## Goals for 2025&#xA;&#xA;Moving into 2025, I want to write more code and, of course, articles and videos. Let’s get more specific about what I am thinking of:&#xA;&#xA;* Write blogs for FreeCodeCamp (at least 3)&#xA;    &#xA;* Learn Kubernetes in detail&#xA;    &#xA;* Write 25 articles on 100-days-of-Golang&#xA;    &#xA;* Learn Rust or Zig&#xA;    &#xA;* Get 2K subscribers on YouTube (100 videos)&#xA;    &#xA;## Conclusion&#xA;&#xA;It was a year to remember for sure. 
I said this last year as well, but in a bad way. If there was a stretch to forget, it would have been late 2023 and the first half of 2024, but the second half of 2024 was a blessing for me. I overcame my fear of speaking to some extent, or at least started addressing it. &#xA;&#xA;If I were to sum up the year, I am definitely happier and a lot more excited than I was last year. Money can&#39;t buy happiness; if you don&#39;t find joy within, nothing else can bring it. I think curiosity, consistency, and grit are the pillars of my life; I stuck to them this year and want to continue to do so.&#xA;&#xA;I will leave you with one more quote,&#xA;&#xA;&gt; &#34;Life is not about waiting for the storm to pass, it&#39;s about learning to dance in the rain.&#34; &#xA;&gt; -- Vivian Greene&#xA;&#xA;I hope your year was good too; if not, may 2025 be the year of your dreams, you absolutely got this.&#xA;&#xA;See you on the other side 👋&#xA;&#xA;Happy New Year 🎆&#xA;&#xA;And as always,&#xA;&#xA;Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>Advent of Code Day 5 in Golang: Ordering Pages</title>
      <link>https://www.meetgor.com/posts/aoc-2024-day-5</link>
      <description>Introduction It is day 5 of the advent of code, and today we have an interesting problem of ordering pages. Let’s dive into the problem and how I approached i</description>
      <pubDate>Wed, 11 Dec 2024 00:00:00 UTC</pubDate>
<content>## Introduction&#xA;&#xA;It is day 5 of the Advent of Code, and today we have an interesting problem of ordering pages. Let’s dive into the problem and how I approached it. It was a pretty simple problem if thought through calmly; otherwise, it would turn into a mess of maps, lists, and indices.&#xA;&#xA;## Input&#xA;&#xA;In the input for day 5, we have two sections. The first defines the rules for ordering the pages, specifically which page should come before which, and the second contains the actual order of pages.&#xA;&#xA;```plaintext&#xA;47|53&#xA;97|13&#xA;97|61&#xA;97|47&#xA;75|29&#xA;61|13&#xA;75|53&#xA;29|13&#xA;97|29&#xA;53|29&#xA;61|53&#xA;97|53&#xA;61|29&#xA;47|13&#xA;75|47&#xA;97|75&#xA;47|61&#xA;75|61&#xA;47|29&#xA;75|13&#xA;53|13&#xA;&#xA;75,47,61,53,29&#xA;97,61,53,29,13&#xA;75,29,13&#xA;75,97,47,61,53&#xA;61,13,29&#xA;97,13,75,29,47&#xA;```&#xA;&#xA;So, the first section has the rules mapped out, and the other has the ordering of pages, where each line is a query, i.e. a list of pages, as our actual data to process. We need to use them in the processing of parts 1 and 2.&#xA;&#xA;### Reading Sections&#xA;&#xA;So, we need to parse these sections and read them into a data structure that is easy to access.&#xA;&#xA;One way to do that would be:&#xA;&#xA;* A list with two sections&#xA;    &#xA;* The first section will be a list&#xA;    &#xA;    * The list will be a list of integers to hold the two integers i.e. 
for rules&#xA;        &#xA;* The second section will be a list&#xA;    &#xA;    * The list will be a list of integers to hold the page list&#xA;        &#xA;&#xA;So, the data structure would look like a list of lists of lists of integers.&#xA;&#xA;```go&#xA;func ReadFileSections(path string) [][][]int {&#xA;&#x9;fileBytes := ReadFileBytes(path)&#xA;&#x9;lines := []string{}&#xA;&#x9;separator := []byte(&#34;\n\n&#34;)&#xA;&#x9;for _, line := range bytes.Split(fileBytes, separator) {&#xA;&#x9;&#x9;if string(line) != &#34;&#34; {&#xA;&#x9;&#x9;&#x9;lines = append(lines, string(line))&#xA;&#x9;&#x9;}&#xA;&#x9;}&#xA;&#xA;&#x9;sections := [][][]int{}&#xA;&#x9;for i, section := range lines {&#xA;&#x9;&#x9;nums := [][]int{}&#xA;&#x9;&#x9;lineStrs := strings.Split(section, &#34;\n&#34;)&#xA;&#x9;&#x9;separator := &#34;,&#34;&#xA;&#x9;&#x9;if i == 0 {&#xA;&#x9;&#x9;&#x9;separator = &#34;|&#34;&#xA;&#x9;&#x9;}&#xA;&#x9;&#x9;for _, lineStr := range lineStrs {&#xA;&#x9;&#x9;&#x9;if lineStr == &#34;&#34; {&#xA;&#x9;&#x9;&#x9;&#x9;continue&#xA;&#x9;&#x9;&#x9;}&#xA;&#x9;&#x9;&#x9;numL := []int{}&#xA;&#x9;&#x9;&#x9;for _, numStr := range strings.Split(lineStr, separator) {&#xA;&#x9;&#x9;&#x9;&#x9;num, _ := strconv.Atoi(numStr)&#xA;&#x9;&#x9;&#x9;&#x9;numL = append(numL, num)&#xA;&#x9;&#x9;&#x9;}&#xA;&#x9;&#x9;&#x9;nums = append(nums, numL)&#xA;&#x9;&#x9;}&#xA;&#x9;&#x9;sections = append(sections, nums)&#xA;&#x9;}&#xA;&#x9;return sections&#xA;}&#xA;```&#xA;&#xA;The above function, `ReadFileSections`, takes in a path to the input file and returns a slice of the list of lists of integers as discussed. 
We first read the file and split the bytes on two consecutive newline characters, which act as the separator between the sections. We store the sections as a list of strings: the first contains the rule lines and the second contains the page list lines.&#xA;&#xA;Then we iterate over the sections and split the individual lines of each section with the respective separator, i.e. `|` for the first section and `,` (comma) for the second section. We parse each line to get a list of integers and append it to the respective section.&#xA;&#xA;So, we now have data that we can use to construct the rules and pages to help process the problem.&#xA;&#xA;### Constructing Rules&#xA;&#xA;Now, we need to process the rules list for convenient access. We need to get the page numbers that should appear after a given page, so we will use a map from an integer to a list of integers, where the key is the first number and each value in the list is a second number (a number that should appear after the key in the order of the pages).&#xA;&#xA;```go&#xA;func ConstructRules(rulesList [][]int) map[int][]int {&#xA;&#x9;rules := make(map[int][]int)&#xA;&#x9;for _, rule := range rulesList {&#xA;&#x9;&#x9;rules[rule[0]] = append(rules[rule[0]], rule[1])&#xA;&#x9;}&#xA;&#x9;return rules&#xA;}&#xA;```&#xA;&#xA;We simply iterate over the list of integer pairs, using the first element as the key and appending the second element to the value list, so to visualize:&#xA;&#xA;```go&#xA;FROM&#xA;&#xA;[][]int&#xA;&#xA;[&#xA;    [47,53]&#xA;    [97,13]&#xA;    [97,61]&#xA;]&#xA;&#xA;TO&#xA;&#xA;map[int][]int&#xA;{&#xA;    47: [53]&#xA;    97: [13,61]&#xA;}&#xA;```&#xA;&#xA;So, we now have the rules as a map from integers to lists of integers.&#xA;&#xA;### Constructing indices&#xA;&#xA;Now, to make the first and second parts easier, we need to make a map for each number in the rules section with the indices at which it appears in the page lists.&#xA;&#xA;So, we will iterate over the rules, which is a map of integers and 
integers, and we will build a map of booleans that gives us a unique set of the numbers appearing in the rules.&#xA;&#xA;Once we have the unique numbers from the rules, we will iterate over each of them and, for each page line, check at which index the number appears, building a list of indices.&#xA;&#xA;So, for each number, we go through every line of pages; if we find that number in the list of pages, we append its index, and if we don’t, we append -1. So for each line, there is an index appended for that number, like so:&#xA;&#xA;```plaintext&#xA;# 75&#xA;&#xA;75,47,61,53,29  -&gt;  0&#xA;97,61,53,29,13  -&gt; -1&#xA;75,29,13        -&gt;  0&#xA;75,97,47,61,53  -&gt;  0&#xA;61,13,29        -&gt; -1&#xA;97,13,75,29,47  -&gt;  2&#xA;&#xA;75[0,-1,0,0,-1,2]&#xA;&#xA;# map[int][]int&#xA;# 75 -&gt; int&#xA;# [0,-1,0,0,-1,2] -&gt; []int&#xA;```&#xA;&#xA;So, in the above example, we have taken 75 for reference; we get the index of 75 in each list of page numbers, which gives us the list of indices where 75 appears.&#xA;&#xA;Now, this can be done with the following function:&#xA;&#xA;```go&#xA;func GetPageIndices(rules map[int][]int, pages [][]int) map[int][]int {&#xA;&#x9;nums := make(map[int]bool)&#xA;&#x9;for num, list := range rules {&#xA;&#x9;&#x9;nums[num] = true&#xA;&#x9;&#x9;for _, elem := range list {&#xA;&#x9;&#x9;&#x9;if !nums[elem] {&#xA;&#x9;&#x9;&#x9;&#x9;nums[elem] = true&#xA;&#x9;&#x9;&#x9;}&#xA;&#x9;&#x9;}&#xA;&#x9;}&#xA;&#xA;&#x9;numIndices := make(map[int][]int)&#xA;&#x9;for num := range nums {&#xA;&#x9;&#x9;for _, numLine := range pages {&#xA;&#x9;&#x9;&#x9;index := -1&#xA;&#x9;&#x9;&#x9;for i, n := range numLine {&#xA;&#x9;&#x9;&#x9;&#x9;if n == num {&#xA;&#x9;&#x9;&#x9;&#x9;&#x9;index = i&#xA;&#x9;&#x9;&#x9;&#x9;}&#xA;&#x9;&#x9;&#x9;}&#xA;&#x9;&#x9;&#x9;numIndices[num] = append(numIndices[num], index)&#xA;&#x9;&#x9;}&#xA;&#x9;}&#xA;&#x9;return numIndices&#xA;}&#xA;```&#xA;&#xA;So, we now have the index mapped at each page 
numbers list from the rules.&#xA;&#xA;## Part 1&#xA;&#xA;For part one, we iterate over each page update (line) and check whether its page numbers follow the rules. If a number appears after another number that the rules say it must come before, the update violates the page-ordering rule, so we cannot count it as correctly ordered. The answer for part one is the sum of the middle page numbers of all correctly ordered updates.&#xA;&#xA;To do that, we iterate over each page update and then over each number in that update (let’s call it the current number). Since the rules are a map of integers to lists of integers, we can obtain all the rules associated with the current number. We then have to check that the current number appears before every number in its rules. For this we use the number indices we created earlier: a map from each number to a list of indices, where the key is the number and the position in the list is the line/page update we are currently in.&#xA;&#xA;Once we have the index of the current number, we obtain the same for each number in its rule. If that rule number is present in this page line/update, i.e. its index is not -1, we check whether it appears after the current number as the rule requires. If any number violates its rule, we mark the page update as not in correct order.&#xA;&#xA;As soon as we see a rule violated in that page update, we mark the order as false. 
If we see the ordered flag is still true, we add the middle element of that page update to the score.&#xA;&#xA;```go&#xA;&#xA;func GetOrderedPages(rules, numIndices map[int][]int, pages [][]int) int {&#xA;&#x9;score := 0&#xA;&#xA;&#x9;for index, pageLine := range pages {&#xA;&#x9;&#x9;ordered := true&#xA;&#x9;&#x9;for _, num1 := range pageLine {&#xA;&#x9;&#x9;&#x9;rule := rules[num1]&#xA;&#x9;&#x9;&#x9;index1 := numIndices[num1][index]&#xA;&#x9;&#x9;&#x9;for _, num2 := range rule {&#xA;&#x9;&#x9;&#x9;&#x9;index2 := numIndices[num2][index]&#xA;&#x9;&#x9;&#x9;&#x9;if index1 == -1 || index2 == -1 {&#xA;&#x9;&#x9;&#x9;&#x9;&#x9;continue&#xA;&#x9;&#x9;&#x9;&#x9;}&#xA;&#x9;&#x9;&#x9;&#x9;if index1 &gt; index2 {&#xA;&#x9;&#x9;&#x9;&#x9;&#x9;ordered = false&#xA;&#x9;&#x9;&#x9;&#x9;}&#xA;&#x9;&#x9;&#x9;}&#xA;&#x9;&#x9;}&#xA;&#x9;&#x9;if ordered {&#xA;&#x9;&#x9;&#x9;score += pageLine[len(pageLine)/2]&#xA;&#x9;&#x9;}&#xA;&#x9;}&#xA;&#x9;return score&#xA;}&#xA;```&#xA;&#xA;So, to reiterate, we create a function called `GetOrderedPages` that takes the rules and the number indices (each a map from an integer to a list of integers) and the pages, a list of page updates. 
We return the score as the output of this function.&#xA;&#xA;We iterate through each page update and then through each page number in the update. For every number we look up its rule, and if any rule number has a lower index than the current number, we mark the update as not ordered. At the end of each page update, if the order is correct, we add the middle element of that update to the score.&#xA;&#xA;That sums up part one: we just have to compute the score of the correctly ordered page updates.&#xA;&#xA;## Part 2&#xA;&#xA;In part 2, however, if a page update is not in order, we need to put it in order.&#xA;&#xA;We do a similar thing: iterate over each page update and, for each number in it, check whether any rule is violated. If a rule is violated for any number, we mark the ordered flag as false; this tells us which page updates need to be corrected. 
After correcting the pages in that line/update, we add the middle element of the corrected page update to the score.&#xA;&#xA;```go&#xA;&#xA;&#xA;func GetCorrectOrderedPages(rules, numIndices map[int][]int, pages [][]int) int {&#xA;&#xA;&#x9;score := 0&#xA;&#x9;for index, pageLine := range pages {&#xA;&#x9;&#x9;ordered := true&#xA;&#x9;&#x9;for _, num1 := range pageLine {&#xA;&#x9;&#x9;&#x9;rule := rules[num1]&#xA;&#x9;&#x9;&#x9;index1 := numIndices[num1][index]&#xA;&#x9;&#x9;&#x9;for _, num2 := range rule {&#xA;&#x9;&#x9;&#x9;&#x9;index2 := numIndices[num2][index]&#xA;&#x9;&#x9;&#x9;&#x9;if index1 == -1 || index2 == -1 {&#xA;&#x9;&#x9;&#x9;&#x9;&#x9;continue&#xA;&#x9;&#x9;&#x9;&#x9;}&#xA;&#x9;&#x9;&#x9;&#x9;if index1 &gt; index2 {&#xA;&#x9;&#x9;&#x9;&#x9;&#x9;ordered = false&#xA;&#x9;&#x9;&#x9;&#x9;}&#xA;&#x9;&#x9;&#x9;}&#xA;&#x9;&#x9;}&#xA;&#x9;&#x9;if !ordered {&#xA;&#x9;&#x9;&#x9;newLine := CorrectPageOrder(pageLine, rules)&#xA;&#x9;&#x9;&#x9;score += newLine[len(newLine)/2]&#xA;&#x9;&#x9;}&#xA;&#x9;}&#xA;&#x9;return score&#xA;}&#xA;```&#xA;&#xA;We now need to implement the `CorrectPageOrder` function, which takes the page line/update and the rules and builds a new page update that satisfies all the rules.&#xA;&#xA;For each number we insert, we keep track of the indices of the elements already placed, and we lower the insertion index whenever a rule forces the current number to come before one of them.&#xA;&#xA;So, we iterate over all the numbers in the page update; for each number we look at its rule list, and if any of those rule numbers is already in the new line, the current number must be inserted before the earliest of them, so we update the insertion index accordingly.&#xA;&#xA;Once we have the index where we want to insert the element, we take the slice before that index, append the number to it, and then append everything that came after that index.&#xA;&#xA;```go&#xA;func CorrectPageOrder(line []int, rules map[int][]int) []int {&#xA;&#x9;newLine := []int{}&#xA;&#x9;for _, num := range line {&#xA;&#x9;&#x9;index := 
make(map[int]int)&#xA;&#x9;&#x9;for i, n := range newLine {&#xA;&#x9;&#x9;&#x9;index[n] = i&#xA;&#x9;&#x9;}&#xA;&#x9;&#x9;newInsertIndex := len(newLine)&#xA;&#x9;&#x9;for _, rule := range rules[num] {&#xA;&#x9;&#x9;&#x9;if idx, ok := index[rule]; ok {&#xA;&#x9;&#x9;&#x9;&#x9;if newInsertIndex &gt; idx {&#xA;&#x9;&#x9;&#x9;&#x9;&#x9;newInsertIndex = idx&#xA;&#x9;&#x9;&#x9;&#x9;}&#xA;&#x9;&#x9;&#x9;}&#xA;&#x9;&#x9;}&#xA;&#x9;&#x9;afterNum := slices.Clone(newLine[newInsertIndex:])&#xA;&#x9;&#x9;newLine = append(newLine[:newInsertIndex], num)&#xA;&#x9;&#x9;newLine = append(newLine, afterNum...)&#xA;&#x9;}&#xA;&#x9;return newLine&#xA;}&#xA;```&#xA;&#xA;So, this function finds the leftmost position at which the number can be placed without violating any of its rules, then rebuilds the slice as everything before that index, the number itself, and everything that came after that index. The tail is cloned first (with `slices.Clone` from the standard library’s `slices` package) because the following `append` overwrites that part of the slice in place.&#xA;&#xA;That’s it for part two: we have corrected the page order wherever there were discrepancies.&#xA;&#xA;## Conclusion&#xA;&#xA;So, that is it for day 5 of Advent of Code in Golang. Let me know if you have any suggestions and how you approached it. Any better solutions?&#xA;&#xA;Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>Advent of Code Day 4 in Golang: Finding XMAS and X-MAS</title>
      <link>https://www.meetgor.com/posts/aoc-2024-day-4</link>
      <description>Introduction Moving on to day 4, we have a grid problem in front of us, we are given some numbers in the form of a grid, i.e. some rows and columns with some up</description>
      <pubDate>Mon, 09 Dec 2024 00:00:00 UTC</pubDate>
<content>## Introduction&#xA;&#xA;Moving on to day 4, we have a grid problem in front of us: we are given a grid of uppercase letters, i.e. some rows and columns of characters. We need to find the word `XMAS` in any direction (up, left, down, right, diagonals), and in the second part we need to find the word `MAS` forming an X.&#xA;&#xA;So, let’s see how we can approach this and solve it in Golang.&#xA;&#xA;You can check out my solutions [here on GitHub](https://github.com/Mr-Destructive/advent_of_code/blob/main/2024/src/day04/main.go).&#xA;&#xA;### Constructing the grid&#xA;&#xA;The most fundamental part of the problem lies in converting the text into a grid or matrix form. We can split the input into individual lines and append each character as an element of a list; that way we get a list of lists of strings, which is a matrix or grid-like (2-dimensional) structure.&#xA;&#xA;So, below is the input for the puzzle.&#xA;&#xA;```plaintext&#xA;MMMSXXMASM&#xA;MSAMXMSMSA&#xA;AMXSXMAAMM&#xA;MSAMASMSMX&#xA;XMASAMXAMM&#xA;XXAMMXXAMA&#xA;SMSMSASXSS&#xA;SAXAMASAAA&#xA;MAMMMXMMMM&#xA;MXMXAXMASX&#xA;```&#xA;&#xA;We need to convert it into something like this&#xA;&#xA;```plaintext&#xA;[&#xA;    [M M M S X X M A S M]&#xA;    [M S A M X M S M S A]&#xA;    [A M X S X M A A M M]&#xA;    [M S A M A S M S M X]&#xA;    [X M A S A M X A M M]&#xA;    [X X A M M X X A M A]&#xA;    [S M S M S A S X S S]&#xA;    [S A X A M A S A A A]&#xA;    [M A M M M X M M M M]&#xA;    [M X M X A X M A S X]&#xA;]&#xA;```&#xA;&#xA;So, this is a list of lists of strings; in Golang it is a `[][]string` . 
We can do that by creating a function like this:&#xA;&#xA;```go&#xA;func ConstructGrid(lines []string) [][]string {&#xA;&#x9;grid := [][]string{}&#xA;&#x9;for _, line := range lines {&#xA;&#x9;&#x9;row := []string{}&#xA;&#x9;&#x9;for _, char := range strings.Split(line, &#34;&#34;) {&#xA;&#x9;&#x9;&#x9;row = append(row, char)&#xA;&#x9;&#x9;}&#xA;&#x9;&#x9;grid = append(grid, row)&#xA;&#x9;}&#xA;&#x9;return grid&#xA;}&#xA;```&#xA;&#xA;The above function takes in a list of strings and returns a list of lists of strings, the individual letters of the grid.  &#xA;We can read the file bytes, split them on newline characters, and use the result as the input for this function.&#xA;&#xA;So, once the input is parsed into a grid, we can start thinking about the actual logic of finding the word `XMAS` in it.&#xA;&#xA;## Part 1&#xA;&#xA;In the first part, we need to find the word `XMAS` in the matrix, which could appear:&#xA;&#xA;* forwards (as `XMAS`)&#xA;    &#xA;* backwards (as `SAMX`)&#xA;    &#xA;* upwards&#xA;    &#xA;     ```plaintext&#xA;            S&#xA;            A&#xA;            M&#xA;            X&#xA;        ```&#xA;        &#xA;* downwards&#xA;    &#xA;     ```plaintext&#xA;            X&#xA;            M&#xA;            A&#xA;            S&#xA;        ```&#xA;        &#xA;* diagonally upwards (up-right or up-left)&#xA;    &#xA;     ```plaintext&#xA;            S&#xA;              A&#xA;                M&#xA;                  X&#xA;            &#xA;            OR&#xA;                  S&#xA;                A&#xA;              M &#xA;            X&#xA;        ```&#xA;        &#xA;* diagonally downwards (down-right or down-left)&#xA;    &#xA;     ```plaintext&#xA;                     X&#xA;                   M&#xA;                 A&#xA;               S&#xA;            &#xA;            OR&#xA;            X&#xA;              M&#xA;                A&#xA;                  S&#xA;        ```&#xA;        &#xA;&#xA;So, there are 8 directions where `XMAS` 
could appear in the grid, and there could be any number of occurrences. We need to count all of them.&#xA;&#xA;![](https://cdn.hashnode.com/res/hashnode/image/upload/v1733761379973/8a0a0263-b286-47f1-a338-410dec2c6b7b.png)&#xA;&#xA;To approach this, we can find the first character of the word `XMAS`, then search in each direction one by one: if we find `M` in a direction, we keep moving ahead in that direction and check for `A` and then `S`.&#xA;&#xA;The approach looks like this:&#xA;&#xA;* Initialize the counter to 0&#xA;    &#xA;* Iterate over each line&#xA;    &#xA;    * Iterate over each character in the line&#xA;        &#xA;        * Check if the character is equal to `X`&#xA;            &#xA;        * If the character is `X` →&#xA;            &#xA;            * Iterate over all the directions (up, down, right, left, up-left, up-right, down-left, down-right)&#xA;                &#xA;                * If in that direction we find the character `M`&#xA;                    &#xA;                * Keep moving ahead in the same direction to find `A` and `S` similarly; if we find all the characters of `XMAS`, increment the counter&#xA;                    &#xA;                * Else try the next direction in the loop&#xA;                    &#xA;&#xA;This looks complex and large, but it is simple: focus on one thing at a time and you can solve it easily.&#xA;&#xA;For the implementation, we need to define a few things first:&#xA;&#xA;```go&#xA;var directions [][]int = [][]int{&#xA;&#x9;[]int{0, -1},  // up&#xA;&#x9;[]int{0, 1},   // down&#xA;&#x9;[]int{1, 0},   // right&#xA;&#x9;[]int{-1, 0},  // left&#xA;&#x9;[]int{1, -1},  // up right&#xA;&#x9;[]int{-1, -1}, // up left&#xA;&#x9;[]int{1, 1},   // down right&#xA;&#x9;[]int{-1, 1},  // down left&#xA;}&#xA;&#xA;var wordList []string = []string{&#34;X&#34;, &#34;M&#34;, &#34;A&#34;, &#34;S&#34;}&#xA;```&#xA;&#xA;So, we 
have defined the list of integers in `directions`, which are the x and y offsets we add to move to a neighboring cell. Each entry is basically a unit vector: it has a distance of 1, and the sign (`+` or `-`) indicates whether to move right or left for the x coordinate and down or up for the y coordinate.&#xA;&#xA;Let me explain that more concretely. Let’s say I am at `(2,1)` in a grid of 4x4 dimensions.&#xA;&#xA;```plaintext&#xA;A B C D&#xA;E F G H&#xA;I J K L&#xA;M N O P&#xA;```&#xA;&#xA;At 2,1 we have `G`, so let’s check some directions from here:&#xA;&#xA;up → `0,-1` → 2+0, 1-1 → 2,0, we have moved to `C`&#xA;&#xA;right → `1,0` → 2+1, 1+0 → 3,1, we have moved to `H`&#xA;&#xA;down-left → `-1,1` → 2-1, 1+1 → 1,2, we have moved to `J`&#xA;&#xA;So, you get the idea: we move around using these coordinate offsets.&#xA;&#xA;We can use these to make the next jump in a direction and check whether that cell holds the next character of the word we are searching for.&#xA;&#xA;We will write the traversal function first and abstract away the function that checks if we have found the word in the grid.&#xA;&#xA;```go&#xA;func TraverseGrid(grid [][]string) int {&#xA;&#x9;score := 0&#xA;&#x9;for x, row := range grid {&#xA;&#x9;&#x9;for y, char := range row {&#xA;&#x9;&#x9;&#x9;if char == wordList[0] {&#xA;&#x9;&#x9;&#x9;&#x9;for _, direction := range directions {&#xA;&#x9;&#x9;&#x9;&#x9;&#x9;if FindXMAS(x, y, 1, direction, grid) {&#xA;&#x9;&#x9;&#x9;&#x9;&#x9;&#x9;score += 1&#xA;&#x9;&#x9;&#x9;&#x9;&#x9;}&#xA;&#x9;&#x9;&#x9;&#x9;}&#xA;&#x9;&#x9;&#x9;}&#xA;&#x9;&#x9;}&#xA;&#x9;}&#xA;&#x9;return score&#xA;}&#xA;```&#xA;&#xA;The above function takes in a grid and returns an integer which will be the score, i.e. 
the count of the words `XMAS` found in the grid/matrix.&#xA;&#xA;First, we iterate through each row in the grid, and for each row we iterate over its characters, so we have x and y coordinates as indices into the grid. We then check if the current character is `X`, i.e. `wordList[0]`; if so, we iterate over all the directions and check whether we can find the rest of `XMAS`, i.e. `MAS`, in that direction, and if we can, we increment the counter. What is the `FindXMAS` function? Let’s abstract that away: we pass in `x` and `y`, the coordinates of the current character, and `1`, the next word position within `XMAS` (we have already found `X`, so we need to find `MAS` in that direction). We also pass the grid and the direction, and this function returns true or false depending on whether that direction has `MAS` in it.&#xA;&#xA;So to reiterate:&#xA;&#xA;* We iterate over the grid and get `row` and `x` as the list of strings and the index of the current row.&#xA;    &#xA;* For each row i.e. 
list of strings, we iterate over it to get `char` and `y` as the character (string) and its index within the row.&#xA;    &#xA;* If we find the current character to be equal to `X`, which is the 0th index of the `wordList`, then&#xA;    &#xA;    * We iterate over all the directions and call the function `FindXMAS` to check for the remaining word `MAS` in that direction&#xA;        &#xA;    * If we find all the characters, we increment the counter.&#xA;        &#xA;* Finally, we return the counter as the number of occurrences of `XMAS` in the grid/matrix.&#xA;    &#xA;&#xA;Now, we can implement the `FindXMAS` function, which takes the `x` and `y` coordinates, the `wordPosition`, the direction, and the grid, and returns whether the word is found.&#xA;&#xA;* First, we take the current x coordinate and add the direction’s x component (0th index or first element)&#xA;    &#xA;* add the current y coordinate to the direction’s y component (1st index or second element)&#xA;    &#xA;* if the word position, i.e. the index into the word, has gone past the end of the wordList, it means we have found the required word completely&#xA;    &#xA;* We need to check that adding the direction to the x and y coordinates does not overshoot the width and height of the grid; if it does, we return false&#xA;    &#xA;* The final if checks whether the character at the new position is equal to the character we are looking for, which could be `M`, `A` , or `S` . 
If so, we recursively call the `FindXMAS` function, passing the updated x and y coordinates and the next word position in the wordList; we keep the direction the same and pass the entire grid.&#xA;    &#xA;&#xA;```go&#xA;func FindXMAS(x, y, wordPosition int, direction []int, grid [][]string) bool {&#xA;&#x9;xNext := x + direction[0]&#xA;&#x9;yNext := y + direction[1]&#xA;&#x9;if wordPosition &gt; len(wordList)-1 {&#xA;&#x9;&#x9;return true&#xA;&#x9;}&#xA;&#xA;&#x9;if xNext &lt; 0 || xNext &gt;= len(grid) || yNext &lt; 0 || yNext &gt;= len(grid[x]) {&#xA;&#x9;&#x9;return false&#xA;&#x9;}&#xA;&#xA;&#x9;if grid[xNext][yNext] == wordList[wordPosition] {&#xA;&#x9;&#x9;return FindXMAS(xNext, yNext, wordPosition+1, direction, grid)&#xA;&#x9;}&#xA;&#x9;return false&#xA;&#xA;}&#xA;```&#xA;&#xA;So, we have implemented the `FindXMAS` function; it reports whether we can find the remaining `MAS` by moving in a particular direction, updating the coordinates and checking that the character at each position in the grid is the next character of the word.&#xA;&#xA;So, this is what the entire first part looks like:&#xA;&#xA;```go&#xA;func main() {&#xA;&#x9;lines := ReadFileLines(&#34;sample.txt&#34;)&#xA;&#x9;grid := ConstructGrid(lines)&#xA;&#x9;score := TraverseGrid(grid)&#xA;&#x9;fmt.Println(score)&#xA;}&#xA;```&#xA;&#xA;We take in the lines as a list of strings and pass them to `ConstructGrid` to get the grid; finally, we call `TraverseGrid` with the grid and get the score as the count of the word `XMAS` in the grid.&#xA;&#xA;That’s it for part 1.&#xA;&#xA;## Part 2&#xA;&#xA;For part two, we need to find `MAS` in a cross shape, like below:&#xA;&#xA;```plaintext&#xA;M.S&#xA;.A.&#xA;M.S&#xA;```&#xA;&#xA;To solve this, we can use a similar but much simpler approach: we just need to find `A`, since `A` is always the center of `MAS`, and then check whether the top-left/bottom-right and top-right/bottom-left neighbors hold `M` and `S`.&#xA;&#xA;We 
get the coordinates of the top-left, top-right, bottom-left, and bottom-right positions by adding and subtracting 1 from the current coordinates. We make a basic check that we are not overshooting the boundary of the grid; if we overshoot, we won’t find the `MAS`.&#xA;&#xA;If we are within the grid, we get the characters at those 4 positions and check whether the top-left and bottom-right hold `M` and `S` or `S` and `M`, and similarly whether the top-right and bottom-left hold `M` and `S` or `S` and `M`. This is the diagonal search for `M` and `S` around the character `A`.&#xA;&#xA;So, if both diagonals match, we return true.&#xA;&#xA;```go&#xA;&#xA;&#xA;func FindMAS(x, y int, grid [][]string) bool {&#xA;&#x9;xL, yT := x-1, y+1 // Top-left neighbor&#xA;&#x9;xR, yD := x+1, y-1 // Bottom-right neighbor&#xA;&#xA;&#x9;// Check if the indices are within bounds&#xA;&#x9;if xL &lt; 0 || xR &gt;= len(grid) || yT &lt; 0 || yD &lt; 0 ||&#xA;&#x9;&#x9;yT &gt;= len(grid[xL]) || yD &gt;= len(grid[xR]) {&#xA;&#x9;&#x9;return false&#xA;&#x9;}&#xA;&#xA;&#x9;topLeft := grid[xL][yT]&#xA;&#x9;bottomRight := grid[xR][yD]&#xA;&#x9;topRight := grid[xR][yT]&#xA;&#x9;bottomLeft := grid[xL][yD]&#xA;&#xA;&#x9;word1, word3 := wordList[1], wordList[3]&#xA;&#xA;&#x9;isDiagonalMatch := (topLeft == word1 &amp;&amp; bottomRight == word3) || (topLeft == word3 &amp;&amp; bottomRight == word1)&#xA;&#x9;isAntiDiagonalMatch := (topRight == word1 &amp;&amp; bottomLeft == word3) || (topRight == word3 &amp;&amp; bottomLeft == word1)&#xA;&#xA;&#x9;return isDiagonalMatch &amp;&amp; isAntiDiagonalMatch&#xA;}&#xA;```&#xA;&#xA;The function uses the package-level `wordList`, so `word1` and `word3` are `M` and `S`. So, that is the simple implementation for finding the `MAS` on the diagonals.&#xA;&#xA;Now, we need to change the `TraverseGrid` function a bit: we just iterate over the grid and check if the character in the row is `A`, i.e. `wordList[2]`. 
Now, if we have `A`, we call the `FindMAS` function with the current coordinates and the grid, and if it returns true, we increment the counter.&#xA;&#xA;```go&#xA;&#xA;func TraverseGrid2(grid [][]string) int {&#xA;&#x9;score := 0&#xA;&#x9;for x, row := range grid {&#xA;&#x9;&#x9;for y, char := range row {&#xA;&#x9;&#x9;&#x9;if char == wordList[2] {&#xA;&#x9;&#x9;&#x9;&#x9;if FindMAS(x, y, grid) {&#xA;&#x9;&#x9;&#x9;&#x9;&#x9;score += 1&#xA;&#x9;&#x9;&#x9;&#x9;}&#xA;&#xA;&#x9;&#x9;&#x9;}&#xA;&#x9;&#x9;}&#xA;&#x9;}&#xA;&#x9;return score&#xA;}&#xA;```&#xA;&#xA;So, that is the final implementation of part 2: we get the count of `MAS` in the cross shape.&#xA;&#xA;You can check out my solutions [here on GitHub](https://github.com/Mr-Destructive/advent_of_code/blob/main/2024/src/day04/main.go).&#xA;&#xA;## Conclusion&#xA;&#xA;So, that is it for day 4 of Advent of Code in Golang. Let me know if you have any suggestions and how you approached it. Any better solutions?&#xA;&#xA;Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>Advent of Code Day 3 in Golang: Do Or Don’t Regex</title>
      <link>https://www.meetgor.com/posts/aoc-2024-day-3</link>
      <description>Introduction Well, it is day 3 of the advent of code 2024, and I have been doing it on live streams. I am behind two days but working through them one by one. S</description>
      <pubDate>Sat, 07 Dec 2024 00:00:00 UTC</pubDate>
<content>## Introduction&#xA;&#xA;Well, it is day 3 of the Advent of Code 2024, and I have been doing it on live streams. I am behind two days but working through them one by one. So far, I have learned a lot of things in Go. Let’s dive into day 3.&#xA;&#xA;## Part 1&#xA;&#xA;Part one of any AOC problem seems straightforward, but as soon as part two is revealed, the real implementation starts to break a sweat (if you weren’t optimistic or thoughtful).&#xA;&#xA;Part 1 for this day was to parse a string containing `mul(X,Y)` expressions, where X and Y can be numbers of up to 3 digits. There could be multiple such expressions within the string, and the goal is to multiply the X and Y of each expression and add the products up.&#xA;&#xA;![AOC Day 3 Part 1](https://meetgor-cdn.pages.dev/aoc-2024-d3-solution-part1.jpg)&#xA;&#xA;```plaintext&#xA;&#xA;xmul(2,4)&amp;mul[3,7]!^don&#39;t()_mul(5,5)+mul(32,64](mul(11,8)undo()?mul(8,5))&#xA;```&#xA;&#xA;In the example above there are 4 such valid expressions, and if we add up their products, we get the answer 161.&#xA;&#xA;### Approach&#xA;&#xA;This looks like a job for a regex: finding an expression-like pattern in a string. So, the approach is to find such expressions with a regex pattern, parse the numbers to integers, and multiply them, simply.&#xA;&#xA;You could go ahead and write a parser that iterates over each character in the string, parses the tokens, and then evaluates the expression. That is a valid approach, but I chose regex because, honestly, I don’t know how to write a parser yet; I want to try that solution at the end as well.&#xA;&#xA;But for the first part, a quick regular expression seems a good idea.&#xA;&#xA;### Constructing the Regular Expression&#xA;&#xA;The first thing is to write the regular expression for the `mul(X,Y)` part, which is the only challenging section in part one. 
The rest is just simple math.&#xA;&#xA;So, we need to match `mul`, then a `(`, then a number that is 1 to 3 digits long, then a `,`, then again a number that is 1 to 3 digits long, and finally a closing `)`.&#xA;&#xA;That translates to:&#xA;&#xA;```plaintext&#xA;mul\((\d{1,3}),(\d{1,3})\) &#xA;```&#xA;&#xA;Let’s break it down:&#xA;&#xA;* `mul` matches the literal word `mul`&#xA;    &#xA;* `\(` matches the opening parenthesis in the expression `mul()` ; we need to escape brackets in a regular expression if we want to match them literally, so we put `\` before it.&#xA;    &#xA;* Then we have a match group `(\d{1,3})` , which will be the `X` in `mul(X,Y)`:&#xA;    &#xA;    * A match group is a group within a regex match: if you want to capture specific parts of the entire match, you wrap them in `()` to group them individually. This is not strictly necessary, but it helps in getting exactly the right pieces without extra work.&#xA;        &#xA;    * In this case, we are using a match group to capture a number which can have 1 to 3 digits.&#xA;        &#xA;    * The other way to write this is `([0-9]{1,3})` , which would do the same thing. (NOTE: there are some subtle differences between `[0-9]` and `\d` that won’t affect this puzzle; if you are curious, I recommend this [StackOverflow question](https://unix.stackexchange.com/questions/414226/difference-between-0-9-digit-and-d).)&#xA;        &#xA;* We then use `,` for the separator between the `X` and `Y` operands in the `mul(X,Y)` expression&#xA;    &#xA;* We then similarly match `Y` in `mul(X,Y)` with the `(\d{1,3})` match group&#xA;    &#xA;* Finally, we end the regular expression with `\)` to match the closing parenthesis&#xA;    &#xA;&#xA;### Code it&#xA;&#xA;This is quite straightforward: we grab the line as a string and use the [regexp.MustCompile](https://pkg.go.dev/regexp#MustCompile) function, which gives us a [Regexp](https://pkg.go.dev/regexp#Regexp) object that in turn has a few methods associated with it 
to find, match, replace, and do other things that a regular expression can do on a string.&#xA;&#xA;Once we have the `mulRegex` , we can use the [FindAllStringSubmatch](https://pkg.go.dev/regexp#Regexp.FindAllStringSubmatch) method associated with the `Regexp` struct in the `regexp` package. The method takes in a string to perform the regex on, and the maximum number of matches to return. We want all the results, so we pass in `-1` to get all the matches.&#xA;&#xA;This method returns a slice of slices of strings: each inner slice is one match, containing the matched string followed by its match groups, if any.&#xA;&#xA;```go&#xA;func FindMulExpression(line string) [][]string {&#xA;  mulRegex := regexp.MustCompile(`mul\((\d{1,3}),(\d{1,3})\)`)&#xA;  return mulRegex.FindAllStringSubmatch(line, -1)&#xA;}&#xA;```&#xA;&#xA;So, the above function will return something like this&#xA;&#xA;```go&#xA;[&#xA;    [mul(2,4) 2 4]&#xA;    [mul(5,5) 5 5]&#xA;    [mul(11,8) 11 8]&#xA;    [mul(8,5) 8 5]&#xA;]&#xA;```&#xA;&#xA;This is a list of lists of strings; the entries look like numbers, but they are string types in Golang.&#xA;&#xA;Now that we have this, we can write the actual logic of obtaining the result, which is to parse these expressions, multiply `X` and `Y`, and add up the results for each of the expressions.&#xA;&#xA;```go&#xA;func Multiply(matches [][]string) int {&#xA;&#x9;score := 0&#xA;&#x9;for _, match := range matches {&#xA;&#x9;&#x9;x, err := strconv.Atoi(string(match[1]))&#xA;&#x9;&#x9;if err != nil {&#xA;&#x9;&#x9;&#x9;panic(err)&#xA;&#x9;&#x9;}&#xA;&#x9;&#x9;y, err := strconv.Atoi(string(match[2]))&#xA;&#x9;&#x9;if err != nil {&#xA;&#x9;&#x9;&#x9;panic(err)&#xA;&#x9;&#x9;}&#xA;&#x9;&#x9;score += x * y&#xA;&#x9;}&#xA;&#x9;return score&#xA;}&#xA;```&#xA;&#xA;This is pretty straightforward: we iterate over each of the matches, that is, one `mul(X,Y)` expression, and parse the `X` and `Y` each 
into integers and multiply them; the result obtained is then added to the score. We do this for each `mul(X,Y)` expression found in the string (line) and return the final score.&#xA;&#xA;### Input&#xA;&#xA;Now, the example gave us a single string, but I realized there were six lines in my input (inputs are individual), so we need to parse each line and add up the score.&#xA;&#xA;Remember this, as it will be critical in part 2: it took me some time and questioning my existence to realize we need to combine all the lines to get the result (not necessary in part 1, but definitely in part 2).&#xA;&#xA;## Part 2&#xA;&#xA;This is where things usually go wrong. At least for me, they did.&#xA;&#xA;I started with a very naive approach: a forever loop that found the index of do or don’t and stripped off those sections, looping until there were no do and don’ts left. That worked for the test case but failed on the actual input.&#xA;&#xA;The approach that finally worked came from tweaking the same idea slightly.&#xA;&#xA;### Approach&#xA;&#xA;What I came up with is to find the first match location of the `don’t()` and `do()` strings in the entire string, and then remove the parts after `don’t()` or before `do()` . 
That way we can trim the string down to only the valid/enabled `mul()` expressions.&#xA;&#xA;![AOC Day 3 Part 2](https://meetgor-cdn.pages.dev/aoc-2024-d3-solution-part2.jpg)&#xA;&#xA;So, the approach more clearly defined will be:&#xA;&#xA;* Find the location (index) of the `don’t()` and `do()` expressions&#xA;    &#xA;* We need to keep track of whether the previous part of the string was enabled or disabled, so we keep a flag and append only the enabled parts of the string to the final result&#xA;    &#xA;* If neither of them is found, append the remaining string (line) as it is, if enabled&#xA;    &#xA;* If we found either of them, then:&#xA;    &#xA;    * If we find don’t first (`don’t()` appears before `do()`)&#xA;        &#xA;        * If the enabled flag is true → append the string before the `don’t()` expression&#xA;            &#xA;        * Then set enabled to false and trim off the part up to and including `don’t()`  &#xA;            (This way we have finished checking everything before the don’t expression, so we remove that part from the line string)&#xA;            &#xA;    * If we find do first (`do()` appears before `don’t()`)&#xA;        &#xA;        * If the enabled flag is true → append the string before the `do()` expression&#xA;            &#xA;        * Then set enabled to true and trim off the part up to and including `do()`  &#xA;            (This way we have finished checking everything before the do expression, so we remove that part from the line string)&#xA;            &#xA;* We do this until there is no line string left to be checked&#xA;    &#xA;&#xA;### Code&#xA;&#xA;I used a simple `strings.Index` to get the first match index for a substring; in this case, I want the first matching index for `don’t()` and `do()` . 
Once we have the indices of both the matches, we can iterate over till we are not left with any do or don’ts in the string.&#xA;&#xA;If we have either do or don’t we append to the string the part before don’t if enabled or part before do if enabled and toggle on and off the enabled flag accordingly. By the end of the loop, the result string will have only the enabled parts of the line/string.&#xA;&#xA;```go&#xA;func StripDoDont(line string) string {&#xA;    result := &#34;&#34;&#xA;    enabled := true&#xA;    dontOffset := len(&#34;don&#39;t()&#34;)&#xA;    doOffset := len(&#34;do()&#34;)&#xA;&#xA;    for len(line) &gt; 0 {&#xA;        dontIndex := strings.Index(line, &#34;don&#39;t()&#34;)&#xA;        doIndex := strings.Index(line, &#34;do()&#34;)&#xA;&#xA;        if dontIndex == -1 &amp;&amp; doIndex == -1 {&#xA;            if enabled {&#xA;                result += line&#xA;            }&#xA;            break&#xA;        }&#xA;        &#xA;        if dontIndex != -1 &amp;&amp; (doIndex == -1 || dontIndex &lt; doIndex) {&#xA;            if enabled {&#xA;                result += line[:dontIndex]&#xA;            }&#xA;            enabled = false&#xA;            line = line[dontIndex+dontOffset:]&#xA;        } else {&#xA;            if enabled {&#xA;                result += line[:doIndex]&#xA;            }&#xA;            enabled = true&#xA;            line = line[doIndex+doOffset:]&#xA;        }&#xA;    }&#xA;&#xA;    return result&#xA;}&#xA;```&#xA;&#xA;I take this function and pass it to the multiply function where I get the matching patterns for the `mul` expression and do the math.&#xA;&#xA;The [strings.Index](https://pkg.go.dev/strings#Index) method takes in a string and a substring to find within that string and returns the index of the first occurring instance of that substring. 
With that we can identify if the line string contains the `do()` or `don’t()` expressions, if they don’t we simply return the line and if there are instances of them, we loop and trim the string before and after the expressions depending on whether the flag is enabled or disabled.&#xA;&#xA;Let’s take an example and walk through the logic:&#xA;&#xA;```plaintext&#xA;abcxmul(1,3)don&#39;t()mul(9, 7)do()mul(1,2)don&#39;t()mul(8,7)&#xA;&#xA;enabled = True&#xA;result = &#34;&#34;&#xA;line = &#34;abcxmul(1,3)don&#39;t()mul(9, 7)do()mul(1,2)don&#39;t()mul(8,7)&#34;&#xA;---&#xA;After Iteration 1:&#xA;    result -&gt; abcxmul(1,3)&#xA;    line -&gt; mul(9, 7)do()mul(1,2)don&#39;t()mul(8,7)&#xA;    enabled = False&#xA;---&#xA;After Iteration 2:&#xA;    result -&gt; abcxmul(1,3)&#xA;    line -&gt; mul(1,2)don&#39;t()mul(8,7)&#xA;    enabled = True&#xA;---&#xA;After Iteration 3:&#xA;    result -&gt; abcxmul(1,3)mul(1,2)&#xA;    line -&gt; mul(8,7)&#xA;    enabled -&gt; False&#xA;---&#xA;After Iteration 4:&#xA;    No do and don&#39;t found&#xA;    result -&gt; abcxmul(1,3)mul(1,2)&#xA;    break out of loop&#xA;---&#xA;&#xA;Result -&gt; abcxmul(1,3)mul(1,2)&#xA;```&#xA;&#xA;We process the result with the same `Multiply` function that we used in the first part after passing it through the `FindMulExpression` function that will return all the mul expressions in the result line string.&#xA;&#xA;### Heads up with the input&#xA;&#xA;The actual input of the puzzle is I think multiple lines, so we need to preserve this state of the line in all the remaining lines. OR, we could be smarter and create a single large string and process that. Both are valid and would give the same results. 
I just didn’t like to add the overhead of keeping track of all the state across lines, so I just concatenated all the lines and processed that single string.&#xA;&#xA;## Conclusion&#xA;&#xA;This was a simple problem in essence, but if you are not aware of regex you could go down a rabbit hole of writing your own parser or weird string manipulation (just like I did).&#xA;&#xA;That’s it from day 3, I will be doing more live-stream solving over the weekend and maybe the next week as well. Find the code for my AoC solutions here on GitHub.&#xA;&#xA;Till then,&#xA;&#xA;Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>Advent of Code Day 2 in Golang: Slicing and Dicing Reports</title>
      <link>https://www.meetgor.com/posts/aoc-2024-day-2</link>
      <description>Introduction So, this is day 2 of the Advent of Code 2024 in Golang, and we will be exploring my approach and solution for the same. The problem was not as easy</description>
      <pubDate>Mon, 02 Dec 2024 00:00:00 UTC</pubDate>
<content>## Introduction&#xA;&#xA;So, this is day 2 of the Advent of Code 2024 in Golang, and we will be exploring my approach and solution for the same. The problem was not as easy, but it turned out pretty simple once implemented and verified correct.&#xA;&#xA;You can check out my solutions [here on GitHub](https://github.com/Mr-Destructive/advent_of_code/blob/main/2024/src/day02/main.go).&#xA;&#xA;## Part 1&#xA;&#xA;We have been given some lines called reports, and each report has a bunch of levels. The requirement is that each report needs to be either strictly increasing or strictly decreasing, with each step changing by at least 1 and at most 3.&#xA;&#xA;This means if the first two elements are increasing even by one, the other subsequent elements in that report should be increasing (by 1, 2, or 3) levels, and there cannot be zero change (i.e. two adjacent numbers cannot be the same)&#xA;&#xA;```plaintext&#xA;7 6 4 2 1&#xA;1 2 7 8 9&#xA;9 7 6 2 1&#xA;1 3 2 4 5&#xA;8 6 4 4 1&#xA;1 3 6 7 9&#xA;```&#xA;&#xA;* We first do the input parsing, it is pretty straightforward, we need to split the input by `\n` to get individual reports, each of which will be a string like `&#34;7 6 4 2 1&#34;` , and we want to get a slice of integers.&#xA;    &#xA;* So we go ahead and split by spaces/whitespace `&#34; &#34;` to get the individual levels (numbers) and we need to convert them into integers.&#xA;    &#xA;* Once we have individual strings of the report i.e. levels as `[&#34;7&#34;, &#34;6&#34;, &#34;4&#34;, &#34;2&#34;, &#34;1&#34;]` , we need to cast them to integers.&#xA;    &#xA;* We iterate over each of them and cast them to integers and append to the list.&#xA;    &#xA;* Once we have constructed the list, we append to the reports list which will be the array of arrays, i.e. 
each line is a report, and each report has many levels, so we get a slice of slices of integers.&#xA;    &#xA;&#xA;```go&#xA;func SplitLevels(lines []string) [][]int {&#xA;&#x9;reportLevels := [][]int{}&#xA;&#x9;for i, reportLine := range lines {&#xA;&#x9;&#x9;reportLevels = append(reportLevels, []int{})&#xA;&#x9;&#x9;for _, levelStr := range strings.Split(reportLine, &#34; &#34;) {&#xA;&#x9;&#x9;&#x9;level, err := strconv.Atoi(levelStr)&#xA;&#x9;&#x9;&#x9;if err != nil {&#xA;&#x9;&#x9;&#x9;&#x9;log.Fatal(err)&#xA;&#x9;&#x9;&#x9;}&#xA;&#x9;&#x9;&#x9;reportLevels[i] = append(reportLevels[i], level)&#xA;&#x9;&#x9;}&#xA;&#x9;}&#xA;&#x9;return reportLevels&#xA;}&#xA;```&#xA;&#xA;Once we have constructed the reports and levels, we move ahead to actually analyzing the patterns within the levels in the individual reports.&#xA;&#xA;For that:&#xA;&#xA;* We first take individual reports, calculate the difference between the first two elements, and remember to be careful about using the absolute difference here.&#xA;    &#xA;* We need to maintain a flag which indicates whether the levels in the report are increasing or decreasing, which can be determined with the first two elements.&#xA;    &#xA;    That is, if the first two elements are increasing, the subsequent levels should also be increasing, and if they are decreasing then all the levels should be decreasing as well&#xA;    &#xA;* We first have a guard check: if the difference between them is 0, greater than 3, or less than -3, that violates the condition for the levels to be safe. 
If that is the case, then we return false, i.e. the report is not safe.&#xA;    &#xA;* We now iterate on the report after the first two elements, we then compute the difference between the next two levels, if the increasing flag is true and the current difference is less than or equal to 0 or it exceeds 3, we also mark it as false&#xA;    &#xA;* The other condition is that if the flag is decreasing, which means the first two elements had a negative difference, we check if the current difference is greater than or equal to 0 or it is less than -3, if that is the case we mark that as false&#xA;    &#xA;* After computing the difference for all the levels, if we come out of the loop, we return true as we didn’t see any discrepancy in the levels.&#xA;    &#xA;&#xA;```go&#xA;&#xA;&#xA;func IsSafe(report []int) (bool, int) {&#xA;&#x9;prevDiff := report[1] - report[0]&#xA;&#x9;isIncreasing := prevDiff &gt; 0&#xA;&#x9;if prevDiff == 0 || prevDiff &gt; 3 || prevDiff &lt; -3 {&#xA;&#x9;&#x9;return false, 1&#xA;&#x9;}&#xA;&#xA;&#x9;for i := 2; i &lt; len(report); i++ {&#xA;&#x9;&#x9;currDiff := report[i] - report[i-1]&#xA;&#x9;&#x9;if isIncreasing {&#xA;&#x9;&#x9;&#x9;if currDiff &lt;= 0 || currDiff &gt; 3 {&#xA;&#x9;&#x9;&#x9;&#x9;return false, i&#xA;&#x9;&#x9;&#x9;}&#xA;&#x9;&#x9;} else {&#xA;&#x9;&#x9;&#x9;if currDiff &gt;= 0 || currDiff &lt; -3 {&#xA;&#x9;&#x9;&#x9;&#x9;return false, i&#xA;&#x9;&#x9;&#x9;}&#xA;&#x9;&#x9;}&#xA;&#x9;}&#xA;&#x9;return true, -1&#xA;}&#xA;```&#xA;&#xA;Note that `IsSafe` also returns the index where the first discrepancy was seen (-1 when the report is safe), which part 2 relies on.&#xA;&#xA;## Part 2&#xA;&#xA;For part two, we need to do a few things: we need to compute if the report is safe or not, and if it is unsafe, we can remove at most one element from the report to try to make it safe.&#xA;&#xA;For that the approach is:&#xA;&#xA;* Get the index where we first saw the discrepancy in the levels&#xA;    &#xA;* Check by removing that element from the report, if that makes the report safe, then return true i.e. 
we found the safe report&#xA;    &#xA;* If we still find the report unsafe, remove the element before the index where the discrepancy was found, if now we find it safe after removing that element, then mark it safe&#xA;    &#xA;* If still we find the report unsafe, then remove the element after the index where we originally found the discrepancy, if the report becomes safe, we mark that report safe&#xA;    &#xA;* Else we mark the report unsafe, as we cannot find only the element removable that makes the report safe.&#xA;    &#xA;&#xA;```go&#xA;func RemoveAndCheck(report []int, index int) bool {&#xA;&#x9;if index &gt; len(report)-1 || index &lt; 0 {&#xA;&#x9;&#x9;return false&#xA;&#x9;}&#xA;&#x9;reportNew := append([]int{}, report[:index]...)&#xA;&#x9;reportNew = append(reportNew, report[index+1:]...)&#xA;&#x9;safe, _ := IsSafe(reportNew)&#xA;&#x9;fmt.Println(safe, report)&#xA;&#x9;return safe&#xA;}&#xA;&#xA;func RemoveLevels(report []int) bool {&#xA;&#x9;safe, unsafeIndex := IsSafe(report)&#xA;&#x9;if safe {&#xA;&#x9;&#x9;return true&#xA;&#x9;} else {&#xA;&#x9;&#x9;if RemoveAndCheck(report, unsafeIndex) {&#xA;&#x9;&#x9;&#x9;return true&#xA;&#x9;&#x9;}&#xA;&#x9;&#x9;if RemoveAndCheck(report, unsafeIndex-1) {&#xA;&#x9;&#x9;&#x9;return true&#xA;&#x9;&#x9;}&#xA;&#x9;&#x9;if RemoveAndCheck(report, unsafeIndex+1) {&#xA;&#x9;&#x9;&#x9;return true&#xA;&#x9;&#x9;}&#xA;&#x9;&#x9;return false&#xA;&#x9;}&#xA;}&#xA;```&#xA;&#xA;You can check out my solutions [here on GitHub](https://github.com/Mr-Destructive/advent_of_code/blob/main/2024/src/day02/main.go).&#xA;&#xA;## Conclusion&#xA;&#xA;So that was it, a pretty simple problem for day 2 of the advent of code 2024 in Golang. 
Hope you enjoyed this walkthrough of the day two puzzle in the Advent of Code 2024 in Golang.&#xA;&#xA;Let me know if you have any other interesting solutions, or if you have anything to share about this; any feedback, questions, or suggestions are welcome.&#xA;&#xA;Thank you for reading, and I will see you tomorrow for day 3.&#xA;&#xA;Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>Advent of Code, 2024, Day 1 in Golang: Historian Hysteria</title>
      <link>https://www.meetgor.com/posts/aoc-2024-day-1</link>
      <description>Introduction Hello everyone, it’s that time of the year, Advent of Code, I will be solving this year as well with Golang. In previous years I have been doing</description>
      <pubDate>Sun, 01 Dec 2024 00:00:00 UTC</pubDate>
<content>## Introduction&#xA;&#xA;Hello everyone, it’s that time of the year: Advent of Code. I will be solving this year as well with Golang. In previous years I have been doing Advent of Code but was not able to keep up with the pace and left it midway (not even halfway). This year, however, I am determined and want to solve as many of the problems as I can.&#xA;&#xA;Let’s dive into the first day which should be, and is, pretty simple and straightforward. A simple list and map creation and traversal and basic math operations.&#xA;&#xA;I also live-streamed the solution, you can check out the [stream on YouTube](https://www.youtube.com/live/3K02tEEBgto?si=ojS5rsh5nGpk3U-B)&#xA;&#xA;And also a [shorter video](https://youtu.be/4U97gLyz0Ss?si=SvINHaGz-mow_q3O) on the approach and solution in Golang.&#xA;&#xA;&lt;iframe width=&#34;560&#34; height=&#34;315&#34; src=&#34;https://www.youtube.com/embed/4U97gLyz0Ss&#34; frameborder=&#34;0&#34; allowfullscreen&gt;&lt;/iframe&gt;&#xA;&#xA;Or you can stick here and continue reading. Thank you!&#xA;&#xA;You can check out my solutions [here on GitHub](https://github.com/Mr-Destructive/advent_of_code/blob/main/2024/src/day01/main.go).&#xA;&#xA;## Part 1&#xA;&#xA;[Advent of code, 2024, day 1](https://adventofcode.com/2024/day/1)&#xA;&#xA;We are given two lists here, the first part aims to find the absolute difference (distance) between each element sorted from smallest to largest.&#xA;&#xA;So, in essence, we take the two lists, sort them and one by one, for each corresponding element paired up, we take the absolute difference and sum that difference up for all the numbers in the list.&#xA;&#xA;```plaintext&#xA;3   4&#xA;4   3&#xA;2   5&#xA;1   3&#xA;3   9&#xA;3   3&#xA;```&#xA;&#xA;So, first, we need to split the input into different lists:&#xA;&#xA;1. We first range over all the lines, initialize two empty lists of integers&#xA;    &#xA;2. 
Then we split the line with the space as the separator, so this gives us the slice of strings as `[&#34;3&#34;, &#34;4&#34;]`&#xA;    &#xA;3. But we need the elements as integers, so we take the first number and convert it to an integer, and similarly for the second number.&#xA;    &#xA;4. Then once we have both numbers, we append them to the corresponding lists, the first number goes to the first list, and the second is appended to the second list.&#xA;    &#xA;5. Then we return the two lists&#xA;    &#xA;&#xA;NOTE: You cannot take the difference of those two numbers here itself, since we need to find the smallest number and sort the numbers in each list, so we need to get the lists populated first.&#xA;&#xA;```go&#xA;func SplitLists(lines []string) ([]int, []int) {&#xA;&#x9;listOne := []int{}&#xA;&#x9;listTwo := []int{}&#xA;&#xA;&#x9;for _, line := range lines {&#xA;&#x9;&#x9;// |3   4&#xA;&#x9;&#x9;// [&#34;3&#34;,&#34;4&#34;] slice of string ([]string)&#xA;&#x9;&#x9;// 3 &#xA;        // 4&#xA;        // [3,4] slice of int ([]int)&#xA;&#x9;&#x9;numbers := strings.Split(line, &#34;   &#34;)&#xA;&#x9;&#x9;numOne, err := strconv.Atoi(numbers[0])&#xA;&#x9;&#x9;if err != nil {&#xA;&#x9;&#x9;&#x9;log.Fatal(err)&#xA;&#x9;&#x9;}&#xA;&#x9;&#x9;numTwo, err := strconv.Atoi(numbers[1])&#xA;&#x9;&#x9;if err != nil {&#xA;&#x9;&#x9;&#x9;log.Fatal(err)&#xA;&#x9;&#x9;}&#xA;&#x9;&#x9;listOne = append(listOne, numOne)&#xA;&#x9;&#x9;listTwo = append(listTwo, numTwo)&#xA;&#x9;}&#xA;&#x9;return listOne, listTwo&#xA;}&#xA;```&#xA;&#xA;In the above code, we have initialized two empty slices of integers, and we take the parameter lines which is a slice of strings, indicating a line-by-line string representation of the input. 
I have parsed the input with these helper functions.&#xA;&#xA;The ReadFileBytes and ReadFileLines helpers: one gives the raw bytes, the other gives the line-by-line representation as a slice of strings.&#xA;&#xA;So once we have the lines, we iterate over each line and split the lines on space to get the two numbers. So, the line `3 4` will be split into `[&#34;3&#34;, &#34;4&#34;]`. Now we get the first element and convert it into an integer as we need to sort and take the difference later.&#xA;&#xA;So, we access the first and second elements in the split line as `numbers[0]` and `numbers[1]` and convert the type to integer with the [strconv.Atoi](https://pkg.go.dev/strconv#Atoi) function, which takes in a string and gives back an integer or an error.&#xA;&#xA;Now that we have two numbers as integers, we append the first element to the first list as `listOne = append(listOne, numOne)` and `listTwo = append(listTwo, numTwo)`&#xA;&#xA;So, we append one by one as we iterate over the input through all the lines, so at the end of this function, we will have two lists of integers.&#xA;&#xA;```&#xA;[3 4 2 1 3 3]&#xA;[4 3 5 3 9 3]&#xA;```&#xA;&#xA;Then, once we have the slices of integers, we sort those lists. Then we range over the lists one element at a time, since both the lists are of the same size, we can reference one by the index of the other.&#xA;&#xA;Then for each difference of the two integers (one from the first list and the other from the second list), we cast it to a `float64` and pass it to the [math.Abs](https://pkg.go.dev/math#Abs) function, which is annoying as Golang doesn’t have an absolute function for integers. We cast the integer to float for passing it to the Abs method and cast the returned float64 value back to int. Kind of weird, but fine.&#xA;&#xA;We keep adding the absolute differences for each paired difference of the elements in the two lists. 
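If the float round-trip feels clunky, a tiny integer helper does the same job (a sketch; `absInt` is my own name, not a standard library function):

```go
package main

import "fmt"

// absInt returns the absolute value of an int without going through float64.
func absInt(n int) int {
	if n >= 0 {
		return n
	}
	return -n
}

func main() {
	fmt.Println(absInt(3 - 7)) // 4
	fmt.Println(absInt(7 - 3)) // 4
}
```
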
At the end, we will have a final score which is the score for part one.&#xA;&#xA;```go&#xA;func PartOne(lines []string) int {&#xA;&#x9;listOne, listTwo := SplitLists(lines)&#xA;&#x9;sort.Ints(listOne)&#xA;&#x9;sort.Ints(listTwo)&#xA;&#x9;totalScore := 0&#xA;&#x9;for i := range listOne {&#xA;&#x9;&#x9;totalScore += int(math.Abs(float64(listOne[i] - listTwo[i])))&#xA;&#x9;}&#xA;&#x9;return totalScore&#xA;}&#xA;```&#xA;&#xA;## Part 2&#xA;&#xA;For part two, we need to take all the numbers in the first list, count the number of times each number has occurred in the second list, take a product of them, and add it up for all the numbers.&#xA;&#xA;So in the example:&#xA;&#xA;```plaintext&#xA;3   4&#xA;4   3&#xA;2   5&#xA;1   3&#xA;3   9&#xA;3   3&#xA;```&#xA;&#xA;The numbers in the first list are \[3,4,2,1,3,3\]&#xA;&#xA;We have to count the occurrences of each of them in the second list&#xA;&#xA;So, in the second list \[4,3,5,3,9,3\], the number `3` occurs `3` times, so we do `3×3` which is `9` and then, do the same for `4` which occurs only once in the second list so, we get `4`, then `2` occurs `0` times, so we get `0`&#xA;&#xA;We get → `(3×3) + (4×1) + (2×0) + (1×0) + (3×3) + (3×3)`&#xA;&#xA;The first number is the element in the first list and the second number is the occurrence of that number in the second list.&#xA;&#xA;which comes out to be `9+4+0+0+9+9` , so the answer is `31` for the example.&#xA;&#xA;Once it is clear what we have to do, we simply have to iterate over the second list and create a map of the frequency/occurrences/number of times each number appears in that list.&#xA;&#xA;### Solution&#xA;&#xA;So, we will have to modify the `SplitLists` function a bit, we need to split and also map the second list with the key as the number itself and the value as its count in the second list.&#xA;&#xA;Just that change, we create an additional return value: a map of integers to integers. 
The mapTwo variable will be a map that will have a key as the number in the second list and its value as the number of times it is present in that list.&#xA;&#xA;```go&#xA;func SplitListsAndMap(lines []string) ([]int, map[int]int) {&#xA;&#x9;listOne := []int{}&#xA;&#x9;listTwoCounts := make(map[int]int)&#xA;&#xA;&#x9;for _, line := range lines {&#xA;&#x9;&#x9;numbers := strings.Split(line, &#34;   &#34;)&#xA;&#x9;&#x9;numOne, err := strconv.Atoi(numbers[0])&#xA;&#x9;&#x9;if err != nil {&#xA;&#x9;&#x9;&#x9;log.Fatal(err)&#xA;&#x9;&#x9;}&#xA;&#x9;&#x9;numTwo, err := strconv.Atoi(numbers[1])&#xA;&#x9;&#x9;if err != nil {&#xA;&#x9;&#x9;&#x9;log.Fatal(err)&#xA;&#x9;&#x9;}&#xA;&#x9;&#x9;listOne = append(listOne, numOne)&#xA;        listTwoCounts[numTwo] += 1&#xA;&#x9;}&#xA;&#x9;return listOne, listTwoCounts&#xA;}&#xA;```&#xA;&#xA;So, as we iterate over each line, we parse the string number into an integer and increment its count in the map.&#xA;&#xA;```&#xA;[3 4 2 1 3 3]&#xA;map[3:3 4:1 5:1 9:1]&#xA;```&#xA;&#xA;In the actual calculation of the score, we need to iterate over the elements of the first list and multiply the number with its count in the second list as we now have the map of it. We multiply those and add them up for each line, which becomes the final score.&#xA;&#xA;```go&#xA;func PartTwo(lines []string) int {&#xA;    similarityScore := 0&#xA;&#xA;&#x9;listOne, mapTwo := SplitListsAndMap(lines)&#xA;&#xA;&#x9;for _, numOne := range listOne {&#xA;&#x9;&#x9;score := numOne * mapTwo[numOne]&#xA;&#x9;&#x9;similarityScore += score&#xA;&#x9;}&#xA;&#xA;&#x9;return similarityScore&#xA;}&#xA;```&#xA;&#xA;So, that is how we got the final score for part two.&#xA;&#xA;You can check out my solutions [here on GitHub](https://github.com/Mr-Destructive/advent_of_code/blob/main/2024/src/day01/main.go).&#xA;&#xA;## Conclusion&#xA;&#xA;So that was it, a pretty simple problem for day 1 of the advent of code 2024 in Golang. 
Hope you enjoyed this walkthrough of the day one puzzle in the Advent of Code 2024 in Golang.&#xA;&#xA;Thank you for reading, and I will see you tomorrow for day 2&#xA;&#xA;Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>Use Embedded Replicas of LibSQL Database hosted on Turso with a Golang Application</title>
      <link>https://www.meetgor.com/posts/turso-libsql-embedded-replicas-golang</link>
      <description>Introduction Welcome to the Let&#39;s Go with Turso series. In this series, we will learn how to interact with LibSQL databases with Golang. In the past article of</description>
      <pubDate>Thu, 31 Oct 2024 00:00:00 UTC</pubDate>
<content>## Introduction&#xA;&#xA;Welcome to the Let&#39;s Go with Turso series. In this series, we will learn how to interact with LibSQL databases with Golang. In the past article of the series, we explored how to connect to a remote/local LibSQL database in Golang.&#xA;&#xA;In this section, we will specifically dive into understanding how to create, connect, and query local embedded replicas of a LibSQL database hosted on Turso with a Golang application.&#xA;&#xA;If you want to check out the YouTube video, check this out:&#xA;&#xA;[LibSQL Embedded Replicas on Turso in Golang](https://www.youtube.com/watch?v=BitxB40rdVw)&#xA;&#xA;&lt;iframe width=&#34;560&#34; height=&#34;315&#34; src=&#34;https://www.youtube.com/embed/vBrvX0X0phw&#34; frameborder=&#34;0&#34; allowfullscreen&gt;&lt;/iframe&gt;&#xA;&#xA;## LibSQL Embedded Replicas on Turso in Golang&#xA;&#xA;### Embedded Replicas&#xA;The embedded replica is a native feature of the libSQL database. With embedded replicas, we can have faster writes, as with a local database, and global read access from the remote database.&#xA;&#xA;An embedded replica is a copy of a primary remote database that the application uses as a local database for performing operations; it then has a mechanism to sync up with the primary remote database. The primary remote database serves as the single source of truth; however, we can use the database locally as well. This should be done carefully to avoid data corruption or stale data. 
To cope with stale or corrupted data, periodic syncing can be used.&#xA;&#xA;Embedded replicas have a three-fold process:&#xA;- Create a copy of the primary remote database&#xA;- Perform any operations on the local database&#xA;- Sync up with the primary remote database&#xA;&#xA;![Embedded Replicas for LibSQL](https://meetgor-cdn.pages.dev/embedded-replicas-flow.png)&#xA;&#xA;There are a few things to remember here:&#xA;- The local database can read its newly written data; however, other local replica databases can only view those changes once the sync has happened and the primary database has been updated from the local copy.&#xA;- This requires syncing the local database with the primary first, and then the other local databases also need to sync with the primary.&#xA;&#xA;You can read more about it [here](https://docs.turso.tech/features/embedded-replicas/introduction) in the Turso documentation.&#xA;&#xA;Let&#39;s get started.&#xA;&#xA;What we are going to do is follow a step-by-step procedure to understand how to work with embedded replicas of a LibSQL database:&#xA;&#xA;1. Perform the operations on the local LibSQL database&#xA;2. Create an Embedded Replica and sync up with the primary remote database&#xA;3. 
Fetch data from the primary remote database&#xA;&#xA;![Embedded Replicas of LibSQL with Golang](https://meetgor-cdn.pages.dev/LibSQL_Embedded_Replicas_on_Turso_in_Golang.gif)&#xA;&#xA;In this way, we can understand how to operate embedded replicas as a whole, both locally and remotely.&#xA;&#xA;## Initializing a Golang project&#xA;&#xA;Let&#39;s start with initializing a Golang project.&#xA;&#xA;```bash&#xA;# go mod init &lt;git-provider-domain&gt;/&lt;username&gt;/&lt;project-name&gt;&#xA;# Example&#xA;&#xA;go mod init github.com/mr-destructive/lets-go-with-turso&#xA;&#xA;```&#xA;&#xA;This will initialize the project in the current directory, creating a `go.mod` file with the specification of the Golang version and the packages that we will install and use in this module.&#xA;&#xA;## Installing go-libsql package&#xA;&#xA;We will need to install the [go-libsql](https://github.com/tursodatabase/go-libsql) package, which is the LibSQL driver for Golang. To install it, simply use the `go get` command to add it as a dependency for the project:&#xA;&#xA;```bash&#xA;go get github.com/tursodatabase/go-libsql&#xA;```&#xA;&#xA;Once this is done, we can create a local LibSQL database.&#xA;&#xA;## Creating a local LibSQL database&#xA;&#xA;Let&#39;s create a local LibSQL database. With the `go-libsql` package, we can connect to the local database. This will be done using the `libsql` driver. 
There is really no much difference than the normal SQLite database driver, this works just like SQLite, the only difference being the ability to connect to the remote database.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;database/sql&#34;&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;os&#34;&#xA;&#xA;&#x9;_ &#34;github.com/tursodatabase/go-libsql&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;dbName := &#34;file:./local.db&#34;&#xA;&#xA;&#x9;db, err := sql.Open(&#34;libsql&#34;, dbName)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;fmt.Fprintf(os.Stderr, &#34;failed to open db %s&#34;, err)&#xA;&#x9;&#x9;os.Exit(1)&#xA;&#x9;}&#xA;&#x9;defer db.Close()&#xA;}&#xA;```&#xA;&#xA;The above code will simply connect to the local LibSQL database which resides as the `local.db` file. Now, to demonstrate that it is an actual sqlite-like database, we can execute queries on the connected database.&#xA;&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;database/sql&#34;&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;os&#34;&#xA;&#xA;&#x9;_ &#34;github.com/tursodatabase/go-libsql&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;dbName := &#34;file:./local.db&#34;&#xA;&#xA;&#x9;db, err := sql.Open(&#34;libsql&#34;, dbName)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;fmt.Fprintf(os.Stderr, &#34;failed to open db %s&#34;, err)&#xA;&#x9;&#x9;os.Exit(1)&#xA;&#x9;}&#xA;&#x9;defer db.Close()&#xA;&#x9;rows, err := db.Query(&#34;SELECT ABS(RANDOM()%5) FROM generate_series(1,10)&#34;)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;fmt.Fprintf(os.Stderr, &#34;failed to query %s&#34;, err)&#xA;&#x9;&#x9;os.Exit(1)&#xA;&#x9;}&#xA;&#x9;defer rows.Close()&#xA;&#x9;for rows.Next() {&#xA;&#x9;&#x9;var i int&#xA;&#x9;&#x9;if err := rows.Scan(&amp;i); err != nil {&#xA;&#x9;&#x9;&#x9;fmt.Fprintf(os.Stderr, &#34;failed to scan %s&#34;, err)&#xA;&#x9;&#x9;&#x9;os.Exit(1)&#xA;&#x9;&#x9;}&#xA;&#x9;&#x9;fmt.Println(i)&#xA;&#x9;}&#xA;&#xA;}&#xA;```&#xA;Output:&#xA;&#xA;```&#xA;$ go run 
main.go&#xA;&#xA;3&#xA;0&#xA;4&#xA;4&#xA;2&#xA;1&#xA;2&#xA;3&#xA;4&#xA;1&#xA;&#xA;$ go run main.go&#xA;&#xA;0&#xA;2&#xA;1&#xA;2&#xA;3&#xA;2&#xA;2&#xA;1&#xA;0&#xA;2&#xA;```&#xA;&#xA;Just a simple `SELECT ABS(RANDOM()%5) FROM generate_series(1,10)` query will be executed on the connected database. This query will basically generate a random number between `-4` and `4` and take the absolute value, i.e. now between 0 and 4; it will generate 10 such numbers.&#xA;&#xA;Now, we can move into the actual embedded replica creation of the primary remote database and use it as a local database to perform operations.&#xA;&#xA;## Creating an Embedded Replica of Turso&#39;s LibSQL database&#xA;&#xA;We need to specify a few things before we create the embedded replica, the first being the primary remote database, which typically is a libsql database hosted on Turso or self-hosted. We also need to provide the local database path, where the local replica will be stored, or we can say the copy of the primary libsql database will be created.&#xA;&#xA;We will be using [LibSQL.NewEmbeddedReplicaConnector](https://pkg.go.dev/github.com/levydsa/libsql-go#NewEmbeddedReplicaConnector) to create a local replica of a libsql database. The function takes in two parameters, the first parameter being the local database filename path to create the copy into, and the second parameter being the primary database URL. The function returns a connector object or an error if any. The connector object is then further used with the [OpenDB](https://pkg.go.dev/database/sql#OpenDB) function to create a database connection. The `OpenDB` function returns a reference to the database connection pool which we&#39;ll use to connect and perform operations on the database.&#xA;The same `connector` object can be used to sync with the primary database after performing operations on the local database, via the [Sync](https://pkg.go.dev/github.com/levydsa/libsql-go#Connector.Sync) method. 
This will pull or push changes between the local database and the primary database.&#xA;&#xA;We can configure the syncing mechanism while creating the embedded replica with additional parameters to the `NewEmbeddedReplicaConnector` function. There are [Options](https://pkg.go.dev/github.com/levydsa/libsql-go#Option) that can be passed for these parameters, like:&#xA;&#xA;- `WithAuthToken(string)`: This will be used to authenticate with the primary database.&#xA;- `WithSyncInterval(time.Duration)`: This will be used to specify the interval of syncing between the local and primary database.&#xA;- `WithEncryption(string)`: This will be used to encrypt the local database.&#xA;- `WithReadYourWrites(bool)`: This will be used to specify if the local database can read the newly written changes or not.&#xA;&#xA;So, let&#39;s create an example: we&#39;ll create an embedded replica, make some changes by creating tables, then sync the local database with the primary, and finally insert some data into the local replica and read it back.&#xA;&#xA;#### Create the Embedded Replica&#xA;&#xA;We first need to create a copy of the primary database; as said, we will configure the two parameters that we need to create the embedded replica with `NewEmbeddedReplicaConnector`. 
Then once we have the connector object, we open up a database connection, at that point we can further run queries on the local replica that was just created from the primary remote LibSQL database.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;database/sql&#34;&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;os&#34;&#xA;&#x9;&#34;path/filepath&#34;&#xA;&#xA;&#x9;&#34;github.com/tursodatabase/go-libsql&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#xA;&#x9;dbName := &#34;local.db&#34;&#xA;    // this is not required, but can be used to create a temporary directory and then delete it later&#xA;&#x9;dir, err := os.MkdirTemp(&#34;&#34;, &#34;libsql-*&#34;)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;fmt.Println(&#34;Error creating temporary directory:&#34;, err)&#xA;&#x9;&#x9;os.Exit(1)&#xA;&#x9;}&#xA;&#x9;defer os.RemoveAll(dir)&#xA;&#xA;    // first paramter required for creating NewEmbeddedReplicaConnector&#xA;&#x9;dbPath := filepath.Join(dir, dbName)&#xA;&#x9;fmt.Println(dbPath)&#xA;&#xA;    // second paramter required for creating NewEmbeddedReplicaConnector&#xA;&#x9;dbURL := os.Getenv(&#34;TURSO_DATABASE_URL&#34;)&#xA;&#x9;dbAuthToken := os.Getenv(&#34;TURSO_AUTH_TOKEN&#34;)&#xA;&#xA;&#x9;connector, err := libsql.NewEmbeddedReplicaConnector(dbPath, dbURL, libsql.WithAuthToken(dbAuthToken))&#xA;&#x9;fmt.Println(connector)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;fmt.Fprintf(os.Stderr, &#34;failed to open db %s&#34;, err)&#xA;&#x9;&#x9;os.Exit(1)&#xA;&#x9;}&#xA;&#x9;defer connector.Close()&#xA;&#xA;    // open a database connection from the connector object&#xA;&#x9;db := sql.OpenDB(connector)&#xA;&#x9;fmt.Println(&#34;Connected to database&#34;)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;fmt.Fprintf(os.Stderr, &#34;failed to open db %s&#34;, err)&#xA;&#x9;&#x9;os.Exit(1)&#xA;&#x9;}&#xA;&#x9;defer db.Close()&#xA;}&#xA;```&#xA;&#xA;In the above code, we first create a temporary directory with the help of [MkdirTemp](https://pkg.go.dev/os#MkdirTemp), this is not required, but 
would make cleanup easier later. We then build the path for the local database file. The combined path string `dbPath` will serve as the first parameter to the `NewEmbeddedReplicaConnector`. Then we take the `dbURL` and the `dbAuthToken` from the environment variables `TURSO_DATABASE_URL` and `TURSO_AUTH_TOKEN` respectively. The `dbURL` will serve as the second parameter for the `NewEmbeddedReplicaConnector`, that is, the URL of the primary remote LibSQL database. The function `NewEmbeddedReplicaConnector` returns the `Connector` object if the replica was created successfully, or an `err` if it fails. The connector needs to be closed at the end of the program, so we use `defer connector.Close()`, which will close the connection to the primary database when the program exits. The `sql.OpenDB` call creates the connection to the local database backed by the `connector` object. Finally, we also need to close the local database connection at the end of the program.&#xA;&#xA;Further, we will try to query the local replica, create tables, and append data to it.&#xA;&#xA;### Adding data to the local replica&#xA;&#xA;Once we have the `db` connection to the local database, we can normally query the database as we did in the previous example of querying the local LibSQL database. 
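
The replica can also be kept fresh automatically. To the best of my knowledge, `go-libsql` exposes a `WithSyncInterval` option for periodic background syncing; treat the exact option name as an assumption and verify it against the package documentation. A minimal sketch, reusing `dbPath`, `dbURL`, and `dbAuthToken` from the example above:

```go
	// Assumed option: refresh the replica from the primary every minute.
	connector, err := libsql.NewEmbeddedReplicaConnector(dbPath, dbURL,
		libsql.WithAuthToken(dbAuthToken),
		libsql.WithSyncInterval(time.Minute), // requires importing "time"
	)
```

With this in place, reads from the local replica stay reasonably fresh without calling `Sync` manually.
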
Let&#39;s start by creating a table `posts` in the local replica; this will basically create the schema in the local database.&#xA;&#xA;```go&#xA;    ....&#xA;&#xA;&#x9;createPostTableQuery := `CREATE TABLE IF NOT EXISTS posts(&#xA;        id INTEGER PRIMARY KEY,&#xA;        title VARCHAR(100),&#xA;        description VARCHAR(255),&#xA;        content TEXT&#xA;    );`&#xA;&#xA;&#x9;_, err = db.Exec(createPostTableQuery)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;fmt.Fprintf(os.Stderr, &#34;failed to create table %s&#34;, err)&#xA;&#x9;&#x9;os.Exit(1)&#xA;&#x9;}&#xA;```&#xA;&#xA;The `createPostTableQuery` holds the `SQL` to create the table `posts` if it doesn&#39;t already exist in the database (local replica). Then with the help of the [db.Exec](https://pkg.go.dev/database/sql#DB.Exec) function, we can execute the query; `Exec` returns a `sql.Result` (not rows), which we ignore here since creating a table doesn&#39;t return any rows.&#xA;&#xA;We could sync with the primary right away, but let&#39;s populate the table `posts` with some data before syncing with the primary db.&#xA;&#xA;```go&#xA;&#xA;&#x9;createPostQuery := `INSERT INTO posts(title, description, content) &#xA;        VALUES(?, ?, ?)`&#xA;&#xA;&#x9;_, err = db.Exec(createPostQuery, &#34;test title&#34;, &#34;test description&#34;, &#34;test content&#34;)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;fmt.Fprintf(os.Stderr, &#34;failed to insert %s&#34;, err)&#xA;&#x9;&#x9;os.Exit(1)&#xA;&#x9;}&#xA;&#xA;```&#xA;&#xA;We have created the `createPostQuery` similarly, to insert into the `posts` table in the local replica. The values are passed for the placeholders in the `Exec` function as positional parameters. 
Once we have executed the query, this will populate the `posts` table in the local replica.&#xA;&#xA;We can now finally sync with the primary remote LibSQL database to make sure that the primary database also has these migrations applied.&#xA;&#xA;### Syncing the local replica&#xA;&#xA;Remember, `connector` is for the primary database and `db` is for the local replica. So, we will sync the local replica&#39;s changes to the remote primary database with `connector.Sync`.&#xA;&#xA;```go&#xA;&#xA;&#x9;_, err = connector.Sync()&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;fmt.Fprintf(os.Stderr, &#34;failed to sync %s&#34;, err)&#xA;&#x9;&#x9;os.Exit(1)&#xA;&#x9;}&#xA;&#xA;&#x9;fmt.Printf(&#34;Successfully synced %s db\n&#34;, dbPath)&#xA;&#x9;rows, err := db.Query(&#34;SELECT * FROM posts&#34;)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;fmt.Fprintf(os.Stderr, &#34;failed to query %s&#34;, err)&#xA;&#x9;&#x9;os.Exit(1)&#xA;&#x9;}&#xA;&#x9;defer rows.Close()&#xA;&#x9;for rows.Next() {&#xA;&#x9;&#x9;var id int&#xA;&#x9;&#x9;var title string&#xA;&#x9;&#x9;var description string&#xA;&#x9;&#x9;var content string&#xA;&#x9;&#x9;if err := rows.Scan(&amp;id, &amp;title, &amp;description, &amp;content); err != nil {&#xA;&#x9;&#x9;&#x9;fmt.Fprintf(os.Stderr, &#34;failed to scan %s&#34;, err)&#xA;&#x9;&#x9;&#x9;os.Exit(1)&#xA;&#x9;&#x9;}&#xA;&#x9;&#x9;fmt.Println(id, title, description, content)&#xA;&#x9;}&#xA;&#xA;```&#xA;&#xA;Output:&#xA;&#xA;```bash&#xA;&#xA;$ go run main.go&#xA;&#xA;/tmp/libsql-349052144/local.db&#xA;&amp;{0x2eec9d0 &lt;nil&gt; &lt;nil&gt;}&#xA;Connected to database&#xA;Successfully synced /tmp/libsql-349052144/local.db db&#xA;1 test title test description test content&#xA;```&#xA;&#xA;Once we have synced the local replica, we can now query the database i.e. 
the local replica, with the changes applied. Note that the query could also be run without syncing the database, but then the primary database wouldn&#39;t have the applied changes.&#xA;&#xA;We finally query the local replica with the query `SELECT * FROM posts` and print out the results. This returns the one record in the `posts` table that we inserted.&#xA;&#xA;So, that&#39;s how we basically create a local replica from a remote LibSQL database hosted on Turso. We first create the path for the local database copy, then provide the primary database URL and credentials, request a copy of the primary database, perform any mutations or operations on the local copy, and finally sync up with the remote primary database to persist the data from that replica (acting like a session of database operations).&#xA;&#xA;That wraps the article for now.&#xA;&#xA;For all the code related to this article, you can check out the [Let&#39;s Go with Turso](https://github.com/mr-destructive/lets-go-with-turso) GitHub repo for all the examples and additional examples for using LibSQL with Golang.&#xA;&#xA;&#xA;## Conclusion&#xA;&#xA;So, that is a wrap for this part of the series, we have explored how to create a local embedded replica from a remote LibSQL database hosted on Turso with Golang. In the next part of the series, we will explore how to set up a local LibSQL database server and then connect it with Golang.&#xA;&#xA;Thank you for reading this post. If you have any questions, feedback, and suggestions, feel free to drop them in the comments.&#xA;&#xA;Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>Connect LibSQL Database hosted on Turso in a Golang Application</title>
      <link>https://www.meetgor.com/posts/turso-libsql-db-golang</link>
      <description>Introduction Welcome to the new series in Golang, Let&#39;s Go with Turso. In this series, we will learn how to interact with LibSQL databases with Golang. We will</description>
      <pubDate>Mon, 30 Sep 2024 00:00:00 UTC</pubDate>
<content>## Introduction&#xA;&#xA;Welcome to the new series in Golang, Let&#39;s Go with Turso. In this series, we will learn how to interact with LibSQL databases with Golang. We will connect with a remote/local LibSQL database, create Embedded replicas, set up a local LibSQL database, and so much more as we explore and find out more features of LibSQL.&#xA;&#xA;## Connect a LibSQL database in a Golang application&#xA;&#xA;In this post, we will learn how to connect and query a LibSQL database hosted on Turso/Cloud in a Golang Application using the libsql-client package. We will go from setting up a Golang project, installing the Turso CLI, creating a database on Turso with the CLI, and connecting to the database with the shell and with Golang, to finally querying the database using Golang.&#xA;&#xA;If you want to check out the YouTube video, check this out:&#xA;&#xA;[Connect LibSQL Database hosted on Turso with Golang](https://www.youtube.com/watch?v=vBrvX0X0phw)&#xA;&#xA;&lt;iframe width=&#34;560&#34; height=&#34;315&#34; src=&#34;https://www.youtube.com/embed/vBrvX0X0phw&#34; frameborder=&#34;0&#34; allowfullscreen&gt;&lt;/iframe&gt;&#xA;&#xA;### Initializing a Golang project&#xA;&#xA;Let&#39;s start with initializing a Golang project.&#xA;&#xA;```bash&#xA;# go mod init &lt;git-provider-domain&gt;/&lt;username&gt;/&lt;project-name&gt;&#xA;# Example&#xA;&#xA;go mod init github.com/mr-destructive/lets-go-with-turso&#xA;&#xA;```&#xA;&#xA;This will initialize the project in the current directory, creating a `go.mod` file with the specification of the Golang version and the packages that we will install and use in this module.&#xA;&#xA;### Installing Turso CLI&#xA;&#xA;```bash&#xA;# Linux/Windows&#xA;curl -sSfL https://get.tur.so/install.sh | bash&#xA;&#xA;# macOS&#xA;brew install tursodatabase/tap/turso&#xA;&#xA;```&#xA;&#xA;This will install the Turso CLI. 
To verify that the Turso CLI is installed properly, you can run the version command to check the setup.&#xA;&#xA;```&#xA;turso --version&#xA;```&#xA;&#xA;Once it is installed, we can log in to the Turso platform, simply by running `auth signup` or `auth login` to register or log in.&#xA;&#xA;```&#xA;turso auth signup&#xA;&#xA;# OR&#xA;&#xA;turso auth login&#xA;```&#xA;&#xA;This will redirect to the browser for the OAuth flow; once signed up and logged in, this will allow us to interact with the Turso platform through the CLI that we downloaded.&#xA;&#xA;To make sure we are logged in as the correct user, we can run the `auth whoami` command to get the currently logged-in user.&#xA;&#xA;```&#xA;turso auth whoami&#xA;```&#xA;&#xA;This will print the username if you are logged in. If everything seems correct, we can move ahead with the database creation step.&#xA;&#xA;### Creating a Remote LibSQL Database on Turso&#xA;&#xA;To create a LibSQL database hosted on Turso, we will use the `turso db create` command.&#xA;&#xA;```&#xA;turso db create&#xA;&#xA;# OR&#xA;&#xA;turso db create &lt;name&gt;&#xA;```&#xA;&#xA;This will create a database with the specified name; even if you don&#39;t provide a name, it will give a random friendly two-word name to your database. 
It will create a database on the nearest location available from your location.&#xA;&#xA;This command will output the following:&#xA;&#xA;```&#xA;Created database &lt;db-name&gt; at group default in 1.99s.&#xA;&#xA;Start an interactive SQL shell with the following:&#xA;    turso db shell &lt;db-name&gt;&#xA;&#xA;To see information about the database, including a connection URL, run:&#xA;    turso db show &lt;db-name&gt;&#xA;&#xA;To get an authentication token for the database, run:&#xA;    turso db tokens create &lt;db-name&gt;&#xA;```&#xA;&#xA;As the next steps, it shows how to start an interactive shell, see information about the database, and generate an authentication token for the database.&#xA;&#xA;We will move to the next part, which is to create an authentication token for accessing the database from an external application.&#xA;&#xA;### Generating and Storing Authentication Token for LibSQL Database&#xA;&#xA;After we executed the `db create` command and it created the database on the Turso cloud, there was a command hint for creating a `token` with the `db tokens create` command.&#xA;&#xA;So, this command will create a JWT authentication token that will be used to connect and read/write to the database.&#xA;&#xA;```bash&#xA;turso db tokens create &lt;db-name&gt;&#xA;&#xA;# OR&#xA;&#xA;turso db tokens create &lt;db-name&gt; --read-only&#xA;&#xA;# OR&#xA;&#xA;turso db tokens create &lt;db-name&gt; --expiration 30d&#xA;```&#xA;&#xA;We can use the simple `db tokens create &lt;db-name&gt;` to create an authentication token for the database with read + write access. You can copy the returned token into an environment variable, or wherever your application can read that token.&#xA;&#xA;This could be stored in the environment variable as follows:&#xA;&#xA;```bash&#xA;export TURSO_AUTH_TOKEN=&#34;&lt;token&gt;&#34;&#xA;```&#xA;&#xA;To make a `read-only` token, we can use the flag `--read-only`. 
This is handy if the database is only used as a local replica for querying data.&#xA;This will prevent any write operation on the database.&#xA;&#xA;We can also use the `--expiration` flag to set the duration of the token. By default the value for expiry is `never`, but that could be a little too risky if you are making a serious application. You can, for example, set it to `7d`, which makes the token expire after seven days.&#xA;&#xA;&#xA;Now, we can get the remote database URL and connect to the database. The URL could be obtained by running the command `db show &lt;db-name&gt;`&#xA;&#xA;```&#xA;turso db show &lt;db-name&gt;&#xA;```&#xA;&#xA;This will output the following:&#xA;&#xA;```bash&#xA;Name:           &lt;db-name&gt;&#xA;URL:            libsql://&lt;db-name&gt;-&lt;username&gt;.turso.io&#xA;ID:             &lt;db-id&gt;   &#xA;Group:          default&#xA;Version:        0.24.22&#xA;Locations:      bom&#xA;Size:           4.1 kB&#xA;Archived:       No&#xA;Bytes Synced:   0 B&#xA;Is Schema:      No&#xA;&#xA;Database Instances:&#xA;NAME        TYPE        LOCATION&#xA;bom         primary     bom&#xA;```&#xA;&#xA;The above output shows the meta-information of the database. This also has the URL hosted on Turso. 
We can construct the URL using the name of the database and your username as `libsql://&lt;db-name&gt;-&lt;username&gt;.turso.io`; you can set this in an environment variable or in the configuration wherever you can access it from the application.&#xA;&#xA;To set the URL of the database in your application, you can use the `TURSO_DATABASE_URL` environment variable.&#xA;&#xA;```bash&#xA;export TURSO_DATABASE_URL=&#34;libsql://&lt;db-name&gt;-&lt;username&gt;.turso.io&#34;&#xA;```&#xA;&#xA;So, we have the remote database URL and the access token configured; these are the two pieces we need to connect, read, and write to the LibSQL database.&#xA;&#xA;&#xA;### Installing LibSQL Client for Golang&#xA;&#xA;So, we can install the [libsql-client-go](https://pkg.go.dev/github.com/tursodatabase/libsql-client-go/libsql) package for Golang which will be used as an SDK in Golang to interact with a remote LibSQL database.&#xA;&#xA;```bash&#xA;go get github.com/tursodatabase/libsql-client-go/libsql&#xA;```&#xA;&#xA;This will install the package `libsql` into the golang module. Now, we can use this in our golang application.&#xA;&#xA;### Populating the LibSQL Database&#xA;&#xA;Moving ahead, we have a database, but it doesn&#39;t have data! So let&#39;s create some tables and insert some rows. 
We can use the `db shell` command to open an interactive SQL shell on a remote LibSQL database.&#xA;&#xA;```bash&#xA;turso db shell libsql://&lt;db-name&gt;-&lt;username&gt;.turso.io&#xA;```&#xA;&#xA;This is a default `sqlite3`-like shell, where we can execute SQL statements as well as meta-commands like `.schema`, `.mode`, `.tables`, etc.&#xA;&#xA;```bash&#xA;  .dump       Render database content as SQL&#xA;  .help       List of all available commands.&#xA;  .indexes    List indexes in a table or database&#xA;  .mode       Set output mode&#xA;  .quit       Exit this program&#xA;  .read       Execute commands from a file&#xA;  .schema     Show table schemas.&#xA;  .tables     List all existing tables in the database.&#xA;```&#xA;&#xA;And of course, we can use normal SQL queries to read, write, and delete data from the database.&#xA;&#xA;#### Creating a Table&#xA;&#xA;First, let&#39;s create a simple table called `posts` with columns like `id`, `title` as a `VARCHAR(100)`, `description` as a `VARCHAR(255)`, and `content` as `TEXT` which won&#39;t be `NULL`.&#xA;&#xA;```sql&#xA;CREATE TABLE posts&#xA;  (&#xA;     id          INTEGER PRIMARY KEY,&#xA;     title       VARCHAR(100),&#xA;     description VARCHAR(255),&#xA;     content     TEXT NOT NULL&#xA;  ); &#xA;```&#xA;&#xA;This will create a table `posts` on the LibSQL database; yes, this will mutate the primary LibSQL database hosted on Turso.&#xA;&#xA;#### Inserting Rows&#xA;&#xA;Now, since we have the `posts` table, we will insert some rows into the table.&#xA;&#xA;```sql&#xA;INSERT INTO posts (title, description, content)&#xA;VALUES &#xA;    (&#39;test title&#39;, &#39;test description&#39;, &#39;test content&#39;);&#xA;```&#xA;&#xA;Now, we have some rows populated in the `posts` table. 
We can add more tables and rows into the database as usual, but this is just an example, so I&#39;ll keep it short.&#xA;&#xA;### Connecting to the LibSQL Database&#xA;&#xA;Now, we have something to query from the database once we connect to it from the Golang application.&#xA;&#xA;First, we will grab the two pieces needed to connect to the database.&#xA;&#xA;```bash&#xA;export TURSO_DATABASE_URL=&#34;libsql://&lt;db-name&gt;-&lt;username&gt;.turso.io&#34;&#xA;export TURSO_AUTH_TOKEN=&#34;&lt;token&gt;&#34;&#xA;```&#xA;&#xA;Now, let&#39;s start with the Golang program code.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;    &#34;fmt&#34;&#xA;    &#34;os&#34;&#xA;)&#xA;&#xA;func main() {&#xA;    dbURL := os.Getenv(&#34;TURSO_DATABASE_URL&#34;)&#xA;    dbToken := os.Getenv(&#34;TURSO_AUTH_TOKEN&#34;)&#xA;    dbUrl := fmt.Sprintf(&#34;%s?authToken=%s&#34;, dbURL, dbToken)&#xA;&#xA;    // dbUrl is used when opening the connection in the next snippet;&#xA;    // the blank assignment keeps this snippet compiling on its own&#xA;    _ = dbUrl&#xA;}&#xA;```&#xA;&#xA;This is the basic config to grab the database URL and the authentication token; we then combine `dbURL` and `dbToken` to construct the complete `dbUrl` that will be used to access the database.&#xA;&#xA;Moving ahead, we will import the `database/sql` package that will be used to open the database connection and `github.com/tursodatabase/libsql-client-go/libsql` to connect to the remote LibSQL database.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;database/sql&#34;&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;os&#34;&#xA;&#xA;&#x9;_ &#34;github.com/tursodatabase/libsql-client-go/libsql&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;dbURL := os.Getenv(&#34;TURSO_DATABASE_URL&#34;)&#xA;&#x9;dbToken := os.Getenv(&#34;TURSO_AUTH_TOKEN&#34;)&#xA;&#x9;dbUrl := fmt.Sprintf(&#34;%s?authToken=%s&#34;, dbURL, dbToken)&#xA;&#xA;&#x9;db, err := sql.Open(&#34;libsql&#34;, dbUrl)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;fmt.Fprintf(os.Stderr, &#34;failed to open db %s: %s&#34;, dbUrl, err)&#xA;&#x9;&#x9;os.Exit(1)&#xA;&#x9;}&#xA;&#x9;defer db.Close()&#xA;&#xA;}&#xA;```&#xA;&#xA;The `sql.Open` function opens the connection to the database and returns a `sql.DB` object. The driver selected is `libsql`, and `dbUrl` is the entire URL along with the authentication token.&#xA;&#xA;&#xA;```go&#xA;type Post struct {&#xA;&#x9;Id          int&#xA;&#x9;Title       string&#xA;&#x9;Description string&#xA;&#x9;Content     string&#xA;}&#xA;&#xA;rows, err := db.Query(&#34;SELECT * FROM posts&#34;)&#xA;if err != nil {&#xA;    fmt.Fprintf(os.Stderr, &#34;failed to query: %s&#34;, err)&#xA;    os.Exit(1)&#xA;}&#xA;defer rows.Close()&#xA;&#xA;for rows.Next() {&#xA;    var post Post&#xA;    if err := rows.Scan(&amp;post.Id, &amp;post.Title, &amp;post.Description, &amp;post.Content); err != nil {&#xA;        fmt.Fprintf(os.Stderr, &#34;failed to scan: %s&#34;, err)&#xA;        os.Exit(1)&#xA;    }&#xA;    fmt.Println(post)&#xA;}&#xA;```&#xA;&#xA;Now, let&#39;s query some data from the database. We can construct the `Post` struct that will be used to grab the required fields like `Id`, `Title`, `Description`, and `Content` from the `posts` table in the database.&#xA;&#xA;Then, we will use the `db.Query` function to query the database. This function takes in a query and returns a `sql.Rows` object. We iterate over all the `rows` returned from the database with `rows.Next()`, which fetches each row. Then we can call `rows.Scan` with pointers to the appropriate and respective fields returned in the row. In this case, the `Id`, `Title`, `Description`, and the `Content` are fetched and stored into the `post` fields.&#xA;&#xA;We have fetched the rows and we can do operations on them as required; this was just a basic example. 
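
When only a single row is expected, the standard `database/sql` API also provides `QueryRow`, which defers any error until `Scan` is called. A small variant sketch (not part of the original example), reusing the `db` connection and the `Post` struct from above:

```go
	// Fetch a single post by id; QueryRow never returns nil,
	// and any query error surfaces from Scan.
	var post Post
	err = db.QueryRow("SELECT * FROM posts WHERE id = ?", 1).
		Scan(&amp;post.Id, &amp;post.Title, &amp;post.Description, &amp;post.Content)
	if err != nil {
		fmt.Fprintf(os.Stderr, "failed to query row: %s", err)
		os.Exit(1)
	}
	fmt.Println(post)
```
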
So the entire code can be found below.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;database/sql&#34;&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;os&#34;&#xA;&#xA;&#x9;_ &#34;github.com/tursodatabase/libsql-client-go/libsql&#34;&#xA;)&#xA;&#xA;type Post struct {&#xA;&#x9;Id          int&#xA;&#x9;Title       string&#xA;&#x9;Description string&#xA;&#x9;Content     string&#xA;}&#xA;&#xA;func main() {&#xA;&#x9;dbURL := os.Getenv(&#34;TURSO_DATABASE_URL&#34;)&#xA;&#x9;dbToken := os.Getenv(&#34;TURSO_AUTH_TOKEN&#34;)&#xA;&#x9;dbUrl := fmt.Sprintf(&#34;%s?authToken=%s&#34;, dbURL, dbToken)&#xA;&#xA;&#x9;db, err := sql.Open(&#34;libsql&#34;, dbUrl)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;fmt.Fprintf(os.Stderr, &#34;failed to open db %s: %s&#34;, dbUrl, err)&#xA;&#x9;&#x9;os.Exit(1)&#xA;&#x9;}&#xA;&#x9;defer db.Close()&#xA;&#xA;&#x9;rows, err := db.Query(&#34;SELECT * FROM posts&#34;)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;fmt.Fprintf(os.Stderr, &#34;failed to query: %s&#34;, err)&#xA;&#x9;&#x9;os.Exit(1)&#xA;&#x9;}&#xA;&#x9;defer rows.Close()&#xA;&#xA;&#x9;for rows.Next() {&#xA;&#x9;&#x9;var post Post&#xA;&#x9;&#x9;if err := rows.Scan(&amp;post.Id, &amp;post.Title, &amp;post.Description, &amp;post.Content); err != nil {&#xA;&#x9;&#x9;&#x9;fmt.Fprintf(os.Stderr, &#34;failed to scan: %s&#34;, err)&#xA;&#x9;&#x9;&#x9;os.Exit(1)&#xA;&#x9;&#x9;}&#xA;&#x9;&#x9;fmt.Println(post)&#xA;&#x9;}&#xA;&#xA;}&#xA;```&#xA;The output of the above code will be all the rows present in the `posts` table of the LibSQL database.&#xA;&#xA;```bash&#xA;$ go run remote.go&#xA;&#xA;{1 test title test description test content}&#xA;{2 test title test description test content}&#xA;{3 sample post libsql tutorial create db, connect, create tables, insert rows, sync}&#xA;{4 test title test description test content}&#xA;```&#xA;&#xA;I have added a few more rows to the `posts` table; as you can see we have successfully connected, inserted, and read from the `posts` table in the LibSQL database 
hosted on Turso.&#xA;&#xA;For all the code related to this article, you can check out the [Let&#39;s Go with Turso](https://github.com/mr-destructive/lets-go-with-turso) GitHub repo for all the examples and additional examples for using LibSQL with Golang.&#xA;&#xA;## Conclusion&#xA;&#xA;So, that is a wrap for this part of the series, we have explored how to connect a remote LibSQL database hosted on Turso with Golang. In the next part of the series, we will explore how to create embedded replicas on Turso&#39;s LibSQL database in Golang.&#xA;&#xA;Thank you for reading this post. If you have any questions, feedback, and suggestions, feel free to drop them in the comments.&#xA;&#xA;Happy Coding :)&#xA;</content>
      <type></type>
    </item>
    <item>
      <title>Safely using Maps in Golang: Differences in declaration and initialization</title>
      <link>https://www.meetgor.com/posts/golang-safely-using-maps</link>
      <description>Introduction This week, I was working on one of the API wrapper packages for golang, and that dealt with sending post requests with URL encoded values, setting</description>
      <pubDate>Sat, 31 Aug 2024 00:00:00 UTC</pubDate>
<content>## Introduction&#xA;&#xA;This week, I was working on one of the API wrapper packages for golang, which dealt with sending post requests with URL encoded values, setting cookies, and all the fun stuff. While I was constructing the body, I was using the [url.Values](https://pkg.go.dev/net/url#Values) type and using it to add and set key-value pairs. However, I was getting a weird `nil` pointer reference error in some parts; I thought it was because of some of the variables I set manually. By debugging closer, I found out a common pitfall or bad practice: declaring a variable of a type but not initializing it, which caused the `nil` reference errors.&#xA;&#xA;In this post, I will cover what maps are, how to create them, and especially how to properly declare and initialize them, drawing a proper distinction between the declaration and initialization of maps or any similar data type in golang.&#xA;&#xA;## What is a map in Golang?&#xA;&#xA;A [map](https://go.dev/src/runtime/map.go) or a hashmap in golang is a basic data type that allows us to store key-value pairs. Under the hood, it is a header (`hmap`) structure that holds pointers to buckets (contiguous arrays) which store the hashed key-value pairs, with pointers to overflow buckets for when the current bucket fills up with keys. This is a really smart data structure that provides almost constant time access.&#xA;&#xA;## How to create maps in Golang&#xA;&#xA;To create a simple map in golang, you can take an example of a letter frequency counter using a map of string and integer. 
The map will store the letters as keys and their frequency as values.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import &#34;fmt&#34;&#xA;&#xA;func main() {&#xA;    words := &#34;hello how are you&#34;&#xA;    letters := map[string]int{}&#xA;&#xA;    for _, ch := range words {&#xA;        letters[string(ch)]++&#xA;    }&#xA;&#xA;    fmt.Println(&#34;Letter counts:&#34;)&#xA;    for letter, count := range letters {&#xA;        fmt.Printf(&#34;%s: %d\n&#34;, letter, count)&#xA;    }&#xA;}&#xA;```&#xA;&#xA;```bash&#xA;$ go run main.go&#xA;&#xA;Letter counts:&#xA;e: 2&#xA; : 3&#xA;w: 1&#xA;r: 1&#xA;y: 1&#xA;u: 1&#xA;h: 2&#xA;l: 2&#xA;o: 3&#xA;a: 1&#xA;```&#xA;&#xA;So, by initializing the map as `map[string]int{}` you will get an empty map. This can then be used to populate the keys and values: we iterate over the string, and for each character (rune) we cast it to a string and increment its count. The zero value for `int` is `0`, so if a key is not present yet, its count starts from zero. It is a bit of a double-edged sword though: we can&#39;t tell whether a key is present in the map with the value `0` or not present at all. For that, you need to check if the `key` exists in the map.&#xA;&#xA;For further details, you can check out my [Golang Maps](https://www.meetgor.com/golang-maps/) post in detail.&#xA;&#xA;## Difference between declaration and initialization&#xA;&#xA;There is a difference between declaring and initializing any variable in a programming language, and it has a lot to do with the implementation of the underlying type. In the case of primary data types like `int`, `string`, `float`, etc. there are default/zero values, so declaration and initialization of those variables amount to the same thing. 
However, in the case of maps and slices, declaration just makes sure the variable is available in the scope of the program, whereas initialization sets it to its default/zero value or the actual value that should be assigned.&#xA;&#xA;So, declaration simply makes the variable available within the scope of the program. For maps and slices, declaring a variable without initialization sets it to `nil`, meaning it points to no allocated memory and cannot be written to directly.&#xA;&#xA;Whereas `initialization` allocates memory and sets the variable to a usable state. For maps and slices, you need to explicitly initialize them using syntax like `myMap = make(map[keyType]valueType)` or `slice = []type{}`. Without this initialization, attempting to write to the map or slice will lead to runtime errors, such as a panic on assigning an entry to a nil map.&#xA;&#xA;Let&#39;s look at the values of a map when it is declared/initialized/not initialized.&#xA;&#xA;Imagine you&#39;re building a configuration manager that reads settings from a map. The map will be declared globally but initialized only when the configuration is loaded.&#xA;&#xA;1. 
Declared but not initialized&#xA;    &#xA;&#xA;The below code demonstrates access to a map that is not initialized.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;log&#34;&#xA;)&#xA;&#xA;// Global map to store configuration settings&#xA;var configSettings map[string]string // Declared but not initialized&#xA;&#xA;func main() {&#xA;&#x9;// Attempt to get a configuration setting before initializing the map&#xA;&#x9;serverPort := getConfigSetting(&#34;server_port&#34;)&#xA;&#x9;fmt.Printf(&#34;Server port: %s\n&#34;, serverPort)&#xA;}&#xA;&#xA;func getConfigSetting(key string) string {&#xA;&#x9;if configSettings == nil {&#xA;&#x9;&#x9;log.Fatal(&#34;Configuration settings map is not initialized&#34;)&#xA;&#x9;}&#xA;&#x9;value, exists := configSettings[key]&#xA;&#x9;if !exists {&#xA;&#x9;&#x9;return &#34;Setting not found&#34;&#xA;&#x9;}&#xA;&#x9;return value&#xA;}&#xA;```&#xA;&#xA;```bash&#xA;$ go run main.go&#xA;2024/08/31 10:00:00 Configuration settings map is not initialized&#xA;exit status 1&#xA;```&#xA;&#xA;Since the map is `nil`, the guard in `getConfigSetting` catches it and the program exits before any lookup happens.&#xA;&#xA;2. Declared and Initialized at the same time&#xA;    &#xA;&#xA;The below code demonstrates access to a map that is initialized at declaration.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import &#34;fmt&#34;&#xA;&#xA;// Global map to store configuration settings&#xA;var configSettings = map[string]string{&#xA;&#x9;&#34;server_port&#34;:  &#34;8080&#34;,&#xA;&#x9;&#34;database_url&#34;: &#34;localhost:5432&#34;,&#xA;}&#xA;&#xA;func main() {&#xA;&#x9;serverPort := getConfigSetting(&#34;server_port&#34;)&#xA;&#x9;fmt.Printf(&#34;Server port: %s\n&#34;, serverPort)&#xA;}&#xA;&#xA;func getConfigSetting(key string) string {&#xA;&#x9;value, exists := configSettings[key]&#xA;&#x9;if !exists {&#xA;&#x9;&#x9;return &#34;Setting not found&#34;&#xA;&#x9;}&#xA;&#x9;return value&#xA;}&#xA;```&#xA;&#xA;```bash&#xA;$ go run main.go&#xA;Server port: 8080&#xA;```&#xA;&#xA;3. 
Declared and later initialized&#xA;    &#xA;&#xA;The below code demonstrates access to a map that is initialized later.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;log&#34;&#xA;)&#xA;&#xA;// Global map to store configuration settings&#xA;var configSettings map[string]string // declared but not initialized&#xA;&#xA;func main() {&#xA;&#x9;// Initialize configuration settings&#xA;&#x9;initializeConfigSettings()&#xA;    // if the function is not called, the map will be nil&#xA;&#xA;&#x9;// Get a configuration setting safely&#xA;&#x9;serverPort := getConfigSetting(&#34;server_port&#34;)&#xA;&#x9;fmt.Printf(&#34;Server port: %s\n&#34;, serverPort)&#xA;}&#xA;&#xA;func initializeConfigSettings() {&#xA;&#x9;if configSettings == nil {&#xA;&#x9;&#x9;configSettings = make(map[string]string) // Properly initialize the map&#xA;&#x9;&#x9;configSettings[&#34;server_port&#34;] = &#34;8080&#34;&#xA;&#x9;&#x9;configSettings[&#34;database_url&#34;] = &#34;localhost:5432&#34;&#xA;&#x9;&#x9;fmt.Println(&#34;Configuration settings initialized&#34;)&#xA;&#x9;}&#xA;}&#xA;&#xA;func getConfigSetting(key string) string {&#xA;&#x9;if configSettings == nil {&#xA;&#x9;&#x9;log.Fatal(&#34;Configuration settings map is not initialized&#34;)&#xA;&#x9;}&#xA;&#x9;value, exists := configSettings[key]&#xA;&#x9;if !exists {&#xA;&#x9;&#x9;return &#34;Setting not found&#34;&#xA;&#x9;}&#xA;&#x9;return value&#xA;}&#xA;```&#xA;&#xA;```bash&#xA;$ go run main.go&#xA;Configuration settings initialized&#xA;Server port: 8080&#xA;```&#xA;&#xA;&#xA;In the above code, we declared the global map `configSettings` but didn&#39;t initialize it at that point. We initialize the map in the `main` function just before accessing it; in a real application, the initialization and the reads could live in different parts of the code, all sharing the global `configSettings`. By initializing it in the required scope, we prevent nil pointer access errors. 
We only initialize the map if it is `nil`, i.e. it has not been initialized elsewhere in the code. This prevents overriding the map and flushing out the config set from other parts of the scope.&#xA;&#xA;## Pitfalls in accessing uninitialized maps&#xA;&#xA;But since a map is backed by allocated memory under the hood, it comes with its own pitfalls, like nil map panics when the map is written to before being initialized.&#xA;&#xA;Let&#39;s take a look at an example, a real case where this might happen.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;net/url&#34;&#xA;)&#xA;&#xA;func main() {&#xA;        var vals url.Values&#xA;        vals.Add(&#34;foo&#34;, &#34;bar&#34;)&#xA;        fmt.Println(vals)&#xA;}&#xA;```&#xA;&#xA;This will result in a runtime panic.&#xA;&#xA;```bash&#xA;$ go run main.go&#xA;panic: assignment to entry in nil map&#xA;&#xA;goroutine 1 [running]:&#xA;net/url.Values.Add(...)&#xA;        /usr/local/go/src/net/url/url.go:902&#xA;main.main()&#xA;        /home/meet/code/playground/go/main.go:10 +0x2d&#xA;exit status 2&#xA;```&#xA;&#xA;This is because [url.Values](https://pkg.go.dev/net/url#Values) is a map of string keys to lists of string values. Since the underlying type of `Values` is a map, and in the example we have only declared the variable `vals` with the type `url.Values`, it points to a `nil` reference, hence the panic message when adding a value to it. So, it is a good practice to use `make` while declaring or initializing a map data type. 
If you are not sure whether the underlying type is a `map`, you can use `Type{}` to initialize an empty value of that type.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;net/url&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;vals := make(url.Values)&#xA;&#x9;// OR&#xA;&#x9;// vals := url.Values{}&#xA;&#x9;vals.Add(&#34;foo&#34;, &#34;bar&#34;)&#xA;&#x9;fmt.Println(vals)&#xA;}&#xA;```&#xA;&#xA;```bash&#xA;$ go run urlvals.go&#xA;map[foo:[bar]]&#xA;```&#xA;&#xA;It is also recommended by the [Go team](https://go.dev/blog/maps) to use the `make` function while initializing a map. So, either use `make` for maps, slices, and channels, or initialize an empty value with `Type{}`. Both work similarly, but the latter is more generally applicable, since it works for structs as well.&#xA;&#xA;## Conclusion&#xA;&#xA;Understanding the difference between declaring and initializing maps in Golang is essential for any developer. As we&#39;ve explored, simply declaring a map variable without initializing it can lead to runtime errors, such as panics when attempting to modify a nil map. Initializing a map ensures that it is properly allocated in memory and ready for use, thereby avoiding these pitfalls.&#xA;&#xA;By following best practices, such as using the `make` function or initializing with `Type{}`, you can prevent common issues related to uninitialized maps. Always ensure that maps are explicitly initialized before use to safeguard against unexpected nil map assignment panics.&#xA;&#xA;Thank you for reading this post. If you have any questions, feedback, or suggestions, feel free to drop them in the comments.&#xA;&#xA;Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>NGINX Survival Guide: Serving Web Applications</title>
      <link>https://www.meetgor.com/posts/nginx-02-web-servers</link>
      <description>Introduction In the second part of our NGINX Survival Guide, we dive into the practical aspects of using NGINX to serve web applications. This section will guid</description>
      <pubDate>Sun, 21 Jul 2024 00:00:00 UTC</pubDate>
      <content>## Introduction&#xA;&#xA;In the second part of our NGINX Survival Guide, we dive into the practical aspects of using NGINX to serve web applications. This section will guide you through the essential tasks of setting up a basic HTTP server, configuring NGINX to serve content from custom directories, and using it as a reverse proxy to forward requests to backend servers.&#xA;&#xA;NGINX is a versatile web server that can be used to serve applications in a variety of ways, from simple web servers to complex proxy configurations. NGINX can be used to serve static HTML content, proxy requests to a backend server, or load balance traffic across multiple servers. In this guide, we&#39;ll explore the different ways to use NGINX to serve applications, including setting up a simple HTTP server, serving content from custom directories, and using it to load balance traffic across multiple upstream servers.&#xA;&#xA;## Simple HTTP Server&#xA;&#xA;NGINX serves as the default HTTP server on port 80 of your local machine if NGINX is properly installed and running on your system. If you head on to the localhost, you will see the default NGINX HTML page like the one below:&#xA;&#xA;![NGINX Default Page](https://meetgor-cdn.pages.dev/nginx-survival-guide/nginx-default-page.png)&#xA;&#xA;This is the default HTML page served by NGINX as per the configuration in the `/etc/nginx/nginx.conf` file. 
The default folder for NGINX to serve HTML content is located at `/usr/share/nginx/html/index.html`. If you change the contents of this file and restart NGINX, the HTTP server will load the new HTML content.&#xA;&#xA;Let&#39;s first look at how we can serve a simple HTTP message within the configuration file in NGINX.&#xA;&#xA;## Serving simple text&#xA;&#xA;We will try to write our simple HTTP server from scratch, so it would be nice to empty the existing `/etc/nginx/nginx.conf` file or use other ports to serve the content rather than the default `127.0.0.1:80` port.&#xA;&#xA;```nginx&#xA;http {&#xA;    server {&#xA;        listen 8000;&#xA;        return 200 &#34;Hello, World!\n&#34;;&#xA;    }&#xA;}&#xA;```&#xA;&#xA;The above config will serve the text `Hello, World!` when there is a request to the URL `127.0.0.1:8000` or `localhost:8000`. You can change the port per your requirements and even add a `server_name` for your domain name.&#xA;&#xA;```bash&#xA;$ curl http://127.0.0.1:8000 &#xA;Hello, World!&#xA;&#xA;&#xA;$ curl -i http://127.0.0.1:8000&#xA;HTTP/1.1 200 OK&#xA;Server: nginx/1.18.0 (Ubuntu)&#xA;Date: Sat, 03 Feb 2024 11:41:16 GMT&#xA;Content-Type: application/octet-stream&#xA;Content-Length: 14&#xA;Connection: keep-alive&#xA;&#xA;Hello, World!&#xA;```&#xA;&#xA;As we can see, NGINX served the HTTP content when the request was made to port 8000 on the localhost.&#xA;&#xA;## Serving from a custom path/folder&#xA;&#xA;But things are not this simple in the real world; we often need to serve an entire directory of HTML pages. We need to add the `root` directive with the path to the folder where our HTML content resides. 
The path should have the `index.html` file as the starting point of the request.&#xA;&#xA;```nginx&#xA;http {&#xA;    server {&#xA;        listen 8000;&#xA;        root /srv/techstructive-blog;&#xA;        index index.html;&#xA;    }&#xA;}&#xA;```&#xA;&#xA;**NOTE: The path to the HTML content needs to be accessible and the Nginx process should have the read permission to serve the contents.**&#xA;&#xA;It is commonly recommended to store HTML/static content files in directories such as `/srv` or `/var/www`. These paths follow conventions for serving static files and web applications in Unix-type operating systems. While it&#39;s not a strict requirement, adhering to these conventions can improve the organization and maintainability of web content.&#xA;&#xA;## Serving from a web server&#xA;&#xA;If you already have a web server running in a port on your system, you could use Nginx as a gateway to the application instead of exposing your application to the internet.&#xA;&#xA;We could use the [proxy\_pass](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass) directive in the location setting to specify which URL to pass the request to, the `listen` will forward the request to the proxy specified in the location directive.&#xA;&#xA;```nginx&#xA;http {&#xA;&#x9;server {&#xA;&#x9;&#x9;listen 80;&#xA;&#x9;&#x9;location / {&#xA;&#x9;&#x9;&#x9;proxy_pass http://localhost:8001;&#xA;&#x9;&#x9;}&#xA;&#x9;}&#xA;}&#xA;```&#xA;&#xA;In the above example, the NGINX listens to port 80 in the local system and sends the request to the localhost at port 8001. 
The `proxy_pass` directive is used to specify the URL to forward the request to.&#xA;&#xA;* **listen 80:** Nginx listens for incoming requests on port 80, the standard HTTP port.&#xA;    &#xA;* **location /:** This directive matches all incoming requests, regardless of the path.&#xA;    &#xA;* **proxy\_pass http://localhost:8001:** Requests are forwarded to the web application running on localhost at port 8001.&#xA;    &#xA;&#xA;This example configuration is a basic building block for setting up more complex proxy configurations with NGINX.&#xA;&#xA;## Serving from Multiple Upstream Servers&#xA;&#xA;NGINX can also serve content from multiple upstream servers, balancing the load between them. This is useful for high-traffic applications that require multiple backend servers to handle the load.&#xA;&#xA;What are upstream servers, you might ask? In the context of NGINX, upstream servers refer to the backend servers that handle the actual processing of requests. NGINX acts as a gateway, forwarding incoming requests to these upstream servers. This setup allows NGINX to manage the traffic efficiently and distribute it among multiple servers, which can be particularly beneficial for high-traffic applications. 
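One detail worth knowing about the proxy setup above: the backend only sees connections coming from NGINX itself, not from the original client. A common refinement (a sketch, not part of the original configuration) is to forward the client's details with the standard `proxy_set_header` directives:

```nginx
http {
    server {
        listen 80;
        location / {
            proxy_pass http://localhost:8001;
            # Forward the original host and client IP to the backend
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}
```

With these headers set, the backend application can log and act on the real client address instead of seeing every request as coming from 127.0.0.1.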
For example, you might have your application running on [`localhost:8001`](http://localhost:8001) and [`localhost:8002`](http://localhost:8002).&#xA;&#xA;Here’s an example configuration:&#xA;&#xA;```nginx&#xA;http {&#xA;    upstream myapp {&#xA;        server backend1.example.com;&#xA;        server backend2.example.com;&#xA;        server backend3.example.com;&#xA;    }&#xA;&#xA;    server {&#xA;        listen 80;&#xA;&#xA;        location / {&#xA;            proxy_pass http://myapp;&#xA;        }&#xA;    }&#xA;}&#xA;```&#xA;&#xA;In this configuration:&#xA;&#xA;* The `upstream` block defines a named group of backend servers (`myapp`).&#xA;    &#xA;* The `server` block listens on port 80 and proxies requests to the upstream group defined earlier.&#xA;    &#xA;* `upstream myapp`: This directive creates a group of backend servers named `myapp`.&#xA;    &#xA;* [`server backend1.example.com`](http://backend1.example.com) : These directives list the backend servers that will handle the requests. These can be specified by hostname, IP address, or combination.&#xA;    &#xA;* `proxy_pass` [`http://myapp`](http://myapp): This directive tells NGINX to forward incoming requests to the `myapp` upstream group.&#xA;    &#xA;&#xA;### Why Use Upstream Servers?&#xA;&#xA;Using upstream servers has several advantages:&#xA;&#xA;* Scalability: By distributing requests across multiple servers, you can handle more traffic and scale your application horizontally.&#xA;    &#xA;* Fault Tolerance: If one of the backend servers goes down, NGINX can continue to serve requests using the remaining servers, ensuring high availability.&#xA;    &#xA;* Load Distribution: Upstream servers help in balancing the load, which can improve the performance and responsiveness of your web application.&#xA;    &#xA;&#xA;The below configuration sets up NGINX to act as a gateway that distributes incoming traffic to multiple upstream servers. 
It defines an upstream block with servers at [`localhost:8001`](http://localhost:8001) and [`localhost:8002`](http://localhost:8002), and forwards requests to these servers.&#xA;&#xA;```nginx&#xA;http {&#xA;    upstream myapp {&#xA;        server localhost:8001;&#xA;        server localhost:8002;&#xA;    }&#xA;&#xA;    server {&#xA;        listen 80;&#xA;&#xA;        location / {&#xA;            proxy_pass http://myapp;&#xA;        }&#xA;    }&#xA;}&#xA;```&#xA;&#xA;The provided NGINX configuration sets up an upstream block named `myapp` with two backend servers running on localhost at ports 8001 and 8002. The server block listens on port 80 and uses a location block to match all incoming requests to the root URL (`/`). These requests are forwarded to the `myapp` upstream group via the `proxy_pass` directive, allowing NGINX to distribute the requests between the two backend servers, effectively balancing the load and enhancing the application&#39;s performance and reliability.&#xA;&#xA;## Conclusion&#xA;&#xA;In this part of the series, we learned how to set up a simple HTTP server, serve content from custom directories, and use NGINX as a gateway to backend servers, which covered essential ways to utilize NGINX for serving web applications.&#xA;&#xA;That&#39;s it from this part of the series. We will look in detail at how to use NGINX as a load balancer and reverse proxy, serve static files, and cache content in the next part of the series, where we&#39;ll dive deeper into advanced NGINX configurations.&#xA;Thank you for reading; hopefully you found this helpful. If you have any feedback, questions, or queries, drop them below in the comments or reach out to me directly on my social handles.&#xA;&#xA;Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>Golang Web: PUT Method</title>
      <link>https://www.meetgor.com/posts/golang-web-put-method</link>
      <description>Introduction In this section of the series, we will be exploring how to send a HTTP request in golang. We will understand how to send a basic PUT request, creat</description>
      <pubDate>Sat, 15 Jun 2024 00:00:00 UTC</pubDate>
      <content>## Introduction&#xA;&#xA;In this section of the series, we will be exploring how to send a `PUT` HTTP request in golang. We will understand how to send a basic PUT request, create an HTTP request, update a resource on a server, parsing the content from struct to json, headers, etc in the following section of this post.&#xA;&#xA;## What is a PUT Method&#xA;&#xA;A PUT method is a type of request that is used to update or modify an entire resource on a server/database.&#xA;&#xA;Imagine you have ordered a pizza at a restaurant and realized you want to change the toppings after you&#39;ve already placed the order. With a PUT request, it&#39;s like informing the waiter about the changes you want to make to your existing order. You specify the updated toppings or any modifications (the data you send). The waiter then takes this updated information (PUT request) back to the kitchen (the server) to apply the changes to your order.&#xA;&#xA;Let&#39;s say you created a order.&#xA;&#xA;```nginx&#xA;PUT /api/order/456 HTTP/1.1&#xA;Host: example.com&#xA;Content-Type: application/json&#xA;Content-Length: 123&#xA;&#xA;{&#xA;    &#34;userID&#34;: 123,&#xA;    &#34;orderID&#34;: 456,&#xA;    &#34;items&#34;: [&#xA;        {&#xA;            &#34;itemID&#34;: 789,&#xA;            &#34;name&#34;: &#34;Pizza&#34;,&#xA;            &#34;quantity&#34;: 2,&#xA;            &#34;toppings&#34;: [&#34;Mushrooms&#34;]&#xA;        }&#xA;    ]&#xA;}&#xA;```&#xA;&#xA;In the context of web development, PUT requests are often used for actions such as:&#xA;&#xA;* Updating existing records or resources&#xA;    &#xA;* Modifying specific parts of an existing resource&#xA;    &#xA;* Replacing an entire resource with updated data&#xA;    &#xA;&#xA;Here&#39;s an example of what the PUT request might look like in this scenario:&#xA;&#xA;```nginx&#xA;PUT /api/order/456 HTTP/1.1&#xA;Host: example.com&#xA;Content-Type: application/json&#xA;Content-Length: 155&#xA;&#xA;{&#xA;    &#34;userID&#34;: 
123,&#xA;    &#34;orderID&#34;: 456,&#xA;    &#34;items&#34;: [&#xA;        {&#xA;            &#34;itemID&#34;: 789,&#xA;            &#34;name&#34;: &#34;Pizza&#34;,&#xA;            &#34;quantity&#34;: 2,&#xA;            &#34;toppings&#34;: [&#34;Mushrooms&#34;, &#34;Olives&#34;]&#xA;        }&#xA;    ]&#xA;}&#xA;```&#xA;&#xA;In this example:&#xA;&#xA;* The PUT method is used to update the resource identified by `/api/order/456`.&#xA;    &#xA;* The application/json is the content type of the request.&#xA;    &#xA;* The 155 is the content length of the request.&#xA;    &#xA;* The body contains the updated details of the order, including the addition of toppings to the pizza.&#xA;    &#xA;&#xA;PUT requests are crucial for maintaining and updating data in applications where accuracy and consistency are paramount, ensuring that resources are kept current and reflect the latest changes made by users or systems&#xA;&#xA;## Why the need of PUT Method&#xA;&#xA;In the world of HTTP requests, we use the PUT method to update or modify an entire resource on a server or database. This method is crucial because the POST method, while convenient for creating new data, is not intended for updating existing resources according to standard conventions. While it&#39;s possible to misuse the POST method for updates internally, doing so can lead to confusion and inconsistencies in how requests are understood and processed.&#xA;&#xA;## How PUT Method request works&#xA;&#xA;A [PUT](https://www.rfc-editor.org/rfc/rfc9110#PUT) request is utilized to send data to a server for the purpose of updating a resource. When a client (such as a browser or other APIs) sends a PUT request to the server&#39;s API endpoint, it includes data in the request body, typically formatted as JSON, XML, or form data.&#xA;&#xA;The server processes the PUT request by first identifying the resource to be updated, either through the URL or data provided in the request body. 
It then validates, parses, and applies the data from the request body to make modifications to the resource. Following this, the server returns a response that includes a status code indicating the success or failure of the operation. Optionally, the response may also include the updated resource in the response body.&#xA;&#xA;Unlike the POST method, which is primarily used for creating new resources, PUT is specifically designed for updating existing resources on the server. The request body of a PUT contains the data necessary for the update, while the URL identifies the specific resource to be updated.&#xA;&#xA;In summary, PUT requests facilitate the transfer of data to the server specifically for updating resources, ensuring that changes to existing data are accurately processed and reflected.&#xA;&#xA;## Basic PUT Method&#xA;&#xA;To send a `PUT` request to an API in golang, we need to create an `http.Request` object. For the `POST` method, the `http` package has the `Post` function defined; however, for the `PUT` method, there is no separate function. The Go team has so far decided against adding dedicated helper functions for every HTTP method. 
There have been a couple of discussions on this on [GitHub](https://github.com/golang/go/issues/22841), but it has not been adopted as of 2024.&#xA;&#xA;So, we need to create an `http.Request` object for the `PUT` method.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;net/http&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;apiURL := &#34;https://reqres.in/api/users/5&#34;&#xA;&#xA;&#x9;req, err := http.NewRequest(http.MethodPut, apiURL, nil)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;&#x9;resp, err := http.DefaultClient.Do(req)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;&#x9;fmt.Println(resp.StatusCode)&#xA;&#xA;&#x9;defer resp.Body.Close()&#xA;}&#xA;```&#xA;&#xA;```bash&#xA;$ go run main.go&#xA;&#xA;200&#xA;```&#xA;&#xA;The above code sends a `PUT` request to the [`https://reqres.in/api/users/5`](https://reqres.in/api/users/5) endpoint. The resource we are trying to update is identified by `5`, which is likely the `id` of the user in the server&#39;s database.&#xA;&#xA;## PUT Method with JSON&#xA;&#xA;Marshaling and encoding are essential in Go for preparing structured data, such as from a struct, into JSON format suitable for HTTP requests like PUT. This conversion ensures data integrity and compatibility between Go types and JSON representations. It&#39;s crucial when updating resources on servers, as APIs often require specific data formats for processing updates correctly. Marshaling transforms Go structs into JSON bytes, while encoding further prepares them as request bodies, facilitating seamless communication with web services. 
This process ensures data consistency and adherence to API specifications, maintaining robust communication in distributed systems.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;bytes&#34;&#xA;&#x9;&#34;encoding/json&#34;&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;io&#34;&#xA;&#x9;&#34;net/http&#34;&#xA;)&#xA;&#xA;type User struct {&#xA;&#x9;Name   string `json:&#34;name&#34;`&#xA;&#x9;Salary int    `json:&#34;salary&#34;`&#xA;&#x9;Age    string `json:&#34;age&#34;`&#xA;&#x9;ID     int    `json:&#34;id,omitempty&#34;`&#xA;}&#xA;&#xA;type UserResponse struct {&#xA;&#x9;Status  string `json:&#34;status&#34;`&#xA;&#x9;Message string `json:&#34;message&#34;`&#xA;}&#xA;&#xA;func main() {&#xA;&#x9;user := User{&#xA;&#x9;&#x9;Name:   &#34;Alice&#34;,&#xA;&#x9;&#x9;Salary: 50000,&#xA;&#x9;&#x9;Age:    &#34;25&#34;,&#xA;&#x9;}&#xA;&#x9;apiURL := &#34;https://dummy.restapiexample.com/api/v1/update/11&#34;&#xA;&#xA;&#x9;// marshalling process&#xA;&#x9;// converting Go specific data structure/types to JSON&#xA;&#x9;bodyBytes, err := json.Marshal(user)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;&#x9;fmt.Println(string(bodyBytes))&#xA;&#xA;&#x9;// reading json into a buffer/in-memory&#xA;&#x9;body := bytes.NewBuffer(bodyBytes)&#xA;&#xA;&#x9;// PUT request&#xA;&#x9;req, err := http.NewRequest(http.MethodPut, apiURL, body)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;&#x9;req.Header.Set(&#34;Content-Type&#34;, &#34;application/json&#34;)&#xA;&#x9;resp, err := http.DefaultClient.Do(req)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;&#x9;defer resp.Body.Close()&#xA;&#x9;fmt.Println(resp.StatusCode)&#xA;&#x9;if resp.StatusCode == 429 {&#xA;&#x9;&#x9;fmt.Println(&#34;too many requests&#34;)&#xA;&#x9;&#x9;return&#xA;&#x9;}&#xA;&#x9;respBody, err := io.ReadAll(resp.Body)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;&#x9;fmt.Println(string(respBody))&#xA;&#xA;&#x9;// unmarshalling process&#xA;&#x9;// converting JSON to Go specific data structure/types&#xA;&#x9;var 
userResponse UserResponse&#xA;&#x9;if err := json.Unmarshal(respBody, &amp;userResponse); err != nil {&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;&#x9;fmt.Println(userResponse)&#xA;}&#xA;```&#xA;&#xA;```bash&#xA;$ go run json.go                                                                 &#xA;{&#34;name&#34;:&#34;Alice&#34;,&#34;salary&#34;:50000,&#34;age&#34;:&#34;25&#34;}&#xA;200                                                                              &#xA;{&#34;status&#34;:&#34;success&#34;,&#34;data&#34;:[],&#34;message&#34;:&#34;Successfully! Record has been updated.&#34;}&#xA;{success Successfully! Record has been updated.}&#xA;```&#xA;&#xA;In the provided Go code example, the `json.Marshal` function is used to convert a Go struct (`User`) into a JSON formatted byte slice (`[]byte`). Here&#39;s a breakdown of the steps involved:&#xA;&#xA;* Struct Definition: Define a Go struct with json tags.&#xA;    &#xA;* Marshalling: Use json.Marshal to convert the struct into JSON byte slice.&#xA;    &#xA;* Buffer Creation: Wrap the JSON byte slice into an in-memory buffer (bytes.Buffer).&#xA;    &#xA;* Request Sending: Send a PUT request with the buffer as the request body and set appropriate headers.&#xA;    &#xA;&#xA;Let&#39;s explore it step by step in detail:&#xA;&#xA;When using the PUT method in Go to update a resource on a server, you often need to send data in JSON format as the request body. Here&#39;s how you can achieve this using marshaling and encoding:&#xA;&#xA;1. Define the Struct&#xA;    &#xA;&#xA;```go&#xA;type User struct {&#xA;    Name   string `json:&#34;name&#34;`&#xA;    Salary int    `json:&#34;salary&#34;`&#xA;    Age    string `json:&#34;age&#34;`&#xA;    ID     int    `json:&#34;id,omitempty&#34;`&#xA;}&#xA;```&#xA;&#xA;Define a Go struct (`User`) that represents the data structure you want to send in JSON format. The json tags specify how each field should be serialized into JSON.&#xA;&#xA;2. 
Create an Object&#xA;    &#xA;&#xA;```go&#xA;user := User{&#xA;    Name:   &#34;Alice&#34;,&#xA;    Salary: 50000,&#xA;    Age:    &#34;25&#34;,&#xA;}&#xA;```&#xA;&#xA;Create an instance of the User struct (user) with sample data. This data will be marshaled into JSON format to send in the `PUT` request body.&#xA;&#xA;3. Marshal the Struct&#xA;    &#xA;&#xA;```go&#xA;bodyBytes, err := json.Marshal(user)&#xA;if err != nil {&#xA;    panic(err)&#xA;}&#xA;```&#xA;&#xA;Use json.Marshal(user) to convert the user struct into a JSON byte slice (bodyBytes). This byte slice contains the serialized JSON representation of the User struct.&#xA;&#xA;4. Create a Buffer&#xA;    &#xA;&#xA;```go&#xA;body := bytes.NewBuffer(bodyBytes)&#xA;```&#xA;&#xA;Use `bytes.NewBuffer(bodyBytes)` to create an in-memory buffer (`body`) containing the JSON byte slice (`bodyBytes`). The buffer implements the `io.Reader` interface needed for the PUT request body.&#xA;&#xA;5. Create a PUT Request&#xA;    &#xA;&#xA;```go&#xA;req, err := http.NewRequest(http.MethodPut, apiURL, body)&#xA;if err != nil {&#xA;    panic(err)&#xA;}&#xA;```&#xA;&#xA;Use http.NewRequest to create a new PUT request to the specified URL with the JSON buffer (`body`) as the request body. Set appropriate headers if needed (e.g., Content-Type as application/json).&#xA;&#xA;6. Send the Request&#xA;    &#xA;&#xA;```go&#xA;resp, err := http.DefaultClient.Do(req)&#xA;if err != nil {&#xA;    panic(err)&#xA;}&#xA;```&#xA;&#xA;Use [`http.DefaultClient.Do`](http://http.DefaultClient.Do)`(req)` to execute the PUT request and obtain the response. Handle any errors that may occur during the request execution.&#xA;&#xA;7. Process the Response&#xA;    &#xA;&#xA;```go&#xA;respBody, err := io.ReadAll(resp.Body)&#xA;if err != nil {&#xA;    panic(err)&#xA;}&#xA;```&#xA;&#xA;Use `io.ReadAll(resp.Body)` to read and store the response body from the server. Handle any errors encountered during the reading process.&#xA;&#xA;8. 
Unmarshal the Response&#xA;    &#xA;&#xA;```go&#xA;var userResponse UserResponse&#xA;if err := json.Unmarshal(respBody, &amp;userResponse); err != nil {&#xA;    panic(err)&#xA;}&#xA;```&#xA;&#xA;Use `json.Unmarshal(respBody, &amp;userResponse)` to deserialize the JSON response body into a Go struct. This allows you to work with the response data in a structured manner.&#xA;&#xA;The parsing of files and form data is also possible with `PUT` requests, however, that has been covered in the [POST Method](https://meetgor.com/golang-web-post-method). Those snippets would be handy in these request method as well.&#xA;&#xA;I have also included some more examples of PUT requests [here](https://github.com/Mr-Destructive/100-days-of-golang/blob/main/web/methods/put/).&#xA;&#xA;That&#39;s it from the 35th part of the series, all the source code for the examples are linked in the GitHub on the [100 days of Golang](https://github.com/Mr-Destructive/100-days-of-golang/tree/main/web/methods/put/) repository.&#xA;&#xA;[100-days-of-golang](https://github.com/Mr-Destructive/100-days-of-golang)&#xA;&#xA;## Conclusion&#xA;&#xA;That&#39;s it from this post of the series, a post on the PUT method in golang :)&#xA;&#xA;We have covered topics like creating basic PUT requests and marshaling golang types into JSON format. Hope you found this article helpful. If you have any queries, questions, or feedback, please let me know in the comments or on my social handles. Thank you for reading.&#xA;&#xA;Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>Golang Web: POST Method</title>
      <link>https://www.meetgor.com/posts/golang-web-post-method</link>
      <description>Introduction In this section of the series, we will be exploring how to send a HTTP request in golang. We will understand how to send a basic POST request, crea</description>
      <pubDate>Sun, 10 Mar 2024 00:00:00 UTC</pubDate>
      <content>## Introduction&#xA;&#xA;In this section of the series, we will be exploring how to send a `POST` HTTP request in golang. We will understand how to send a basic POST request, create an HTTP request, and parse json, structs into the request body, add headers, etc in the following sections of this post. We will understand how to marshal the golang struct/types into JSON format, send files in the request, and handle form data with examples of each in this article. Let&#39;s answer a few questions first.&#xA;&#xA;## What is a POST request?&#xA;&#xA;POST method is a type of request that is used to send data to a server(a machine on the internet).&#xA;&#xA;Imagine you are placing an order at a restaurant. With a GET request, it would be like asking the waiter, &#34;What kind of pizza do you have?&#34; The waiter would respond by telling you the menu options (the information retrieved from the server).&#xA;&#xA;However, a POST request is more like giving your completed order to the waiter. You tell them the specific pizza you want, its size, and any additional toppings (the data you send). 
The waiter then takes this information (POST request) back to the kitchen (the server) to process it (fulfill your order).&#xA;&#xA;In the world of web development, POST requests are often used for things like:&#xA;&#xA;* Submitting forms (e.g., contact forms, login forms)&#xA;* Uploading files (e.g., photos, videos)&#xA;* Creating new accounts&#xA;* Sending data to be processed (e.g., online purchases)&#xA;&#xA;Here&#39;s an example of what the POST request might look like in this scenario:&#xA;&#xA;```nginx&#xA;POST /api/order HTTP/1.1&#xA;Host: example.com&#xA;Content-Type: application/json&#xA;Content-Length: 123&#xA;&#xA;{&#xA;    &#34;userID&#34;: 123,&#xA;    &#34;orderID&#34;: 456,&#xA;    &#34;items&#34;: [&#xA;        {&#xA;            &#34;itemID&#34;: 789,&#xA;            &#34;name&#34;: &#34;Pizza&#34;,&#xA;            &#34;quantity&#34;: 2&#xA;        },&#xA;        {&#xA;            &#34;itemID&#34;: 999,&#xA;            &#34;name&#34;: &#34;Burger&#34;,&#xA;            &#34;quantity&#34;: 1&#xA;        }&#xA;    ]&#xA;}&#xA;```&#xA;&#xA;In this example:&#xA;&#xA;* The `POST` method is used to send data to the server.&#xA;* The `/api/order` is the endpoint of the server.&#xA;* The `application/json` is the content type of the request.&#xA;* The `123` is the content length of the request.&#xA;* The `{&#34;userID&#34;: 123, &#34;orderID&#34;: 456, &#34;items&#34;: [{&#34;itemID&#34;: 789, &#34;name&#34;: &#34;Pizza&#34;, &#34;quantity&#34;: 2}, {&#34;itemID&#34;: 999, &#34;name&#34;: &#34;Burger&#34;, &#34;quantity&#34;: 1}]}` is the body of the request.&#xA;&#xA;&#xA;## Why the need for a POST request?&#xA;&#xA;In the world of HTTP requests, we use the POST method to securely send data from a client (like a user&#39;s browser) to a server. 
This is crucial because the GET method, while convenient for retrieving data, has limitations:&#xA;&#xA;Imagine you are registering for an event via a Google form: you type in your details on the webpage like name, email, address, phone number, and other personal details. If the website/app was using the `GET` method to send the request to register or do any other authentication/privacy-related requests, it could expose the data in the URL itself. It would be something along the lines of [`https://form.google.com/register/&lt;form-name&gt;-&lt;id&gt;/?name=John&amp;phone_number=1234567890`](https://form.google.com/register/%3Cform-name%3E-%3Cid%3E/?name=John&amp;phone_number=1234567890). If a user maliciously sniffs your network and inspects the URL, your data will be exposed. That is the reason we need the `POST` method.&#xA;&#xA;## How a POST method works?&#xA;&#xA;A [POST](https://www.rfc-editor.org/rfc/rfc9110#POST) request is used to send data to a server to create or update (there is a separate method for updating) a resource. The client (browser/other APIs) sends a POST request to the server&#39;s API endpoint with the data in the request body. This data can be in formats like JSON, XML, or form data. The server processes the POST request, validates and parses the data in the request body, makes any changes or creates resources based on that data, and returns a response. The response would contain a status code indicating the success or failure of the operation and may contain the newly created or updated resource in the response body. The client must check the response status code to verify the outcome and process the response accordingly. Unlike GET, POST can create new resources on the server. The body of a POST contains the data for creation while the URL identifies the resource to be created. 
Overall, POST transfers data to the server for processing, creation or updating of resources.&#xA;&#xA;The status code is usually `201` indicating the resource is successfully created or `200` for just indicating success.&#xA;&#xA;Some common steps for creating and sending a POST request as a developer include:&#xA;&#xA;* Defining the API endpoint&#xA;&#xA;* Clarifying the data format (json, language native objects, xml , text, form-data, etc)&#xA;&#xA;* Converting / Marshalling the data&#xA;&#xA;* Attaching header for `Content-Type` as key and value as the format of the data type (e.g. `application/json` for json)&#xA;&#xA;* Sending the request&#xA;&#xA;&#xA;The above steps are general for creating and sending a POST request, they are not specific to Golang. For golang specific steps, we need to dive a bit deeper, let&#39;s get started.&#xA;&#xA;## Basic POST method in Golang&#xA;&#xA;To send a POST request in golang, we need to use the `http` package. The `http` package has the `Post` method, which takes in 3 parameters, namely the URL, the Content-Type, and the Body. The body can be `nil` if the URL endpoint doesn&#39;t necessarily require a body. 
The `Content-Type` is a string; since we are just touching on how the POST request is constructed, we will see what the `Content-Type` value should be in the later sections.&#xA;&#xA;&gt; `http.Post(URL, Content-Type, Body)`&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;net/http&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;apiURL := &#34;https://reqres.in/api/users&#34;&#xA;&#xA;&#x9;// POST request&#xA;&#x9;resp, err := http.Post(apiURL, &#34;&#34;, nil)&#xA;&#x9;// ideally the Content-Type header should be set to the relevant format&#xA;&#x9;// resp, err := http.Post(apiURL, &#34;application/json&#34;, nil)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;&#x9;defer resp.Body.Close()&#xA;&#xA;&#x9;fmt.Println(resp.StatusCode)&#xA;&#x9;fmt.Println(resp)&#xA;}&#xA;```&#xA;&#xA;```bash&#xA;&#xA;$ go run main.go&#xA;&#xA;201&#xA;&amp;{&#xA;    201 Created&#xA;    201&#xA;    HTTP/2.0&#xA;    2&#xA;    0&#xA;    map[&#xA;        Access-Control-Allow-Origin:[*]&#xA;        Cf-Cache-Status:[DYNAMIC]&#xA;        Cf-Ray:[861cd9aec8223e4b-BOM]&#xA;        Content-Length:[50]&#xA;        Content-Type:[application/json; charset=utf-8]&#xA;        Date:[Sat, 09 Mar 2024 17:40:28 GMT]&#xA;        Server:[cloudflare]&#xA;        ...&#xA;        ...&#xA;        ...&#xA;        X-Powered-By:[Express]&#xA;    ]&#xA;    {0xc00017c180}&#xA;    50&#xA;    []&#xA;    false&#xA;    false&#xA;    map[]&#xA;    0xc000156000&#xA;    0xc00012a420&#xA;}&#xA;```&#xA;&#xA;The above code sends the `POST` request to the [`https://reqres.in/api/users`](https://reqres.in/api/users) endpoint with an empty body and no specific `Content-Type` header. The response follows the [Response](https://pkg.go.dev/net/http#Response) structure. 
We can see we got a `201` status, which indicates the server received the POST request successfully. The API is a dummy API, so we don&#39;t care about the data being processed; we are just using the API as a placeholder for sending the POST request.&#xA;&#xA;We can use a `map[string]interface{}` to pass the data in the request body. The `json.Marshal` method is used to convert the map into JSON format. We will look into the details shortly in the next few examples.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;bytes&#34;&#xA;&#x9;&#34;encoding/json&#34;&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;net/http&#34;&#xA;)&#xA;&#xA;func main() {&#xA;    apiURL := &#34;https://reqres.in/api/users&#34;&#xA;    bodyMap := map[string]interface{}{&#xA;        &#34;name&#34;: &#34;morpheus&#34;,&#xA;        &#34;job&#34;: &#34;leader&#34;,&#xA;    }&#xA;&#xA;    requestBody, err := json.Marshal(bodyMap)&#xA;    if err != nil {&#xA;        panic(err)&#xA;    }&#xA;    body := bytes.NewBuffer(requestBody)&#xA;&#xA;    resp, err := http.Post(apiURL, &#34;application/json&#34;, body)&#xA;    if err != nil {&#xA;        panic(err)&#xA;    }&#xA;    defer resp.Body.Close()&#xA;    fmt.Println(resp.StatusCode)&#xA;}&#xA;```&#xA;&#xA;```bash&#xA;$ go run main.go&#xA;&#xA;201&#xA;```&#xA;&#xA;The above code sends the `POST` request to the [`https://reqres.in/api/users`](https://reqres.in/api/users) endpoint with the data in the request body in JSON format.&#xA;&#xA;## Creating a POST request in Golang&#xA;&#xA;We can construct the POST request with the [NewRequest](https://pkg.go.dev/net/http#NewRequest) method. The method takes in 3 parameters: the `method` (e.g. `POST`, `GET`), the `URL`, and the `body` (if there is any). 
We can then add extra information to the headers or the Request object after constructing the basic HTTP [Request](https://pkg.go.dev/net/http#Request) object.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;net/http&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;apiURL := &#34;https://reqres.in/api/users&#34;&#xA;&#xA;&#x9;req, err := http.NewRequest(http.MethodPost, apiURL, nil)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;&#x9;resp, err := http.DefaultClient.Do(req)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;&#x9;defer resp.Body.Close()&#xA;&#x9;fmt.Println(resp.StatusCode)&#xA;&#x9;//fmt.Println(resp)&#xA;}&#xA;```&#xA;&#xA;```bash&#xA;$ go run main.go&#xA;&#xA;201&#xA;```&#xA;&#xA;In the above example, we have created an HTTP Request with the `POST` method, [`https://reqres.in/api/users`](https://reqres.in/api/users) as the URL, and no body. This constructs an HTTP Request object, which can be passed as the parameter to the `http.DefaultClient.Do` method; `DefaultClient` is the same default client used behind the scenes by the `http.Get` and `http.Post` helpers in the earlier examples. We can implement a custom client as well, and then call its `Do` method with the request as the parameter. The `Do` method returns the `Response` object or an `error` if any.&#xA;&#xA;More on customizing the Client will be explained in a separate post in the series.&#xA;&#xA;The response is also in the same format as the [Response](https://pkg.go.dev/net/http#Response) structure that we have seen earlier. 
This section of the series aims to construct a POST request, not to parse the response; we have already covered parsing the response in the [Get method](https://www.meetgor.com/golang-web-get-method/#?:~:text=Parsing%20the%20JSON%20body%20with%20structs) section of the series.&#xA;&#xA;### Parsing objects to JSON for POST method request&#xA;&#xA;We might have a Golang object that we want to send as the body of a POST request; for that, we need to convert the Golang struct object to JSON. We can do this by using the [Marshal](https://pkg.go.dev/encoding/json#Marshal) or the [Encode](https://pkg.go.dev/encoding/json#Encoder.Encode) method for serialization of the Golang struct object to JSON.&#xA;&#xA;#### Using Marshal method&#xA;&#xA;Marshaling is the process of converting data from a data structure into a format suitable for transmission over a network or for storage. It&#39;s commonly used to convert native objects in a programming language into a serialized format, typically a byte stream, that can be transmitted or stored efficiently. You might have a question here: what is the difference between `Marshalling` and `Serialization`? Serialization is a broader term that encompasses marshalling. It refers to the process of converting an object or data structure into a format that can be stored or transmitted and later reconstructed into the original object. Serialization may involve converting data into byte streams, XML, JSON, or other formats. So, in summary, marshaling specifically deals with converting native objects into a format suitable for transmission, while serialization encompasses the broader process of preparing data for storage or transmission.&#xA;&#xA;The `json` package has the [Marshal](https://pkg.go.dev/encoding/json#Marshal) method that converts the Golang object into JSON. 
The `Marshal` method takes in the struct object (of type `any`) as a parameter and returns a byte slice `[]byte` and an error (if any).&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;bytes&#34;&#xA;&#x9;&#34;encoding/json&#34;&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;net/http&#34;&#xA;)&#xA;&#xA;type User struct {&#xA;&#x9;Name   string `json:&#34;name&#34;`&#xA;&#x9;Salary int    `json:&#34;salary&#34;`&#xA;&#x9;Age    int    `json:&#34;age&#34;`&#xA;}&#xA;&#xA;func main() {&#xA;&#x9;user := User{&#xA;&#x9;&#x9;Name:   &#34;Alice&#34;,&#xA;&#x9;&#x9;Salary: 50000,&#xA;&#x9;&#x9;Age:    25,&#xA;&#x9;}&#xA;&#x9;apiURL := &#34;https://dummy.restapiexample.com/api/v1/create&#34;&#xA;&#xA;&#x9;// marshalling process&#xA;&#x9;// converting Go specific data structure/types to JSON&#xA;&#x9;bodyBytes, err := json.Marshal(user)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;&#x9;fmt.Println(string(bodyBytes))&#xA;&#xA;&#x9;// reading json into a buffer/in-memory&#xA;&#x9;body := bytes.NewBuffer(bodyBytes)&#xA;&#xA;&#x9;// post request&#xA;&#x9;resp, err := http.Post(apiURL, &#34;application/json&#34;, body)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;&#x9;defer resp.Body.Close()&#xA;&#x9;fmt.Println(resp.StatusCode)&#xA;}&#xA;```&#xA;&#xA;```bash&#xA;$ go run main.go&#xA;&#xA;{&#34;name&#34;:&#34;Alice&#34;,&#34;salary&#34;:50000,&#34;age&#34;:25}&#xA;200&#xA;```&#xA;&#xA;In the above example, we have created a struct `User` with fields `Name`, `Salary`, and `Age`; the json tags label each key in the JSON output with the tag for the respective struct field. We create an object `user` of type `User` with the values `Alice`, `50000`, and `25` respectively.&#xA;&#xA;We call the `json.Marshal` method with the parameter `user`, the `User` struct object; the method returns a slice of bytes and an error, which is nil on success. 
If we look at the stringified representation of the byte slice, we see something like `{&#34;name&#34;:&#34;Alice&#34;,&#34;salary&#34;:50000,&#34;age&#34;:25}`, which is the JSON string for the user struct. We can&#39;t pass the byte slice directly as the body of the POST request; the body must be an `io.Reader` object, so we load the byte slice `bodyBytes` into a buffer and pass that as the body of the POST request.&#xA;&#xA;We then send a `POST` request to the endpoint [`https://dummy.restapiexample.com/api/v1/create`](https://dummy.restapiexample.com/api/v1/create) with the content type as `application/json` and with the body as `body`, an `io.Reader` object backed by an in-memory buffer.&#xA;&#xA;In brief, we can summarize the marshaling of the Golang object into JSON with the `Marshal` function as the following steps:&#xA;&#xA;* Defining the structure as per the request body&#xA;&#xA;* Creating the struct object holding the data to send as the request body&#xA;&#xA;* Calling the `json.Marshal` function to convert the object to JSON (the parameter is the struct object as an `any` type)&#xA;&#xA;* Loading the byte slice into a buffer with `bytes.NewBuffer()`&#xA;&#xA;* Sending the POST request to the endpoint with the body as the `io.Reader` object and content type as `application/json`&#xA;&#xA;&#xA;#### Using Encode method&#xA;&#xA;We can also use the [Encoder.Encode](https://pkg.go.dev/encoding/json#Encoder.Encode) method to serialize the Golang struct object to JSON. Firstly, we should have the struct defined as per the request body that the particular API takes; we can make use of the json tags and the `omitempty` and omit (`-`) options to make the marshaling process work accordingly. 
We can then create the object of that particular struct with the data for the resource we want to create with the POST request on that API service.&#xA;&#xA;Thereafter we can create an empty buffer object with [bytes.Buffer](https://pkg.go.dev/bytes#Buffer); this buffer object is used to initialize the [Encoder](https://pkg.go.dev/encoding/json#Encoder) object with the [NewEncoder](https://pkg.go.dev/encoding/json#NewEncoder) method. This gives access to the [Encode](https://pkg.go.dev/encoding/json#Encoder.Encode) method, which takes in the struct object (`any` type) and populates the buffer we initialized the encoder with.&#xA;&#xA;Later we can pass that buffer to the Post request as the body. Let&#39;s understand it better with an example.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;bytes&#34;&#xA;&#x9;&#34;encoding/json&#34;&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;net/http&#34;&#xA;)&#xA;&#xA;type User struct {&#xA;&#x9;Name   string&#xA;&#x9;Salary int&#xA;&#x9;Age    int&#xA;}&#xA;&#xA;func main() {&#xA;&#x9;user := User{&#xA;&#x9;&#x9;Name:   &#34;Alice&#34;,&#xA;&#x9;&#x9;Salary: 50000,&#xA;&#x9;&#x9;Age:    25,&#xA;&#x9;}&#xA;&#x9;apiURL := &#34;https://dummy.restapiexample.com/api/v1/create&#34;&#xA;&#xA;&#x9;var bodyBuffer bytes.Buffer&#xA;&#x9;var encoder = json.NewEncoder(&amp;bodyBuffer)&#xA;&#x9;err := encoder.Encode(user)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;&#xA;&#x9;resp, err := http.Post(apiURL, &#34;application/json&#34;, &amp;bodyBuffer)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;&#x9;defer resp.Body.Close()&#xA;&#x9;fmt.Println(resp.StatusCode)&#xA;&#x9;fmt.Println(resp)&#xA;}&#xA;```&#xA;&#xA;Over here, we have created a struct `User` with fields `Name`, `Salary`, and `Age`, and we initialize `user` as an object of the `User` struct. 
Then we create a buffer `bodyBuffer` of type `bytes.Buffer`; this is the actual buffer that we will send as the body. Further, we initialize the `Encoder` object as `encoder` with the `json.NewEncoder` method by passing the reference of `bodyBuffer` as the parameter. Since `bytes.Buffer` implements the `io.Writer` interface, we can pass `bodyBuffer` to the `NewEncoder` method. This creates the `Encoder` object, which in turn gives us access to the `Encode` method, where we pass the struct instance so it populates the buffer we initialized the `Encoder` object with earlier.&#xA;&#xA;Now that we have the `encoder` object, we call the `Encode` method with `user`, a `User` struct instance, as the parameter. The `Encode` method will populate the `bodyBuffer` object, or it will return an error if anything goes wrong (the data is incorrectly structured or not in the required format).&#xA;&#xA;We can then call the `Post` method with the initialized URL, the `Content-Type` as `application/json` since we have converted the struct instance to a JSON object, and the body as the reference to the buffer, `&amp;bodyBuffer`.&#xA;&#xA;So, the steps for serializing struct instances into JSON objects with the `Encoder.Encode` method are as follows:&#xA;&#xA;* Defining the structure as per the request body&#xA;&#xA;* Creating the struct object holding the data to send as the request body&#xA;&#xA;* Creating an empty `bytes.Buffer` object as an in-memory buffer&#xA;&#xA;* Initializing the `Encoder` object with the `NewEncoder` method by passing the reference of `bodyBuffer` as the parameter&#xA;&#xA;* Calling the `Encode` method with the struct instance/object as the parameter&#xA;&#xA;* Sending the POST request to the endpoint with the content type as `application/json` and the body as the reference to the buffer&#xA;&#xA;The results are the same as in the above example; only the way we have serialized the struct instance to a JSON object is different.&#xA;&#xA;### Parsing JSON to POST request&#xA;&#xA;We have seen how we can serialize Golang struct instances to JSON and then send the POST request, but what if we already had the JSON string with us and wanted to send the request? Well, that&#39;s much easier, right? We have already passed a JSON string to the POST request by loading a slice of bytes into a buffer, so we just need to convert the string to a slice of bytes, which is quite an easy task, and then load that byte slice into the buffer.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;bytes&#34;&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;net/http&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;// dummy api&#xA;&#x9;apiURL := &#34;https://dummy.restapiexample.com/api/v1/create&#34;&#xA;&#xA;&#x9;// json data&#xA;&#x9;data := `{&#xA;        &#34;name&#34;: &#34;Alice&#34;,&#xA;        &#34;job&#34;: &#34;Teacher&#34;&#xA;    }`&#xA;&#x9;body := bytes.NewBuffer([]byte(data))&#xA;&#xA;&#x9;// POST request&#xA;&#x9;resp, err := http.Post(apiURL, &#34;application/json&#34;, body)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;&#x9;defer resp.Body.Close()&#xA;&#x9;fmt.Println(resp.StatusCode)&#xA;&#x9;fmt.Println(resp)&#xA;}&#xA;```&#xA;&#xA;In the example above, we already have a JSON string `data` with the keys `name` and `job`, but it is not a JSON object; it is stringified JSON. We can convert the stringified JSON to a slice of bytes using the `[]byte` conversion. Further, we use the `bytes.NewBuffer` method to load the byte slice into an `io.Reader` object. 
This object returned by `bytes.NewBuffer` will serve as the body for the POST request.&#xA;&#xA;### Parsing JSON to objects in Golang from POST method response&#xA;&#xA;The example below combines both directions: it marshals a struct into the request body and then unmarshals the JSON response back into Golang structs.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;bytes&#34;&#xA;&#x9;&#34;encoding/json&#34;&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;io&#34;&#xA;&#x9;&#34;net/http&#34;&#xA;)&#xA;&#xA;type User struct {&#xA;&#x9;Name   string `json:&#34;name&#34;`&#xA;&#x9;Salary int    `json:&#34;salary&#34;`&#xA;&#x9;Age    string `json:&#34;age&#34;`&#xA;&#x9;ID     int    `json:&#34;id,omitempty&#34;`&#xA;}&#xA;&#xA;type UserResponse struct {&#xA;&#x9;Status string `json:&#34;status&#34;`&#xA;&#x9;Data   User   `json:&#34;data&#34;`&#xA;}&#xA;&#xA;func main() {&#xA;&#x9;user := User{&#xA;&#x9;&#x9;Name:   &#34;Alice&#34;,&#xA;&#x9;&#x9;Salary: 50000,&#xA;&#x9;&#x9;Age:    &#34;25&#34;,&#xA;&#x9;}&#xA;&#x9;apiURL := &#34;https://dummy.restapiexample.com/api/v1/create&#34;&#xA;&#xA;&#x9;// marshalling process&#xA;&#x9;// converting Go specific data structure/types to JSON&#xA;&#x9;bodyBytes, err := json.Marshal(user)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;&#x9;fmt.Println(string(bodyBytes))&#xA;&#xA;&#x9;// reading json into a buffer/in-memory&#xA;&#x9;body := bytes.NewBuffer(bodyBytes)&#xA;&#xA;&#x9;// post request&#xA;&#x9;resp, err := http.Post(apiURL, &#34;application/json&#34;, body)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;&#x9;fmt.Println(resp.StatusCode)&#xA;&#x9;fmt.Println(resp)&#xA;&#x9;defer resp.Body.Close()&#xA;&#xA;&#x9;// Read response body&#xA;&#x9;respBody, err := io.ReadAll(resp.Body)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;&#xA;&#x9;// unmarshalling process&#xA;&#x9;// converting JSON to Go specific data structure/types&#xA;&#x9;var userResponse UserResponse&#xA;&#x9;if err := json.Unmarshal(respBody, &amp;userResponse); err != nil 
{&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;&#x9;fmt.Println(userResponse)&#xA;&#x9;fmt.Println(userResponse.Data)&#xA;}&#xA;```&#xA;&#xA;```bash&#xA;&#xA;{success {Alice 50000 25 3239}}&#xA;{Alice 50000 25 577}&#xA;```&#xA;&#xA;The above example is a POST request with a struct instance marshaled into a JSON string and sent as a buffer to the API endpoint; it also reads the response body into a specific structure `UserResponse`, unmarshalling `respBody` (read from the `io.Reader` `resp.Body`) and loading it into the `userResponse` object. This example walks through the entire process we have covered for JSON data handling in a POST request.&#xA;&#xA;### Sending Form data in a POST request&#xA;&#xA;We can also send data in a POST request as form data, the kind of form we use in HTML. Golang has a `net/url` package to construct the form data. The form data is sent in the `application/x-www-form-urlencoded` format.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;encoding/json&#34;&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;io&#34;&#xA;&#x9;&#34;net/http&#34;&#xA;&#x9;&#34;net/url&#34;&#xA;&#x9;&#34;strings&#34;&#xA;)&#xA;&#xA;type ResponseLogin struct {&#xA;&#x9;Token string `json:&#34;token&#34;`&#xA;}&#xA;&#xA;func main() {&#xA;&#x9;// dummy api&#xA;&#x9;apiURL := &#34;https://reqres.in/api/login&#34;&#xA;&#xA;&#x9;// Define form data&#xA;&#x9;formData := url.Values{}&#xA;&#x9;formData.Set(&#34;email&#34;, &#34;eve.holt@reqres.in&#34;)&#xA;&#x9;formData.Set(&#34;password&#34;, &#34;cityslicka&#34;)&#xA;&#xA;&#x9;// Encode the form data&#xA;&#x9;fmt.Println(formData.Encode())&#xA;&#x9;reqBody := strings.NewReader(formData.Encode())&#xA;&#x9;fmt.Println(reqBody)&#xA;&#xA;&#x9;// Make a POST request with form data&#xA;&#x9;resp, err := http.Post(apiURL, &#34;application/x-www-form-urlencoded&#34;, reqBody)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;&#x9;defer resp.Body.Close()&#xA;&#xA;&#x9;// Print response status code&#xA;&#x9;fmt.Println(&#34;Status Code:&#34;, resp.StatusCode)&#xA;&#xA;&#x9;// Read response body&#xA;&#x9;respBody, err := io.ReadAll(resp.Body)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;&#x9;token := ResponseLogin{}&#xA;&#xA;&#x9;if err := json.Unmarshal(respBody, &amp;token); err != nil {&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;&#x9;fmt.Println(token)&#xA;}&#xA;```&#xA;&#xA;```bash&#xA;$ go run main.go&#xA;&#xA;email=eve.holt%40reqres.in&amp;password=cityslicka&#xA;&amp;{email=eve.holt%40reqres.in&amp;password=cityslicka 0 -1}&#xA;Status Code: 200&#xA;{QpwL5tke4Pnpja7X4}&#xA;```&#xA;&#xA;In the above example, we set up `formData` with the values of `email` and `password` as a `url.Values` object. The `url.Values` object is used to store the key-value pairs of the form data. The `formData` is encoded with its `Encode` method, and we load the encoded string into `strings.NewReader`, which implements the `io.Reader` interface, so that we can pass that object as the body of the POST request.&#xA;&#xA;We send the `POST` request to the endpoint [`https://reqres.in/api/login`](https://reqres.in/api/login) with the content type as `application/x-www-form-urlencoded` and with the body as `reqBody`, which implements the `io.Reader` interface as an in-memory reader. The response from the request is read with the `io.ReadAll` method, and we can `Unmarshal` the resulting bytes into the `ResponseLogin` struct object.&#xA;&#xA;The output shows `formData` as the encoded string `email=eve.holt%40reqres.in&amp;password=cityslicka` (`@` is encoded to `%40`); then we wrap `formData` in a `strings.NewReader` object, which implements the `io.Reader` interface, hence we see the reader printed as an object. 
The status code for the request is `200`, indicating the server received the `form-data` in the body, and upon unmarshalling, we get the token as a response to the POST request, which was a dummy login API.&#xA;&#xA;This way we have passed the form data in the body of a POST request.&#xA;&#xA;### Sending File in a POST request&#xA;&#xA;We have covered sending text, JSON, and form data, and now we move on to sending files in a POST request. We can use the `multipart` package to load files into the request body and set appropriate headers so that the API service can read the file.&#xA;&#xA;We first open the file with `os.Open`, which returns a reference to the `file` object or an error. We create an empty `bytes.Buffer` object as `body`, which will be populated later. The [multipart.NewWriter](https://pkg.go.dev/mime/multipart#NewWriter) method takes in an `io.Writer` object, which will be `body`, since a `bytes.Buffer` implements the `io.Writer` interface. This initializes the [Writer](https://pkg.go.dev/mime/multipart#Writer) object from the `multipart` package.&#xA;&#xA;We create a form field on the `Writer` object with the [CreateFormFile](https://pkg.go.dev/mime/multipart#Writer.CreateFormFile) method, which takes in the `fieldName` as the name of the field and the `fileName` as the name of the file, which will be read later from the multipart form. The method returns either the part or an error. The `part` is an object that implements the `io.Writer` interface.&#xA;&#xA;Since we have the file contents available through the `file` object, we copy them into the form field with the [Copy](https://pkg.go.dev/io#Copy) method. Since the `part` returned from `CreateFormFile` implements the `io.Writer` interface, we can use it as the destination when copying the contents from source to destination. 
The destination for the `Copy` method is the first parameter (an `io.Writer`), and the source is the second parameter (an `io.Reader`).&#xA;&#xA;This `Copy` call populates the buffer we initialized earlier with the `NewWriter` method, giving us a buffer that has the file contents in it. We can pass this buffer to the POST request as the `body` parameter. We also need to make sure we close the `Writer` object after copying the contents of the file. We can also extract the content type of the multipart form, which will serve as the `Content-Type` of the request.&#xA;&#xA;Let&#39;s make the explanation concrete with an example.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;bytes&#34;&#xA;&#x9;&#34;encoding/json&#34;&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;io&#34;&#xA;&#x9;&#34;mime/multipart&#34;&#xA;&#x9;&#34;net/http&#34;&#xA;&#x9;&#34;os&#34;&#xA;)&#xA;&#xA;type ResponseFile struct {&#xA;&#x9;Files map[string]string `json:&#34;files&#34;`&#xA;}&#xA;&#xA;func main() {&#xA;&#x9;apiURL := &#34;http://postman-echo.com/post&#34;&#xA;&#x9;fileName := &#34;sample.csv&#34;&#xA;&#xA;&#x9;file, err := os.Open(fileName)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;&#x9;defer file.Close()&#xA;&#xA;&#x9;body := &amp;bytes.Buffer{}&#xA;&#x9;writer := multipart.NewWriter(body)&#xA;&#xA;&#x9;part, err := writer.CreateFormFile(&#34;csvFile&#34;, fileName)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;&#x9;_, err = io.Copy(part, file)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;&#xA;&#x9;contentType := writer.FormDataContentType()&#xA;&#x9;fmt.Println(contentType)&#xA;&#xA;&#x9;writer.Close()&#xA;&#x9;fmt.Println(body.String())&#xA;&#xA;&#x9;resp, err := http.Post(apiURL, contentType, body)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;&#x9;defer resp.Body.Close()&#xA;&#xA;&#x9;fmt.Println(&#34;Status Code:&#34;, resp.StatusCode)&#xA;&#xA;&#x9;respBody, err := io.ReadAll(resp.Body)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;&#x9;token := ResponseFile{}&#xA;&#x9;if err := json.Unmarshal(respBody, &amp;token); err != nil {&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;&#x9;fmt.Println(token)&#xA;&#x9;fmt.Println(token.Files[fileName])&#xA;}&#xA;```&#xA;&#xA;```bash&#xA;multipart/form-data; boundary=7e0eacfff890be395eba19c70415c908124b503a56f23ebeec0ab3c665ca&#xA;&#xA;&#xA;--7e0eacfff890be395eba19c70415c908124b503a56f23ebeec0ab3c665ca&#xA;Content-Disposition: form-data; name=&#34;csvFile&#34;; filename=&#34;sample.csv&#34;&#xA;Content-Type: application/octet-stream&#xA;&#xA;User,City,Age,Country&#xA;Alex Smith,Los Angeles,20,USA&#xA;John Doe,New York,30,USA&#xA;Jane Smith,Paris,25,France&#xA;Bob Johnson,London,40,UK&#xA;&#xA;--7e0eacfff890be395eba19c70415c908124b503a56f23ebeec0ab3c665ca--&#xA;&#xA;&#xA;&#xA;Status Code: 200&#xA;&#xA;{map[sample.csv:data:application/octet-stream;base64,VXNlcixDaXR5LEFnZSxDb3VudHJ5CkFsZXggU21pdGgsTG9zIEFuZ2VsZXMsMjAsVVNBCkpvaG4gRG9lLE5ldyBZb3JrLDMwLFVTQQpKYW5lIFNtaXRoLFBhmlzLDI1LEZyYW5jZQpCb2IgSm9obnNvbixMb25kb24sNDAsVUsK]}&#xA;&#xA;data:application/octet-stream;base64,VXNlcixDaXR5LEFnZSxDb3VudHJ5CkFsZXggU21pdGgsTG9zIEFuZ2VsZXMsMjAsVVNBCkpvaG4gRG9lLE5ldyBZb3JrLDMwLFVTQQpKYW5lIFNtaXRoLFBhmlzLDI1LEZyYW5jZQpCb2IgSm9obnNvbixMb25kb24sNDAsVUsK&#xA;```&#xA;&#xA;In the above example, we first open the file `sample.csv` into the `file` object with the `os.Open` method; this returns a reference to the file object or an error if anything goes wrong while opening the file.&#xA;&#xA;Then we create an empty `bytes.Buffer` object, which will serve as the body of the POST request later, as it will get populated with the file contents in the form of `multipart/form-data`.&#xA;&#xA;We initialize the `Writer` object with the `multipart.NewWriter` method, which takes in the empty buffer as the parameter; we pass `body` as the parameter. 
The method returns a reference to the `multipart.Writer` object.&#xA;&#xA;With the `Writer` object we access the `CreateFormFile` method, which takes in the `fieldName` as the name of the field and the `fileName` as the name of the file. The method returns either the part or an error. The `part`, in this case, is the reference to the `io.Writer` object that will be used to write the contents of the uploaded file.&#xA;&#xA;Then, we can use the `io.Copy` method to copy the contents from the `io.Reader` object to the `io.Writer` object. The first parameter is the destination and the second parameter is the source. In the example, we call `io.Copy(part, file)`, which copies the contents of `file` into the `part` writer.&#xA;&#xA;We get the `Content-Type` by calling the [Writer.FormDataContentType](https://pkg.go.dev/mime/multipart#Writer.FormDataContentType) method. This returns `multipart/form-data; boundary=7e0eacfff890be395eba19c70415c908124b503a56f23ebeec0ab3c665ca`, which will serve as the `Content-Type` for the POST request.&#xA;&#xA;We need to make sure we close the `Writer` object with the `Close` method.&#xA;&#xA;We print `body.String()` to get a look at what the actual body looks like; we can see the file wrapped as `form-data` with keys like `Content-Type`, `Content-Disposition`, etc. The file has the `Content-Type` as `application/octet-stream`, and the actual content is rendered in the output.&#xA;&#xA;The dummy API responds with a 200 status code and also sends JSON data with the name of the file as the key and the `base64`-encoded value of the file contents as the value. This indicates that we were able to upload the file to the server API using a POST request. 
Well done!&#xA;&#xA;I have also included some more examples of POST requests with files [here](https://github.com/Mr-Destructive/100-days-of-golang/blob/main/web/methods/post/file_2.go), which extend the above example by taking the encoded values and decoding them to get the actual contents of the file back.&#xA;&#xA;## Best Practices for POST method&#xA;&#xA;Here are some of the best practices for the POST method, followed to make sure you consume or create POST requests in a secure, efficient, and graceful way.&#xA;&#xA;### Always Close the Response Body&#xA;&#xA;Ensure that you close the response body after reading from it. Use `defer response.Body.Close()` to automatically close the body when the surrounding function returns. This is crucial for releasing associated resources like network connections or file descriptors. Failure to close the response body can lead to resource leaks, particularly with a large volume of requests. Properly closing the body prevents resource exhaustion and maintains efficient memory usage.&#xA;&#xA;### Client Customization&#xA;&#xA;Utilize the [Client](https://pkg.go.dev/net/http#Client) struct to customize the HTTP client behavior. By using a custom client, you can set timeouts, headers, user agents, and other configurations without modifying the `DefaultClient` provided by the `http` package. This approach allows for flexibility and avoids repetitive adjustments to the client configuration for each request.&#xA;&#xA;### Set Content-Type Appropriately&#xA;&#xA;Ensure that you set the `Content-Type` header according to the request payload. Correctly specifying the Content-Type is crucial for the server to interpret the request payload correctly. Failing to set the Content-Type header accurately may result in the server rejecting the request. 
Always verify and match the Content-Type header with the content being sent in the POST request to ensure smooth communication with the server.&#xA;&#xA;## Reference&#xA;&#xA;* [Postman POST API](https://www.postman.com/postman/workspace/postman-answers/documentation/13455110-00378d5c-5b08-4813-98da-bc47a2e6021d) (For POST request with file upload)&#xA;&#xA;* [Golang net/http Package](https://pkg.go.dev/net/http)&#xA;&#xA;&#xA;## Conclusion&#xA;&#xA;That&#39;s it from this post of the series, a post on the POST method in golang :)&#xA;&#xA;We have covered topics like creating basic post requests, Marshalling golang types into JSON format, parsing form data, sending a POST request with files, and best practices for the POST method. Hope you found this article helpful. If you have any queries, questions, or feedback, please let me know in the comments or on my social handles.&#xA;&#xA;Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>NGINX Basics and Setup</title>
      <link>https://www.meetgor.com/posts/nginx-01-basics</link>
      <description>Introduction NGINX is a tool that can be used as a web server, reverse proxy, load balancer, streaming media files, application gateway, content caching, and so</description>
      <pubDate>Sun, 14 Jan 2024 00:00:00 UTC</pubDate>
<content>## Introduction&#xA;&#xA;NGINX is a tool that can be used as a web server, reverse proxy, load balancer, streaming media files, application gateway, content caching, and so much more. It can be said to be a Swiss army knife for optimizing and securing your web application deployment.&#xA;&#xA;The series &#34;NGINX Survival Guide&#34; will start from the basics and cover the bare minimum required for a backend developer to get going with NGINX. I will use Docker widely throughout this series as it is a great combination with NGINX to serve web applications. However, you can use NGINX without Docker, and spawn multiple servers.&#xA;&#xA;The series will cover the terminologies of NGINX, configuring NGINX servers, load balancing multiple servers, using it as a reverse proxy, and as an API gateway. There will be tiny details and some tidbits about doing certain things in certain constrained environments which will make the learning more valuable.&#xA;&#xA;## What is NGINX&#xA;&#xA;NGINX (pronounced &#34;engine-x&#34;) is not just a web server, it is a powerful and versatile open-source software that wears many hats in the internet world. At its core, it functions as a **lightning-fast web server**; its secret weapon lies in its **event-driven architecture**, where it handles requests asynchronously, allowing it to serve countless users simultaneously without breaking a sweat.&#xA;&#xA;&gt; NGINX is a popular choice for powering some of the **biggest websites and platforms in the world**, demonstrating its reliability and scalability.&#xA;&#xA;NGINX&#39;s **configurable nature** lets you tailor its behavior to your specific needs, whether managing traffic flow with load balancing, caching frequently accessed content for faster delivery, or even acting as a gateway for your APIs.&#xA;&#xA;This versatility makes NGINX a **powerful tool for building efficient, secure, and scalable web applications**, regardless of size or complexity. 
Hence the need to learn it as a developer, and it is especially important for a backend developer.&#xA;&#xA;### Why NGINX is a must-learn for backend developers&#xA;&#xA;Nginx is a highly efficient and performant web server. Understanding its configuration and management allows a backend developer to optimize server performance, handle large volumes of traffic, and reduce latency.&#xA;&#xA;In microservices architectures, Nginx can serve as an API gateway, managing and routing requests between different services. Nginx provides caching mechanisms that enhance performance by serving cached content, reducing the load on backend servers.&#xA;&#xA;Having strong fundamentals in NGINX can indeed provide a competitive edge in the job market and make a backend developer more versatile in handling various aspects of backend web development.&#xA;&#xA;### Who is using NGINX?&#xA;&#xA;Big tech companies like Dropbox, Netflix, and Cloudflare, among others, are using NGINX. Cloudflare used NGINX before, but it was not enough for them, so they developed their own web server/edge proxy suited to their needs, called Pingora.&#xA;&#xA;* Dropbox - [Optimizing web servers for high throughput and low latency](https://dropbox.tech/infrastructure/optimizing-web-servers-for-high-throughput-and-low-latency)&#xA;&#xA;* Cloudflare - [How Cloudflare outgrew NGINX and made way for Pingora](https://blog.cloudflare.com/how-we-built-pingora-the-proxy-that-connects-cloudflare-to-the-internet/)&#xA;&#xA;* Netflix - [NGINX Netflix archives](https://www.nginx.com/blog/tag/netflix/)&#xA;&#xA;&#xA;## Installing NGINX&#xA;&#xA;### Linux&#xA;&#xA;There are comprehensive guides for your specific flavor/package manager/preferences in the [official documentation](https://docs.nginx.com/nginx/admin-guide/installing-nginx/installing-nginx-open-source/) of NGINX.&#xA;&#xA;Instructions for a couple of common installation mediums are as follows:&#xA;&#xA;```bash&#xA;# APT&#xA;sudo apt update&#xA;sudo apt install 
nginx&#xA;&#xA;# YUM&#xA;sudo yum install epel-release&#xA;sudo yum update&#xA;sudo yum install nginx&#xA;```&#xA;&#xA;Check the status of the NGINX service to ensure the installation was successful with the command:&#xA;&#xA;```bash&#xA;sudo systemctl status nginx&#xA;```&#xA;&#xA;If this doesn&#39;t show any errors or fatal messages, the nginx server is up and running on port 80, i.e. on `127.0.0.1`, on the system.&#xA;&#xA;### MacOS&#xA;&#xA;The installation on MacOS for NGINX is pretty simple with homebrew. The following [article](https://dev.to/arjavdave/installing-nginx-on-mac-46ac) walks through the steps of the installation:&#xA;&#xA;```bash&#xA;brew update&#xA;brew install nginx&#xA;nginx&#xA;```&#xA;&#xA;If you do not want to install it from homebrew, this [gist](https://gist.github.com/beatfactor/a093e872824f770a2a0174345cacf171) can help install it from the source.&#xA;&#xA;### Windows&#xA;&#xA;For Windows installation, you can follow the [guide](https://nginx.org/en/docs/windows.html) from the official documentation.&#xA;&#xA;```bash&#xA;# DOWNLOAD the zip from https://nginx.org/en/download.html&#xA;# A Zip file with the name nginx-version.zip will be downloaded&#xA;# COPY it to the desired location and use that path while unzipping&#xA;cd c:\&#xA;unzip nginx-1.25.3.zip&#xA;cd nginx-1.25.3&#xA;start nginx&#xA;```&#xA;&#xA;You can check whether the NGINX installation was successful with the command:&#xA;&#xA;```bash&#xA;tasklist /fi &#34;imagename eq nginx.exe&#34;&#xA;```&#xA;&#xA;## Understanding the default config&#xA;&#xA;When you have completed the installation of nginx, you can see the default nginx configuration in the file path as `/etc/nginx/nginx.conf` in Linux/macOS or `C:\nginx` on Windows.</content>
      <type></type>
    </item>
    <item>
      <title>Neovim + Sourcegraph Cody Plugin Integration</title>
      <link>https://www.meetgor.com/posts/neovim-sourcegraph-cody</link>
      <description>Introduction Have you ever used Sourcegraph&#39;s Cody? It is a great tool for developers, it is not just another LLM, it is tailored specifically for developers. C</description>
      <pubDate>Sat, 06 Jan 2024 00:00:00 UTC</pubDate>
<content>## Introduction&#xA;&#xA;Have you ever used Sourcegraph&#39;s Cody? It is a great tool for developers, it is not just another LLM, it is tailored specifically for developers. Cody has some good features that allow parsing of context for the prompt in a smarter way.&#xA;&#xA;### What is Sourcegraph&#39;s Cody&#xA;&#xA;Cody is an AI assistant for developers that understands code context and can generate code. It goes beyond just answering questions - it can write code for you.&#xA;&#xA;The major features that set Cody apart from the rest of the LLMs or chatbots are:&#xA;&#xA;* Cody understands your code context - it reads your open files, repositories, etc. So it can answer questions specifically about your codebase, not just general programming questions.&#xA;&#xA;* Cody can explain sections of code to you in plain English. This helps ramp up on unfamiliar code bases.&#xA;&#xA;* Cody integrates into popular editors like VS Code, IntelliJ, Neovim, and others for frictionless use while coding.&#xA;&#xA;For more insights, check out the blog [all you need is Cody](https://about.sourcegraph.com/blog/all-you-need-is-cody). This is a great article about what and how Cody is tailored specifically to assist developers.&#xA;&#xA;## Prerequisites&#xA;&#xA;To set up Sourcegraph on neovim, you will require the following:&#xA;&#xA;* Neovim 0.9 or above&#xA;&#xA;* Node.js &gt;= 18.17 (LTS)&#xA;&#xA;* Cargo (Rust) (optional)&#xA;&#xA;To install the latest/nightly neovim release, you can follow the [INSTALL](https://github.com/neovim/neovim/blob/master/INSTALL.md) or [BUILD](https://github.com/neovim/neovim/blob/master/BUILD.md) documentation of the neovim project.&#xA;&#xA;Cargo is optional, as the plugin will install the binaries itself; however, if you prefer to have cargo, just install it in case something goes wrong.&#xA;&#xA;## Installing sg.nvim&#xA;&#xA;There is a specific plugin for neovim for interacting with the Sourcegraph products, and Cody is one of them. 
The [sg.nvim](https://github.com/sourcegraph/sg.nvim) is a plugin for integrating sourcegraph search, Cody, and other features provided by sourcegraph.&#xA;&#xA;#### Using packer.nvim&#xA;&#xA;```lua&#xA;use { &#39;sourcegraph/sg.nvim&#39;, run = &#39;nvim -l build/init.lua&#39; }&#xA;use { &#39;nvim-lua/plenary.nvim&#39; }&#xA;```&#xA;&#xA;Source your lua file where you have configured all the plugins and then enter the command `:PackerInstall` or `:PackerSync` to install the plugin.&#xA;&#xA;#### Using vim-plug&#xA;&#xA;If you are using vim-plug as the plugin manager, you can add the plugin to the configuration as below:&#xA;&#xA;```plaintext&#xA;Plug &#39;sourcegraph/sg.nvim&#39;, { &#39;do&#39;: &#39;nvim -l build/init.lua&#39; }&#xA;&#xA;&#34; Required for various utilities&#xA;Plug &#39;nvim-lua/plenary.nvim&#39;&#xA;&#xA;&#34; Required if you want to use some of the search functionality&#xA;Plug &#39;nvim-telescope/telescope.nvim&#39;&#xA;```&#xA;&#xA;You can source the file and run the command `:PlugInstall` to install the plugin.&#xA;&#xA;#### Using Lazy.nvim&#xA;&#xA;If you are using Lazy.nvim as the plugin manager, you can add the plugin to the configuration as below:&#xA;&#xA;```lua&#xA;return {&#xA;  {&#xA;    &#34;sourcegraph/sg.nvim&#34;,&#xA;    dependencies = { &#34;nvim-lua/plenary.nvim&#34;, &#34;nvim-telescope/telescope.nvim&#34; },&#xA;&#xA;    -- If you have a recent version of lazy.nvim, you don&#39;t need to add this!&#xA;    build = &#34;nvim -l build/init.lua&#34;,&#xA;  },&#xA;}&#xA;```&#xA;&#xA;You can source the file and run the command `:Lazy install` to install the plugin.&#xA;&#xA;For other installation instructions, you can refer to the [README](https://github.com/sourcegraph/sg.nvim?tab=readme-ov-file#install) of sg.nvim.&#xA;&#xA;### Installing Binaries and Building the Plugin&#xA;&#xA;After the plugin is installed, you can move into the building and setup process of the sourcegraph Cody plugin.&#xA;&#xA;To install the binaries 
which are the dependencies of the plugin, you can run the command `:SourcegraphDownloadBinaries` which will force downloading the binaries, making sure that the plugin is properly installed.&#xA;&#xA;![SourcegraphDownloadBinaries Command Output](https://meetgor-cdn.pages.dev/sg-nvim-build.png)&#xA;&#xA;To build the plugin, you can simply run the command from within neovim as `:SourcegraphBuild`; this will rebuild the Sourcegraph rust crates and their dependencies (which might have failed during installation).&#xA;&#xA;### Sourcegraph Authentication&#xA;&#xA;You now need to authenticate to your Sourcegraph account to use the sourcegraph features such as search and Cody.&#xA;&#xA;You can do that by running the command `:SourcegraphLogin` in neovim. This will require two inputs: the sourcegraph endpoint and the access token. If you are using sourcegraph cloud and not a self-hosted sourcegraph, you do not need to change the endpoint, just press enter and move ahead. This will redirect you to the browser for authentication and creating an access token. Log in with your credentials to sourcegraph and copy the access token.&#xA;&#xA;This will prompt you back to the neovim interface to provide the access token. 
Paste the access token there and you will be good to go.&#xA;&#xA;### Health Check&#xA;&#xA;Once the plugin is installed, you can check that it is correctly set up by running the `:checkhealth sg` command.&#xA;&#xA;Below is the health check report on the sourcegraph plugin in neovim.&#xA;&#xA;```plaintext&#xA;sg: require(&#34;sg.health&#34;).check()&#xA;&#xA;sg.nvim report ~&#xA;- Machine: x86_64, sysname: Linux&#xA;- OK Valid nvim version: table: 0x7ffa0b7bce38&#xA;- OK Found `cargo` (cargo 1.70.0) is executable&#xA;-     Use `:SourcegraphDownloadBinaries` to avoid building locally.&#xA;- OK Found `sg-nvim-agent`: &#34;/home/meet/.local/share/nvim/site/pack/packer/start/sg.nvim/dist/sg-nvim-agent&#34;&#xA;- OK Found `node` (config.node_executable) is executable.&#xA;  Version: &#39;20.10.0&#39;&#xA;- OK Found `cody-agent`: /home/meet/.local/share/nvim/site/pack/packer/start/sg.nvim/dist/cody-agent.js&#xA;- OK   Authentication setup correctly&#xA;- OK     endpoint set to: https://sourcegraph.com&#xA;- OK Found correct binary versions: &#34;1.0.5&#34; = &#34;1.0.5&#34;&#xA;- OK   Sourcegraph Connection info: {&#xA;  access_token_set = true,&#xA;  endpoint = &#34;https://sourcegraph.com&#34;,&#xA;  sg_nvim_version = &#34;1.0.5&#34;,&#xA;  sourcegraph_version = {&#xA;&#x9;&#x9;  build = &#34;256174_2023-12-30_5.2-dbb20677711c&#34;,&#xA;&#x9;&#x9;  product = &#34;256174_2023-12-30_5.2-dbb20677711c&#34;&#xA;  }&#xA;  }&#xA;- To manage your Cody Account, navigate to: https://sourcegraph.com/cody/manage&#xA;- OK Cody Account Information: {&#xA;&#x9;  chat_limit = 20,&#xA;&#x9;  chat_usage = 53,&#xA;&#x9;  code_limit = 500,&#xA;&#x9;  code_usage = 0,&#xA;&#x9;  cody_pro_enabled = false,&#xA;&#x9;  username = &#34;Mr-Destructive&#34;&#xA;  }&#xA;- OK sg.nvim is ready to run&#xA;```&#xA;&#xA;At this point, the sourcegraph plugin is ready to be used. 
However, we need to set up the plugin in neovim with the default configurations.&#xA;&#xA;### Configuration&#xA;&#xA;In your lua setup files, you can set up the plugin like this:&#xA;&#xA;```lua&#xA;require(&#34;sg&#34;).setup()&#xA;```&#xA;&#xA;Source the lua file and restart neovim; this should make the sourcegraph commands available in the editor. After these steps, Cody is right inside neovim.&#xA;&#xA;## Usage&#xA;&#xA;To use the plugin, there are multiple commands available within the editor, the complete list of them is given below:&#xA;&#xA;```plaintext&#xA;SourcegraphBuild&#xA;SourcegraphClear&#xA;SourcegraphDownloadBinaries&#xA;SourcegraphInfo&#xA;SourcegraphLink&#xA;SourcegraphLogin&#xA;SourcegraphSearch&#xA;&#xA;CodyAsk&#xA;CodyChat&#xA;CodyDo&#xA;CodyRestart&#xA;CodyTask&#xA;CodyTaskAccept&#xA;CodyTaskNext&#xA;CodyTaskPrev&#xA;CodyTaskView&#xA;CodyToggle&#xA;```&#xA;&#xA;You can get more info about these commands with the `:help commandname` command. The commands are, however, self-explanatory and simple to use.&#xA;&#xA;### Cody Ask&#xA;&#xA;You can quickly pass a prompt as a string to the `:CodyAsk` command as `:CodyAsk &#34;what is neovim by the way?&#34;` and you will get Cody&#39;s response in the side vertical split.&#xA;&#xA;### Cody Chat&#xA;&#xA;You can start Cody chats from neovim with the command `:CodyChat`. This will open a vertical split with the Cody chat interface; the bottom split has the user input prompt and the upper window will have the generated Cody response. You can enter the prompt in the bottom buffer, get into normal mode, and hit enter to send the prompt and generate a response from Cody.&#xA;&#xA;![Sourcegraph Cody Chat Interface](https://meetgor-cdn.pages.dev/sg-nvim-cody-chat.png)&#xA;&#xA;## Conclusion&#xA;&#xA;Sourcegraph Cody is a great tool for getting quick solutions to trivial as well as specific problems in the current file/package or project. 
Cody&#39;s context parsing makes it valuable for developers to ask specific questions, and it answers them in a really straightforward way, without the developer needing to assemble the context for the prompt. Thank you for reading. Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>2023: Year Review</title>
      <link>https://www.meetgor.com/posts/2023-review</link>
      <description>Introduction 2023, what a year! If you&#39;re a developer and haven&#39;t heard the word &#39;AI&#39; a million times, where have you been? With the introduction of Chat GPT in</description>
      <pubDate>Sun, 31 Dec 2023 00:00:00 UTC</pubDate>
<content>## Introduction&#xA;&#xA;2023, what a year! If you&#39;re a developer and haven&#39;t heard the word &#39;AI&#39; a million times, where have you been? With the introduction of ChatGPT in December 2022, the start of 2023 was buzzing with OpenAI&#39;s flagship product. I also did a [2022 review](https://www.meetgor.com/2022-review/).&#xA;&#xA;The year 2023 was a pivotal year for me to gain experience in solving real-world problems and delivering solutions. I went from a code-first approach to a code-last approach for solving a problem. The year was not ideal for freshers, as from the end of 2022 till 2023 there was a huge round of layoffs in big tech companies. The competition was too much for freshers and the AI hype added insult to injury.&#xA;&#xA;I won&#39;t say that 2023 was a dark year, but a year that changed the trajectory of tech forever. I am grateful that I witnessed the introduction of AI in all domains, especially in tech. I think this year and the coming years will be exciting if you are a passionate developer, because I feel it is like an industrial revolution, but in software development: there are LLMs, tools, and services that help developers boost their productivity and avoid laborious and boilerplate tasks.&#xA;&#xA;In 2023&#xA;&#xA;* Continued Solar Labs Internship (Backend)&#xA;    &#xA;* Contributed to Steampipe and MindsDB&#xA;    &#xA;* Joined Docsumo as a Python Intern&#xA;    &#xA;* Wrote 10 articles on Golang&#xA;    &#xA;* Finalist in Smart India Hackathon&#xA;    &#xA;&#xA;Crossing Checklists of 2023 from 2022:&#xA;&#xA;* &lt;s&gt;Golang Deep Dive with Open Contributions and Projects&lt;/s&gt;&#xA;    &#xA;* &lt;s&gt;Machine Learning&lt;/s&gt;&#xA;    &#xA;* Kubernetes&#xA;    &#xA;* Rust&#xA;    &#xA;* Kafka&#xA;    &#xA;&#xA;## Focus of 2023&#xA;&#xA;The year was mostly spent on internship tasks and writing go code for open-source projects. 
I have created over 6 SDKs in Golang for various APIs, Services, and Products. I had worked on APIs and specific tasks related to the products of the company I interned at.&#xA;&#xA;### The Solar Labs Internship&#xA;&#xA;Carrying forward where I left off in 2022, I interned from July 2022 to January 2023 at [The Solar Labs](https://arka360.com/) (now Arka 360) as a Backend Intern. The initial offer was for a 6-month remote internship. However, they were generous and offered an extension of 4 months i.e. till April 2023. So, I continued the internship as a backend intern from July 2022 till April 2023 i.e. 10 months. In this period I went from a novice developer to a somewhat organized and thoughtful developer.&#xA;&#xA;I learned a lot of things including:&#xA;&#xA;* Documenting APIs (keeping the structure clean and consistent)&#xA;    &#xA;* Collaborating with Peers for integration&#xA;    &#xA;* Deeply understanding requirements and clarifying the scope&#xA;    &#xA;&#xA;I am grateful to my mentors and seniors at this company for sharing their experiences and guidance for the entire duration. They were patient with my silly doubts and mistakes, gave me valuable feedback, and helped me whenever I reached out. It was a pleasant experience working and learning with them.&#xA;&#xA;### Job Hunting&#xA;&#xA;I had taken a break of 20 days from coding, from April 27 to May 10th. This was a break in which I traveled to Uttarakhand and trekked to Kedarnath and Tungnath. It was a refreshing trip and one of the most memorable trips I will ever have with my family.&#xA;&#xA;Vacations were over and it was time to find other internships or jobs. I had been in the shadows till then, as I had the internship from January to April. I hadn&#39;t noticed how bad the market was for getting a job; layoffs were happening left and right. 
But this is where I hit reality and was truly exposed to the tech market of 2023.&#xA;&#xA;#### 2022 vs 2023 Tech Job Market&#xA;&#xA;I got The Solar Labs Internship last year in 2022; I had just applied to 4-5 different openings, and with the 3rd application itself, I was invited to the interviews. Luckily I cleared the 2 rounds and got the internship.&#xA;&#xA;But this was 2023, the year that will be remembered as a drought of tech hiring. I started applying in June 2023, since I had semester exams in May. I applied to 5-10 openings; I knew it would be a bit hard since the layoffs were happening and the process might be a bit challenging. No response. I applied to another 10-15 openings, but still no response, and it continued. The few responses that did come were rejections, like when I had applied to a position where experience was required but I was a fresher.&#xA;&#xA;Weeks passed, months were passing; it was almost the end of July when on-campus placements started in college, a ray of hope, but I didn&#39;t have many expectations from them. I was applying casually and getting a feel of the selection process. Something struck me in July: I remembered a company reaching out to me in October 2022, but I had to reject it since I was already in the internship; I however said I might look into it next year. But things hit hard when I approached them in July 2023 requesting an opening for a junior developer, but the positions were closed.&#xA;&#xA;This is the point where everything started to fall apart; I started losing my confidence and burning in self-doubt. I was still applying to internships and jobs in on-campus as well as off-campus drives.&#xA;&#xA;#### Someone notices my skills&#xA;&#xA;Months passed, and it was almost September 2023 when the ray of hope finally shone with a phone call for interview selection from Docsumo. I scheduled the interviews one after the other, and it took a week to complete the process; I was selected for a 2-month internship. 
I was not sure whether to be happy or sad; it was definitely a great startup, but just for 2 months. It would feel as if I had just gotten started, warmed up, and got disconnected. But I took that offer to join from November to January.&#xA;&#xA;It was a heck of a ride to land an internship in 2023. I think I was lucky I got it here; it could have been worse.&#xA;&#xA;#### Job Market narrative&#xA;&#xA;I would like to emphasize the approach for getting a job/internship in 2024:&#xA;&#xA;* Be honest and clear about your goals and skills&#xA;    &#xA;* Learn the fundamentals of the thing you are looking to get hired for&#xA;    &#xA;* Learn in public&#xA;    &#xA;* Be proactive while telling your introduction&#xA;    &#xA;* Keep up with the latest trends and technologies&#xA;    &#xA;&#xA;It would be an exaggeration to expect anyone to master all of them; just be aware of what you are headed towards. It is a marathon, not a sprint. Take it slowly and allow yourself to build a strong foundation; it will work in the long run.&#xA;&#xA;### Docsumo Internship&#xA;&#xA;I was not as excited about the Solar Labs internship as I was about the Docsumo one, as it was a hybrid role. I felt a bit disconnected and sometimes not able to clearly express myself while working remotely. So, I felt Docsumo would be a good way to make myself comfortable speaking with seniors and collaborating.&#xA;&#xA;I joined in November, had a brief onboarding process, and was introduced nicely to the product itself. In the next week, I was given the task that I would be working on over the few months of my stay at Docsumo. I have completed over a month at Docsumo and have had a great experience so far.&#xA;&#xA;It has changed my naive mindset to approaching a problem with a solution first and then code. 
It has helped me research a particular thing more deeply and use existing and well-tested solutions rather than reinventing the wheel.&#xA;&#xA;Still, there is a lot to learn and improve, and I will try my best to make good memories while I am here. Whatever happens, even for a short duration, they have given me hope. I am and will be grateful for the opportunity they have given me to work here.&#xA;&#xA;### Articles Written&#xA;&#xA;I mostly wrote about Golang and HTMX (Django) this year, with a total of 14 posts:&#xA;&#xA;* [100 days of Go Series](https://www.meetgor.com/series/100-days-of-golang)&#xA;    &#xA;* [HTMX + Django Chat Application](https://dev.to/mr_destructive/creting-a-chat-application-with-django-and-htmx-1095)&#xA;    &#xA;* Dev. to x Refine hackathon: [Utility Studio](https://dev.to/mr_destructive/utility-studio-quick-ai-powered-utilities-335j)&#xA;    &#xA;* Hashnode x Grafbase Hackathon: [Grafbase MindsDB](https://techstructiveblog.hashnode.dev/mindsdb-grafbase-graphql-api)&#xA;    &#xA;&#xA;### Open Source Contributions&#xA;&#xA;While I was applying for jobs and internships, I was actively contributing to open-source projects like Steampipe and MindsDB. I had been contributing in bits and pieces to Turbot&#39;s steampipe CLI for over a year. I got introduced to MindsDB in mid-2023 and started contributing there as well.&#xA;&#xA;My primary languages for the contributions were Go and Python. 
I picked Golang last year and have been loving it since the beginning.&#xA;&#xA;I created over 6 Go SDKs for various APIs, Products, and Services like:&#xA;&#xA;* [Turso Go](https://github.com/Mr-Destructive/turso-go)&#xA;    &#xA;* [PaLM2 API](https://github.com/Mr-Destructive/palm)&#xA;    &#xA;* [Substack](https://github.com/Mr-Destructive/substack-go)&#xA;    &#xA;* [MindsDB](https://github.com/Mr-Destructive/mindsdb_go_sdk)&#xA;    &#xA;* [HuggingChat](https://github.com/Mr-Destructive/hugging-chat-go)&#xA;    &#xA;&#xA;Contributed to MindsDB Python SDK, Steampipe CLI, and LibSQL(Turso) shell, with the following sets of Pull requests:&#xA;&#xA;* [Turbot Steampipe CLI](https://github.com/turbot/steampipe/pulls?q=is%3Apr+is%3Aclosed+author%3Amr-destructive+) and [cohere ai plugin](https://github.com/Mr-Destructive/steampipe-plugin-cohereai)&#xA;    &#xA;* [MindsDB Handlers](https://github.com/mindsdb/mindsdb/pulls?q=is%3Apr+is%3Aclosed+author%3Amr-destructive+)&#xA;    &#xA;* [Turso CLI](https://github.com/search?q=org%3Atursodatabase++mr-destructive&amp;type=pullrequests)&#xA;    &#xA;&#xA;I loved working with these communities and look forward to contributing and learning more from these interactions. I am excited to check out flowpipe from Turbot, add a couple of plugins on the Steampipe hub, add a few handlers to mindsdb, and integrate Turso db with some golang and Python projects I have in mind.&#xA;&#xA;### Non Technical Things&#xA;&#xA;Apart from tech, there are other things as well I do in my free time like reading science fiction, and fantasy novels, writing short stories or novellas, travelling, and gaming. 
This year, I was able to balance many non-technical things as well, like:&#xA;&#xA;* Read 7 books (all fiction) -&gt; [Goodreads Profile](https://www.goodreads.com/user_challenges/44008346)&#xA;    &#xA;* Started to write 2 novellas&#xA;    &#xA;* Trekked to the highest Shiva temple, [Tungnath](https://en.wikipedia.org/wiki/Tungnath)&#xA;    &#xA;* Maintained a daily journal throughout the year (at least 5 lines, digital)&#xA;    &#xA;&#xA;## Goals for 2024&#xA;&#xA;The aim is still to be curious and actively adapt myself to learn new technologies with an open mindset. Some high-level goals for my tech journey moving into 2024 will look something like this:&#xA;&#xA;* Build a consistent blog writing routine&#xA;    &#xA;* Complete the 100-days-of-Golang Series&#xA;    &#xA;* Deep dive into Kubernetes and Micro-services&#xA;    &#xA;* Write a technical book&#xA;    &#xA;* Continue contributing to open source more actively in communities&#xA;    &#xA;&#xA;I am focusing on writing technical blogs/documentation as it enhances my learning and deepens my understanding. So, I will look forward to bringing back the number-crunching blog posts every week.&#xA;&#xA;Also, 2024 will be the year I graduate from the university, so June 2024 is when I&#39;ll complete my final exams, and probably the convocation will happen in August. After that, I am a B.Tech I.T. graduate. Time went quickly!&#xA;&#xA;## Conclusion&#xA;&#xA;At the beginning of 2023, I was shy and under-confident, I didn&#39;t speak up clearly, and I made random side projects (I still do). But at the end of 2023, I had improved my communication skills and felt a lot more confident and expressive than I used to be. 
I still make random side projects, but they are more refined and meaningful in terms of what I want to achieve with each specific thing I make.&#xA;&#xA;2023 was a bittersweet year in terms of expectations and results, but it had a significant impact on my growth as a stronger and better developer. Hopefully, 2024 will be a better and more challenging one. I always remember this quote when I feel challenged:&#xA;&#xA;&gt; Smooth seas never made skilled sailors&#xA;&#xA;With 2023 finally wrapping up, it is time to clear up our bad memories and get excited for the rays of hope and new beginnings in 2024. A heartfelt thank you for reading through this, and I hope 2024 will be a great and happy year for you.&#xA;&#xA;Happy New Year and as always,&#xA;&#xA;Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>Connecting LibSQL database with Python</title>
      <link>https://www.meetgor.com/posts/libsql-python</link>
      <description>Introduction LibSQL is an **Open Contribution** fork of SQLite. Open Contribution means that it allows suggestions and contributions from the community as oppos</description>
      <pubDate>Thu, 30 Nov 2023 00:00:00 UTC</pubDate>
<content>## Introduction&#xA;&#xA;LibSQL is an **Open Contribution** fork of SQLite. Open Contribution means that it allows suggestions and contributions from the community, as opposed to SQLite, which is open source but doesn&#39;t accept community contributions.&#xA;&#xA;## Installation of LibSQL Client(s)&#xA;&#xA;There are two libraries for LibSQL to interact with Python, the [libsql-client](https://github.com/libsql/libsql-client-py/) and the [libsql-experimental-python](https://github.com/libsql/libsql-experimental-python). The former is the recommended client as it is stable, whereas the latter is in development and has the latest features from the libsql database engine; however, it is compatible with the [sqlite](https://docs.python.org/3/library/sqlite3.html) module.&#xA;&#xA;There are differences in how to connect and fetch responses in each of them, but if you want to get started quickly you can safely go with the `libsql-client` as it is recommended by the official documentation. If you want to connect an existing SQLite database to libsql, you can turn to the `libsql-experimental-python` package.&#xA;&#xA;We will explore both of them in this article. Let&#39;s dive straight into the installation of the package.&#xA;&#xA;## LibSQL Client&#xA;&#xA;Libsql-client is the Python package provided by Turso as a Python client to interact with a libsql/sqlite database.&#xA;&#xA;### Installation&#xA;&#xA;The installation of a Python package is as simple as a pip install.&#xA;&#xA;```bash&#xA;pip install libsql-client&#xA;```&#xA;&#xA;### Connecting to the database&#xA;&#xA;Connecting to a libsql database is as simple as SQLite for the local database file. 
However, if you have a libsql database over the edge (Turso), you can use the API provided by SQLD to connect to that database.&#xA;&#xA;#### Connecting Local database file&#xA;&#xA;To connect to a simple libsql database file, you can either create a file as `mydb.db` or just move into the code straight away.&#xA;&#xA;```python&#xA;import libsql_client&#xA;&#xA;client = libsql_client.create_client_sync(&#34;file:temp.db&#34;)&#xA;result = client.execute(&#34;SELECT 1;&#34;)&#xA;&#xA;for row in result.rows:&#xA;    print(row)&#xA;client.close()&#xA;```&#xA;&#xA;OR&#xA;&#xA;```python&#xA;import libsql_client&#xA;&#xA;with libsql_client.create_client_sync(&#34;file:temp.db&#34;) as client:&#xA;    result = client.execute(&#34;SELECT 1;&#34;)&#xA;    &#xA;    for row in result.rows:&#xA;        print(row)&#xA;```&#xA;&#xA;So, in the above code, the `client` object is a client of the libsql database. We use the [create\_client\_sync](https://libsql.org/libsql-client-py/reference.html#create_client_sync) method that accepts a few parameters: `url`, i.e. the URL of the database we want to connect to, and `auth_token`, which we will see when we want to connect to a database on the edge.&#xA;&#xA;There is also the [create\_client](https://libsql.org/libsql-client-py/reference.html#create_client) method, which is an async method. You can turn to this method if you are looking for async connections.&#xA;&#xA;We have used `file:temp.db`, i.e. to connect a local file in the current directory named `temp.db`. This method will return a [Client](https://libsql.org/libsql-client-py/reference.html#Client) object, in this case a [ClientSync](https://libsql.org/libsql-client-py/reference.html#ClientSync) object which is a wrapper around the `Client` object. We will be using the methods available for the Client class later while querying in detail. 
You can see that we have used the [execute](https://libsql.org/libsql-client-py/reference.html#ClientSync.execute) method from the client object.&#xA;&#xA;#### Connecting to a cloud (Turso) database&#xA;&#xA;To connect to a libsql database on the edge, i.e. Turso, you need to specify the auth token (a JWT) in the `create_client_sync` method.&#xA;&#xA;You can create the token for accessing the database with:&#xA;&#xA;```bash&#xA;turso db tokens create mydb-name&#xA;```&#xA;&#xA;This command will create a JWT; store this token securely, as it will be used to access the database hosted on the Turso cloud.&#xA;&#xA;```bash&#xA;export JWT_TOKEN=&#34;YOURTOKEN&#34;&#xA;&#xA;## OR save it in the .env file&#xA;## JWT_TOKEN=YOURTOKEN&#xA;```&#xA;&#xA;This will store the token in an environment variable that can later be read from the local environment.&#xA;&#xA;```python&#xA;import libsql_client&#xA;&#xA;with libsql_client.create_client_sync(&#xA;    &#34;libsql://dbname-orgname.turso.io&#34;,&#xA;    auth_token=&#34;secret.something.secret&#34;&#xA;) as client:&#xA;    result = client.execute(&#34;SELECT 1;&#34;)&#xA;&#xA;    for row in result.rows:&#xA;        print(row)&#xA;```&#xA;&#xA;The `auth_token` is the JWT that we created in the previous step; you can load the token from an environment variable with the following code:&#xA;&#xA;```python&#xA;import os&#xA;&#xA;JWT_TOKEN = os.environ.get(&#34;JWT_TOKEN&#34;)&#xA;&#xA;## OR&#xA;&#xA;# pip install python-dotenv&#xA;from dotenv import load_dotenv&#xA;load_dotenv()&#xA;JWT_TOKEN = os.environ.get(&#34;JWT_TOKEN&#34;)&#xA;```&#xA;&#xA;This token can be used to authenticate the connection to the Turso database. 
The rest of the connection code remains as it is.&#xA;&#xA;### Running Queries and Fetching results&#xA;&#xA;Now that we have the connection, we can move into querying the database through the client connection.&#xA;&#xA;## LibSQL Experimental Client (SQLite compatible)&#xA;&#xA;### Installation&#xA;&#xA;The installation of the [libsql-experimental-python](https://badge.fury.io/py/libsql-experimental) package is as simple as a pip install with the name of the package.&#xA;&#xA;```bash&#xA;pip install libsql-experimental&#xA;```&#xA;&#xA;Now, we can move into connecting to the database with a Python script.&#xA;&#xA;### Connecting to the database&#xA;&#xA;Connecting to a local libsql database file is as simple as it is with SQLite.&#xA;&#xA;### Syncing with a remote database&#xA;&#xA;### Running Queries and Fetching results</content>
      <type></type>
    </item>
    <item>
      <title>Golang Web: GET Method</title>
      <link>https://www.meetgor.com/posts/golang-web-get-method</link>
      <description>Introduction In this section of the series, we will be exploring how to send a HTTP request in golang. We will be understanding how to send a basic GET request,</description>
      <pubDate>Sat, 28 Oct 2023 00:00:00 UTC</pubDate>
<content>## Introduction&#xA;&#xA;In this section of the series, we will be exploring how to send a `GET` HTTP request in golang. We will understand how to send a basic GET request, create an HTTP request and customize the client, add headers, read the response body, etc., in the following sections of this post.&#xA;&#xA;## What is a GET method?&#xA;&#xA;A [GET](https://en.wikipedia.org/wiki/HTTP#Request_methods) method in the context of an HTTP request is an action that is used to obtain data/resources. The `GET` method is used in a web application to get a resource like an HTML page, image, video, media, etc.&#xA;&#xA;Some common use cases of the `GET` method are:&#xA;&#xA;- Loading a webpage&#xA;- Getting an image, file or other resource&#xA;- API requests to retrieve data&#xA;- Search queries sending filters and parameters&#xA;&#xA;## Basic GET Request&#xA;&#xA;To use the HTTP method `GET` in golang, the [net/http](https://pkg.go.dev/net/http) package has a [Get](https://pkg.go.dev/net/http#Get) method. This method simply takes in a URL as a string and returns the [response](https://pkg.go.dev/net/http#Response) or an error. 
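As a quick reference, the signature of that helper (a declaration from the standard `net/http` package, not a runnable snippet) is:

```go
// declared in package net/http; called as http.Get(...)
func Get(url string) (resp *Response, err error)
```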
Let&#39;s look at how to send a basic HTTP GET request in golang.&#xA;&#xA;```go&#xA;// web/methods/get/main.go&#xA;&#xA;&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;net/http&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;reqURL := &#34;https://www.google.com&#34;&#xA;&#x9;resp, err := http.Get(reqURL)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;&#x9;fmt.Println(resp)&#xA;    fmt.Println(resp.Status)&#xA;    fmt.Println(&#34;Status Code:&#34;, resp.StatusCode)&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run main.go&#xA;&#xA;&amp;{200 OK 200 HTTP/2.0 2 0 map[Alt-Svc:[h3=&#34;:443&#34;; ma=2592000,h3-29=&#34;:443&#34;; ma=2592000] Cache-Control:[private, max-age=0] Content-Security-Policy-Report-Only:[object-src &#39;none&#39;;base-uri &#39;self&#39;;script-src &#39;nonce-pdz_s8Gr0owwMbX8I9qNEQ&#39; &#39;strict-dynamic&#39; &#39;report-sample&#39; &#39;unsafe-eval&#39; &#39;unsafe-inline&#39; https: http:;report-uri https://csp.withgoogle.com/csp/gws/other-hp] Content-Type:[text/html; charset=ISO-8859-1] Date:[Fri, 27 Oct 2023 09:37:04 GMT] Expires:[-1] P3p:[CP=&#34;This is not a P3P policy! 
See g.co/p3phelp for more info.&#34;] Server:[gws] Set-Cookie:[1P_JAR=2023-10-27-09; expires=Sun, 26-Nov-2023 09:37:04 GMT; path=/; domain=.google.com; Secure AEC=Ackid1Q5FARA_9d7f7znggUdw6DoJA1DBpI17Z0SWxN519Dc64EqmYVHlFg; expires=Wed, 24-Apr-2024 09:37:04 GMT; path=/; domain=.google.com; Secure; HttpOnly; SameSite=lax NID=511=EToBPqckCVRE7Paehug1PgNBKqe7lFLI9d12xJrGbvP9r8tkFIRWciry3gsy8FZ8OUIK4gE4PD-irgNzg4Y1fVePLdyu0AJdY_HcqL6zQYok-Adn-y5TDPmMCNuDnrouBfoxtqVjYY_4RFOe8jalkYto5fQAwzWnNJyw8K0avsw; expires=Sat, 27-Apr-2024 09:37:04 GMT; path=/; domain=.google.com; HttpOnly] X-Frame-Options:[SAMEORIGIN] X-Xss-Protection:[0]] 0xc000197920 -1 [] false true map[] 0xc0000ee000 0xc0000d8420}&#xA;&#xA;200 OK&#xA;&#xA;Status Code: 200&#xA;```&#xA;&#xA;In the above code, we have defined a URL string as `reqURL` and used the [Get](https://pkg.go.dev/net/http#Get) method to send a GET request. The `Get` method is passed the `reqURL` string and the return values are stored as `resp` and `err`. We have added an error check after calling the `Get` method in order to avoid errors later in the code.&#xA;&#xA;The `Get` method, as seen from the output, has returned a `*http.Response` object; we can use the `Status` and `StatusCode` properties to get the status of the response. In this case, the status code of the response was `200`. The response `resp` is an object of type `http.Response`, i.e. it has fields like `Body`, `StatusCode`, `Headers`, `Proto`, etc. We can get each individual field from the `resp` object. We will later look into how to read the `Body` field from the response; it is not directly read as a string, nor is it stored in other forms, rather it is streamed from the requested URL.&#xA;&#xA;## Creating a GET request&#xA;&#xA;We can even construct a GET request using the [NewRequest](https://pkg.go.dev/net/http#NewRequest) method. This is a low-level way of creating a `GET` request. 
We mention the `method`, the `URL`, and the `body`; in the case of a `GET` request, there is no body. So, `NewRequest` is a general way of creating an `http` request.&#xA;&#xA;```go&#xA;// web/methods/get/newreq.go&#xA;&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;net/http&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;reqURL := &#34;https://www.google.com&#34;&#xA;&#x9;req, err := http.NewRequest(http.MethodGet, reqURL, nil)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;&#x9;resp, err := http.DefaultClient.Do(req)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;&#x9;fmt.Println(resp)&#xA;}&#xA;```&#xA;&#xA;As we can see, we construct a `GET` request using the `NewRequest` method and then use the [Do](https://pkg.go.dev/net/http#Client.Do) method to send the request to the server. The [http.DefaultClient](https://pkg.go.dev/net/http#DefaultClient) is used as the client to send the request; if we want to customize this, we can create a new instance of [http.Client](https://pkg.go.dev/net/http#Client) and use it to send requests. We will be taking a look at clients in another part of this series when we want to persist a connection or avoid connecting multiple times to the same application/URL.&#xA;&#xA;For now, we will go ahead with the DefaultClient. This will trigger the request, in this case a `GET` request, to the URL specified in the `reqURL` string. The `Do` method returns either a `http.Response` or an `error`, just like the `Get` method did.&#xA;&#xA;## Reading the Response Body&#xA;&#xA;We saw some different ways to send a `GET` request; now the below example will demonstrate how to read the body of the response. The response body is read from a buffer rather than loading the entire response into memory. This makes it flexible to parse the data efficiently and as per our needs. 
We will see how the [io](https://pkg.go.dev/io) package&#39;s [ReadAll](https://pkg.go.dev/io#ReadAll) method can be used to read from the response buffer.&#xA;&#xA;```go&#xA;// web/methods/get/body.go&#xA;&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;io&#34;&#xA;&#x9;&#34;net/http&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;reqURL := &#34;https://httpbin.org/html&#34;&#xA;&#x9;resp, err := http.Get(reqURL)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;&#x9;// close the body reader before the function returns&#xA;&#x9;// this avoids leaking the underlying connection&#xA;&#x9;defer resp.Body.Close()&#xA;&#xA;&#x9;// stream the data from the response body only once&#xA;&#x9;// it is not buffered in the memory&#xA;&#x9;body, err := io.ReadAll(resp.Body)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;&#x9;fmt.Println(string(body))&#xA;}&#xA;```&#xA;&#xA;In the above example, we are trying to get the body from the response to the request sent to [`https://httpbin.org/html`](https://httpbin.org/html). We have used the simple `Get` method instead of `NewRequest` and `Do` for simplicity. The response is stored in `resp`; we have also added `defer resp.Body.Close()`, which says that the body reader object will be closed when the function returns. So, the `Body` is not readily available data; we need to obtain/stream the data from the server. We receive the body in bytes over the TCP connection, and it is streamed through a buffer.&#xA;&#xA;The response body is streamed from the server, which means that it&#39;s not immediately available as a complete data set. We read the body in bytes as it arrives over the network, and it&#39;s stored in a buffer, which allows us to process the data efficiently.&#xA;&#xA;### Reading Body in bytes&#xA;&#xA;We can even read the body in bytes, i.e. by reading a chunk of the buffer at a time. 
We can use the [bytes.Buffer](https://pkg.go.dev/bytes#Buffer) container object to store the body. Then we can create a slice of bytes as `[]byte` of a certain size and read the body into the chunk. By writing the chunk into the buffer, we get the entire body from the response.&#xA;&#xA;```go&#xA;// web/methods/get/body.go&#xA;&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;bytes&#34;&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;net/http&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;reqURL := &#34;https://httpbin.org/html&#34;&#xA;&#x9;resp, err := http.Get(reqURL)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;&#x9;defer resp.Body.Close()&#xA;&#xA;&#x9;// create an empty buffer&#xA;&#x9;buf := new(bytes.Buffer)&#xA;&#xA;&#x9;// create a chunk buffer of a fixed size&#xA;&#x9;chunk := make([]byte, 1024)&#xA;&#xA;&#x9;for {&#xA;&#x9;&#x9;// read into the chunk&#xA;&#x9;&#x9;n, err := resp.Body.Read(chunk)&#xA;&#x9;&#x9;// append whatever was read to the buffer&#xA;&#x9;&#x9;buf.Write(chunk[:n])&#xA;&#x9;&#x9;fmt.Printf(&#34;%s\n&#34;, chunk[:n])&#xA;&#x9;&#x9;if err != nil {&#xA;&#x9;&#x9;&#x9;break&#xA;&#x9;&#x9;}&#xA;&#x9;}&#xA;&#xA;&#x9;// entire body stored in bytes&#xA;&#x9;fmt.Println(buf.String())&#xA;}&#xA;```&#xA;&#xA;In the above example, the body is read chunk-wise and obtained as a slice of bytes. We define the request as a `GET` request to [`httpbin.org/html`](http://httpbin.org/html). We create a new buffer with [bytes.Buffer](https://pkg.go.dev/bytes#Buffer), then we define `chunk` as a container to stream the response body into, with a particular size. We have taken `1024` bytes as the size of the chunk. Then, inside an infinite for loop, we read the body as `n, err := resp.Body.Read(chunk)`. The code will read the body into the chunk (a slice of bytes) and the return values are the number of bytes read and an error, if any. Note that `Read` can return data together with `io.EOF`, so we write `chunk[:n]` into the buffer before checking the error. 
We then check the error: if there is one (the reader returns `io.EOF` once the body is exhausted), we break out of the loop, indicating that we have completed reading the entire body or something went wrong. The chunk is written into the buffer that we allocated earlier as `buf`; we are basically populating the buffer with more slices of bytes.&#xA;&#xA;The entire body is then stored in the buffer as a slice of bytes, so we have to cast it into a string to see the contents. This is how we can read the contents of a response body in chunks.&#xA;&#xA;### Parsing the JSON body with structs&#xA;&#xA;If the structure of the response body is already decided, then we can define a struct for the response body and unmarshal/deserialize into it. This means we can convert the byte representation of the data into a Golang-specific structure, which is a high-level representation of the data. We can parse the JSON body into a defined struct using the [Unmarshal](https://pkg.go.dev/encoding/json#Unmarshal) or [Decode](https://pkg.go.dev/encoding/json#Decoder.Decode) methods in the [json](https://pkg.go.dev/encoding/json) package.&#xA;&#xA;Let&#39;s look at both methods.&#xA;&#xA;#### Using Unmarshal&#xA;&#xA;The `Unmarshal` method takes in two parameters, i.e. the body in bytes and a reference to the object that we want to unmarshal into. 
The method returns an error if there is a discrepancy in the returned JSON or if it is unable to deserialize the JSON object into the defined structure.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;encoding/json&#34;&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;io&#34;&#xA;&#x9;&#34;net/http&#34;&#xA;)&#xA;&#xA;type Product struct {&#xA;&#x9;ID                 int      `json:&#34;id&#34;`&#xA;&#x9;Title              string   `json:&#34;title&#34;`&#xA;&#x9;Description        string   `json:&#34;description&#34;`&#xA;&#x9;Price              float64  `json:&#34;price&#34;`&#xA;&#x9;DiscountPercentage float64  `json:&#34;discountPercentage&#34;`&#xA;&#x9;Rating             float64  `json:&#34;rating&#34;`&#xA;&#x9;Stock              int      `json:&#34;stock&#34;`&#xA;&#x9;Brand              string   `json:&#34;brand&#34;`&#xA;&#x9;Category           string   `json:&#34;category&#34;`&#xA;&#x9;Thumbnail          string   `json:&#34;thumbnail,omitempty&#34;`&#xA;&#x9;Images             []string `json:&#34;-&#34;`&#xA;}&#xA;&#xA;func main() {&#xA;&#x9;reqURL := &#34;https://dummyjson.com/products/1&#34;&#xA;&#x9;resp, err := http.Get(reqURL)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;&#x9;defer resp.Body.Close()&#xA;&#xA;&#x9;body, err := io.ReadAll(resp.Body)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;&#xA;&#x9;var data Product&#xA;&#x9;if err := json.Unmarshal(body, &amp;data); err != nil {&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;&#xA;&#x9;fmt.Println(data)&#xA;&#x9;fmt.Println(data.Title)&#xA;}&#xA;```&#xA;&#xA;```bash&#xA;$ go run main.go&#xA;&#xA;{1 iPhone 9 An apple mobile which is nothing like apple 549 12.96 4.69 94 Apple smartphones https://cdn.dummyjson.com/product-images/1/thumbnail.jpg []}&#xA;iPhone 9&#xA;```&#xA;&#xA;In the above example, we have defined a structure called Product with fields such as `ID`, `Title`, `Description`, etc. 
We use the JSON tag to specify how each field should be encoded to or decoded from JSON. These tags guide the Golang JSON encoders and decoders to correctly map JSON data to struct fields and vice versa. The `omitempty` option in a struct tag instructs the encoder to omit the field from the JSON output if the field&#39;s value is the zero value for its type (e.g., 0 for integers, &#34;&#34; for strings, nil for pointers, slices, and maps). This is useful for reducing the size of the JSON output by excluding empty or default-valued fields.&#xA;&#xA;Conversely, the `-` option in a struct tag tells the encoder and decoder to completely ignore the field. It will not be included in encoded JSON, nor will it be populated when decoding JSON into a struct. This is particularly useful for excluding fields that are meant for internal use only and should not be exposed through JSON.&#xA;&#xA;Therefore, `omitempty` is used to control the inclusion of fields in the JSON output based on their values, while `-` is used to exclude fields from both encoding and decoding.&#xA;&#xA;We send the `GET` request to the API `https://dummyjson.com/products/1`. The response from the request is read into a slice of bytes with [io.ReadAll](https://pkg.go.dev/io#ReadAll), which takes in an [io.Reader](https://pkg.go.dev/io#Reader) object, in this case `resp.Body`, and returns a slice of bytes along with an error if any issue arises while reading the body. 
Further, we can use the [Unmarshal](https://pkg.go.dev/encoding/json#Unmarshal) method to parse the slice of bytes `body` into the struct `Product` via the variable `data`; passing the reference `&amp;data` indicates that the method will directly mutate/change this variable and populate the object with the fields from the body.&#xA;&#xA;So, in a gist, to convert the JSON body into a Golang-native structure with `Unmarshal`, follow these steps:&#xA;&#xA;- Read the body into a slice of bytes using `io.ReadAll`&#xA;- Create an object of the struct&#xA;- Pass the body as a slice of bytes and the reference of that object (struct instance) into the Unmarshal method&#xA;- Access the object with the fields in the struct&#xA;&#xA;In the output response, we can see the object is populated with the fields from the body. The `Title` field is accessed using `data.Title` as we do with a normal golang struct. The `Images` field is not populated because we have ignored/omitted it in the json tag with `-`.&#xA;&#xA;#### Using Decoder&#xA;&#xA;Similar to `Unmarshal`, we can use the [Decoder](https://pkg.go.dev/encoding/json#Decoder) to parse the body into a struct. However, the parameters it takes are a bit different and it is a two-step process. We first create a [Decoder](https://pkg.go.dev/encoding/json#Decoder) object using the [NewDecoder](https://pkg.go.dev/encoding/json#NewDecoder) method, which takes in an `io.Reader` object; luckily, the body from the response is already an `io.Reader`, so we can directly pass `resp.Body` into the `NewDecoder` method. The second step is to decode the data into the object; here as well, we need to create the object of the struct and pass the reference of the object to the [Decode](https://pkg.go.dev/encoding/json#Decoder.Decode) method. 
The `Decode` method reads the bytes from `resp.Body` via the `Decoder` object and populates the fields of the object provided through the reference to the struct.&#xA;&#xA;So the steps for deserializing the JSON object into the struct with the decode method are:&#xA;&#xA;- Create a decoder with the `NewDecoder` method and pass `resp.Body`, which is an `io.Reader` object, as the parameter&#xA;- Create an object of the struct&#xA;- Decode the body into that object using the `decoder.Decode` method&#xA;- Access the object with the fields in the struct&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;encoding/json&#34;&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;net/http&#34;&#xA;)&#xA;&#xA;type Product struct {&#xA;&#x9;ID                 int     `json:&#34;id&#34;`&#xA;&#x9;Title              string  `json:&#34;title&#34;`&#xA;&#x9;Description        string  `json:&#34;description&#34;`&#xA;&#x9;Price              float64 `json:&#34;price&#34;`&#xA;&#x9;DiscountPercentage float64 `json:&#34;discountPercentage&#34;`&#xA;&#x9;Rating             float64 `json:&#34;rating&#34;`&#xA;&#x9;Stock              int     `json:&#34;stock&#34;`&#xA;&#x9;Brand              string  `json:&#34;brand&#34;`&#xA;&#x9;Category           string  `json:&#34;category&#34;`&#xA;&#x9;Thumbnail          string  `json:&#34;thumbnail,omitempty&#34;`&#xA;&#x9;Images             []string `json:&#34;-&#34;`&#xA;}&#xA;&#xA;func main() {&#xA;&#x9;reqURL := &#34;https://dummyjson.com/products/1&#34;&#xA;&#x9;resp, err := http.Get(reqURL)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;&#x9;defer resp.Body.Close()&#xA;&#xA;&#x9;var data Product&#xA;&#x9;decoder := json.NewDecoder(resp.Body)&#xA;&#x9;err = decoder.Decode(&amp;data)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;&#xA;&#x9;fmt.Println(data)&#xA;&#x9;fmt.Println(data.Title)&#xA;}&#xA;```&#xA;&#xA;```bash&#xA;$ go run main.go&#xA;&#xA;{1 iPhone 9 An apple mobile which is nothing like apple 549 12.96 4.69 
94 Apple smartphones https://cdn.dummyjson.com/product-images/1/thumbnail.jpg []}&#xA;iPhone 9&#xA;```&#xA;&#xA;We have first defined the struct `Product` with tags like `json:&#34;id&#34;`. As explained earlier, we use the json tags to map the fields in the JSON data to the struct fields while encoding and decoding.&#xA;In the above example, we have sent a `GET` request to the API endpoint `https://dummyjson.com/products/1`, and we have created a new decoder with the `NewDecoder` method, passing `resp.Body` as the parameter. The `data` variable is created as a `Product` instance. The reference to `data` is passed to the `Decode` method from the `decoder` instance as `&amp;data`. This method will return either `nil` or an `error`. Thereafter, we can check for errors and only then access the data object with its fields populated from the response body.&#xA;&#xA;There is a certain difference between the `Unmarshal` and `Decode` methods: `NewDecoder` with `Decode` can bring a slight performance improvement, since it reads from the stream directly. Though it is not significant, having a little info about it might be handy for your use case. Read here for more info: [To Unmarshal or Decode](https://dev.to/jpoly1219/to-unmarshal-or-to-decode-json-processing-in-go-explained-5870)&#xA;&#xA;## Adding Headers to a GET Request&#xA;&#xA;We can even add headers before sending a `GET` request to a URL, by creating a `Request` object with the `NewRequest` method and adding a [Header](https://pkg.go.dev/net/http#Header) with the [Add](https://pkg.go.dev/net/http#Header.Add) method. The `Add` method takes in two parameters, i.e. 
the key of the header and the value of the header, both as strings.&#xA;&#xA;```go&#xA;// web/methods/get/header.go&#xA;&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;io&#34;&#xA;&#x9;&#34;net/http&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;req, err := http.NewRequest(http.MethodGet, &#34;https://api.github.com/users/mr-destructive&#34;, nil)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;&#x9;req.Header.Add(&#34;Authorization&#34;, &#34;token YOUR_TOKEN&#34;)&#xA;&#x9;resp, err := http.DefaultClient.Do(req)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;&#x9;defer resp.Body.Close()&#xA;&#x9;body, err := io.ReadAll(resp.Body)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;&#x9;fmt.Println(string(body))&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run web/methods/get/header.go&#xA;&#xA;{&#34;message&#34;:&#34;Bad credentials&#34;,&#34;documentation_url&#34;:&#34;https://docs.github.com/rest&#34;}&#xA;```&#xA;&#xA;In the above example, we have created a `GET` request to [`https://api.github.com/users/mr-destructive`](https://api.github.com/users/mr-destructive); the last portion is the username, and it could be any valid username. The request is to the GitHub API, so it might require API keys/tokens in the headers; however, certain endpoints that do not require Authorization headers might work just fine.&#xA;&#xA;So, the above code will give a `401` error, indicating that the request has wrong or invalid credentials; if we remove the header, the request will work fine. 
This is just an example, but headers are useful in working with APIs.&#xA;&#xA;Without adding the header:&#xA;&#xA;```&#xA;$ go run web/methods/get/header.go&#xA;&#xA;{&#34;login&#34;:&#34;Mr-Destructive&#34;,&#34;id&#34;:40317114,&#34;node_id&#34;:&#34;MDQ6VXNlcjQwMzE3MTE0&#34;,&#34;avatar_url&#34;:&#34;https://avatars.githubusercontent.com/u/40317114?v=4&#34;,&#34;gravatar_id&#34;:&#34;&#34;,&#34;url&#34;:&#34;https://api.github.com/users/Mr-Destructive&#34;,&#xA;... &#xA;&#34;updated_at&#34;:&#34;2023-10-10T17:57:22Z&#34;}&#xA;```&#xA;&#xA;That&#39;s it for the 33rd part of the series; all the source code for the examples is linked on GitHub in the [100 days of Golang](https://github.com/Mr-Destructive/100-days-of-golang/tree/main/web/methods/get/) repository.&#xA;&#xA;[100-days-of-golang](https://github.com/Mr-Destructive/100-days-of-golang)&#xA;&#xA;## References&#xA;&#xA;- [To Unmarshal or Decode](https://dev.to/jpoly1219/to-unmarshal-or-to-decode-json-processing-in-go-explained-5870)&#xA;- [Golang JSON tutorial](https://drstearns.github.io/tutorials/gojson/)&#xA;- [Golang OmitEmpty](https://www.sohamkamani.com/golang/omitempty/)&#xA;&#xA;## Conclusion&#xA;&#xA;In this article, we explored the `GET` HTTP method in golang. With a few examples covering creating a GET request, adding headers, and reading the response body, the basic use cases were demonstrated.&#xA;&#xA;Hopefully, you found this section helpful. If you have any comments or feedback, please let me know in the comments section or on my social handles. Thank you for reading. Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>Golang Web: URL Parsing</title>
      <link>https://www.meetgor.com/posts/golang-web-url-parsing</link>
      <description>Introduction We have done around 32 posts on the fundamental concepts in golang, With that basic foundation, I&#39;d like to start with the new section of this seri</description>
      <pubDate>Tue, 05 Sep 2023 00:00:00 UTC</pubDate>
<content>## Introduction&#xA;&#xA;We have done around 32 posts on the fundamental concepts in golang. With that basic foundation, I&#39;d like to start a new section of this series, which will be a major one: `web-development`. This section will have nearly 40-50 posts covering the fundamental concepts for web development like APIs, database integrations, authentication and authorization, web applications, static sites, etc.&#xA;&#xA;## What is a URL?&#xA;&#xA;A URL is a Uniform Resource Locator. It is a string of characters that identifies a resource on the Internet. URLs are the building blocks of the web, allowing us to access websites, documents, and data with just a click. URLs are all over the place; if we want to build a strong foundation in web development, it&#39;s quite important to understand what URLs actually mean and what they can store.&#xA;&#xA;A URL looks something like this:&#xA;&#xA;```&#xA;[scheme:][//[userinfo@]host][/]path[?query][#fragment]&#xA;```&#xA;&#xA;Not all URLs look like this; the majority of URLs that the common user sees are simply the ones with the `scheme`, `host`, and `path`. However, the other components are equally important and are vital in exchanging information over the network.&#xA;&#xA;- The `scheme` is the protocol used for accessing the resource, like `http`, `https`, `ftp`, etc.&#xA;- The `userinfo` is the username and password used to access the resource.&#xA;- The `host` is the domain name of the resource.&#xA;- The `path` is the path or folder to the resource.&#xA;- The `query` is the query string of the resource. It is usually a set of key-value pairs passed as parameters to access resources.&#xA;- The `fragment` is used as a reference within the resource.&#xA;&#xA;We will see the use cases of most of them throughout this series; for example, the `userinfo` is commonly used in accessing databases over the internet/cloud. 
The query parameters will be used in making dynamic API calls, etc.&#xA;&#xA;## Basic URL Parsing&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;net/url&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;// simple url&#xA;&#x9;urlString := &#34;http://www.google.com&#34;&#xA;&#x9;parsedUrl, err := url.Parse(urlString)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;&#x9;fmt.Println(parsedUrl)&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run main.go&#xA;&#xA;http://www.google.com&#xA;```&#xA;&#xA;So, what is getting parsed here? We gave the URL as a string and we get the URL back; the only difference is that instead of the URL being a string, it is now a structure of components. For instance, we can get the protocol, the host name, the port, etc. from the URL.&#xA;&#xA;```&#xA;fmt.Printf(&#34;%T\n&#34;, parsedUrl)&#xA;// *url.URL&#xA;```&#xA;&#xA;The `parsedUrl` is a pointer to a [url.URL](https://pkg.go.dev/net/url#URL) structure. The structure `url.URL` has a lot of components to it like `Scheme`, `Host`, `User`, `Path`, `RawQuery`, etc. We will dive into each of these soon.&#xA;&#xA;We could get those specific components ourselves, but that would be a bit tedious and might even be prone to bugs.&#xA;&#xA;Let&#39;s try to get those components from the URL without the `net/url` package, with just string manipulation.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;strings&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;urlString := &#34;http://www.google.com&#34;&#xA;&#x9;protocol := strings.Split(urlString, &#34;:&#34;)[0]&#xA;&#x9;hostName := strings.Split(urlString, &#34;//&#34;)[1]&#xA;&#x9;fmt.Println(protocol)&#xA;&#x9;fmt.Println(hostName)&#xA;}&#xA;```&#xA;&#xA;This might work for a simple URL, but what if we have more complex URLs which have paths, query parameters, fragments, username, port, etc.? This could get messy quickly if we tried to extract the parts of the URL ourselves. 
So, golang has a package called [net/url](https://pkg.go.dev/net/url) explicitly for parsing URLs.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;strings&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;urlString := []string{&#34;http://www.google.com&#34;,&#xA;&#x9;&#x9;&#34;http://www.google.com/about/&#34;,&#xA;&#x9;&#x9;&#34;http://www.google.com/about?q=hello&#34;,&#xA;&#x9;&#x9;&#34;http://www.google.com:8080/about?q=hello&#34;,&#xA;&#x9;&#x9;&#34;http://user:password@example.com:8080/path/to/resource?query=value#fragment&#34;,&#xA;&#x9;}&#xA;&#x9;for _, url := range urlString {&#xA;&#x9;&#x9;hostStr := strings.Split(url, &#34;//&#34;)[1]&#xA;&#x9;&#x9;hostName := strings.Split(hostStr, &#34;/&#34;)[0]&#xA;&#x9;&#x9;fmt.Println(hostName)&#xA;&#x9;}&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run main.go&#xA;&#xA;www.google.com&#xA;www.google.com&#xA;www.google.com&#xA;www.google.com:8080&#xA;user:password@example.com:8080&#xA;```&#xA;&#xA;The above code might work for most URLs, but if we have a more complex URL, like one with a `port` or `user`, it doesn&#39;t give exactly what we want. In the above example, we have created a list of URLs as strings and simply iterated over each `url` in the `urlString` slice. Thereafter, we split the `url` on `//` so we get `http:` and `www.google.com`; if we want the host/domain name, we could simply get the `1` index in the slice, since the [strings.Split](https://pkg.go.dev/strings#Split) method returns a slice after splitting the string with the provided separator. The `hostName` could be fetched from the `1` index. However, for the 2nd element in the list, we have `http://www.google.com/about/`, which would return `www.google.com/about/` as the hostname; that is not ideal, so we again have to split this string with `/` and grab the first part, i.e. 
0th index.&#xA;&#xA;The above code would work for `paths` and `query` parameters but if we had ports, and username, and password, it would not work as expected as evident from the last 2 examples in the list.&#xA;&#xA;So, now we know the downsides of manually parsing the URLs, we can use the [net/url](https://pkg.go.dev/net/url) package to do it for us.&#xA;&#xA;## Parsing DB URLs&#xA;&#xA;Databases have a connection URL or connection string which provides a standard way to connect to a database/database server. The format of the URL is just the `URL` with all the components from the `scheme` to the `path`. The common examples of some database connection URLs might include:&#xA;&#xA;&#xA;```&#xA;# PostgreSQL DB Connection URL/string&#xA;postgresql://username:password@hostname:port/database_name&#xA;&#xA;# MongoDB Connection URL/string&#xA;mongodb://username:password@hostname:port/database_name&#xA;```&#xA;&#xA;The above are examples of the Postgres and MongoDB connection URLs, they have a `protocol` which usually for databases is their short name, the user credentials i.e. `username` and `password`, the `hostname` i.e. 
the server IP address, the `port` on which the database is running, and finally the path as the `database name`.&#xA;&#xA;We can construct a simple snippet in golang to grab all the details from the simple connection URL string with the `net/url` package.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;net/url&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;//postgres db url&#xA;&#x9;dbUrl, err := url.Parse(&#34;postgres://admin:pass1234@localhost:5432/mydb&#34;)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;&#x9;fmt.Println(dbUrl)&#xA;&#x9;fmt.Println(&#34;Scheme/Protocol = &#34;, dbUrl.Scheme)&#xA;&#x9;fmt.Println(&#34;User = &#34;, dbUrl.User)&#xA;&#x9;//fmt.Println(&#34;User = &#34;, dbUrl.User.String())&#xA;&#x9;fmt.Println(&#34;Username = &#34;, dbUrl.User.Username())&#xA;&#x9;password, _ := dbUrl.User.Password()&#xA;&#x9;fmt.Println(&#34;Password = &#34;, password)&#xA;&#x9;fmt.Println(&#34;Host = &#34;, dbUrl.Host)&#xA;&#x9;fmt.Println(&#34;HostName = &#34;, dbUrl.Hostname())&#xA;&#x9;fmt.Println(&#34;Port = &#34;, dbUrl.Port())&#xA;&#x9;fmt.Println(&#34;DB Name = &#34;, dbUrl.Path)&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run main.go&#xA;&#xA;postgres://admin:pass1234@localhost:5432/mydb&#xA;Scheme/Protocol =  postgres&#xA;User =  admin:pass1234&#xA;Username =  admin&#xA;Password =  pass1234&#xA;Host =  localhost:5432&#xA;HostName =  localhost&#xA;Port =  5432&#xA;DB Name =  /mydb&#xA;```&#xA;&#xA;In the above code, we have given the string `postgres://admin:pass1234@localhost:5432/mydb`, and we have parsed the URL using the `net/url` package. The result is we have a `dbUrl` object which has all the components that can be accessed as either fields or methods. 
Let&#39;s break down each field/method we used in the above example:&#xA;&#xA;- The `Scheme` is simply a string representing the protocol of the resource(URL).&#xA;- The `User` is the [UserInfo](https://pkg.go.dev/net/url#Userinfo) object having immutable username and password fields.&#xA;- The `Username` is the method on [UserInfo](https://pkg.go.dev/net/url#Userinfo.Username) that returns the string representing the username of the URL.&#xA;- The `Password` is the method on [UserInfo](https://pkg.go.dev/net/url#Userinfo.Password) that returns the string representing the password of the URL.&#xA;- The `Host` is the field on `URL` as a string representing the host:port of the URL.&#xA;- The `Hostname` is the method on [URL](https://pkg.go.dev/net/url#URL.Hostname) that returns the string representing the hostname of the URL.&#xA;- The `Port` is the method on [URL](https://pkg.go.dev/net/url#URL.Port) that returns the string representing the port of the URL.&#xA;- The `Path` is the field as the string representing the path of the URL.&#xA;&#xA;So, this is how we can get almost every detail like `db protocol`, `username`, `password`, `hostname`, `port`, and the `database name` from the database connection string URL.&#xA;&#xA;&#xA;## Parsing Query Parameters&#xA;&#xA;We can even get the query parameters of a URL using the [Query](https://pkg.go.dev/net/url#URL.Query) method on the `URL` object. The `Query` method returns a `map[string][]string` which is to say a map with the key as `string` and the value as a `[]string` slice of string. 
For example, if the URL is something like `https://google.com/?q=hello`, the `Query` method will return `map[q:[hello]]` which means the key is `q` and the value is a list of strings of which the only element is `hello`.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;net/url&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;// a complex url with query params&#xA;&#x9;urlStr := &#34;http://www.google.com/?q=hello+world&amp;lang=en&amp;q=gopher&#34;&#xA;&#x9;parsedUrl, err := url.Parse(urlStr)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;&#x9;fmt.Println(parsedUrl)&#xA;&#x9;fmt.Println(parsedUrl.Query())&#xA;&#x9;for k, v := range parsedUrl.Query() {&#xA;&#x9;&#x9;fmt.Println(&#34;KEY:&#34;, k)&#xA;&#x9;&#x9;for _, vv := range v {&#xA;&#x9;&#x9;&#x9;fmt.Println(vv)&#xA;&#x9;&#x9;}&#xA;&#x9;}&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run main.go&#xA;&#xA;http://www.google.com/?q=hello+world&amp;lang=en&amp;q=gopher&#xA;map[lang:[en] q:[hello world gopher]]&#xA;KEY: q&#xA;hello world&#xA;gopher&#xA;KEY: lang&#xA;en&#xA;```&#xA;&#xA;We have taken a bit of a complex example that might cover many use cases of the `Query` method. We have a URL as `http://www.google.com/?q=hello+world&amp;lang=en&amp;q=gopher`, and the `Query` method returns `map[lang:[en] q:[hello world gopher]]` which means the `q` key has a slice of strings with the elements `hello world` and `gopher` and the `lang` key has a value of `en`. Here, the first parameter, `q=hello+world`, is basically `hello world` or `hello%20world`, which is to say the space is escaped in the URL. We can have multiple values for the same key, which is evident as we have added `q=gopher` at the end of the `URL`; the key `q` has two elements in the slice, `hello world` and `gopher`. The `lang=en` is simply a key as `lang` with the only element as `en` in the slice. 
We use `&amp;` to separate different query parameters in the URL.&#xA;&#xA;### Checking Values in Query Parameters&#xA;&#xA;We can even check the values in the query parameters without requiring the construction of for loops to find a particular value in a key. The [Values](https://pkg.go.dev/net/url#Values) is the map type returned by the `Query` method. It has a few handy methods like:&#xA;- [Has](https://pkg.go.dev/net/url#Values.Has) to check if the key exists in the map (parameter as key `string` and returns a `bool`).&#xA;- [Get](https://pkg.go.dev/net/url#Values.Get) to fetch the first value of the given key as a string or if not present then returns an empty string (parameter as key `string` and returns `string`).&#xA;- [Add](https://pkg.go.dev/net/url#Values.Add) method is used to append the value for a given key (parameter as key `string` and value to be added as `string`).&#xA;- [Set](https://pkg.go.dev/net/url#Values.Set) method is used to replace the value for a given key if it already exists (parameter as key `string` and value as `string`).&#xA;- [Del](https://pkg.go.dev/net/url#Values.Del) method is used to delete the values for a given key (parameter as key `string`).&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;net/url&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;// a complex url with query params&#xA;&#x9;urlStr := &#34;http://www.google.com/?q=hello+world&amp;lang=en&amp;q=gopher&#34;&#xA;&#x9;parsedUrl, err := url.Parse(urlStr)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;&#x9;fmt.Println(parsedUrl)&#xA;&#x9;fmt.Println(parsedUrl.Query())&#xA;&#xA;&#x9;queryParams := parsedUrl.Query()&#xA;&#xA;&#x9;fmt.Println(queryParams.Get(&#34;q&#34;))&#xA;&#xA;&#x9;fmt.Println(queryParams.Has(&#34;q&#34;))&#xA;&#xA;&#x9;if queryParams.Has(&#34;lang&#34;) {&#xA;&#x9;&#x9;fmt.Println(queryParams.Get(&#34;lang&#34;))&#xA;&#x9;}&#xA;&#xA;&#x9;queryParams.Add(&#34;q&#34;, 
&#34;ferris&#34;)&#xA;&#x9;fmt.Println(queryParams)&#xA;&#xA;&#x9;queryParams.Set(&#34;q&#34;, &#34;books&#34;)&#xA;&#x9;fmt.Println(queryParams)&#xA;&#xA;&#x9;queryParams.Del(&#34;q&#34;)&#xA;&#x9;fmt.Println(queryParams)&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run main.go&#xA;&#xA;http://www.google.com/?q=hello+world&amp;lang=en&amp;q=gopher&#xA;hello world&#xA;true&#xA;en&#xA;map[lang:[en] q:[hello world gopher ferris]]&#xA;map[lang:[en] q:[books]]&#xA;map[lang:[en]]&#xA;```&#xA;&#xA;The above code example demonstrates almost all the methods available on the `Values` type. The `Get` method is used to fetch the first value for a given key, so we pass the key as a `string` to the method and it returns a `string`. We checked for `q` as the key and it returned the first element in the `queryParams` for the key `q`, which is `hello world` from the list `[hello world, gopher]`. The `Has` method takes the key as a `string` parameter and returns whether the key exists in the `queryParams` as a bool. The `Add` method is used to append a value for a particular key; we added the value `ferris` to the key `q`, hence it was appended and the list `queryParams[q]` became `[hello world, gopher, ferris]`. The `Set` method is used to override the existing key with a particular value; here we have set the value `books` for the key `q` and hence the list `queryParams[q]` becomes `[books]`. We can use the `Del` method to remove a key from the `queryParams`, so after we delete `q` from `queryParams`, the `queryParams` simply has no key `q` in it.&#xA;&#xA;### Parsing Query Parameters to String&#xA;&#xA;Now that you have manipulated the query parameters, let&#39;s say you want to construct back the string representation of the query parameters or the URL for it. The [Encode](https://pkg.go.dev/net/url#Values.Encode) method is used to grab the `queryParams` i.e. 
`Values` object and convert it into the `string` representation of the encoded URL.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;net/url&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;// a complex url with query params&#xA;&#x9;urlStr := &#34;http://www.google.com/?q=hello+world&amp;lang=en&amp;q=gopher&#34;&#xA;&#x9;parsedUrl, err := url.Parse(urlStr)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;&#x9;queryParams := parsedUrl.Query()&#xA;&#x9;queryParams.Add(&#34;name&#34;, &#34;ferris&#34;)&#xA;&#xA;&#x9;q := queryParams.Encode()&#xA;&#x9;fmt.Println(q)&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run main.go&#xA;&#xA;lang=en&amp;name=ferris&amp;q=hello+world&amp;q=gopher&#xA;```&#xA;&#xA;So, we can see the `Encode` method has given us the query parameters in the form of a URL-encoded string (note that `Encode` emits the keys in sorted order). We first grab the query parameters from the `parsedUrl`, which is a `URL` object, via the `Query` method; we then `Add` the key `name` with a value of `ferris` to the `queryParams`. This is then used to `Encode` the object back to a string representation. This could be useful to construct a query parameter string for requesting other websites/APIs.&#xA;&#xA;## Parsing URL object back to String&#xA;&#xA;We can even get the `URL` object back to a string representation using the [String](https://pkg.go.dev/net/url#URL.String) method on the `URL` object. 
The `String` method returns a `string` representation of the URL object.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;net/url&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;urlStr := &#34;http://www.google.com/?q=hello+world&amp;lang=en&amp;q=gopher&#34;&#xA;&#x9;fmt.Println(&#34;URL:&#34;, urlStr)&#xA;&#x9;parsedUrl, err := url.Parse(urlStr)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;&#x9;queryParams := parsedUrl.Query()&#xA;&#x9;queryParams.Add(&#34;name&#34;, &#34;ferris&#34;)&#xA;&#xA;&#x9;q := queryParams.Encode()&#xA;&#x9;fmt.Println(q)&#xA;&#x9;parsedUrl.RawQuery = q&#xA;&#x9;newUrl := parsedUrl.String()&#xA;&#x9;fmt.Println(&#34;New URL:&#34;, newUrl)&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run main.go&#xA;&#xA;URL: http://www.google.com/?q=hello+world&amp;lang=en&amp;q=gopher&#xA;lang=en&amp;name=ferris&amp;q=hello+world&amp;q=gopher&#xA;New URL: http://www.google.com/?lang=en&amp;name=ferris&amp;q=hello+world&amp;q=gopher&#xA;```&#xA;&#xA;In the example above, we parse a URL string into a `URL` object as `parsedUrl`, then we `Add` the key `name` with a value of `ferris` to the `queryParams`. We then `Encode` those query parameters back into a string representation. But this alone won&#39;t change the `parsedUrl` object, and we want to change the entire `URL` object. For that, we overwrite the `RawQuery` field of the `URL` object with the encoded query string `q`. The `String` method then returns a `string` representation of the whole `URL` object.&#xA;&#xA;## Parsing Fragments&#xA;&#xA;The fragment in a URL is usually present in a static website like `#about`, `#contact`, `#blog`, etc. The `Fragment` is a string that is usually a reference to a specific section or anchor point within a web page or resource. 
When a URL with a fragment is accessed, the web browser or user agent will scroll the page to display the section identified by the fragment.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;net/url&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;// url with fragments&#xA;&#x9;urlStr := &#34;https://pkg.go.dev/math/rand#Norm Float64&#34;&#xA;&#x9;parsedUrl, err := url.Parse(urlStr)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;&#x9;fmt.Println(parsedUrl)&#xA;&#x9;fmt.Println(parsedUrl.Fragment)&#xA;&#x9;fmt.Println(parsedUrl.RawFragment)&#xA;&#x9;fmt.Println(parsedUrl.EscapedFragment())&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run main.go&#xA;&#xA;https://pkg.go.dev/math/rand#Norm Float64&#xA;&#xA;Norm Float64&#xA;Norm Float64&#xA;Norm%20Float64&#xA;```&#xA;&#xA;The above code is used to fetch the `#Norm Float64` fragment from the URL `https://pkg.go.dev/math/rand#Norm Float64`. We can use the [Fragment](https://pkg.go.dev/net/url#URL) field in the `URL` object to get the fragment text. There is a [RawFragment](https://pkg.go.dev/net/url#URL) field that keeps the fragment text as it is, without escaping any special characters in the URL. The [EscapedFragment](https://pkg.go.dev/net/url#URL.EscapedFragment) method is used to get the fragment text with the special characters escaped (here the space becomes `%20`).&#xA;&#xA;That&#39;s it from the 32nd part of the series, all the source code for the examples is linked on GitHub in the [100 days of Golang](https://github.com/Mr-Destructive/100-days-of-golang/tree/main/web/url-parsing) repository.&#xA;&#xA;[100-days-of-golang](https://github.com/Mr-Destructive/100-days-of-golang)&#xA;&#xA;## Conclusion&#xA;&#xA;In this first post of the web development section, we covered the fundamentals of URL parsing and got a bit introduced to the `net` package, which will be heavily used for most of the core language&#39;s features for working with the web. 
We covered the concepts for parsing URLs, getting components of URLs from the parsed object, Database connection URL resolving, parsing query parameters, and some other URL-related concepts.&#xA;&#xA;Hopefully, you found this section helpful. If you have any comments or feedback, please let me know in the comments section or on my social handles. Thank you for reading. Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>Golang: Channels</title>
      <link>https://www.meetgor.com/posts/golang-channels</link>
      <description>Introduction In this part of the series, we will be continuing with the concurrency features of golang with channels. In the last post, we covered the fundament</description>
      <pubDate>Mon, 28 Aug 2023 00:00:00 UTC</pubDate>
      <content>## Introduction&#xA;&#xA;In this part of the series, we will be continuing with the concurrency features of golang with channels. In the last post, we covered the fundamentals of go routines and wait groups. By leveraging those understood concepts, we will explore channels to communicate the data between various go routines.&#xA;&#xA;## What are Channels&#xA;&#xA;A golang Channel is like a pipe that lets goroutines communicate. It lets you pass values from one goroutine to another. Channels are typed i.e. you declare them with `chan` keyword followed by the type to be sent and received (e.g. `chan int`). The `chan` type specifies the type of values that will be passed through the channel. We will explore the detailed technicalities soon. Right now, we need to just focus on what problem is channels solving.&#xA;&#xA;In the previous article, we worked with go routines and wait groups which allowed us to process tasks asynchronously. However, if we wanted to access the data in between the processes, we would have to tweak the core functionality or might require global variables, however, in real-world applications, the environment is quite constrained. We would require a way to communicate data between those go routines. 
Channels are made just for that (and more), but in essence, they solve that exact problem.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;ch := make(chan string)&#xA;&#x9;defer close(ch)&#xA;&#xA;&#x9;go func() {&#xA;&#x9;&#x9;message := &#34;Hello, Gophers!&#34;&#xA;&#x9;&#x9;ch &lt;- message&#xA;&#x9;}()&#xA;&#xA;&#x9;msg := &lt;-ch&#xA;&#x9;fmt.Println(msg)&#xA;}&#xA;```&#xA;&#xA;In the above code example, the channel `ch` is created of type `string` and a message is sent to the channel inside a go routine as `ch &lt;- message`, and the message is retrieved from the channel as `&lt;-ch`.&#xA;&#xA;```bash&#xA;$ go run main.go&#xA;&#xA;Hello, Gophers!&#xA;```&#xA;&#xA;Channels have two key properties:&#xA;&#xA;- Send and receive operations block until both sides are ready (i.e. there is a sender and a receiver for a channel). This allows goroutines to synchronize without explicit locks or condition variables.&#xA;- Channels are typed, so only values of the specified type can be sent and received. This provides type safety.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;    &#34;fmt&#34;&#xA;)&#xA;&#xA;func main() {&#xA;    ch := make(chan string)&#xA;    go func() {&#xA;        message := &#34;Hello, Gophers!&#34;&#xA;        ch &lt;- message&#xA;    }()&#xA;    fmt.Println(&lt;-ch)&#xA;    fmt.Println(&lt;-ch)&#xA;}&#xA;```&#xA;&#xA;In the same example, if we tried to add a second receiver i.e. `&lt;-ch`, it would result in a deadlock i.e. block forever, since there is no second message sent into the channel. Only one value i.e. 
&#34;Hello, Gophers!&#34; was sent as a `message` into the channel, and that was received by the first receiver as `&lt;-ch`; for the next receiver, there is no sender.&#xA;&#xA;```bash&#xA;$ go run main.go&#xA;&#xA;Hello, Gophers!&#xA;fatal error: all goroutines are asleep - deadlock!&#xA;&#xA;goroutine 1 [chan receive]:&#xA;main.main()&#xA;        /home/meet/code/100-days-of-golang/scripts/channels/main.go:16 +0x125&#xA;exit status 2&#xA;```&#xA;&#xA;To sum up the deadlock concept in unbuffered channels:&#xA;&#xA;- The main goroutine is waiting at the second receive operation for a second message that will never arrive (was never sent).&#xA;- The anonymous goroutine has already sent its only message and exited, so there is no goroutine left that could ever send a second value.&#xA;&#xA;## Buffered Channels&#xA;&#xA;In Go, you can create both buffered and unbuffered channels. An unbuffered channel has no capacity to hold data, it relies on immediate communication between the sender and receiver. However, you can create a buffered channel by specifying a capacity when using the make function, like `ch := make(chan int, 5)` will create a channel with a capacity of `5` i.e. it can store a certain number of values without an immediate receiver. A buffered channel allows you to send multiple values to the channel without an immediate receiver, up to its capacity. 
After that, it will block until the receiver retrieves values.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;sync&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;buffchan := make(chan int, 2)&#xA;&#xA;&#x9;wg := sync.WaitGroup{}&#xA;&#x9;wg.Add(2)&#xA;&#xA;&#x9;for i := 1; i &lt;= 2; i++ {&#xA;&#x9;&#x9;go func(n int) {&#xA;&#x9;&#x9;&#x9;buffchan &lt;- n&#xA;&#x9;&#x9;&#x9;wg.Done()&#xA;&#x9;&#x9;}(i)&#xA;&#x9;}&#xA;&#xA;&#x9;wg.Wait()&#xA;&#x9;close(buffchan)&#xA;&#xA;&#x9;for c := range buffchan {&#xA;&#x9;&#x9;fmt.Println(c)&#xA;&#x9;}&#xA;}&#xA;```&#xA;&#xA;```bash&#xA;$ go run channels.go&#xA;1&#xA;2&#xA;&#xA;$ go run channels.go&#xA;2&#xA;1&#xA;```&#xA;&#xA;In this code snippet, we create a buffered channel `buffchan` with a capacity of 2. We send two values to the channel, and even though there&#39;s no immediate receiver, the code doesn&#39;t block. If we were to send a third value, it would lead to a deadlock because there is no receiver to free up space in the buffer.&#xA;&#xA;## Closing Channels&#xA;&#xA;Closing a channel is important to signal to the receiver that no more data will be sent. It&#39;s achieved using the built-in close function. After closing a channel, any further attempts to send data will result in a panic. On the receiving side, if a channel is closed and there&#39;s no more data to receive, the receive operation will yield the zero value for the channel&#39;s type.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;ch := make(chan int)&#xA;&#xA;&#x9;go func() {&#xA;&#x9;&#x9;for i := 1; i &lt;= 5; i++ {&#xA;&#x9;&#x9;&#x9;ch &lt;- i&#xA;&#x9;&#x9;}&#xA;&#x9;&#x9;close(ch)&#xA;&#x9;}()&#xA;&#xA;&#x9;for num := range ch {&#xA;&#x9;&#x9;fmt.Println(&#34;Received:&#34;, num)&#xA;&#x9;}&#xA;}&#xA;```&#xA;&#xA;In this example, a goroutine sends numbers to the channel and then closes it. The main routine receives these numbers using a for-range loop. 
When the channel is closed and all values are received, the loop will terminate automatically. Keep in mind that only a sender should close the channel, to indicate to the receiver that it should not wait for further values from the channel.&#xA;&#xA;## Select Statement for Channels&#xA;&#xA;The select statement is used for handling multiple channels. There are a few operations that can be checked with a case statement in the select block.&#xA;&#xA;|Case     |Channel Operation|&#xA;|---------|-----------------|&#xA;|Sending  | chan &lt;- value   |&#xA;|Receiving|    &lt;- chan      |&#xA;&#xA;So, we can either check if there is a sender or a receiver available for a channel with a case statement just like a switch statement.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;sync&#34;&#xA;)&#xA;&#xA;func sendMessage(ch chan string, message string, wg *sync.WaitGroup) {&#xA;&#x9;defer wg.Done()&#xA;&#x9;ch &lt;- message&#xA;}&#xA;&#xA;func main() {&#xA;&#x9;var wg sync.WaitGroup&#xA;&#xA;&#x9;ch1 := make(chan string, 2)&#xA;&#x9;ch2 := make(chan string, 2)&#xA;&#x9;wg.Add(2)&#xA;&#xA;&#x9;go sendMessage(ch1, &#34;Hello, Gophers!&#34;, &amp;wg)&#xA;&#x9;go sendMessage(ch2, &#34;Hello, Hamsters!&#34;, &amp;wg)&#xA;&#xA;&#x9;ch1 &lt;- &#34;new message to c1&#34;&#xA;&#x9;ch2 &lt;- &#34;new message to c2&#34;&#xA;&#xA;&#x9;go func() {&#xA;&#x9;&#x9;wg.Wait()&#xA;&#x9;&#x9;close(ch1)&#xA;&#x9;&#x9;close(ch2)&#xA;&#x9;}()&#xA;&#xA;&#x9;select {&#xA;&#x9;case &lt;-ch1:&#xA;&#x9;&#x9;fmt.Println(&#34;Received from ch1&#34;)&#xA;&#x9;case ch1 &lt;- &#34;new message to c1&#34;:&#xA;&#x9;&#x9;fmt.Println(&#34;Sent to ch1&#34;)&#xA;&#x9;case &lt;-ch2:&#xA;&#x9;&#x9;fmt.Println(&#34;Received from ch2&#34;)&#xA;&#x9;case ch2 &lt;- &#34;new message to c2&#34;:&#xA;&#x9;&#x9;fmt.Println(&#34;Sent to 
ch2&#34;)&#xA;&#x9;}&#xA;}&#xA;```&#xA;&#xA;```bash&#xA;$ go run channels.go&#xA;Sent to ch1&#xA;&#xA;$ go run channels.go&#xA;Received from ch1&#xA;&#xA;$ go run channels.go&#xA;Received from ch2&#xA;&#xA;$ go run channels.go&#xA;Sent to ch2&#xA;&#xA;$ go run channels.go&#xA;Received from ch1&#xA;```&#xA;The order of the operations is not guaranteed; only the operation that is ready first, based on the scheduling of the go routines, gets logged.&#xA;&#xA;In the simple example above, we have created two channels `ch1` and `ch2`, and sent two messages to them using two go routines. The main routine then waits for the messages to be received from the channels. We close the channels when the sending is done and simply check for the 4 cases i.e. the send on channel 1, the receive on channel 1, and similarly for channel 2. So, that is how we can use the select statement to check which operation is being performed on the different channels, and this forms the basis for the communication between channels.&#xA;&#xA;Below is an example to test which URL or web server responds first to the request.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;net/http&#34;&#xA;&#x9;&#34;sync&#34;&#xA;)&#xA;&#xA;func pingGoogle(c chan string, wg *sync.WaitGroup) {&#xA;&#x9;defer wg.Done()&#xA;&#x9;res, _ := http.Get(&#34;http://google.com&#34;)&#xA;&#x9;c &lt;- res.Status&#xA;}&#xA;&#xA;func pingDuckDuckGo(c chan string, wg *sync.WaitGroup) {&#xA;&#x9;defer wg.Done()&#xA;&#x9;res, _ := http.Get(&#34;https://duckduckgo.com&#34;)&#xA;&#x9;c &lt;- res.Status&#xA;}&#xA;&#xA;func pingBraveSearch(c chan string, wg *sync.WaitGroup) {&#xA;&#x9;defer wg.Done()&#xA;&#x9;res, _ := http.Get(&#34;https://search.brave.com&#34;)&#xA;&#x9;c &lt;- res.Status&#xA;}&#xA;&#xA;func main() {&#xA;&#x9;gogChan := make(chan string)&#xA;&#x9;ddgChan := make(chan string)&#xA;&#x9;braveChan := make(chan string)&#xA;&#xA;&#x9;var wg 
sync.WaitGroup&#xA;&#x9;wg.Add(3)&#xA;&#xA;&#x9;go pingDuckDuckGo(ddgChan, &amp;wg)&#xA;&#x9;go pingGoogle(gogChan, &amp;wg)&#xA;&#x9;go pingBraveSearch(braveChan, &amp;wg)&#xA;&#xA;&#x9;openChannels := 3&#xA;&#xA;&#x9;go func() {&#xA;&#x9;&#x9;wg.Wait()&#xA;&#x9;&#x9;close(gogChan)&#xA;&#x9;&#x9;close(ddgChan)&#xA;&#x9;&#x9;close(braveChan)&#xA;&#x9;}()&#xA;&#xA;&#x9;for openChannels &gt; 0 {&#xA;&#x9;&#x9;select {&#xA;&#x9;&#x9;case msg1, ok := &lt;-gogChan:&#xA;&#x9;&#x9;&#x9;if !ok {&#xA;&#x9;&#x9;&#x9;&#x9;openChannels--&#xA;&#x9;&#x9;&#x9;} else {&#xA;&#x9;&#x9;&#x9;&#x9;fmt.Println(&#34;Google responded:&#34;, msg1)&#xA;&#x9;&#x9;&#x9;}&#xA;&#x9;&#x9;case msg2, ok := &lt;-ddgChan:&#xA;&#x9;&#x9;&#x9;if !ok {&#xA;&#x9;&#x9;&#x9;&#x9;openChannels--&#xA;&#x9;&#x9;&#x9;} else {&#xA;&#x9;&#x9;&#x9;&#x9;fmt.Println(&#34;DuckDuckGo responded:&#34;, msg2)&#xA;&#x9;&#x9;&#x9;}&#xA;&#x9;&#x9;case msg3, ok := &lt;-braveChan:&#xA;&#x9;&#x9;&#x9;if !ok {&#xA;&#x9;&#x9;&#x9;&#x9;openChannels--&#xA;&#x9;&#x9;&#x9;} else {&#xA;&#x9;&#x9;&#x9;&#x9;fmt.Println(&#34;BraveSearch responded:&#34;, msg3)&#xA;&#x9;&#x9;&#x9;}&#xA;&#x9;&#x9;}&#xA;&#x9;}&#xA;}&#xA;```&#xA;&#xA;The above example shows how to use a select statement to wait for multiple channels to be ready before proceeding with the next operation. With this example, we can get the channel that sent the response first i.e. which search engine in this case responded to the ping first. 
It&#39;s a bit of an exaggerated example, but it helps in understanding the concept of the `select` statement.&#xA;&#xA;```bash&#xA;$ go run select-chan.go&#xA;&#xA;DuckDuckGo responded: 200 OK&#xA;Google responded: 200 OK&#xA;BraveSearch responded: 200 OK&#xA;&#xA;&#xA;$ go run select-chan.go&#xA;&#xA;DuckDuckGo responded: 200 OK&#xA;BraveSearch responded: 200 OK&#xA;Google responded: 200 OK&#xA;```&#xA;&#xA;Let&#39;s break each of the steps down:&#xA;&#xA;- `pingDuckDuckGo(ddgChan, &amp;wg)` is a function that sends data to the channel `ddgChan`.&#xA;- `pingGoogle(gogChan, &amp;wg)` is a function that sends data to the channel `gogChan`.&#xA;- `pingBraveSearch(braveChan, &amp;wg)` is a function that sends data to the channel `braveChan`.&#xA;- We wait for each go routine to finish using `wg.Wait()` in a separate goroutine and then close the channels.&#xA;- Closing the channels `gogChan`, `ddgChan`, and `braveChan` lets the receives i.e. `&lt;-chan` in the select case block detect that no more data is coming.&#xA;- The select case will pick the first channel that is ready to receive data. Hence we get the output based on the order in which the channels responded.&#xA;- We use the `!ok` condition to check if a channel is closed; we keep an `openChannels` variable to track the number of open channels, and once no channels remain open, we simply fall out of the loop.&#xA;&#xA;## Directional Channels&#xA;&#xA;Channels can also be designated as &#34;send-only&#34; or &#34;receive-only&#34; to enforce certain communication patterns and enhance safety. 
This is done by specifying the direction when defining the channel type.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;sync&#34;&#xA;)&#xA;&#xA;func receiver(ch &lt;-chan int, wg *sync.WaitGroup) {&#xA;&#x9;for i := range ch {&#xA;&#x9;&#x9;fmt.Println(&#34;Received:&#34;, i)&#xA;&#x9;}&#xA;&#x9;wg.Done()&#xA;}&#xA;&#xA;func sender(ch chan&lt;- int, wg *sync.WaitGroup) {&#xA;&#x9;for i := 0; i &lt; 10; i++ {&#xA;&#x9;&#x9;fmt.Println(&#34;Sent:&#34;, i)&#xA;&#x9;&#x9;ch &lt;- i&#xA;&#x9;}&#xA;&#x9;close(ch)&#xA;&#x9;wg.Done()&#xA;}&#xA;&#xA;func main() {&#xA;&#x9;ch := make(chan int)&#xA;&#x9;wg := sync.WaitGroup{}&#xA;&#x9;wg.Add(2)&#xA;&#x9;go receiver(ch, &amp;wg)&#xA;&#x9;go sender(ch, &amp;wg)&#xA;&#x9;wg.Wait()&#xA;}&#xA;```&#xA;&#xA;In the above example, we have created a channel `ch` and sent 10 values over it using two goroutines. The main routine waits for both goroutines to finish, and the `sender` closes the channel once all values are sent. The `sender` sends values `0` through `9`, and the `receiver` prints whenever a value is received. In the `sender` function, we only accept the channel to send data as `chan&lt;-`, and in the `receiver` function, the channel parameter is set to only read from the channel as `&lt;-chan`.&#xA;&#xA;```bash&#xA;$ go run send-recv.go&#xA;&#xA;Sent: 0&#xA;Received: 0&#xA;Sent: 1&#xA;Sent: 2&#xA;Received: 1&#xA;Received: 2&#xA;Sent: 3&#xA;Sent: 4&#xA;Received: 3&#xA;Received: 4&#xA;Sent: 5&#xA;Sent: 6&#xA;Received: 5&#xA;Received: 6&#xA;Sent: 7&#xA;Sent: 8&#xA;Received: 7&#xA;Received: 8&#xA;Sent: 9&#xA;Received: 9&#xA;```&#xA;&#xA;When we define a parameter as a write-only channel, it means that the function can only send data into that channel. It cannot read data from it (though it can still close it, as our `sender` does). 
This pattern is helpful when you want to make sure that the function is solely responsible for producing data and not consuming or interacting with the channel&#39;s current state.&#xA;&#xA;When we define a parameter as a read-only channel, it means that the function can only receive data from that channel. It cannot close the channel or send data into it. This pattern is useful when we want to ensure that the function only consumes data from the channel without modifying it or interfering with the sender&#39;s logic.&#xA;&#xA;Additionally, the compiler will catch code trying to send on a read-only channel or receive from a write-only one.&#xA;&#xA;## Common Channel Usage Pattern&#xA;&#xA;There are a variety of ways in which channels can be used in Go. In this section, we&#39;ll explore some of the most common patterns for using channels in Go. Some of the most useful and idiomatic channel usage patterns include pipelines, fan-in and fan-out, etc.&#xA;&#xA;### Async Await pattern for Channels&#xA;&#xA;In Go, goroutines and channels enable an elegant async/await style. A goroutine can execute a task asynchronously, while the main thread awaits the result using a channel.&#xA;&#xA;The async-await pattern in Go involves initiating multiple tasks concurrently, each with its own goroutine, and then awaiting their completion before proceeding. 
Channels are used to communicate between these goroutines, allowing them to work independently and provide results to the main routine when ready.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;net/http&#34;&#xA;)&#xA;&#xA;func fetchURL(url string, ch chan&lt;- http.Response) {&#xA;&#x9;go func() {&#xA;&#x9;&#x9;res, err := http.Get(url)&#xA;&#x9;&#x9;if err != nil {&#xA;&#x9;&#x9;&#x9;panic(err)&#xA;&#x9;&#x9;}&#xA;&#x9;&#x9;defer res.Body.Close()&#xA;&#x9;&#x9;ch &lt;- *res&#xA;&#x9;}()&#xA;}&#xA;&#xA;func task(name string) {&#xA;&#x9;fmt.Println(&#34;Task&#34;, name)&#xA;}&#xA;&#xA;func main() {&#xA;&#x9;fmt.Println(&#34;Start&#34;)&#xA;&#xA;&#x9;url := &#34;http://google.com&#34;&#xA;&#xA;&#x9;respCh := make(chan http.Response)&#xA;&#xA;&#x9;fetchURL(url, respCh)&#xA;&#xA;&#x9;task(&#34;A&#34;)&#xA;&#x9;task(&#34;B&#34;)&#xA;&#xA;&#x9;response := &lt;-respCh&#xA;&#x9;fmt.Println(&#34;Response Status:&#34;, response.Status)&#xA;&#xA;&#x9;fmt.Println(&#34;Done&#34;)&#xA;}&#xA;```&#xA;
&#xA;```bash&#xA;$ go run async.go&#xA;Start&#xA;Task A&#xA;Task B&#xA;Response Status: 200 OK&#xA;Done&#xA;```&#xA;
&#xA;In the above example, we have created a function `fetchURL` which takes a URL and a channel as arguments. The channel `respCh` is used to communicate between the goroutines. The function fires up a goroutine that sends a `GET` request to the provided URL and pushes the response into the provided channel. In the main function, we access the `response` by receiving the data from the channel as `&lt;-respCh`. Before doing this, we can do any other task simultaneously, like `task(&#34;A&#34;)` and `task(&#34;B&#34;)`, which just print some string (it could be anything). But this should happen before we pull from the channel; anything after the receive will be blocked i.e. 
will be executed sequentially.&#xA;&#xA;### Pipeline pattern for Channels&#xA;&#xA;The pipeline pattern is used to chain together a sequence of processing stages; each stage consumes input, processes data, and passes the output to the next stage. This type of pattern can be achieved by chaining different channels from one go routine to another.&#xA;&#xA;![Pipeline pattern flow using channels in golang](https://meetgor-cdn.pages.dev/100-days-of-golang/channels-pipelines-pattern.png)&#xA;&#xA;So, in the pipeline pattern using channels in Go, data flows sequentially through multiple stages: Stage 1 reads input and sends to Channel A, Stage 2 receives from Channel A and sends to Channel B, and Stage 3 receives from Channel B and produces the final output.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;sync&#34;&#xA;)&#xA;&#xA;func generate(nums []int, out chan&lt;- int, wg *sync.WaitGroup) {&#xA;&#x9;fmt.Println(&#34;Stage 1&#34;)&#xA;&#x9;for _, n := range nums {&#xA;&#x9;&#x9;fmt.Println(&#34;Number:&#34;, n)&#xA;&#x9;&#x9;out &lt;- n&#xA;&#x9;}&#xA;&#x9;close(out)&#xA;&#x9;wg.Done()&#xA;}&#xA;&#xA;func square(in &lt;-chan int, out chan&lt;- int, wg *sync.WaitGroup) {&#xA;&#x9;fmt.Println(&#34;Stage 2&#34;)&#xA;&#x9;for n := range in {&#xA;&#x9;&#x9;sq := n * n&#xA;&#x9;&#x9;fmt.Println(&#34;Square:&#34;, sq)&#xA;&#x9;&#x9;out &lt;- sq&#xA;&#x9;}&#xA;&#x9;close(out)&#xA;&#x9;wg.Done()&#xA;}&#xA;&#xA;func print(in &lt;-chan int, wg *sync.WaitGroup) {&#xA;&#x9;for n := range in {&#xA;&#x9;&#x9;fmt.Println(n)&#xA;&#x9;}&#xA;&#x9;wg.Done()&#xA;}&#xA;&#xA;func main() {&#xA;&#x9;input := []int{1, 2, 3, 4, 5}&#xA;&#xA;&#x9;var wg sync.WaitGroup&#xA;&#x9;wg.Add(3)&#xA;&#xA;&#x9;stage1 := make(chan int)&#xA;&#x9;stage2 := make(chan int)&#xA;&#xA;&#x9;go generate(input, stage1, &amp;wg)&#xA;&#xA;&#x9;go square(stage1, stage2, &amp;wg)&#xA;&#xA;&#x9;go print(stage2, &amp;wg)&#xA;&#xA;&#x9;wg.Wait()&#xA;}&#xA;```&#xA;
&#xA;In the above 
example, we have created a sequence of processing stages; each stage consumes input, processes data, and passes the output to the next stage. We can consider the functions `generate`, `square`, and `print` as stages `1`, `2`, and `3` respectively.&#xA;&#xA;- The `generate` function takes in the input as a slice of integers, an unbuffered channel, and the WaitGroup reference; it iterates over the numbers in the slice and sends each one to the channel provided in the parameters.&#xA;- The `square` function takes in the `stage1` channel as its input, as well as its own output channel `stage2` (remember, the `stage1` channel carries the numbers sent by the `generate` function).&#xA;- The `square` function then iterates over the numbers received from `stage1` as `in`, squares each one, and sends the result to the `stage2` channel provided as `out`.&#xA;- The `print` function takes in the `stage2` channel as an argument, iterates over the numbers received from it, and prints them.&#xA;&#xA;```bash&#xA;$ go run pipeline.go&#xA;Stage 1&#xA;Number: 1&#xA;Stage 2&#xA;Square: 1&#xA;1&#xA;Number: 2&#xA;Number: 3&#xA;Square: 4&#xA;Square: 9&#xA;Number: 4&#xA;4&#xA;9&#xA;Square: 16&#xA;16&#xA;Number: 5&#xA;Square: 25&#xA;25&#xA;```&#xA;
&#xA;So, we can see the order of execution: all the stages started concurrently; however, each performs its operation only when data arrives from the previous channel. We first print `Number: value` in the `generate` function, then `Square: value` in the `square` function, and finally the bare squared value in the `print` function.&#xA;&#xA;### Fan-In pattern for Channels&#xA;&#xA;The Fan-In pattern is used for combining data from multiple sources into a single stream for unified processing, often using a shared data structure to aggregate the data. 
We can create the fan-in pattern by merging multiple input channels into a single output channel.&#xA;&#xA;![Fan-in pattern flow using channels in golang](https://meetgor-cdn.pages.dev/100-days-of-golang/channels-fan-in-pattern.png)&#xA;&#xA;The fan-in pattern is when multiple input channels (A, B, C) are read concurrently, and their data is merged into a single output channel (M).&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;os&#34;&#xA;&#x9;&#34;sync&#34;&#xA;)&#xA;&#xA;func readFile(file string, ch chan&lt;- string) {&#xA;&#x9;content, err := os.ReadFile(file)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;fmt.Println(&#34;Error reading from&#34;, file, &#34;:&#34;, err)&#xA;&#x9;}&#xA;&#x9;fmt.Println(&#34;Reading from&#34;, file, &#34;as :: &#34;, string(content))&#xA;&#x9;ch &lt;- string(content)&#xA;&#x9;close(ch)&#xA;}&#xA;&#xA;func merge(chs ...&lt;-chan string) string {&#xA;&#x9;var wg sync.WaitGroup&#xA;&#x9;var mu sync.Mutex&#xA;&#x9;out := &#34;&#34;&#xA;&#xA;&#x9;for _, ch := range chs {&#xA;&#x9;&#x9;wg.Add(1)&#xA;&#x9;&#x9;go func(c &lt;-chan string) {&#xA;&#x9;&#x9;&#x9;for s := range c {&#xA;&#x9;&#x9;&#x9;&#x9;mu.Lock() // guard the shared string; concurrent appends would be a data race&#xA;&#x9;&#x9;&#x9;&#x9;out += s&#xA;&#x9;&#x9;&#x9;&#x9;mu.Unlock()&#xA;&#x9;&#x9;&#x9;}&#xA;&#x9;&#x9;&#x9;wg.Done()&#xA;&#x9;&#x9;}(ch)&#xA;&#x9;}&#xA;&#xA;&#x9;wg.Wait()&#xA;&#x9;return out&#xA;}&#xA;&#xA;func main() {&#xA;&#x9;ch1 := make(chan string)&#xA;&#x9;ch2 := make(chan string)&#xA;&#xA;&#x9;go readFile(&#34;data/f1.txt&#34;, ch1)&#xA;&#x9;go readFile(&#34;data/f2.txt&#34;, ch2)&#xA;&#xA;&#x9;merged := merge(ch1, ch2)&#xA;&#xA;&#x9;fmt.Println(merged)&#xA;}&#xA;```&#xA;
&#xA;In the above example, the `readFile` function reads the contents of the file and sends it to the channels `ch1` and `ch2` from different go routines. The `readFile` function takes the channel as a send-only channel; it reads the file and sends the content to the channel as `ch &lt;- string(content)`. 
The `merge` function is variadic, as indicated by `...&lt;-chan`, so it can take `2` (or any `n`) channels. It iterates over the channels and, for each one, reads the contents and appends them into a single string.&#xA;&#xA;```bash&#xA;$ go run fan-in.go&#xA;&#xA;Reading from data/f1.txt as ::  This is from file 1&#xA;Reading from data/f2.txt as ::  This is from file 2&#xA;&#xA;This is from file 1&#xA;This is from file 2&#xA;&#xA;&#xA;$ go run fan-in.go&#xA;Reading from data/f2.txt as ::  This is from file 2&#xA;Reading from data/f1.txt as ::  This is from file 1&#xA;&#xA;This is from file 2&#xA;This is from file 1&#xA;```&#xA;
&#xA;So, this is how the fan-in pattern works: we create multiple channels and combine the results into a single stream of data (in this example, a single string).&#xA;&#xA;### Fan-Out Pattern for Channels&#xA;&#xA;The Fan-Out pattern involves taking data from a single source and distributing it to multiple workers or processing units for parallel or concurrent handling. 
The fan-out design splits an input channel into multiple output channels; it is used to distribute branches of work or data across concurrent processes.&#xA;&#xA;![Fan-Out pattern flow using channels in golang](https://meetgor-cdn.pages.dev/100-days-of-golang/channels-fan-out-pattern.png)&#xA;&#xA;The fan-out pattern is when data from a single input channel (A) is distributed to multiple worker channels (X, Y, Z) for parallel processing.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;os&#34;&#xA;&#x9;&#34;sync&#34;&#xA;)&#xA;&#xA;func readFile(file string, ch chan&lt;- string, wg *sync.WaitGroup) {&#xA;&#x9;defer wg.Done()&#xA;&#xA;&#x9;content, err := os.ReadFile(file)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;fmt.Printf(&#34;Error reading from %s: %v\n&#34;, file, err)&#xA;&#x9;&#x9;return&#xA;&#x9;}&#xA;&#xA;&#x9;ch &lt;- string(content)&#xA;}&#xA;&#xA;func main() {&#xA;&#x9;files := []string{&#34;data/f1.txt&#34;, &#34;data/f2.txt&#34;}&#xA;&#xA;&#x9;var wg sync.WaitGroup&#xA;&#x9;ch := make(chan string)&#xA;&#xA;&#x9;for _, f := range files {&#xA;&#x9;&#x9;wg.Add(1)&#xA;&#x9;&#x9;go readFile(f, ch, &amp;wg)&#xA;&#x9;}&#xA;&#xA;&#x9;go func() {&#xA;&#x9;&#x9;wg.Wait()&#xA;&#x9;&#x9;close(ch)&#xA;&#x9;}()&#xA;&#xA;&#x9;var fileData []string&#xA;&#x9;for content := range ch {&#xA;&#x9;&#x9;fileData = append(fileData, content)&#xA;&#x9;}&#xA;&#xA;&#x9;fmt.Printf(&#34;Read %d files\n&#34;, len(fileData))&#xA;&#x9;fmt.Printf(&#34;Contents:\n%s&#34;, fileData)&#xA;}&#xA;```&#xA;
&#xA;In the above example, we create a single channel `ch` as our single source. We loop over all the files and create go routines calling the `readFile` function. The `readFile` function takes in the filename, the channel, and the WaitGroup reference; it reads the file and sends the content to the channel as `ch &lt;- content`. 
The `readFile` function is called concurrently for all the files; here we have fanned out the task into multiple go routines. Then, in the main function, we iterate over the channel and receive the content.&#xA;&#xA;```bash&#xA;$ go run fan-out.go&#xA;&#xA;Read 2 files&#xA;Contents:&#xA;[This is from file 2&#xA; This is from file 1&#xA;]&#xA;&#xA;&#xA;$ go run fan-out.go&#xA;&#xA;Read 2 files&#xA;Contents:&#xA;[This is from file 1&#xA; This is from file 2&#xA;]&#xA;```&#xA;
&#xA;Here&#39;s a brief summary of the fan-out pattern from the example provided:&#xA;&#xA;- Multiple files are read concurrently using goroutines. This &#34;fans out&#34; the work.&#xA;- The `readFile` function runs in a goroutine to process each file separately.&#xA;- A WaitGroup coordinates the goroutines.&#xA;- A shared channel `ch` collects the results from each goroutine.&#xA;- The main goroutine reads from the channel and aggregates the results.&#xA;- The channel is closed and ranged over to collect the results cleanly.&#xA;&#xA;I have a few more patterns to demonstrate; they are provided on GitHub in the [100 days of Golang](https://github.com/Mr-Destructive/100-days-of-golang/tree/main/scripts/channels/patterns/) repository.&#xA;&#xA;That&#39;s it for the 31st part of the series; all the source code for the examples is linked on GitHub in the [100 days of Golang](https://github.com/Mr-Destructive/100-days-of-golang/tree/main/scripts/channels) repository.&#xA;&#xA;## References&#xA;&#xA;- [Channels](https://go.dev/ref/spec#Channel_types)&#xA;- [Effective Go: Channels](https://go.dev/doc/effective_go#channels)&#xA;- [Fan-In and Fan-Out](https://mariocarrion.com/2021/08/19/learning-golang-concurrency-patterns-fan-in-fan-out.html)&#xA;- [Go 101: Channels](https://go101.org/article/channel.html)&#xA;&#xA;## Conclusion&#xA;&#xA;So, from this part of the series, we were able to understand the fundamentals of channels in golang. 
By using the core concepts from the previous posts, like go routines and wait groups, we were able to work with channels in golang. We wrote a few examples for different patterns using concurrency concepts with channels. Patterns like pipelines, fan-in, fan-out, async, and some usage of select statements for channels were explored in this section.&#xA;&#xA;Hopefully, you found this section helpful. If you have any comments or feedback, please let me know in the comments section or on my social handles. Thank you for reading. Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>Golang: Go Routines and WaitGroups</title>
      <link>https://www.meetgor.com/posts/golang-go-routines</link>
      <description>Introduction One of the key features that set Go apart from many other languages is its native support for Goroutines - lightweight concurrent functions that ca</description>
      <pubDate>Sat, 29 Jul 2023 00:00:00 UTC</pubDate>
      <content>## Introduction&#xA;&#xA;One of the key features that set Go apart from many other languages is its native support for Goroutines - lightweight concurrent functions that can run concurrently and efficiently manage concurrency tasks. Goroutines are an essential aspect of Go&#39;s concurrency model, enabling developers to build highly concurrent and performant applications effortlessly.&#xA;&#xA;In this blog post, we will dive into the world of Goroutines and explore how they work, why they are essential for concurrent programming in Go, and how they can greatly improve the responsiveness and efficiency of your applications. This post will cover go routines and a primer on wait groups, in the next article we will be looking deeply into channels where all these three things can be demonstrated and understood in a better and more useful way.&#xA;&#xA;## Concurrency and Parallelism&#xA;&#xA;This two concepts are quite crucial before diving into the fundamentals of concurreny.&#xA;&#xA;**Concurrency** is about dealing with multiple things at once. Concurrent programs can have several processes or threads running simultaneously on a single CPU core by rapidly switching between them (context switching). The threads are interleaved, not necessarily executing at literally the same time. The CPU can switch between these tasks to give the appearance of simultaneous progress.&#xA;&#xA;**Parallelism** is about doing multiple things at literally the same time. Parallel programs can execute multiple computations simultaneously on separate CPU cores. The threads actually execute in parallel.&#xA;&#xA;## What is a Go Routine&#xA;&#xA;A go routine is a simple lightweight thread managed by the Go runtime. 
In the simplest of terms, a go routine can be defined as:&#xA;&#xA;&gt; Go routine is a way to perform multiple tasks within a program, allowing different parts of the program to work simultaneously and make the most out of the resources.&#xA;&#xA;It can also be stated as:&#xA;&#xA;A goroutine in Golang is a lightweight, independently executing function that runs concurrently with other goroutines within the same address space. In other words, it is a concurrent unit of execution.&#xA;&#xA;Note the phrase **same address space**; it will be really critical in the later sections of this article.&#xA;&#xA;## Features of Go Routines&#xA;&#xA;Go routines form a key part of Go&#39;s concurrency model. Here are some of the key features of go routines:&#xA;&#xA;- **Lightweight Thread**:&#xA;    A Go routine is often referred to as a lightweight thread.&#xA;&#xA;- **Independent Execution**:&#xA;    Go routines run independently of each other, enabling concurrent execution.&#xA;&#xA;- **Managed by Go**:&#xA;    Go routines are managed by the Go runtime, making them easy to use.&#xA;&#xA;- **Low Overhead**:&#xA;    Go routines have low memory overhead, allowing us to create thousands of them efficiently.&#xA;&#xA;- **Communication**:&#xA;    Go routines can communicate and synchronize data through channels.&#xA;&#xA;- **Asynchronous**:&#xA;    Go routines can execute asynchronously, allowing other parts of the program to continue running.&#xA;&#xA;- **Scalability**:&#xA;    Go routines are the foundation of scalable concurrent programming in Go.&#xA;&#xA;Unlike threads in other languages, goroutines are cheap and you can easily create thousands or even millions of them in a program.&#xA;&#xA;## Example of Go Routines&#xA;&#xA;Creating a go routine is not hard: just add the keyword `go` before the function call, and the Go runtime will create a new go routine from the main function or wherever the calling context is. 
Remember, the main function is also a go routine.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;time&#34;&#xA;)&#xA;&#xA;func process() {&#xA;&#x9;fmt.Println(&#34;Hello World!&#34;)&#xA;&#x9;time.Sleep(time.Second)&#xA;}&#xA;&#xA;func main() {&#xA;&#x9;start := time.Now()&#xA;&#x9;go process()&#xA;&#x9;go process()&#xA;&#x9;end := time.Now()&#xA;&#x9;duration := end.Sub(start)&#xA;&#x9;fmt.Println(duration)&#xA;}&#xA;```&#xA;
&#xA;```bash&#xA;scripts/go-routines on  main via 🐹 v1.20 &#xA;$ go run main.go &#xA;&#xA;Hello World!&#xA;15.607µs&#xA;&#xA;&#xA;scripts/go-routines on  main via 🐹 v1.20 &#xA;$ go run main.go &#xA;&#xA;9.889µs&#xA;&#xA;&#xA;scripts/go-routines on  main via 🐹 v1.20 &#xA;$ go run main.go &#xA;&#xA;8.834µs&#xA;&#xA;&#xA;scripts/go-routines on  main via 🐹 v1.20 &#xA;$ go run main.go &#xA;&#xA;9.158µs&#xA;&#xA;scripts/go-routines on  main via 🐹 v1.20 &#xA;$ go run main.go &#xA;&#xA;12.54µs&#xA;&#xA;&#xA;scripts/go-routines on  main via 🐹 v1.20 &#xA;$ go run main.go &#xA;&#xA;Hello World!&#xA;Hello World!&#xA;10.19µs&#xA;&#xA;&#xA;scripts/go-routines on  main via 🐹 v1.20 &#xA;$ go run main.go &#xA;&#xA;14.1µs&#xA;&#xA;&#xA;scripts/go-routines on  main via 🐹 v1.20 &#xA;$ &#xA;```&#xA;
&#xA;Quite unpredictable output, right? This is the power of go routines: they are **asynchronous**, so they will not block the main function. The two function calls to `process` are executed independently of the main function scope. 
The program just calls the `process` function and sequentially reaches the end of the program (the main function). At this point, the go routines (threads) spawned inside the main function are stopped abruptly.&#xA;&#xA;Let&#39;s break it down if it is not clear yet.&#xA;&#xA;- The main function starts.&#xA;- `go process()` creates a Go routine and starts its execution.&#xA;- Meanwhile, another call to `go process()` creates a separate Go routine and starts its execution as well.&#xA;- Meanwhile, the time difference between the main function&#39;s start and end is calculated.&#xA;- Meanwhile, the main function ends.&#xA;&#xA;So, in summary, the main function is only able to capture and print the `duration`, as that part is synchronous; if a `process` call happens to have executed by then, its `Hello World!` message is printed too. Hence we see different outputs across runs because of uncontrolled concurrency, lack of coordination, and differences in OS scheduling. &#xA;&#xA;## Wait Groups&#xA;&#xA;In simple terms, a WaitGroup is used to synchronize multiple goroutines and to wait for them to finish executing. This allows the go routines to complete before the main function exits its scope.&#xA;&#xA;&gt; A WaitGroup is a synchronization primitive that allows a goroutine to wait for a collection of other goroutines to finish executing. &#xA;&#xA;- A WaitGroup is initialized with a counter representing the number of goroutines to wait for.&#xA;- The Add() method increments the counter by the given value. This is typically called before launching each goroutine.&#xA;- The main goroutine calls Add() to set the initial count, then launches worker goroutines.&#xA;- A WaitGroup is typically passed by a pointer to goroutines that need to be waited on.&#xA;- The Done() method decrements the counter by 1. 
Goroutines call this when finished.&#xA;- Each worker calls Done() when finished, decrementing the counter.&#xA;- The Wait() method blocks until the counter reaches 0, indicating all goroutines have finished.&#xA;- Main calls Wait() to block until Done() brings the counter to 0.&#xA;&#xA;This provides a simple way to synchronize multiple goroutines finishing their work with a main thread that needs to wait for them to complete. The counter ensures the main thread knows how many goroutines it is waiting for. Interacting and working with go routines with synchronization using a wait group is quite intuitive and simple to follow. Let&#39;s look at a simple example below:&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;sync&#34;&#xA;&#x9;&#34;time&#34;&#xA;)&#xA;&#xA;func process(pid int, wg *sync.WaitGroup) {&#xA;&#x9;defer wg.Done()&#xA;&#x9;fmt.Printf(&#34;Started process %d\n&#34;, pid)&#xA;&#x9;time.Sleep(1 * time.Second)&#xA;&#x9;fmt.Printf(&#34;Completed process %d\n&#34;, pid)&#xA;}&#xA;&#xA;func main() {&#xA;&#xA;&#x9;now := time.Now()&#xA;&#x9;var wg sync.WaitGroup&#xA;&#xA;&#x9;for i := 0; i &lt; 10; i++ {&#xA;&#x9;&#x9;wg.Add(1)&#xA;&#x9;&#x9;go process(i, &amp;wg)&#xA;&#x9;}&#xA;&#x9;wg.Wait()&#xA;&#x9;fmt.Println(&#34;All processes completed&#34;)&#xA;&#x9;end := time.Now()&#xA;&#x9;fmt.Println(end.Sub(now))&#xA;}&#xA;```&#xA;
&#xA;```bash&#xA;scripts/go-routines on  main [?] via 🐹 v1.20 &#xA;$ go run wg.go &#xA;Started process 9&#xA;Started process 0&#xA;Started process 1&#xA;Started process 2&#xA;Started process 3&#xA;Started process 4&#xA;Started process 5&#xA;Started process 6&#xA;Started process 7&#xA;Started process 8&#xA;Completed process 8&#xA;Completed process 3&#xA;Completed process 9&#xA;Completed process 0&#xA;Completed process 1&#xA;Completed process 2&#xA;Completed process 5&#xA;Completed process 4&#xA;Completed process 6&#xA;Completed process 7&#xA;All processes completed&#xA;1.000563257s&#xA;&#xA;scripts/go-routines on  main [?] via 🐹 v1.20 &#xA;$ &#xA;```&#xA;
&#xA;In the above example, we have used the same function `process` but with a slight twist: we have added a process id, just an integer, to represent the go routine. We print the start and the completion of the function, with a 1-second sleep in between. We also have a wait group.&#xA;&#xA;- A [WaitGroup](https://pkg.go.dev/sync#WaitGroup) is basically a struct type defined in the [sync](https://pkg.go.dev/sync) package. &#xA;- The variable `wg` is a new wait group instance that will be used to synchronize and wait for the completion of groups of go routines. &#xA;&#xA;We create a for loop with `10` iterations, so as to spawn 10 `process` function calls. We first use `wg.Add(1)`, which tells the wait group to wait for 1 more go routine. The immediate next line is a go routine `go process()` which takes in a `pid` just to track which go routine is being executed in the loop. &#xA;&#xA;Inside the `process` function, we simply say the process with the given `pid` has started as a print statement, sleep for a second, and then print the end of the process with the `pid`. The function also registers `defer wg.Done()`; when the function returns, this indicates that the go routine is completed. &#xA;&#xA;The wg (wait group) has a counter that keeps track of the number of go routines it has to synchronize or wait for them to complete. 
In the [Add](https://pkg.go.dev/sync#WaitGroup.Add) function, the internal counter in the `WaitGroup` is incremented by `delta`, the integer passed as a parameter. And the [Done](https://pkg.go.dev/sync#WaitGroup.Done) function decrements the internal counter in the `WaitGroup`, which indicates that the go routine is completed.&#xA;&#xA;In the main function, we call [wg.Wait](https://pkg.go.dev/sync#WaitGroup.Wait), which will block until the counter for the `WaitGroup` is 0 i.e. all go routines have completed their execution. So, we created 10 go routines which ran concurrently but were synchronized with the help of WaitGroups. The WaitGroup blocks the main function till all the go routines are done executing.&#xA;&#xA;## Go Routine with WaitGroup Use Cases&#xA;&#xA;Go routines can be used for creating asynchronous tasks, and also for creating concurrent tasks. By using wait groups, we can wait for multiple goroutines to complete. By combining go routines and wait groups, we can complete n tasks in roughly the time frame of a single task&#39;s completion. However, to create concurrent communication between the other processes, we need `channels` (which we will explore in the next article). &#xA;&#xA;Here is a simple breakdown of what asynchronous and concurrent tasks might refer to:&#xA;&#xA;**Asynchronous** tasks run independently of the main program flow, allowing the main program to continue executing without waiting for the task&#39;s completion. For example, not blocking other tasks in the sequential flow of the main function.&#xA;&#xA;**Concurrent** tasks run simultaneously and can execute at the same time as other tasks. They make use of multiple threads (goroutines in Go&#39;s case) to achieve parallel execution. 
For example, running multiple tasks in parallel cuts down the total time compared to spinning up each task only after the previous one completes.&#xA;&#xA;Some of the asynchronous tasks that can be done with go routines might include the following:&#xA;&#xA;- Sending a mail while saving the user to the database.&#xA;- Fetching and processing data from multiple websites (web scraping/crawling).&#xA;- High-performance message brokers and queuing systems for inter-process communication.&#xA;&#xA;One practical use of go routines with wait groups would be sending mail. We are not going to see the actual implementation of the mail-sending stuff; however, we can try to mimic how the setup might look for sending bulk mail. By creating a wait group and a list of mail addresses to send to, a go routine can be created with a function that sends those emails.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;sync&#34;&#xA;)&#xA;&#xA;func sendMail(address string, wg *sync.WaitGroup) {&#xA;&#x9;defer wg.Done()&#xA;&#x9;fmt.Println(&#34;Sending mail to&#34;, address)&#xA;&#x9;// Actual mail sending, smtp stuff&#xA;&#x9;// handle errors&#xA;&#xA;&#x9;// client, err := smtp.Dial(&#34;smtp.example.com:587&#34;)&#xA;&#x9;// err = client.Mail(&#34;sender@example.com&#34;)&#xA;&#x9;// err = client.Rcpt(address)&#xA;&#x9;// wc, err := client.Data()&#xA;&#x9;// _, err = wc.Write([]byte(&#34;This is the email body.&#34;))&#xA;&#x9;// err = wc.Close()&#xA;&#x9;// client.Quit()&#xA;}&#xA;&#xA;func main() {&#xA;&#x9;emails := []string{&#xA;&#x9;&#x9;&#34;recipient1@example.com&#34;,&#xA;&#x9;&#x9;&#34;recipient2@example.com&#34;,&#xA;&#x9;&#x9;&#34;xyz@example.com&#34;,&#xA;&#x9;}&#xA;&#x9;wg := sync.WaitGroup{}&#xA;&#x9;wg.Add(len(emails))&#xA;&#xA;&#x9;for _, email := range emails {&#xA;&#x9;&#x9;go sendMail(email, &amp;wg)&#xA;&#x9;}&#xA;&#x9;wg.Wait()&#xA;&#x9;fmt.Println(&#34;All emails queued for sending&#34;)&#xA;&#x9;// Do other stuff&#xA;}&#xA;```&#xA;
&#xA;In the above example, 
the `emails` variable is a list of email ids to send the mail to. We have created a wait group and initialized it with the total number of go routines to be executed. The `wg.Add` method is passed the number of emails to be sent, since that equals the number of go routines to spawn.&#xA;&#xA;Then, in the for loop, we iterate over each address and send the emails with the `sendMail` function as a go routine. The `wg.Wait` call outside the loop makes sure the main function is halted until all the go routines complete their execution.&#xA;&#xA;There is one more way to call a function as a go routine without changing its signature; previously, we had to pass the `wg` pointer as the WaitGroup reference to acknowledge the completion of the go routine. We can instead wrap the two operations, viz. calling the function and calling the `wg.Done` method, in an anonymous function.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;sync&#34;&#xA;)&#xA;&#xA;func sendMail(address string) {&#xA;&#x9;fmt.Println(&#34;Sending mail to&#34;, address)&#xA;}&#xA;&#xA;func main() {&#xA;&#x9;emails := []string{&#xA;&#x9;&#x9;&#34;recipient1@example.com&#34;,&#xA;&#x9;&#x9;&#34;recipient2@example.com&#34;,&#xA;&#x9;&#x9;&#34;xyz@example.com&#34;,&#xA;&#x9;}&#xA;&#x9;wg := sync.WaitGroup{}&#xA;&#x9;wg.Add(len(emails))&#xA;&#xA;&#x9;for _, email := range emails {&#xA;&#x9;&#x9;mail := email&#xA;&#x9;&#x9;go func(m string) {&#xA;&#x9;&#x9;&#x9;sendMail(m)&#xA;&#x9;&#x9;&#x9;wg.Done()&#xA;&#x9;&#x9;}(mail)&#xA;&#x9;}&#xA;&#x9;wg.Wait()&#xA;&#x9;fmt.Println(&#34;All emails queued for sending&#34;)&#xA;&#x9;// Do other stuff&#xA;}&#xA;```&#xA;
&#xA;This does the exact same thing, but we don&#39;t have to change the signature of the function. This keeps the functional logic of the function in its place and handles the concurrency on the go.&#xA;&#xA;```go&#xA;for _, email := range emails {&#xA;    mail := email&#xA;    go func(m string) {&#xA;        sendMail(m)&#xA;        wg.Done()&#xA;    }(mail)&#xA;}&#xA;```&#xA;
&#xA;If the above bit scares you, don&#39;t worry, it&#39;s quite simple. &#xA;&#xA;- We are iterating over the email slice using a for loop and creating a Go routine for each email address. The loop variable `email` represents the email address at the current iteration.&#xA;- However, to avoid the loop variable capture issue (where all Go routines would share the same `email` variable), we create a new variable `mail` and assign the value of `email` to it. This step ensures that each Go routine captures its own copy of the email address.&#xA;- We immediately create an anonymous function (a closure) using the `go` keyword. This anonymous function takes the `mail` variable as the parameter `m` and is executed concurrently as a Go routine. Inside the Go routine, we call the `sendMail` function with the email address `m`.&#xA;- After the `sendMail` call has executed, i.e. the email has been sent, we call `wg.Done()` to notify the WaitGroup that the Go routine has completed its work. This allows the WaitGroup to properly synchronize and wait for all Go routines to finish before the program proceeds beyond `wg.Wait()` in the main function.&#xA;&#xA;This is one way to do it if you want to separate the mail-sending logic from the goroutine/concurrency handling. However, this should be handled with care, as the variables inside the closure might be shared among all the goroutines instead of each having its own copy. &#xA;&#xA;To ensure that each goroutine operates on its own copy of the email address, we create a new variable `mail` and pass it as a parameter to the anonymous function. This way, each goroutine captures its unique email address, avoiding any interference or unintended sharing of data between goroutines.&#xA;&#xA;## Mutual Exclusion Locks&#xA;&#xA;In the previous examples, we saw how goroutines and wait groups allow us to run multiple tasks concurrently in Go. 
However, sometimes these concurrent goroutines need to access shared resources like memory, files, network sockets, etc.&#xA;&#xA;When more than one goroutine tries to access a resource at the same time, it can lead to **race conditions** and unpredictable behavior. To handle this, we need a way to ensure only one goroutine can access the resource at a time.&#xA;&#xA;This is where mutual exclusion locks come in. A **mutual exclusion lock**, or **mutex**, provides a mechanism to lock access to a shared resource. It ensures only one goroutine at a time can acquire the lock, blocking other goroutines until the lock is released.&#xA;&#xA;For example, say we have multiple goroutines trying to append data to the same memory buffer(could be file/database/etc.) concurrently:&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;os&#34;&#xA;&#x9;&#34;sync&#34;&#xA;)&#xA;&#xA;func WriteToFile(filename string, contents string, buffer *[]byte, wg *sync.WaitGroup) {&#xA;&#x9;defer wg.Done()&#xA;&#x9;*buffer = append(*buffer, []byte(contents)...)&#xA;&#x9;err := os.WriteFile(filename, *buffer, 0644)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;fmt.Println(err)&#xA;&#x9;}&#xA;}&#xA;&#xA;func main() {&#xA;&#x9;var wg sync.WaitGroup&#xA;&#x9;var sharedBuffer []byte&#xA;&#xA;&#x9;wg.Add(2)&#xA;&#x9;go WriteToFile(&#34;data/f1.txt&#34;, &#34;Hello &#34;, &amp;sharedBuffer, &amp;wg)&#xA;&#x9;go WriteToFile(&#34;data/f1.txt&#34;, &#34;World! 
&#34;, &amp;sharedBuffer, &amp;wg)&#xA;&#x9;wg.Wait()&#xA;&#xA;&#x9;fmt.Println(string(sharedBuffer))&#xA;}&#xA;```&#xA;&#xA;```bash&#xA;$ go run --race no-mutex.go&#xA;==================&#xA;WARNING: DATA RACE&#xA;Read at 0x00c000012030 by goroutine 8:&#xA;  main.WriteToFile()&#xA;      /home/meet/code/100-days-of-golang/scripts/go-routines/no-mutex.go:11 +0xe7&#xA;  main.main.func2()&#xA;      /home/meet/code/100-days-of-golang/scripts/go-routines/no-mutex.go:24 +0x64&#xA;&#xA;Previous write at 0x00c000012030 by goroutine 7:&#xA;  main.WriteToFile()&#xA;      /home/meet/code/100-days-of-golang/scripts/go-routines/no-mutex.go:11 +0x16a&#xA;  main.main.func1()&#xA;      /home/meet/code/100-days-of-golang/scripts/go-routines/no-mutex.go:23 +0x64&#xA;&#xA;Goroutine 8 (running) created at:&#xA;  main.main()&#xA;      /home/meet/code/100-days-of-golang/scripts/go-routines/no-mutex.go:24 +0x1f6&#xA;&#xA;Goroutine 7 (finished) created at:&#xA;  main.main()&#xA;      /home/meet/code/100-days-of-golang/scripts/go-routines/no-mutex.go:23 +0x14e&#xA;==================&#xA;```&#xA;&#xA;This is because we are trying to access the same memory address at the same time. This is a race condition and can lead to unpredictable behavior. Try removing the `--race` flag while running: in this small example the problem might not be obvious, but in complex and constrained environments it can get the application into serious trouble.&#xA;&#xA;**NOTE**: We are using `go run --race no-mutex.go` to check if there are any race conditions in the program. This is the [race detector](https://go.dev/doc/articles/race_detector) flag in the run command.&#xA;&#xA;To avoid this race condition, we need to add the mutex locks provided in the [sync.Mutex](https://pkg.go.dev/sync#Mutex) type. 
There are methods like [Lock](https://pkg.go.dev/sync#Mutex.Lock), [Unlock](https://pkg.go.dev/sync#Mutex.Unlock), and [TryLock](https://pkg.go.dev/sync#Mutex.TryLock) which help restrict access to the resource to a single goroutine at a given time.&#xA;&#xA;When a goroutine calls `Lock()` on a mutex, it acquires the lock. If the mutex is already locked by another goroutine, the calling goroutine will be blocked (put to sleep) until the lock becomes available. Once the lock is acquired successfully, the goroutine can proceed with its critical section, which is the part of the code that should not be executed concurrently by multiple goroutines.&#xA;&#xA;When a goroutine calls `Unlock()` on a mutex, it releases the lock. This allows other waiting goroutines to acquire the lock and execute their critical sections. It&#39;s essential to ensure that `Unlock()` is called after the critical section to release the mutex and avoid deadlocks. The critical section/shared resource should not be accessed after the release of this lock.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;os&#34;&#xA;&#x9;&#34;sync&#34;&#xA;)&#xA;&#xA;func WriteToFile(filename string, contents string, buffer *[]byte, wg *sync.WaitGroup, mutex *sync.Mutex) {&#xA;&#x9;defer wg.Done()&#xA;&#x9;contentBytes := []byte(contents)&#xA;&#xA;&#x9;mutex.Lock()&#xA;&#x9;*buffer = append(*buffer, contentBytes...)&#xA;&#xA;&#x9;f, err := os.OpenFile(filename, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;fmt.Println(err)&#xA;&#x9;}&#xA;&#x9;defer f.Close()&#xA;&#x9;_, err = f.Write(contentBytes)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;fmt.Println(err)&#xA;&#x9;}&#xA;&#x9;mutex.Unlock()&#xA;}&#xA;&#xA;func main() {&#xA;&#x9;var wg sync.WaitGroup&#xA;&#x9;var mut sync.Mutex&#xA;&#x9;var sharedBuffer []byte&#xA;&#xA;&#x9;wg.Add(2)&#xA;&#x9;go WriteToFile(&#34;data/f1.txt&#34;, &#34;Hello Gophers!&#xA;&#34;, &amp;sharedBuffer, &amp;wg, &amp;mut)&#xA;&#x9;go WriteToFile(&#34;data/f1.txt&#34;, &#34;Welcome to Goworld!&#xA;&#34;, &amp;sharedBuffer, &amp;wg, 
&amp;mut)&#xA;&#x9;wg.Wait()&#xA;&#xA;&#x9;fmt.Println(string(sharedBuffer))&#xA;}&#xA;```&#xA;&#xA;```bash&#xA;$ go run --race mutex.go&#xA;Welcome to Goworld!&#xA;Hello Gophers!&#xA;&#xA;$ go run --race mutex.go&#xA;Hello Gophers!&#xA;Welcome to Goworld!&#xA;&#xA;$ go run --race mutex.go&#xA;Hello Gophers!&#xA;Welcome to Goworld!&#xA;&#xA;$ go run --race mutex.go&#xA;Welcome to Goworld!&#xA;Hello Gophers!&#xA;```&#xA;&#xA;The above example is a preventive measure for race conditions with a mutex Lock on the shared resource used in the go routines. &#xA;&#xA;Let&#39;s break down the code step by step. First, we initialize a few variables:&#xA;- `wg` as `sync.WaitGroup`:&#xA;    The `wg` is a waitgroup that will be used for the synchronization of the go-routines by blocking the main function from exiting until they are done.&#xA;&#xA;- `mut` as `sync.Mutex`:&#xA;    The `mut` is a structure that internally holds a few integer values indicating either a locked or unlocked state. The `sync.Mutex` has two private fields, viz. `state` and `sema`. The `state` field holds the mutex state, either `0` (unlocked) or `1` (locked). The `sema` field is a `uint32` used for blocking and signaling; it acts as a semaphore to manage blocking and unblocking goroutines trying to acquire the mutex.&#xA;    This will be used to acquire `Lock` and `Unlock` on the shared resource while writing the data to the file or appending the data to the resource.&#xA;&#xA;- `sharedBuffer` as `[]byte`: &#xA;  The `sharedBuffer` is the actual shared resource that holds the strings for keeping track of the data written to the file. It is the resource that requires locking while mutating its value (appending to the slice) among the go routines.&#xA;&#xA;We add `2` to the `wg` to indicate that it should wait for two go-routines to complete; in the next lines we launch two go routines running the function `WriteToFile`. 
The `WriteToFile` is a function that takes in quite a few parameters, namely the filename, the content to write, and references to the sharedBuffer, the waitgroup, and the mutex.&#xA;&#xA;Inside the function `WriteToFile`:&#xA;- We first `defer` the `wg.Done()` call, i.e. it runs at the end of the function call. &#xA;- Convert the `contents` string to a `[]byte`. &#xA;- Acquire the `mutex.Lock()`, i.e. to say &#34;The below operations should not be done concurrently, one at a time&#34;. We then append the `contents` to the `buffer`, which is a pointer to the `sharedBuffer` in the main function, so essentially we are mutating the `sharedBuffer` in this function.&#xA;- Open the file with `O_APPEND` and `O_WRONLY` to indicate that the file should be opened in append/writing mode. (We have observed this type of operation in the [Golang: File Write](https://www.meetgor.com/golang-file-write/) article)&#xA;- Use the `Write` method to write the slice of bytes (contents) into the file that we opened. We have a `defer` for the closing of the file. &#xA;&#xA;We check for errors and print them, but this could be a `panic` or a `log` depending on the type of operation the application is doing.&#xA;That is all the work we want to do, so we finally release the lock with `mutex.Unlock()`, which will allow another go routine, if any, to access the resource and proceed with its operations.&#xA;&#xA;### Read Write Mutual Exclusion Lock&#xA;&#xA;The mutual exclusion lock is fine if you have a write-heavy application. However, if reads dominate (a read-heavy workload), we don&#39;t want readers getting blocked while other readers are accessing the resource, since reading does not mutate it. &#xA;&#xA;We could allow many readers to read simultaneously. But for the writing operation, we want to block the readers/writers. 
The writer should also be given preference over waiting readers, which prevents a writer from waiting indefinitely for readers to complete (writer starvation). This is usually referred to as **Read Write Mutual Exclusion**.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;sync&#34;&#xA;&#x9;&#34;time&#34;&#xA;)&#xA;&#xA;func reader(id int, count *int, wg *sync.WaitGroup, mutex *sync.Mutex) {&#xA;&#x9;defer wg.Done()&#xA;&#x9;if !mutex.TryLock() {&#xA;&#x9;&#x9;fmt.Printf(&#34;Reader %d blocked!&#xA;&#34;, id)&#xA;&#x9;&#x9;return&#xA;&#x9;}&#xA;&#x9;defer mutex.Unlock()&#xA;&#x9;fmt.Printf(&#34;Reader %d: read count %d&#xA;&#34;, id, *count)&#xA;}&#xA;&#xA;func writer(id, increment int, count *int, wg *sync.WaitGroup, mutex *sync.Mutex) {&#xA;&#x9;defer wg.Done()&#xA;&#x9;mutex.Lock()&#xA;&#x9;defer mutex.Unlock()&#xA;&#x9;*count += increment&#xA;&#x9;time.Sleep(5 * time.Millisecond)&#xA;&#x9;fmt.Printf(&#34;Writer %d: wrote count %d&#xA;&#34;, id, *count)&#xA;}&#xA;&#xA;func main() {&#xA;&#xA;&#x9;count := 1&#xA;&#x9;var wg sync.WaitGroup&#xA;&#x9;var mutex sync.Mutex&#xA;&#x9;readers := 5&#xA;&#x9;writers := 3&#xA;&#x9;wg.Add(readers)&#xA;&#x9;for i := 0; i &lt; readers; i++ {&#xA;&#x9;&#x9;go reader(i, &amp;count, &amp;wg, &amp;mutex)&#xA;&#x9;}&#xA;&#xA;&#x9;wg.Add(writers)&#xA;&#x9;for i := 0; i &lt; writers; i++ {&#xA;&#x9;&#x9;go writer(i, 1, &amp;count, &amp;wg, &amp;mutex)&#xA;&#x9;}&#xA;&#x9;wg.Wait()&#xA;&#xA;}&#xA;```&#xA;&#xA;```bash&#xA;$ go run --race no-rwmutex.go &#xA;Reader 0: read count 1&#xA;Reader 3 blocked!&#xA;Reader 2 blocked!&#xA;Reader 1 blocked!&#xA;Reader 4 blocked!&#xA;Writer 0: wrote count 2&#xA;Writer 1: wrote count 3&#xA;Writer 2: wrote count 4&#xA;&#xA;$ go run --race no-rwmutex.go &#xA;Reader 2 blocked!&#xA;Reader 0: read count 1&#xA;Reader 1 blocked!&#xA;Reader 4 blocked!&#xA;Reader 3 blocked!&#xA;Writer 2: wrote count 2&#xA;Writer 0: wrote count 3&#xA;Writer 1: wrote count 4&#xA;```&#xA;&#xA;The above 
example has a `reader` and a `writer` method; the `reader` method simply has to read the shared resource `count`. It acquires a `Mutex` lock before reading and unlocks it thereafter. Similarly, the `writer` function is used for incrementing the `count` shared resource. &#xA;&#xA;The `reader` method uses the [TryLock](https://pkg.go.dev/sync#Mutex.TryLock) method, which tries to acquire a mutex lock on the resource; if the resource is already locked, the method returns `false` and hence we can say that the reading is blocked (just for demonstration). If `TryLock` returns `true`, it has acquired the `Lock`. We then `defer` the `Unlock` and access the `count` variable, which is passed as a reference. &#xA;&#xA;The `writer` method simply acquires the `Lock`, increments the `count`, and thereafter `Unlock` is called with `defer`.&#xA;&#xA;In the above code:&#xA;- Both the `reader` and the `writer` might be waiting for the lock to be released; however, making readers wait for other readers doesn&#39;t make sense. &#xA;- If you only want to read a particular memory address, one reader should not have to wait for other readers to finish.&#xA;- However, for writing, there has to be a lock. The mutex lock will lock the resource irrespective of `reader` or `writer`. &#xA;&#xA;This might not be obvious here, but it is why all the `readers` are blocked from reading: another reader&#39;s or writer&#39;s lock is in the way.&#xA;&#xA;The `sync` package has the [RWMutex](https://pkg.go.dev/sync#RWMutex) that does this exact same thing. 
It is similar to the `Mutex`; however, it allows concurrent read operations and prefers waiting writers over new readers to prevent writer starvation.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;sync&#34;&#xA;&#x9;&#34;time&#34;&#xA;)&#xA;&#xA;func reader(id int, count *int, wg *sync.WaitGroup, mutex *sync.RWMutex) {&#xA;&#x9;defer wg.Done()&#xA;&#xA;&#x9;if !mutex.TryRLock() {&#xA;&#x9;&#x9;fmt.Printf(&#34;Reader %d blocked!&#xA;&#34;, id)&#xA;&#x9;&#x9;return&#xA;&#x9;}&#xA;&#x9;defer mutex.RUnlock()&#xA;&#x9;fmt.Printf(&#34;Reader %d: read count %d&#xA;&#34;, id, *count)&#xA;&#xA;}&#xA;&#xA;func writer(id, increment int, count *int, wg *sync.WaitGroup, mutex *sync.RWMutex) {&#xA;&#x9;defer wg.Done()&#xA;&#x9;mutex.Lock()&#xA;&#x9;defer mutex.Unlock()&#xA;&#xA;&#x9;*count += increment&#xA;&#x9;time.Sleep(5 * time.Millisecond)&#xA;&#x9;fmt.Printf(&#34;Writer %d: wrote count %d&#xA;&#34;, id, *count)&#xA;}&#xA;&#xA;func main() {&#xA;&#xA;&#x9;count := 1&#xA;&#x9;var wg sync.WaitGroup&#xA;&#x9;var mutex sync.RWMutex&#xA;&#x9;readers := 5&#xA;&#x9;writers := 5&#xA;&#x9;wg.Add(readers)&#xA;&#x9;for i := 0; i &lt; readers; i++ {&#xA;&#x9;&#x9;go reader(i, &amp;count, &amp;wg, &amp;mutex)&#xA;&#x9;}&#xA;&#xA;&#x9;wg.Add(writers)&#xA;&#x9;for i := 0; i &lt; writers; i++ {&#xA;&#x9;&#x9;go writer(i, 1, &amp;count, &amp;wg, &amp;mutex)&#xA;&#x9;}&#xA;&#x9;wg.Wait()&#xA;&#xA;}&#xA;```&#xA;&#xA;```bash&#xA;$ go run --race rwmutex.go &#xA;Reader 0: read count 1&#xA;Reader 3: read count 1&#xA;Reader 1: read count 1&#xA;Reader 2: read count 1&#xA;Reader 4: read count 1&#xA;Writer 1: wrote count 2&#xA;Writer 0: wrote count 3&#xA;Writer 2: wrote count 4&#xA;Writer 3: wrote count 5&#xA;Writer 4: wrote count 6&#xA;&#xA;$ go run --race rwmutex.go &#xA;Reader 1 blocked!&#xA;Reader 0: read count 1&#xA;Reader 2 blocked!&#xA;Reader 3 blocked!&#xA;Reader 4 blocked!&#xA;Writer 4: wrote count 2&#xA;Writer 0: wrote count 3&#xA;Writer 1: 
wrote count 4&#xA;Writer 3: wrote count 5&#xA;Writer 2: wrote count 6&#xA;&#xA;```&#xA;&#xA;In the modified example, nearly all the logic remains the same; the `sync.Mutex` is replaced with `sync.RWMutex`. Also, for trying to acquire the lock in the `reader` method, [TryRLock](https://pkg.go.dev/sync#RWMutex.TryRLock) is used instead of [TryLock](https://pkg.go.dev/sync#Mutex.TryLock); it attempts to take a read lock and returns `true` only if no writer currently holds (or is waiting for) the lock, so readers don&#39;t block each other. Also, the `Unlock` is replaced with the `RUnlock` method for releasing the read lock. In the `writer` method, everything remains the same: the writer has to acquire the lock irrespective of whether the current lock is held by a reader or a writer, so it is a normal `Lock` and `Unlock`.&#xA;&#xA;In the above example, we can see that the read operations are sometimes all executed instead of getting blocked. This is due to the `RWMutex` Lock and Unlock on the read operation/function. &#xA;- When one reader is reading, it can&#39;t block other readers. &#xA;- With a plain `Mutex`, however, a reader is blocked even when another reader is reading.&#xA;- The write operation is exclusive as usual: if a writer is performing a write operation and a reader/readers come in, they will be blocked. If, in the meantime, while the resource is locked, another writer comes in, the writer will be given preference instead of waiting for all readers to complete. 
This prevents writer starvation.&#xA;&#xA;You can see that the readers are still blocked the majority of the time; however, they are blocked by a writer locking the resource and not by any other reader.&#xA;&#xA;There is a more complete [example](https://github.com/Mr-Destructive/100-days-of-golang/blob/main/scripts/go-routines/file-rw.go) of `file` reading and writing with `RWMutex`; make sure to check it out for a clearer understanding of a practical use of `RWMutex`.&#xA;&#xA;### Channels&#xA;&#xA;This is a big part and I would like to delve into this topic in a separate post. There are some patterns like `fan-in`, `fan-out`, `worker-pool`, `pub-sub`, etc. which are really common in web applications and backend systems. We shall explore these patterns in the next article.&#xA;&#xA;Channels provide a safe and idiomatic way for Goroutines to exchange data and coordinate their execution without resorting to low-level mechanisms like shared memory or explicit locking.&#xA;&#xA;That&#39;s it for the 30th part of the series; all the source code for the examples is linked in the [100 days of Golang](https://github.com/Mr-Destructive/100-days-of-golang/tree/main/scripts/go-routines) repository on GitHub.&#xA;&#xA;### References:&#xA;&#xA;- [How Goroutines Work](https://blog.nindalf.com/posts/how-goroutines-work/)&#xA;- [Concurrency patterns in Golang: WaitGroups and Goroutines](https://blog.logrocket.com/concurrency-patterns-golang-waitgroups-goroutines/)&#xA;- [A complete journey with Goroutines](https://riteeksrivastava.medium.com/a-complete-journey-with-goroutines-8472630c7f5c)&#xA;&#xA;## Conclusion&#xA;&#xA;In this part of the series, we covered the fundamentals of Golang&#39;s concurrency model: spawning go-routines, synchronizing them with wait groups, mutex locks, and securing concurrent access to a shared resource. 
In the next part of the series, these concepts will be used in asynchronous communication using channels. &#xA;&#xA;Hopefully, this post has cleared up the basics of concurrency in Golang for you. If you have any queries, suggestions, or feedback, please feel free to comment below or reach out on my social handles. Thank you for reading. Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>Golang: Generics</title>
      <link>https://www.meetgor.com/posts/golang-generics</link>
      <description>Introduction In the 29th post of the series, we will be looking into generics in Golang. Generics were added in Golang version 1.18, so they are quite new in th</description>
      <pubDate>Sun, 23 Jul 2023 00:00:00 UTC</pubDate>
      <content>## Introduction&#xA;&#xA;In the 29th post of the series, we will be looking into generics in Golang. Generics were added in Golang version 1.18, so they are fairly new to Golang, but the concept has long existed in other programming languages.&#xA;&#xA;Generics provide a powerful toolset for writing more expressive and concise code that can handle a wide range of data types. With generics, we can write reusable algorithms, data structures, and functions that work seamlessly with various types, without sacrificing type safety.&#xA;&#xA;We will learn how to create generic functions and work with generic types. Additionally, we will cover type constraints and interfaces, which allow us to specify requirements for the types used with our generics.&#xA;&#xA;## Generic Type in Functions&#xA;&#xA;We can declare a type parameter `T` constrained by the keyword `any`; the concrete type replacing `T` is inferred at compile time. This makes the code reusable for any relevant data type for that operation/task.&#xA;&#xA;We can provide the type parameter after the name of the function/struct to make it generic, like `func Name[T any](x T)`. Here, `Name` is a function with a generic type `T` (it could be any type) and a parameter `x` of type `T` that can be used inside the function.&#xA;&#xA;We could also make the function take specific types instead of `any` but we will eventually move into that. 
However, for now, let&#39;s ease the process of learning and then move on to the optimizations and adding constraints.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;)&#xA;&#xA;func Print[T any](stuff T) {&#xA;&#x9;fmt.Println(stuff)&#xA;}&#xA;&#xA;func main() {&#xA;&#x9;Print(&#34;hello&#34;)&#xA;&#x9;Print(123)&#xA;&#x9;Print(3.148)&#xA;}&#xA;```&#xA;&#xA;[GO Playground Link](https://go.dev/play/p/t-ODmkHu5BJ)&#xA;&#xA;```bash&#xA;$ go run main.go&#xA;hello&#xA;123&#xA;3.148&#xA;```&#xA;&#xA;The above is the simplest example that could be used to demonstrate a generic function. The function `Print` takes a parameter `stuff` of a generic type denoted by a type parameter `T`. The type parameter `T` serves as a placeholder that represents a specific type determined at compile time when the function is invoked.&#xA;&#xA;This means that if my code never calls the function with the type `[]int`, no instantiation with the signature `Print(stuff []int)` is generated; only the types the function is actually called with are inferred and compiled.&#xA;&#xA;## Creating a Generic Slice&#xA;&#xA;We can even create a slice with a generic type and allow any valid processing on the elements or the slice as a whole.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;)&#xA;&#xA;func ForEach[T any](arr []T, f func(T)) {&#xA;&#x9;for _, v := range arr {&#xA;&#x9;&#x9;f(v)&#xA;&#x9;}&#xA;}&#xA;&#xA;func main() {&#xA;&#xA;&#x9;strSlice := []string{&#34;b&#34;, &#34;e&#34;, &#34;a&#34;}&#xA;&#x9;ForEach(strSlice, func(v string) {&#xA;&#x9;&#x9;fmt.Println(v)&#xA;&#x9;})&#xA;&#xA;&#x9;slice := []int{10, 25, 33, 42, 50}&#xA;&#x9;var evenSlice []int&#xA;&#x9;ForEach(slice, func(v int) {&#xA;&#x9;&#x9;isEven := v%2 == 0&#xA;&#x9;&#x9;if isEven {&#xA;&#x9;&#x9;&#x9;evenSlice = append(evenSlice, v)&#xA;&#x9;&#x9;}&#xA;&#x9;})&#xA;&#x9;fmt.Println(evenSlice)&#xA;&#xA;}&#xA;```&#xA;&#xA;```bash&#xA;$ go run 
main.go&#xA;&#xA;b&#xA;e&#xA;a&#xA;[10 42 50]&#xA;```&#xA;&#xA;[Go Playground Link](https://go.dev/play/p/tUwgbxnLc-1)&#xA;&#xA;The `ForEach` is a generic function that iterates over a slice of any type and calls a function on each element. Let&#39;s break it down:&#xA;&#xA;* `ForEach[T any]` declares this as a generic function that works on a slice of any type `T`.&#xA;    &#xA;* `arr []T` is the slice of elements we want to iterate over. It can be a slice of any type: ints, strings, etc. So it is a generic slice parameter.&#xA;    &#xA;* `f func(T)` is the function that will be called on each element. It takes a single parameter of type `T`, which will be each element. So, this is a function parameter with a generic type as its parameter.&#xA;    &#xA;&#xA;In the body, we range over the slice arr:&#xA;&#xA;```go&#xA;for _, v := range arr {&#xA;&#xA;}&#xA;```&#xA;&#xA;On each iteration, `v` will be the next element. The underscore ignores the index. We call the provided function `f`, passing the element `v`: `f(v)`&#xA;&#xA;So in summary:&#xA;&#xA;* It allows iterating over a slice of any type&#xA;    &#xA;* This avoids having to know the specific slice type in the loop&#xA;    &#xA;* The provided function `f` is called on each element&#xA;    &#xA;* So it provides a reusable abstraction for processing slices generically.&#xA;    &#xA;&#xA;Now, we will discuss the example used in the main function. First, we create a slice of strings as `strSlice := []string{&#34;b&#34;, &#34;e&#34;, &#34;a&#34;}`. Then we call the generic `ForEach` function, passing the string slice and a function to handle each element.&#xA;&#xA;```go&#xA;ForEach(strSlice, func(v string) {&#xA;  fmt.Println(v) &#xA;})&#xA;```&#xA;&#xA;Here, `func(v string)` is the anonymous function passed to `ForEach`, with each element bound to the `string` parameter `v`. 
The `v` is the element of the slice, so inside the function body (an anonymous function), we call `fmt.Println(v)`, which prints each string in the slice.&#xA;&#xA;```go&#xA;slice := []int{10, 25, 33, 42, 50}&#xA;var evenSlice []int&#xA;ForEach(slice, func(v int) {&#xA;    isEven := v%2 == 0&#xA;    if isEven {&#xA;        evenSlice = append(evenSlice, v)&#xA;    }&#xA;})&#xA;fmt.Println(evenSlice)&#xA;```&#xA;&#xA;In the next example, we create a new slice of int as `slice := []int{10, 25, 33, 42, 50}` and also declare another, empty slice as `var evenSlice []int`. Then we call the generic `ForEach` function again, passing the slice and a function to handle each element just as before; however, this time we process each element and append to an external slice. Here, `ForEach` is called with the `slice` slice and not the `evenSlice` slice, so we are going to access each element in the `slice` slice. We first create an `isEven` boolean that checks if the element is even or odd via `v%2 == 0`. This expression results in `true` if `v` is even and `false` otherwise. So, only if `isEven` is true do we append the element `v` to the `evenSlice` slice.&#xA;&#xA;So, that&#39;s how generic slices can be handy for doing type-specific processing without writing functions for those individual types. 
&#xA;&#xA;NOTE: Make sure to use generic functions with generic slice types under appropriate and valid conditions, and only where it makes sense to do so.&#xA;&#xA;## Creating a Generic Map&#xA;&#xA;We can also create a generic map type `map[K]V`, where `K` is the generic key type and `V` is the generic value type.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;)&#xA;&#xA;func GetValue[K comparable, V any](m map[K]V, key K, defaultVal V) V {&#xA;&#x9;if v, ok := m[key]; ok {&#xA;&#x9;&#x9;return v&#xA;&#x9;}&#xA;&#x9;return defaultVal&#xA;}&#xA;&#xA;func main() {&#xA;&#xA;&#x9;serverStats := map[string]int{&#xA;&#x9;&#x9;&#34;port&#34;:      8000,&#xA;&#x9;&#x9;&#34;pings&#34;:     47,&#xA;&#x9;&#x9;&#34;status&#34;:    1,&#xA;&#x9;&#x9;&#34;endpoints&#34;: 13,&#xA;&#x9;}&#xA;&#x9;v := GetValue(serverStats, &#34;status&#34;, -1)&#xA;&#x9;fmt.Println(v)&#xA;&#x9;v = GetValue(serverStats, &#34;cpu&#34;, 4)&#xA;&#x9;fmt.Println(v)&#xA;&#xA;}&#xA;```&#xA;&#xA;```bash&#xA;$ go run main.go&#xA;1&#xA;4&#xA;```&#xA;&#xA;[Go Playground Link](https://go.dev/play/p/ludlh6UwKaD)&#xA;&#xA;&#xA;GetValue is a generic function that takes three parameters: the map itself, the key to find the value for, and a default value for when the key doesn&#39;t exist.&#xA;&#xA;The `m` is a map with keys of type K and values of type V, the key is of type K, and defaultVal is of type V. So, we have two distinct type parameters, as the key and value need not be of the same type. `K` has the `comparable` constraint and `V` the `any` constraint. 
The type constraint `comparable` restricts `K` to be a comparable type, and the type constraint `any` allows `V` to be any type.&#xA;&#xA;* Inside the function, we use the `ok` variable to check if the given `key` exists in the map `m`.&#xA;    &#xA;* If the key is present in the map (`ok` is true), we retrieve the corresponding value `m[key]`, stored in the variable `v`, and return it.&#xA;    &#xA;* If the key is not present in the map (`ok` is false), we return the provided `defaultVal`.&#xA;    &#xA;&#xA;So, this is how we can retrieve the value for a key from any map, irrespective of its key and value types.&#xA;&#xA;NOTE: The type of `defaultVal` and the type of `v` should be the same, since the map will need to have the value for the given key as the same type as defined in the map type (here `map[string]int`, so `v` is `int` and so should be the `defaultVal`).&#xA;&#xA;Moving into the main function, we create a map of `[string]int`, i.e. the key is of type `string` and the value of type `int`. The map `serverStats` has a few keys like `port`, `pings`, `endpoints`, and `status`. We call the `GetValue` method on the map `serverStats` with the key `status` and provide a default value of `-1`. The function will readily return `1` since the key is present in the provided map. However, if we try to access the key `cpu`, the key is not present and the value returned is the `defaultVal`, which we passed as `4`.&#xA;&#xA;So, this was a simple generic getter method on a map. We can get the value of a key in any map and even provide a default value if it doesn&#39;t exist. However, this won&#39;t add the default to the map; we just return the value from the function, and the caller has to handle that returned default value itself.&#xA;&#xA;We can make another function to get or set the value of a key in a map. 
The function will take in a reference to the map rather than a copy of the map; we can then use that reference to set the key with the provided default value.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;)&#xA;&#xA;func GetOrSetValue[K comparable, V any](m *map[K]V, key K, defaultVal V) V {&#xA;&#x9;// reference the original map&#xA;&#x9;ref := *m&#xA;&#x9;if v, ok := ref[key]; ok {&#xA;&#x9;&#x9;return v&#xA;&#x9;} else {&#xA;&#x9;&#x9;// mutate the original map&#xA;&#x9;&#x9;ref[key] = defaultVal&#xA;&#xA;&#x9;&#x9;return defaultVal&#xA;&#x9;}&#xA;}&#xA;&#xA;func main() {&#xA;&#x9;serverStats := map[string]int{&#xA;&#x9;&#x9;&#34;port&#34;:      8000,&#xA;&#x9;&#x9;&#34;pings&#34;:     47,&#xA;&#x9;&#x9;&#34;status&#34;:    1,&#xA;&#x9;&#x9;&#34;endpoints&#34;: 13,&#xA;&#x9;}&#xA;&#x9;fmt.Println(serverStats)&#xA;&#x9;v := GetOrSetValue(&amp;serverStats, &#34;cpu&#34;, 4)&#xA;&#x9;fmt.Println(v)&#xA;&#x9;fmt.Println(serverStats)&#xA;}&#xA;```&#xA;&#xA;```bash&#xA;$ go run main.go&#xA;&#xA;map[endpoints:13 pings:47 port:8000 status:1]&#xA;4&#xA;map[cpu:4 endpoints:13 pings:47 port:8000 status:1]&#xA;```&#xA;&#xA;[Go Playground Link](https://go.dev/play/p/fYtjFQRaPCb)&#xA;&#xA;In the above code, we take a pointer to the map as `*map[K]V`; this gives access to the actual map (the memory address where the map is located) so we can mutate/change it. (Strictly speaking, Go maps already behave like references, so mutations through a plain map value are visible to the caller too; a pointer is only required if you need to replace the map itself.) The rest of the parameters are kept as is: the key will be taken as it was before, and so will the `defaultVal`. 
The only difference is that when the `key` doesn&#39;t exist, we set `ref[key]` to the `defaultVal` and return the `defaultVal`.&#xA;&#xA;For example, the `cpu` key is not present in the initial map `serverStats`, so when we call `GetOrSetValue` with the reference to the map `serverStats`, the key as `cpu`, and the default value as `4`, the function returns `4` and the map is mutated with the key `cpu` set to the value `4`.&#xA;&#xA;The takeaway is that you can even use references to access the original data and mutate it based on your needs.&#xA;&#xA;## Generic Type in Struct&#xA;&#xA;We can define custom structs with generic types as field values. The name of the struct is followed by `[T any]`, the type parameter to be used in the struct fields. Since a struct can have many fields, you are not limited to a single type parameter; you could have multiple type parameters bunched up, just like we saw in the map example.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;)&#xA;&#xA;type Stack[T any] struct {&#xA;&#x9;items []T&#xA;}&#xA;&#xA;func NewStack[T any]() *Stack[T] {&#xA;&#x9;return &amp;Stack[T]{}&#xA;}&#xA;&#xA;func (s *Stack[T]) Push(item T) {&#xA;&#x9;s.items = append(s.items, item)&#xA;}&#xA;&#xA;func (s *Stack[T]) Pop() T {&#xA;&#x9;if len(s.items) == 0 {&#xA;&#x9;&#x9;panic(&#34;Stack is empty&#34;)&#xA;&#x9;}&#xA;&#x9;index := len(s.items) - 1&#xA;&#x9;item := s.items[index]&#xA;&#x9;s.items = s.items[:index]&#xA;&#x9;return item&#xA;}&#xA;&#xA;func main() {&#xA;&#x9;intStack := NewStack[int]()&#xA;&#x9;intStack.Push(10)&#xA;&#x9;intStack.Push(20)&#xA;&#x9;intStack.Push(30)&#xA;&#x9;fmt.Println(&#34;Integer Stack&#34;)&#xA;&#x9;fmt.Println(intStack)&#xA;&#x9;intStack.Pop()&#xA;&#x9;intStack.Pop()&#xA;&#x9;fmt.Println(intStack)&#xA;&#xA;&#x9;// without the NewStack method&#xA;&#x9;strStack := 
Stack[string]{}&#xA;&#x9;strStack.Push(&#34;c&#34;)&#xA;&#x9;strStack.Push(&#34;python&#34;)&#xA;&#x9;strStack.Push(&#34;mojo&#34;)&#xA;&#x9;fmt.Println(&#34;String Stack:&#34;)&#xA;&#x9;fmt.Println(strStack)&#xA;&#x9;strStack.Pop()&#xA;&#x9;fmt.Println(strStack)&#xA;}&#xA;```&#xA;&#xA;```bash&#xA;$ go run main.go&#xA;&#xA;Integer Stack&#xA;&amp;{[10 20 30]}&#xA;&amp;{[10]}&#xA;&#xA;String Stack:&#xA;{[c python mojo]}&#xA;{[c python]}&#xA;```&#xA;&#xA;[Go Playground Link](https://go.dev/play/p/4t_P2mKtTZN)&#xA;&#xA;In this example, we have used the `Stack` example for doing basic `Push` and `Pop` operations on the elements. Here the type of the underlying stack elements could be anything, hence the type parameter is defined for the `items` field, which is a list/slice of the type `T` as `[]T`. We have to specify the concrete type before initializing the stack.&#xA;&#xA;We have created the `NewStack` function; it is not strictly needed, as `Stack[int]{}` could be used directly to initialize an empty stack with the int type (here `int` could be any other type you wish). I have created it just to show the abstraction that could be built upon in real applications. `NewStack` simply replaces `T` with the provided type in the initialization.&#xA;&#xA;The `Push` method is associated with the `Stack` struct, as we refer to the `*Stack[T]` receiver, indicating a reference to the Stack object with the type `T`. The method takes a parameter of type `T`, which is the element to be added to the `Stack`. Since the method is tied to the Stack struct, we can simply mutate the underlying `items` by appending the provided value `item` from the parameter using `s.items = append(s.items, item)`. This appends the `item` to the underlying `items` list in the `Stack` object `s`.&#xA;&#xA;Similarly, we can create the `Pop` method as well, which will first check that the `Stack` is not empty (i.e. 
the s.items slice has a length greater than 0), then get the index of the last element using `len(s.items) - 1` and then slice the `items` at index `[:last_index]`. This basically gets you all the elements except the last. Before we remove the element from the slice, we also store it in the `item` variable and return it from the method.&#xA;&#xA;You can see the use case of generics here: you can build complex data structures without creating a separate implementation for each type.&#xA;&#xA;## Adding Constraints to Generics&#xA;&#xA;We can add constraints to generics to restrict the type of the generic parameter. For example, we can add a constraint for the generic type to be a slice of a specific type, or, as we have seen in the map example, the `comparable` constraint.&#xA;&#xA;The `comparable` constraint is a built-in interface satisfied by all types whose values can be compared with the `==` and `!=` operators: booleans, numbers, strings, pointers, channels, and structs or arrays made up of comparable types. (Ordering operators like `&lt;` and `&gt;` are not part of `comparable` and need a more specific constraint.)&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;)&#xA;&#xA;func FindIndex[T comparable](arr []T, value T) int {&#xA;&#x9;for i, v := range arr {&#xA;&#x9;&#x9;if v == value {&#xA;&#x9;&#x9;&#x9;return i&#xA;&#x9;&#x9;}&#xA;&#x9;}&#xA;&#x9;return -1&#xA;}&#xA;&#xA;func main() {&#xA;&#xA;&#x9;strSlice := []string{&#34;m&#34;, &#34;e&#34;, &#34;e&#34;, &#34;t&#34;}&#xA;&#x9;fmt.Println(FindIndex(strSlice, &#34;e&#34;))&#xA;&#x9;fmt.Println(FindIndex(strSlice, &#34;t&#34;))&#xA;&#x9;fmt.Println(FindIndex(strSlice, &#34;a&#34;))&#xA;&#xA;&#x9;intSlice := []int{10, 25, 33, 42, 50}&#xA;&#x9;fmt.Println(FindIndex(intSlice, 33))&#xA;&#x9;fmt.Println(FindIndex(intSlice, 90))&#xA;&#xA;}&#xA;```&#xA;&#xA;```bash&#xA;$ go run main.go&#xA;1&#xA;3&#xA;-1&#xA;&#xA;2&#xA;-1&#xA;```&#xA;&#xA;[Go Playground 
Link](https://go.dev/play/p/fv9gzb8K4d7)&#xA;&#xA;In the above example, we have created the function `FindIndex` that takes in a generic slice; the type parameter `[T comparable]` indicates that the element type of the slice must satisfy the `comparable` constraint. The method takes in two parameters, one the slice as `[]T` and the other the value to find the index for as type `T`. The method returns an `int` since the index of a slice has to be an integer value.&#xA;&#xA;Inside the function body, we simply iterate over the slice `arr` and check if the element is equal to the provided value. If the element exists, we return its index, else we return `-1`.&#xA;&#xA;As we can see, we have run the function `FindIndex` over a couple of slices with the types `int` and `string`. The method returns an index value if the element exists, else it returns `-1`. `comparable` is a built-in type constraint. We can also define custom constraints as interfaces that list the permitted types, for example numeric-only or string-only constraints.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;)&#xA;&#xA;type numeric interface {&#xA;&#x9;uint | uint8 | uint16 | uint32 | uint64 |&#xA;&#x9;&#x9;int | int8 | int16 | int32 | int64 |&#xA;&#x9;&#x9;float32 | float64&#xA;}&#xA;&#xA;func Sum[T numeric](nums []T) T {&#xA;&#x9;var s T&#xA;&#x9;for _, n := range nums {&#xA;&#x9;&#x9;s += n&#xA;&#x9;}&#xA;&#x9;return s&#xA;}&#xA;&#xA;func main() {&#xA;&#xA;&#x9;intSlice := []int{10, 20, 30, 40, 50}&#xA;&#x9;fmt.Println(Sum(intSlice))&#xA;&#xA;&#x9;floatSlice := []float64{1.1, 2.2, 3.3, 4.4, 5.5}&#xA;&#x9;fmt.Println(Sum(floatSlice))&#xA;}&#xA;```&#xA;&#xA;```bash&#xA;$ go run main.go&#xA;&#xA;150&#xA;16.5&#xA;```&#xA;&#xA;[Go Playground Link](https://go.dev/play/p/_1eGl58qQ-2)&#xA;&#xA;In the above example, we have created the `numeric` 
constraint that allows the `int`, `uint`, and `float` types and their sized variants in the numeric generic type. The function `Sum` is a generic function with the `numeric` constraint on its type parameter. It takes in a parameter of type `[]T` and returns a value of type `T`. The function simply iterates over the slice and returns the sum of its elements.&#xA;&#xA;This allows any numeric type which can be added so the sum can be obtained; if we try to call the function with other types like `string` or maps, it won&#39;t compile and gives an error:&#xA;&#xA;```bash&#xA;$ go run constraints.go&#xA;&#xA;# command-line-arguments&#xA;scripts/generics/constraints.go:46:20: &#xA;string does not satisfy numeric (string missing in uint | uint8 | uint16 | uint32 | uint64 | int&#xA; | int8 | int16 | int32 | int64 | float32 | float64)&#xA;&#xA;shell returned 1&#xA;```&#xA;&#xA;So, we can use a constraint to restrict the type of the generic type parameter, which lets us restrict the usage and avoid any unsafe type being used in the generic function.&#xA;&#xA;Also, if we have a custom type defined on top of a base type, we need to use `~` before the base type to accept it into the generic constraint. This allows any type whose underlying type matches the constraint. Let&#39;s say we are implementing a custom string type; it won&#39;t be satisfied by a plain `string` constraint, since its type is `CustomString` and not `string`. 
So to avoid this, we use `~string`, which approximates the type and allows types derived from the base type.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;)&#xA;&#xA;type string2 string&#xA;&#xA;type strings interface {&#xA;&#x9;~string&#xA;}&#xA;&#xA;func PrintEach[T strings](arr T) {&#xA;&#x9;for _, v := range arr {&#xA;&#x9;&#x9;fmt.Printf(&#34;%c\n&#34;, v)&#xA;&#x9;}&#xA;}&#xA;&#xA;func main() {&#xA;&#x9;var s string2&#xA;&#x9;s = &#34;hello&#34;&#xA;&#x9;PrintEach(s)&#xA;&#xA;}&#xA;```&#xA;&#xA;```bash&#xA;$ go run main.go&#xA;&#xA;h&#xA;e&#xA;l&#xA;l&#xA;o&#xA;```&#xA;&#xA;[Go Playground Link](https://go.dev/play/p/N-66A9C94ps)&#xA;&#xA;In the above example, we have used type approximation in the type constraint `strings`; it allows any string type, whether the base `string` type or a type derived from `string`. If you try to remove the `~` in `~string`, it will result in the error that `string2 does not satisfy strings`. So, by adding `~` to the `string` type, the derived type `string2` can satisfy the generic constraint.&#xA;&#xA;That&#39;s it from the 29th part of the series; all the source code for the examples is linked on GitHub in the [100 days of Golang](https://github.com/Mr-Destructive/100-days-of-golang/tree/main/scripts/generics) repository.&#xA;&#xA;### References&#xA;&#xA;- [Generics in Go](https://bitfieldconsulting.com/golang/generics)&#xA;- [Understanding generics in Go](https://blog.logrocket.com/understanding-generics-go-1-18/)&#xA;&#xA;## Conclusion&#xA;&#xA;From this section of the series, we have covered the basics of generics in Golang: using generics in functions, slices, maps, and structs, and adding constraints to them.&#xA;&#xA;If you have any questions, feedback, or suggestions, please drop them in the comments/social handles or discuss them below. Thank you so much for reading. Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>Golang: Date and Time</title>
      <link>https://www.meetgor.com/posts/golang-date-time</link>
      <description>Introduction In the 28th post of the series, I will be exploring date and time handling in Golang. We will be covering the following topics: Date and Time parsi</description>
      <pubDate>Mon, 19 Jun 2023 00:00:00 UTC</pubDate>
<content>## Introduction&#xA;&#xA;In the 28th post of the series, I will be exploring date and time handling in Golang. We will be covering the following topics:&#xA;&#xA;* Date and Time parsing&#xA;    &#xA;* Time Duration&#xA;    &#xA;* Time and Date Arithmetic&#xA;    &#xA;* Timezones&#xA;    &#xA;* Sleep and Tickers&#xA;    &#xA;&#xA;This will cover most of the methods and properties used extensively in general use cases related to time and date operations.&#xA;&#xA;## Time package&#xA;&#xA;The Golang standard library provides the time package to handle date and time-related operations. It has a lot of methods and constants for working with time and date data.&#xA;&#xA;One of the fundamental functions to get the time in Golang is [time.Now](https://pkg.go.dev/time#Now), which returns a time object representing the current time. The return value is a [time.Time](https://pkg.go.dev/time#Time) object, the base struct that we can use to perform operations on top of.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;    &#34;fmt&#34;&#xA;    &#34;time&#34;&#xA;)&#xA;&#xA;func main(){&#xA;    now := time.Now()&#xA;    fmt.Println(now)&#xA;    fmt.Printf(&#34;%T\n&#34;, now)&#xA;}&#xA;```&#xA;&#xA;```bash&#xA;$ go run main.go&#xA;&#xA;2023-06-18 15:18:53.977607431 +0530 IST m=+0.000050291&#xA;time.Time&#xA;```&#xA;&#xA;### Create a time/date type&#xA;&#xA;You can create a specific date by providing details like year, month, day, hour, minute, second, nanosecond, and time zone.&#xA;&#xA;The [time.Date](https://pkg.go.dev/time#Date) function is used to create a time object. We will talk about location i.e. 
the timezone in a while.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;    &#34;fmt&#34;&#xA;    &#34;time&#34;&#xA;)&#xA;&#xA;func main(){&#xA;    sometime := time.Date(2020, 03, 25, 8, 5, 0, 0, time.UTC)&#xA;    fmt.Println(sometime)&#xA;    fmt.Printf(&#34;%T\n&#34;, sometime)&#xA;}&#xA;```&#xA;&#xA;```bash&#xA;$ go run main.go&#xA;&#xA;2020-03-25 08:05:00 +0000 UTC&#xA;time.Time&#xA;```&#xA;&#xA;The above example generates a custom time object as we passed the year, month, day, etc. to the function. The time zone is set to UTC; the function can also take a custom timezone, which we will be exploring further on.&#xA;&#xA;We can also get specific components from the time object using the methods available on the struct. For example, we can get the year from the time object using [Year()](https://pkg.go.dev/time#Time.Year), or the month using [Month()](https://pkg.go.dev/time#Time.Month), and so on.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;    &#34;fmt&#34;&#xA;    &#34;time&#34;&#xA;)&#xA;&#xA;func main(){&#xA;&#xA;&#x9;somedate := time.Date(2020, 03, 25, 8, 5, 0, 0, time.FixedZone(&#34;UTC&#34;, 5))&#xA;&#x9;fmt.Println(somedate)&#xA;&#xA;&#x9;today := somedate.Day()&#xA;&#x9;fmt.Println(today)&#xA;&#xA;&#x9;year := somedate.Year()&#xA;&#x9;fmt.Println(year)&#xA;&#xA;&#x9;month := somedate.Month()&#xA;&#x9;fmt.Println(month)&#xA;&#xA;&#x9;date := somedate.Day()&#xA;&#x9;fmt.Println(date)&#xA;&#xA;&#x9;weekDay := somedate.Weekday()&#xA;&#x9;fmt.Println(weekDay)&#xA;&#xA;    yearDay := somedate.YearDay()&#xA;    fmt.Println(yearDay)&#xA;}&#xA;```&#xA;&#xA;```bash&#xA;$ go run main.go&#xA;&#xA;2020-03-25 08:05:00 +0000 UTC&#xA;25&#xA;2020&#xA;March&#xA;25&#xA;Wednesday&#xA;85&#xA;```&#xA;&#xA;In the above example, we have used the methods available in the time structure to get specific components like the year, month, day, hour, etc. 
The return types of these methods are simple: `int` for the day, year, and year-day, and typed values like `time.Month` and `time.Weekday` (which print as their names) for the month and weekday.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;    &#34;fmt&#34;&#xA;    &#34;time&#34;&#xA;)&#xA;&#xA;func main(){&#xA;&#xA;    now := time.Now()&#xA;&#x9;fmt.Println(now.Format(&#34;Monday 01 January 2006 15:04:05&#34;))&#xA;&#xA;&#x9;// day month date hour:minutes:second timezone year&#xA;&#x9;fmt.Println(now.Format(time.UnixDate))&#xA;&#xA;&#x9;// day month date hour:minutes:second year&#xA;&#x9;fmt.Println(now.Format(time.ANSIC))&#xA;&#xA;&#x9;// day month date hour:minutes:second&#xA;&#x9;fmt.Println(now.Format(time.Stamp))&#xA;&#xA;&#x9;// day month date hour:minutes:second.millisecond&#xA;&#x9;fmt.Println(now.Format(time.StampMilli))&#xA;&#xA;&#x9;// day month date hour:minutes:second.microsecond&#xA;&#x9;fmt.Println(now.Format(time.StampMicro))&#xA;&#xA;&#x9;// day month date hour:minutes:second.nanosecond&#xA;&#x9;fmt.Println(now.Format(time.StampNano))&#xA;&#xA;&#x9;// day, date month year hour:minutes:second timezone&#xA;&#x9;fmt.Println(now.Format(time.RFC1123))&#xA;&#xA;&#x9;// day, date month year hour:minutes:second offset&#xA;&#x9;fmt.Println(now.Format(time.RFC1123Z))&#xA;&#xA;&#x9;// year-month-dayThour:minutes:second+-offset&#xA;&#x9;fmt.Println(now.Format(time.RFC3339))&#xA;&#xA;&#x9;// year-month-dayThour:minutes:second.nanosecond+-offset&#xA;&#x9;fmt.Println(now.Format(time.RFC3339Nano))&#xA;&#xA;&#x9;// date month year hour:minutes timezone&#xA;&#x9;fmt.Println(now.Format(time.RFC822))&#xA;&#xA;&#x9;// hour:minuteAM/PM&#xA;&#x9;fmt.Println(now.Format(time.Kitchen))&#xA;}&#xA;```&#xA;&#xA;```bash&#xA;$ go run main.go&#xA;&#xA;Sunday 06 June 2023 20:09:09&#xA;Sun Jun 18 20:09:09 IST 2023&#xA;Sun Jun 18 20:09:09 2023&#xA;Jun 18 20:09:09&#xA;Jun 18 20:09:09.086&#xA;Jun 18 20:09:09.086565&#xA;Jun 18 20:09:09.086565975&#xA;Sun, 18 Jun 2023 20:09:09 IST&#xA;Sun, 18 Jun 2023 20:09:09 
+0530&#xA;2023-06-18T20:09:09+05:30&#xA;2023-06-18T20:09:09.086565975+05:30&#xA;18 Jun 23 20:09 IST&#xA;8:09PM&#xA;```&#xA;&#xA;These are some of the time formats provided by the time package in golang. They all return a string, which cannot be parsed and resolved into components again. You can, of course, take in a custom timestamp, convert it into a time object, and then use the appropriate time format for your needs.&#xA;&#xA;## Parsing time object from a string&#xA;&#xA;We can use the [Parse](https://pkg.go.dev/time#Parse) function to parse a string into a time object. The function takes in two parameters, a layout (format) string and the string to convert; both are passed as strings. It returns the parsed time, or an error if the provided string is not in the mentioned format.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;    &#34;fmt&#34;&#xA;    &#34;time&#34;&#xA;)&#xA;&#xA;func main(){&#xA;&#xA;&#x9;customDate := &#34;2023-04-26&#34;&#xA;&#x9;t, err := time.Parse(&#34;2006-01-02&#34;, customDate)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;fmt.Println(err)&#xA;&#x9;}&#xA;    fmt.Println(customDate)&#xA;&#x9;fmt.Println(t)&#xA;&#xA;    customDate = &#34;2023-0426&#34;&#xA;    t, err = time.Parse(&#34;2006-01-02&#34;, customDate)&#xA;    if err != nil {&#xA;        fmt.Println(err)&#xA;    }&#xA;    fmt.Println(customDate)&#xA;    fmt.Println(t)&#xA;&#xA;}&#xA;```&#xA;&#xA;```bash&#xA;$ go run main.go&#xA;&#xA;2023-04-26&#xA;2023-04-26 00:00:00 +0000 UTC&#xA;&#xA;2023-0426&#xA;parsing time &#34;2023-0426&#34; as &#34;2006-01-02&#34;: cannot parse &#34;26&#34; as &#34;-&#34;&#xA;0001-01-01 00:00:00 +0000 UTC&#xA;```&#xA;&#xA;In the above examples, we have parsed a time object from a string that looks like a date. We used the `Parse` function with a layout string describing the format to parse from and the string that we want to convert. 
The layout is written in terms of a fixed reference time, `Mon Jan 2 15:04:05 MST 2006`; a layout like `2006-01-02` tells the time package which component goes where. The function returns a time object parsed according to that layout, and it can also return an error if the string is not in the provided format. In the second example, we have parsed a string in an invalid date format.&#xA;&#xA;## Time Duration field&#xA;&#xA;The time duration type represents the elapsed time between two Time objects as a 64-bit integer count of nanoseconds, so it can express everything from nanoseconds up to hours at the largest practical scale.&#xA;&#xA;### Parse duration from string&#xA;&#xA;We can use the [ParseDuration](https://pkg.go.dev/time#ParseDuration) function to parse a duration-like string into a time.Duration object. The duration object has methods like Hours, Minutes, and so on for further use in time processing.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;time&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;screentime, err := time.ParseDuration(&#34;6h30m&#34;)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;fmt.Println(err)&#xA;&#x9;&#x9;return&#xA;&#x9;}&#xA;    fmt.Printf(&#34;%T\n&#34;, screentime)&#xA;&#x9;fmt.Println(screentime.Hours())&#xA;&#x9;fmt.Println(screentime.Minutes())&#xA;&#xA;}&#xA;```&#xA;&#xA;```bash&#xA;$ go run main.go&#xA;&#xA;time.Duration&#xA;6.5&#xA;390&#xA;```&#xA;&#xA;In this example, the time duration is 6 hours 30 minutes as indicated by `6h30m`. This is parsed as a time duration; a duration string can use the units &#34;ns&#34;, &#34;us&#34; (or &#34;µs&#34;), &#34;ms&#34;, &#34;s&#34;, &#34;m&#34;, and &#34;h&#34;. We have used `h` and `m` as short forms of hours and minutes respectively. Similarly, `s` can be used for seconds, `ms` for milliseconds, `us` or `µs` for microseconds, and `ns` for nanoseconds. If you use another format it will result in errors, and yes! 
you will have to write err != nil { handle the error } syntax and debug the issue in the console (just saying)&#xA;&#xA;The [ParseDuration](https://pkg.go.dev/time#ParseDuration) method will return the [Duration](https://pkg.go.dev/time#Duration) object or will error out if the string is not in the required short duration formats. In this example, the duration is stored in the `screentime` variable, which has a few methods like `Hours`, `Minutes`, `Seconds`, and so on to extract the component time duration in that object. So, if we use `screentime.Hours()`, this will give us the total hours in that parsed duration, in this case, it is `6.5` hours.&#xA;&#xA;The return type is:&#xA;&#xA;* `float64` for [Hours](https://pkg.go.dev/time#Duration.Hours), [Minutes](https://pkg.go.dev/time#Duration.Minutes), and [Seconds](https://pkg.go.dev/time#Duration.Seconds)&#xA;    &#xA;* `int64` for [Milliseconds](https://pkg.go.dev/time#Duration.Milliseconds), [Microseconds](https://pkg.go.dev/time#Duration.Microseconds), and [Nanoseconds](https://pkg.go.dev/time#Duration.Nanoseconds)&#xA;    &#xA;&#xA;This can be useful in Linux command line applications, where we can get the duration of the application running or execution speed, etc. It can be used to get the approximate number of hours or other metrics specific to the needs.&#xA;&#xA;The duration field is more useful for calculating the difference between two Time objects. 
Some methods like [Since](https://pkg.go.dev/time#Since), [Sub](https://pkg.go.dev/time#Time.Sub), are used to get the duration between the current time and other time objects and two Time objects respectively.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;time&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#xA;&#x9;newYear := time.Date(2023, 1, 1, 0, 0, 0, 0, time.UTC)&#xA;    // current time is 2023-06-18 15:27:12 +0000 UTC&#xA;    fmt.Println(time.Now().UTC())&#xA;&#x9;fmt.Println(time.Since(newYear).Hours())&#xA;&#xA;&#x9;nextNewYear := time.Date(2024, 1, 1, 0, 0, 0, 0, time.UTC)&#xA;&#x9;fmt.Println(nextNewYear.Sub(newYear).Hours())&#xA;&#x9;fmt.Println(nextNewYear.Sub(newYear).String())&#xA;&#xA;}&#xA;```&#xA;&#xA;```bash&#xA;$ go run main.go&#xA;&#xA;2023-06-18 15:27:12 +0000 UTC&#xA;4047.4130468335657&#xA;8760              &#xA;8760h0m0s&#xA;```&#xA;&#xA;So as seen from the examples, the `Since` method calculates the difference between the current time and some other time object, and the `Sub` method calculates the difference between two Time objects. The current time is `18th June 2023` which is roughly near the middle of the year, so if we get the duration from the start of the year(1st January 2023), we get `4047.4` hours. Similarly, we can get a duration between a year, i.e. 1st January 2023 and 1st January 2024, which comes out to be `8760` hours. We can even use `Minutes`, `Seconds`, `Milliseconds`, etc. 
to get the duration in those units.&#xA;&#xA;```go&#xA;day := time.Hour * 24&#xA;fmt.Println(day)&#xA;week := time.Hour * 24 * 7&#xA;fmt.Println(week)&#xA;month := time.Hour * 24 * 30&#xA;fmt.Println(month)&#xA;&#xA;fifteenDays := day * 15&#xA;fmt.Println(fifteenDays)&#xA;```&#xA;&#xA;```bash&#xA;$ go run main.go&#xA;&#xA;24h0m0s&#xA;168h0m0s&#xA;720h0m0s&#xA;360h0m0s&#xA;```&#xA;&#xA;Here, we have created some custom units for calculating duration, like a day, which is 24 hours, obtained by multiplying an hour i.e. `time.Hour` by 24; a week is 7 \* day and a (30-day) month is 30 \* day, hence we do the calculation with precision in the duration structure. This can also be applied to `Minutes`, `Seconds`, and so on.&#xA;&#xA;## Time Zones and Locations&#xA;&#xA;Time zones are associated with every time object in the time structure. They are represented by the [Location](https://pkg.go.dev/time#Location) structure.&#xA;&#xA;[List of time zones](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones)&#xA;&#xA;If not specified, [`time.Now`](https://pkg.go.dev/time#Now)`()` sets the timezone to the Local time zone, which is picked up from the system. 
A location can map to more than one zone name and offset (for example when daylight saving time is observed), and the appropriate one is chosen for the instant in question.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;time&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;t := time.Now()&#xA;&#x9;fmt.Println(t)&#xA;&#x9;fmt.Println(t.Location())&#xA;&#xA;&#x9;newYorkTimeZone, err := time.LoadLocation(&#34;America/New_York&#34;)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;fmt.Println(err)&#xA;&#x9;}&#xA;&#x9;londonTimeZone, err := time.LoadLocation(&#34;Europe/London&#34;)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;fmt.Println(err)&#xA;&#x9;}&#xA;&#x9;newYorkTime := t.In(newYorkTimeZone)&#xA;&#x9;londonTime := t.In(londonTimeZone)&#xA;&#xA;&#x9;//local time&#xA;&#x9;fmt.Println(t)&#xA;&#xA;&#x9;// london time&#xA;    fmt.Println(londonTimeZone)&#xA;&#x9;fmt.Println(londonTime)&#xA;&#xA;&#x9;// new york time&#xA;    fmt.Println(newYorkTimeZone)&#xA;&#x9;fmt.Println(newYorkTime)&#xA;}&#xA;```&#xA;&#xA;```bash&#xA;$ go run main.go&#xA;&#xA;2023-06-19 13:58:13.232181805 +0530 IST m=+0.000044899&#xA;Local&#xA;2023-06-19 13:58:13.232181805 +0530 IST m=+0.000044899&#xA;&#xA;Europe/London&#xA;2023-06-19 09:28:13.232181805 +0100 BST&#xA;&#xA;America/New_York&#xA;2023-06-19 04:28:13.232181805 -0400 EDT&#xA;```&#xA;&#xA;The [LoadLocation](https://pkg.go.dev/time#LoadLocation) function parses the timezone name given as a string and returns the timezone/location object, or an error in case the provided timezone name is not valid.&#xA;&#xA;We can use [In(timezone)](https://pkg.go.dev/time#Time.In) to get the time in the specified timezone. In the above example, we have created a timezone/location object with the location name of the timezone as `Europe/London`. The timezone/location object has no exported/public properties or methods except the `String()` method, which returns the name of the location. 
However, a timezone object is used to get timestamps in another timezone; for instance in this example, we have used my local timezone, i.e. `IST` or `Asia/Kolkata`, to get the current time, and used the `In()` method to get the time in the `Europe/London` timezone. The `In()` method takes in a location object as a parameter and returns a `time.Time` object as the time in that provided location.&#xA;&#xA;### Creating a custom time zone&#xA;&#xA;A location object in the time package is basically a name and an offset value, so we can construct our own custom time zone location object with those parameters with the [FixedZone](https://pkg.go.dev/time#FixedZone) function. The `FixedZone` function takes in two parameters, one the name of the timezone as a string, and the other the offset in seconds east of UTC as an int.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;    &#34;fmt&#34;&#xA;    &#34;time&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#xA;    t := time.Now()&#xA;    fmt.Println(t)&#xA;&#xA;&#x9;offset := int((4*time.Hour + 30*time.Minute + 30*time.Second).Seconds())&#xA;&#x9;// 4*3600 + 30*60 + 30&#xA;&#x9;// = 14400 + 1800 + 30 = 16230&#xA;&#x9;fmt.Println(offset)&#xA;&#x9;fmt.Println(t.UTC())&#xA;&#x9;customTimeZone := time.FixedZone(&#34;SOMETZ&#34;, offset)&#xA;&#x9;fmt.Println(customTimeZone)&#xA;&#x9;fmt.Println(t.In(customTimeZone))&#xA;&#xA;}&#xA;```&#xA;&#xA;```bash&#xA;$ go run main.go&#xA;&#xA;2023-06-19 15:25:44.8624019 +0530 IST m=+0.000044899&#xA;16230&#xA;2023-06-19 09:55:44.8624019 +0000 UTC&#xA;SOMETZ&#xA;2023-06-19 14:26:14.8624019 +0430 SOMETZ&#xA;```&#xA;&#xA;So from the example, the offset is 16230 seconds, constructed by adding hours, minutes, and seconds. This is just an example; it shows you can customize the hour, minute, and second components, and you could even extend it with millisecond up to nanosecond precision, but that is a 
very niche case and might not be broadly used. To get an offset of `4h30m30s`, we have used arithmetic to get the number of hours, minutes, and seconds.&#xA;&#xA;## Arithmetic on Time&#xA;&#xA;We can perform addition (going ahead) and subtraction (going back) of time on the `time.Time` object. There are also some comparison operations that can be performed using the methods provided in the time struct.&#xA;&#xA;### Add Time and Date&#xA;&#xA;We can add time to the `time.Time` object using the [Add](https://pkg.go.dev/time#Time.Add) method. It takes a `time.Duration`, built from units like `time.Hour`, `time.Minute`, `time.Second`, and so on, and returns a copy of the time object with that duration added. Calls can be chained to add multiple units of different types, for instance hours, minutes, or seconds, in a precise way. There is also an [AddDate](https://pkg.go.dev/time#Time.AddDate) method which takes in 3 parameters as a number of years, months, and days. We can add `x` years to the existing date by calling `AddDate(x, 0, 0)` on the time object we want to add to. 
These two methods can be chained together to go from a scale of years all the way to nanosecond precision when adding time.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;time&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;t := time.Now()&#xA;&#x9;fmt.Println(t)&#xA;&#xA;&#x9;afterOneHour := t.Add(time.Hour * 1)&#xA;&#x9;fmt.Println(afterOneHour)&#xA;&#xA;&#x9;afterOneDayTwoHours30minutes := t.AddDate(0, 0, 1).Add(time.Hour * 2).Add(time.Minute * 30)&#xA;&#x9;fmt.Println(afterOneDayTwoHours30minutes)&#xA;&#xA;&#x9;afterThreeMonths15days := t.AddDate(0, 3, 15)&#xA;&#x9;fmt.Println(afterThreeMonths15days)&#xA;}&#xA;```&#xA;&#xA;```bash&#xA;$ go run main.go&#xA;&#xA;2023-06-19 15:58:32.893246798 +0530 IST m=+0.000042833&#xA;2023-06-19 16:58:32.893246798 +0530 IST m=+3600.000042833&#xA;2023-06-20 18:28:32.893246798 +0530 IST&#xA;2023-10-04 15:58:32.893246798 +0530 IST&#xA;```&#xA;&#xA;Here, we have taken a simple example of adding an hour to the existing time by saying `t.Add(time.Hour)`; this can be multiplied by the number of hours to add, in this case just one, so we simply multiply by one. In the next example, we have chained the `AddDate` and `Add` methods to get the time after 1 day, 2 hours, and 30 minutes. The `AddDate` method is passed `(0, 0, 1)`, indicating a single day, then we tune the hours with 2 and the minutes with 30 to get the desired time.&#xA;&#xA;In the last example, we have advanced the time by 3 months and 15 days, by passing `(0, 3, 15)` to the `AddDate` method.&#xA;&#xA;Time can even be subtracted, i.e. we can get behind the specified time object, by using a negative number in the `Add` method. 
Instead of saying `.Add(1 * time.Hour)` to go one hour ahead of the parsed time, we can say `.Add(-1 * time.Hour)` to go one hour behind the current time.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;    &#34;fmt&#34;&#xA;    &#34;time&#34;&#xA;)&#xA;&#xA;func main() {&#xA;    t := time.Now()&#xA;    fmt.Println(t)&#xA;&#xA;    oneHourBack := t.Add(-1 * time.Hour)&#xA;    fmt.Println(oneHourBack)&#xA;&#xA;    beforeOneYearTwoMonths := t.AddDate(-1, -2, 0)&#xA;    fmt.Println(beforeOneYearTwoMonths)&#xA;}&#xA;```&#xA;&#xA;```bash&#xA;$ go run main.go&#xA;&#xA;2023-06-19 16:23:26.992724868 +0530 IST m=+0.000044899&#xA;2023-06-19 15:23:26.992724868 +0530 IST m=-3599.999957644&#xA;2022-04-19 16:23:26.992724868 +0530 IST&#xA;```&#xA;&#xA;In this example, we have used the `Add` and `AddDate` methods with negative numbers to go back in time. In the first example, we have subtracted 1 hour from the existing time object, in the second example, we have used -1 to go back a year and -2 for going back 2 months from the current time object, hence we pass `.AddDate(-1, -2, 0)` in the example.&#xA;&#xA;### Comparing Time&#xA;&#xA;We can use `After`, `Before`, `Equal`, or `Compare` methods to compare the `Time` object in Golang. 
The `After`, `Before`, and `Equal` methods return true or false depending on whether one time comes after, before, or is equal to the other. The `Compare` method returns -1 if the time is before the other, 0 if the two times are equal, and 1 if it is after.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;    &#34;fmt&#34;&#xA;    &#34;time&#34;&#xA;)&#xA;&#xA;func main() {&#xA;    t := time.Now()&#xA;    fmt.Println(t)&#xA;    &#xA;    afterOneHour := t.Add(time.Hour * 1)&#xA;    fmt.Println(afterOneHour)&#xA;&#xA;&#x9;isNowAfter := t.After(afterOneHour)&#xA;&#x9;isOneAfter := afterOneHour.After(t)&#xA;&#x9;fmt.Println(isNowAfter)&#xA;&#x9;fmt.Println(isOneAfter)&#xA;&#xA;&#x9;isNowBefore := t.Before(afterOneHour)&#xA;&#x9;isOneBefore := afterOneHour.Before(t)&#xA;&#x9;fmt.Println(isNowBefore)&#xA;&#x9;fmt.Println(isOneBefore)&#xA;}&#xA;```&#xA;&#xA;```bash&#xA;$ go run main.go&#xA;&#xA;2023-06-19 16:38:25.785629649 +0530 IST m=+0.000051616&#xA;2023-06-19 17:38:25.785629649 +0530 IST m=+3600.000051616&#xA;&#xA;// is now after one hour?&#xA;false&#xA;&#xA;// is one hour after now?&#xA;true &#xA;&#xA;// is now before after one hour?&#xA;true &#xA;&#xA;// is after one hour before now?&#xA;false&#xA;```&#xA;&#xA;These are some ridiculous examples, but it could be any date comparison like billing periods, subscription dues, etc. 
We can compare dates with [After](https://pkg.go.dev/time#Time.After), [Before](https://pkg.go.dev/time#Time.Before), and [Equal](https://pkg.go.dev/time#Time.Equal) methods provided in the time package.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;    &#34;fmt&#34;&#xA;    &#34;time&#34;&#xA;)&#xA;&#xA;func main() {&#xA;    t := time.Now()&#xA;    fmt.Println(t)&#xA;&#xA;    afterOneHour := t.Add(time.Hour * 1)&#xA;    fmt.Println(afterOneHour)&#xA;&#xA;&#x9;isNowEqual := t.Equal(afterOneHour)&#xA;&#x9;isEqual := time.Now().Equal(t)&#xA;&#x9;fmt.Println(isNowEqual)&#xA;&#x9;fmt.Println(isEqual)&#xA;&#xA;&#x9;londonTimeZone, err := time.LoadLocation(&#34;Europe/London&#34;)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;fmt.Println(err)&#xA;&#x9;}&#xA;&#x9;londonTime := t.In(londonTimeZone)&#xA;&#xA;&#x9;fmt.Println(t)&#xA;&#x9;fmt.Println(londonTime)&#xA;&#x9;fmt.Println(t.Equal(londonTime))&#xA;}&#xA;```&#xA;&#xA;```bash&#xA;$ go run main.go&#xA;&#xA;2023-06-19 16:38:25.785629649 +0530 IST m=+0.000051616&#xA;2023-06-19 17:38:25.785629649 +0530 IST m=+3600.000051616&#xA;&#xA;// is now equal to one hour after now?&#xA;false&#xA;&#xA;// is t equal to now?&#xA;false&#xA;&#xA;2023-06-19 16:49:25.509421027 +0530 IST m=+0.000040200&#xA;2023-06-19 12:19:25.509421027 +0100 BST&#xA;&#xA;// is london time equal to now in local?&#xA;true&#xA;```&#xA;&#xA;Here in the examples, we have checked if two Time objects are equal or not, we have checked if a current time is equal to one hour after time, and it returns false rightly so. 
In the next example, we create a new time object with [`time.Now`](https://pkg.go.dev/time#Now) and compare it with the previous now; there will be a tiny difference (a few nanoseconds) between the two initializations, and hence it gives false as the return value.&#xA;&#xA;In the next example, we take the current time and the current time in the London location in the `BST` timezone; this gets the result as `true` because the instant is the same despite being in different time zones.&#xA;&#xA;## Sleep Time&#xA;&#xA;We can even sleep for a certain duration of time with the [Sleep](https://pkg.go.dev/time#Sleep) function in the time package; this halts the execution of the program.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;time&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;t1 := time.Now()&#xA;&#x9;time.Sleep(time.Second * 3)&#xA;&#x9;t2 := time.Now()&#xA;&#x9;duration := t2.Sub(t1)&#xA;&#x9;fmt.Println(duration)&#xA;}&#xA;```&#xA;&#xA;```bash&#xA;$ go run main.go&#xA;3.000235856s&#xA;```&#xA;&#xA;In this example, we have used the `Sleep()` function with a duration of 3 seconds, which will halt the execution of the program (or the goroutine) for that duration. We also printed the duration for which the flow was halted by taking a timestamp before and after the sleep call and then taking the difference between the two timestamps. This rightly gives us a value of three seconds, which is the duration we passed to the `Sleep` function.&#xA;&#xA;## Tickers&#xA;&#xA;Tickers are basically like a clock that ticks at regular intervals. They are a mechanism provided by the package to execute code repeatedly at fixed time intervals. The [Ticker](https://pkg.go.dev/time#Ticker) is a type and it has a few methods associated with it for handling these ticks. The [NewTicker](https://pkg.go.dev/time#NewTicker) function is used to create a ticker, with the parameter being the interval between ticks, i.e. the frequency of its ticking.
It will only stop when the [.Stop](https://pkg.go.dev/time#Ticker.Stop) method is called on that ticker object.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;time&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;ticker := time.NewTicker(time.Second * 2)&#xA;&#x9;counter := 0&#xA;&#x9;for {&#xA;&#x9;&#x9;select {&#xA;&#x9;&#x9;case &lt;-ticker.C:&#xA;&#x9;&#x9;&#x9;// api calls, call to database after specific time intervals&#xA;&#x9;&#x9;&#x9;counter++&#xA;&#x9;&#x9;&#x9;fmt.Println(&#34;Tick&#34;, counter)&#xA;&#x9;&#x9;}&#xA;&#x9;&#x9;if counter == 5 {&#xA;&#x9;&#x9;&#x9;ticker.Stop()&#xA;&#x9;&#x9;&#x9;return&#xA;&#x9;&#x9;}&#xA;&#x9;}&#xA;}&#xA;```&#xA;&#xA;```bash&#xA;$ go run main.go&#xA;&#xA;Tick 1&#xA;Tick 2&#xA;Tick 3&#xA;Tick 4&#xA;Tick 5&#xA;```&#xA;&#xA;In the example above, we create a ticker with a frequency of 2 seconds and initialize a counter to 0. We run an infinite for loop, and on every tick the ticker sends a value on the channel `.C`; receiving from this channel tells us a tick has happened, and we enter the case to execute any logic/code. We increment the counter, and when the counter reaches 5 we stop the ticker and break out of the loop with either a `break` or `return`.&#xA;&#xA;So, that is how tickers can be used. For example, when sending requests to APIs that have rate limiting, a ticker makes sure we don&#39;t flood the API and only hit it at specific intervals.&#xA;&#xA;That&#39;s it from the 28th part of the series, all the source code for the examples is linked on GitHub in the [100 days of Golang](https://github.com/Mr-Destructive/100-days-of-golang/tree/main/scripts/date-time) repository.&#xA;&#xA;## Conclusion&#xA;&#xA;So, in conclusion, we have seen how to use the time package in Go. We covered time and date parsing, timezone, date comparison, sleep, and tickers.
This should give you a good overview of the time package in Go.&#xA;&#xA;If you have any questions, feedback, or suggestions, feel free to drop them in the comments section or on the social handles. Thank you for reading. Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>Golang: Random Numbers</title>
      <link>https://www.meetgor.com/posts/golang-random-numbers</link>
      <description>Introduction In the 27th post of the series, we will be looking into random number generation in golang. We will be exploring how to create a random number, gen</description>
      <pubDate>Sun, 14 May 2023 00:00:00 UTC</pubDate>
<content>## Introduction&#xA;&#xA;In the 27th post of the series, we will be looking into random number generation in golang. We will be exploring how to create a random number, generate random numbers within a range, shuffle slices/arrays, and generate random strings, floats, and bytes.&#xA;&#xA;There are two types of random number generation processes in software: pseudo-random number generation and cryptographically secure pseudo-random number generation.&#xA;&#xA;The [math/rand](https://pkg.go.dev/math/rand) package in Golang provides a number of functions for generating pseudorandom numbers. These functions are suitable for applications such as games and simulations, but they should not be used for anything security-sensitive.&#xA;&#xA;The [crypto/rand](https://pkg.go.dev/crypto/rand) package in Golang generates cryptographically secure random numbers. The numbers are generated in a way that is very difficult to predict or reproduce, as the package reads from the operating system&#39;s secure source of randomness.&#xA;&#xA;## Creating a Random Source&#xA;&#xA;We need to first create a source/seed to generate a random number. If we do not add a new source each time we run the program or generate a random number, it will pick up the same source, leading to the generation of the same pattern of random numbers.&#xA;&#xA;We use the [rand.NewSource](https://pkg.go.dev/math/rand#NewSource) method to generate a new source for the random number generator, by initializing it with the current time in nanoseconds.
Further, we need to create a [Rand](https://pkg.go.dev/math/rand#Rand) object for accessing the methods and properties attached to the struct type in the rand package for generating random numbers.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;math/rand&#34;&#xA;&#x9;&#34;time&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;source := rand.NewSource(time.Now().UnixNano())&#xA;&#x9;rand_source := rand.New(source)&#xA;&#x9;for i := 0; i &lt; 5; i++ {&#xA;&#x9;&#x9;rand_num := rand_source.Int()&#xA;&#x9;&#x9;fmt.Println(rand_num)&#xA;&#x9;}&#xA;}&#xA;```&#xA;&#xA;```bash&#xA;$ go run rand_source.go &#xA;2651653079934875120&#xA;5510445616427469234&#xA;3817011159463415912&#xA;5708754224362255659&#xA;7511308401304127761&#xA;```&#xA;&#xA;In the above example, we use the `NewSource` and the `New` method to generate the `Source` and the `Rand` object respectively. Further, for demonstration, we use the [Int](https://pkg.go.dev/math/rand#Rand.Int) method to generate a 64-bit random integer 5 times with a for loop.&#xA;&#xA;As we can see, it generates 5 random numbers; we will see why we need the random source initialized to the current time in the upcoming section.&#xA;&#xA;## Generating Random Numbers&#xA;&#xA;Random numbers, as the name suggests, are used to get an unpredictable number; however, using software we can only mimic actual randomness. The process is called pseudo-random number generation. There is a particular pattern in the numbers; however, it is sufficient for trivial tasks in games and, to some extent, simulations. Actual cryptographic random numbers should be used for security tasks, crypto arithmetic, and other sensitive tasks.&#xA;&#xA;Golang provides built-in packages for both: pseudo-random numbers with math/rand and cryptographic random numbers with crypto/rand.
This package contains a number of functions for generating random integers, floats, and strings.&#xA;&#xA;### Random Numbers&#xA;&#xA;To simply generate a random number we can use the `rand.Int` method from the [math/rand](https://pkg.go.dev/math/rand#Int) package to get a single random integer.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;math/rand&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;fmt.Println(rand.Int())&#xA;&#x9;num := rand.Int()&#xA;&#x9;fmt.Printf(&#34;%T %d\n&#34;, num, num)&#xA;}&#xA;```&#xA;&#xA;```bash&#xA;$ go run main.go&#xA;5577006791947779410&#xA;int 8674665223082153551&#xA;&#xA;$ go run main.go&#xA;5577006791947779410&#xA;int 8674665223082153551&#xA;```&#xA;&#xA;In the above code, we have generated a couple of random numbers with the `rand.Int` method. The method returns a random 64-bit integer. If you run the program a few times, you can see the numbers are the same, so how exactly are they random?&#xA;&#xA;They are not random yet; we need to create a new seed/source each time we run the program in order to generate a new pattern of digits each time, i.e. a pseudo-random number.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;math/rand&#34;&#xA;&#x9;&#34;time&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;r := rand.New(rand.NewSource(time.Now().UnixNano()))&#xA;&#xA;&#x9;fmt.Println(r.Int())&#xA;&#x9;num := r.Int()&#xA;&#x9;fmt.Printf(&#34;%T %d\n&#34;, num, num)&#xA;}&#xA;```&#xA;&#xA;```bash&#xA;$ go run main.go&#xA;7524437893560534176&#xA;int 5023070266853767708&#xA;&#xA;$ go run main.go&#xA;8935404877937414882&#xA;int 209789266380754935&#xA;```&#xA;&#xA;Now, we can see that after the `rand.New(rand.NewSource(time.Now().UnixNano()))` function call, the numbers generated are different each time we run the program.
This is because we initialize the source of the random number generator to the current time in nanoseconds.&#xA;&#xA;### Random Numbers in a Range&#xA;&#xA;The above numbers are too big, what if we want the random numbers to be in a specific range? This is quite a common thing to do, and hence there is a function like [rand.Intn](https://pkg.go.dev/math/rand#Intn) where we can specify the bound up to which the function should generate the random numbers.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;math/rand&#34;&#xA;&#x9;&#34;time&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;r := rand.New(rand.NewSource(time.Now().UnixNano()))&#xA;&#xA;&#x9;for i := 0; i &lt; 10; i++ {&#xA;&#x9;&#x9;// generate an integer between 0 and 5&#xA;&#x9;&#x9;dice_throw := r.Intn(6)&#xA;&#x9;&#x9;// move the offset of 0 by adding 1&#xA;&#x9;&#x9;fmt.Println(dice_throw + 1)&#xA;&#x9;}&#xA;}&#xA;```&#xA;&#xA;```bash&#xA;$ go run main.go&#xA;1&#xA;2&#xA;5&#xA;6&#xA;6&#xA;3&#xA;6&#xA;1&#xA;4&#xA;2&#xA;```&#xA;&#xA;In the above code, we have used the `r.Intn(upper_range int)` method to generate a random number between 0 and the provided range, so if we give the parameter 6 to the method, as `r.Intn(6)`, it would generate numbers between 0 and 5. The range is half-open, i.e. [0, 6), so the upper bound 6 is not included. Thereby the numbers generated will be either 0, 1, 2, 3, 4, or 5. So to remove the offset of 0, we add 1.&#xA;&#xA;Hence we get some pseudo-random numbers between 1 and 6.
I have used a for loop that generates 10 such numbers.&#xA;&#xA;### Cryptographic Random Numbers&#xA;&#xA;The above method was a pseudo-random number generator; for more robust random number generation, we can use the [crypto/rand](https://pkg.go.dev/crypto/rand) package, which is more secure and suitable for sensitive operations.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;crypto_rand &#34;crypto/rand&#34;&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;math/big&#34;&#xA;)&#xA;&#xA;func Handle_error(err error) {&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;}&#xA;&#xA;func main() {&#xA;&#x9;// Cryptographic random numbers&#xA;&#x9;var max *big.Int = big.NewInt(6)&#xA;&#x9;// big is a package for high-precision arithmetic&#xA;&#x9;for i := 0; i &lt; 10; i++ {&#xA;&#x9;&#x9;crypt_rand_num, err := crypto_rand.Int(crypto_rand.Reader, max)&#xA;&#x9;&#x9;Handle_error(err)&#xA;&#x9;&#x9;// Move the offset of 0 by adding 1&#xA;&#x9;&#x9;crypt_num := crypt_rand_num.Add(crypt_rand_num, big.NewInt(1))&#xA;&#x9;&#x9;fmt.Println(crypt_num)&#xA;&#x9;}&#xA;&#xA;}&#xA;```&#xA;&#xA;```bash&#xA;$ go run main.go&#xA;3&#xA;5&#xA;5&#xA;1&#xA;5&#xA;5&#xA;4&#xA;2&#xA;3&#xA;6&#xA;```&#xA;&#xA;In the above example, we have used the [math/big](https://pkg.go.dev/math/big) package to store the random number generated by the [Int](https://pkg.go.dev/crypto/rand#Int) method in the `crypto/rand` package. We create a new integer from the `big` package. The [NewInt](https://pkg.go.dev/math/big#NewInt) function returns a pointer to the integer. So, we pass the integer 6, which will create a memory location storing 6 as a [big.Int](https://pkg.go.dev/math/big#Int) type. We use the max variable name as it denotes the maximum bound for the number generated in the next step.&#xA;&#xA;Then we can use the [crypto/rand](https://pkg.go.dev/crypto/rand) package to generate cryptographic random numbers.
The package has an [Int](https://pkg.go.dev/crypto/rand#Int) method to generate a `big.Int` type of number in a given range. It takes in a [Reader](https://pkg.go.dev/crypto/rand#pkg-variables) object that is global in the package and used as a shared instance of a cryptographically secure random number generator. This means it can be used as the platform&#39;s default random number generator in the program.&#xA;&#xA;So, the `crypto_rand.Int` method takes in two parameters: the `Reader` object, which will be the platform-specific random number generator/api, and the max range up to which to generate the random number. The method returns a `big.Int` type. This is the cryptographic random number.&#xA;&#xA;However, we again have the range from 0 to 5, so we just add 1 to the `big.Int` type by using the [Add](https://pkg.go.dev/math/big#Int.Add) method associated with the `big.Int` type via the `crypt_rand_num` variable, and pass two parameters as x and y, i.e. the two numbers to add. So, we just pass the `crypt_rand_num` and a new integer `1`. This adds the numbers and we store the result in the `crypt_num` variable. This is how we generate a cryptographic random number between 1 and 6.&#xA;&#xA;### Random Float&#xA;&#xA;We can even generate random values of other types like float.
There are quite a few variations like [Float32](https://pkg.go.dev/math/rand#Float32), [Float64](https://pkg.go.dev/math/rand#Float64), [ExpFloat64](https://pkg.go.dev/math/rand#ExpFloat64), and [NormFloat64](https://pkg.go.dev/math/rand#NormFloat64).&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;math/rand&#34;&#xA;&#x9;&#34;time&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;r := rand.New(rand.NewSource(time.Now().UnixNano()))&#xA;&#xA;&#x9;rand_float32 := r.Float32()&#xA;&#x9;fmt.Println(rand_float32)&#xA;&#xA;&#x9;rand_float64 := r.Float64()&#xA;&#x9;fmt.Println(rand_float64)&#xA;&#xA;&#x9;rand_exp_float := r.ExpFloat64()&#xA;&#x9;fmt.Println(rand_exp_float)&#xA;&#xA;&#x9;rand_norm_float := r.NormFloat64()&#xA;&#x9;fmt.Println(rand_norm_float)&#xA;&#xA;&#x9;for i := 0; i &lt; 5; i++ {&#xA;&#x9;&#x9;rand_float := r.Float32()&#xA;&#x9;&#x9;fmt.Println(rand_float)&#xA;&#x9;}&#xA;}&#xA;```&#xA;&#xA;```bash&#xA;$ go run main.go&#xA;&#xA;0.08891088&#xA;0.9218221078616824&#xA;1.8237338579299396&#xA;-0.30238778160373464&#xA;&#xA;0.65474856&#xA;0.65964687&#xA;0.39930198&#xA;0.8043338&#xA;0.17894344&#xA;```&#xA;&#xA;We have used 4 methods for generating a random float. We have two variations of the float depending on the size, a 32 or 64-bit number. We get 32-bit and 64-bit random floats from `r.Float32` and `r.Float64` respectively.&#xA;&#xA;The `r.ExpFloat64` function returns an exponentially distributed float64 in the range from 0 to +math.MaxFloat64, with the rate parameter (and hence the mean) as 1. If you want to change the distribution&#39;s rate parameter, it can be done by dividing the generated number by the desired rate parameter.&#xA;&#xA;The `r.NormFloat64` function returns a normally distributed float64 in the range from -math.MaxFloat64 to +math.MaxFloat64, with mean 0 and standard deviation 1.
This can also be changed by multiplying the generated number by the desired standard deviation and then adding the desired mean.&#xA;&#xA;## Generating Random Strings&#xA;&#xA;We can generate a random string of a specific length. We can generate a random number between 0 and 25 and then add 97 for lowercase ASCII characters or add 65 for uppercase characters. So, for lowercase, we generate a random number between 97 and 122, which can be cast to a string/rune to get the string equivalent of the number.&#xA;&#xA;This gets us a single random character, and thereby we use a for loop to generate a fixed-length random string, similarly for upper case and lower case characters.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;math/rand&#34;&#xA;&#x9;&#34;time&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;r := rand.New(rand.NewSource(time.Now().UnixNano()))&#xA;&#xA;&#x9;// Random string&#xA;&#x9;randomLowerCase := make([]rune, 6)&#xA;&#x9;randomUpperCase := make([]rune, 6)&#xA;&#x9;for i := range randomLowerCase {&#xA;&#x9;&#x9;randomLowerCase[i] = rune(r.Intn(26) + 97)&#xA;&#x9;&#x9;randomUpperCase[i] = rune(r.Intn(26) + 65)&#xA;&#x9;}&#xA;&#x9;randomLowerCaseStr := string(randomLowerCase)&#xA;&#x9;randomUpperCaseStr := string(randomUpperCase)&#xA;&#xA;&#x9;fmt.Println(randomLowerCase)&#xA;&#x9;fmt.Println(randomLowerCaseStr)&#xA;&#x9;fmt.Println(randomUpperCase)&#xA;&#x9;fmt.Println(randomUpperCaseStr)&#xA;&#xA;}&#xA;```&#xA;&#xA;```bash&#xA;$ go run main.go&#xA;[100 113 122 97 107 101]&#xA;dqzake&#xA;[86 81 88 76 66 74]&#xA;VQXLBJ&#xA;&#xA;$ go run main.go&#xA;[116 115 120 97 100 111]&#xA;tsxado&#xA;[80 74 66 83 77 66]&#xA;PJBSMB&#xA;```&#xA;&#xA;We first create an empty, 0-initialized rune slice with length 6; it can be any length as per your requirement. Then we create a for loop iterating over that slice and set each rune to the result of the `r.Intn` method with range 26, adding 97 for lower case letters and 65 for upper case letters.
This generates an integer between 97 and 122 that is typecast to a rune in the slice. Similarly, it generates an integer between 65 and 90, which is typecast to a rune.&#xA;&#xA;## Shuffling Arrays&#xA;&#xA;We can use [rand.Shuffle](https://pkg.go.dev/math/rand#Shuffle) and [rand.Perm](https://pkg.go.dev/math/rand#Perm) to shuffle and create a random permutation of a particular list/slice of elements.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;math/rand&#34;&#xA;&#x9;&#34;time&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;r := rand.New(rand.NewSource(time.Now().UnixNano()))&#xA;&#xA;&#x9;fmt.Println(r.Perm(10))&#xA;&#xA;&#x9;arr := []int{1, 2, 3, 4, 5, 6}&#xA;&#x9;fmt.Println(&#34;Before shuffle:&#34;, arr)&#xA;&#x9;r.Shuffle(len(arr), func(i, j int) {&#xA;&#x9;&#x9;arr[i], arr[j] = arr[j], arr[i]&#xA;&#x9;})&#xA;&#x9;fmt.Println(&#34;After shuffle:&#34;, arr)&#xA;}&#xA;```&#xA;&#xA;```bash&#xA;$ go run main.go&#xA;&#xA;[8 1 9 3 7 2 0 6 5 4]&#xA;&#xA;Before shuffle: [1 2 3 4 5 6]&#xA;After shuffle: [4 6 5 3 2 1]&#xA;```&#xA;&#xA;In the above example, we have used the `rand.Perm` method to create a permutation of the numbers 0 to n-1, where n is passed as a parameter. So, it generates a permutation of the numbers from 0 to 9 if we pass 10 as the parameter to the method. It returns a slice of int.&#xA;&#xA;We have also used the `rand.Shuffle` method to shuffle an already existing slice of elements. This is not restricted to int; it can be a slice of any type. The method takes in two parameters: the length of the slice/array and the swap function, which is an anonymous function.&#xA;&#xA;In the example, I have created an arr slice with 6 elements; it could be any number, but for demonstration, I have initialized the slice with the numbers 1 to 6.
The `rand.Shuffle` method is passed the length of `arr` as `len(arr)` and an anonymous function that takes in two integers as the indices of the array; inside the function, we swap the elements of the array. This can be modified as per the requirement, but this is the basic swap function for the shuffle method.&#xA;&#xA;So, in the output, we print the shuffled array, which now looks like a random array of numbers.&#xA;&#xA;We can use the `Perm` method to generate a random list of indices of an array, and then index the string with it, to generate a randomly shuffled string.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;math/rand&#34;&#xA;&#x9;&#34;time&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;r := rand.New(rand.NewSource(time.Now().UnixNano()))&#xA;&#xA;&#x9;letters := &#34;abcdefghijklmnopqrstuvwxyz&#34;&#xA;&#x9;shuffled := r.Perm(len(letters))&#xA;&#xA;&#x9;result := make([]byte, len(letters))&#xA;&#x9;for i, randIndex := range shuffled {&#xA;&#x9;&#x9;result[i] = letters[randIndex]&#xA;&#x9;}&#xA;&#x9;rand_str := string(result)&#xA;&#x9;fmt.Println(rand_str)&#xA;&#x9;// random string of length 10&#xA;&#x9;fmt.Println(rand_str[:10])&#xA;}&#xA;```&#xA;&#xA;```bash&#xA;$ go run main.go&#xA;yeinvkdbfqomacrzhtswgxulpj&#xA;yeinvkdbfq&#xA;&#xA;$ go run main.go&#xA;tvakbgnjprwiofquxlzecdshym&#xA;tvakbgnjpr&#xA;```&#xA;&#xA;In the above example, we have first created the string with all the alphabet letters and then created a random permutation of the length of that string, i.e. 26. This creates a random permutation of the numbers from 0 to 25. This is a list of numbers, which can be used as indices into the string, to produce a shuffled string.&#xA;&#xA;We create a for loop over the permutation and assign each position the character at the random index. This basically jumbles the order of the elements/characters in the string.
We can then truncate or slice the string to any length as per the requirement.&#xA;&#xA;## Random Read Bytes&#xA;&#xA;There is another way to generate a slice of bytes/string: with the [Read](https://pkg.go.dev/math/rand#Rand.Read) method. We have used the `Read` method in the cryptographic random number generation part. The Read method fills a given slice of bytes with random bytes.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;crypto_rand &#34;crypto/rand&#34;&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;math/rand&#34;&#xA;&#x9;&#34;time&#34;&#xA;)&#xA;&#xA;func Handle_error(err error) {&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;}&#xA;&#xA;func main() {&#xA;&#x9;r := rand.New(rand.NewSource(time.Now().UnixNano()))&#xA;&#x9;rand_byte := make([]byte, 5)&#xA;&#x9;fmt.Println(rand_byte)&#xA;&#x9;_, err := r.Read(rand_byte)&#xA;&#x9;Handle_error(err)&#xA;&#x9;fmt.Println(rand_byte)&#xA;&#x9;fmt.Printf(&#34;%c\n&#34;, rand_byte)&#xA;&#xA;&#x9;crypt_rand_byte := make([]byte, 5)&#xA;&#x9;fmt.Println(crypt_rand_byte)&#xA;&#x9;_, err = crypto_rand.Read(crypt_rand_byte)&#xA;&#x9;Handle_error(err)&#xA;&#x9;fmt.Println(crypt_rand_byte)&#xA;&#x9;fmt.Printf(&#34;%c\n&#34;, crypt_rand_byte)&#xA;&#xA;}&#xA;```&#xA;&#xA;```bash&#xA;$ go run main.go&#xA;&#xA;[0 0 0 0 0]&#xA;[88 53 113 116 251]&#xA;[X 5 q t û]&#xA;&#xA;[0 0 0 0 0]&#xA;[37 90 42 93 96]&#xA;[% Z * ] `] &#xA;```&#xA;&#xA;We have demonstrated usage of both the packages, i.e. math/rand and crypto/rand, for the generation of random bytes. In the example above, we initialize a slice of bytes `rand_byte` and use the `Read` method, which takes in the slice of bytes as the parameter and returns two things: the number of bytes it read, and the error object if there is any, or nil. We do not care how many bytes it read right now, so we discard that value with `_`.
It mutates/modifies the byte slice, and the slice elements are then random byte values.&#xA;&#xA;We can print the slice of bytes as a string with `%s`, or each character in the bytes using the `%c` format specifier. The generated bytes are between 0 and 255, which includes ASCII and other Unicode code points.&#xA;&#xA;Similarly, for the crypto/rand package, we create a slice of bytes with size 5 and use the crypto/rand package&#39;s [Read](https://pkg.go.dev/crypto/rand#Read) method directly to fill the slice with random bytes.&#xA;&#xA;That&#39;s it from the 27th part of the series, all the source code for the examples is linked on GitHub in the [100 days of Golang](https://github.com/Mr-Destructive/100-days-of-golang/tree/main/scripts/random-num) repository.&#xA;&#xA;## Conclusion&#xA;&#xA;In this part of the series, we explored random number generation in golang; packages like math/rand, math/big, and crypto/rand were used for the examples covering random numbers, cryptographically secure random numbers, random strings, and shuffling of arrays.&#xA;&#xA;So, hopefully, you found the article useful; if you have any queries, questions, or feedback, or spot any mistakes in the article, you can let me know in the discussion or on my social handles. Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>Golang: Regex</title>
      <link>https://www.meetgor.com/posts/golang-regex</link>
      <description>Introduction In this 26th part of the series, we will be covering the basics of using regular expressions in golang. This article will cover the basic operation</description>
      <pubDate>Wed, 22 Mar 2023 00:00:00 UTC</pubDate>
<content>## Introduction&#xA;&#xA;In this 26th part of the series, we will be covering the basics of using regular expressions in golang. This article will cover basic operations like matching, finding, replacing, and sub-matches with a regular expression pattern from a string source or file content. There will be examples for each of the concepts, and similar variants follow the same ideology, for self-exploring the syntax.&#xA;&#xA;## Regex in golang&#xA;&#xA;So, let&#39;s start with what regular expressions are.&#xA;&#xA;&gt; Regular expressions are basic building blocks for searching, pattern matching, and manipulation from a source of text using computational logic.&#xA;&#xA;This is not the formal definition, but I have written it in my own words based on my understanding of regular expressions so far. It might not be accurate, but it makes sense to me after I had played with and explored them (not fully).&#xA;&#xA;So regular expressions use some pattern-matching techniques using basic logic operators like concatenation, quantifiers, etc. These relate to the study of the theory of computation quite closely, but you don&#39;t need to get into too much theory in order to understand the working of regular expressions. However, it won&#39;t harm you if you are curious about it and want to explore it further.&#xA;&#xA;Some resources to learn the fundamentals of regular expressions:&#xA;&#xA;* [Regular Expressions LMU notes](https://cs.lmu.edu/~ray/notes/regex/)&#xA;    &#xA;* [RegexOne](https://regexone.com/)&#xA;    &#xA;&#xA;## Regexp package&#xA;&#xA;We will be using the [regexp](https://pkg.go.dev/regexp) package in the golang standard library, which provides important methods for quick and neat pattern matching and searching.
It provides a `Regexp` type and a lot of methods on top of it to perform matching, finding, replacing, and sub-matches in the source text.&#xA;&#xA;The package also supports two sets of methods, for strings and slices of bytes, serving different purposes and use cases; this is useful for reading from a buffer, file, etc., and is also flexible enough to search a simple string.&#xA;&#xA;## Matching Patterns&#xA;&#xA;One of the fundamental aspects of regular expressions is to check whether a particular pattern is present in a source string. The `regexp` package provides methods like [Match](https://pkg.go.dev/regexp#Match) and [MatchString](https://pkg.go.dev/regexp#MatchString), which match a pattern string against a slice of bytes and a string respectively.&#xA;&#xA;### Matching Strings&#xA;&#xA;The most basic operation with a regex or regular expression is to check whether the pattern matches a given string.&#xA;&#xA;In golang, the [regexp](https://pkg.go.dev/regexp) package provides a few functions to simply match expressions with strings or text.
Some of the easy-to-understand ones include the [MatchString](https://pkg.go.dev/regexp#MatchString) and [Match](https://pkg.go.dev/regexp#Match) methods.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;log&#34;&#xA;&#x9;&#34;regexp&#34;&#xA;)&#xA;&#xA;func log_error(err error) {&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;log.Fatal(err)&#xA;&#x9;}&#xA;}&#xA;&#xA;func main() {&#xA;&#xA;&#x9;str := &#34;gophers of the goland&#34;&#xA;&#x9;is_matching, err := regexp.MatchString(&#34;go&#34;, str)&#xA;&#x9;log_error(err)&#xA;&#x9;log.Println(is_matching)&#xA;&#x9;is_matching, err = regexp.MatchString(&#34;no&#34;, str)&#xA;&#x9;log_error(err)&#xA;&#x9;log.Println(is_matching)&#xA;&#xA;}&#xA;```&#xA;&#xA;```console&#xA;$ go run main.go&#xA;&#xA;true&#xA;false&#xA;```&#xA;&#xA;In the above code, we have used the `MatchString` method that takes in two parameters: the string/pattern to find, and the source string. The function returns a boolean, `true` or `false`, indicating whether the pattern is present in the source string; it might also return an error if the pattern (the first parameter) is an invalid regular expression.&#xA;&#xA;So, we can clearly see the string `go` is present in the string `gophers of the goland`, and the string `no` is not a substring.&#xA;&#xA;We also have the `Match` method, which is a more general version of `MatchString`; it accepts a slice of bytes rather than a string as the source.
The first parameter is still a pattern string, but the second parameter is a slice of bytes.&#xA;&#xA;```go&#xA;is_matching, err = regexp.Match(`.*land`, []byte(&#34;goland is a land of gophers&#34;))&#xA;log_error(err)&#xA;log.Println(is_matching)&#xA;```&#xA;&#xA;```console&#xA;$ go run main.go&#xA;&#xA;true&#xA;```&#xA;&#xA;We can use the `Match` method to simply pass a slice of bytes to use as the source text to check the pattern against.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;log&#34;&#xA;&#x9;&#34;os&#34;&#xA;&#x9;&#34;regexp&#34;&#xA;)&#xA;&#xA;func log_error(err error) {&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;log.Fatal(err)&#xA;&#x9;}&#xA;}&#xA;&#xA;func main() {&#xA;&#x9;file_content, err := os.ReadFile(&#34;temp.txt&#34;)&#xA;&#x9;log_error(err)&#xA;&#x9;is_matching, err := regexp.Match(`memory`, file_content)&#xA;&#x9;log_error(err)&#xA;&#x9;log.Println(is_matching)&#xA;&#x9;is_matching, err = regexp.Match(`text `, file_content)&#xA;&#x9;log_error(err)&#xA;&#x9;log.Println(is_matching)&#xA;}&#xA;```&#xA;&#xA;```bash&#xA;# temp.txt&#xA;&#xA;One of the gophers used a slice,&#xA;the other one used arrays.&#xA;Some gophers were idle in the memory.&#xA;```&#xA;&#xA;```console&#xA;$ go run main.go&#xA;&#xA;true&#xA;false&#xA;```&#xA;&#xA;We can even pass the contents of a file as a slice of bytes. So, it is really handy to check for a string pattern in a file quickly. Here in the above example, we have checked if `memory` is present in the text, which it is, and in the second call, we check if the `text ` string is present anywhere in the file content, which it is not.&#xA;&#xA;## Find Patterns&#xA;&#xA;We can even use regular expressions for searching text or strings with the struct/type [Regexp](https://pkg.go.dev/regexp#Regexp) provided by golang&#39;s regexp package.
We can create a regular expression and use other functions like `MatchString`, `Match`, and others that we will explore, to match or find a pattern in the text.&#xA;&#xA;### Find String from RegEx&#xA;&#xA;We can get the matches as slices of bytes from the `FindAll` method, which takes in a slice of bytes and a second parameter for the number of matches; passing -1 returns all matches. The function returns a slice of byte slices with the byte representation of the matched strings in the source text.&#xA;&#xA;```go&#xA;exp, err := regexp.Compile(`\b\d{5}(?:[-\s]\d{4})?\b`)&#xA;log_error(err)&#xA;pincode_file, err := os.ReadFile(&#34;pincode.txt&#34;)&#xA;log_error(err)&#xA;match := exp.FindAll(pincode_file, -1)&#xA;log.Println(match)&#xA;```&#xA;&#xA;```bash&#xA;# pincode.txt&#xA;&#xA;Pincode: 12345-1234&#xA;City, 40084&#xA;State 123&#xA;```&#xA;&#xA;```console&#xA;$ go run main.go&#xA;&#xA;[[49 50 51 52 53 45 49 50 51 52] [52 48 48 56 52]]&#xA;```&#xA;&#xA;In the above example, we have used the [Compile](https://pkg.go.dev/regexp#Compile) method to create a regular expression and [FindAll](https://pkg.go.dev/regexp#Regexp.FindAll) to get all the occurrences of the matching pattern in the text. We have again read the contents from a file. In this example, `exp` is a regular expression for a postal code, which can be either 5 digits or a 5-digit and 4-digit combination. We read the file `pincode.txt` as a slice of bytes and use the `FindAll` method. The FindAll method takes in as parameters a slice of bytes and an integer for the number of occurrences to search for. If we use a negative number, it will include all the matches.&#xA;&#xA;We search for the pin code in the file and the function returns a list of byte slices that match the regular expression in the provided object `exp`. Finally, we get the results `12345-1234` and `40084`, which are present in the file.
It doesn&#39;t match the number `123`, which is not a valid match for the given regular expression.&#xA;&#xA;There is also a variant of `FindAll` called `FindAllString`, which takes in a string as the text source and returns a slice of strings.&#xA;&#xA;```go&#xA;matches := exp.FindAllString(string(pincode_file), -1)&#xA;log.Println(matches)&#xA;```&#xA;&#xA;```console&#xA;$ go run main.go&#xA;&#xA;[12345-1234 40084]&#xA;```&#xA;&#xA;So, the `FindAllString` method returns a slice of strings of the matches in the text.&#xA;&#xA;### Find the Index of String from RegEx&#xA;&#xA;We can even get the start and end index of the matched string in the text using the [FindIndex](https://pkg.go.dev/regexp#Regexp.FindIndex) and [FindAllIndex](https://pkg.go.dev/regexp#Regexp.FindAllIndex) methods, which return the indexes of the first match, or of all the matches, in the file content.&#xA;&#xA;```go&#xA;exp, err := regexp.Compile(`\b\d{5}(?:[-\s]\d{4})?\b`)&#xA;log_error(err)&#xA;pincode_file, err := os.ReadFile(&#34;pincode.txt&#34;)&#xA;log_error(err)&#xA;&#xA;match_index := exp.FindIndex(pincode_file)&#xA;log.Printf(&#34;%T\n&#34;, match_index)&#xA;log.Println(match_index)&#xA;```&#xA;&#xA;```console&#xA;$ go run main.go&#xA;&#xA;[]int&#xA;[9 19]&#xA;```&#xA;&#xA;The above code uses the `FindIndex` method to get the indexes of the first match of the regular expression. The output is a single slice of integers with length two, holding the start index and the end index of the matched string in the text file. 
So, here, the `9` represents the position (index) of the first matched character in the string, and `19` is the end index of the matched string.&#xA;&#xA;```bash&#xA;Pincode: 12345-1234&#xA;          1111111111&#xA;0123456789012345678&#xA;&#xA;The match starts at the character `1` at index 9 and ends at the&#xA;character `4` at index 18, but FindIndex returns the end position + 1&#xA;(that is, 19) for the ease of slicing.&#xA;```&#xA;&#xA;The characters at indexes 9 and 18 are the first and the last characters of the match in the source string, and the method returns the end position + 1 as the end index. This makes slicing the source string much easier, as we won&#39;t be off by one.&#xA;&#xA;If we want to get the matched text from the source string, we can use slicing:&#xA;&#xA;```go&#xA;exp, err := regexp.Compile(`\b\d{5}(?:[-\s]\d{4})?\b`)&#xA;log_error(err)&#xA;pincode_file, err := os.ReadFile(&#34;pincode.txt&#34;)&#xA;log_error(err)&#xA;&#xA;match_index := exp.FindIndex(pincode_file)&#xA;if len(match_index) &gt; 0 {&#xA;&#xA;    // Get the slice of the original string from start to end index&#xA;    sliced_string := pincode_file[match_index[0]:match_index[1]]&#xA;    log.Printf(&#34;%q\n&#34;, sliced_string)&#xA;}&#xA;```&#xA;&#xA;```console&#xA;$ go run main.go&#xA;&#xA;&#34;12345-1234&#34;&#xA;```&#xA;&#xA;So, we can access the matched text without calling other functions; simply slicing the original string retrieves the expected result. 
This is the convention of the golang standard library: the end index is exclusive.&#xA;&#xA;Similarly, the `FindAllIndex` method can be used to get a list of such index pairs for all the matched strings.&#xA;&#xA;```go&#xA;exp, err := regexp.Compile(`\b\d{5}(?:[-\s]\d{4})?\b`)&#xA;log_error(err)&#xA;pincode_file, err := os.ReadFile(&#34;pincode.txt&#34;)&#xA;log_error(err)&#xA;&#xA;match_indexes := exp.FindAllIndex(pincode_file, -1)&#xA;log.Printf(&#34;%T\n&#34;, match_indexes)&#xA;log.Println(match_indexes)&#xA;```&#xA;&#xA;```console&#xA;$ go run main.go&#xA;&#xA;[][]int&#xA;[[9 19] [26 31]]&#xA;```&#xA;&#xA;The above example gets a list of index pairs for all the pattern matches in the source text. We can iterate over the list and get the indexes of each matched string.&#xA;&#xA;## Find Submatch&#xA;&#xA;The `regexp` package also has utility functions for finding the sub-matches of a given regular expression. The `FindSubmatch` method returns a list of strings (or slices of bytes) containing the leftmost match and the sub-matches in that match. There is also an `All` version which, instead of returning a single match i.e. 
the leftmost match, returns all the matches and the corresponding sub-matches.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;log&#34;&#xA;&#x9;&#34;regexp&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#xA;&#x9;str := &#34;abc@def.com is the mail address of 8th user with id 12&#34;&#xA;&#x9;exp := regexp.MustCompile(&#xA;&#x9;&#x9;`([a-zA-Z0-9]+@[a-zA-Z0-9]+\.[a-zA-Z]{2,})` +&#xA;&#x9;&#x9;&#x9;`|(email|mail)` +&#xA;&#x9;&#x9;&#x9;`|(\d{1,3})`,&#xA;&#x9;)&#xA;&#x9;match := exp.FindStringSubmatch(str)&#xA;&#x9;log.Println(match)&#xA;&#x9;matches := exp.FindAllStringSubmatch(str, -1)&#xA;&#x9;log.Println(matches)&#xA;}&#xA;```&#xA;&#xA;```console&#xA;$ go run main.go&#xA;&#xA;[abc@def.com abc@def.com  ]&#xA;[[abc@def.com abc@def.com  ] [mail  mail ] [8   8] [12   12]]&#xA;```&#xA;&#xA;The above example uses a regex that matches a few things: an email address, the words `mail` or `email`, and a number of up to 3 digits. The `|` between these expressions indicates that any one of the three alternatives will be matched. The [FindStringSubmatch](https://pkg.go.dev/regexp#Regexp.FindStringSubmatch) method takes in a string as the source and returns a slice for the matching pattern. 
The first element is the leftmost match in the source string for the given regular expression, and the subsequent elements are the sub-matches in that matched string.&#xA;&#xA;We can now move a step ahead to actually understand the sub-matches in a regular expression.&#xA;&#xA;```go&#xA;str := &#34;abe21@def.com is the mail address of 8th user with id 124&#34;&#xA;exp := regexp.MustCompile(&#xA;    `([a-zA-Z]+(\d*)[a-zA-Z]*@[a-zA-Z]*(\d*)[a-zA-Z]+\.[a-zA-Z]{2,})` +&#xA;        `|(mail|email)` +&#xA;        `|(\d{1,3})`,&#xA;)&#xA;&#xA;match := exp.FindStringSubmatch(str)&#xA;log.Println(match)&#xA;matches := exp.FindAllStringSubmatch(str, -1)&#xA;log.Println(matches)&#xA;```&#xA;&#xA;```console&#xA;$ go run main.go&#xA;&#xA;[abe21@def.com abe21@def.com 21   ]&#xA;[[abe21@def.com abe21@def.com 21   ] [mail    mail ] [8     8] [124     124]]&#xA;```&#xA;&#xA;In the above example, there are a few things to take away, so let us break it down into small pieces. We have a regex for matching either a mail address, the word `mail` or `email`, or a number of up to 3 digits. The regex is a bit different from the previous example so that we can understand the sub-matches within an expression. We find the sub-matches in the string with `FindStringSubmatch`, which takes in a string and returns a list of strings for the leftmost match in the source string.&#xA;&#xA;First, we need to understand the regex to get a clear idea of the code snippet. The first alternative is for the email address. We use `[a-zA-Z]` when picking the username and the domain name characters because we don&#39;t want to match digits there directly; the goal of this regex is to pick up the numbers inside an email address. So, we can have 1 or more letters `[a-zA-Z]+`, followed by 0 or more digits `(\d*)`, and again 0 or more letters `[a-zA-Z]*`. The `+` is for 1 or more, `*` is for 0 or more, and `\d` is for digits. 
After this, we have the `@` as a compulsory character in the mail, followed by the same kind of sequence as in the username i.e. letters, digits, and letters again. Finally, we have the `.` and the domain name extension as a group of 2 or more letters `[a-zA-Z]{2,}`.&#xA;&#xA;So, the regex accepts an email, with a sub-match for a number anywhere in the username or the domain name.&#xA;&#xA;The `FindStringSubmatch` function lists out the sub-matches for the leftmost (first) match of the regex in the source string. So it finds the string `abe21@def.com`, which is the email id. The email part of the regex has two digit sub-matches `(\d*)`, one in the username part and one in the domain part. So, the list returns the email address as the match, the match itself as the first sub-match, and the number found inside the mail address. In the result `[abe21@def.com abe21@def.com 21   ]`, there are a few empty-string sub-matches because the second digit sub-match (for the domain name) and the remaining groups return empty strings.&#xA;&#xA;Similarly, the `FindAllStringSubmatch` method returns the list of all the matches in the source string. The other matches don&#39;t have any sub-matches in the regular expression, so each just gets the match and the corresponding group as itself, in the case of the string `mail`, the digit `8`, and the digit `124`.&#xA;&#xA;We can also use this example with a file as a slice of bytes. 
This will return a list of slices of slices of bytes instead of slices of strings.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;log&#34;&#xA;&#x9;&#34;os&#34;&#xA;&#x9;&#34;regexp&#34;&#xA;)&#xA;&#xA;func log_error(err error) {&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;log.Fatal(err)&#xA;&#x9;}&#xA;}&#xA;&#xA;func main() {&#xA;&#xA;&#x9;exp := regexp.MustCompile(&#xA;&#x9;&#x9;`([a-zA-Z]+(\d*)[a-zA-Z]*@[a-zA-Z]*(\d*)[a-zA-Z]+\.[a-zA-Z]{2,})` +&#xA;&#x9;&#x9;&#x9;`|(mail|email)` +&#xA;&#x9;&#x9;&#x9;`|(\d{1,3})`,&#xA;&#x9;)&#xA;&#x9;email_file, err := os.ReadFile(&#34;submatch.txt&#34;)&#xA;&#x9;log_error(err)&#xA;&#x9;mail_match := exp.FindSubmatch(email_file)&#xA;&#x9;log.Printf(&#34;%s\n&#34;, mail_match)&#xA;&#x9;mail_matches := exp.FindAllSubmatch(email_file, -1)&#xA;&#x9;log.Printf(&#34;%s\n&#34;, mail_matches)&#xA;}&#xA;```&#xA;&#xA;```txt&#xA;# submatch.txt&#xA;&#xA;abc21@def.com is the mail address of user id 1234&#xA;The email address abe2def.com is of user name abc&#xA;a2be.@def.com&#xA;Email address: abe@de2f.com, User id: 45&#xA;johndoe@example.com&#xA;jane.doe123@example.com&#xA;janedoe@example.co.uk&#xA;john123@example.org&#xA;janedoe456@example.net&#xA;```&#xA;&#xA;```console&#xA;$ go run main.go&#xA;&#xA;[abc21@def.com abc21@def.com 21   ]&#xA;&#xA;[&#xA;    [abc21@def.com abc21@def.com 21   ] [mail    mail ] [123     123] [4     4]&#xA;    [email    email ] [2     2] [2     2] [mail    mail ]&#xA;    [abe@de2f.com abe@de2f.com  2  ] [45     45]&#xA;    [johndoe@example.com johndoe@example.com    ]&#xA;    [doe123@example.com doe123@example.com 123   ]&#xA;    [janedoe@example.co janedoe@example.co    ]&#xA;    [john123@example.org john123@example.org 123   ]&#xA;    [janedoe456@example.net janedoe456@example.net 456   ]&#xA;]&#xA;```&#xA;&#xA;As we can see from the dummy email ids and some random text, we are able to match the email ids and the numbers inside them as sub-matches. 
The return type is `[][]uint8` (i.e. `[][]byte`) in the case of `FindSubmatch` and `[][][]uint8` in the case of `FindAllSubmatch`. The working remains the same for bytes as it was for strings.&#xA;&#xA;### Find Submatch Index&#xA;&#xA;We also have the [FindSubmatchIndex](https://pkg.go.dev/regexp#Regexp.FindSubmatchIndex) and [FindAllSubmatchIndex](https://pkg.go.dev/regexp#Regexp.FindAllSubmatchIndex) methods, and their string variants, to get the index(es) of the sub-matches picked out by the regular expression.&#xA;&#xA;```go&#xA;str := &#34;abe21@def.com is the mail address of 8th user with id 124&#34;&#xA;exp := regexp.MustCompile(&#xA;    `([a-zA-Z]+(\d*)[a-zA-Z]*@[a-zA-Z]*(\d*)[a-zA-Z]+\.[a-zA-Z]{2,})` +&#xA;        `|(mail|email)` +&#xA;        `|(\d{1,3})`,&#xA;)&#xA;&#xA;match := exp.FindStringSubmatch(str)&#xA;match_index := exp.FindStringSubmatchIndex(str)&#xA;log.Println(match)&#xA;log.Println(match_index)&#xA;```&#xA;&#xA;```console&#xA;$ go run main.go&#xA;&#xA;[abe21@def.com abe21@def.com 21   ]&#xA;[0 13 0 13 3 5 9 9 -1 -1 -1 -1]&#xA;```&#xA;&#xA;This returns a flat list of index pairs; read it as `[(0, 13) (0, 13) (3, 5) (9, 9) (-1, -1) (-1, -1)]`, where each pair is the start and end index of a match or sub-match in the source string. The match is the first pair: it starts at index `0`, the character `a`, and ends at index `13`, one past the last character of the match. Then we have the first sub-match (the whole email group) with the same indexes. Then `3` and `5` indicate the number sub-match `21` in `abe21@def.com`: it starts at index 3 and ends at 5, so it occupies the characters at indexes 3 and 4 of the source string. The domain-level digit group didn&#39;t find any number in the source string, so it matched an empty string at index 9, giving the pair `(9, 9)`.&#xA;&#xA;We had used `(\d*)`, which can match 0 or more digits, so it matched an empty string in the case of the domain-level name; hence we get the `(9, 9)` pair as an empty sub-match for it. 
The remaining `(-1, -1)` pairs are for the `email`/`mail` group and the lone digit group of the regular expression, which do not take part in this first match of the source string.&#xA;&#xA;```go&#xA;str := &#34;abe21@def.com is the mail address of 8th user with id 124&#34;&#xA;exp := regexp.MustCompile(&#xA;    `([a-zA-Z]+(\d*)[a-zA-Z]*@[a-zA-Z]*(\d*)[a-zA-Z]+\.[a-zA-Z]{2,})` +&#xA;        `|(mail|email)` +&#xA;        `|(\d{1,3})`,&#xA;)&#xA;&#xA;match := exp.FindAllStringSubmatch(str, -1)&#xA;log.Println(match)&#xA;match_indexes := exp.FindAllStringSubmatchIndex(str, -1)&#xA;log.Println(match_indexes)&#xA;```&#xA;&#xA;```console&#xA;$ go run main.go&#xA;&#xA;[[abe21@def.com abe21@def.com 21   ] [mail    mail ] [8     8] [124     124]]&#xA;[&#xA;    [0 13 0 13 3 5 9 9 -1 -1 -1 -1]&#xA;    [21 25 -1 -1 -1 -1 -1 -1 21 25 -1 -1]&#xA;    [37 38 -1 -1 -1 -1 -1 -1 -1 -1 37 38]&#xA;    [54 57 -1 -1 -1 -1 -1 -1 -1 -1 54 57]&#xA;]&#xA;```&#xA;&#xA;Here we have used [FindAllStringSubmatchIndex](https://pkg.go.dev/regexp#Regexp.FindAllStringSubmatchIndex) to get, for every match in the source string, a slice of indexes (int) for the match and its sub-matches.&#xA;&#xA;The first element is the same as in the previous example. The next match in the source string is `mail`, which starts at index 21 and ends at index 24, with golang returning 24 + 1 = 25 as the exclusive end index by convention. 
Similarly, the number `8` is matched at index `37` and the number `124` at index `54`; the remaining sub-matches for these matches are not present, so they turn up as -1 pairs.&#xA;&#xA;The same can be done with the byte/uint8 variants [FindSubmatchIndex](https://pkg.go.dev/regexp#Regexp.FindSubmatchIndex) and [FindAllSubmatchIndex](https://pkg.go.dev/regexp#Regexp.FindAllSubmatchIndex).&#xA;&#xA;## Replace Patterns&#xA;&#xA;The replace methods are used for replacing the matched patterns.&#xA;&#xA;The [ReplaceAll](https://pkg.go.dev/regexp#Regexp.ReplaceAll) and [ReplaceAllLiteral](https://pkg.go.dev/regexp#Regexp.ReplaceAllLiteral) methods, with their string and byte-slice variations, can help us replace the matches of a regular expression in the source text with a replacement string.&#xA;&#xA;Let&#39;s start with a simple example with strings.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;log&#34;&#xA;&#x9;&#34;regexp&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#xA;&#x9;str := &#34;Gophy gophers go in the golang grounds&#34;&#xA;&#x9;exp := regexp.MustCompile(`(Go|go)`)&#xA;&#x9;replaced_str := exp.ReplaceAllString(str, &#34;to&#34;)&#xA;&#x9;log.Println(replaced_str)&#xA;&#xA;}&#xA;```&#xA;&#xA;The above code replaces the matches of the regex with the replacement string. The regex in the `exp` variable matches `Go` or `go`. The `ReplaceAllString` method takes in two strings, the source and the replacement, and returns the string after the replacements.&#xA;&#xA;```console&#xA;$ go run replace.go&#xA;&#xA;tophy tophers to in the tolang grounds&#xA;```&#xA;&#xA;So, this replaces every occurrence of `Go` or `go` in the source string with `to`.&#xA;&#xA;There is special syntax available in the replacement string which can expand the capture groups of the regular expression. 
Since the regular expression consists of multiple capture groups, we can reference them to keep or drop parts of each match in the replacement.&#xA;&#xA;```go&#xA;str = &#34;Gophy gophers go in the golang grounds&#34;&#xA;exp2 := regexp.MustCompile(`(Go|go)|(phers)|(rounds)`)&#xA;log.Println(exp2.ReplaceAllString(str, &#34;hop&#34;))&#xA;log.Println(exp2.ReplaceAllString(str, &#34;$1&#34;))&#xA;log.Println(exp2.ReplaceAllString(str, &#34;$2&#34;))&#xA;log.Println(exp2.ReplaceAllString(str, &#34;$3&#34;))&#xA;```&#xA;&#xA;```console&#xA;$ go run replace.go&#xA;&#xA;hopphy hophop hop in the hoplang ghop&#xA;Gophy go go in the golang g&#xA;phy phers  in the lang g&#xA;phy   in the lang grounds&#xA;```&#xA;&#xA;The above code replaces the regex matches with references to the capture groups. The regex `(Go|go)|(phers)|(rounds)` has three groups: `Go|go`, `phers`, and `rounds`. The groups are separated by the `|` alternation operator, so a match is any one of the three.&#xA;&#xA;In the first statement, we replace every match with `hop`; as you can see, all the matches are replaced with the replacement string. For instance, the word `gophers` is replaced by `hophop`, because `go` and `phers` are matched separately and each is replaced.&#xA;&#xA;In the second statement, we use `$1` as the replacement, which refers to the first capture group i.e. `Go|go`. Each match is expanded to whatever its first group captured, so matches of the first group are kept and the rest are removed. So the `Go` in `Gophy` is matched by `Go|go` and kept as is in the replacement. However, for `grounds`, the match `rounds` has an empty first group, so it is substituted with an empty string and the resulting word becomes `g`.&#xA;&#xA;In the third print statement, the replacement is the second group `$2` i.e. `phers`. 
So if a match comes from that group it is kept, and all other matches are substituted with an empty string. So the `Go` in `Gophy` doesn&#39;t match `phers` and hence is removed; however, the `phers` in `gophers` matches and is kept as it is, while the `go` part is substituted away.&#xA;&#xA;Similarly, in the fourth print statement, the replacement is the third group i.e. `rounds`. Only matches of the third group are kept as is, and the rest of the matches are substituted with an empty string. So `grounds` remains as it is because `rounds` matches the third group.&#xA;&#xA;In short, we substitute the captured text back after matching the regex patterns in the source string. This can be used to fine-tune or access specific groups in a regular expression.&#xA;&#xA;```go&#xA;str = &#34;Gophy gophers go in the golang grounds&#34;&#xA;exp2 := regexp.MustCompile(`(Go|go)|(phers)|(rounds)`)&#xA;log.Println(exp2.ReplaceAllString(str, &#34;$1$2&#34;))&#xA;&#xA;str = &#34;Gophy gophers go in the golang cophers grounds&#34;&#xA;log.Println(exp2.ReplaceAllString(str, &#34;$1$3&#34;))&#xA;```&#xA;&#xA;```console&#xA;$ go run replace.go&#xA;&#xA;Gophy gophers go in the golang g&#xA;Gophy go go in the golang co grounds&#xA;```&#xA;&#xA;We can even concatenate group references to make minor adjustments to the replacement string, as we have done in the example where `$1$2` is used as the replacement. The two references combine the captures of `Go|go` and `phers`. So in the first result, `Gophy gophers go in the golang g`, all the matches of `Go|go` or `phers` are kept as they are (substituted with the same string); `grounds`, however, does not match either group and is replaced with an empty string (because neither `$1` nor `$2` captures `rounds`).&#xA;&#xA;Similarly, in the second statement, `$1$3` i.e. the captures of `Go|go` or `rounds` are substituted back into the source string. 
So the `phers` matches in `gophers` and `cophers` are not captured by `$1` or `$3` and hence are replaced by an empty string. However, the matches in `Gophy`, `golang`, and `grounds` are captured by those groups and are replaced by the captured text (which is the same string).&#xA;&#xA;If we want to avoid expansion of `$1` and the other references as in the previous examples, we can use [ReplaceAllLiteral](https://pkg.go.dev/regexp#Regexp.ReplaceAllLiteral) or [ReplaceAllLiteralString](https://pkg.go.dev/regexp#Regexp.ReplaceAllLiteralString) to use the replacement string as it is.&#xA;&#xA;```go&#xA;str := &#34;Gophy gophers go in the golang cophers grounds&#34;&#xA;exp2 := regexp.MustCompile(`(Go|go)|(phers)|(rounds)`)&#xA;log.Println(exp2.ReplaceAllLiteralString(str, &#34;$1&#34;))&#xA;```&#xA;&#xA;```console&#xA;$ go run replace.go&#xA;&#xA;$1phy $1$1 $1 in the $1lang co$1 g$1&#xA;```&#xA;&#xA;As we can see, the `$1` is not expanded and is used as it is for replacing the matches of the regular expression. The `Go` is replaced with `$1` to give `$1phy`, and similarly for the rest of the patterns.&#xA;&#xA;That&#39;s it from the 26th part of the series; all the source code for the examples is linked in the [100 days of Golang](https://github.com/Mr-Destructive/100-days-of-golang/tree/main/scripts/regex) repository on GitHub.&#xA;&#xA;## Conclusion&#xA;&#xA;This article covered the fundamentals of using the `regexp` package for working with regular expressions in golang. We explored the `Regexp` type and the various methods available on it, and walked through simple snippets for matching, finding, and replacing patterns.&#xA;&#xA;So, hopefully, you found the article useful. If you have any queries, questions, feedback, or spot mistakes in the article, you can let me know in the discussion or on my social handles. Happy Coding :)
      <type></type>
    </item>
    <item>
      <title>Creating a Chat Application with Django and HTMX</title>
      <link>https://www.meetgor.com/posts/django-htmx-chat-app</link>
      <description>Django + HTMX Chat application Introduction In this article, we will be creating a Django project, which will be a chat-room kind of application. The user needs</description>
      <pubDate>Sun, 05 Feb 2023 00:00:00 UTC</pubDate>
      <content># Django + HTMX Chat application&#xA;&#xA;## Introduction&#xA;&#xA;In this article, we will be creating a Django project, which will be a chat-room kind of application. The user needs to authenticate to the app, and then they can create or join rooms. Every room will have a name and a URL associated with it, so the user simply needs to enter the name of the room, which will be unique. The user can then simply enter messages in the chat room. This is a basic chat application that uses web sockets.&#xA;&#xA;The unique thing about this app is that we don&#39;t have to write a javascript client; it will all be handled by some HTMX magic. The web socket in the backend will be handled using Django channels.&#xA;&#xA;Demo:&#xA;&#xA;![Demonstration of the Chat App](https://meetgor-cdn.pages.dev/django-htmx/chat-app-demo.webm)&#xA;&#xA;[GitHub Repository](https://github.com/Mr-Destructive/django-htmx-chat)&#xA;&#xA;### Requirements:&#xA;&#xA;* Django&#xA;    &#xA;* Django-channels&#xA;    &#xA;* daphne&#xA;    &#xA;* HTMX&#xA;    &#xA;* SQLite or any relational database&#xA;    &#xA;&#xA;Also, if we want to use the application at a large and production scale:&#xA;&#xA;* Redis&#xA;    &#xA;* channels_redis&#xA;&#xA;The code for this chat app is provided in the [GitHub repository](https://github.com/Mr-Destructive/django-htmx-chat).&#xA;&#xA;## Setup for Django project&#xA;&#xA;We will create a simple Django project to start with. The project will have two apps, one for auth and the other for the chat. You can customize your existing project accordingly. This project is just a demonstration of a chat application with websockets and Django channels.&#xA;&#xA;I&#39;ll call the project `backchat`, you can call it whatever you want. 
We will create a virtual environment and install Django in that virtual environment.&#xA;&#xA;```bash&#xA;virtualenv .venv&#xA;&#xA;# For Linux/macOS:&#xA;source .venv/bin/activate&#xA;&#xA;# For Windows:&#xA;.venv\Scripts\activate&#xA;&#xA;pip install django&#xA;django-admin startproject backchat&#xA;cd backchat&#xA;```&#xA;&#xA;This will set up a base Django project. We can now work on the actual implementation of the Django project. Firstly, we will start with authentication.&#xA;&#xA;## Adding basic Authentication and Authorization&#xA;&#xA;### Creating the accounts app&#xA;&#xA;We can separate the authentication of the user from the rest of the project by creating a separate app called `user` or `accounts`.&#xA;&#xA;```bash&#xA;python manage.py startapp accounts&#xA;```&#xA;&#xA;### Creating a base user model&#xA;&#xA;We&#39;ll start by inheriting the [AbstractUser](https://docs.djangoproject.com/en/4.1/topics/auth/customizing/#using-a-custom-user-model-when-starting-a-project) model provided in the `django.contrib.auth.models` module. The model has base fields like `username` and `password` which are required, and `email`, `first_name`, `last_name`, etc. which are not mandatory. It is better to create a custom model by inheriting `AbstractUser` because if, in the longer run, we need to add custom fields to the user model, it becomes a breeze.&#xA;&#xA;```python&#xA;# accounts/models.py&#xA;&#xA;&#xA;from django.contrib.auth.models import AbstractUser&#xA;&#xA;&#xA;class User(AbstractUser):&#xA;    pass&#xA;```&#xA;&#xA;This creates a basic custom user rather than using the Django built-in user. Next, we need to make sure Django understands that the default user model is the one we defined in the `accounts` app and not the default `User`. So, we just need to add a setting called `AUTH_USER_MODEL` in the `settings.py` file. 
The value of this setting is the app name followed by the model name in that app that we want to be used as the default user model.&#xA;&#xA;```python&#xA;# backchat/settings.py&#xA;&#xA;&#xA;INSTALLED_APPS = [&#xA;    ...&#xA;    ...&#xA;    &#34;accounts&#34;,&#xA;]&#xA;&#xA;# Append to the end of file&#xA;AUTH_USER_MODEL = &#39;accounts.User&#39;&#xA;```&#xA;&#xA;### Initial migrations for the Django project&#xA;&#xA;Now, this model will get picked up as the default user model for anything related to the user. We can move on to migrating the changes for the basic Django project and the user model.&#xA;&#xA;```bash&#xA;python manage.py makemigrations&#xA;python manage.py migrate&#xA;```&#xA;&#xA;![initial migration for base django and user model](https://meetgor-cdn.pages.dev/django-htmx/chat-accounts-migrations.png)&#xA;&#xA;### Creating register view&#xA;&#xA;Further, we can add views like register and profile to the accounts app for basic authentication. The Login and Logout views are provided in the `django.contrib.auth.views` module, so we only have to write our own register view. I will be using function-based views to keep things simple to understand, but it could just as well be a class-based view.&#xA;&#xA;To define the view, we first need a user registration form. The form will inherit from the [UserCreationForm](https://docs.djangoproject.com/en/4.1/topics/auth/default/#django.contrib.auth.forms.UserCreationForm) form, which does the bulk of the user-registration work for us. We can then simply override the Meta class with the fields that we want to display, so we just keep the `username` and the password fields. 
The form can be customized by adding the widget attribute and specifying the classes used in them.&#xA;&#xA;```python&#xA;# accounts/forms.py&#xA;&#xA;&#xA;from accounts.models import User&#xA;from django.contrib.auth.forms import UserCreationForm&#xA;&#xA;class UserRegisterForm(UserCreationForm):&#xA;&#xA;    class Meta:&#xA;        model = User&#xA;        fields = [&#39;username&#39;, &#39;password1&#39;, &#39;password2&#39;]&#xA;```&#xA;&#xA;This gives us the `UserRegisterForm` form that will be displayed in the register view we create in the next step.&#xA;&#xA;We will have to create the register view that renders the form for user registration and also processes the form submission.&#xA;&#xA;```python&#xA;# accounts/views.py&#xA;&#xA;&#xA;from django.contrib import messages&#xA;from django.shortcuts import redirect, render&#xA;from accounts.forms import UserRegisterForm&#xA;&#xA;def register(request):&#xA;    if request.method == &#34;POST&#34;:&#xA;        form = UserRegisterForm(request.POST)&#xA;        if form.is_valid():&#xA;            form.save()&#xA;            username = form.cleaned_data.get(&#34;username&#34;)&#xA;            messages.success(&#xA;                request, f&#34;Account created for {username}! You are now able to log in&#34;&#xA;            )&#xA;            return redirect(&#34;login&#34;)&#xA;    else:&#xA;        form = UserRegisterForm()&#xA;    return render(request, &#34;accounts/register.html&#34;, {&#34;form&#34;: form})&#xA;```&#xA;&#xA;The above register view handles two cases, one where the user requests the registration form and one where the user submits the form. So, when the user makes a GET request, we load an empty `UserRegisterForm` and render the `register` template with the form (returning `render` at the end of the function also re-renders the form with errors if an invalid form is submitted). 
We will create the templates later.&#xA;&#xA;So, the template is simply rendered when the user wants to register; when the user submits the form (sends a post request), we bind the details from the post request to an instance of `UserRegisterForm` and save the form if it is valid. We then redirect the user to the login view (we will use the default one in the next section) and pass a message indicating that the user was created.&#xA;&#xA;### Adding URLs for Authentication and Authorization&#xA;&#xA;Once we have the register view set up, we can also add login and logout views to the app. But we don&#39;t have to write them ourselves; we could override them if needed, but we&#39;ll keep the default ones. We will use the [LoginView](https://docs.djangoproject.com/en/4.1/topics/auth/default/#django.contrib.auth.views.LoginView) and [LogoutView](https://docs.djangoproject.com/en/4.1/topics/auth/default/#django.contrib.auth.views.LogoutView) class-based views provided in the `django.contrib.auth.views` module and provide the respective templates for each of these views.&#xA;&#xA;```python&#xA;# accounts/urls.py&#xA;&#xA;&#xA;from django.urls import path&#xA;from django.contrib.auth import views as auth_views&#xA;import accounts.views as accounts_views&#xA;&#xA;urlpatterns = [&#xA;    path(&#34;register/&#34;, accounts_views.register, name=&#34;register&#34;),&#xA;    path(&#xA;        &#34;login/&#34;,&#xA;        auth_views.LoginView.as_view(template_name=&#34;accounts/login.html&#34;),&#xA;        name=&#34;login&#34;,&#xA;    ),&#xA;    path(&#xA;        &#34;logout/&#34;,&#xA;        auth_views.LogoutView.as_view(template_name=&#34;accounts/logout.html&#34;),&#xA;        name=&#34;logout&#34;,&#xA;    ),&#xA;]&#xA;```&#xA;&#xA;We have named the URLs `register`, `login`, and `logout` so that we can use the names while rendering links to them in the templates. Now, we also need to include the URLs from the accounts app in the project URLs. 
We can do that by using the `include` method and specifying the app name with the module where the urlpatterns are located.&#xA;&#xA;```python&#xA;# backchat/urls.py&#xA;&#xA;&#xA;from django.contrib import admin&#xA;from django.urls import include, path&#xA;&#xA;urlpatterns = [&#xA;    path(&#34;admin/&#34;, admin.site.urls),&#xA;    path(&#34;auth/&#34;, include(&#34;accounts.urls&#34;)),&#xA;]&#xA;```&#xA;&#xA;So, we have routed the `/auth` path to include all the URLs in the accounts app; the login view will be at the URL `/auth/login/`, and so on.&#xA;&#xA;Also, we need to add the `LOGIN_REDIRECT_URL` and `LOGIN_URL` settings, which specify the URL name to redirect to once the user is logged in and the URL name of the default login view, respectively.&#xA;&#xA;```python&#xA;# backchat/settings.py&#xA;&#xA;&#xA;LOGIN_REDIRECT_URL = &#34;index&#34;&#xA;LOGIN_URL = &#34;login&#34;&#xA;```&#xA;&#xA;We are now almost done with the view and routing part of the accounts app and can move on to creating the templates.&#xA;&#xA;### Adding Templates for authentication views&#xA;&#xA;We need a few templates that we have been referencing in the views and URLs of the accounts app. There are a couple of ways to organize templates in a Django project; I prefer to have a single templates folder in the root of the project with subfolders per app, holding the templates specific to those apps.&#xA;&#xA;I usually create a `base.html` file in the templates folder and use that for inheriting other templates. 
So, we will have to change one setting in the project to make sure it looks for `templates/` at the root of the project.&#xA;&#xA;```python&#xA;# backchat/settings.py&#xA;&#xA;import os&#xA;&#xA;...&#xA;...&#xA;&#xA;TEMPLATES = [&#xA;    {&#xA;        &#34;BACKEND&#34;: &#34;django.template.backends.django.DjangoTemplates&#34;,&#xA;        &#34;DIRS&#34;: [ os.path.join(BASE_DIR, &#34;templates&#34;), ],&#xA;        &#34;APP_DIRS&#34;: True,&#xA;        &#34;OPTIONS&#34;: {&#xA;            &#34;context_processors&#34;: [&#xA;                &#34;django.template.context_processors.debug&#34;,&#xA;                &#34;django.template.context_processors.request&#34;,&#xA;                &#34;django.contrib.auth.context_processors.auth&#34;,&#xA;                &#34;django.contrib.messages.context_processors.messages&#34;,&#xA;            ],&#xA;        },&#xA;    },&#xA;]&#xA;```&#xA;&#xA;Then create the folder in the same path as your `manage.py` file.&#xA;&#xA;```bash&#xA;mkdir templates&#xA;```&#xA;&#xA;![Template Set Up](https://meetgor-cdn.pages.dev/django-htmx/chat-mkdir-templates.png)&#xA;&#xA;#### Creating the base template&#xA;&#xA;The below will be the base template used for the chat application; you can customize it as per your needs.&#xA;&#xA;```html&#xA;&lt;!DOCTYPE html&gt;&#xA;&lt;html&gt;&#xA;    &lt;head&gt;&#xA;        &lt;meta charset=&#34;utf-8&#34; /&gt;&#xA;        &lt;title&gt;Chat App&lt;/title&gt;&#xA;        {% load static %}&#xA;        &lt;script src=&#34;https://unpkg.com/htmx.org@1.8.5&#34;&gt;&lt;/script&gt;&#xA;    &lt;/head&gt;&#xA;    &lt;body&gt;&#xA;        {% if user.is_authenticated %}&#xA;            &lt;a href=&#34;{% url &#39;logout&#39; %}&#34;&gt;Logout&lt;/a&gt;&#xA;        {% else %}&#xA;            &lt;a href=&#34;{% url &#39;login&#39; %}&#34;&gt;Login&lt;/a&gt;&#xA;        {% endif %}&#xA;        &lt;h1&gt;Back Chat&lt;/h1&gt;&#xA;        {% block base %}&#xA;        {% endblock %}&#xA;    
&lt;/body&gt;&#xA;&lt;/html&gt;&#xA;```&#xA;&#xA;We have included the [htmx](https://htmx.org/docs/#installing) package as a script in this template, as we will be using it in the chat part of the application.&#xA;&#xA;#### Creating the Register Template&#xA;&#xA;```html&#xA;# templates / accounts / register.html&#xA;&#xA;&#xA;{% extends &#39;base.html&#39; %}&#xA;{% block base %}&#xA;    &lt;div class=&#34;content-section&#34;&gt;&#xA;        &lt;form method=&#34;POST&#34;&gt;&#xA;            {% csrf_token %}&#xA;            &lt;fieldset class=&#34;form-group&#34;&gt;&#xA;                &lt;legend class=&#34;border-bottom mb-4&#34;&gt;Register Now&lt;/legend&gt;&#xA;                {{ form.as_p }}&#xA;            &lt;/fieldset&gt;&#xA;            &lt;div class=&#34;form-group&#34;&gt;&#xA;                &lt;button class=&#34;btn btn-outline-info&#34; type=&#34;submit&#34;&gt;Sign Up&lt;/button&gt;&#xA;            &lt;/div&gt;&#xA;        &lt;/form&gt;&#xA;        &lt;div class=&#34;border-top pt-3&#34;&gt;&#xA;            &lt;small class=&#34;text-muted&#34;&gt;&#xA;&#x9;&#x9;    Already Have An Account? 
&lt;a class=&#34;ml-2&#34; href=&#34;{% url &#39;login&#39; %}&#34;&gt;Log In&lt;/a&gt;&#xA;            &lt;/small&gt;&#xA;        &lt;/div&gt;&#xA;    &lt;/div&gt;&#xA;{% endblock %}&#xA;```&#xA;&#xA;![User Registration Page](https://meetgor-cdn.pages.dev/django-htmx/chat-register-page.png)&#xA;&#xA;#### Creating the Login Template&#xA;&#xA;```html&#xA;# templates / accounts / login.html    &#xA;&#xA;&#xA;{% extends &#39;base.html&#39; %}&#xA;{% block base %}&#xA;    &lt;div class=&#34;content-section&#34; id=&#34;login&#34;&gt;&#xA;        &lt;form method=&#34;POST&#34;&gt;&#xA;            {% csrf_token %}&#xA;            &lt;fieldset class=&#34;form-group&#34;&gt;&#xA;                &lt;legend class=&#34;border-bottom mb-4&#34;&gt;LOG IN&lt;/legend&gt;&#xA;                {{ form.as_p }}&#xA;            &lt;/fieldset&gt;&#xA;            &lt;div class=&#34;form-group&#34;&gt;&#xA;                &lt;button class=&#34;btn btn-outline-info&#34; type=&#34;submit&#34;&gt;Log In&lt;/button&gt;&#xA;            &lt;/div&gt;&#xA;        &lt;/form&gt;&#xA;        &lt;div class=&#34;border-top pt-3&#34;&gt;&#xA;            &lt;small class=&#34;text-muted&#34;&gt;&#xA;                Register Here &lt;a class=&#34;ml-2&#34; href=&#34;{% url &#39;register&#39; %}&#34;&gt;Sign Up&lt;/a&gt;&#xA;            &lt;/small&gt;&#xA;        &lt;/div&gt;&#xA;    &lt;/div&gt;&#xA;{% endblock %}&#xA;```&#xA;&#xA;![User Login Page](https://meetgor-cdn.pages.dev/django-htmx/chat-login-page.png)&#xA;&#xA;#### Creating the Logout Template&#xA;&#xA;```html&#xA;# templates / accounts / logout.html    &#xA;&#xA;&#xA;{% extends &#39;base.html&#39; %}&#xA;{% block base %}&#xA;    &lt;h2&gt;You have been logged out&lt;/h2&gt;&#xA;    &lt;div class=&#34;border-top pt-3&#34;&gt;&#xA;        &lt;small class=&#34;text-muted&#34;&gt;&#xA;            &lt;a href=&#34;{% url &#39;login&#39; %}&#34;&gt;Log In Again&lt;/a&gt;&#xA;        &lt;/small&gt;&#xA;    &lt;/div&gt;&#xA;{% endblock %}&#xA;```&#xA;&#xA;## Install and 
set up channels&#xA;&#xA;We will be using channels to create long-running connections. Channels is a wrapper around Django&#39;s asynchronous components that allows us to incorporate other protocols, like WebSockets, alongside plain HTTP.&#xA;&#xA;So, we will be using the Django channels package, which will allow us to use the WebSocket protocol in the chat application. [WebSocket](https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API) is a communication protocol (a set of rules and standards to be followed) that allows both the client and the server to send and receive messages.&#xA;&#xA;To install Django channels, we can use pip and install channels together with daphne, which will serve as the ASGI (Asynchronous Server Gateway Interface) server for the application.&#xA;&#xA;```bash&#xA;pip install -U &#39;channels[daphne]&#39;&#xA;```&#xA;&#xA;So this will install the latest version of channels and daphne. We now have [channels](https://channels.readthedocs.io/en/stable/installation.html) in our Django project; we simply need to configure the `CHANNEL_LAYERS` setting to specify the backend used, which can be `Redis`, `In-Memory`, or others. We also need to add `channels` and `daphne` to the `INSTALLED_APPS` config of the project. 
Make sure the `daphne` app is on top of the applications list.&#xA;&#xA;```python&#xA;# backchat/settings.py&#xA;&#xA;...&#xA;...&#xA;&#xA;INSTALLED_APPS = [&#xA;    &#34;daphne&#34;,&#xA;    ...&#xA;    ...&#xA;    &#34;channels&#34;,&#xA;]&#xA;&#xA;&#xA;ASGI_APPLICATION = &#34;backchat.asgi.application&#34;&#xA;&#xA;...&#xA;...&#xA;&#xA;# For InMemory channels&#xA;&#xA;CHANNEL_LAYERS = {&#xA;    &#34;default&#34;: {&#xA;        &#34;BACKEND&#34;: &#34;channels.layers.InMemoryChannelLayer&#34;,&#xA;    }&#xA;}&#xA;&#xA;&#xA;# For Redis&#xA;&#xA;CHANNEL_LAYERS = {&#xA;    &#34;default&#34;: {&#xA;        &#34;BACKEND&#34;: &#34;channels_redis.core.RedisChannelLayer&#34;,&#xA;        &#34;CONFIG&#34;: {&#xA;            &#34;hosts&#34;: [(&#34;redis-host-url&#34;, 6379)],&#xA;        },&#xA;    },&#xA;}&#xA;```&#xA;&#xA;### Redis Configuration (Optional)&#xA;&#xA;You can use either the [InMemoryChannelLayer](https://channels.readthedocs.io/en/latest/topics/channel_layers.html) or the `RedisChannelLayer` as the backend of the chat app. There are other types of backends like `Amazon SQS`, `RabbitMQ`, `Kafka`, `Google Cloud Pub/Sub`, etc. I will be creating the app with only the `InMemoryChannelLayer` but will provide a guide for Redis as well; both are quite similar and only have a few nuances.&#xA;&#xA;We need to install [channels_redis](https://github.com/django/channels_redis/) for integrating Redis in the Django project with channels.&#xA;&#xA;```bash&#xA;pip install channels_redis&#xA;```&#xA;&#xA;So, this will make the `channels_redis` package available in the project. We use this package for real-time storage; in the case of the chat app, we might use it for storing messages or room details, etc.&#xA;&#xA;## Creating the Chat App&#xA;&#xA;Further, we can now create another app for handling the rooms and chat application logic. This app will have its own models, views, and URLs. 
Also, we will define consumers and routers, which are just like views and URLs but for asynchronous connections. More on that soon.&#xA;&#xA;So, let&#39;s create the `chat` app.&#xA;&#xA;```bash&#xA;python manage.py startapp chat&#xA;```&#xA;&#xA;Then we will add the chat app to the `INSTALLED_APPS` config.&#xA;&#xA;```python&#xA;# backchat/settings.py&#xA;&#xA;INSTALLED_APPS = [&#xA;    ...&#xA;    ...,&#xA;    &#34;chat&#34;,&#xA;]&#xA;```&#xA;&#xA;This makes sure the chat app is loaded in the project: whenever we run any commands like migrations or the development server, Django picks up the apps listed in `INSTALLED_APPS`.&#xA;&#xA;### Defining models&#xA;&#xA;This is optional, but we&#39;ll do it since we are making a Django app. We already have an auth system configured, so adding rooms and authorizing the users will become easier.&#xA;&#xA;So, let&#39;s create the models for the chat app, which will be `Room` and `Message`.&#xA;&#xA;```python&#xA;# chat/models.py&#xA;&#xA;&#xA;from django.db import models&#xA;from accounts.models import User&#xA;&#xA;class Room(models.Model):&#xA;    name = models.CharField(max_length=128)&#xA;    slug = models.SlugField(unique=True)&#xA;    users = models.ManyToManyField(User)&#xA;&#xA;    def __str__(self):&#xA;        return self.name&#xA;&#xA;&#xA;class Message(models.Model):&#xA;    room = models.ForeignKey(Room, on_delete=models.CASCADE)&#xA;    user = models.ForeignKey(User, on_delete=models.CASCADE)&#xA;    message = models.TextField()&#xA;    created_at = models.DateTimeField(auto_now_add=True)&#xA;&#xA;    def __str__(self):&#xA;        return (&#xA;            self.room.name + &#34; - &#34; +&#xA;            str(self.user.username) + &#34; : &#34; +&#xA;            str(self.message)&#xA;        )&#xA;```&#xA;&#xA;So, we simply have the name, which will be taken from the user, and the slug, which will serve as the URL identifier for the room. 
The users are set as a [ManyToManyField](https://docs.djangoproject.com/en/4.1/ref/models/fields/#django.db.models.ManyToManyField) since one room can have multiple users and a user can be in multiple rooms. We are also creating the model `Message`, which stores the room and the user as foreign keys and the actual text as the message; we could improve security by encrypting the text, but that&#39;s not the point of this article.&#xA;&#xA;The `created_at` field stores the time when the object was created. Finally, the dunder string method is used for representing the message object as a piece of concatenated strings: room name, username, and the message. This is useful for admin stuff as it makes the object easier to read than the default id.&#xA;&#xA;Now, once the models are designed, we can migrate the models into the database.&#xA;```&#xA;python manage.py makemigrations&#xA;python manage.py migrate&#xA;```&#xA;&#xA;![Chat app migrations](https://meetgor-cdn.pages.dev/django-htmx/chat-migrations.png)&#xA;&#xA;We now have a decent model structure for the chat application. We can now start the crux of the application, i.e. consumers and routing with channels.&#xA;&#xA;### Writing consumers and routers for WebSockets&#xA;&#xA;So, we start with the bare bones provided in the tutorial in the channels [documentation](https://channels.readthedocs.io/en/stable/tutorial/part_3.html). We create a class (consumer) called `ChatConsumer` which inherits from the `AsyncWebsocketConsumer` provided by the `channels.generic.websocket` module. It has a few methods like `connect`, `disconnect`, and `receive`. These are the building blocks of a consumer. 
We define these methods as they will be used for communication via the WebSocket protocol through the channels interface.&#xA;&#xA;In the following block of code, we are essentially doing the following:&#xA;&#xA;* Accepting connection on the requested room name&#xA;    &#xA;* Sending and Receiving messages on the room/group&#xA;    &#xA;* Closing the WebSocket connection and removing the client from the room/group&#xA;    &#xA;&#xA;```python&#xA;# chat/consumers.py&#xA;&#xA;&#xA;import json&#xA;&#xA;from asgiref.sync import sync_to_async&#xA;from channels.generic.websocket import AsyncWebsocketConsumer&#xA;&#xA;from chat.models import Room, Message&#xA;&#xA;&#xA;class ChatConsumer(AsyncWebsocketConsumer):&#xA;    async def connect(self):&#xA;        self.room_name = self.scope[&#34;url_route&#34;][&#34;kwargs&#34;][&#34;room_slug&#34;]&#xA;        self.room_group_name = f&#34;chat_{self.room_name}&#34;&#xA;        self.user = self.scope[&#34;user&#34;]&#xA;&#xA;        await self.channel_layer.group_add(&#xA;            self.room_group_name, self.channel_name&#xA;        )&#xA;&#xA;        await self.accept()&#xA;&#xA;    async def disconnect(self, close_code):&#xA;        await self.channel_layer.group_discard(&#xA;            self.room_group_name, self.channel_name&#xA;        )&#xA;&#xA;    async def receive(self, text_data):&#xA;        text_data_json = json.loads(text_data)&#xA;        message = text_data_json[&#34;message&#34;]&#xA;        username = self.user.username&#xA;        &#xA;        await self.channel_layer.group_send(&#xA;            self.room_group_name, &#xA;            {&#xA;                &#34;type&#34;: &#34;chat_message&#34;,&#xA;                &#34;message&#34;: message,&#xA;                &#34;username&#34;: username,&#xA;            }&#xA;        )&#xA;&#xA;    async def chat_message(self, event):&#xA;        message = event[&#34;message&#34;]&#xA;        username = event[&#34;username&#34;]&#xA;&#xA;        await self.send(&#xA;       
     text_data=json.dumps(&#xA;                {&#xA;                    &#34;message&#34;: message,&#xA;                    &#34;username&#34;: username&#xA;                }&#xA;            )&#xA;        )&#xA;```&#xA;&#xA;#### Accept the WebSocket connection&#xA;&#xA;Here, room and group more or less mean the same thing but in different contexts. The group refers to the collection of clients connected to a channel (the communication between consumer and WebSocket), while the room refers to the collection of clients connected to a single connection stream, like a literal room. So we can say the group is the technical term and the room is more of a layman&#39;s term for the same thing.&#xA;&#xA;The method `connect` is called when the client establishes a WebSocket connection. When that happens, the function gets the room slug from the URL of the client and stores it in `room_name`, which is a string. It creates a separate variable called `room_group_name` by prepending the `chat_` string to the `room_name`, and it also gets the currently logged-in user from the request. It then adds the `channel_name` to the group named `room_group_name`. The `channel_name` is a unique identifier of the connection/consumer in the channel. By adding the `channel_name`, the consumer can then broadcast messages to all the channels within the group. Finally, the function accepts the connection, and a **WebSocket connection is established from both ends: the connection request is sent from the client and is now accepted by the backend.**&#xA;&#xA;#### Disconnect from the WebSocket connection&#xA;&#xA;When the client sends a close connection request to the server, the `disconnect` method is triggered, and it removes the `channel_name` from the group named `room_group_name`, whatever that string happens to be. 
So, it removes the client from the broadcast group, and hence the client can&#39;t receive or send messages through the WebSocket since it has been closed from both ends.&#xA;&#xA;You may have noticed the `close_code` parameter of the method; it is not being used currently. However, we can use it to check why the connection was closed, as the `close_code` is a numeric value, just like the status code in a web request, for letting the server know the reason the client disconnected.&#xA;&#xA;#### Receive a message from the WebSocket connection&#xA;&#xA;The `receive` method is the core of the consumer, as it is responsible for all the logic, parsing, and broadcasting of messages from the clients to the group channels. The function takes in a parameter called `text_data`, which is sent from the client through the WebSocket, so it is JSON content. We need to get the actual message, or any other piece of content from the client, out of the JSON object. So, we deserialize (convert the JSON object to Python objects) the received payload and get the value of the key `message`. The key is the input name or id from the client sending the request through the WebSocket, so it can differ depending on the frontend template (we&#39;ll see the front end soon as well).&#xA;&#xA;We get the user from the scope of the consumer as we previously initialized it in the connect method. This can be used for understanding which user has sent the message, and it will be used later on as we send/broadcast the message to the group.&#xA;&#xA;Now, the final piece in the receive method is the `channel_layer.group_send` method. This method, as the name suggests, is used to send or broadcast the received message to the entire group. The method has two parameters:&#xA;&#xA;1. The name of the group&#xA;    &#xA;2. 
The JSON body containing the message and other details&#xA;    &#xA;&#xA;The method does not directly send the message; instead, the `type` key in the JSON body names the handler function to call, and the remaining keys in the dict are passed along in the event given to that function. In this case, the function specified in the `type` key is `chat_message`, which takes in the `event` as its parameter. This event will have all the values from the `group_send` method.&#xA;&#xA;So, `chat_message` will load in the message and the username and then call the `send` method, which actually sends the message as a JSON payload over the WebSocket connection; it will be received by all the clients in the same group, as provided by the `room_group_name` string.&#xA;&#xA;### Adding Routers for WebSocket connections&#xA;&#xA;So, till this point we have consumers, which are just like views in terms of channels. Now, we need some URL routes to map these consumers to a path. So, we will create a file/module called `routing.py` which will look quite similar to the `urls.py` file. It will have a list called `websocket_urlpatterns`, just like `urlpatterns`, with the list of `path` entries. These paths however are not `http` routes but will serve as the `WebSocket` paths.&#xA;&#xA;&#xA;```python&#xA;# chat / routing.py&#xA;&#xA;&#xA;from django.urls import path&#xA;&#xA;from chat import consumers&#xA;&#xA;websocket_urlpatterns = [&#xA;    path(&#39;chat/&lt;str:room_slug&gt;/&#39;, consumers.ChatConsumer.as_asgi()),&#xA;]&#xA;```&#xA;&#xA;In the above code block, we have defined a URL for the WebSocket with the path `/chat/&lt;room_slug&gt;/`, where `room_slug` will be the `slug` of the room. The path is bound to the consumer defined in the `consumers.py` module, `ChatConsumer`. 
The `as_asgi` method wraps the consumer into an ASGI-compatible application for the WebSocket interface, much like `as_view` does for class-based views.&#xA;&#xA;### Setting up ASGI Application&#xA;&#xA;We are using the ASGI application config rather than a typical WSGI application, which only works on one request at a time. We want the chat application to be asynchronous because multiple clients might send and receive messages at the same time, and we don&#39;t want to make a client wait while the server processes a message from another client; that is exactly why we are using the WebSocket protocol.&#xA;&#xA;So, we need to make sure plain `http` requests are still served, and also add our WebSocket config from the chat app we created in the previous sections. So, inside the `asgi.py` file in the project config module, we need to make some changes to include the chat application configurations.&#xA;&#xA;```python&#xA;# backchat / asgi.py&#xA;&#xA;&#xA;import os&#xA;from django.core.asgi import get_asgi_application&#xA;from channels.auth import AuthMiddlewareStack&#xA;from channels.routing import ProtocolTypeRouter, URLRouter&#xA;&#xA;os.environ.setdefault(&#39;DJANGO_SETTINGS_MODULE&#39;, &#39;backchat.settings&#39;)&#xA;&#xA;from chat import routing&#xA;&#xA;&#xA;application = ProtocolTypeRouter({&#xA;    &#39;http&#39;: get_asgi_application(),&#xA;    &#34;websocket&#34;: AuthMiddlewareStack(&#xA;        URLRouter(&#xA;            routing.websocket_urlpatterns&#xA;        )&#xA;    )&#xA;})&#xA;```&#xA;&#xA;We will override the `application` config, which is the component used for routing different types of protocols for the `ASGI` application. We have set the two keys, `http` and `websocket`, in our application. 
The `http` type of requests will be served with the `get_asgi_application` application, which is used for running the application in an ASGI environment.&#xA;&#xA;For `websocket` type requests, we are setting the [AuthMiddlewareStack](https://channels.readthedocs.io/en/latest/topics/authentication.html), which helps in authenticating the users requesting the WebSocket connection and allows only authorized users to make a connection to the application. The [URLRouter](https://channels.readthedocs.io/en/stable/topics/routing.html) is used for mapping the list of URL patterns with the incoming request. So, this basically serves the request URL with the appropriate consumer in the application. We are passing in the `websocket_urlpatterns` as the list of URLs that will be used for the WebSocket connections.&#xA;&#xA;Now, when we run the server, we should see the `ASGI` server serving our application rather than the plain WSGI application.&#xA;&#xA;```&#xA;$ python manage.py runserver&#xA;&#xA;Watching for file changes with StatReloader&#xA;Performing system checks...&#xA;&#xA;System check identified no issues (0 silenced).&#xA;February 05, 2023 - 09:22:45&#xA;Django version 4.1.5, using settings &#39;backchat.settings&#39;&#xA;Starting ASGI/Daphne version 4.0.0 development server at http://127.0.0.1:8000/&#xA;Quit the server with CONTROL-C.&#xA;```&#xA;&#xA;&#xA;The application is not complete yet; most components are not functional at this point. So, we&#39;ll now get into making the user interfaces for the application, to create, join, and view rooms.&#xA;&#xA;### Adding Views for Chat Rooms&#xA;&#xA;We will have a few views: the create room page, the join room page, and the chat room page. 
We will define each view as a distinct view and all of them will require authenticated users.&#xA;&#xA;```python&#xA;# chat / views.py&#xA;&#xA;&#xA;import string&#xA;import random&#xA;from django.contrib.auth.decorators import login_required&#xA;from django.shortcuts import render, reverse, redirect&#xA;from django.utils.text import slugify&#xA;from chat.models import Room&#xA;&#xA;&#xA;@login_required&#xA;def index(request, slug):&#xA;    room = Room.objects.get(slug=slug)&#xA;    return render(request, &#39;chat/room.html&#39;, {&#39;name&#39;: room.name, &#39;slug&#39;: room.slug})&#xA;&#xA;@login_required&#xA;def room_create(request):&#xA;    if request.method == &#34;POST&#34;:&#xA;        room_name = request.POST[&#34;room_name&#34;]&#xA;        uid = str(&#39;&#39;.join(random.choices(string.ascii_letters + string.digits, k=4)))&#xA;        room_slug = slugify(room_name + &#34;_&#34; + uid)&#xA;        room = Room.objects.create(name=room_name, slug=room_slug)&#xA;        return redirect(reverse(&#39;chat&#39;, kwargs={&#39;slug&#39;: room.slug}))&#xA;    else:&#xA;        return render(request, &#39;chat/create.html&#39;)&#xA;&#xA;@login_required&#xA;def room_join(request):&#xA;    if request.method == &#34;POST&#34;:&#xA;        room_slug = request.POST[&#34;room_slug&#34;]&#xA;        room = Room.objects.get(slug=room_slug)&#xA;        return redirect(reverse(&#39;chat&#39;, kwargs={&#39;slug&#39;: room.slug}))&#xA;    else:&#xA;        return render(request, &#39;chat/join.html&#39;)&#xA;```&#xA;&#xA;In the above views module, we have added 3 views namely `index` as the room page, `room_create` for the room creation page, and the `room_join` for the room join page. The index view is a simple get request to the provided slug of the room, it gets the slug from the URL from the request and fetches an object of the room associated with that slug. 
Then it renders the room template with context variables like the name of the room and the slug associated with that room.&#xA;&#xA;The `room_create` view is a simple two-case view that either renders the room creation page or processes the submitted form and creates the room, just like the `register` view in the accounts app. When the user sends a `GET` request to the URL (which we will map to `/create/` shortly), the user will be given a form, so we will render the `create.html` template, which we will create shortly. &#xA;If the user has sent a `POST` request to the view via the `/create` URL, we will fetch the name field from the sent request and create a unique identifier to go with the name of the room. We will slugify the concatenation of the name with the uid and set it as the slug of the room. We will then simply create the room and redirect the user to the `room` page.&#xA;&#xA;The `room_join` view is also a two-case view, where the user can either request the join room form or send a slug with the form submission. If the user is requesting a form, we will render the `join.html` template. If the user is submitting the form, we will fetch the room based on the slug provided and redirect the user to the `room` page.&#xA;&#xA;So, the `room_join` and `room_create` views are quite similar; we are fetching an already existing room in the join view and creating a new instance of a room in the create view. Now, we will connect the views to the URLs and finally get to the templates.&#xA;&#xA;### Connecting Views and URLs&#xA;&#xA;We have three views to route to a URL. But we will also have one additional URL, which will be the home page for the application; on that page the user can choose to either join or create a room. 
We have the room creation, room join, and room chat views to be mapped in this URL routing of the app.&#xA;&#xA;```python&#xA;# chat / urls.py&#xA;&#xA;&#xA;from django.urls import path&#xA;from django.views.generic import TemplateView&#xA;from chat import views&#xA;&#xA;&#xA;urlpatterns = [&#xA;    path(&#34;&#34;, TemplateView.as_view(template_name=&#34;base.html&#34;), name=&#39;index&#39;),&#xA;    path(&#34;room/&lt;str:slug&gt;/&#34;, views.index, name=&#39;chat&#39;),&#xA;    path(&#34;create/&#34;, views.room_create, name=&#39;room-create&#39;),&#xA;    path(&#34;join/&#34;, views.room_join, name=&#39;room-join&#39;),&#xA;]&#xA;```&#xA;&#xA;So, the first route is the home page view called `index`. We have used [TemplateView](https://docs.djangoproject.com/en/4.1/ref/class-based-views/base/#templateview), which will simply render the template provided, so we don&#39;t have to create a separate view just for rendering a template. We already defined the `base.html` template while setting up the `accounts` app; this will be the same template, and we will add some more content to it later on. The URL mapped is `/`, since we will add the URLs of this app to the project URLs with an empty `&#34;&#34;` path.&#xA;&#xA;The second route is used for the room index page, i.e. where the user will be able to send and receive messages. The path is assigned as `/room/&lt;str:slug&gt;/`, indicating a parameter called slug of type string will be used in accessing a particular room. The URL is bound to the `index` view that we created in the views file, which fetches the room based on the slug, so here is where the slug will be coming from. The name of the URL-view route will be `chat`, but you can set it as per your requirements. The URL name is really handy for use in the templates.&#xA;&#xA;The third route is for the room creation page. 
The `/create/` URL will be bound to the `room_create` view. As we discussed, it serves two purposes: one, to render the form for creating the room, and the other, to accept a `POST` request on the same path for creating the Room with the name provided. The name is not required but helps in identifying the room and making it user-friendly. The name of this URL is set as `room-create`.&#xA;&#xA;The final route is for joining the room; the `/join/` URL will trigger the `room_join` view. Similar to the `room-create` URL, it will render the join room form on a `GET` request, or fetch the room with the provided slug and redirect to the room page. Here, the slug field in the form will be required for actually finding the matching room. The name of the URL route is set as `room-join`.&#xA;&#xA;We now add the URLs from the chat app to the project URLs. This will make `/` the entry point for the chat application URLs.&#xA;&#xA;```python&#xA;# backchat / urls.py&#xA;&#xA;&#xA;from django.contrib import admin&#xA;from django.urls import path, include&#xA;&#xA;urlpatterns = [&#xA;    path(&#34;admin/&#34;, admin.site.urls),&#xA;    path(&#34;auth/&#34;, include(&#39;accounts.urls&#39;)),&#xA;    path(&#34;&#34;, include(&#39;chat.urls&#39;)),&#xA;]&#xA;```&#xA;&#xA;With that, the backend&#39;s side of processing a message is complete; it is then up to the client to process and render the message.&#xA;&#xA;That used to mean writing JavaScript. Not since HTMX became a thing!&#xA;&#xA;We won&#39;t have to write a single line of JavaScript to receive and handle the WebSocket connection!&#xA;&#xA;### Creating Templates and adding htmx&#xA;&#xA;We now move into the actual frontend, creating the templates for working with the rooms and user interaction. We will have three templates: a room create page, a room join page, and a room chat page. 
As these template names suggest, they will be used for creating a room with a name, joining a room with a slug, and the room chat page where the user will send and receive messages.&#xA;&#xA;Let&#39;s modify the base template first.&#xA;&#xA;```html&#xA;&lt;!DOCTYPE html&gt;&#xA;&lt;html&gt;&#xA;    &lt;head&gt;&#xA;        &lt;meta charset=&#34;utf-8&#34; /&gt;&#xA;        &lt;title&gt;Chat App&lt;/title&gt;&#xA;        {% load static %}&#xA;        &lt;script src=&#34;https://unpkg.com/htmx.org@1.8.5&#34;&gt;&lt;/script&gt;&#xA;    &lt;/head&gt;&#xA;    &lt;body hx-ext=&#34;ws&#34;&gt;&#xA;        &lt;a href=&#34;{% url &#39;index&#39; %}&#34;&gt;Home&lt;/a&gt;&#xA;        {% if user.is_authenticated %}&#xA;            &lt;a href=&#34;{% url &#39;logout&#39; %}&#34;&gt;Logout&lt;/a&gt;&#xA;        {% else %}&#xA;            &lt;a href=&#34;{% url &#39;login&#39; %}&#34;&gt;Login&lt;/a&gt;&#xA;        {% endif %}&#xA;        &lt;h1&gt;Back Chat&lt;/h1&gt;&#xA;        {% block base %}&#xA;            &lt;a href=&#34;{% url &#39;room-join&#39; %}&#34;&gt;Join Room&lt;/a&gt;&#xA;            &lt;a href=&#34;{% url &#39;room-create&#39; %}&#34;&gt;Create Room&lt;/a&gt;&#xA;        {% endblock %}&#xA;    &lt;/body&gt;&#xA;&lt;/html&gt;&#xA;```&#xA;&#xA;![Chat App Home Page](https://meetgor-cdn.pages.dev/django-htmx/chat-home-page.png)&#xA;&#xA;#### Create Room Template&#xA;&#xA;We will have to render the form with a field like `name` for setting the name of the room; it is not required, but a name makes the room a bit friendlier for the user to find than random characters.&#xA;&#xA;```html&#xA;# templates / chat / create.html&#xA;&#xA;&#xA;{% extends &#39;base.html&#39; %}&#xA;&#xA;{% block base %}&#xA;    &lt;form method=&#39;post&#39; action=&#39;{% url &#39;room-create&#39; %}&#39;&gt;&#xA;        {% csrf_token %}&#xA;        &lt;input name=&#39;room_name&#39; id=&#39;room_name&#39; placeholder=&#39;Room Name&#39;&gt;&#xA;        &lt;input type=&#39;submit&#39; 
id=&#34;submit&#34;&gt;&#xA;    &lt;/form&gt;&#xA;{% endblock %}&#xA;```&#xA;&#xA;![Chat Room Create Page](https://meetgor-cdn.pages.dev/django-htmx/chat-create-room-page.png)&#xA;&#xA;The template inherits from the `base.html` template, and we render a form with the `room_name` input. The form will be submitted to the URL named `room-create`, so it will send a `POST` request to the server, which will create the room and further process the request.&#xA;&#xA;#### Join Room Template&#xA;&#xA;The join room template is similar to the create room template, except that it asks the user for the slug of the room rather than the name, since the name is not unique enough to identify the room to join.&#xA;&#xA;```html&#xA;# templates / chat / join.html&#xA;&#xA;&#xA;{% extends &#39;base.html&#39; %}&#xA;&#xA;{% block base %}&#xA;    &lt;form method=&#39;post&#39; action=&#39;{% url &#39;room-join&#39; %}&#39;&gt;&#xA;        {% csrf_token %}&#xA;        &lt;input name=&#39;room_slug&#39; id=&#39;room_slug&#39; required=&#39;true&#39; placeholder=&#39;Room Code&#39;&gt;&#xA;        &lt;input type=&#39;submit&#39; id=&#34;submit&#34;&gt;&#xA;    &lt;/form&gt;&#xA;{% endblock %}&#xA;```&#xA;&#xA;![Chat Room Join Page](https://meetgor-cdn.pages.dev/django-htmx/chat-join-room-page.png)&#xA;&#xA;The form will be submitted to the URL named `room-join`, so it will send a `POST` request to the server, which will fetch the room and further process the request.&#xA;&#xA;### Room Template (HTMX code)&#xA;&#xA;Now, time for the actual ingredient in the application, some HTMX magic!&#xA;&#xA;This template, like the two templates above, inherits from the base template. But it has a special `div` with the attribute [hx-ws](https://htmx.org/attributes/hx-ws/), which provides the WebSocket-related attributes in the htmx library. The `connect` value is used for connecting to a WebSocket; the value of the attribute must be set to the URL of the WebSocket to connect to. 
In our case, it is the URL path from the `routing` app as `/chat/&lt;room_slug&gt;/`. This simply will connect the client to the WebSocket from the backend. The other important attribute is the `send` which is used for sending a message to the connected web socket.&#xA;&#xA;```html&#xA;# templates / chat / room.html&#xA;&#xA;&#xA;{% extends &#39;base.html&#39; %}&#xA;&#xA;{% block base %}&#xA;    &lt;h2&gt;{{ name }}&lt;/h2&gt;&#xA;    &lt;div hx-ws=&#34;connect:/chat/{{ slug }}/&#34;&gt;&#xA;        &lt;form hx-ws=&#34;send:submit&#34;&gt;&#xA;            &lt;input name=&#34;message&#34;&gt;&#xA;            &lt;input type=&#34;submit&#34;&gt;&#xA;        &lt;/form&gt;&#xA;     &lt;/div&gt;&#xA;     &lt;div id=&#39;messages&#39;&gt;&lt;/div&gt;&#xA;{% endblock %}&#xA;```&#xA;&#xA;![Chat Room Page](https://meetgor-cdn.pages.dev/django-htmx/chat-room-page.png)&#xA;&#xA;NOTE: The template has a div with the id `messages` which will be very important for sending the messages from the WebSocket to the client, so more on that when we use the HTMX part.&#xA;&#xA;For testing this template, you can create a room, and that will redirect you to the room template as we have seen in the views for the room creation. 
If you see something like `WebSocket CONNECT` it means, that the application has been able to establish a WebSocket connection to the backend, and we can be ready to accept messages and other stuff.&#xA;&#xA;```&#xA;HTTP GET /chat/room/def_teas/ 200 [0.03, 127.0.0.1:38660]&#xA;WebSocket HANDSHAKING /chat/def_teas/ [127.0.0.1:38666]&#xA;WebSocket CONNECT /chat/def_teas/ [127.0.0.1:38666]&#xA;```&#xA;&#xA;![Django ASGI server websocket connection](https://meetgor-cdn.pages.dev/django-htmx/chat-asgi-server.png)&#xA;&#xA;Till this point, we should have a running and almost complete application, though we just have a minor part missing that will be the most important part.&#xA;&#xA;### Sending HTML response from backend for htmx&#xA;&#xA;We will be sending a fragment of HTML from the backend when the user sends a message, to broadcast it to the group. Let&#39;s make some changes to the application, especially to the receive method in the `ChatConsumer` of the chat application.&#xA;&#xA;```python&#xA;# chat / consumers.py&#xA;    &#xA;&#xA;    ...&#xA;    ...&#xA;&#xA;    async def receive(self, text_data):&#xA;        text_data_json = json.loads(text_data)&#xA;        message = text_data_json[&#34;message&#34;]&#xA;        user = self.user&#xA;        username = user.username&#xA;&#xA;        await self.channel_layer.group_send(&#xA;            self.room_group_name, &#xA;            {&#xA;                &#34;type&#34;: &#34;chat_message&#34;,&#xA;                &#34;message&#34;: message,&#xA;                &#34;username&#34;: username,&#xA;            }&#xA;        )&#xA;&#xA;    async def chat_message(self, event):&#xA;        message = event[&#34;message&#34;]&#xA;        username = event[&#34;username&#34;]&#xA;&#xA;        # This is the crucial part of the application&#xA;        message_html = f&#34;&lt;div hx-swap-oob=&#39;beforeend:#messages&#39;&gt;&lt;p&gt;&lt;b&gt;{username}&lt;/b&gt;: {message}&lt;/p&gt;&lt;/div&gt;&#34;&#xA;        await self.send(&#xA;  
          text_data=json.dumps(&#xA;                {&#xA;                    &#34;message&#34;: message_html,&#xA;                    &#34;username&#34;: username&#xA;                }&#xA;            )&#xA;        )&#xA;```&#xA;&#xA;![Chat Room Message](https://meetgor-cdn.pages.dev/django-htmx/chat-room-msg-1.png)&#xA;![Chat Room Message 2 Users](https://meetgor-cdn.pages.dev/django-htmx/chat-room-msg-2.png)&#xA;![Chat Room Message](https://meetgor-cdn.pages.dev/django-htmx/chat-room-msg-3.png)&#xA;&#xA;&#xA;In the above snippet, we simply change the final message object to include some HTML. The HTML, however, has some htmx attributes like [hx-swap-oob](https://htmx.org/attributes/hx-swap-oob/), which updates the specified DOM element with the content of the div. In this case, the DOM element is `#messages`, the id present in the room template. We basically append the username and the message to that element. That&#39;s it; it works, and it starts showing the messages from the connected clients and broadcasting them as well.&#xA;&#xA;There are some things to keep in mind while using htmx in the long run, especially when htmx 2.0 is released: it will have `ws` as a separate extension, with a slightly different syntax than above. I have tried the latest version but it doesn&#39;t seem to work. 
I&#39;ll just leave a few snippets for your understanding of the problem.&#xA;&#xA;&#xA;```html&#xA;# templates / chat / room.html&#xA;&#xA;&#xA;{% extends &#39;base.html&#39; %}&#xA;&#xA;{% block base %}&#xA;    &lt;h2&gt;{{ name }}&lt;/h2&gt;&#xA;    &lt;div hx-ext=&#34;ws&#34; ws-connect=&#34;/chat/{{ slug }}/&#34;&gt;&#xA;        &lt;form ws-send&gt;&#xA;            &lt;input name=&#34;message&#34;&gt;&#xA;        &lt;/form&gt;&#xA;    &lt;/div&gt;&#xA;    &lt;div id=&#39;messages&#39;&gt;&lt;/div&gt;&#xA;{% endblock %}&#xA;```&#xA;&#xA;I have added the `hx-ext` attribute as `ws`, which enables the htmx [extension for websockets](https://htmx.org/extensions/web-sockets/). This extension has websocket-specific attributes like `ws-connect` and `ws-send`. I have tried a few combinations, like changing the htmx versions, adding a submit value to the `ws-send` attribute, etc., but no results yet. I have opened a [discussion](https://github.com/bigskysoftware/htmx/discussions/1231) on GitHub for this issue; you can share your solution or views there.&#xA;&#xA;### Adding some utility features for the chat app&#xA;&#xA;We can save messages, add and remove users from the room as they connect and disconnect, and other stuff that can make this a fully-fledged app. 
So, I have made a few changes to the chat consumers for saving the messages and also updating the room with the users in the room.&#xA;&#xA;```python&#xA;# chat / consumers.py&#xA;&#xA;&#xA;import json&#xA;&#xA;from asgiref.sync import sync_to_async&#xA;from channels.generic.websocket import AsyncWebsocketConsumer&#xA;&#xA;from chat.models import Room, Message&#xA;&#xA;&#xA;class ChatConsumer(AsyncWebsocketConsumer):&#xA;    async def connect(self):&#xA;        self.room_name = self.scope[&#34;url_route&#34;][&#34;kwargs&#34;][&#34;room_slug&#34;]&#xA;        self.room_group_name = &#34;chat_%s&#34; % self.room_name&#xA;        self.user = self.scope[&#34;user&#34;]&#xA;&#xA;        await self.channel_layer.group_add(&#xA;            self.room_group_name, self.channel_name&#xA;        )&#xA;&#xA;        # Add the user when the client connects&#xA;        await self.add_user(self.room_name, self.user)&#xA;&#xA;        await self.accept()&#xA;&#xA;    async def disconnect(self, close_code):&#xA;&#xA;        # Remove the user when the client disconnects&#xA;        await self.remove_user(self.room_name, self.user)&#xA;&#xA;        await self.channel_layer.group_discard(&#xA;            self.room_group_name, self.channel_name&#xA;        )&#xA;&#xA;    async def receive(self, text_data):&#xA;        text_data_json = json.loads(text_data)&#xA;        message = text_data_json[&#34;message&#34;]&#xA;        user = self.user&#xA;        username = user.username&#xA;        room = self.room_name&#xA;&#xA;        # Save the message on recieving&#xA;        await self.save_message(room, user, message)&#xA;&#xA;        await self.channel_layer.group_send(&#xA;            self.room_group_name, &#xA;            {&#xA;                &#34;type&#34;: &#34;chat_message&#34;,&#xA;                &#34;message&#34;: message,&#xA;                &#34;username&#34;: username,&#xA;            }&#xA;        )&#xA;&#xA;    async def chat_message(self, event):&#xA;        message = 
event[&#34;message&#34;]&#xA;        username = event[&#34;username&#34;]&#xA;&#xA;&#xA;        message_html = f&#34;&lt;div hx-swap-oob=&#39;beforeend:#messages&#39;&gt;&lt;p&gt;&lt;b&gt;{username}&lt;/b&gt;: {message}&lt;/p&gt;&lt;/div&gt;&#34;&#xA;        await self.send(&#xA;            text_data=json.dumps(&#xA;                {&#xA;                    &#34;message&#34;: message_html,&#xA;                    &#34;username&#34;: username&#xA;                }&#xA;            )&#xA;        )&#xA;&#xA;    @sync_to_async&#xA;    def save_message(self, room, user, message):&#xA;        room = Room.objects.get(slug=room)&#xA;        Message.objects.create(room=room, user=user, message=message)&#xA;&#xA;    @sync_to_async&#xA;    def add_user(self, room, user):&#xA;        room = Room.objects.get(slug=room)&#xA;        if user not in room.users.all():&#xA;            room.users.add(user)&#xA;            room.save()&#xA;&#xA;    @sync_to_async&#xA;    def remove_user(self, room, user):&#xA;        room = Room.objects.get(slug=room)&#xA;        if user in room.users.all():&#xA;            room.users.remove(user)&#xA;            room.save()&#xA;```&#xA;&#xA;So, we have created a few methods like `save_message`, `add_user`, and `remove_user` which all are `synchronous` methods but we are using an asynchronous web server, so we add in the `sync_to_async` decorator which wraps a synchronous method to an asynchronous method. Inside the methods, we simply perform the database operations like creating a message object, and adding or removing the user from the room.&#xA;&#xA;That&#39;s only a few features that I have added, you can add to this application and customize them as per your needs.&#xA;&#xA;The code for this chat app is provided in the [GitHub repository](https://github.com/Mr-Destructive/django-htmx-chat).&#xA;&#xA;## Conclusion&#xA;&#xA;So, from this post, we were able to create a simple chat app (frontendless) with Django and htmx. 
We used Django channels and HTMX to make a chat application without needing to write javascript for the client-side connection. Hope you found this tutorial helpful; do share your feedback and thoughts on it, I&#39;ll be eager to improve this post. Thank you for reading. Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>Golang: Command Line Arguments</title>
      <link>https://www.meetgor.com/posts/golang-command-line-args</link>
      <description>Introduction In the 25th post of the series, we will be taking a look into parsing of command line arguments in golang. We will be exploring how to do the basic</description>
      <pubDate>Sat, 07 Jan 2023 00:00:00 UTC</pubDate>
<content>## Introduction&#xA;&#xA;In the 25th post of the series, we will be taking a look into parsing of command line arguments in golang. We will be exploring how to do the basics of parsing and using the positional parameters or arguments from the command line in the program. By using standard library packages like `os` and `flag`, we can make powerful yet easy-to-build CLI apps and programs.&#xA;&#xA;## Parsing Arguments from the command line (os package)&#xA;&#xA;We can use the os package to get the arguments from the command line in a go script. We have to use the Args variable in the os package. The `Args` variable is a slice of strings holding the arguments parsed from the command line.&#xA;&#xA;* The first (0 index) argument is the path to the program&#xA;    &#xA;* The 1st index onwards are the actual arguments passed.&#xA;    &#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;os&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;args := os.Args&#xA;&#x9;fmt.Printf(&#34;Type of Args = %T\n&#34;, args)&#xA;&#x9;fmt.Println(args[0], args[1])&#xA;}&#xA;```&#xA;&#xA;```bash&#xA;$ go run main.go hello&#xA;Type of Args = []string&#xA;/tmp/go-build1414795487/b001/exe/main hello&#xA;```&#xA;&#xA;In the above example, we can see that `Args` is a slice of string and we can index into it as per the arguments passed from the command line.&#xA;&#xA;If you don&#39;t pass any arguments and access the 1st argument as `os.Args[1]`, it will result in an `index out of range` error. 
So, you need to first check if the argument is passed and set a default value otherwise.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;os&#34;&#xA;&#x9;&#34;strconv&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;var port int&#xA;&#x9;var err error&#xA;&#x9;if len(os.Args) &gt; 1 {&#xA;&#x9;&#x9;port, err = strconv.Atoi(os.Args[1])&#xA;&#x9;&#x9;if err != nil {&#xA;&#x9;&#x9;&#x9;panic(err)&#xA;&#x9;&#x9;}&#xA;&#x9;} else {&#xA;&#x9;&#x9;port = 8000&#xA;&#x9;}&#xA;&#x9;fmt.Println(port)&#xA;}&#xA;```&#xA;&#xA;```bash&#xA;$ go run main.go&#xA;8000&#xA;&#xA;$ go run main.go 7000&#xA;7000&#xA;&#xA;$ go run main.go h&#xA;panic: strconv.Atoi: parsing &#34;h&#34;: invalid syntax&#xA;```&#xA;&#xA;In the above example, we have declared the port variable as an integer and checked whether an argument was passed from the command line using the len function; if one was, we simply cast it into an integer using the [strconv.Atoi](https://pkg.go.dev/strconv#Atoi) function. If there is any error in the process, we panic out of the program with it. So, this is how we can set default values or check for any arguments from the command line in golang.&#xA;&#xA;### Get the number of args&#xA;&#xA;We can use the len function with the `Args` slice to get the total number of arguments from the command line. To ignore the first argument, which would be the path to the program, we can simply slice it off as `os.Args[1:]`. 
This will slice the list of the arguments from the first index till the last element in the slice.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;os&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;total_args := len(os.Args[1:])&#xA;&#x9;fmt.Println(&#34;Total Args =&#34;, total_args)&#xA;}&#xA;```&#xA;&#xA;```bash&#xA;$ go run main.go hello world 56&#xA;&#xA;Total Args = 3&#xA;```&#xA;&#xA;This will simply give us the number of arguments passed from the command line, excluding the first(0th) argument which is the default argument as the execution path of the current program.&#xA;&#xA;### Iterate over all arguments&#xA;&#xA;We can use the simple for loop with range over the `os.Args` or `os.Args[1:]` for iterating over each of the arguments passed from the command line.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;os&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;for n, args := range os.Args {&#xA;&#x9;&#x9;fmt.Println(&#34;Arg&#34;, n, &#34;-&gt;&#34;, args)&#xA;&#x9;}&#xA;&#xA;    /* &#xA;    // For excluding the 0th argument&#xA;&#x9;for n, args := range os.Args[1:] {&#xA;&#x9;&#x9;fmt.Println(&#34;Arg&#34;, n, &#34;-&gt;&#34;, args)&#xA;&#x9;}&#xA;    */&#xA;}&#xA;```&#xA;&#xA;```bash&#xA;$ go run main.go hello world 56&#xA;Arg 0 -&gt; /tmp/go-build2248205073/b001/exe/main&#xA;Arg 1 -&gt; hello&#xA;Arg 2 -&gt; world&#xA;Arg 3 -&gt; 56&#xA;```&#xA;&#xA;We can now iterate over the arguments passed from the command line using a simple for loop. We can further process these arguments per the program&#39;s requirements and need.&#xA;&#xA;## Using flags package&#xA;&#xA;Golang has a package in its standard library called [flags](https://pkg.go.dev/flag) which allows us to parse flags and arguments from the command line with a lot of built-in features. 
For instance, a default value is easily set with a simple function parameter, help text is shown in case of an error in parsing the arguments or flags, there is freedom in choosing a data type for each argument, and so on. For a bare-bones and quick CLI program, the flag package is a great choice.&#xA;&#xA;### Parse Typed Flags&#xA;&#xA;We can use typed flag values using the functions provided in the `flag` package like [IntVar](https://pkg.go.dev/flag#IntVar) for an integer value, [StringVar](https://pkg.go.dev/flag#StringVar) for string, [BoolVar](https://pkg.go.dev/flag#BoolVar) for boolean values and so on. Each function takes 4 parameters and sets the value of the variable from the parsed argument/flag from the command line.&#xA;&#xA;* The first parameter is a reference to the variable to store the value.&#xA;    &#xA;* The second parameter is the name of the argument/flag to be read from the command line.&#xA;    &#xA;* The third parameter is the default value of the variable.&#xA;    &#xA;* The fourth parameter is the help text for that argument/flag.&#xA;    &#xA;&#xA;So, let&#39;s take the previous example of port number parsing from the command line. We can use `flag.IntVar(&amp;port, &#34;p&#34;, 8000, &#34;Provide a port number&#34;)`; this will set the variable port from a command line value like `-p 6789`, or to the default value `8000`. 
The help text will be shown as an error message if the user provides a non-integer or an otherwise invalid value.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;flag&#34;&#xA;&#x9;&#34;fmt&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;var port int&#xA;&#x9;var dir string&#xA;&#x9;var publish bool&#xA;&#xA;&#x9;flag.IntVar(&amp;port, &#34;p&#34;, 8000, &#34;Provide a port number&#34;)&#xA;&#x9;flag.StringVar(&amp;dir, &#34;dir&#34;, &#34;output_dir&#34;, &#34;Directory&#34;)&#xA;&#x9;flag.BoolVar(&amp;publish, &#34;publish&#34;, false, &#34;Publish the article&#34;)&#xA;&#xA;&#x9;flag.Parse()&#xA;&#xA;&#x9;fmt.Println(port)&#xA;&#x9;fmt.Println(dir)&#xA;&#x9;fmt.Println(publish)&#xA;&#xA;&#x9;if publish {&#xA;&#x9;&#x9;fmt.Println(&#34;Publishing article...&#34;)&#xA;&#x9;} else {&#xA;&#x9;&#x9;fmt.Println(&#34;Article saved as Draft!&#34;)&#xA;&#x9;}&#xA;}&#xA;```&#xA;&#xA;```bash&#xA;$ go run flag.go&#xA;&#xA;8000&#xA;output_dir&#xA;false&#xA;Article saved as Draft!&#xA;&#xA;&#xA;$ go run flag.go -p 1234&#xA;&#xA;1234&#xA;output_dir&#xA;false&#xA;Article saved as Draft!&#xA;&#xA;&#xA;$ go run flag.go -p 1234 -dir site_out&#xA;&#xA;1234&#xA;site_out&#xA;false&#xA;Article saved as Draft!&#xA;&#xA;&#xA;$ go run flag.go -publish&#xA;&#xA;8000&#xA;output_dir&#xA;true&#xA;Publishing article...&#xA;```&#xA;&#xA;So, in the above example, we have used a few typed flags: `IntVar` for `port`, `StringVar` for `dir`, and `BoolVar` for `publish`. As explained earlier, the functions take 4 parameters in the same format: the reference to the variable to hold the parsed value, the name of the argument/flag, the default value the variable will hold, and the help text or usage string. The [BoolVar](https://pkg.go.dev/flag#BoolVar) is slightly different but it works logically well: if we pass `-publish` the value will be set as `true`, and `false` otherwise. 
You can set the value explicitly as `-publish=true` (boolean flags require the `=` form), but that is not mandatory; passing just `-publish` is understood as true.&#xA;&#xA;In the above example, we have passed different arguments and displayed the values of these flags in the output. If we don&#39;t specify a value, we can see the default value being used; in the case of the `bool` variable, the default value is taken as `false`. Hence we can see how easily we can use and parse flags from the command line in golang; it&#39;s simple, quick, and also extensible.&#xA;&#xA;For other data types, the flag package has functions like [Float64Var](https://pkg.go.dev/flag#Float64Var) for float64 values, [DurationVar](https://pkg.go.dev/flag#DurationVar) for time duration values and [TextVar](https://pkg.go.dev/flag#TextVar) for other types as inferred by the unmarshalling of the text.&#xA;&#xA;### Set flags from the script&#xA;&#xA;We can set the value of a flag/argument from the script rather than from the command line using the [Set](https://pkg.go.dev/flag#Set) method in the flag package. The `Set` method takes two parameters: the name of the argument and the value to set it to. 
It returns an error if any arises while setting the argument.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;flag&#34;&#xA;&#x9;&#34;fmt&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;var port int&#xA;&#x9;var dir string&#xA;&#xA;&#x9;flag.IntVar(&amp;port, &#34;p&#34;, 8000, &#34;Provide a port number&#34;)&#xA;&#x9;flag.StringVar(&amp;dir, &#34;dir&#34;, &#34;output_dir&#34;, &#34;Directory&#34;)&#xA;&#xA;&#x9;flag.Parse()&#xA;&#xA;&#x9;fmt.Println(port)&#xA;&#x9;fmt.Println(dir)&#xA;&#x9;flag.Set(&#34;dir&#34;, &#34;dumps&#34;)&#xA;&#x9;fmt.Println(dir)&#xA;}&#xA;```&#xA;&#xA;```bash&#xA;$ go run flag.go -p 8080&#xA;8080&#xA;output_dir&#xA;dumps&#xA;```&#xA;&#xA;So, it is clearly visible that the value of an argument can be changed within the script; doing so also changes the value of the associated variable. Remember, we passed the two parameters as strings, so the first parameter is the name of the argument and not necessarily the variable name.&#xA;&#xA;### Use Reference to arguments (pointers)&#xA;&#xA;Also, there are functions like [Int](https://pkg.go.dev/flag#Int), [Float64](https://pkg.go.dev/flag#Float64), [String](https://pkg.go.dev/flag#String), [Bool](https://pkg.go.dev/flag#Bool) in the flag package that return pointers to the flag values, so we don&#39;t have to declare the variables beforehand (we still need to call the `Parse` method). 
We use references to the values stored as the arguments instead of defining the variables as data values; we hold a pointer to that value.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;flag&#34;&#xA;&#x9;&#34;fmt&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;port := flag.Int(&#34;p&#34;, 8000, &#34;Provide a port number&#34;)&#xA;&#x9;dir := flag.String(&#34;dir&#34;, &#34;output_dir&#34;, &#34;Directory&#34;)&#xA;&#x9;publish := flag.Bool(&#34;publish&#34;, false, &#34;Publish the article&#34;)&#xA;&#x9;help := flag.Bool(&#34;help&#34;, false, &#34;Help&#34;)&#xA;&#xA;&#x9;flag.Parse()&#xA;&#xA;&#x9;if *help {&#xA;&#x9;&#x9;flag.PrintDefaults()&#xA;&#x9;} else {&#xA;&#x9;&#x9;fmt.Println(*port)&#xA;&#x9;&#x9;fmt.Println(*dir)&#xA;&#x9;&#x9;flag.Set(&#34;dir&#34;, &#34;dumps&#34;)&#xA;&#x9;&#x9;fmt.Println(*dir)&#xA;&#xA;&#x9;&#x9;fmt.Println(flag.NFlag())&#xA;&#x9;&#x9;fmt.Println(flag.NArg())&#xA;&#xA;&#x9;&#x9;fmt.Println(*publish)&#xA;&#xA;&#x9;&#x9;if *publish {&#xA;&#x9;&#x9;&#x9;fmt.Println(&#34;Publishing article...&#34;)&#xA;&#x9;&#x9;} else {&#xA;&#x9;&#x9;&#x9;fmt.Println(&#34;Article saved as Draft!&#34;)&#xA;&#x9;&#x9;}&#xA;&#x9;&#x9;vals := flag.Args()&#xA;&#x9;&#x9;fmt.Println(vals)&#xA;&#x9;}&#xA;}&#xA;```&#xA;&#xA;```bash&#xA;$ go run flag.go -p 80 -dir node_mods 1234&#xA;80&#xA;node_mods&#xA;dumps&#xA;2&#xA;1&#xA;false&#xA;Article saved as Draft!&#xA;[1234]&#xA;```&#xA;&#xA;As we can see, it performs the same task, but we work with pointers as references to the arguments instead of variables holding the values directly; `flag.Parse()` still needs to be called to populate the values. We have performed the same set of operations on the arguments and flags as in the other examples.&#xA;&#xA;We first use the `Int` method, or whichever method is appropriate (`String` can be used in general use cases); the function returns a reference (memory address) to the actual stored value of the argument/flag. We can access the value from its memory address using the `*` operator. 
We have covered [pointer](https://www.meetgor.com/golang-pointers/) arithmetic in the last part of the series. When we use `*port` we get the value from the memory address and can use it for the required task in the program; we can also store a copy by creating a new variable with the value of that argument.&#xA;&#xA;### Parse Arguments&#xA;&#xA;So, beyond flags with a single value, we have seen the use of the [flag.Args](https://pkg.go.dev/flag#Args) function to get the values of the arguments passed from the command line which don&#39;t have any flag labels attached to them (just raw arguments from the command line). It is just like the `os.Args` variable, but this function is much cleaner and filters out the path-to-the-program argument. So we can directly have the arguments which are actually passed by the user from the command line.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;flag&#34;&#xA;&#x9;&#34;fmt&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;var port int&#xA;&#x9;flag.IntVar(&amp;port, &#34;p&#34;, 8000, &#34;Provide a port number&#34;)&#xA;&#x9;flag.Parse()&#xA;&#x9;fmt.Println(port)&#xA;&#x9;vals := flag.Args()&#xA;&#x9;fmt.Println(vals)&#xA;}&#xA;```&#xA;&#xA;```bash&#xA;$ go run flag.go -p 8123&#xA;8123&#xA;[]&#xA;&#xA;&#xA;$ go run flag.go -p 8123 1234 hello true&#xA;8123&#xA;[1234 hello true]&#xA;&#xA;&#xA;$ go run flag.go -p 8123 1234 hello true -p 9823 world&#xA;8123&#xA;[1234 hello true -p 9823 world]&#xA;```&#xA;&#xA;In the above example, we can see that we have used a few non-flagged arguments from the command line. The return value of the `Args` function is a slice of string; we can then convert the values into appropriate types using type casting and functions. Note that flag parsing stops at the first non-flag argument: anything after it, even something that looks like a flag (such as the second `-p 9823` above), will be considered a simple string thereafter.&#xA;
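&#xA;Since `flag.Args` hands back plain strings, the type conversion is up to us. Here is a minimal sketch of that conversion (the `parseInts` helper is my own illustration, not part of the `flag` package), collecting the integer values from the leftover arguments and skipping the rest:

```go
package main

import (
	"flag"
	"fmt"
	"strconv"
)

// parseInts converts raw string arguments to integers,
// skipping any values that fail to parse.
func parseInts(args []string) []int {
	nums := []int{}
	for _, raw := range args {
		n, err := strconv.Atoi(raw)
		if err != nil {
			// not an integer, ignore it
			continue
		}
		nums = append(nums, n)
	}
	return nums
}

func main() {
	flag.Parse()
	// flag.Args() holds everything left over after the declared flags.
	fmt.Println(parseInts(flag.Args()))
}
```

&#xA;Running it as `go run main.go 1 hello 42` prints `[1 42]`.&#xA;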
### Get Help text with PrintDefaults&#xA;&#xA;We can use the [flag.PrintDefaults](https://pkg.go.dev/flag#PrintDefaults) method for just printing the default values and the help text for the expected arguments from the command line in the script. We can simply use it as a help flag or use it in error messages for guiding the user to the proper arguments and flags.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;flag&#34;&#xA;&#x9;&#34;fmt&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;var port int&#xA;&#x9;var help bool&#xA;&#x9;flag.IntVar(&amp;port, &#34;p&#34;, 8000, &#34;Provide a port number&#34;)&#xA;&#x9;flag.BoolVar(&amp;help, &#34;help&#34;, false, &#34;Help&#34;)&#xA;&#x9;flag.Parse()&#xA;&#x9;if help {&#xA;&#x9;&#x9;flag.PrintDefaults()&#xA;&#x9;} else {&#xA;&#x9;&#x9;fmt.Println(port)&#xA;&#x9;&#x9;vals := flag.Args()&#xA;&#x9;&#x9;fmt.Println(vals)&#xA;&#x9;}&#xA;}&#xA;```&#xA;&#xA;```bash&#xA;$ go run help.go -h&#xA;&#xA;Usage of /tmp/go-build121267600/b001/exe/help:&#xA;  -help&#xA;        Help&#xA;  -p int&#xA;        Provide a port number (default 8000)&#xA;&#xA;&#xA;$ go run help.go&#xA;&#xA;8000&#xA;[]&#xA;```&#xA;&#xA;So, we can see the `PrintDefaults` function will simply print the helper text for the flags expected in the script and the default value of those flags as well. This can be used to provide a good user-friendly interface for a simple terminal application.&#xA;&#xA;### Get the number of arguments&#xA;&#xA;We can use the [NFlag](https://pkg.go.dev/flag#NFlag) method in the `flag` package. 
The function returns an integer count of the flags that have been set from the command line.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;flag&#34;&#xA;&#x9;&#34;fmt&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;var port int&#xA;&#x9;var dir string&#xA;&#xA;&#x9;flag.IntVar(&amp;port, &#34;p&#34;, 8000, &#34;Provide a port number&#34;)&#xA;&#x9;flag.StringVar(&amp;dir, &#34;dir&#34;, &#34;output_dir&#34;, &#34;Directory&#34;)&#xA;&#xA;&#x9;flag.Parse()&#xA;&#xA;&#x9;fmt.Println(port)&#xA;&#x9;fmt.Println(dir)&#xA;&#x9;fmt.Println(flag.NFlag())&#xA;}&#xA;```&#xA;&#xA;```bash&#xA;$ go run flag.go&#xA;8000&#xA;output_dir&#xA;0&#xA;&#xA;&#xA;$ go run flag.go -p 8080 8999 false hello&#xA;8080&#xA;output_dir&#xA;1&#xA;&#xA;&#xA;$ go run flag.go -p 8080 -dir dumps hello 1234&#xA;8080&#xA;dumps&#xA;2&#xA;```&#xA;&#xA;In the second example, only the `port` flag has been set from the command line, so we have just one flag set, hence the function `NFlag` returns `1` as the number of set flags.&#xA;&#xA;Also, the [NArg](https://pkg.go.dev/flag#NArg) method will return an integer count of the arguments that have been provided, leaving out the flag arguments.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;flag&#34;&#xA;&#x9;&#34;fmt&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;var port int&#xA;&#x9;var dir string&#xA;&#xA;&#x9;flag.IntVar(&amp;port, &#34;p&#34;, 8000, &#34;Provide a port number&#34;)&#xA;&#x9;flag.StringVar(&amp;dir, &#34;dir&#34;, &#34;output_dir&#34;, &#34;Directory&#34;)&#xA;&#xA;&#x9;flag.Parse()&#xA;&#xA;&#x9;fmt.Println(port)&#xA;&#x9;fmt.Println(dir)&#xA;&#x9;fmt.Println(flag.NArg())&#xA;}&#xA;```&#xA;&#xA;```bash&#xA;$ go run flag.go 1234&#xA;8000&#xA;output_dir&#xA;1&#xA;&#xA;&#xA;$ go run flag.go -p 8080 -dir dumps hello 1234&#xA;8080&#xA;dumps&#xA;2&#xA;&#xA;&#xA;$ go run flag.go -p 8080 hello 1234 false&#xA;8080&#xA;output_dir&#xA;3&#xA;```&#xA;&#xA;In the first 
example, we don&#39;t have any flag arguments set, we just have one unflagged argument as `1234`, hence the `NArg` function returns `1`. The second example has 2 values that are not flagged, we have set the values of `port` and `dir` as `8080` and `dumps` respectively, so the remaining unflagged values are `hello` and `1234` hence the return value as `2`. The third example has 3 unflagged values as `hello 1234 false`, hence we return `3`.&#xA;&#xA;That&#39;s it from this part. Reference for all the code examples and commands can be found in the [100 days of Golang](https://github.com/mr-destructive/100-days-of-golang/tree/main/scripts/cmd-args/) GitHub repository.&#xA;&#xA;## Conclusion&#xA;&#xA;We have seen how to parse command line arguments in golang with the `os` and the `flag` packages. Though these two are not the only options for building CLI applications, they provide a clean and easy-to-start approach, also they come with the standard library which makes it even better as we don&#39;t have to mingle with third-party libraries. We saw the basics of parsing flags and arguments from a command line program.&#xA;&#xA;Thank you for reading. If you have any queries, questions, or feedback, you can let me know in the discussion below or on my social handles. Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>2022: Year Review</title>
      <link>https://www.meetgor.com/posts/2022-review</link>
      <description>Introduction Well, what a year! If 2021 was a start, 2022 was a year to convert a start into a goal. From collaborating on open-source projects to getting an in</description>
      <pubDate>Sat, 31 Dec 2022 00:00:00 UTC</pubDate>
<content>## Introduction&#xA;&#xA;Well, what a year! If 2021 was a start, 2022 was a year to convert a start into a goal. From collaborating on open-source projects to getting an internship, it was an exciting year! Created a couple dozen projects (not finished, as usual), wrote 40+ articles, did a couple of streams, learned the basics of golang, and spent 6 months as a backend dev intern at [Solarlabs](https://thesolarlabs.com/), a year that was awaited after the hard work of the previous 2 years.&#xA;&#xA;This was a year where I did things that were above my expectations: did an internship alongside college for 6 months, started to stream on twitch (2 streams :).&#xA;&#xA;Some technical things I did and learnt in 2022:&#xA;&#xA;* Markata and blog revamp&#xA;    &#xA;* Learnt Docker&#xA;    &#xA;* Django Rest Framework Project and Django Basics completed 4 articles&#xA;    &#xA;* Podevcast (hashnode/netlify hackathon)&#xA;    &#xA;* Web App from scratch - django collaboration with mark gibney&#xA;    &#xA;* Daily.dev hackathon - bookmark CLI runner up&#xA;    &#xA;* 100 days of golang init, 13 articles in April&#xA;    &#xA;* Django deployment (Heroku, railway, nixpacks) articles&#xA;    &#xA;* Switched from Windows 7 to Ubuntu&#xA;    &#xA;* Bought meetgor.com and started applying for an internship&#xA;    &#xA;* Did 3 rounds of interviews and got in at SolarLabs!&#xA;    &#xA;* Learned Flask, pytest, and DRF in the process of internship assignments at other companies&#xA;    &#xA;* Switched to Lua config in Neovim with LSP&#xA;    &#xA;* Created lots of projects (vacation): dj-notes, aptui, rattlesnake, django mermaid&#xA;    &#xA;* HTMX with django, golang learning with Graphql, and REST API&#xA;    &#xA;* Picking up with neovim plugins for django, markrunner, black snipper&#xA;    &#xA;* deepgram and steampipe contributions at hacktoberfest&#xA;    &#xA;* Djankata, pykyll and some other contributions to golang&#xA;    &#xA;* Revamped APTUI with 
textual 2.0&#xA;    &#xA;* Devjournal with pykyll (bare bones ssg in python)&#xA;    &#xA;* All in all, golang and python deep dives this year with tons of projects, and a huge django learning experience at the internship.&#xA;    &#xA;&#xA;## How did it start?&#xA;&#xA;The start was exciting and full of expectations due to a great start in 2021. I wrote around 80 articles starting almost halfway through the year. I also started to learn Django and shell scripting. With that knowledge, I created dozens of projects like web apps for silly games, portfolios, newsletters, and shell scripts for web scraping and framework setups. I kept on with the pace and wrote articles and made projects.&#xA;&#xA;I got a collaboration on an awesome open-source project [web-app-from-scratch](https://github.com/no-gravity/web_app_from_scratch) for a docker image for setting up Django from scratch. That was a great learning experience as I didn&#39;t know docker, and in February 2022 I was still using Windows 7, bad old days. I researched and got docker up and running on a cloud instance and learned the basics of docker.&#xA;&#xA;## Switch from Windows to Linux&#xA;&#xA;In May came a turning point for good: I had to forcefully switch to Linux! I was under a ransomware attack, due to a security vulnerability of Windows 7, that had messed up all the files on the system. I had to completely wipe the system and install a fresh Ubuntu 22.04 on 14th May. Things changed and I was happier than ever using my system. Thanks to that virus (the xcvf ransomware) and natural instincts.&#xA;&#xA;I loved to code in the terminal; I had been using Termux during the lockdown, which is a Linux-like terminal emulator for Android. I learned the basics of bash and Linux commands with that, and using Linux commands was becoming second nature to me, so using Windows was really frustrating. 
The incident happened at a good time, and I was able to switch from Windows 7 to Ubuntu.&#xA;&#xA;Setting up Vim, tmux, and custom aliases made a great, productive environment for me compared to Windows. It changed the speed and mindset of my programming world.&#xA;&#xA;## Started learning golang&#xA;&#xA;I started to learn golang at the end of March and created a [100-day golang series](https://www.meetgor.com/series/100-days-of-golang) on April 1st. They were not consecutive days, you know, so it still hovers around 25 days. That looks a lot less than I expected. Still, I feel really confident in writing golang, so much so that I decided to do the advent of code in golang this year.&#xA;&#xA;I completed around 13 articles in the month of April and then, due to end-semester things getting in the way, was not able to keep a consistent streak. Still, each week, I tried to write an article. And I have just 25 articles till now. It does not mean I did not learn golang; without writing any articles, I built a couple of golang projects for creating bare-bones REST APIs and also a bit of GraphQL with DB connections. I haven&#39;t had a good amount of time to write detailed articles about them in a while, so maybe January is the time. I would like to complete 100 articles in the first quarter of 2023, that&#39;s the plan, or at least cover the basics of web development with golang (backend of course).&#xA;&#xA;It was a fantastic learning experience, creating examples and snippets and playing with some low-level stuff in the language.&#xA;&#xA;## Applied for internships&#xA;&#xA;At the end of May, that is when my exams were ending, I started to apply for internships targeting backend (Django stack) roles for maybe 3-6 months. And it was a sad time in the first few weeks. 
I got a few offers into talks, but they ended in rejection as I was not able to work full-time :( That hurts after doing some assignments and an interview.&#xA;&#xA;## TheSolarLabs Internship&#xA;&#xA;It all went on for 3 weeks, until the morning of 27 June, when I read my application to [SolarLabs](https://thesolarlabs.com/) as &#34;Rejected&#34;. Anyone would have lost hope there, and so did I. And to anyone&#39;s surprise, or credit for their blunder, I got a call from them for scheduling an interview! I am not making this up, the portal still says &#34;rejected&#34; and in the chats, it asks for a resume! Some nervous times.&#xA;&#xA;After 3 days of interviews, I finally got the offer letter of confirmation. For 6 months and a good bit of stipend. It was my preferred Django stack, with some unknown grounds of the Django rest framework. I realized rather quickly, in the first interview itself, that I needed to get some DRF stuff up! There was a lot of stuff to learn that I didn&#39;t know existed; I was baffled by how great DRF was. That&#39;s the first bit of learning from the interview itself: it showed how much there is to learn.&#xA;&#xA;Soon after the onboarding process and getting a bit familiar with the design product, I was presented with the code repository and time to get familiar and set up the system (python3.7, postman, editor, etc). The first task came soon, in the following week. It looked doable, and indeed just a matter of time to get familiar with the problem statement. It was a task for fetching some data from a weather API and storing it in a specific format.&#xA;&#xA;As the days passed, I was introduced to the tech scrum, like a daily update for the tasks done yesterday and to do today. It was really new to me, I had never heard of such systems, but realized it might be a development process. Soon college started and it was a mess thereafter. I had to balance college attendance and attending scrum, with clashing times sometimes. 
But to date, I have gotten by with the semesters.&#xA;&#xA;### Key takeaways from the internship&#xA;&#xA;I don&#39;t want to get into all the details of the internship tasks but here&#39;s a summary of what I learned in a gist.&#xA;&#xA;* Django Rest Framework viewsets, routers, and actions&#xA;    &#xA;* Django Signals, Admin section, API designs in a better way&#xA;    &#xA;* Azure (some specific things like function deployment, config, sas tokens, pipelines, python SDK, etc.)&#xA;    &#xA;* SQL (got some good practice on joins and nested queries)&#xA;    &#xA;* ERP technology (erpnext and python setup)&#xA;    &#xA;* JWT tokens set up in Django&#xA;    &#xA;* Process for designing an API from scratch for a given feature&#xA;    &#xA;* Cashfree and Stripe integration in Django&#xA;    &#xA;* Dockerfile and buildkit specific features&#xA;    &#xA;&#xA;## Learning Constantly&#xA;&#xA;Currently hoping to continue the internship, as I have thoroughly enjoyed working here and it has been a great learning experience. But one thing was really important this year again: to not stop learning and building. Don&#39;t mind if the idea is small or useless; just building the project and overcoming the entry barrier for learning the technology has been the motto for everything in my developer journey.&#xA;&#xA;&gt; When your mind feels lost, letting it free and driving to new places strikes an inspiration for another takeoff; don&#39;t run, enjoy the walk, it&#39;s a marathon, not a sprint.&#xA;&#xA;Building projects which are useless or don&#39;t make sense to anybody might be a different thing, but what they mean to yourself matters the most. There might be instances where you might be failing, not having any ideas, burnout, imposter syndrome, and some other fuzzy words in tech. 
These situations are meant to improve you in one way or the other. If you run out of ideas, just take a break, look at your personal state and hobbies besides programming, and get some inspiration from other fields to apply in the programming world; those were some lessons I learned in some rough times during the year.&#xA;&#xA;There will be, or rather should be, times when you feel down or lose inspiration; these times drive the mind in different directions, as programming can be a bit draining and one can quickly lose motivation if there are no results straight away.&#xA;&#xA;&gt; Hard work takes time to turn into results; patience and actions are the key.&#xA;&#xA;I had a hobby of writing; I had written a novel (maybe not a novel, it&#39;s not that big) when I was in 8th grade, and had a ton of ideas for stories but was stuck with the hypothetical thing called the perfection of time, plot, and ideas. I watched a youtube video about writing and it just fascinated me to write again, and it began. I started to turn my novel, which is handwritten, into an ebook. I was consistent for a few days but things got wild and I moved away from it. The new year will be the perfect free time to go for it and at least have one fiction novel to my name.&#xA;&#xA;I am also reading fantasy novels; I have read Elantris and am in the middle of reading Mistborn. I have been enjoying reading and writing as a programmer, but want to get into fantasy as well. It is so exciting and fascinating as a human to fly with creativity. I am grateful to have eyes and a healthy mind to read and see the beauty of the world.&#xA;&#xA;## Hits and Misses in 2022&#xA;&#xA;There were a few pullbacks in 2022 for me; if there are good times, one must be prepared for bad ones, it seemed. The worst thing about bad times is that you sometimes don&#39;t realize that you are in one. 
I had almost the same count of articles as the previous year, comparing a whole year to half a year (I started blogging in June 2021).&#xA;&#xA;* Unable to write out words&#xA;    &#xA;* Feeling exhausted and anxious sometimes&#xA;    &#xA;* Not able to communicate properly with mentors&#xA;    &#xA;* Sometimes, college and working on internship tasks clash badly&#xA;    &#xA;* Addicted to solving a problem and can&#39;t focus on other things (maybe a good thing sometimes)&#xA;    &#xA;&#xA;## Tech list for 2023&#xA;&#xA;* Golang deep dive with projects and open-source contributions&#xA;    &#xA;* Machine Learning (Tensorflow python and Keras)&#xA;    &#xA;* Kafka&#xA;    &#xA;* Kubernetes and Microservices&#xA;    &#xA;* AoC with Rust&#xA;    &#xA;&#xA;So, these are some of the technologies I will try to learn in 2023. I have a basic knowledge of golang, but I would like to get into the specific things golang is used for, like concurrency and channels. I have been wanting to learn machine learning with projects, and this year I am serious about it; I have a couple of ideas for web apps with ML and will try to get them up and rolling.&#xA;&#xA;Some specific technologies I want to learn include Kafka for data pipelines and streams, and some deployment tools like Kubernetes, but before that, I will be learning a bit more about docker networking.&#xA;&#xA;This year I did AoC with golang, aiming to complete all 25 days, but did around 12. I will still be trying to solve the rest over the month. Next year, I will be trying AoC in Rust. I love low-level programming languages; go is great but rust is extreme.&#xA;&#xA;## Conclusion&#xA;&#xA;So, that&#39;s a wrap for 2022. A great year for learning a lot of stuff. Hopefully, the next year will be a better one. More collaboration and more opportunities. I am grateful to so many people who came into my life and have played important roles for me to develop into a backend developer and even a human. 
My mentors and colleagues, real-life and Twitter friends, and GitHub collaborators. I would not have reached here had I not met any one of them. Thank you. May you too have a great year and a life ahead full of opportunities and success.&#xA;&#xA;Happy New Year, Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>Golang: File Write</title>
      <link>https://www.meetgor.com/posts/golang-file-write</link>
      <description>Introduction In the 24th post of the series, we will be taking a look at how we can perform write operations to a file using golang. We will be using the packag</description>
      <pubDate>Sun, 18 Dec 2022 00:00:00 UTC</pubDate>
<content>## Introduction&#xA;&#xA;In the 24th post of the series, we will be taking a look at how we can perform write operations on a file using golang. We will be using the `os` package for most operations, along with `bufio` for text manipulation. We will be performing write operations like appending, deleting, and replacing content in a file using golang. We will be heavily leveraging standard library packages like `os`, `bufio`, `bytes` and `fmt`. We will also be looking into overwriting and writing formatted strings to a file.&#xA;&#xA;## Write to a File&#xA;&#xA;The first part of this section is the basic write operation to a file; we assume we are writing to a fresh file and overwriting the contents of the existing file. The next section will cover appending content to the file, and so on. In this example, we will see how to perform basic write operations to write a string or a slice of strings to a file.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;log&#34;&#xA;&#x9;&#34;os&#34;&#xA;)&#xA;&#xA;func HandleError(err error) {&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;log.Fatal(err)&#xA;&#x9;}&#xA;}&#xA;&#xA;func main() {&#xA;&#x9;str := &#34;Hi, I am a gopher!\n&#34;&#xA;&#x9;f, err := os.OpenFile(&#34;abc.txt&#34;, os.O_WRONLY, 0660)&#xA;&#x9;// f, err := os.Create(&#34;abc.txt&#34;)&#xA;&#x9;HandleError(err)&#xA;&#x9;defer f.Close()&#xA;&#x9;_, err = f.Write([]byte(str))&#xA;&#x9;HandleError(err)&#xA;}&#xA;```&#xA;&#xA;```bash&#xA;$ cat abc.txt&#xA;&#xA;$ go run main.go&#xA;&#xA;$ cat abc.txt&#xA;Hi, I am a gopher!&#xA;```&#xA;&#xA;So, we have used a simple golang script to write to a file that exists/has already been created. If you don&#39;t want errors when writing to a file that does not exist, use the [Create](https://pkg.go.dev/os#Create) function instead, which is similar to `OpenFile` but creates the file if it doesn&#39;t exist. 
We use the [Write](https://pkg.go.dev/os#File.Write) method to write the contents to the file; it takes a parameter as a slice of bytes, so we convert the string `str` into `[]byte` using the `[]byte(str)` syntax. Thereby we write the contents of the string into the file. We use the `defer` keyword to close the file at the end of the script, that is, the end of the main function scope.&#xA;&#xA;### Write a slice of strings to a file&#xA;&#xA;We can even write a slice of strings to a file using a for loop, appending a newline character to each string.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;log&#34;&#xA;&#x9;&#34;os&#34;&#xA;)&#xA;&#xA;func HandleError(err error) {&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;log.Fatal(err)&#xA;&#x9;}&#xA;}&#xA;&#xA;func main() {&#xA;&#x9;f, err := os.Create(&#34;abc.txt&#34;)&#xA;&#x9;// f, err := os.OpenFile(&#34;abc.txt&#34;, os.O_WRONLY, 0660)&#xA;&#x9;HandleError(err)&#xA;&#x9;defer f.Close()&#xA;&#x9;langs := []string{&#34;golang&#34;, &#34;python&#34;, &#34;rust&#34;, &#34;javascript&#34;, &#34;ruby&#34;}&#xA;&#x9;for _, lang := range langs {&#xA;&#x9;&#x9;_, err := f.WriteString(lang + &#34;\n&#34;)&#xA;&#x9;&#x9;HandleError(err)&#xA;&#x9;}&#xA;}&#xA;```&#xA;&#xA;```plaintext&#xA;$ cat abc.txt&#xA;&#xA;$ go run main.go&#xA;&#xA;$ cat abc.txt&#xA;golang&#xA;python&#xA;rust&#xA;javascript&#xA;ruby&#xA;```&#xA;&#xA;We have used the [WriteString](https://pkg.go.dev/os#File.WriteString) method, which takes in a string as a parameter instead of a slice of bytes, so we don&#39;t have to convert into a slice of bytes. As we can see, we have written the string slice into a file.&#xA;&#xA;### Overwrite&#xA;&#xA;The minimal code to write to a file is the [WriteFile](https://pkg.go.dev/os#WriteFile) function in the [os](https://pkg.go.dev/os) package; it overwrites the content of the file, given the provided slice of bytes, the name of the file, and the necessary permission to write. 
The function additionally creates the file if it does not exist, which is one less source of error. It still returns an error object; the error might arise due to insufficient permissions to write to the file, encoding issues, etc.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;log&#34;&#xA;&#x9;&#34;os&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;data := []byte{115, 111, 109, 101, 65}&#xA;&#x9;err := os.WriteFile(&#34;test.txt&#34;, data, 0660)&#xA;&#x9;log.Println(err)&#xA;&#xA;&#x9;s := &#34;Hello&#34;&#xA;&#x9;err = os.WriteFile(&#34;test.txt&#34;, []byte(s), 0660)&#xA;&#x9;log.Println(err)&#xA;}&#xA;```&#xA;&#xA;```plaintext&#xA;$ go run main.go&#xA;2022/12/17 19:24:13 &lt;nil&gt;&#xA;2022/12/17 19:24:13 &lt;nil&gt;&#xA;&#xA;$ cat test.txt&#xA;Hello&#xA;```&#xA;&#xA;So, we have used the `WriteFile` function two times in the script. It first takes in the slice of bytes defined as `data`, which corresponds to `115 -&gt; s`, `111 -&gt; o`, `109 -&gt; m`, `101 -&gt; e`, `65 -&gt; A`, ASCII codes mapped to characters. The slice of bytes can thus be read as the string `someA`, the literal value of the underlying slice of bytes. So, we take that slice of bytes and pass it as the second parameter of the `WriteFile` function. The first parameter is the string path of the file we want to write the contents to, and the third parameter is the file permission. We have set it as `0660`, indicating read(4) + write(2) for the user and the group, and no permission for other users. The function will return an error if any; otherwise it simply overwrites the data in the file.&#xA;&#xA;In this case, we have called the `WriteFile` function with the string `s` converted to a slice of bytes at the end of the script, so we see the file has contents as `Hello` instead of `someA`. 
If we reverse the order, we don&#39;t see the `Hello` string in the file.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;log&#34;&#xA;&#x9;&#34;os&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;s := &#34;Hello&#34;&#xA;&#x9;err := os.WriteFile(&#34;test.txt&#34;, []byte(s), 0660)&#xA;&#x9;log.Println(err)&#xA;&#xA;&#x9;data := []byte{115, 111, 109, 101, 65}&#xA;&#x9;err = os.WriteFile(&#34;test.txt&#34;, data, 0660)&#xA;&#x9;log.Println(err)&#xA;}&#xA;```&#xA;&#xA;```plaintext&#xA;$ go run main.go&#xA;2022/12/17 19:24:13 &lt;nil&gt;&#xA;2022/12/17 19:24:13 &lt;nil&gt;&#xA;&#xA;$ cat test.txt&#xA;someA&#xA;```&#xA;&#xA;As we can see, the `Hello` has been overwritten by `someA`.&#xA;&#xA;### Write formatted string&#xA;&#xA;We can even use `fmt` to write formatted strings to a file. Just as we can take input with `Scanf`, we can use [Fprint](https://pkg.go.dev/fmt#Fprint) and similar functions like [Fprintf](https://pkg.go.dev/fmt#Fprintf) and [Fprintln](https://pkg.go.dev/fmt#Fprintln) to print/add contents to the file.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;log&#34;&#xA;&#x9;&#34;os&#34;&#xA;)&#xA;&#xA;func HandleErr(err error) {&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;log.Fatal(err)&#xA;&#x9;}&#xA;}&#xA;&#xA;func main() {&#xA;&#x9;f, err := os.Create(&#34;temp.txt&#34;)&#xA;&#x9;HandleErr(err)&#xA;&#x9;defer f.Close()&#xA;&#x9;name, lang, exp := &#34;John&#34;, &#34;go&#34;, 2&#xA;&#x9;_, err = fmt.Fprint(f, &#34;Hi, I am &#34;, name, &#34;\n&#34;)&#xA;&#x9;HandleErr(err)&#xA;&#x9;_, err = fmt.Fprintf(f, &#34;Language of choice: %s.\n&#34;, lang)&#xA;&#x9;HandleErr(err)&#xA;&#x9;_, err = fmt.Fprintln(f, &#34;Having&#34;, exp, &#34;years of experience.&#34;)&#xA;&#x9;HandleErr(err)&#xA;}&#xA;```&#xA;&#xA;```plaintext&#xA;$ cat temp.txt&#xA;cat: temp.txt: No such file or directory&#xA;&#xA;$ go run format.go&#xA;&#xA;$ cat temp.txt&#xA;Hi, I am John&#xA;Language of choice: go.&#xA;Having 2 
years of experience.&#xA;```&#xA;&#xA;So, we can see that we have used all three functions, each with its own use case: `Fprint` for simple strings, `Fprintf` for formatting a block of a string with multiple placeholders, and `Fprintln`, which works like `Fprint` but adds a new line itself, so we don&#39;t need to specify it explicitly.&#xA;&#xA;### Append&#xA;&#xA;If we want to append text to a file, we can use the [OpenFile](https://pkg.go.dev/os#OpenFile) function and provide a few parameters to append the contents instead of overwriting.&#xA;&#xA;Here, we have two steps: open the file and then write the contents to it. While opening the file, we provide a few options as parameters to fine-tune the system call, like opening only for read, write, or append modes. These options are defined as constant int values in the [os package](https://pkg.go.dev/os#pkg-constants).&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;log&#34;&#xA;&#x9;&#34;os&#34;&#xA;)&#xA;&#xA;func HandleError(err error) {&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;log.Fatal(err)&#xA;&#x9;}&#xA;}&#xA;&#xA;func main() {&#xA;&#x9;s := &#34;Hello&#34;&#xA;&#x9;err := os.WriteFile(&#34;test.txt&#34;, []byte(s), 0660)&#xA;&#x9;HandleError(err)&#xA;&#xA;&#x9;s = &#34;World&#34;&#xA;&#x9;f, err := os.OpenFile(&#34;test.txt&#34;, os.O_APPEND|os.O_WRONLY, 0660)&#xA;&#x9;HandleError(err)&#xA;&#x9;defer f.Close()&#xA;&#x9;_, err = f.WriteString(s)&#xA;&#x9;HandleError(err)&#xA;}&#xA;```&#xA;&#xA;```plaintext&#xA;$ go run main.go&#xA;&#xA;$ cat test.txt&#xA;HelloWorld&#xA;```&#xA;&#xA;So, from the above example, we are able to append text to a file. We first write the `Hello` string into the file using the `WriteFile` function, which overwrites the previous contents of the file. We then use the [OpenFile](https://pkg.go.dev/os#OpenFile) function to open the file provided in the first parameter as a string path. 
The second parameter holds the options for performing operations on the opened file, and we should always `defer` closing the file or other resource-locking operations.&#xA;&#xA;We have specified the `os.O_WRONLY` and the `os.O_APPEND` options, indicating we want to write to the file while it is open, and specifically append to it. So this is fine-tuning the opened file operation. We can still use the `ReadFile` or `WriteFile` functions for simple read and write operations respectively.&#xA;&#xA;We use the [WriteString](https://pkg.go.dev/os#File.WriteString) method, but we could even use the [Write](https://pkg.go.dev/os#File.Write) method to write a slice of bytes instead. This is just for exploring the different options in the file types of the os package.&#xA;&#xA;### Append at a specific line&#xA;&#xA;We can also add content at a specific line or portion of the file. There are no direct functions in golang to do the same; we have to do some manual fine-tuning of file operations to append a particular text at a specific line.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;bufio&#34;&#xA;&#x9;&#34;log&#34;&#xA;&#x9;&#34;os&#34;&#xA;)&#xA;&#xA;func HandleError(err error) {&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;log.Fatal(err)&#xA;&#x9;}&#xA;}&#xA;&#xA;func main() {&#xA;&#x9;/* test.txt&#xA;&#x9;Hi&#xA;&#x9;Hello&#xA;&#x9;World&#xA;&#x9;Gophers&#xA;&#x9;*/&#xA;&#x9;f, err := os.Open(&#34;test.txt&#34;)&#xA;&#x9;HandleError(err)&#xA;&#x9;m := bufio.NewScanner(f)&#xA;&#x9;// line after which the text is inserted&#xA;&#x9;line_till := 2&#xA;&#x9;var lines_till string&#xA;&#x9;var lines_after string&#xA;&#x9;i := 0&#xA;&#x9;for m.Scan() {&#xA;&#x9;&#x9;line := m.Text()&#xA;&#x9;&#x9;if i &lt; line_till {&#xA;&#x9;&#x9;&#x9;if i &gt; 0 {&#xA;&#x9;&#x9;&#x9;&#x9;lines_till += &#34;\n&#34;&#xA;&#x9;&#x9;&#x9;}&#xA;&#x9;&#x9;&#x9;lines_till += line&#xA;&#x9;&#x9;} else {&#xA;&#x9;&#x9;&#x9;lines_after += &#34;\n&#34; + line&#xA;&#x9;&#x9;}&#xA;&#x9;&#x9;i += 1&#xA;&#x9;}&#xA;&#x9;HandleError(m.Err())&#xA;&#x9;f.Close()&#xA;&#x9;insert_text := lines_till + &#34;\nInserted content&#34;&#xA;&#x9;err = os.WriteFile(&#34;test.txt&#34;, []byte(insert_text+lines_after), 0660)&#xA;&#x9;HandleError(err)&#xA;&#x9;/* test.txt&#xA;&#x9;Hi&#xA;&#x9;Hello&#xA;&#x9;Inserted content&#xA;&#x9;World&#xA;&#x9;Gophers&#xA;&#x9;*/&#xA;}&#xA;```&#xA;&#xA;```plaintext&#xA;$ cat test.txt &#xA;Hi&#xA;Hello&#xA;World&#xA;Gophers&#xA;&#xA;$ go run append.go &#xA;&#xA;$ cat test.txt &#xA;Hi&#xA;Hello&#xA;Inserted content&#xA;World&#xA;Gophers&#xA;```&#xA;&#xA;We have inserted `Inserted content` after the second line because the `line_till` variable is set to `2`.&#xA;&#xA;So, in the above example, we first open the file with the `Open` method and scan it line by line. We create a `Scanner` object with the `NewScanner` method, the `Scan()` method then advances through the file contents line by line, and the `Text` method gives the content of each line as a string. The `line_till` variable holds the line number after which we want to insert the text. We collect the lines up to that line number into a single string `lines_till`, separated by newline characters, and the remaining lines into the string `lines_after`. The string `insert_text` is then created by appending the actual content to be inserted after `lines_till`. Finally, we close the file and overwrite it using `WriteFile`, joining `insert_text` and `lines_after`.&#xA;&#xA;In short, we basically overwrite the file by creating two strings: one which has the lines before the line number plus the text to be inserted, and the second which has all the lines after the line number.&#xA;&#xA;## Replace text in a file&#xA;&#xA;Using the [bytes.Replace](https://pkg.go.dev/bytes#Replace) function, we can first read all the bytes, replace the old text with the new, and store the result as a slice of bytes. We then write this slice of bytes to the file again: so we first read the contents into a slice of bytes, replace the content of the bytes, and then overwrite the file contents with the resulting slice of bytes. 
It&#39;s quite simple.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;bytes&#34;&#xA;&#x9;&#34;log&#34;&#xA;&#x9;&#34;os&#34;&#xA;)&#xA;&#xA;func HandleError(err error) {&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;log.Fatal(err)&#xA;&#x9;}&#xA;}&#xA;&#xA;func main() {&#xA;&#x9;filename := &#34;test.txt&#34;&#xA;&#x9;file, err := os.ReadFile(filename)&#xA;&#x9;HandleError(err)&#xA;&#x9;old_text := &#34;Hello\nWorld&#34;&#xA;&#x9;new_text := &#34;Bye&#34;&#xA;&#x9;new_content := bytes.Replace(file, []byte(old_text), []byte(new_text), -1)&#xA;&#x9;err = os.WriteFile(filename, new_content, 0660)&#xA;&#x9;HandleError(err)&#xA;}&#xA;```&#xA;&#xA;```plaintext&#xA;$ cat test.txt&#xA;Hi&#xA;Hello&#xA;World&#xA;Gophers&#xA;&#xA;$ go run main.go&#xA;&#xA;&#xA;$ cat test.txt&#xA;Hi&#xA;Bye&#xA;Gophers&#xA;```&#xA;&#xA;As we can see, we have replaced `Hello\nWorld` with `Bye`. The [Replace](https://pkg.go.dev/bytes#Replace) function in the bytes package takes these parameters: the slice of bytes that holds the actual contents of the file, the old text to be replaced as a slice of bytes, the new text to replace it with, also as a slice of bytes, and finally the number of replacements to be made. Here `-1` indicates there is no limit on how many replacements can be done; it can be `1`, `2`, etc. for replacing the first n occurrences of the old text, depending on how many times you want to replace the content in the file.&#xA;&#xA;## Delete Text from a File&#xA;&#xA;We can use the [os.Truncate](https://pkg.go.dev/os#Truncate) function to delete the contents of the file. The `Truncate` function takes the file path string and the size to truncate or set the file to. 
If we set the second parameter to `0`, the file size will be zero and all the contents will be deleted or removed.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;log&#34;&#xA;&#x9;&#34;os&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;/* test.txt&#xA;&#x9;Hi&#xA;&#x9;Hello&#xA;&#x9;World&#xA;&#x9;Gophers&#xA;&#x9;*/&#xA;&#x9;err := os.Truncate(&#34;test.txt&#34;, 0)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;log.Fatal(err)&#xA;&#x9;}&#xA;&#x9;/* test.txt is empty&#xA;&#x9;*/&#xA;}&#xA;```&#xA;&#xA;```plaintext&#xA;$ cat test.txt&#xA;Hi&#xA;Hello&#xA;World&#xA;Gophers&#xA;&#xA;$ go run delete.go&#xA;&#xA;$ cat test.txt&#xA;```&#xA;&#xA;As we can see, the contents of the file are emptied when we set the second parameter (size) of the `Truncate` function to 0.&#xA;&#xA;We can also set the value of the size as the number of bytes to keep, so instead of `0` we can set it to a positive integer `n` to keep only the first `n` bytes in the file.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;log&#34;&#xA;&#x9;&#34;os&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;/* test.txt&#xA;&#x9;Hi&#xA;&#x9;Hello&#xA;&#x9;World&#xA;&#x9;Gophers&#xA;&#x9;*/&#xA;&#x9;err := os.Truncate(&#34;test.txt&#34;, 6)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;log.Fatal(err)&#xA;&#x9;}&#xA;&#x9;/* test.txt&#xA;&#x9;Hi&#xA;&#x9;Hel&#xA;&#x9;*/&#xA;}&#xA;```&#xA;&#xA;```plaintext&#xA;$ cat test.txt&#xA;Hi&#xA;Hello&#xA;World&#xA;Gophers&#xA;&#xA;$ go run delete.go&#xA;&#xA;$ cat test.txt&#xA;Hi&#xA;Hel&#xA;```&#xA;&#xA;So, in the above example, if we set the size parameter of the `Truncate` function to `6`, the file is cut down to 6 bytes. We only see `Hi` and `Hel`, since the newline is a single byte (2 + 1 + 3 = 6 bytes); the rest of the content is deleted. This is also how we previously deleted all the bytes from the file by setting the size to `0`.&#xA;&#xA;That&#39;s it from this part. 
Reference for all the code examples and commands can be found in the [100 days of Golang](https://github.com/mr-destructive/100-days-of-golang/tree/main/scripts/files/write/) GitHub repository.&#xA;&#xA;## Conclusion&#xA;&#xA;So, from this section of the series, we were able to perform write operations on a file using golang. We used packages from the standard library and performed operations like write, append, overwrite, delete, and replace on a simple text file, though it could have been any file format.&#xA;&#xA;Thank you for reading. If you have any queries, questions, or feedback, you can let me know in the discussion below or on my social handles. Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>Golang: JSON YAML TOML (config) File Reading.</title>
      <link>https://www.meetgor.com/posts/golang-config-file-read</link>
      <description>Reading specific file types (JSON, YAML, TOML) In the previous post, we have seen how to read files in golang, in this extended post of that part, we will look</description>
      <pubDate>Tue, 01 Nov 2022 00:00:00 UTC</pubDate>
      <content>## Reading specific file types (JSON, YAML, TOML)&#xA;&#xA;In the previous post, we have seen how to read files in golang, in this extended post of that part, we will look into reading some specific files used for configuration and storing data like JSON, YAML, TOML, CSV, etc.&#xA;&#xA;We will see how to read files and get individual pieces in the files. We&#39;ll use packages like `os`, `ioutil` and `encoding` to perform reading operations on file and file objects.&#xA;&#xA;### Reading a JSON File&#xA;&#xA;Golang has built-in support for reading JSON files, but still, we can and need to have low-level controls on how to parse and extract content from the file.&#xA;&#xA;Let&#39;s say we have a `json` file named `blog.json`, we can use the [encoding/json](https://pkg.go.dev/encoding/json) package to convert the JSON data into a GO object (an object that is native and understandable to go). The [Unmarshal](https://pkg.go.dev/encoding/json#Unmarshal) function is used to convert the slice of bytes from the file, into a map object.&#xA;&#xA;&#xA;```json&#xA;{&#xA;    &#34;title&#34;: &#34;Golang Blog Series&#34;,&#xA;    &#34;date&#34;: &#34;22nd October 2022&#34;,&#xA;    &#34;tags&#34;: [&#34;go&#34;, &#34;files&#34;],&#xA;    &#34;words&#34;: 1500,&#xA;    &#34;published&#34;: true&#xA;}&#xA;```&#xA;&#xA;The above is a simple JSON file, this file has a few types of key-value pairs like string, list, integer, and boolean. 
But we can also have nested objects and a list of those nested objects.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;encoding/json&#34;&#xA;&#x9;&#34;log&#34;&#xA;&#x9;&#34;os&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#xA;&#x9;f, err := os.ReadFile(&#34;blog.json&#34;)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;log.Fatal(err)&#xA;&#x9;}&#xA;&#x9;var data map[string]interface{}&#xA;&#x9;err = json.Unmarshal(f, &amp;data)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;log.Fatal(err)&#xA;&#x9;}&#xA;&#xA;&#x9;log.Println(data)&#xA;&#x9;for k, v := range data {&#xA;&#x9;&#x9;log.Println(k, &#34;:&#34;, v)&#xA;&#x9;}&#xA;&#xA;}&#xA;```&#xA;&#xA;**I have removed the timestamps from the logs below so the output is easier to read. We could use `fmt` to print these simple things, but `log` keeps the snippets consistent with the rest of the series.**&#xA;&#xA;```&#xA;$ go run json.go&#xA;&#xA;map[date:22nd October 2022 published:true tags:[go files] title:Golang Blog Series words:1500]&#xA;published : true&#xA;title : Golang Blog Series&#xA;date : 22nd October 2022&#xA;tags : [go files]&#xA;words : 1500&#xA;```&#xA;The file is read using the [os.ReadFile](https://pkg.go.dev/os#ReadFile) method, which takes in a string as the path to the file and returns a slice of bytes, or an error if there was an issue in reading the file. The slice of bytes is then passed as the first argument to the `Unmarshal` function in the `encoding/json` package. The second parameter is the output reference where the parsed JSON will be stored. The function does not return the parsed content; instead it returns an error if anything went wrong while parsing the JSON content.&#xA;&#xA;As we can see, we have got a map of `string` to `interface{}`. The interface is used because the value of a key can be anything; there is no fixed value type like a `string`, `int`, `bool`, or a nested `map` or `slice`. Hence we have mapped the JSON object as a map of `string` to `interface{}`. The concrete type of each value is identified by the interface it has attached to it. 
Let&#39;s take a look at the type of each value in the map.&#xA;&#xA;```&#xA;published : true&#xA;bool&#xA;&#xA;title : Golang Blog Series&#xA;string&#xA;&#xA;date : 22nd October 2022&#xA;string&#xA;&#xA;tags : [go files]&#xA;[]interface {}&#xA;&#xA;words : 1500&#xA;float64&#xA;```&#xA;&#xA;Here, we can see the types are identified correctly: `bool` for the true/false value, `string` for the text values, and the `tags` field has a list interface attached to it. JSON numbers are decoded as `float64` by default, which is why the `1500` value comes out as `float64`. &#xA;&#xA;&#xA;### Reading a YAML File&#xA;&#xA;Though there is no standard package for parsing/unmarshaling `YAML` files in golang, it&#39;s quite easy to use a third-party package to read YAML files.&#xA;&#xA;The package [gopkg.in/yaml.v3](https://pkg.go.dev/gopkg.in/yaml.v3) is used for encoding and decoding YAML files. We&#39;ll just be using it for decoding a YAML file by reading it and converting the contents into native Go objects like maps, lists, strings, etc.&#xA;&#xA;The below steps can be used for setting up the project and installing the YAML package locally.&#xA;&#xA;```&#xA;go mod init &lt;your_project_package_name&gt;&#xA;go get gopkg.in/yaml.v3&#xA;```&#xA;&#xA;This should create two files, namely `go.mod` and `go.sum`, with the dependency of the `gopkg.in/yaml.v3` package.&#xA;&#xA;```yml&#xA;title: &#34;Golang Blog Series&#34;&#xA;date: &#34;22nd October 2022&#34;&#xA;tags: [&#34;go&#34;, &#34;files&#34;]&#xA;published: false&#xA;words: 1500&#xA;```&#xA;&#xA;The above file is a simple YAML config; we&#39;ll follow a similar kind of example for the dummy files used here.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;log&#34;&#xA;&#x9;&#34;os&#34;&#xA;&#xA;&#x9;yaml &#34;gopkg.in/yaml.v3&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#xA;&#x9;f, err := os.ReadFile(&#34;blog.yaml&#34;)&#xA;&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;log.Fatal(err)&#xA;&#x9;}&#xA;&#xA;&#x9;var data map[string]interface{}&#xA;&#xA;&#x9;err = yaml.Unmarshal(f, &amp;data)&#xA;&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;log.Fatal(err)&#xA;&#x9;}&#xA;&#xA;&#x9;log.Println(data)&#xA;&#x9;for k, v := range data {&#xA;&#x9;&#x9;log.Println(k, &#34;:&#34;, v)&#xA;&#x9;}&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run yaml.go&#xA;&#xA;map[date:22nd October 2022 published:false tags:[go files] title:Golang Blog Series words:1500]&#xA;published : false&#xA;words : 1500&#xA;title : Golang Blog Series&#xA;date : 22nd October 2022&#xA;tags : [go files]&#xA;```&#xA;&#xA;The above code and output demonstrate the usage of the `yaml.v3` package for reading a YAML file.&#xA;&#xA;Firstly, we read the file into a slice of bytes with the [os.ReadFile()](https://pkg.go.dev/os#ReadFile) method. The method will return a `[]byte` (slice of bytes) or an error. If there is an error, we simply log it and exit the program; else we can use the [yaml.Unmarshal](https://pkg.go.dev/gopkg.in/yaml.v3#Unmarshal) method to convert the slice of bytes into a map or a pre-defined struct. In this example, we keep things simple by storing the file content as a `map[string]interface{}`, i.e. a map of `string` to `interface{}`. With this declaration the key can only be a string; it can&#39;t have unrestricted data types like the value can. Though if you want to be unrestrictive, you can use a `map[interface{}]interface{}` to let the key take any shape you like.&#xA;&#xA;So, we have created a variable called `data` as a map of `string` to `interface{}`: the key is a string and the value can be any type, depending on what is parsed from the file. The `Unmarshal` function takes in two parameters, the first being the slice of bytes i.e. the file contents, and the second being the output variable. 
Now, the method does not return the parsed YAML; it only returns an error if one occurred. So we need to set the second parameter as a pointer to the object into which we want to store the parsed YAML.&#xA;&#xA;In the example, we have called `Unmarshal(f, &amp;data)`, which reads the contents from the slice of bytes `f` and writes the parsed YAML into the memory location of `data`; hence we use `&amp;data`, the pointer to the variable (fetching its memory address).&#xA;&#xA;So, that is how we obtain the map of keys and values from the YAML config. Thereafter, you can iterate over the map, access the keys and values, type cast them as per requirement, and basically control what processing needs to be done on the parsed YAML content.&#xA;&#xA;```&#xA;published : false&#xA;bool&#xA;&#xA;words : 1500&#xA;int&#xA;&#xA;title : Golang Blog Series&#xA;string&#xA;&#xA;date : 22nd October 2022&#xA;string&#xA;&#xA;tags : [go files]&#xA;[]interface {}&#xA;```&#xA;&#xA;I have just printed the types of the values in the above output with `log.Printf(&#34;%T&#34;, v)`; we can see the types are being correctly recognized and parsed. The last object is indeed a slice, so it has a slice (array) interface attached to it.&#xA;&#xA;### Reading a TOML file&#xA;&#xA;YAML and TOML are quite similar, though TOML has more restrictive configuration options and is arguably more readable, as YAML can get complicated pretty quickly. 
Both of them have their pros and cons: YAML is used everywhere in the DevOps world for configs, whereas TOML is the format of choice for Python packaging and static site generator configs.&#xA;&#xA;Let&#39;s see how we can use golang to read TOML files.&#xA;&#xA;```&#xA;$ go mod init &lt;your_project_package_name&gt;&#xA;$ go get github.com/pelletier/go-toml&#xA;```&#xA;&#xA;The above commands are used for setting up a golang package or project and installing the [go-toml](https://pkg.go.dev/github.com/pelletier/go-toml) package. Once they are done executing, they will generate the `go.mod` and `go.sum` files used for storing the dependencies and packages installed for the project locally.&#xA;&#xA;```toml&#xA;[blog]&#xA;name=&#39;techstructive-blog&#39;&#xA;tags=[&#39;go&#39;,&#39;django&#39;,&#39;vim&#39;]&#xA;author=&#39;meet gor&#39;&#xA;active=true&#xA;&#xA;[author]&#xA;name=&#39;Meet Gor&#39;&#xA;github=&#39;mr-destructive&#39;&#xA;twitter=&#39;meetgor21&#39;&#xA;posts=80&#xA;```&#xA;&#xA;The above is the sample file `blog.toml` which we will read in the go script below. The TOML file has a similar structure to what we have seen in the previous examples. 
We have different data types like string, boolean, integer, and list.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;log&#34;&#xA;&#x9;&#34;os&#34;&#xA;&#xA;&#x9;toml &#34;github.com/pelletier/go-toml&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#xA;&#x9;f, err := os.ReadFile(&#34;blog.toml&#34;)&#xA;&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;log.Fatal(err)&#xA;&#x9;}&#xA;&#xA;&#x9;var data map[interface{}]interface{}&#xA;&#xA;&#x9;err = toml.Unmarshal(f, &amp;data)&#xA;&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;log.Fatal(err)&#xA;&#x9;}&#xA;&#xA;&#x9;log.Println(data)&#xA;&#xA;&#x9;for k, v := range data {&#xA;&#x9;&#x9;log.Println(k, &#34;:&#34;, v)&#xA;&#xA;&#x9;&#x9;switch t := v.(type) {&#xA;&#x9;&#x9;case map[string]interface{}:&#xA;&#x9;&#x9;&#x9;for a, b := range t {&#xA;&#x9;&#x9;&#x9;&#x9;log.Println(a, &#34;:&#34;, b)&#xA;&#x9;&#x9;&#x9;}&#xA;&#x9;&#x9;}&#xA;&#x9;}&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run toml.go&#xA;&#xA;map[author:map[github:mr-destructive name:Meet Gor posts:80 twitter:meetgor21] blog:map[active:true author:meet gor&#xA;&#xA;name:techstructive-blog tags:[go django vim]]]&#xA;&#xA;blog : map[active:true author:meet gor name:techstructive-blog tags:[go django vim]]&#xA;name : techstructive-blog&#xA;tags : [go django vim]&#xA;author : meet gor&#xA;active : true&#xA;&#xA;author : map[github:mr-destructive name:Meet Gor posts:80 twitter:meetgor21]&#xA;&#xA;name : Meet Gor&#xA;github : mr-destructive&#xA;twitter : meetgor21&#xA;posts : 80&#xA;```&#xA;&#xA;So, in the above example and output, the TOML file was read and the key-value pairs inside it were printed. The first thing we do is read the file `blog.toml` with the `os` package&#39;s `ReadFile` function. The function takes in a string as the path to the file to be read and returns a slice of bytes. We use this slice of bytes as a parameter to the [Unmarshal](https://pkg.go.dev/github.com/pelletier/go-toml#Unmarshal) method. 
The second parameter for `Unmarshal` is the output variable (usually a pointer to a variable). We have created a map of `interface{}` to `interface{}`, since there can be nested tables whose keys hold the sections of the config.&#xA;&#xA;The variable `data` is a map of `interface{}` to `interface{}`, and we pass the memory address of the `data` variable to the `Unmarshal` method. Thereby the parsed `TOML` content is stored in the `data` variable.&#xA;&#xA;```&#xA;name : techstructive-blog&#xA;string&#xA;&#xA;tags : [go django vim]&#xA;[]interface{}&#xA;&#xA;author : meet gor&#xA;string&#xA;&#xA;active : true&#xA;bool&#xA;&#xA;name : Meet Gor&#xA;string&#xA;&#xA;github : mr-destructive&#xA;string&#xA;&#xA;twitter : meetgor21&#xA;string&#xA;&#xA;posts : 80&#xA;int64&#xA;```&#xA;&#xA;The above is a verbose output of the types of the values as parsed by golang: we have `string`, `bool`, `int64`, and a slice (a list with `interface{}` attached to it). Primitive types like `string`, `bool`, `int64`, and `float64` are parsed directly; any other type will have an interface attached to it.&#xA; &#xA; In such cases, where the type of a value is not among these primitive types, we can use a pre-defined struct to parse the content from the file. 
Though it would require a strict structure and a predictable shape in the parsed file.&#xA;&#xA;### Reading CSV file&#xA;&#xA;We can even read a CSV file in golang; in the previous post, we used custom delimiters while parsing a file.&#xA;&#xA;```csv&#xA;id,name,posts,exp&#xA;21,jim,23,2&#xA;33,kevin,39,1&#xA;45,james,70,2&#xA;56,chris,89,3&#xA;```&#xA;&#xA;The above file is a sample csv file; though it is quite small, it works as an example.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;encoding/csv&#34;&#xA;&#x9;&#34;log&#34;&#xA;&#x9;&#34;os&#34;&#xA;)&#xA;&#xA;func check_error(err error) {&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;log.Fatal(err)&#xA;&#x9;}&#xA;}&#xA;&#xA;func main() {&#xA;&#x9;f, err := os.Open(&#34;temp.csv&#34;)&#xA;&#x9;check_error(err)&#xA;&#x9;defer f.Close()&#xA;&#xA;&#x9;reader := csv.NewReader(f)&#xA;&#xA;&#x9;n, err := reader.ReadAll()&#xA;&#x9;check_error(err)&#xA;&#x9;for _, line := range n {&#xA;&#x9;&#x9;for _, text := range line {&#xA;&#x9;&#x9;&#x9;log.Println(text)&#xA;&#x9;&#x9;}&#xA;&#x9;}&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run main.go&#xA;id&#xA;name&#xA;posts&#xA;exp&#xA;21&#xA;jim&#xA;23&#xA;2&#xA;33&#xA;kevin&#xA;39&#xA;1&#xA;45&#xA;james&#xA;70&#xA;2&#xA;56&#xA;chris&#xA;89&#xA;3&#xA;```&#xA;&#xA;The CSV package has a [NewReader](https://pkg.go.dev/encoding/csv#NewReader) function that accepts an `io.Reader` and returns a `Reader` object. From the reader, we use the [ReadAll](https://pkg.go.dev/encoding/csv#Reader.ReadAll) method, which returns a two-dimensional slice of strings, or an error if there was an issue while parsing the content. You can get a detailed explanation of the CSV parsing and reading in the [previous post](https://www.meetgor.com/golang-file-read/#Read%20File%20by%20a%20delimiter).&#xA;&#xA;&#xA;### Reading CSV from URL&#xA;&#xA;A CSV file can also be read from a URL; the content of the file comes from `response.Body` in place of the file object reference (in the previous example, the [os.Open()](https://pkg.go.dev/os#Open) method returned an [os.File](https://pkg.go.dev/os#File) object). 
&#xA;&#xA;We use the `http.Get(string)` method to get the response from the URL for reading a CSV file present on the web.&#xA;&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;encoding/csv&#34;&#xA;&#x9;&#34;log&#34;&#xA;&#x9;&#34;net/http&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#xA;&#x9;url := &#34;https://github.com/woocommerce/woocommerce/raw/master/sample-data/sample_products.csv&#34;&#xA;&#x9;response, err := http.Get(url)&#xA;&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;log.Println(err)&#xA;&#x9;&#x9;return&#xA;&#x9;}&#xA;&#xA;&#x9;defer response.Body.Close()&#xA;&#xA;&#x9;reader := csv.NewReader(response.Body)&#xA;&#x9;n, err := reader.ReadAll()&#xA;&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;log.Println(err)&#xA;&#x9;}&#xA;&#xA;&#x9;for _, line := range n {&#xA;&#x9;&#x9;for _, text := range line {&#xA;&#x9;&#x9;&#x9;log.Println(text)&#xA;&#x9;&#x9;}&#xA;&#x9;}&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run csv.go&#xA;&lt;feff&gt;ID&#xA;Type&#xA;SKU&#xA;Name&#xA;Published&#xA;Is featured?&#xA;Visibility in catalog&#xA;Short description&#xA;Description&#xA;Date sale price starts&#xA;Date sale price ends&#xA;...&#xA;...&#xA;...&#xA;```&#xA;So, that&#39;s how we can read a CSV file from a URL. We fetch the CSV URL `https://github.com/woocommerce/woocommerce/raw/master/sample-data/sample_products.csv` with the [http.Get](https://pkg.go.dev/net/http#Get) method, which gets us the [response.Body](https://pkg.go.dev/net/http#Response) containing the actual CSV file content. The response body (an `io.Reader`) can then be passed to `csv.NewReader` and read with [reader.ReadAll()](https://pkg.go.dev/encoding/csv#Reader.ReadAll). The function returns a multidimensional slice `[][]string` that can be iterated and parsed as per requirement. The `&lt;feff&gt;` at the start of the output is the file&#39;s byte order mark showing up in the logs.&#xA;&#xA;### Reading XML file&#xA;&#xA;XML is the de facto standard for RSS feeds; it is widely used in many places and is still all over the web. 
We&#39;ll see an example of reading an XML file locally, but as in the above example, we can also read an RSS link from the web.&#xA;&#xA;Just like CSV, the standard library has [encoding/xml](https://pkg.go.dev/encoding/xml) with all the functions needed for parsing XML files.&#xA;&#xA;We will be using a local XML file called `rss.xml`, and reading the contents from the tags in the file.&#xA;&#xA;```xml&#xA;&lt;?xml version=&#34;1.0&#34; encoding=&#34;UTF-8&#34; ?&gt;&#xA;&lt;channel&gt;&#xA;&lt;title&gt;Meet Gor&lt;/title&gt;&#xA;&lt;description&gt;Techstructive Blog Feed&lt;/description&gt;&#xA;&lt;item&gt;&#xA;&lt;title&gt;Why and How to make and use Vim as a text editor and customizable IDE&lt;/title&gt;&#xA;&lt;link&gt;https://www.meetgor.com/vim-text-editor-ide&lt;/link&gt;&#xA;&lt;/item&gt;&#xA;&lt;item&gt;&#xA;&lt;title&gt;Setting up Vim for Python&lt;/title&gt;&#xA;&lt;link&gt;https://www.meetgor.com/vim-for-python&lt;/link&gt;&#xA;&lt;/item&gt;&#xA;&lt;/channel&gt;&#xA;```&#xA;&#xA;The above example is a short part of my blog&#39;s [rss](https://www.meetgor.com/rss) feed. I have trimmed the unwanted parts and will only use the tags that we want to fetch. 
&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;encoding/xml&#34;&#xA;&#x9;&#34;log&#34;&#xA;&#x9;&#34;os&#34;&#xA;)&#xA;&#xA;type Channel struct {&#xA;&#x9;XMLName     xml.Name `xml:&#34;channel&#34;`&#xA;&#x9;Title       string   `xml:&#34;title&#34;`&#xA;&#x9;Description string   `xml:&#34;description&#34;`&#xA;&#x9;Item        []Item   `xml:&#34;item&#34;`&#xA;}&#xA;&#xA;type Item struct {&#xA;&#x9;XMLName xml.Name `xml:&#34;item&#34;`&#xA;&#x9;Title   string   `xml:&#34;title&#34;`&#xA;&#x9;Link    string   `xml:&#34;link&#34;`&#xA;}&#xA;&#xA;func check_error(err error) {&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;log.Fatal(err)&#xA;&#x9;}&#xA;}&#xA;&#xA;func main() {&#xA;&#xA;&#x9;f, err := os.ReadFile(&#34;rss.xml&#34;)&#xA;&#x9;check_error(err)&#xA;&#xA;&#x9;d := Channel{}&#xA;&#x9;err = xml.Unmarshal(f, &amp;d)&#xA;&#x9;check_error(err)&#xA;&#xA;&#x9;for _, item := range d.Item {&#xA;&#x9;&#x9;log.Println(item.Title)&#xA;&#x9;}&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run xml.go&#xA;&#xA;{{ channel} Meet Gor Techstructive Blog Feed [{{ item} Why and How to make and use Vim as a text editor and customizable IDE https://www.meetgor.com/vim-text-editor-ide} {{ item} Setting up Vim for Python https://www.meetgor.com/vim-for-python}]}&#xA;&#xA;Why and How to make and use Vim as a text editor and customizable IDE&#xA;Setting up Vim for Python&#xA;```&#xA;&#xA;The above example uses a couple of `struct` types, `Channel` and `Item`, that store the tag data like `title`, `description`, `link`, etc. Unlike JSON, YAML, and TOML, XML content can&#39;t be parsed directly into a map; we need a structure to parse into. And that&#39;s fine, as XML is not very dynamic in terms of structure; we usually have standard tags and elements which can be pre-defined in a struct type. Note that `os.ReadFile` returns a slice of bytes, not a file handle, so there is nothing to close here.&#xA;&#xA;In this example, the RSS feed has a `channel` tag with `title`, `description`, and `item`. &#xA;&#xA;**NOTE: Use Title case for the fields of the structs. 
It will make the fields public; I spent a few hours debugging that, really :)**&#xA;&#xA;So, we define the `Channel` struct with a field like `Title` as a string, tagged with `xml:&#34;title&#34;`. This means the content of the `title` tag in the XML will be stored as a string in the field named `Title`. Similarly, we have fields like `Description` and `Item []Item`, a slice holding the multiple `item` tags that might be present in the XML file. The `XMLName` field identifies the parent tag for the struct, so we use `channel` for the first struct as it is the tag at the top of the hierarchy in the XML file.&#xA;&#xA;So, we create an object for the root structure as `Channel{}` (an empty object instantiated). The `xml.Unmarshal` function is passed the content of the file, `f`, which is a slice of bytes as we have seen in the previous examples. The slice of bytes is used as the first parameter of the `Unmarshal` method, and the reference to the empty `Channel` object as the second parameter. The second parameter is where the parsed XML content from the file will be stored.&#xA; &#xA;I have a few examples on the GitHub repository covering the reading of files from a URL for the CSV and XML formats. But this concept can be applied to JSON, YAML, and other file formats as well.&#xA;&#xA;That&#39;s it from this part. Reference for all the code examples and commands can be found in the [100 days of Golang](https://github.com/Mr-Destructive/100-days-of-golang/tree/main/scripts/files/read/config_files) GitHub repository.&#xA;&#xA;## Conclusion&#xA;&#xA;So, that&#39;s it from this post; we covered how to read specific configuration files like `JSON`, `CSV`, `YAML`, `TOML`, and `XML`. We saw how to read a local file and also touched on reading contents from a file on the web with a URL. We also saw how we can use pre-defined structs to parse content from a file, especially for XML.&#xA;&#xA;Thank you for reading. 
If you have any queries, questions, or feedback, you can let me know in the discussion below or on my social handles. Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>Golang: File Reading</title>
      <link>https://www.meetgor.com/posts/golang-file-read</link>
      <description>Introduction In the 22nd post of the series, we will be looking into the file-handling process in golang, in the next few posts, we will cover the operations on</description>
      <pubDate>Sun, 23 Oct 2022 00:00:00 UTC</pubDate>
      <content>## Introduction&#xA;&#xA;In the 22nd post of the series, we will be looking into the file-handling process in golang, in the next few posts, we will cover the operations on file using golang. In this first entry of the file handling sub-series, we will understand the `READ` operation with files. We will see different ways to read a file, it can be word by word, line by line, or even custom chink by chunk. &#xA;&#xA;While dealing with files, we will also use standard library packages such as `os`, `bufio`, etc. We&#39;ll also touch on how we can read files from a remote location. Using golang, we will have a low-level interaction with file management but golang also abstracts the most of heavy lifting and management of files for us, so it becomes quite easy to work with files.&#xA;&#xA;&#xA;## Read the file as a single string (using os.ReadFile)&#xA;&#xA;We can use the [os](https://pkg.go.dev/os) package in golang, in which we have access to the [ReadFile](https://pkg.go.dev/os#ReadFile) funciton. The `ReadFile` function takes in a parameter as a string which should be a file name, it returns a slice of bytes or an error. We have discussed the error handling in the previous part of the series. So, we have to use the comma ok error syntax to get the appropriate return value from the funciton. We can grab the slice of bytes as the text we want or an error if there are errors like a file doesn&#39;t exist, it&#39;s a folder, etc. 
&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;log&#34;&#xA;&#x9;&#34;os&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#xA;&#x9;text, err := os.ReadFile(&#34;sample.txt&#34;)&#xA;&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;log.Fatal(err)&#xA;&#x9;}&#xA;&#xA;&#x9;log.Println(string(text))&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run main.go&#xA;2022/10/23 22:39:11 Golang is a programming language.&#xA;created: 2007&#xA;type:static&#xA;```&#xA;&#xA;So, under the hood here, we have `text` as a slice of bytes. We can iterate over it and get the content of the file character by character. Though we don&#39;t directly interact with the file content, we store it in a variable; in technical terms, the whole file is loaded into memory at once. We thereby get a single string object containing the content of the file.&#xA;&#xA;&#xA;## Read file line by line &#xA;&#xA;We can also read a file line by line using [bufio.NewScanner()](https://pkg.go.dev/bufio#NewScanner). The function takes in a [Reader](https://pkg.go.dev/io#Reader) object; in our case it will be a file object. The function returns a scanner object that can be used to read the text with a particular scanner method. The returned object can be used in a loop to iterate over the content; in our case, we use the [Scan](https://pkg.go.dev/bufio#Scanner.Scan) method to split the file into lines. 
But we can use other methods like [ScanWords](https://pkg.go.dev/bufio#ScanWords) for scanning words, [ScanRunes](https://pkg.go.dev/bufio#ScanRunes) for scanning rune by rune (character by character), and [ScanBytes](https://pkg.go.dev/bufio#ScanBytes) for scanning byte by byte. &#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;bufio&#34;&#xA;&#x9;&#34;log&#34;&#xA;&#x9;&#34;os&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#xA;&#x9;f, err := os.Open(&#34;sample.txt&#34;)&#xA;&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;log.Fatal(err)&#xA;&#x9;}&#xA;&#x9;line_list := []string{}&#xA;&#xA;&#x9;defer f.Close()&#xA;&#x9;scanner := bufio.NewScanner(f)&#xA;&#x9;for scanner.Scan() {&#xA;&#x9;&#x9;line := scanner.Text()&#xA;&#x9;&#x9;line_list = append(line_list, line)&#xA;&#x9;}&#xA;&#x9;if err := scanner.Err(); err != nil {&#xA;&#x9;&#x9;log.Fatal(err)&#xA;&#x9;}&#xA;&#x9;for _, line := range line_list {&#xA;&#x9;&#x9;log.Println(line)&#xA;&#x9;}&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run line.go&#xA;&#xA;2022/10/23 22:39:50 Golang is a programming language.&#xA;2022/10/23 22:39:50 created: 2007&#xA;2022/10/23 22:39:50 type:static&#xA;```&#xA;&#xA;In the above example, the file is read in with `bufio.NewScanner` and is iterated line by line with the help of the `Scan` function. The text in the line is scanned and stored in the variable `line`, which is a string; this is further appended to the string slice `line_list`. Hence we can iterate over the file content line by line and store the results as a string array.&#xA;&#xA;Here we have used the `defer` keyword before calling the `f.Close()` method because we want to close the file after we are done performing operations on it. The deferred call runs when the `main` function returns, i.e. at the end of the program.&#xA;&#xA;## Read File by a delimiter&#xA;&#xA;We can also read a file with a custom delimiter, which is useful for CSV and other delimited formats. 
The [csv](https://pkg.go.dev/encoding/csv) package has a [NewReader](https://pkg.go.dev/encoding/csv#NewReader) function which takes in a reader of the file content, and it will return a [Reader](https://pkg.go.dev/encoding/csv#Reader) object. We can alter the attribute `Comma` in the `Reader` object and set it to any value we want as a delimiter. Thereafter we can read the whole content, or read it as words, lines, bytes, or chunks as per your criteria. The extracted content will be split as a slice of the data separated by the delimiter set in the `Comma` attribute.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;encoding/csv&#34;&#xA;&#x9;&#34;log&#34;&#xA;&#x9;&#34;os&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#xA;&#x9;f, err := os.Open(&#34;delimiter.txt&#34;)&#xA;&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;log.Fatal(err)&#xA;&#x9;}&#xA;&#x9;defer f.Close()&#xA;&#xA;&#x9;reader := csv.NewReader(f)&#xA;&#xA;&#x9;reader.Comma = &#39;:&#39;&#xA;&#xA;&#x9;data, err := reader.ReadAll()&#xA;&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;log.Fatal(err)&#xA;&#x9;}&#xA;&#x9;log.Println(data)&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ cat delimiter.txt&#xA;&#xA;10:22:2022&#xA;golang:21:read&#xA;&#xA;&#xA;$ go run delimiter.go&#xA;&#xA;2022/10/23 22:40:44 [[10 22 2022] [golang 21 read]]&#xA;```&#xA;&#xA;In the above example, the delimiter is set as `:` using the `Comma` attribute. By using the `NewReader` function, we fetch the reader object, and by using the `ReadAll` function associated with the reader object, we read the contents. The content is fetched as slices of strings, split on the delimiter.&#xA;&#xA;&#xA;### Reading File word by word&#xA;&#xA;We can even use [ScanWords](https://pkg.go.dev/bufio#ScanWords) to read a file word by word. A word is a collection of characters separated by spaces. Instead of reading line by line, this split function breaks the file content at spaces. 
&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;bufio&#34;&#xA;&#x9;&#34;log&#34;&#xA;&#x9;&#34;os&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;f, err := os.Open(&#34;sample.txt&#34;)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;log.Fatal(err)&#xA;&#x9;}&#xA;&#x9;defer f.Close()&#xA;&#xA;&#x9;scanner := bufio.NewScanner(f)&#xA;&#x9;scanner.Split(bufio.ScanWords)&#xA;&#xA;&#x9;wordlist := []string{}&#xA;&#x9;for scanner.Scan() {&#xA;&#x9;&#x9;word := scanner.Text()&#xA;&#x9;&#x9;wordlist = append(wordlist, word)&#xA;&#x9;&#x9;log.Println(word)&#xA;&#x9;}&#xA;&#x9;if err := scanner.Err(); err != nil {&#xA;&#x9;&#x9;log.Fatal(err)&#xA;&#x9;}&#xA;&#x9;log.Println(wordlist)&#xA;&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run word.go&#xA;&#xA;2022/10/23 22:42:03 Golang&#xA;2022/10/23 22:42:03 is&#xA;2022/10/23 22:42:03 a&#xA;2022/10/23 22:42:03 programming&#xA;2022/10/23 22:42:03 language.&#xA;2022/10/23 22:42:03 created:&#xA;2022/10/23 22:42:03 2007&#xA;2022/10/23 22:42:03 type:static&#xA;&#xA;2022/10/23 22:42:03 [Golang is a programming language. created: 2007 type:static]&#xA;```&#xA;&#xA;With the `ScanWords` function, we can read the contents of the file word by word. The scanner object, which has the actual content of the file, is configured by the `Split` function; the split criterion here is words, with space as the delimiter. The check on `scanner.Err()` comes after the loop, since scanning errors only surface once `Scan` stops. The `wordlist` is a slice of strings to which we append the string `word`, which in turn is read from the `scanner.Text()` function.&#xA;&#xA;### Reading Files in chunks&#xA;&#xA;We can even read files in chunks; a chunk is a collection/array of bytes. We can specify the number of bytes we want to read in one go, and the file reader will scan the content as a slice of that many bytes each iteration. 
The [Read](https://pkg.go.dev/bufio#Reader.Read) function takes in a slice of bytes and returns the number of bytes read into it.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;bufio&#34;&#xA;&#x9;&#34;io&#34;&#xA;&#x9;&#34;log&#34;&#xA;&#x9;&#34;os&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#xA;&#x9;f, err := os.Open(&#34;sample.txt&#34;)&#xA;&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;log.Fatal(err)&#xA;&#x9;}&#xA;&#xA;&#x9;defer f.Close()&#xA;&#xA;&#x9;reader := bufio.NewReader(f)&#xA;&#x9;chunk_size := 16&#xA;&#x9;chunk_list := []string{}&#xA;&#x9;buf := make([]byte, chunk_size)&#xA;&#xA;&#x9;for {&#xA;&#x9;&#x9;n, err := reader.Read(buf)&#xA;&#xA;&#x9;&#x9;if err != nil {&#xA;&#x9;&#x9;&#x9;if err != io.EOF {&#xA;&#x9;&#x9;&#x9;&#x9;log.Fatal(err)&#xA;&#x9;&#x9;&#x9;}&#xA;&#x9;&#x9;&#x9;break&#xA;&#x9;&#x9;}&#xA;&#x9;&#x9;chunk_list = append(chunk_list, string(buf[0:n]))&#xA;&#x9;}&#xA;&#x9;for _, chunk := range chunk_list {&#xA;&#x9;&#x9;log.Print(chunk)&#xA;&#x9;}&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run chunks.go&#xA;&#xA;2022/10/23 22:44:41 Golang is a prog&#xA;2022/10/23 22:44:41 ramming language&#xA;2022/10/23 22:44:41 .&#xA;created: 2007&#xA;2022/10/23 22:44:41 type:static&#xA;```&#xA;&#xA;In the above example, we have opened the file as `f`. The contents are read with the help of the `NewReader` function, which returns a reader object that can then be used to read the content in chunks of bytes. The `chunk_size` defines the size we want to use for reading the content, and `chunk_list` is a slice of strings which will hold the chunks of bytes cast into strings. With the `Read` function, bytes are read into the buffer, and the buffer is sliced as per the number of bytes obtained from the `Read` function. 
We append each chunk of bytes, as a string, to the slice, and thereby we obtain a slice of strings.&#xA;&#xA;&#xA;### Read file character by character&#xA;&#xA;We can even read a file one character at a time using the [ScanRunes](https://pkg.go.dev/bufio#ScanRunes) split function, which scans a single rune at a time. So, we can scan these runes one at a time and store them as a slice of bytes (this works byte-for-byte here since the sample content is ASCII). Thereby we will have the content of the file stored as a slice of bytes.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;bufio&#34;&#xA;&#x9;&#34;log&#34;&#xA;&#x9;&#34;os&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;f, err := os.Open(&#34;sample.txt&#34;)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;log.Fatal(err)&#xA;&#x9;}&#xA;&#x9;defer f.Close()&#xA;&#xA;&#x9;scanner := bufio.NewScanner(f)&#xA;&#x9;scanner.Split(bufio.ScanRunes)&#xA;&#xA;&#x9;if err := scanner.Err(); err != nil {&#xA;&#x9;&#x9;log.Fatal(err)&#xA;&#x9;}&#xA;&#xA;&#x9;charlist := []byte{}&#xA;&#x9;for scanner.Scan() {&#xA;&#x9;&#x9;char := byte(scanner.Text()[0])&#xA;&#x9;&#x9;charlist = append(charlist, char)&#xA;&#x9;}&#xA;&#x9;log.Println(charlist)&#xA;&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run char.go&#xA;&#xA;2022/10/23 22:48:55 [71 111 108 97 110 103 32 105 115 32 97 32 112 114 111 103 114 97 109 109 105 110 103 32 108 97 110 103 117 97 103 101 46 10 99 114 101 97 116 101 100 58 32 50 48 48 55 10 116 121 112 101 58 115 116 97 116 105 99 10]&#xA;```&#xA;&#xA;We can see in the above example that the output is a slice of bytes, i.e. `uint8` values; we can cast them to `string` to obtain the equivalent ASCII representation of the bytes. The `ScanRunes` split function allows us to read the content from the scanner one rune at a time, as we split the input into unit runes.&#xA;&#xA;That&#39;s it from this part. 
Reference for all the code examples and commands can be found in the [100 days of Golang](https://github.com/mr-destructive/100-days-of-golang/tree/main/scripts/files/read/) GitHub repository.&#xA;&#xA;## Conclusion&#xA;&#xA;In this section, we explored the functions and packages related to file reading. We saw how we can use packages like `os`, `bufio`, `encoding`, etc. to read files in different ways. We saw how to read files as a single string, line by line, word by word, character by character, in chunks, and also with a custom delimiter. Hopefully, the basics of file reading are now clear, and the examples helped in understanding the syntactic constructs. &#xA;&#xA;Thank you for reading. If you have any queries, questions, or feedback, you can let me know in the discussion below or on my social handles. Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>Golang: Paths</title>
      <link>https://www.meetgor.com/posts/golang-paths</link>
      <description>Introduction In the 21st post of the series, we will be exploring the file paths in golang, we will be exploring how we can deal with paths. By using packages l</description>
      <pubDate>Sun, 02 Oct 2022 00:00:00 UTC</pubDate>
      <content>## Introduction&#xA;&#xA;In the 21st post of the series, we will be exploring file paths in golang and how we can deal with them. By using packages like `os`, `path`, `io`, we can work with file systems and operating system-specific details. In this section, we will see how to resolve paths, fetch details from paths, extract relative or absolute paths, iterate over file systems, etc. &#xA;&#xA;Starting from this post, the series will follow a specific topic across the upcoming few posts, covering files and paths. We will be talking about dealing with paths and files in golang. This post is just about working with paths.&#xA;&#xA;## Resolving and Parsing Path&#xA;&#xA;In golang, the [os](https://pkg.go.dev/os) and the [path](https://pkg.go.dev/path) packages are quite helpful in working with paths. We use the `path/filepath` package specifically for working with paths and file structures. &#xA;&#xA;### Get the current working directory&#xA;&#xA;To get the path for the current working directory, we can use the [os.Getwd()](https://pkg.go.dev/os#Getwd) function. 
The function returns two values: the path and an error. If the working directory exists, it returns the absolute path to the directory; if the path is deleted or corrupted while processing, it returns an error object.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import(&#xA;    &#34;os&#34;&#xA;    &#34;log&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#xA;    dir, err := os.Getwd()&#xA;    if err != nil {&#xA;        log.Println(err)&#xA;    } else {&#xA;        log.Println(dir)&#xA;    }&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ pwd&#xA;/home/meet/code/techstructive-blog&#xA;&#xA;$ go run main.go&#xA;2022/10/01 19:19:09 /home/meet/code/techstructive-blog&#xA;```&#xA;&#xA;So, as we can see, the `Getwd` function returns the absolute path to the current working directory, which is the path from which you are executing/running the script file.&#xA;&#xA;### Get the path to the home directory&#xA;&#xA;We can even get the home directory path, like `/home` followed by the user name on Linux, or the user profile directory on Windows. The [UserHomeDir()](https://pkg.go.dev/os#UserHomeDir) function returns the home directory of the user executing the file. The return value is simply a string, just like with the `Getwd` function.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import(&#xA;    &#34;os&#34;&#xA;    &#34;log&#34;&#xA;&#xA;)&#xA;&#xA;func main() {&#xA;    dir, err := os.UserHomeDir()&#xA;    if err != nil {&#xA;        log.Println(err)&#xA;    } else {&#xA;        log.Println(dir)&#xA;    }&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ echo $HOME&#xA;/home/meet/&#xA;&#xA;$ go run main.go&#xA;2022/10/01 19:35:50 /home/meet&#xA;```&#xA;&#xA;So, as expected, the `UserHomeDir` function returns the path string of the home directory of the user.&#xA;&#xA;### Get path from a file name string &#xA;&#xA;Let&#39;s say we have a filename and we want the absolute path of it. 
The [path/filepath](https://pkg.go.dev/path/filepath) package provides the [Abs](https://pkg.go.dev/path/filepath#Abs) function that does exactly that. The function takes a directory or file name as a string and returns its absolute path as a string. The function might as well return an error, as the file path might not exist or the file might have been deleted, so we&#39;ll have to call the function with the value, error syntax.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import(&#xA;    &#34;path/filepath&#34;&#xA;    &#34;log&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#xA;    file_name := &#34;default.md&#34;&#xA;    log.Println(file_name)&#xA;    dir, err := filepath.Abs(file_name)&#xA;    if err != nil {&#xA;        log.Println(err)&#xA;    } else {&#xA;        log.Println(dir)&#xA;    }&#xA;}&#xA;&#xA;```&#xA;&#xA;```&#xA;$ go run main.go&#xA;&#xA;2022/10/01 19:52:23 default.md&#xA;2022/10/01 19:52:23 /home/meet/code/techstructive-blog/default.md&#xA;&#xA;```&#xA;&#xA;As we can see, the file `default.md` was passed to the `Abs()` function and it returned the absolute path of the file.&#xA;&#xA;### Get Parent Directory from a Path&#xA;&#xA;We can get the parent directory for a given path: if the path is to a file, we get the absolute path of the file&#39;s parent directory, and if the path is to a folder, we get the folder&#39;s parent directory.&#xA;&#xA;```go&#xA;&#xA;package main&#xA;&#xA;import(&#xA;    &#34;path/filepath&#34;&#xA;    &#34;log&#34;&#xA;)&#xA;&#xA;func main() {&#xA;    file_name := &#34;drafts/default.md&#34;&#xA;    //file_name := &#34;drafts/&#34;&#xA;    path, err := filepath.Abs(file_name)&#xA;    if err != nil {&#xA;        log.Println(err)&#xA;    } else {&#xA;        //log.Println(path)&#xA;        log.Println(filepath.Dir(path))&#xA;    }&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run main.go&#xA;2022/10/01 19:58:45 /home/meet/code/techstructive-blog/drafts&#xA;&#xA;$ go run main.go&#xA;2022/10/01 19:58:45 /home/meet/code/techstructive-blog 
&#xA;&#xA;```&#xA;&#xA;As we can see, when we pass in a file path i.e. `drafts/default.md`, the `Dir` method returns the path of the parent folder, and even if we pass a directory path i.e. `drafts/`, the method returns the parent of that directory.&#xA;&#xA;### Get the last file/folder for a given Absolute Path&#xA;&#xA;Golang also provides a way to get the file/directory name from a path string using the [Base](https://pkg.go.dev/path/filepath#Base) function provided in the [path/filepath](https://pkg.go.dev/path/filepath) package. &#xA;&#xA;```go&#xA;file_name := &#34;default.md&#34;&#xA;dir, err := filepath.Abs(file_name)&#xA;&#xA;if err != nil {&#xA;    log.Println(err)&#xA;} else {&#xA;    log.Println(dir)&#xA;    log.Println(filepath.Base(dir))&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run main.go&#xA;&#xA;2022/10/01 19:58:45 /home/meet/code/techstructive-blog/drafts/default.md&#xA;2022/10/01 20:19:28 default.md &#xA;```&#xA;&#xA;So, the `Base` function returns the last element in the path; it can be a file or a directory, it just returns the name after the last `/`. In the above example, we start with the filename `default.md`, set `dir` to the absolute path of that file, and grab the file name back using the `Base` function.&#xA;&#xA;## Fetching details from a Path&#xA;&#xA;We can also use utility functions for dealing with paths in golang, like checking if a file or path exists, whether a path is a file or a directory, grabbing the file name and extension, etc. The `path/filepath` and `os` packages help with these kinds of operations.&#xA;&#xA;### Check if a path exists&#xA;&#xA;We can use the [os.Stat](https://pkg.go.dev/os#Stat) function along with [os.IsNotExist](https://pkg.go.dev/os#IsNotExist) to find out whether a path exists or not. The `Stat` function returns a [FileInfo](https://pkg.go.dev/io/fs#FileInfo) object or an error. The `FileInfo` object has methods such as `Name()`, `IsDir()`, `Size()`, etc. 
If the `Stat` function returns an error, it most likely means the path does not exist; the `os` package also has the `IsNotExist()` function, which returns a `boolean` value. It returns `true` if the passed error indicates that the path doesn&#39;t exist and `false` otherwise.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import(&#xA;    &#34;path/filepath&#34;&#xA;    &#34;log&#34;&#xA;    &#34;os&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#xA;    file_name := &#34;drafts/default.md&#34;&#xA;    path, err := filepath.Abs(file_name)&#xA;    if err != nil {&#xA;        log.Println(err)&#xA;    } else {&#xA;        if _, err := os.Stat(path); os.IsNotExist(err) {&#xA;            log.Println(&#34;No, &#34; + path + &#34; does not exist&#34;)&#xA;        } else {&#xA;            log.Println(&#34;Yes, &#34; + path + &#34; exists&#34;)&#xA;        }&#xA;    }&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run main.go&#xA;&#xA;2022/10/01 20:51:31 Yes, /home/meet/code/techstructive-blog/drafts/default.md exists&#xA;```&#xA;&#xA;So, from the above example, the program logs whether the path is present in the system or not. The error is passed from the `Stat` function to the `IsNotExist` function for logging relevant messages. Since the path exists, we get the &#34;exists&#34; log.&#xA;&#xA;### Check if a path is a file or directory &#xA;&#xA;The `FileInfo` object returned from the `Stat` method provides a few methods, such as `IsDir()`, that can be used for detecting whether a given path is a directory or not. The method simply returns a `boolean` value indicating whether the provided path points to a directory. 
Since we have to call `IsDir()` on the result for a path, we convert the file string into a path using the `Abs` function and then check that the path actually exists with the `Stat()` function.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import(&#xA;    &#34;path/filepath&#34;&#xA;    &#34;log&#34;&#xA;    &#34;os&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#xA;    file_name := &#34;drafts/default.md&#34;&#xA;    //file_name := &#34;drafts/&#34;&#xA;    path, err := filepath.Abs(file_name)&#xA;    if err != nil {&#xA;            log.Println(err)&#xA;    } else {&#xA;        if t, err := os.Stat(path); os.IsNotExist(err) {&#xA;            log.Fatal(&#34;No, &#34; + path + &#34; does not exist&#34;)&#xA;        } else {&#xA;            log.Println(path)&#xA;            log.Println(t.IsDir())&#xA;        }&#xA;    }&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run main.go&#xA;2022/10/01 20:55:20 /home/meet/code/techstructive-blog/drafts/default.md&#xA;2022/10/01 20:55:20 false&#xA;&#xA;$ go run main.go&#xA;2022/10/01 20:55:20 /home/meet/code/techstructive-blog/drafts/&#xA;2022/10/01 20:55:20 true&#xA;```&#xA;&#xA;So, by running the program for a file and a directory, we can see it returns `true` if the path is a directory and `false` if the provided path is a file. In the above example, since `drafts/default.md` is a file, it returned `false`, and in the next run, when we set the path to `drafts/`, it returns `true` as the path provided is a directory.&#xA;&#xA;### Get File Extension from path&#xA;&#xA;By using the [path](https://pkg.go.dev/path) package, the extension of a given path can be fetched. The [Ext](https://pkg.go.dev/path#Ext) function can be used for getting the extension of the provided path string; it doesn&#39;t matter whether the provided path exists or not, or is absolute or relative, it just returns the text after the last `.` in the string. 
But if we are working with real systems it is good practice to check if the file or path actually exists.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import(&#xA;    &#34;path/filepath&#34;&#xA;    &#34;log&#34;&#xA;    &#34;path&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#xA;    file_name := &#34;default.md&#34;&#xA;    dir, err := filepath.Abs(file_name)&#xA;    if err != nil {&#xA;        log.Println(err)&#xA;    } else {&#xA;        file_ext := path.Ext(dir)&#xA;        log.Println(file_ext)&#xA;    }&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run main.go&#xA;2022/10/01 21:03:23 .md&#xA;```&#xA;&#xA;The above example demonstrates how we can get the extension of a file using the `Ext()` method in the `path` package. Given the string path as `default.md`, the function returned `.md` which is indeed the extension of the provided file.&#xA;&#xA;### Get Filename from path&#xA;&#xA;We can even get the file name from a path in golang using the [TrimSuffix](https://pkg.go.dev/strings#TrimSuffix) method in the [strings](https://pkg.go.dev/strings) package. 
The `TrimSuffix` method trims the provided suffix from the string: if we have the string `helloworld` and provide the suffix `world`, the `TrimSuffix` method will return the string `hello`; it removes the suffix string from the end of the string.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import(&#xA;    &#34;path/filepath&#34;&#xA;    &#34;log&#34;&#xA;    &#34;path&#34;&#xA;    &#34;strings&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#xA;    file_name := &#34;default.md&#34;&#xA;    dir, err := filepath.Abs(file_name)&#xA;    if err != nil {&#xA;        log.Println(err)&#xA;    } else {&#xA;        file_ext := path.Ext(dir)&#xA;        log.Println(file_ext)&#xA;        log.Println(strings.TrimSuffix(dir, file_ext))&#xA;        log.Println(strings.TrimSuffix(file_name, file_ext))&#xA;        //log.Println(strings.TrimSuffix(dir, path.Ext(dir)))&#xA;        //log.Println(strings.TrimSuffix(file_name, path.Ext(dir)))&#xA;    }&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run main.go&#xA;&#xA;2022/10/01 21:09:39 .md&#xA;2022/10/01 21:09:39 /home/meet/code/techstructive-blog/default&#xA;2022/10/01 21:09:39 default&#xA;```&#xA;&#xA;We can use the `TrimSuffix` method to remove the extension as the suffix, and what remains of the path is the file name without the extension.&#xA;&#xA;## List Files and Directories in Path&#xA;&#xA;In golang, we can use the `io` and the `path/filepath` packages to iterate over file paths. Suppose we want to list out all the files or directories in a given path; we can use functions such as `Walk` and `WalkDir` to iterate over a path string.&#xA;&#xA;There are certain types of iterations we can perform based on the constraints we might have, like iterating over only files, or only directories, not including nested directories, etc. 
We&#39;ll explore the basic iterations and explain how we can fine-tune the iteration based on the constraints.&#xA;&#xA;### List only the files in the Path&#xA;&#xA;The first example we can take is to simply list only the files in the current path directory; we don&#39;t want to list the files in nested directories. So, it will be like a simple `ls` command in Linux. Let&#39;s see how we can list out the files in the given path.&#xA;&#xA;We can use the `path/filepath` package to iterate over a given path and list out the directories and files in it. The [filepath.Walk](https://pkg.go.dev/path/filepath#Walk) or the [WalkDir](https://pkg.go.dev/path/filepath#WalkDir) function is quite useful for this kind of operation. Both functions take two parameters: the first is the string of the file system path where we want to iterate or walk, and the second is a function, either a [WalkFunc](https://pkg.go.dev/path/filepath@go1.19.1#WalkFunc) or a [WalkDirFunc](https://pkg.go.dev/io/fs#WalkDirFunc) respectively, which is called for every entry visited while walking the path. The two functions are similar, with a subtle difference in the type of parameter each takes in.&#xA;&#xA;### WalkDir Function&#xA;&#xA;The `WalkDir` callback takes in parameters such as the `string` of the file path, the [fs.DirEntry](https://pkg.go.dev/io/fs#DirEntry) object, and the `error` if any arises. The callback returns an `error` if there is any. 
We have to call the function with a string and a function value of type `type WalkDirFunc func(path string, d DirEntry, err error) error`.&#xA;&#xA;We can also use the `Walk` function to iterate over the given path.&#xA;&#xA;### Walk Function&#xA;&#xA;The `Walk` callback takes in parameters such as the `string` of the file path, the [fs.FileInfo](https://pkg.go.dev/io/fs#FileInfo) object, and the `error` if any arises. We have to call the function with a string and a function value of type `type WalkFunc func(path string, info fs.FileInfo, err error) error`.&#xA;&#xA;It might be a user preference to select one of the functions for iterating over the file system, but the [documentation](https://pkg.go.dev/path/filepath#Walk) says the `Walk` function is a little less efficient than the `WalkDir` function. If performance is not an issue, you can use either of them, based on which type of file system object you are currently working with.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import(&#xA;    &#34;path/filepath&#34;&#xA;    &#34;log&#34;&#xA;    &#34;io/fs&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#xA;&#x9;var files []string&#xA;&#x9;dir_path := &#34;.&#34;&#xA;&#x9;err := filepath.WalkDir(dir_path, func(path string, info fs.DirEntry, err error) error {&#xA;&#x9;&#x9;dir_name := filepath.Base(dir_path)&#xA;&#x9;&#x9;if info.IsDir() &amp;&amp; info.Name() != dir_name {&#xA;&#x9;&#x9;&#x9;return filepath.SkipDir&#xA;&#x9;&#x9;} else {&#xA;&#x9;&#x9;&#x9;files = append(files, path)&#xA;&#x9;&#x9;&#x9;return nil&#xA;&#x9;&#x9;}&#xA;&#x9;})&#xA;&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;&#x9;for _, file := range files {&#xA;&#x9;&#x9;log.Println(file)&#xA;&#x9;}&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run walk.go&#xA;&#xA;2022/10/02 12:07:17 .&#xA;2022/10/02 12:07:17 .dockerignore&#xA;2022/10/02 12:07:17 
.gitignore&#xA;2022/10/02 12:07:17 CNAME&#xA;2022/10/02 12:07:17 Dockerfile&#xA;2022/10/02 12:07:17 README.md&#xA;2022/10/02 12:07:17 markata.toml&#xA;2022/10/02 12:07:17 requirements.txt&#xA;2022/10/02 12:07:17 textual.log&#xA;```&#xA;&#xA;In the above example, we have used the `WalkDir` function for iterating over the file system; the directory is set as `.`, indicating the current directory. We pass the path string as the first parameter to the `WalkDir` function; the next parameter is a function, so we can either create it separately or just define an `anonymous function`. It is often easier to write an `anonymous function` rather than writing the function separately.&#xA;&#xA;So, we have created the `dir_name` variable, which takes the `dir_path` passed to the function and gets the name of the directory or file. We can then fine-tune the requirements of the iteration of the directory, i.e. check whether the path is a file or a directory, and exclude specific files with certain extensions or directories with a certain name, etc. In this example, we have added a check: if the path is a directory (using `info.IsDir()`) and the directory name is not the same as the passed path (i.e. excluding nested directories), we skip those directories (using [filepath.SkipDir](https://pkg.go.dev/io/fs#SkipDir)). So we only look for the files in the current directory, or the directory which we provided in the parameter as `dir_path`. We append those paths into the files slice using the `append` function. Finally, we check for errors raised while iterating over the file system and `panic` out of the function. We can then simply iterate over the files slice and print or perform operations as required.&#xA;&#xA;### All the files in the Path (inside directories)&#xA;&#xA;We can also list all the files within the folders provided in the path string by removing the directory name check. 
We will only append file entries to the files slice rather than appending the directories as well.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import(&#xA;    &#34;path/filepath&#34;&#xA;    &#34;log&#34;&#xA;    &#34;io/fs&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#xA;&#x9;var files []string&#xA;&#x9;root := &#34;static/&#34;&#xA;&#x9;err := filepath.WalkDir(root, func(path string, info fs.DirEntry, err error) error {&#xA;&#x9;&#x9;if info.IsDir() {&#xA;&#x9;&#x9;&#x9;return nil&#xA;&#x9;&#x9;} else {&#xA;&#x9;&#x9;&#x9;files = append(files, path)&#xA;&#x9;&#x9;&#x9;return nil&#xA;&#x9;&#x9;}&#xA;&#x9;})&#xA;&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;&#xA;&#x9;for _, file := range files {&#xA;&#x9;&#x9;log.Println(file)&#xA;&#x9;}&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run walk.go&#xA;&#xA;2022/10/02 12:08:22 static/404.html&#xA;2022/10/02 12:08:22 static/CNAME&#xA;2022/10/02 12:08:22 static/index.html&#xA;2022/10/02 12:08:22 static/main.css&#xA;2022/10/02 12:08:22 static/projects/index.html&#xA;2022/10/02 12:08:22 static/social-icons.svg&#xA;2022/10/02 12:08:22 static/tbicon.png&#xA;```&#xA;&#xA;As we can see, the iteration resulted in printing all the files in the given path, including the files in the subdirectories. The static directory had the projects directory as a subdirectory in the path, hence we are listing the files in that directory as well.&#xA;&#xA;### Recursive directories in the Path&#xA;&#xA;We can also append the directory names as well as the file names by completely removing the `info.IsDir()` check, and print the relevant information as directory or file depending on the type. 
We can also maintain different lists or slices for directories and files and append to them accordingly.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import(&#xA;    &#34;path/filepath&#34;&#xA;    &#34;log&#34;&#xA;    &#34;io/fs&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#xA;&#x9;var files []string&#xA;&#x9;root := &#34;static/&#34;&#xA;&#x9;err := filepath.WalkDir(root, func(path string, info fs.DirEntry, err error) error {&#xA;&#x9;&#x9;files = append(files, path)&#xA;&#x9;&#x9;var f string&#xA;&#x9;&#x9;if info.IsDir() {&#xA;&#x9;&#x9;&#x9;f = &#34;Directory&#34;&#xA;&#x9;&#x9;} else {&#xA;&#x9;&#x9;&#x9;f = &#34;File&#34;&#xA;&#x9;&#x9;}&#xA;&#x9;&#x9;log.Printf(&#34;%s Name: %s\n&#34;, f, info.Name())&#xA;&#x9;&#x9;return nil&#xA;&#x9;})&#xA;&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;&#xA;&#x9;for _, file := range files {&#xA;&#x9;&#x9;log.Println(file)&#xA;&#x9;}&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run walk.go&#xA;&#xA;2022/10/02 12:09:48 Directory Name: static&#xA;2022/10/02 12:09:48 File Name: 404.html&#xA;2022/10/02 12:09:48 File Name: main.css&#xA;2022/10/02 12:09:48 Directory Name: projects&#xA;2022/10/02 12:09:48 File Name: index.html&#xA;2022/10/02 12:09:48 File Name: social-icons.svg&#xA;2022/10/02 12:09:48 File Name: tbicon.png&#xA;&#xA;2022/10/02 12:09:48 static/&#xA;2022/10/02 12:09:48 static/404.html&#xA;2022/10/02 12:09:48 static/index.html&#xA;2022/10/02 12:09:48 static/main.css&#xA;2022/10/02 12:09:48 static/projects&#xA;2022/10/02 12:09:48 static/projects/index.html&#xA;2022/10/02 12:09:48 static/social-icons.svg&#xA;2022/10/02 12:09:48 static/tbicon.png&#xA;```&#xA;&#xA;We can see the directories and files present in the given path getting logged. In the output above, the projects directory is getting walked along with the files present inside it. 
This is how we can use the Walk method to iterate over directories in a file system.&#xA;&#xA;### All the folders in the Path (only directories)&#xA;&#xA;If we want to print only the directories, we can again add checks in the function body; we simply append the entry name when the path returns `true` on the `IsDir` function call.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import(&#xA;    &#34;path/filepath&#34;&#xA;    &#34;log&#34;&#xA;    &#34;io/fs&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#xA;&#x9;var folders []string&#xA;&#x9;root := &#34;static/&#34;&#xA;&#x9;err := filepath.WalkDir(root, func(path string, info fs.DirEntry, err error) error {&#xA;&#x9;&#x9;if info.IsDir() {&#xA;&#x9;&#x9;&#x9;folders = append(folders, info.Name())&#xA;&#x9;&#x9;}&#xA;&#x9;&#x9;return nil&#xA;&#x9;})&#xA;&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;&#xA;&#x9;for _, folder := range folders {&#xA;&#x9;&#x9;log.Println(folder)&#xA;&#x9;}&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run walk.go&#xA;&#xA;2022/10/02 12:13:25 static&#xA;2022/10/02 12:13:25 projects&#xA;```&#xA;&#xA;Here, we can see it lists all the folder names present in the given path; it will log all the nested directories as well. 
In the above example, the `static/` path in the local system had a projects directory and hence it prints the same; this walk continues down to the full depth of the file system.&#xA;&#xA;For all the examples on the `Walk` functions, you can check out the links on the GitHub repository:&#xA;&#xA;- [Walk and WalkDir function examples](https://github.com/Mr-Destructive/100-days-of-golang/blob/main/scripts/paths/walk.go)&#xA;&#xA;- [Walk and WalkDir function as anonymous function](https://github.com/Mr-Destructive/100-days-of-golang/blob/main/scripts/paths/walk_anonymous.go)&#xA;&#xA;## Relative or Absolute Paths&#xA;&#xA;We have been using absolute paths in the above examples, but while navigating from one directory to another, we heavily make use of relative paths as they make it easier to move around.&#xA;&#xA;### Check if a path is Absolute&#xA;&#xA;We can check if a path is absolute using the [IsAbs](https://pkg.go.dev/path#IsAbs) function; the function takes in a path string as a parameter and returns a boolean value. 
It returns `true` if the provided path is absolute, else it returns `false`.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;log&#34;&#xA;&#x9;&#34;os&#34;&#xA;&#x9;&#34;path/filepath&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#xA;&#x9;dir, err := os.Getwd()&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;&#x9;log.Println(dir)&#xA;&#x9;log.Println(filepath.IsAbs(dir))&#xA;&#xA;&#x9;dir = &#34;../math&#34;&#xA;&#x9;log.Println(dir)&#xA;&#x9;log.Println(filepath.IsAbs(dir))&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run rel_abs.go&#xA;2022/10/02 14:38:44 /home/meet/code/techstructive-blog&#xA;2022/10/02 14:38:44 true&#xA;2022/10/02 14:38:44 ../math&#xA;2022/10/02 14:38:44 false&#xA;```&#xA;&#xA;In the above example, we can see that when we pass `../math`, indicating a `math` directory inside the parent of the current directory, we get `false`.&#xA;&#xA;But when we pass the path obtained from the `Getwd()` function call, or any path that starts from the root, the return value is `true`.&#xA;&#xA;### Get the relative path from base to target path&#xA;&#xA;Let&#39;s say we are in a certain directory `/a/b/c/` and we want to move into `/a/c/d/`; we will have to move back two times and then move into `c` followed by the `d` directory. So the relative path from `/a/b/c/` to `/a/c/d/` can be described as `../../c/d/`. We have a function in golang that does exactly this, basically creating a relative path from a base directory path to a target path. The function is provided in the `path/filepath` package as [Rel](https://pkg.go.dev/path/filepath@go1.19.1#Rel); the function takes in two parameters, both strings representing paths. The first is the base path (the directory you are in) and the second is the target path (the target to reach). 
The function returns the string representation of the relative path from the base to the target directory.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;log&#34;&#xA;&#x9;&#34;os&#34;&#xA;&#x9;&#34;path/filepath&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#xA;&#x9;dir, err := os.Getwd()&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;}&#xA;&#xA;&#x9;dir, err = filepath.Abs(&#34;plugins/&#34;)&#xA;&#x9;s, err := filepath.Abs(&#34;static/projects/&#34;)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;log.Println(err)&#xA;&#x9;}&#xA;&#xA;&#x9;log.Println(s)&#xA;&#x9;log.Println(dir)&#xA;&#x9;log.Println(filepath.Rel(s, dir))&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run rel_abs.go&#xA;&#xA;2022/10/02 12:26:09 /home/meet/code/techstructive-blog/static/projects&#xA;2022/10/02 12:26:09 /home/meet/code/techstructive-blog/plugins&#xA;2022/10/02 12:26:09 ../../plugins &lt;nil&gt;&#xA;```&#xA;&#xA;We can see that the relative path between the two directories is given as the return string from the `Rel` function.&#xA;&#xA;### Join paths&#xA;&#xA;The [Join](https://pkg.go.dev/path/filepath@go1.19.1#Join) method provided in the `filepath` package is used for combining `n` number of path strings into one path. 
It separates the file paths with the operating system-specific separator, like `/` for Linux and `\` for Windows.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;log&#34;&#xA;&#x9;&#34;path/filepath&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#xA;&#x9;dir, err := filepath.Abs(&#34;operators/arithmetic/&#34;)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;log.Println(err)&#xA;&#x9;}&#xA;&#xA;&#x9;log.Println(filepath.Join(&#34;golang&#34;, &#34;files&#34;))&#xA;&#x9;log.Println(filepath.Join(dir, &#34;/files&#34;, &#34;//read&#34;))&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run rel_abs.go&#xA;&#xA;2022/10/02 12:30:37 golang/files&#xA;2022/10/02 12:30:37 /home/meet/code/techstructive-blog/operators/arithmetic/files/read&#xA;```&#xA;&#xA;In the above example, we can see that it joins the paths accurately and ignores any extra separators in the path strings.&#xA;&#xA;That&#39;s it from this part. Reference for all the code examples and commands can be found in the [100 days of Golang](https://github.com/mr-destructive/100-days-of-golang/tree/main/scripts/paths) GitHub repository.&#xA;&#xA;## Conclusion&#xA;&#xA;So, in this post, we were able to explore the path package along with a few functions from the io as well as the os package. By using various methods and type objects, we were able to perform operations on and work with file paths. By using functions to iterate over file systems, checking for absolute paths, checking for the existence of paths, etc., the fundamentals of path handling in golang were explored.&#xA;&#xA;Thank you for reading. If you have any queries, feedback, or questions, you can drop them below on the blog as a [github discussion](https://www.meetgor.com/golang-paths/#comments), or you can ping me on my social handles as well. Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>Golang: Error Handling</title>
      <link>https://www.meetgor.com/posts/golang-error-handling</link>
      <description>Introduction Error handling is quite an important feature of any programming language to improve the quality and transparency between the user and the applicati</description>
      <pubDate>Sat, 17 Sep 2022 00:00:00 UTC</pubDate>
      <content>## Introduction&#xA;&#xA;Error handling is quite an important feature of any programming language to improve the quality and transparency between the user and the application. By raising appropriate error messages, the user can get a clear idea about the things happening in the interface as well as the application can handle the errors with appropriate actions.&#xA;&#xA;In the 20th post of the series, we will be exploring the concept of error handling in golang. From this article, we will be able to learn the fundamentals of error or exception handling in golang, create custom error classes, raise and ignore error messages, and exit or redirect the flow of state of the application when an error is raised.&#xA;&#xA;## Why we need Error Type&#xA;&#xA;We need [error handling](https://go.dev/blog/error-handling-and-go) and catching in order to stop or divert the flow of the application which will restrict the dubious or unintentional execution of the code. Let&#39;s say, for example, we have a string as an input, and the user skipped the input and the string is returned as empty, we don&#39;t want to execute the further program as the execution might depend on the value of the string. 
So, in order to catch these kinds of events, we might use errors and log the results for better transparency.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#xA;&#x9;var s string&#xA;&#x9;n, err := fmt.Scanf(&#34;%s&#34;, &amp;s)&#xA;&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;fmt.Println(err)&#xA;&#x9;&#x9;// panic(err)&#xA;        // OR&#xA;        // return&#xA;&#x9;} else {&#xA;&#x9;&#x9;fmt.Println(n)&#xA;&#x9;&#x9;if s[0] == &#39;a&#39; {&#xA;&#x9;&#x9;&#x9;fmt.Println(s)&#xA;&#x9;&#x9;}&#xA;&#x9;}&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run main.go&#xA;asdf&#xA;1&#xA;asdf&#xA;&#xA;&#xA;$ go run main.go&#xA;&#xA;unexpected newline&#xA;&#xA;&#xA;$ go run main.go&#xA;wsd&#xA;1&#xA;```&#xA;&#xA;In the above example, we have a simple string `s` input, we will input the string using the `Scanf` function that will return an integer as the number of variables it has scanned and error if any. Here, as the function might return two values, we need two variables to call the function. The `n` variable stores the number of variables successfully scanned and the `err` as the err from the function generated. If there is an error, that is the value stored in `err` is not empty, we will log the error. And move into the rest of the program.&#xA;&#xA;This might be looking cool, but it doesn&#39;t break out of the program if there is an error. We want it to log the error as well as exit from the program. We can do that using the panic function.&#xA;&#xA;## Catching Errors&#xA;&#xA;We&#39;ll see a few examples, where we will catch errors in some of the regularly used functions. These error messages can be used in deciding the next procedure to be run.&#xA;&#xA;### Comma OK/Error syntax&#xA;&#xA;We use the [comma ok, error](https://go.dev/doc/effective_go#:~:text=is%20called%20the%20%E2%80%9C-,comma%20ok,-%E2%80%9D%20idiom.%20In%20this) syntax where we want multiple return values from a function. 
It is a shorthand for a function with two return values: we either get the value we are expecting as `ok`, or we get an `error` from the function call.&#xA;&#xA;```go&#xA;&#xA;ok, err := function()&#xA;if err != nil {&#xA;    // handle error&#xA;    panic(err)&#xA;} else {&#xA;    // work with the ok object&#xA;    fmt.Println(ok)&#xA;}&#xA;&#xA;```&#xA;&#xA;In the above code, we have used the comma-ok, error syntax; the function call will return two objects, the expected object and an error object if there were errors in the processing. We handle the error if the error object is not `nil`, i.e. it contains something; else we can do the rest of the required processing of the program.&#xA;&#xA;We can even ignore the `err` or the `ok` object using `_`, i.e. a don&#39;t-care variable. Remember, you can ignore either of the values but not both. It is not recommended to ignore errors, but if you know the obvious thing to process, you might do so sometimes.&#xA;&#xA;A more compressed code might look something like below:&#xA;&#xA;```go&#xA;&#xA;if ok, err := function(); err != nil {&#xA;    // handle error&#xA;    panic(err)&#xA;} else {&#xA;    // work with the ok object&#xA;    fmt.Println(ok)&#xA;}&#xA;```&#xA;&#xA;The above code wraps the initialization of the `ok, err` objects by calling the function inside the if statement and checking for the error.&#xA;&#xA;### Making HTTP requests&#xA;&#xA;Let&#39;s say we have a URL, and we want to check if the site exists or not; we can run the [http.Get](https://pkg.go.dev/net/http@go1.19.1#Client.Get) function from the [net/http](https://pkg.go.dev/net/http) package. We will pass a URL to the function, and this function returns a [Response](https://pkg.go.dev/net/http#Response) type object and an error object if there are any errors generated during the function call. If there is an error while we call the function, we simply log the error and panic out of the program. 
Else we can log the status code and do further processing.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;net/http&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;url := &#34;https://meetgor.com/&#34;&#xA;&#x9;fmt.Println(&#34;URL:&#34;, url)&#xA;&#x9;resp, err := http.Get(url)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;fmt.Println(err)&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;} else {&#xA;&#x9;&#x9;fmt.Println(resp.StatusCode)&#xA;&#x9;}&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run web.go&#xA;URL: https://meetgor.com/&#xA;200&#xA;&#xA;&#xA;&#xA;$ go run web.go&#xA;&#xA;URL: htts://meetgor.com/&#xA;Get &#34;htts://meetgor.com/&#34;: unsupported protocol scheme &#34;htts&#34;&#xA;panic: Get &#34;htts://meetgor.com/&#34;: unsupported protocol scheme &#34;htts&#34;&#xA;&#xA;goroutine 1 [running]:&#xA;main.main()&#xA;        /home/meet/code/100-days-of-golang/scripts/errors/https.go:14 +0x170&#xA;exit status 2&#xA;```&#xA;&#xA;&#xA;This is how we can validate a URL, handling the error if the passed URL is invalid or the site does not exist.&#xA;&#xA;There is one more variation of the above code style; it is a bit compressed and is just a syntactic change.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;net/http&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;url := &#34;https://meetgor.com/&#34;&#xA;&#x9;if resp, err := http.Get(url); err != nil {&#xA;&#x9;&#x9;fmt.Println(err)&#xA;&#x9;&#x9;panic(err)&#xA;&#x9;} else {&#xA;&#x9;&#x9;fmt.Println(resp.StatusCode)&#xA;&#x9;}&#xA;}&#xA;```&#xA;&#xA;This can be used wherever you are using the `ok, err` kind of syntax, but I prefer the cleaner expanded form, so I won&#39;t move ahead with this.&#xA;&#xA;### Opening or Handling of File&#xA;&#xA;We can even use error handling while dealing with files or folders. We can use the [os](https://pkg.go.dev/os) package to read a file in golang. The [Open](https://pkg.go.dev/os#Open) function will open the file if it exists or else it will return an error. 
We can catch the error from the comma `ok,error` syntax and do the required processing in the program.&#xA;&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;os&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;file_name := &#34;hello.txt&#34;&#xA;&#x9;file, err := os.Open(file_name)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;fmt.Println(&#34;Error: &#34;, err)&#xA;&#xA;        // Create a File &#xA;&#xA;&#x9;&#x9;// _, err := os.Create(file_name)&#xA;&#x9;&#x9;// if err != nil {&#xA;&#x9;&#x9;// &#x9;fmt.Println(err)&#xA;&#x9;&#x9;// }&#xA;&#x9;&#x9;// fmt.Println(&#34;Created File&#34;, file_name)&#xA;&#x9;&#x9;// file, _ = os.Open(file_name)&#xA;&#x9;}&#xA;&#x9;fmt.Println(file.Stat())&#xA;}&#xA;```&#xA;&#xA;![File handling Error](https://res.cloudinary.com/techstructive-blog/image/upload/v1663425619/blog-media/golang-file-error.png)&#xA;&#xA;We can use the error as a hint that the file doesn&#39;t exist and create a file and then move toward the actual execution of the program. We can also ignore the file object while creating the file, as we are interested in only knowing that the file is just created without any errors, we use the `_` to ignore the variable in the assignment in the function call.&#xA;&#xA;Inside the commented code, we use the [Create](https://pkg.go.dev/os#Create) function to create a file and check for any errors in the process. We finally create the file and `Open` the newly created file.&#xA;&#xA;## Custom Error&#xA;&#xA;We can create custom error types in golang with the help of interfaces and structs. An Error struct will simply consist of a string `message`, that string will display the error. 
By overriding or creating the `Error` method as an interface for the struct we can construct custom errors.&#xA;&#xA;```go&#xA;type Invalid_URL_Error struct {&#xA;&#x9;message string&#xA;}&#xA;&#xA;func (e Invalid_URL_Error) Error() string {&#xA;&#x9;return &#34;Invalid URL&#34;&#xA;}&#xA;&#xA;```&#xA;&#xA;Here, we have the `Invalid_URL_Error` as the custom struct name and the `Error()` method as an interface that will print the error log. This Error method will be used while raising errors in the program. It might be called from another function while doing the actual processing of the URL while sending a GET request.&#xA;&#xA;Further, we can call this custom error method when we wish, by using the package functions, we can override the function call with the custom method.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;    &#34;fmt&#34;&#xA;    &#34;net/http&#34;&#xA;)&#xA;&#xA;type Invalid_URL_Error struct {&#xA;&#x9;message string&#xA;}&#xA;&#xA;func (e Invalid_URL_Error) Error() string {&#xA;&#x9;return &#34;Invalid URL&#34;&#xA;}&#xA;&#xA;func main() {&#xA;&#x9;url := &#34;htt://meetgor.com&#34;&#xA;&#x9;response, err := http.Get(url)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;fmt.Println(Invalid_URL_Error{})&#xA;&#x9;&#x9;fmt.Println(err)&#xA;&#x9;} else {&#xA;&#x9;&#x9;fmt.Println(response)&#xA;&#x9;&#x9;defer response.Body.Close()&#xA;&#x9;}&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run custom_error.go&#xA;Invalid URL &#xA;Get &#34;htt://meetgor.com&#34;: unsupported protocol scheme &#34;htt&#34;&#xA;&#xA;```&#xA;&#xA;In the above code, we are basically calling the function `http.Get` that will return a `Response` or an `err` object. We can even call the custom error method with an empty `Invalid_URL_Error` object, this will call the function `Error` from that interface. 
The function will print the custom error message, and thereby we are able to log the custom error message for the invalid URL example.&#xA;&#xA;Also, we can pass the default error message to the custom error type, which will give us additional information inside the error interface method.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;net/http&#34;&#xA;)&#xA;&#xA;type Invalid_URL_Error struct {&#xA;&#x9;message string&#xA;}&#xA;&#xA;func (e Invalid_URL_Error) Error() string {&#xA;&#x9;return &#34;Invalid URL : &#34; + e.message&#xA;}&#xA;&#xA;func main() {&#xA;&#x9;url := &#34;htt://meetgor.com&#34;&#xA;&#x9;response, err := http.Get(url)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;fmt.Println(Invalid_URL_Error{err.Error()})&#xA;&#x9;} else {&#xA;&#x9;&#x9;fmt.Println(response)&#xA;&#x9;&#x9;defer response.Body.Close()&#xA;&#x9;}&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run custom_error.go&#xA;Invalid URL : Get &#34;htt://meetgor.com&#34;: unsupported protocol scheme &#34;htt&#34;&#xA;&#xA;```&#xA;When the URL is invalid, we call the custom error type, passing it the result of the default `err.Error()` method. This gets the error message from the default error object into our custom type. That is how we are able to fetch additional information about the error inside the interface method with the `.` operator as `e.message`. So, the syntax is `Invalid_URL_Error{err.Error()}`, i.e. an object of type `Invalid_URL_Error` with the message set as the value returned from the default `Error()` function. Hence we can implement the custom error message.&#xA;&#xA;We also need to check the response object and close the response `Body`, as doing so is mandatory and is the responsibility of the caller.&#xA;&#xA;### Creating a function that returns two values (ok, error)&#xA;&#xA;We can even nest the calling of this error method inside another function. This will give us a good overview of how to deal with errors more thoroughly. 
We will construct a function with two return values one can be any normal desirable object (which we want from the function call) and the other as an error. This will check for any cases that we can call the custom error and return that error interface and the object which was to be returned will be nil if there is an error. If there are no errors, we will return the object and set the error as nil. This way, we can use the `ok, error` syntax while calling this function and thereby make it a lot easier in case of complex programs or multiple types of errors.&#xA;&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;net/http&#34;&#xA;)&#xA;&#xA;type Invalid_URL_Error struct {&#xA;&#x9;message string&#xA;}&#xA;&#xA;func (e Invalid_URL_Error) Error() string {&#xA;&#x9;return &#34;Invalid URL&#34;&#xA;}&#xA;&#xA;func get_resp(url_link string) (http.Response, error) {&#xA;&#xA;&#x9;resp, err := http.Get(url_link)&#xA;&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;return http.Response{}, &amp;Invalid_URL_Error{}&#xA;&#x9;} else {&#xA;&#x9;&#x9;defer resp.Body.Close()&#xA;&#x9;&#x9;return *resp, nil&#xA;&#x9;}&#xA;&#xA;}&#xA;&#xA;func main() {&#xA;&#xA;&#x9;url := &#34;htts://meetgor.com&#34;&#xA;&#x9;resp, err := get_resp(url)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;fmt.Println(err)&#xA;&#x9;} else {&#xA;&#x9;&#x9;fmt.Println(resp)&#xA;&#x9;}&#xA;&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run errors.go&#xA;Invalid URL&#xA;```&#xA;&#xA;This is the simple function `get_resp` which will either fetch the `Response` or `error`. We call the `http.Get` method internally in the function and if something is invalid, it will return an empty Response but the error will be an `Invalid_URL_Error` object which is the custom error class. This means, that if we have an error, we will return a string object from the `Error` method in the interface and if there is no error, we will return the `Response` object and the error will be set as `nil`. 
Hence, you can now realize, why we check for `err != nil`, it is used for checking if an error object has returned a string or not. As said earlier, we also need to close the request Body.&#xA;&#xA;Further, we can pass the default error method to the custom error class as `Invalid_URL_Error{err.Error()}`. This will ensure, we get additional context from the custom error interface.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;net/http&#34;&#xA;)&#xA;&#xA;type Invalid_URL_Error struct {&#xA;&#x9;message string&#xA;}&#xA;&#xA;func (e Invalid_URL_Error) Error() string {&#xA;&#x9;return &#34;Invalid URL : &#34; + e.message&#xA;}&#xA;&#xA;func get_resp(url_link string) (http.Response, error) {&#xA;&#xA;&#x9;resp, err := http.Get(url_link)&#xA;&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;return http.Response{}, &amp;Invalid_URL_Error{err.Error()}&#xA;&#x9;} else {&#xA;&#x9;&#x9;defer resp.Body.Close()&#xA;&#x9;&#x9;return *resp, nil&#xA;&#x9;}&#xA;&#xA;}&#xA;&#xA;func main() {&#xA;&#xA;&#x9;url := &#34;htts://meetgor.com&#34;&#xA;&#x9;resp, err := get_resp(url)&#xA;&#x9;if err != nil {&#xA;&#x9;&#x9;fmt.Println(err)&#xA;&#x9;} else {&#xA;&#x9;&#x9;fmt.Println(resp)&#xA;&#x9;}&#xA;&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run custom_error&#xA;Invalid URL : Get &#34;htts://meetgor.com&#34;: unsupported protocol scheme &#34;htts&#34;&#xA;```&#xA;&#xA;That&#39;s it from this part. Reference for all the code examples and commands can be found in the [100 days of Golang](https://github.com/mr-destructive/100-days-of-golang/tree/main/scripts/error-handling/main.go) GitHub repository.&#xA;&#xA;## Conclusion&#xA;&#xA;From this article, we were able to understand the basics of error handling in golang. We can now work with handling errors in function calls and create custom error interfaces. Thank you for reading, if you have any queries, feedback, or questions, you can freely ask me on my social handles. Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>Golang: Interfaces</title>
      <link>https://www.meetgor.com/posts/golang-interfaces</link>
      <description>Introduction In the 19th post of the series, we will be taking a look into interfaces in golang. Interfaces allow us to create function signatures common to dif</description>
      <pubDate>Sat, 10 Sep 2022 00:00:00 UTC</pubDate>
      <content>## Introduction&#xA;&#xA;In the 19th post of the series, we will be taking a look into interfaces in golang. Interfaces allow us to create function signatures common to different structs or types. So, we can allow multiple structs to have a common interface(method) that can have different implementations.&#xA;&#xA;## What are Interfaces&#xA;&#xA;Interface as the name suggests is a way to create methods that are common to different structures or types but can have different implementations. It&#39;s an interface to define the method or function signatures but not the implementation. Let&#39;s take an example of `Laptop` and `Phone` having the functionality of wifi. We can connect to wifi more or the less in a similar way on both devices. The implementation behind the functionality might be different but they share the same operation. The WiFi can act as an interface for both devices to connect to the internet.&#xA;&#xA;## Define an Interface&#xA;&#xA;To declare an interface in golang, we can use the `interface` keyword in golang. An interface is created with the type keyword, providing the name of the interface and defining the function declaration. Inside the interface which acts as a struct of general method signatures. 
The method signatures usually consist of the name of the function with its parameters, if any, and the return type of the function.&#xA;&#xA;```go&#xA;&#xA;package main&#xA;&#xA;import &#34;fmt&#34;&#xA;&#xA;&#xA;type Player struct {&#xA;&#x9;name   string&#xA;&#x9;health int&#xA;}&#xA;&#xA;type Mob struct {&#xA;&#x9;name     string&#xA;&#x9;health   int&#xA;&#x9;is_passive bool&#xA;}&#xA;&#xA;type Creature interface {&#xA;&#x9;intro() string&#xA;&#x9;//attack() int&#xA;&#x9;//heal() int&#xA;}&#xA;&#xA;func main() {&#xA;&#x9;player := Player{name: &#34;Steve&#34;}&#xA;&#x9;mob := Mob{name: &#34;Zombie&#34;}&#xA;&#x9;fmt.Println(player)&#xA;&#x9;fmt.Println(mob)&#xA;}&#xA;```&#xA;&#xA;```&#xA;go run main.go&#xA;&#xA;{Steve 0}&#xA;{Zombie 0 false}&#xA;```&#xA;&#xA;In the above example, we have created an interface called `Creature`. There are a few structs that we have defined, like `Player` and `Mob`; these two structs have a few attributes like `name` as `string` and `health` as `int` which are common in both structs, but the `Mob` struct has an additional attribute `is_passive` as a `boolean` value. The `Creature` is an interface that declares certain function signatures; here we have declared `intro`, `attack`, and `heal` as the methods bound to the Creature interface. This means that any struct which defines all the methods declared in the `Creature` interface satisfies that interface.&#xA;&#xA;## Defining Interfaces&#xA;&#xA;Once we have declared the interface method signatures, we can move into defining the functionality of these methods depending on the struct. If we want a different working of a method for different types of struct objects, we can define the method for each type of struct that we want. 
Here, we have two types of structs, namely `Player` and `Mob`; based on the struct, we can define the `intro` method for them individually.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import &#34;fmt&#34;&#xA;&#xA;type Creature interface {&#xA;&#x9;intro() string&#xA;&#x9;attack(*int) int&#xA;}&#xA;&#xA;type Player struct {&#xA;&#x9;name   string&#xA;&#x9;health int&#xA;}&#xA;&#xA;type Mob struct {&#xA;&#x9;name     string&#xA;&#x9;health   int&#xA;&#x9;category bool&#xA;}&#xA;&#xA;func (p Player) intro() string {&#xA;&#x9;fmt.Println(&#34;Player has spawned&#34;)&#xA;&#x9;return p.name&#xA;}&#xA;&#xA;func (p Player) attack(m_health *int) int {&#xA;&#x9;fmt.Println(&#34;Player has attacked!&#34;)&#xA;&#x9;*m_health = *m_health - 50&#xA;&#x9;return *m_health&#xA;}&#xA;&#xA;func (m Mob) intro() string {&#xA;&#x9;fmt.Printf(&#34;A wild %s has appeared!\n&#34;, m.name)&#xA;&#x9;return m.name&#xA;}&#xA;func (m Mob) attack(p_health *int) int {&#xA;&#x9;fmt.Printf(&#34;%s has attacked you! -%d\n&#34;, m.name, 30)&#xA;&#x9;*p_health = *p_health - 30&#xA;&#x9;return *p_health&#xA;}&#xA;&#xA;func main() {&#xA;&#x9;player := Player{name: &#34;Steve&#34;, health: 100}&#xA;&#x9;mob := Mob{name: &#34;Zombie&#34;, health: 140}&#xA;&#x9;fmt.Println(player.intro())&#xA;&#x9;fmt.Println(mob.intro())&#xA;&#x9;fmt.Println(mob)&#xA;&#x9;fmt.Println(player)&#xA;&#x9;fmt.Println(player.attack(&amp;mob.health))&#xA;&#x9;fmt.Println(mob.attack(&amp;player.health))&#xA;&#x9;fmt.Println(mob)&#xA;&#x9;fmt.Println(player)&#xA;}&#xA;```&#xA;&#xA;```&#xA;go run main.go&#xA;&#xA;Player has spawned&#xA;Steve&#xA;A wild Zombie has appeared!&#xA;Zombie&#xA;{Zombie 140 false}&#xA;{Steve 100}&#xA;Player has attacked!&#xA;90&#xA;Zombie has attacked you! -30&#xA;70&#xA;{Zombie 90 false}&#xA;{Steve 70}&#xA;```&#xA;&#xA;As we can see, the method `intro()` is bound to both structs, and the implementation that runs depends on the struct the method is called on. 
The method `intro` operates on the struct object it is called on and returns a `string` as defined in the method signature. &#xA;&#xA;The `attack` method in the `Creature` interface is also implemented separately for the two structs. For the `Player` struct, we simply take in a pointer to an integer and return an `int`. The parameter is the pointer to the mob health, and it returns the modified health. We take in a pointer to the mob or player health so as to pass in the actual value and not a copy of the value. If we modify the value, we want to reflect those changes in the actual object. So that is how we can use interfaces to construct dynamic operations on objects as well as different types of structs.&#xA;&#xA;We have seen a simple example of how to declare and define interfaces for given struct types. Also, we can pass by value as well as by pointer so as to define the behavior of the method, i.e. whether it can dynamically modify the values of the object associated with it.&#xA;&#xA;## Examples of Interfaces&#xA;&#xA;There are quite some use cases of interfaces; in object-oriented programming, the above example fits the polymorphism feature quite well. The ability to reuse certain method signatures and define the functions as per requirement brings flexibility to the code structure.&#xA;&#xA;We will see a few examples for understanding how we can use interfaces in various ways.&#xA;&#xA;### Type Switch Interface&#xA;&#xA;We can use an empty interface to check the type of the variable we have passed. Using this empty interface we can create a kind of dynamic parameter to a function.
&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;strconv&#34;&#xA;)&#xA;&#xA;func parse_int(n interface{}) int {&#xA;&#x9;switch n.(type) {&#xA;&#x9;case int:&#xA;&#x9;&#x9;return (n).(int) * (n).(int)&#xA;&#x9;case string:&#xA;&#x9;&#x9;s, _ := strconv.Atoi(n.(string))&#xA;&#x9;&#x9;return s&#xA;&#x9;case float64:&#xA;&#x9;&#x9;return int(n.(float64))&#xA;&#x9;default:&#xA;&#x9;&#x9;return n.(int)&#xA;&#x9;}&#xA;}&#xA;&#xA;func main() {&#xA;&#x9;num := parse_int(4)&#xA;&#x9;fmt.Println(num)&#xA;&#x9;num = parse_int(&#34;4&#34;)&#xA;&#x9;fmt.Println(num)&#xA;&#x9;num = parse_int(4.1243)&#xA;&#x9;fmt.Println(num)&#xA;}&#xA;&#xA;```&#xA;&#xA;```&#xA;go run main.go&#xA;&#xA;16&#xA;4&#xA;4&#xA;```&#xA;&#xA;Here, we have an empty interface as the parameter to the function `parse_int`, and the return type is `int`, so the incoming argument can be of any valid type. But if we don&#39;t convert the given value into an appropriate int, it will result in an error, as the function must return an int value. We are taking the parameter as `interface{}`, which is an empty interface; this will hold the argument passed in as an interface value. That&#39;s why we need to convert (type-assert) the interface object into an int or the underlying type of the value.&#xA;&#xA;### Interface Slice&#xA;&#xA;We can even create a slice of interfaces, which means we can initialize or group together various objects of different structs in a single slice. 
This might be helpful for calling functions associated with different objects via the interface very easily and in a much cleaner way.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import &#34;fmt&#34;&#xA;&#xA;type Creature interface {&#xA;&#x9;intro() string&#xA;}&#xA;&#xA;type Player struct {&#xA;&#x9;name   string&#xA;&#x9;health int&#xA;}&#xA;&#xA;type Mob struct {&#xA;&#x9;name     string&#xA;&#x9;health   int&#xA;&#x9;category bool&#xA;}&#xA;&#xA;func (p Player) intro() string {&#xA;&#x9;fmt.Println(&#34;Player has spawned&#34;)&#xA;&#x9;return p.name&#xA;}&#xA;&#xA;func (m Mob) intro() string {&#xA;&#x9;var name string&#xA;&#x9;if m.name != &#34;&#34; {&#xA;&#x9;&#x9;name = m.name&#xA;&#x9;} else {&#xA;&#x9;&#x9;name = &#34;Mob&#34;&#xA;&#x9;}&#xA;&#x9;fmt.Printf(&#34;A wild %s has appeared!\n&#34;, name)&#xA;&#x9;return m.name&#xA;}&#xA;&#xA;&#xA;func main() {&#xA;&#x9;entity := []Creature{Player{}, Mob{}, Mob{}, Player{}}&#xA;&#xA;&#x9;for _, obj := range entity {&#xA;&#x9;&#x9;fmt.Println(obj.intro())&#xA;&#x9;}&#xA;}&#xA;```&#xA;&#xA;```&#xA;Player has spawned&#xA;&#xA;A wild Mob has appeared!&#xA;&#xA;A wild Mob has appeared!&#xA;&#xA;Player has spawned&#xA;&#xA;```&#xA;&#xA;In the above example, we can see that the entity variable is created as a slice of the `Creature` interface, i.e. various objects satisfying the `Creature` interface can be contained in a single slice. There are 2 instances each of `Player` and `Mob` in the slice. We can further iterate over the slice with a range-based loop, and thereby the functions associated with the interface can be called. Here, we have called the `intro` function. Note that since the objects are created with empty names, the `Mob` objects fall back to the default name `Mob`, and each `intro` call also returns the (empty) name, which `Println` prints as a blank line.&#xA;&#xA;So, there are a lot of things that can be done with interfaces; we can create multiple interfaces for a single struct and nest interfaces. Based on the use case of the program, interfaces can be used to reduce the boilerplate code as well as improve the readability of the code.&#xA;&#xA;That&#39;s it from this part. 
Reference for all the code examples and commands can be found in the [100 days of Golang](https://github.com/mr-destructive/100-days-of-golang/tree/main/scripts/interfaces/main.go) GitHub repository.&#xA;&#xA;## Conclusion&#xA;&#xA;From this part of the series, we were able to understand the basics of interfaces using a few examples. We explored how interfaces can be used to bring in polymorphism in golang, also we can improve the readability of the code. The boilerplate code can be considerably reduced by using interfaces when dealing with structs and types. Hopefully, you found this post helpful and understood even the basics of interfaces in golang. Thank you for reading, if you have any queries, questions, or feedback, you can ping me on my social handles or in the comments. Happy Coding :)&#xA;</content>
      <type></type>
    </item>
    <item>
      <title>Golang: Closures</title>
      <link>https://www.meetgor.com/posts/golang-closures</link>
      <description>Introduction In the previous part of the series, we covered and in this section, we will look into which are quite a cool concept for various things. Closures a</description>
      <pubDate>Thu, 11 Aug 2022 00:00:00 UTC</pubDate>
      <content>## Introduction&#xA;&#xA;In the previous part of the series, we covered [anonymous functions](https://meetgor.com/golang-anonymous-functions/) and in this section, we will look into `closures` which are quite a cool concept for various things. Closures are basically a function that returns a function instead of a value, so basically we will leverage anonymous functions for creating closures.&#xA;&#xA;## Simple Closures&#xA;&#xA;A simple closure can be constructed for understanding how we can use closures in golang. We will return a function from a function, that is a simple closure. So, in the below code example, we have created a function `gophy()` which takes no parameters but returns a function that returns a `string`.  The function simply returns an anonymous function that returns a string.&#xA;&#xA;We will initialize the variable `g` that is assigned to the function `gophy` which will simply return a function call. We are not calling the function simply returning the call to the function `gophy` that has the return value as the anonymous function. We will simply have the function in the variable `g` rather than the simple value string. So we will have to call the `g` variable for actually returning the string.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import &#34;fmt&#34;&#xA;&#xA;func gophy() func() string{&#xA;  return func() string{&#xA;    return &#34;Hello, Gophers!&#34;&#xA;  }&#xA;}&#xA;&#xA;func main() {&#xA;&#xA;&#x9;// using clousure/anonymous function to return a value&#xA;&#x9;// that value can be assigned to the variable&#xA;&#x9;g := gophy()&#xA;&#x9;fmt.Println(g())&#xA;}&#xA;&#xA;```&#xA;&#xA;```&#xA;$ go run simple.go&#xA;Hello, Gophers!&#xA;&#xA;```&#xA;&#xA;So, that is how we can call the function `g` that will return a string, so we have the function body stored in the variable `g`. 
We can call it as many times as we want.&#xA;&#xA;## Variable Scope in Closures&#xA;&#xA;A closure can also retain variables in its scope across calls. Let’s say we have a function that increments a counter and we want the counter to persist throughout the program; normally we might have to use a global variable to maintain that state, but with closures, the value is retained once we have initialized the function.&#xA;&#xA;In the below example, we are creating the function `incrementer`, a closure whose inner function has `int` as the return type. We initialize the variable `counter` that will be acting as the counter in the program, and the function returns an anonymous function that will increment the counter and return it.&#xA;&#xA;Here, when we create an instance of the `incrementer` function, it initializes the `counter` to `0` and returns the anonymous function. Now, `c` will act as a function that has the counter variable bound to it, and we can call `c` which will, in turn, call the anonymous function keeping the scope of the `counter` variable.  
So, each time we call the function `c` it will increment the counter, and thus we keep the counter inside the scope of the function `incrementer` through the `c` variable.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import &#34;fmt&#34;&#xA;&#xA;func incrementer() func() int {&#xA;  counter := 0&#xA;  return func() int {&#xA;    counter += 1&#xA;    return counter&#xA;  }&#xA;}&#xA;&#xA;func main() {&#xA;&#xA;  c := incrementer()&#xA;&#xA;  fmt.Println(c())&#xA;  fmt.Println(c())&#xA;  fmt.Println(c())&#xA;  fmt.Println(c())&#xA;  fmt.Println(c())&#xA;&#xA;}&#xA;&#xA;```&#xA;&#xA;```&#xA;$ go run simple.go&#xA;1&#xA;2&#xA;3&#xA;4&#xA;5&#xA;&#xA;```&#xA;&#xA;If we want to extend the functionality, we can even assign the function call `c()` to a variable and access the returned value, which will be the current state of the counter.&#xA;&#xA;We can even create multiple closures from the same function, each tied to its own scope; that is, we can bind separate data to different instances of a closure.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import &#34;fmt&#34;&#xA;&#xA;func incrementer() func() int {&#xA;  counter := 0&#xA;  return func() int {&#xA;    counter += 1&#xA;    return counter&#xA;  }&#xA;}&#xA;&#xA;func main() {&#xA;&#xA;  c1 := incrementer()&#xA;&#xA;  fmt.Println(c1())&#xA;  fmt.Println(c1())&#xA;  fmt.Println(c1())&#xA;&#xA;  c2 := incrementer()&#xA;&#xA;  fmt.Println(c2())&#xA;  fmt.Println(c2())&#xA;  fmt.Println(c2())&#xA;  fmt.Println(c2())&#xA;&#xA;}&#xA;&#xA;```&#xA;&#xA;```&#xA;$ go run simple.go&#xA;1&#xA;2&#xA;3&#xA;1&#xA;2&#xA;3&#xA;4&#xA;&#xA;```&#xA;&#xA;Here we have `c1` and `c2` forming different closures, and thereby we can have different scopes for the variables associated with each. 
The variable is bound to the instance in which it was initialized, so we can see the different closure instances holding different values.&#xA;&#xA;## Factorial of a Number with Closures&#xA;&#xA;We can create some interesting programs with closures; here, we will implement the calculation of a factorial with closures in Golang.&#xA;&#xA;This will be a `factorial` function that returns an anonymous function with the return type as `int`. The function will initialize the variable `fact`, which will store the actual factorial value, and `n` as the initial number for calculating the factorial.&#xA;&#xA;Inside the anonymous function, we will calculate the factorial, increment the number, and simply return the factorial value from the function. The `fact` variable will contain the factorial of the number `n`, so here we can leverage closures as we will maintain the state of the variables `fact` and `n` from the previous calls or the initialization of the function.&#xA;&#xA;Inside the `main` function, we have created the `f` variable by calling the `factorial` function; this will initialize `fact` and `n` and return the anonymous function. 
Now we can call `f` as many times as we want; each call will return the factorial of the next number.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import &#34;fmt&#34;&#xA;&#xA;func factorial() func() int {&#xA;&#x9;fact, n := 1, 1&#xA;&#x9;return func() int {&#xA;&#x9;&#x9;fact = fact * n&#xA;&#x9;&#x9;n += 1&#xA;&#x9;&#x9;return fact&#xA;&#x9;}&#xA;}&#xA;&#xA;func main() {&#xA;&#xA;  f := factorial()&#xA;  fmt.Println(f())&#xA;  fmt.Println(f())&#xA;  fmt.Println(f())&#xA;  fmt.Println(f())&#xA;  fmt.Println(f())&#xA;&#xA;}&#xA;&#xA;```&#xA;&#xA;```&#xA;$ go run simple.go&#xA;1&#xA;2&#xA;6&#xA;24&#xA;120&#xA;&#xA;```&#xA;&#xA;So, we can see that the factorial is printed for each call and the number is incremented at each call.&#xA;&#xA;So that&#39;s the basics of closures in Golang; we can use closures to keep state encapsulated across different function calls. We can bind data with closures: with the help of anonymous functions, a closure can be constructed and data can be bound to a particular function call&#39;s scope.&#xA;&#xA;That&#39;s it from this part. Reference for all the code examples and commands can be found in the [100 days of Golang](https://github.com/mr-destructive/100-days-of-golang/tree/main/scripts/closures/main.go) GitHub repository.&#xA;&#xA;## Conclusion&#xA;&#xA;From this post, we could understand the fundamentals of closures in Golang with a few examples. Thank you for reading; if you have any queries or feedback, please leave them in the comments or on my social handles. Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>PGCLI: Postgres from the terminal</title>
      <link>https://www.meetgor.com/posts/pgcli-pg-from-terminal</link>
      <description>Introduction Have you ever used the Postgres database and did you know you don&#39;t have to launch PGAdmin every time you want to write SQL queries, you can write</description>
      <pubDate>Sun, 07 Aug 2022 00:00:00 UTC</pubDate>
      <content>## Introduction&#xA;&#xA;Have you ever used a Postgres database? Did you know you don&#39;t have to launch PGAdmin every time you want to write SQL queries? You can write those queries right inside a terminal with a Python package: PGCLI, which can act as an editor window for writing SQL queries for Postgres databases. We can simply execute SQL queries in a PSQL-like shell with additional features like autocompletion, text-editing modes, file input/output options, etc.&#xA;&#xA;If you like CLIs and love to play with backend systems like Postgres, then chances are you will love this article. This article will cover the basics of using the PGCLI tool to perform simple as well as advanced operations when interacting with your Postgres database.&#xA;&#xA;## PGCLI &#xA;&#xA;PGCLI is a Python package that acts as a CLI for executing SQL queries in a Postgres database. PGCLI allows us to interact with any Postgres database via the command line; it can be a remote database or a local one, and you can access it right away from your terminal. We simply provide the options/parameters to plug the database in and access its schema and related data.&#xA;&#xA;## Installation&#xA;&#xA;There are a lot of ways pgcli can be installed; my preferred way is with `pipx`, as it just works well and you don&#39;t have to mess up your global Python packages environment. &#xA;&#xA;```&#xA;pipx install pgcli&#xA;```&#xA;&#xA;![PGCLI Install with pipx](https://res.cloudinary.com/techstructive-blog/image/upload/v1659878333/blog-media/pgcli-install-pipx.png)&#xA;&#xA;With pipx we don&#39;t even need a permanent install, as `pipx run` will each time create a fresh instance of pgcli in a virtual environment.&#xA;&#xA;We can now run `pgcli` with parameters using the `pipx run` command. 
In the below command, we have used the `--help` option to list out all the commands and other options available with the `pgcli` command.&#xA;&#xA;```&#xA;pipx run pgcli --help&#xA;```&#xA;&#xA;![PGCLI Help](https://res.cloudinary.com/techstructive-blog/image/upload/v1659878322/blog-media/pgcli-help.png)&#xA;&#xA;&#xA;If this command is working fine, it means we are ready to connect and interact with a database with pgcli.&#xA;&#xA;## List local databases&#xA;&#xA;We can list all the databases in our local system by providing the hostname and the username; in my case, I have to provide `localhost` and `postgres` as the hostname and the username respectively. Further, by providing the `--list` option, we can list all the available databases on our system, basically for a given host and user.&#xA;&#xA;```&#xA;pipx run pgcli --host localhost --user postgres --list&#xA;&#xA;OR&#xA;&#xA;pipx run pgcli -h localhost -U postgres -l&#xA;```&#xA;&#xA;![PGCLI List all databases](https://res.cloudinary.com/techstructive-blog/image/upload/v1659878596/blog-media/pgcli-list-db.png)&#xA;&#xA;As we can see, the command lists out all the available databases on the local server. This can help in getting an overview of the Postgres databases present in your local system. You need to enter the password for the username to access the database.&#xA;&#xA;## Connect to a database&#xA;&#xA;To connect to an individual Postgres database we have two options: we can either use a Postgres connection URL string or pass parameters to the command individually. It&#39;s easier to use the URL option because it wraps a lot of things in a single string rather than writing each parameter.&#xA;&#xA;### Connect with the Postgres Connection URI&#xA;&#xA;We can simply pass the postgres connection URI for accessing a postgres database with pgcli. The URI has a specific pattern in which we need to specify values like the hostname, password, username, port, and database name. 
&#xA;&#xA;```&#xA;postgresql://username:password@hostname:port/database_name&#xA;```&#xA;&#xA;The above is the structure for a postgres connection URI: we pass in the `username`, the `password` for that `username`, the `hostname` with the `port`, and the `database name`. If these parameters are passed correctly, we can directly access the database.&#xA;&#xA;```&#xA;pipx run pgcli postgresql://postgres:postgres@localhost:5432/techstructive_blog&#xA;```&#xA;&#xA;![PGCLI Postgres Connection URI](https://res.cloudinary.com/techstructive-blog/image/upload/v1659878955/blog-media/pgcli-connect-uri.png)&#xA;&#xA;This command will leave us in a prompt where we are basically in a PSQL shell in the provided database or host. For connecting to a local database, the hostname will likely be `localhost` and the Postgres port is generally `5432`, though there might be exceptions to this. We can quit out of PGCLI with `CTRL + D` or `exit`.&#xA;&#xA;While using a remote database, the hostname and other details might be provided elsewhere. For instance, in a Django application we will have the `database_name`, `port`, and other details mentioned in the `DATABASES` settings; those fields can be checked and a postgres connection URI can be constructed.&#xA;&#xA;Let&#39;s take an example.&#xA;&#xA;```python&#xA;DATABASES = {&#xA;    &#34;default&#34;: {&#xA;        &#34;ENGINE&#34;: &#34;django.db.backends.postgresql&#34;,&#xA;        &#34;NAME&#34;: &#34;blog&#34;,&#xA;        &#34;USER&#34;: &#34;meet&#34;,&#xA;        &#34;PASSWORD&#34;: &#34;postgres&#34;,&#xA;        &#34;HOST&#34;: &#34;localhost&#34;,&#xA;    }&#xA;}&#xA;```&#xA;&#xA;We have all the fields provided in the Django project&#39;s settings file. 
We can use this to construct a postgres database connection URI.&#xA;&#xA;```&#xA;pgcli postgresql://meet:postgres@localhost:5432/blog&#xA;```&#xA;&#xA;So, that is how you can extract parameters from other technologies and frameworks for working with postgres URIs.&#xA;&#xA;### Connect with CLI options/parameters&#xA;&#xA;We can pass the options like `hostname`, `password`, and `database name` manually with the CLI options that PGCLI provides. &#xA;&#xA;```&#xA;pipx run pgcli --username postgres --host localhost --port 5432 --dbname techstructive_blog&#xA;&#xA;OR&#xA;&#xA;pipx run pgcli -u postgres -h localhost -p 5432 -d techstructive_blog&#xA;```&#xA;&#xA;![PGCLI Connect Parameters](https://res.cloudinary.com/techstructive-blog/image/upload/v1659879642/blog-media/pgcli-connect-paramters.png)&#xA;&#xA;&#xA;You will be prompted for the database password; after entering it, you will be inside the psql shell. In there, we can execute SQL queries and other specific psql or pgcli commands.&#xA;&#xA;Once we have connected to a database, we can now try to work with SQL queries and explore some of the commands and features provided by pgcli.&#xA;&#xA;## List out all databases if connected to a server&#xA;&#xA;Let&#39;s say we don&#39;t know the database name and are just connected to a database server; we can get the list of the databases inside the shell with the `\l+` command. 
This command is similar to the `--list` parameter in the PGCLI command, but the only difference is that we are executing it from the psql shell rather than our terminal shell.&#xA;&#xA;```&#xA;# Let&#39;s say we connected to a database server&#xA;&#xA;pipx run pgcli --username postgres --host localhost --port 5432&#xA;&#xA;&#xA;# List out all databases in that server&#xA;&#xA;\l+&#xA;```&#xA;&#xA;![PGCLI List databases in a server](https://res.cloudinary.com/techstructive-blog/image/upload/v1659879988/blog-media/pgcli-list-databases.png)&#xA;&#xA;&#xA;So, that&#39;s how we would get the gist of all the databases inside the database server to which we are connected using the `\l+` command.&#xA;&#xA;&#xA;## List all tables in the database&#xA;&#xA;We can get a list of all the tables existing in the database we are logged in with the `\dt` command, we can simply enter the command in the prompt we are directed to once we are logged in to the postgres database.&#xA;&#xA;```&#xA;\dt&#xA;```&#xA;&#xA;![PGCLI List tables in a database](https://res.cloudinary.com/techstructive-blog/image/upload/v1659880232/blog-media/pgcli-list-tables.png)&#xA;&#xA;We can use `\d &lt;table_name&gt;` to describe the details of the table provided. We get back the list of the attributes inside the table and the indexes of the relationships of the table as well as the referenced table details.&#xA;&#xA;![PGCLI describe a table in a database](https://res.cloudinary.com/techstructive-blog/image/upload/v1659880453/blog-media/pgcli-describe-table.png)&#xA;&#xA;We also have other commands for displaying types, schemas, roles, views, etc. with the `\d` prefix, a detailed list can be found in the [PGCLI documentation](https://www.pgcli.com/commands).&#xA;&#xA;## Executing queries&#xA;&#xA;We can execute SQL queries in the PGCLI prompt as normally we do in a PSQL shell. There are added benefits like certain modes of editing text and auto-completion. 
We&#39;ll execute a few queries and also try out other options for editing and saving queries.&#xA;&#xA;Normally, for simple and easy tasks, we can write SQL queries right in the PSQL shell, as it is quick to construct a small query there. Let&#39;s take the simplest example: getting all the attributes from a specific table in a database.&#xA;&#xA;```SQL&#xA;SELECT * FROM table_name&#xA;```&#xA;&#xA;![PGCLI executing SQL queries](https://res.cloudinary.com/techstructive-blog/image/upload/v1659880794/blog-media/pgcli-sql-query.png)&#xA;&#xA;We get an excellent table display of the output which we can scroll through with the arrow keys. We can even use `j` and `k` for moving vertically in an output table.&#xA;&#xA;### Writing modes in PGCLI&#xA;&#xA;We can use two modes in the PGCLI prompt: one is the Emacs mode, which is the default, and the other is Vi-mode for Vim fanboys. We can switch between these modes with the `F4` key. The `Emacs-mode` is just a normal typing experience without any modes or shortcut macros. It&#39;s nice that PGCLI offers a VI-mode; it is great for Vim users editing a query in the shell.&#xA;&#xA;![PGCLI VI-mode](https://res.cloudinary.com/techstructive-blog/image/upload/v1659882586/blog-media/pgcli-vi-mode-demo.gif)&#xA;&#xA;We can use the `hjkl` keybindings for movement and other normal mode keymaps like `w` for moving a word forward, `b` for moving a word backward, `e` to the end of the word, etc. 
We can use `i` or `a` for entering insert mode, `dd` to delete the entire line, `cc` or `S` to delete the current line and enter insert mode, `yy` to yank a line, `p` to paste, `u` for undo, etc.&#xA;&#xA;We can even execute shell commands by using `!` in normal mode, where we will be prompted for the command to be executed in the same window.&#xA;&#xA;### Autocompletion in PGCLI shell&#xA;&#xA;The PGCLI shell has an auto-completion feature that will provide suggestions based on the recent queries or commands executed or the default options. You can toggle the autocompletion with the `F2` key; it is officially called `Smart-Completion`, as you would see in the bottom left corner of the shell.&#xA;&#xA;### Select an Editor to write queries&#xA;&#xA;We can use different editors for writing SQL queries. We can write a query from an editor by using the `\e` command. The default editor is chosen from one of your `$EDITOR` or `$VISUAL` environment variables. You can set them manually in the bashrc file by exporting the value of `VISUAL` or `EDITOR` as the name of your editor.&#xA;&#xA;```bash&#xA;export VISUAL=vim&#xA;&#xA;OR&#xA;&#xA;export EDITOR=vim&#xA;&#xA;OR&#xA;&#xA;export VISUAL=gedit&#xA;```&#xA;&#xA;Here, we can set the `VISUAL` environment variable as the name of the editor or the path to it. The `VISUAL` environment variable is preferred first. For the details, we can check out the [editor command documentation](https://www.pgcli.com/editor).&#xA;&#xA;Once we have configured the editor, we will be dropped inside it after we enter the `\e` command. We can write queries inside the editor and quit when we are done; once the editor is closed, the text is read from it and entered in the psql shell, which then waits for the Enter key to execute the query. &#xA;&#xA;We can even execute more than one query at a time if we use a `;` at the end of each query. 
Also, we can press `v` in VI-normal mode, to open the current query in the default editor.&#xA;&#xA;### Save the query output to a file&#xA;&#xA;Let&#39;s say we have written a query inside the psql shell and we want the output to be saved in a file. We can achieve that with the `\o` command, we need to specify the filename in which the output of the queries will be saved. &#xA;&#xA;**NOTE: Once we use the `\o` command, all the queries will start appending the results in the provided file. To stop the behavior, you can use `\o` (without any filename) to stop appending the results to a file.**&#xA;&#xA;```&#xA;# Start recording the output of queries to file&#xA;\o filename&#xA;&#xA;&#xA;# Stop recording the output of queries to file&#xA;\o&#xA;```&#xA;&#xA;![PGCLI Query output to file](https://res.cloudinary.com/techstructive-blog/image/upload/v1659884242/blog-media/pgcli-output-query.gif)&#xA;&#xA;### Execute query from a file&#xA;&#xA;We can even execute queries stored in a file, we can use the `\i` command followed by the file name. This will load in the queries from the file and will display the output of each query.&#xA;&#xA;```&#xA;\i file_name&#xA;```&#xA;&#xA;![PGCLI query from a file](https://res.cloudinary.com/techstructive-blog/image/upload/v1659883225/blog-media/pgcli-file-query.gif)&#xA;&#xA;## Summary&#xA;&#xA;In this little article, we were able to explore the basics of the PGCLI package in python, with that package we can interact with the Postgres database that can be on your local server or somewhere remotely. With the PSQL-like shell provided by the PGCLI tool, we were able to write SQL queries, execute commands, write/read queries from files, etc. This tool is quite handy for people dealing with postgres databases and who need to view data or schema in that database.&#xA;&#xA;I&#39;ve personally used this tool in my current internship where I needed to create a few SQL queries to fetch particular data related to various tables and relationships. 
It saved a lot of time with auto-completion and integration with the text editor for writing longer queries, especially with JOINS and nested queries.&#xA;&#xA;## Conclusion&#xA;&#xA;This was my take and experience of using the PGCLI tool and exploring the various features it provides that can be leveraged for quick interactions with a Postgres database. Hopefully, you have learned something from this post, if you have any queries(NOT SQL queries) or feedback, please let me know in the comments or on my social handles. Thank you for reading and Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>Django + HTMX CRUD application</title>
      <link>https://www.meetgor.com/posts/django-htmx-crud</link>
      <description>Introduction Gone are the days of writing Ajax requests with javascript, just add a few parameters to the HTML content tags and you will be ready for sending re</description>
      <pubDate>Sun, 31 Jul 2022 00:00:00 UTC</pubDate>
      <content>## Introduction&#xA;&#xA;Gone are the days of writing Ajax requests with javascript: just add a few parameters to the HTML content tags and you will be ready for sending requests to your backend. So, we are going back in time and correcting the way we think of APIs and client/server-side rendering. We are referring to the [Hypermedia model](https://en.wikipedia.org/wiki/Hypermedia) for leveraging the server-side processing of data. Let&#39;s get our feet wet with this ancient but revolutionary methodology of development with [HTMX](https://htmx.org/).&#xA;&#xA;Yes, HTMX can be used for the API/server-side calls directly in the HTML. We will be exploring the basics of HTMX by creating a basic CRUD application.&#xA;&#xA;&#xA;## What is HTMX?&#xA;&#xA;The first question that might come up is what and why HTMX? HTMX is a javascript library, but wait: it is designed to let us write less javascript, or none at all. It acts as a way to send AJAX requests without you writing any javascript, using native browser features directly from HTML.&#xA;&#xA;So, we can use HTMX to create interactive templates in our Django application. We can dynamically call and fetch data from the server by using simple HTML attributes like `hx-get`, `hx-post`, etc. We&#39;ll cover those in this article.&#xA;&#xA;You can check the source code used in this article on this [GitHub repository](https://github.com/Mr-Destructive/htmx-blog-django).&#xA;&#xA;## Setup Django Project&#xA;&#xA;We&#39;ll be creating a Django project from scratch and designing a basic blog kind of app. 
We will be creating a quite simple project with a couple of apps like `user` for authentication and `article` for the CRUD part of our blog application.&#xA;&#xA;To set up a django project, we can run the following commands to quickly get up and running with a base django project.&#xA;&#xA;```&#xA;mkdir htmx_blog&#xA;cd htmx_blog&#xA;python3 -m venv .venv&#xA;source .venv/bin/activate&#xA;pip install django&#xA;django-admin startproject htmx_blog .&#xA;```&#xA;&#xA;I have a base user model that I use for a simple authentication system in some basic django projects; you can define your own user app or get the app from [here](https://github.com/Mr-Destructive/django-todo/tree/master/user).&#xA;&#xA;So, that being said, we will be using the user model for the article model, which we will define next. By creating basic signup functionality, you are good to go!&#xA;&#xA;### Create the Article app&#xA;&#xA;We will need at least one app to work with htmx, as we will define models, views, and URLs later while configuring htmx.&#xA;&#xA;```&#xA;django-admin startapp article&#xA;```&#xA;&#xA;After the app has been created, you can add those app labels into the `INSTALLED_APPS` config in the `settings.py` file. 
The `user` app and the `article` app need to be added to the installed apps for Django to pick those up in the various contexts related to the project.&#xA;&#xA;```&#xA;# htmx_blog/settings.py&#xA;&#xA;INSTALLED_APPS = [&#xA;    ...&#xA;    ...&#xA;    ...&#xA;&#xA;    &#39;article&#39;,  &#xA;    &#39;user&#39;,&#xA;]&#xA;```&#xA;&#xA;We are done with the base setup; we also require a few more configs for the proper working of the project.&#xA;&#xA;### Setup Templates and Static files&#xA;&#xA;Templates will play an important role in the htmx part, so it is equally important to configure them properly before dabbling into htmx and client-side rendering of data.&#xA;&#xA;I like to keep all the templates in a single folder in the `BASE_DIR` with separate sub-folders for specific apps, and a single `static` folder with `css`, `js`, and `images` as the sub-folders for a larger project.&#xA;&#xA;```&#xA;mkdir templates static&#xA;```&#xA;&#xA;Further, configure the created static and templates folders in the settings.&#xA;&#xA;```python&#xA;&#xA;TEMPLATES = [&#xA;    {&#xA;        &#39;BACKEND&#39;: &#39;django.template.backends.django.DjangoTemplates&#39;,&#xA;        &#39;DIRS&#39;: [os.path.join(BASE_DIR, &#34;templates&#34;)],&#xA;        &#39;APP_DIRS&#39;: True,&#xA;        &#39;OPTIONS&#39;: {&#xA;            &#39;context_processors&#39;: [&#xA;                &#39;django.template.context_processors.debug&#39;,&#xA;                &#39;django.template.context_processors.request&#39;,&#xA;                &#39;django.contrib.auth.context_processors.auth&#39;,&#xA;                &#39;django.contrib.messages.context_processors.messages&#39;,&#xA;            ],&#xA;        },&#xA;    },&#xA;]&#xA;&#xA;STATIC_URL = &#39;static/&#39;&#xA;STATICFILES_DIRS = [str(BASE_DIR/ &#34;static&#34;)]&#xA;STATIC_ROOT = BASE_DIR / &#34;staticfiles&#34;&#xA;```&#xA;&#xA;### Initial migration&#xA;&#xA;Run the migration commands for the user model and the default models in the Django 
project.&#xA;&#xA;```&#xA;python manage.py makemigrations&#xA;python manage.py migrate&#xA;```&#xA;&#xA;So, this project will also include simple registration and login/logout routes for authentication. We will be using the default Django User model by creating an abstract user, just in case we require any additional attributes. &#xA;&#xA;## Setup HTMX&#xA;&#xA;We don&#39;t have to configure much for using HTMX as it is a javascript library; we can call it via a CDN or manually install it and link up the static javascript files. Either way, both are equally good: you may prefer one, I might prefer the other. &#xA;&#xA;If you already have a base template, you can simply put the below script inside the head tag of the template. This will make the htmx attributes available to us.&#xA;&#xA;```html&#xA;&lt;script src=&#34;https://unpkg.com/htmx.org@1.8.0&#34;&gt;&lt;/script&gt;&#xA;```&#xA;&#xA;If you don&#39;t have a base template, you can create one by creating an HTML file inside the `templates` directory. The name can be anything, but be careful when following along as yours might differ from mine. I will choose `base.html` as the template for this project. 
It will look something like the following:&#xA;&#xA;```html&#xA;&lt;!-- templates/base.html --&gt;&#xA;&#xA;&lt;!DOCTYPE html&gt;&#xA;&lt;html lang=&#34;en&#34;&gt;&#xA;&lt;head&gt;&#xA;    &lt;meta charset=&#34;UTF-8&#34;&gt;&#xA;    &lt;title&gt;HTMX Blog&lt;/title&gt;&#xA;    {% load static %}&#xA;    &lt;link rel=&#34;stylesheet&#34; href=&#34;https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0/css/bootstrap.min.css&#34; integrity=&#34;sha384-Gn5384xqQ1aoWXA+058RXPxPg6fy4IWvTNh0E263XmFcJlSAwiGgFAW/dAiS6JXm&#34; crossorigin=&#34;anonymous&#34;&gt;&#xA;    &lt;script src=&#34;https://unpkg.com/htmx.org@1.8.0&#34;&gt;&lt;/script&gt;&#xA;&lt;/head&gt;&#xA;&lt;body&gt;&#xA;        &lt;nav&gt;&#xA;        &lt;h2&gt;HTMX Blog&lt;/h2&gt;&#xA;        &lt;div class=&#34;navbar&#34;&gt;&#xA;          {% if user.is_authenticated %}&#xA;            &lt;a class=&#34;nav-item nav-link&#34; href=&#34;{% url &#39;logout&#39; %}&#34;&gt;&lt;button class=&#34;btn btn-link&#34;&gt;Logout&lt;/button&gt;&lt;/a&gt;&#xA;          {% else %}&#xA;            &lt;a class=&#34;nav-item nav-link&#34; href=&#34;{% url &#39;login&#39; %}&#34;&gt;&lt;button class=&#34;btn btn-link&#34;&gt;Login&lt;/button&gt;&lt;/a&gt;&#xA;            &lt;a class=&#34;nav-item nav-link&#34; href=&#34;{% url &#39;register&#39; %}&#34;&gt;&lt;button class=&#34;btn btn-link&#34;&gt;Register&lt;/button&gt;&lt;/a&gt;&#xA;          {% endif %}&#xA;        &lt;/div&gt;&#xA;        &lt;/nav&gt;&#xA;&#xA;    {% block body %}&#xA;    {% endblock %}&#xA;&lt;/body&gt;&#xA;&lt;/html&gt;&#xA;```&#xA;&#xA;I have a nav bar with my user authentication views: simply a login or signup button if the user is not logged in, and a log-out button if the user is authenticated. We have added the htmx script file from the CDN just before the end of the head tag. 
We also have included the bootstrap CSS file for a decent UI, which we will be creating in this post.&#xA;&#xA;That is one of the ways htmx can be injected into an HTML template; you can even download the javascript file from the [htmx cdn](https://unpkg.com/browse/htmx.org/dist/). This can be downloaded or pasted into your local folder and served as a static file or embedded directly into an HTML template.&#xA;&#xA;## Defining Models&#xA;&#xA;We will start the tutorial by defining the model of the application we are creating. Here, we will create a simple Article model with a few fields like `title`, `content`, `author`, etc. &#xA;&#xA;```python&#xA;from django.db import models&#xA;from user.models import Profile&#xA;&#xA;class Article(models.Model):&#xA;    Article_Status = (&#xA;        (&#34;DRAFT&#34;, &#34;Draft&#34;),&#xA;        (&#34;PUBLISHED&#34;, &#34;Published&#34;),&#xA;    )&#xA;    title = models.CharField(max_length=128, unique=True)&#xA;    content = models.TextField()&#xA;    author = models.ForeignKey(Profile, on_delete=models.CASCADE)&#xA;    status = models.CharField(&#xA;        max_length=16,&#xA;        choices=Article_Status,&#xA;        default=Article_Status[0][0],  # the stored value &#34;DRAFT&#34;&#xA;    )&#xA;&#xA;    def __str__(self):&#xA;        return self.title&#xA;```&#xA;&#xA;In the above model `Article`, we have a few fields: `title` as a simple CharField, `content` as a TextField since it will hold the large text of the post body, and `author`, which is a ForeignKey to the user&#39;s `Profile` model. We also have `status`, defined as a character field with a few choices like `draft` or `published`; we could further extend this status to public or private, but we are keeping it simple and easy to understand.&#xA;&#xA;The object reference name for this model is the title, as we have defined in the dunder string method. 
So, that is a simple model created; we can now migrate the changes into the database to add the tables and attributes.&#xA;&#xA;```&#xA;python manage.py makemigrations&#xA;python manage.py migrate&#xA;```&#xA;&#xA;This will create and apply the migrations, i.e. convert the Python model class into database tables and attributes. Once the migration process has completed successfully, we can move on to the crux of this article, which is to actually design the views. In the next section, we will be utilizing the models in our views for representing the data on the templates.&#xA;&#xA;&#xA;## Creating Article Form&#xA;&#xA;Before diving into the views section, we need a few things like the Article form, which will be a Django model-based form. It will help us a lot in creating or updating the fields for the article model. We can define a form in a Python file called `forms.py`; it&#39;s not necessary to keep your forms in `forms.py`, but if you have a lot of forms and models, it is good practice for organizing the components of our app. 
So, I&#39;ll be creating a new file inside of the `article` app called `forms.py` and defining the `ArticleForm`.&#xA;&#xA;```python&#xA;# article/forms.py&#xA;&#xA;from django import forms&#xA;from .models import Article&#xA;&#xA;&#xA;class ArticleForm(forms.ModelForm):&#xA;    class Meta:&#xA;        model = Article&#xA;        exclude = (&#xA;            &#34;created&#34;,&#xA;            &#34;updated&#34;,&#xA;            &#34;author&#34;,&#xA;        )&#xA;        widgets = {&#xA;            &#34;title&#34;: forms.TextInput(&#xA;                attrs={&#xA;                    &#34;class&#34;: &#34;form-control&#34;,&#xA;                    &#34;style&#34;: &#34;max-width: 450px; align: center;&#34;,&#xA;                    &#34;placeholder&#34;: &#34;Title&#34;,&#xA;                }&#xA;            ),&#xA;            &#34;content&#34;: forms.Textarea(&#xA;                attrs={&#xA;                    &#34;class&#34;: &#34;form-control&#34;,&#xA;                    &#34;style&#34;: &#34;max-width: 900px;&#34;,&#xA;                    &#34;placeholder&#34;: &#34;Content&#34;,&#xA;                }&#xA;            ),&#xA;        }&#xA;```&#xA;&#xA;The form inherits from Django&#39;s `ModelForm`, which allows us to create forms based on our model. We specify the model name, which in our case is `Article`, and further we can have an `exclude` or `fields` tuple. To exclude certain fields from the actual form, just pass a tuple of those attributes; and if you want to only select a few attributes, you can specify the `fields` tuple and mention the required fields for the form.&#xA;&#xA;So, if we have a lot of things to be included in the form, we can specify only the attributes to be excluded with the `exclude` tuple. 
And if we have a lot of fields to be excluded, we can use the `fields` tuple to specify which attributes to use in the form.&#xA;&#xA;Let&#39;s take an example: for the above ArticleForm, if we wanted to specify the required fields to be included in the form, we might use the `fields` tuple like below; the rest will not be rendered as form fields.&#xA;&#xA;```&#xA;class ArticleForm(forms.ModelForm):&#xA;    class Meta:&#xA;        model = Article&#xA;        fields = (&#xA;            &#34;title&#34;,&#xA;            &#34;content&#34;,&#xA;            &#34;status&#34;,&#xA;        )&#xA;```&#xA;&#xA;Either of them can be used; it just depends on how many fields you have to exclude or include in the rendered form.&#xA;&#xA;We have also specified the `widgets` attribute, which gives a bit more control over how the form is displayed in a template. So I have specified the type of input each field needs to render, like a simple text input for the title, a text area for the content, etc. The cool thing is that Django can set these automatically by knowing the type of field in the model, but the defaults can sometimes be undesired, mostly with complex relationships and attributes.&#xA;&#xA;## Creating Views&#xA;&#xA;Let&#39;s start creating views for creating, reading, updating, and deleting articles from the database. I will be using function-based views because we are learning the flow of how HTMX and Django can be integrated, so we need to dive in deeper and understand the actual flow of the process.&#xA;&#xA;### Create View&#xA;&#xA;So, creating articles seems like a good way to start off. We can create a simple function-based view which will initially load an empty `ArticleForm`, and if the request is `GET` we will render the form in the `create.html` template. 
If the request is `POST`, which will be after we submit the form, we will validate the form, attach the current user as the author of the article, and save the form instance, which will create an article record; this object will then be rendered to the detail template.&#xA;&#xA;```python&#xA;from django.shortcuts import render&#xA;from .models import Article&#xA;from .forms import ArticleForm&#xA;&#xA;def createArticle(request):&#xA;    form = ArticleForm()&#xA;    context = {&#xA;        &#39;form&#39;: form,&#xA;    }&#xA;    return render(request, &#39;articles/create.html&#39;, context)&#xA;```&#xA;&#xA;#### Rendering the Form&#xA;&#xA;We are creating an empty instance of `ArticleForm` and rendering it in the template. So, this will render the empty form in the `create.html` template.&#xA;&#xA;```html&#xA;&lt;!-- templates/articles/create.html --&gt;&#xA;&#xA;{% extends &#39;base.html&#39; %}&#xA;&#xA;{% block body %}&#xA;&lt;div hx-target=&#34;this&#34; hx-swap=&#34;outerHTML&#34;&gt;&#xA;  &lt;form&gt;&#xA;    {% csrf_token %}&#xA;    {{ form.as_p }}&#xA;    &lt;button hx-post=&#34;.&#34; class=&#34;btn btn-success&#34;&#xA;      type=&#34;submit&#34;&gt;Save&lt;/button&gt;&#xA;  &lt;/form&gt;&#xA;&lt;/div&gt;&#xA;{% endblock %}&#xA;```&#xA;&#xA;Here we are inheriting from the base template and creating a form tag in HTML with `{{ form.as_p }}` for rendering the form fields; we finally have the `button` element for submitting the form. We have used the `hx-post` attribute, more on this in just a minute. So, this is how we create a template for rendering the article form.&#xA;&#xA;The `hx-post` attribute will send a `POST` request to the current `URL`, represented by `hx-post=&#34;.&#34;`. You might have noticed the `div` attributes, `hx-target` and `hx-swap`; these are some of the many attributes provided by the htmx library for controlling the reactivity of the requests made. 
The `hx-target` attribute allows us to specify the element or tag into which the response will be rendered. The `hx-swap` attribute goes hand-in-hand with it, specifying how the content is swapped into the target DOM, like `innerHTML`, `outerHTML`, etc. You can see the various options on the [htmx docs](https://htmx.org/docs/#swapping). By specifying the `hx-swap` as `outerHTML`, we are saying to replace the entire element with the incoming content from the request which we will send with nearby request triggers.&#xA;&#xA;We need to map the view to a URL in order to get a good idea about the request and parsed content.&#xA;&#xA;We&#39;ll create a `create/` route and bind it to the `createArticle` view with the name `article-create`.&#xA;&#xA;```python&#xA;# article/urls.py&#xA;&#xA;from django.urls import path&#xA;from . import views&#xA;&#xA;urlpatterns = [&#xA;    path(&#39;create/&#39;, views.createArticle, name=&#39;article-create&#39;), &#xA;]&#xA;```&#xA;&#xA;This URL will be mapped to the global URL in the project; here we can simply specify the prefix for the URLs in the `article` app and include those URLs.&#xA;&#xA;```python&#xA;# htmx_blog/urls.py&#xA;&#xA;from django.contrib import admin&#xA;from django.urls import path, include&#xA;&#xA;urlpatterns = [&#xA;    path(&#39;admin/&#39;, admin.site.urls),&#xA;    path(&#39;user/&#39;, include(&#39;user.urls&#39;), name=&#39;auth&#39;),&#xA;    path(&#39;&#39;, include(&#39;article.urls&#39;), name=&#39;home&#39;),&#xA;]&#xA;```&#xA;Feel free to use any other URL pattern; for instance, the article app is at `/` i.e. `127.0.0.1:8000/`, but you could serve it at `127.0.0.1:8000/article/` by adding `path(&#39;article/&#39;, include(&#39;article.urls&#39;))`.&#xA;&#xA;![Django HTMX Create view Form Template](https://res.cloudinary.com/techstructive-blog/image/upload/v1659252089/blog-media/django-htmx-create-view.png)&#xA;&#xA;So, finally, we are sending a `GET` request to `127.0.0.1:8000/create/` and this will output the form. 
As we have a `POST` request embedded in the button inside the form, we will send the `POST` request to the same URL -&gt; `127.0.0.1:8000/create/`.&#xA;&#xA;#### Submitting the Form&#xA;&#xA;Let&#39;s handle the `POST` request in the create view.&#xA;&#xA;```python&#xA;from django.shortcuts import render&#xA;from .models import Article&#xA;from .forms import ArticleForm&#xA;&#xA;def createArticle(request):&#xA;    form = ArticleForm(request.POST or None)&#xA;    if request.method == &#39;POST&#39;:&#xA;        if form.is_valid():&#xA;            form.instance.author = request.user&#xA;            article = form.save()&#xA;            return render(request, &#39;articles/detail.html&#39;, {&#39;article&#39;: article})&#xA;    context = {&#xA;        &#39;form&#39;: form,&#xA;    }&#xA;    return render(request, &#39;articles/create.html&#39;, context)&#xA;```&#xA;&#xA;**Simple explanation**&#xA;&#xA;- Create a form instance of ArticleForm with the request data or empty -&gt; `ArticleForm(request.POST or None)`&#xA;- If it&#39;s a POST request, validate and create the article, then render the article object in the `detail.html` template.&#xA;- If it&#39;s a GET request, render the empty form in `create.html`&#xA;&#xA;&#xA;There are a few changes in the view: instead of initializing the form as empty i.e. `ArticleForm()`, we are initializing it with `ArticleForm(request.POST or None)`. This basically means that if we have something in the `request.POST` dict, we initialize the form with that data, or else we get an empty form instance.&#xA;&#xA;Next, we check if the request is `POST`; if it is, we check if the form is valid i.e. the form fields are not empty and any other constraints on the model attributes are satisfied. If the form data is valid, we attach the author as the currently logged-in user who sent the request. Finally, we save the form, which in turn creates the article record in the database. 
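
The `request.POST or None` pattern relies on plain Python truthiness. This small illustrative snippet (no Django involved; the helper name is made up) shows why a GET request ends up producing an unbound, empty form:

```python
# An empty dict, like request.POST during a GET request, is falsy,
# so `or None` falls through to None and the form stays unbound.
def form_data(post_dict):
    return post_dict or None

print(form_data({}))               # GET: no submitted data
print(form_data({"title": "Hi"}))  # POST: the submitted data

```
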
We then render the created article in the `detail.html` template which is not yet created.&#xA;&#xA;So, the `hx-post` attribute has worked and it will send a post request to the same URL i.e. `127.0.0.1:8000/create/`; this will again trigger the view `createArticle`, and this time we will have `request.POST` data, so we will validate and save the form.&#xA;&#xA;### Detail View&#xA;&#xA;The detail view is used for viewing the details of an article. This will be rendered after the article has been created or updated. This is quite simple: we need an `id` or primary key (`pk`) of an article and render the `title` and `content` of the article in the template.&#xA;&#xA;We pass in a primary key along with the request as a parameter to the view; the `pk` will be passed via the URL. We fetch the Article object with the id as the passed `pk` and finally render the `detail.html` template with the article object. The `context[&#39;article&#39;]` can be accessed from the template to render the specific attributes like `title`, `content`, etc.&#xA;&#xA;```python&#xA;# article/views.py&#xA;&#xA;def detailArticle(request, pk):&#xA;    article = Article.objects.get(id=pk)&#xA;    context = {&#39;article&#39;: article}&#xA;    return render(request, &#39;articles/detail.html&#39;, context)&#xA;&#xA;```&#xA;&#xA;We can now bind the view to a URL and pass the required parameter `pk` to the view. &#xA;&#xA;```python&#xA;from django.urls import path&#xA;from . import views&#xA;&#xA;urlpatterns = [&#xA;    path(&#39;create/&#39;, views.createArticle, name=&#39;article-create&#39;), &#xA;    path(&#39;&lt;int:pk&gt;&#39;, views.detailArticle, name=&#39;article-detail&#39;), &#xA;]&#xA;```&#xA;&#xA;We have typed the `pk` URL parameter as `int`, so for an article with id=4, the URL will be `127.0.0.1:8000/4/`.&#xA;&#xA;We need to create the template for rendering the context from the `detailArticle` view. So, we create the `detail.html` in the `templates/articles` folder. 
We inherit the base template and render the `article.title` and the `article.content` with a linebreaks template filter so as to display the content properly.&#xA;&#xA;```html&#xA;&lt;!-- templates/articles/detail.html --&gt;&#xA;&#xA;&#xA;{% extends &#39;base.html&#39; %}&#xA;{% block body %}&#xA;&lt;div id=&#34;article-card&#34;&gt;&#xA;  &lt;h2&gt;{{ article.title }}&lt;/h2&gt;&#xA;  &lt;p&gt;{{ article.content|linebreaks|safe }}&lt;/p&gt;&#xA;&lt;/div&gt;&#xA;{% endblock %}&#xA;&#xA;```&#xA;&#xA;![Detail View Template](https://res.cloudinary.com/techstructive-blog/image/upload/v1659252227/blog-media/django-htmx-detail-view.png)&#xA;&#xA;So, we can now use the `createArticle` view as well as the `detailArticle` view; both are configured properly, so the C and R of CRUD are complete. We can add `listArticle` for listing out all the articles of the author (the logged-in user).&#xA;&#xA;### List View&#xA;&#xA;The list view of the articles is quite similar to the detail view, except it returns a list of articles rather than a single article.&#xA;&#xA;So in the `listArticle` view, we will return all the articles with the author as the user who sent the request (the logged-in user). We will pass this object list into the template, `base.html` or `articles/list.html`.&#xA;&#xA;```python&#xA;# article/views.py&#xA;&#xA;&#xA;def listArticle(request):&#xA;    articles = Article.objects.filter(author=request.user.id)&#xA;    context = {&#xA;        &#39;articles&#39;: articles,&#xA;    }&#xA;    return render(request, &#39;base.html&#39;, context)&#xA;```&#xA;&#xA;We will add the URL route for this as the `/` route, that is `127.0.0.1:8000/`; this is the base URL for the article app and is the route for the `listArticle` view. So, we will display the list of articles on the homepage.&#xA;&#xA;```python&#xA;# article/urls.py&#xA;&#xA;&#xA;from django.urls import path&#xA;from . 
import views&#xA;&#xA;urlpatterns = [&#xA;    path(&#39;&lt;int:pk&gt;&#39;, views.detailArticle, name=&#39;article-detail&#39;), &#xA;    path(&#39;create/&#39;, views.createArticle, name=&#39;article-create&#39;), &#xA;    path(&#39;&#39;, views.listArticle, name=&#39;article-list&#39;), &#xA;]&#xA;```&#xA;&#xA;Let&#39;s create the template for the list view which will iterate over the articles and display the relevant data like the title and a link to the article.&#xA;&#xA;```html&#xA;&lt;!-- templates/articles/list.html --&gt;&#xA;&#xA;&lt;ul id=&#34;article-list&#34;&gt;&#xA;  {% for article in articles %}&#xA;  &lt;li&gt;&#xA;    &lt;div class=&#34;card&#34; style=&#34;width: 18rem;&#34;&gt;&#xA;      &lt;div class=&#34;card-body&#34;&gt;&#xA;        &lt;h5 class=&#34;card-title&#34;&gt;{{ article.title }}&lt;/h5&gt;&#xA;        &lt;p class=&#34;card-text&#34;&gt;{{ article.content|truncatewords:5  }}&lt;/p&gt;&#xA;        &lt;a href=&#34;{% url &#39;article-detail&#39; article.id %}&#34; class=&#34;card-link&#34;&gt;Read more&lt;/a&gt;&#xA;      &lt;/div&gt;&#xA;    &lt;/div&gt;&#xA;  &lt;/li&gt;&#xA;  {% endfor %}&#xA;&lt;/ul&gt;&#xA;```&#xA;&#xA;We have used the `truncatewords:5` template filter for displaying only the first 5 words of the article content; as it is just a list view, we don&#39;t want to display every detail of the article here.&#xA;&#xA;![List view Template](https://res.cloudinary.com/techstructive-blog/image/upload/v1659252293/blog-media/django-htmx-list-view.png)&#xA;&#xA;We can use this template to render in the `base.html` file.&#xA;&#xA;```html&#xA;&lt;!DOCTYPE html&gt;&#xA;&lt;html lang=&#34;en&#34;&gt;&#xA;&lt;head&gt;&#xA;    &lt;meta charset=&#34;UTF-8&#34;&gt;&#xA;    &lt;title&gt;HTMX Blog&lt;/title&gt;&#xA;    {% load static %}&#xA;    &lt;link rel=&#34;stylesheet&#34; href=&#34;https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0/css/bootstrap.min.css&#34; 
integrity=&#34;sha384-Gn5384xqQ1aoWXA+058RXPxPg6fy4IWvTNh0E263XmFcJlSAwiGgFAW/dAiS6JXm&#34; crossorigin=&#34;anonymous&#34;&gt;&#xA;    &lt;script src=&#34;https://unpkg.com/htmx.org@1.8.0&#34;&gt;&lt;/script&gt;&#xA;&lt;/head&gt;&#xA;&lt;body hx-target=&#34;this&#34; hx-swap=&#34;outerHTML&#34; hx-headers=&#39;{&#34;X-CSRFToken&#34;: &#34;{{ csrf_token }}&#34;}&#39;&gt;&#xA;        &lt;nav&gt;&#xA;        &lt;h2&gt;HTMX Blog&lt;/h2&gt;&#xA;        &lt;div class=&#34;navbar&#34;&gt;&#xA;          {% if user.is_authenticated %}&#xA;            &lt;a class=&#34;nav-item nav-link&#34; href=&#34;{% url &#39;article-list&#39; %}&#34;&gt;&lt;button class=&#34;btn btn-link&#34;&gt;Home&lt;/button&gt;&lt;/a&gt;&#xA;            &lt;a class=&#34;nav-item nav-link&#34; href=&#34;{% url &#39;logout&#39; %}&#34;&gt;&lt;button class=&#34;btn btn-link&#34;&gt;Logout&lt;/button&gt;&lt;/a&gt;&#xA;          {% else %}&#xA;            &lt;a class=&#34;nav-item nav-link&#34; href=&#34;{% url &#39;login&#39; %}&#34;&gt;&lt;button class=&#34;btn btn-link&#34;&gt;Login&lt;/button&gt;&lt;/a&gt;&#xA;            &lt;a class=&#34;nav-item nav-link&#34; href=&#34;{% url &#39;register&#39; %}&#34;&gt;&lt;button class=&#34;btn btn-link&#34;&gt;Register&lt;/button&gt;&lt;/a&gt;&#xA;          {% endif %}&#xA;        &lt;/div&gt;&#xA;        &lt;/nav&gt;&#xA;&#xA;    {% block body %}&#xA;    &lt;a href=&#34;{% url &#39;article-create&#39; %}&#34;&gt;&lt;button class=&#34;btn btn-success&#34; &gt;Create&lt;/button&gt;&lt;/a&gt;&#xA;    {% include &#39;articles/list.html&#39; %}&#xA;    {% endblock %}&#xA;&lt;/body&gt;&#xA;&lt;/html&gt;&#xA;```&#xA;&#xA;We have now included the `list.html` template on the homepage and also added the `create` button as a link to the `article-create` URL.&#xA;&#xA;### Delete View&#xA;&#xA;For deleting an article, we will simply rely on htmx for sending the request; on that request, we will delete the current article and render the updated list of articles.&#xA;&#xA;With the `deleteArticle` 
view, we will take in two parameters: the request, which is there by default for a Django function-based view, and the primary key as `pk`. Again, we will get the `pk` from the URL. We will delete the article object and get the latest list of articles. Finally, we render the updated list of articles in the base template, which is our list view.&#xA;&#xA;```python&#xA;# article/views.py&#xA;&#xA;&#xA;def deleteArticle(request, pk):&#xA;    Article.objects.get(id=pk).delete()&#xA;    articles = Article.objects.filter(author=request.user)&#xA;    context = {&#39;articles&#39;: articles}&#xA;    return render(request, &#34;base.html&#34;, context)&#xA;&#xA;```&#xA;&#xA;We will add the `deleteArticle` view into the URL patterns and call it `article-delete` with the URL of `delete/&lt;int:pk&gt;`. This will allow us to send a request to the URL `127.0.0.1:8000/delete/4` for deleting the article with id `4`.&#xA;&#xA;```python&#xA;# article/urls.py&#xA;&#xA;&#xA;from django.urls import path&#xA;from . import views&#xA;&#xA;urlpatterns = [&#xA;    path(&#39;&#39;, views.listArticle, name=&#39;article-list&#39;), &#xA;    path(&#39;&lt;int:pk&gt;&#39;, views.detailArticle, name=&#39;article-detail&#39;), &#xA;    path(&#39;create/&#39;, views.createArticle, name=&#39;article-create&#39;), &#xA;    path(&#39;delete/&lt;int:pk&gt;&#39;, views.deleteArticle, name=&#39;article-delete&#39;), &#xA;]&#xA;```&#xA;&#xA;In the delete view, the template is important as we want to send a request appropriately to the defined URL. To do that, we will have a form, but it won&#39;t have any inputs as such, just a button to delete the current article. We will set the `hx-delete` attribute to the URL of the `deleteArticle` view, with the id of the article. 
This will send a request to the `article-delete` URL which will, in turn, trigger the view with the given id and delete the article.&#xA;&#xA;We have added the `hx-confirm` attribute for showing a confirmation pop-up before deleting the article. As you can see, we have added a little script for adding the `csrf_token` into the request headers; this is important in order to submit the form with a valid `CSRFToken`.&#xA;&#xA;```html&#xA;&lt;!-- templates/articles/delete.html --&gt;&#xA;&#xA;&lt;script&gt;&#xA;  document.body.addEventListener(&#39;htmx:configRequest&#39;, (event) =&gt; {&#xA;    event.detail.headers[&#39;X-CSRFToken&#39;] = &#39;{{ csrf_token }}&#39;;&#xA;  })&#xA;&lt;/script&gt;&#xA;&lt;div &gt;&#xA;  &lt;form method=&#34;post&#34; &gt;&#xA;  {% csrf_token %}&#xA;    &lt;button class=&#34;btn btn-danger&#34;&#xA;      hx-delete=&#34;{% url &#39;article-delete&#39; article.id %}&#34;&#xA;      hx-confirm=&#34;Are you sure you want to delete this article?&#34;&#xA;      type=&#34;submit&#34;&gt;&#xA;      Delete&#xA;    &lt;/button&gt;&#xA;  &lt;/form&gt;&#xA;&lt;/div&gt;&#xA;```&#xA;&#xA;You might wonder how we can access `article.id` here; we are not rendering the `delete.html` template from a view of its own, so there is no context to pass. We will include this snippet in the detail view template, which already has the `article` object in its context, so as to have the option of deleting the current article.&#xA;&#xA;We will modify the `articles/detail.html` template and include the `delete.html` template. The include tag simply injects an HTML template at the specified location. 
So, we will basically inject the delete form into the detail template.&#xA;&#xA;```html&#xA;{% extends &#39;base.html&#39; %}&#xA;{% block body %}&#xA;&lt;div hx-target=&#34;this&#34; hx-swap=&#34;outerHTML&#34;&gt;&#xA;  &lt;h2&gt;{{ article.title }}&lt;/h2&gt;&#xA;  {% include &#39;articles/delete.html&#39; %}&#xA;  &lt;p&gt;{{ article.content|linebreaks|safe }}&lt;/p&gt;&#xA;&lt;/div&gt;&#xA;{% endblock %}&#xA;```&#xA;&#xA;Hence, we will have a nice option to delete the article in the detail section. This button can be placed anywhere, but remember, we need to add `hx-target=&#34;this&#34;` and `hx-swap=&#34;outerHTML&#34;` on the div so as to correctly swap the HTML content after the request has been made.&#xA;&#xA;### Update View&#xA;&#xA;We can now move into the final piece of the CRUD i.e. `Update`. This will be similar to `createArticle` with a couple of changes. We will pass a parameter like `pk` to this view as well because we want to update a specific article. So, we will have to get the primary key of the article from the URL slug.&#xA;&#xA;Inside the `updateArticle` view, we will first grab the article object from the passed primary key. We will have two kinds of requests here: one will be for fetching the `form` with the current article data, and the next will be the `PUT` request for actually saving the changes in the article.&#xA;&#xA;The first request is simple, as we need to populate the form with the instance of the article object. We will call the `ArticleForm` with the instance of `article`; this will load the data of the article into the form, ready to render into the template. 
So once the `GET` request has been sent, we will render the template with the form pre-filled with the values of the article attributes.&#xA;&#xA;```python&#xA;# article/views.py&#xA;&#xA;&#xA;def updateArticle(request, pk):&#xA;    article = Article.objects.get(id=pk)&#xA;    form = ArticleForm(instance=article)&#xA;    context = {&#xA;        &#39;form&#39;: form,&#xA;        &#39;article&#39;: article,&#xA;    }&#xA;    return render(request, &#39;articles/update.html&#39;, context)&#xA;```&#xA;&#xA;We will create a template in the `templates/articles/` folder as `update.html` which will have a simple form for rendering the form fields and a button for sending a `PUT` request. We will render the `form` and then add a button element with the attribute `hx-put` for sending the `PUT` request to save changes to the article record. We will pass in the `article.id` for the primary key parameter to the view.&#xA;&#xA;```html&#xA;&lt;!-- templates/articles/update.html --&gt;&#xA;&#xA;&lt;div hx-target=&#34;this&#34; hx-swap=&#34;outerHTML&#34;&gt;&#xA;  &lt;form&gt;&#xA;    {% csrf_token %}&#xA;    {{ form.as_p }}&#xA;    &lt;button hx-put=&#34;{% url &#39;article-update&#39; article.id %}&#34;&#xA;      type=&#34;submit&#34;&gt;Update&lt;/button&gt;&#xA;  &lt;/form&gt;&#xA;&lt;/div&gt;&#xA;```&#xA;&#xA;We are yet to link `updateArticle` into the URLs. We will add the view `updateArticle` into the URLs with the name `article-update` and `update/&lt;int:pk&gt;` as the slug pattern. This URL pattern will trigger `updateArticle` when we send an HTTP request to `127.0.0.1:8000/update/4` for updating the article with id `4`.&#xA;&#xA;```python&#xA;# article/urls.py&#xA;&#xA;&#xA;from django.urls import path&#xA;from . 
import views&#xA;&#xA;urlpatterns = [&#xA;    path(&#39;&#39;, views.listArticle, name=&#39;article-list&#39;), &#xA;    path(&#39;&lt;int:pk&gt;&#39;, views.detailArticle, name=&#39;article-detail&#39;), &#xA;    path(&#39;create/&#39;, views.createArticle, name=&#39;article-create&#39;), &#xA;    path(&#39;delete/&lt;int:pk&gt;&#39;, views.deleteArticle, name=&#39;article-delete&#39;), &#xA;    path(&#39;update/&lt;int:pk&gt;&#39;, views.updateArticle, name=&#39;article-update&#39;), &#xA;]&#xA;```&#xA;&#xA;We are not done yet; we still need to handle the `PUT` request i.e. when the form details have been modified and we are about to save the changes to the form data. So, we will check the request method&#39;s type. If it is a `PUT` request, we will have to process a few things.&#xA;&#xA;```python&#xA;# article/views.py&#xA;&#xA;&#xA;from django.http import QueryDict&#xA;&#xA;def updateArticle(request, pk):&#xA;    article = Article.objects.get(id=pk)&#xA;    if request.method == &#39;PUT&#39;:&#xA;        qd = QueryDict(request.body)&#xA;        form = ArticleForm(instance=article, data=qd)&#xA;        if form.is_valid():&#xA;            article = form.save()&#xA;            return render(request, &#39;articles/detail.html&#39;, {&#39;article&#39;: article})&#xA;    form = ArticleForm(instance=article)&#xA;    context = {&#xA;        &#39;form&#39;: form,&#xA;        &#39;article&#39;: article,&#xA;    }&#xA;    return render(request, &#39;articles/update.html&#39;, context)&#xA;```&#xA;&#xA;In the above `updateArticle` view, we have to check for a `PUT` request; if we are sending a `PUT` request, the form instance needs to be loaded from the request object. We use `request.body` to access the data sent in the request. The incoming data received from the `request.body` object is not in a format we can pass to the form instance directly, so we will parse it using `QueryDict`. 
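
For intuition, here is a rough stdlib-only approximation of that parsing step, using `urllib.parse` rather than Django's actual `QueryDict` (so treat it as a sketch, not the real implementation):

```python
from urllib.parse import parse_qs, urlencode

# Build a form-encoded body like the one the browser sends for our fields,
# then decode it back into a plain dict, roughly what QueryDict gives us.
body = urlencode({"title": "Hello", "content": "World"})
parsed = {key: values[0] for key, values in parse_qs(body).items()}
print(parsed)
```
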
This will allow us to turn the raw `request.body` into valid, dictionary-like data that the form can consume.&#xA;&#xA;So, we import `QueryDict` from the `django.http` module. We pass the data as the parameter to `QueryDict` and store it in a variable. We then have to get the `ArticleForm` for fetching the data as per the form details, so we pass the instance and also the `data` parameter. The instance is the article object and the data is the received form data which we have stored in `qd` as `QueryDict(request.body)`. This will load the new form data and then we can validate the form.&#xA;&#xA;After we have verified the form details, we can save the form and this will update the article record. Thereby we can render the updated article in the `detail` view with the updated `article` object as the context.&#xA;&#xA;![Update View Form Template](https://res.cloudinary.com/techstructive-blog/image/upload/v1659252091/blog-media/django-htmx-update-view.png)&#xA;&#xA;So, this sets up the update view as well; we can now create, read, update, and delete an article instance with HTMX in templates and Django function-based views without writing any javascript.&#xA;&#xA;## Summary&#xA;&#xA;We were able to create a basic CRUD application in Django with HTMX. We used simple function-based views to demonstrate the inner details of how we can work with HTMX and handle requests from the templates. By creating simple standalone templates, we can connect those together to make a fully functional and responsive webpage. 
The UI is not great, but the purpose of this tutorial was to make a barebones CRUD app that works with the backend using HTMX, so hopefully, you now have a good overview of how HTMX can be integrated into a Django application.&#xA;&#xA;Overall, HTMX is a great library that can be used to enhance an existing web application or create a new one, making the site responsive without writing any javascript.&#xA;&#xA;![Django HTMX CRUD Application Demo GIF](https://res.cloudinary.com/techstructive-blog/image/upload/v1659252296/blog-media/django-htmx-demo.gif)&#xA;&#xA;You can check out the source code for this project and blog on the [htmx-blog GitHub](https://github.com/Mr-Destructive/htmx-blog-django) repository.&#xA;&#xA;## Conclusion&#xA;&#xA;From this post, we were able to understand the basics of HTMX and how we can integrate it into a Django application. Hopefully, you enjoyed the post; if you have any queries or feedback, please let me know in the comments or on my social handles. Thank you for reading. Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>Golang: Anonymous Functions</title>
      <link>https://www.meetgor.com/posts/golang-anonymous-functions</link>
      <description>Introduction We have looked at the defer keyword in golang in the part of the series, in this section, we will understand how we can use anonymous functions in</description>
      <pubDate>Sun, 24 Jul 2022 00:00:00 UTC</pubDate>
<content>## Introduction&#xA;&#xA;We have looked at the defer keyword in golang in the [previous](https://www.meetgor.com/golang-defer/) part of the series; in this section, we will understand how we can use anonymous functions in golang. We will explore how to declare and use anonymous functions with a few examples.&#xA;&#xA;## What are Anonymous Functions&#xA;&#xA;Anonymous functions are quite simple to understand: we don&#39;t define a function, we declare it and call it instantly. An anonymous function doesn&#39;t have a name, hence it is called an anonymous function. Like a normal function, it can take in parameters and return values. With anonymous functions, we can bind the operations to a variable or a constant as a literal (value). If an anonymous function takes in a parameter, the argument needs to be passed immediately after the end of the function body. We will see the syntax and specifics of anonymous functions in golang. &#xA;&#xA;## Simple Anonymous functions&#xA;&#xA;To create a simple anonymous function, we use the same function syntax without giving it a name, like `func() {}`; inside the function body i.e. `{}`, you can define the operations that need to be performed. &#xA;&#xA;Below, we have created an anonymous function that simply calls the `fmt.Println` function with a string. Admittedly, this is a bit contrived, as we could directly call the `fmt.Println` function from the main function; instead we have called an anonymous function that in turn calls the `fmt.Println` function. 
It might not make sense to use an anonymous function here, but it can be helpful for other complex tasks that you want to isolate the logic without creating a dedicated function for the process.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import &#34;fmt&#34;&#xA;&#xA;func main() {&#xA;&#xA;  func() {&#xA;    fmt.Println(&#34;Hello, Anonymous functions&#34;)&#xA;  }()&#xA;&#xA;}&#xA;```&#xA;&#xA;```&#xA;go run anonymous_function.go&#xA;&#xA;Hello, Anonymous functions&#xA;```&#xA;&#xA;So, this is how we create a basic anonymous function in golang, this can be further used for returning values from the function or passing parameters into the function and also assigning the function call to a variable or a constant.&#xA;&#xA;## Assigning anonymous function to variables&#xA;&#xA;We can assign the call to the anonymous function to a variable or a constant and call the function as many times as we require. So, we can basically store the function logic in a variable and call it whenever we require the function with the `()` parenthesis as an indication to call the function.&#xA;&#xA;In the following example, we have used the variable `draw` to store the function call which prints `Drawing` with the help of the `fmt.Println` function. The draw variable now contains the function and not its value. 
So be careful here: by assigning the anonymous function as the `draw` variable&#39;s literal value, it is as if we are giving this anonymous function a name, and the name will be the variable name; effectively we have created a function `draw` which is an anonymous function stored in a variable.&#xA;&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import &#34;fmt&#34;&#xA;&#xA;func main() {&#xA;&#xA;  draw := func() {&#xA;    fmt.Println(&#34;Drawing&#34;)&#xA;  }&#xA;  draw()&#xA;  draw()&#xA;}&#xA;```&#xA;&#xA;```&#xA;go run anonymous_function.go&#xA;&#xA;Drawing&#xA;Drawing&#xA;```&#xA;&#xA;We see that we call the variable `draw` as a function call by appending `()` parentheses to it, as `draw()`; this thereby calls the anonymous function stored inside the variable value.&#xA;&#xA;The main thing to note here is that we are not adding `()` at the time of declaring the anonymous function, as that would call the function directly. A problem then arises because the function is not returning anything, so we can&#39;t assign it to a variable.&#xA;&#xA;```go&#xA;draw := func() {&#xA;  fmt.Println(&#34;Drawing&#34;)&#xA;}()&#xA;```&#xA;&#xA;```&#xA;functions/anonymous_functions.go:10:11: func() {…}() (no value) used as value&#xA;```&#xA;&#xA;So, we are trying to assign a variable to a function call that has no return value. This has no value, not even nil, so we get an error for assigning a variable to nothing.&#xA;&#xA;Though if the function had a return value, we could directly assign the function call to the variable, as it would return a valid value defined in the function literal.&#xA;&#xA;## Passing parameters&#xA;&#xA;We can even pass parameters to anonymous functions as with any other normal function. 
We define the name of each parameter followed by its type inside the parentheses to use these parameters inside the anonymous function.&#xA;&#xA;We need to keep in mind that arguments can be passed either with the variable call or directly at the time of defining the function.&#xA;&#xA;In the below example, we have created a variable `call` and assigned it an anonymous function that takes a parameter `name` of type `string` and prints some text to the console. So, we call the function `call` with a string argument, once as the literal `&#34;Meet&#34;` and once as the variable `person := &#34;Chris&#34;` passed to the `call` function.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import &#34;fmt&#34;&#xA;&#xA;func main() {&#xA;&#xA;  call := func(name string) {&#xA;    fmt.Println(&#34;Calling,&#34;, name)&#xA;  }&#xA;&#xA;  call(&#34;Meet&#34;)&#xA;  person := &#34;Chris&#34;&#xA;  call(person)&#xA;&#xA;}&#xA;```&#xA;&#xA;```&#xA;go run anonymous_function.go&#xA;&#xA;Calling, Meet&#xA;Calling, Chris&#xA;```&#xA;&#xA;Here, we can see that the function `call` prints the text through the `fmt.Println` function. We pass a string literal because the anonymous function takes a single string parameter. We can pass multiple parameters to an anonymous function just as with a normal function, including slices, maps, arrays, structs, etc.&#xA;&#xA;## Returning values&#xA;&#xA;We can also return values from an anonymous function, for instance if we want to call the function immediately and save the returned value in a variable. 
We can return single or multiple values as per our requirements, just like from any normal function in golang.&#xA;&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import &#34;fmt&#34;&#xA;&#xA;func main() {&#xA;&#xA;  is_divisible := func(x int, y int) bool {&#xA;    var res bool&#xA;    if x%y == 0 {&#xA;      res = true&#xA;    } else {&#xA;      res = false&#xA;    }&#xA;    fmt.Println(res)&#xA;    return res&#xA;  }&#xA;&#xA;  fmt.Printf(&#34;%T\n&#34;, is_divisible)&#xA;  is_divisible(10, 5)&#xA;  is_divisible(33, 5)&#xA;&#xA;  divisibility_check := is_divisible(45, 5)&#xA;  fmt.Printf(&#34;%T : %t\n&#34;, divisibility_check, divisibility_check)&#xA;&#xA;}&#xA;```&#xA;&#xA;```&#xA;go run anonymous_function.go&#xA;&#xA;func(int, int) bool&#xA;true&#xA;false&#xA;true&#xA;bool : true&#xA;```&#xA;&#xA;As we can see, the function returns a boolean value and we store it in the variable `divisibility_check`. The variable which contains the function, i.e. `is_divisible`, can be called, and we thereby get the returned boolean in the variable, as we confirm by printing the type and the value of `divisibility_check`. We can also see that the type of `is_divisible` is a function literal that takes `(int, int)` and returns a `bool`.&#xA;&#xA;We can also do one more interesting thing here, which was restricted in the previous examples. We can directly call the function and store the resulting value rather than the function itself. 
So, we can only use the value returned from the function but can&#39;t call the function later.&#xA;&#xA;```go&#xA;is_divisible := func(x int, y int) bool {&#xA;  var res bool&#xA;  if x%y == 0 {&#xA;    res = true&#xA;  } else {&#xA;    res = false&#xA;  }&#xA;  fmt.Println(res)&#xA;  return res&#xA;}(13, 4)&#xA;&#xA;fmt.Printf(&#34;%T\n&#34;, is_divisible)&#xA;fmt.Println(is_divisible)&#xA;```&#xA;&#xA;```&#xA;go run anonymous_function.go&#xA;&#xA;false&#xA;bool&#xA;false&#xA;```&#xA;&#xA;So, in the above-modified example, we have passed in the arguments instead of keeping a callable function. This stores the returned value of the function call. So, `is_divisible` now holds the boolean value, and we have to pass the integer values, here `(13, 4)`, directly to the anonymous function call.&#xA;&#xA;&#xA;In the below example, we have created an anonymous function that takes in three parameters `(string, int, string)` and returns a string. We have used the `fmt.Sprintf` function to format the values into a string, which we then return. This anonymous function is then directly called and we store the returned value instead of the function.&#xA;&#xA;So, in this example, the `format_email` variable will store a string instead of acting as a callable function.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import &#34;fmt&#34;&#xA;&#xA;func main() {&#xA;&#xA;  format_email := func(name string, age int, company string) string {&#xA;    email := fmt.Sprintf(&#34;%s.%d@%s.com&#34;, name, age, company)&#xA;    return email&#xA;  }(&#34;john&#34;, 25, &#34;gophersoft&#34;)&#xA;&#xA;  fmt.Println(format_email)&#xA;  fmt.Printf(&#34;%T\n&#34;, format_email)&#xA;&#xA;}&#xA;```&#xA;&#xA;```&#xA;go run anonymous_function.go&#xA;&#xA;john.25@gophersoft.com&#xA;string&#xA;```&#xA;&#xA;As we can see, the `format_email` variable holds a string instead of a callable object. 
We have directly passed the arguments to the anonymous function, so it is called instantly and the string is returned.&#xA;&#xA;## Passing Anonymous function to function parameters&#xA;&#xA;We can even pass an anonymous function as a parameter to another function. This can be quite helpful for writing some simple logic inside a complex script. We can declare a function type as a parameter, and there we can pass the actual data and perform the desired operation.&#xA;&#xA;The below example is a bit of code to write but makes a lot of sense to understand anonymous functions in golang. The function `get_csv` takes in three parameters `int, string, func(string) []string`. The third parameter is a function literal that takes in a string and returns a slice of strings. So, we are basically converting a string `&#34;kevin,21,john,33&#34;` into a slice of slices like `[[kevin 21] [john 33]]`; this is basically separating values with the `,` comma as the delimiter and then processing the resulting slice into pairs.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;  &#34;fmt&#34;&#xA;  &#34;strings&#34;&#xA;)&#xA;&#xA;func get_csv(index int, str string, t func(csv string) []string) [][]string {&#xA;  s := t(str)&#xA;  var res [][]string&#xA;  for i := 1; i &lt; len(s); i += 2 {&#xA;    var data []string&#xA;    data = append(data, s[i-1], s[i])&#xA;    res = append(res, data)&#xA;  }&#xA;  return res&#xA;}&#xA;&#xA;func main() {&#xA;&#xA;  csv_slice := func(csv string) []string {&#xA;    return strings.Split(csv, &#34;,&#34;)&#xA;  }&#xA;  csv_data := get_csv(2, &#34;kevin,21,john,33,george,24&#34;, csv_slice)&#xA;  fmt.Println(csv_data)&#xA;  for _, s := range csv_data {&#xA;    fmt.Printf(&#34;%s - %s\n&#34;, s[0], s[1])&#xA;  }&#xA;}&#xA;```&#xA;&#xA;```&#xA;go run functions/anonymous_function.go&#xA;&#xA;[[kevin 21] [john 33] [george 24]]&#xA;kevin - 21&#xA;john - 33&#xA;george - 24&#xA;```&#xA;&#xA;Let&#39;s break down the code one by one, we will 
start with the main function, where we have `csv_slice` as a function literal, an anonymous function that takes in a string and returns a slice of strings. The function returns a call to the function [strings.Split](https://pkg.go.dev/strings#Split), taking in the string from the function parameter. We then call the function `get_csv` with the parameters `(2, &#34;kevin,21....&#34;, csv_slice)`; this function is defined outside of main. The function takes in the three parameters as discussed and passed from the main function, and it returns a slice of string slices (`[][]string`). &#xA;&#xA;So, inside the `get_csv` function, we define `s` as the result of the function call `t(str)`, which if you look carefully is a call to `csv_slice` with the string as its parameter. This call returns a slice of strings split on `,`. So that&#39;s all the logic we need to understand anonymous functions as parameters: we have used a function literal to call a function from within another function, and in this case the function is an anonymous function assigned to a variable. &#xA;&#xA;```&#xA;&#34;kevin,21,john,33,george,24&#34;&#xA;            ||&#xA;            \/&#xA;[kevin 21 john 33 george 24]&#xA;            ||&#xA;            \/&#xA;[[kevin 21] [john 33] [george 24]]&#xA;&#xA;```&#xA;&#xA;After this we have `s`, which looks like `[kevin 21 john 33 george 24]`, with each value as an individual element. Thereafter we create an empty [slice of slices](https://www.geeksforgeeks.org/slice-of-slices-in-golang/) of strings as `res` and loop through the slice, jumping 2 indexes at a time. Why? Because we want slices of two elements combined. So, we also create a slice of strings named `data` and add the two components to it, like `[kevin 21]`, with the [append](https://pkg.go.dev/builtin#append) function, and this slice is appended to the slice of slices `res` as `[[kevin 21]]`, thereby iterating over the entire slice and building the desired data. We return `res` from the function. 
This gets us back to the main function, which simply iterates over the slice, and we get the desired data from it.&#xA;&#xA;So, this is how we convert an extremely easy task into extremely complicated code :)&#xA;&#xA;Reference for all the code examples and commands can be found in the [100 days of Golang](https://github.com/mr-destructive/100-days-of-golang/tree/main/scripts/functions/anonymous_function.go) GitHub repository.&#xA;&#xA;## Conclusion&#xA;&#xA;That is it from this part; we covered a little bit about anonymous functions in golang. Anonymous functions are simply function literals that can be used for a lot of quick operations without needing to write an explicit function in the program. In the next part we will look into closures, which are a bit more complex to understand and require an understanding of anonymous functions.&#xA;&#xA;Thank you for reading. If you have any queries, feedback, or questions, you can freely ask me on my social handles. Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>Deploying Django Project with Railway Nixpacks</title>
      <link>https://www.meetgor.com/posts/django-deploy-railway-nixpacks</link>
      <description>Introduction We have seen how to deploy a Django application on the Railway app in the previous article of the series. We deployed the Django project using the Heroku Buildpacks under the hood.</description>
      <pubDate>Sat, 16 Jul 2022 00:00:00 UTC</pubDate>
      <content>## Introduction&#xA;&#xA;We have seen how to deploy a Django application on the Railway app in the [previous article](https://www.meetgor.com/django-deploy-railway/) of the [series](https://www.meetgor.com/series/django-deployment). We deployed the django project using the Heroku Buildpacks under the hood. The Railway app provides a few options for how to build your web application. Currently, there are three options: one is the [heroku buildpacks](https://devcenter.heroku.com/articles/heroku-20-stack), which is the default one; second, we have the nixpacks, which we will see today; and the third is the [Paketo buildpack](https://paketo.io/). &#xA;&#xA;## What is a Buildpack?&#xA;&#xA;A buildpack is a set of programs that turns your source code into a container image. So it is basically a tool for converting your application into a deployment-ready state with the help of containerization technology. Buildpacks allow us to abstract away the steps for deploying an application; they play the role of the abstraction layer that converts source code into a deployable container.&#xA;&#xA;### Benefits of buildpacks&#xA;&#xA;Buildpacks, as we discussed, act as a layer of abstraction for converting source code into deployable containers, so they help avoid manually listing and installing dependencies. On top of that, they can even detect low-level changes to the source code, i.e. if a dependency is changed, they know which dependencies are altered and which components can be fetched from the cache. Caching is a great property of buildpacks which improves performance and deployment time. 
Since we can specify the commands for the build process, buildpacks are customizable and hence provide a solid foundation for professional applications.&#xA;&#xA;For more references on buildpacks, you can follow up with the great article on [dockerfile vs buildpacks](https://technology.doximity.com/articles/buildpacks-vs-dockerfiles).&#xA;&#xA;## What are Nixpacks?&#xA;&#xA;[Nixpacks](https://nixpacks.com/docs) are quite similar to buildpacks, but they have their own set of technologies for managing and installing builds for the application. Nixpacks, as the name suggests, use [Nix Packages](https://search.nixos.org/packages) for creating and installing dependencies and [Docker](https://www.docker.com/) for building images and running containers.&#xA;&#xA;Nixpacks are quite cool, as most applications require little or almost no configuration. You don&#39;t need a lot of knowledge of docker or nix technologies; everything is abstracted for you. Nixpacks use nix packages for installing the runtime environment and the dependencies of the application. They also have caching support for detecting the core modules or packages in the runtime, which gives a huge boost to deployment performance. Minimal deployment time simultaneously makes them developer friendly and improves the quality of the application.&#xA;&#xA;- Abstracted technologies like Nix and Docker.&#xA;- Caching of dependencies and build steps.&#xA;- Customizable at most of the steps.&#xA;- Extensible and Developer friendly.&#xA;&#xA;## Installing Nixpacks&#xA;&#xA;Nixpacks use [nix packages](https://search.nixos.org/packages) together with the source code to create a buildpack of their own. The nix packages take in the source code of your application and convert it into a standard OCI image, e.g. a Docker image, that can be run in various environments. 
It is similar to buildpacks but not the same; it uses a different ecosystem.&#xA;&#xA;Railway has a cool [CLI](https://nixpacks.com/docs/cli) tool for creating nixpacks on your local system. You can install the nixpacks CLI from the official documentation site. There are a couple of ways to install it on your system. &#xA;You can install nixpacks with the curl command in your terminal, which is one of the simplest ways to install it.&#xA;&#xA;```&#xA;curl -fsSL https://raw.githubusercontent.com/railwayapp/nixpacks/master/install.sh | bash&#xA;```&#xA;&#xA;![Railway Nixpacks Install](https://res.cloudinary.com/techstructive-blog/image/upload/v1657725394/blog-media/railway-nixpacks-install.png)&#xA;&#xA;Once it has been installed, we can verify that nixpacks was installed successfully.&#xA;&#xA;```&#xA;nixpacks --version&#xA;&#xA;OR &#xA;&#xA;nixpacks&#xA;```&#xA;&#xA;So that&#39;s how we install the nixpacks CLI on the system. Now, we can move on to the configuration required for creating a nixpack from a Django application.&#xA;&#xA;## Creating a Django Project&#xA;&#xA;So, I assume here that you have your django project ready. You can pick any django project and configure it as mentioned below.&#xA;&#xA;### Creating requirements file&#xA;&#xA;It is a good practice to include a `requirements.txt` file in any Python-based application. It makes it really easy to set up and give the project a spin. We usually have to use a virtual environment when freezing the requirements; otherwise, we might mix up the globally installed packages with the project-specific dependencies.&#xA;&#xA;```&#xA;pip freeze &gt; requirements.txt&#xA;```&#xA;&#xA;This will create a `requirements.txt` file in the current folder, which will contain the list of all the dependencies with their versions pinned. 
We also have other package management systems like [pipenv](https://www.meetgor.com/pipenv-intro/) or [poetry](https://python-poetry.org/).&#xA;&#xA;&#xA;## Creating a Nixpack for the project&#xA;&#xA;Finally, we can now start creating nixpacks from the source code of the django application. So, make sure the Django project is running on your local system. Set up a virtualenv and database as per your local environment. We will be using the build command for creating the nixpack of our django project.&#xA;&#xA;```&#xA;nixpacks build .&#xA;```&#xA;&#xA;&lt;video width=&#34;800&#34; height=&#34;450&#34; controls&gt;&#xA;  &lt;source src=&#34;https://res.cloudinary.com/techstructive-blog/video/upload/v1657881139/blog-media/nixpacks-demo-base.mp4&#34; type=&#34;video/mp4&#34;&gt;&#xA;&lt;/video&gt;&#xA;&#xA;This is the simplest command you can run to create a nixpack. You need to be in the folder where all your files are located. In Django, we call it the `BASE_DIR`, the same folder where your `manage.py` resides. So, be on that path and run the above command. It will pick up a few things from the source code itself. There are also some default values picked up after analyzing the source code; for example, if it detects a django project, it attaches the `python manage.py migrate &amp;&amp; gunicorn &lt;project_name&gt;.wsgi` command as the default start command.&#xA;&#xA;Similarly, there are other defaults like the version of the runtime, installation steps, etc. We can see that the port is not accessible from the docker container that we ran; this is because either we have not exposed the port or the port is not bound by gunicorn. We will see that configuration later; it&#39;s quite easy to bind the local port to the docker container.&#xA;&#xA;### Steps for creating Nixpacks&#xA;&#xA;Now, we need to understand the process of the creation of nixpacks. There were several stages that you can see in the above video. &#xA;&#xA;1. 
Planning &#xA;2. Building&#xA;&#xA;#### Planning Phase&#xA;&#xA;The first step is quite important: understanding the runtime environment. This is critical because it decides how our application installs, runs, and even performs in the actual container environment. This is the phase where we define the sets of commands and nix packages to be used for building the application.&#xA;&#xA;You get a clear look at this in the nixpacks CLI: it prints a list of the planned instructions before the build process starts in the build command. You can also get the plan before building the application by using the `plan` command in the CLI.&#xA;&#xA;```&#xA;nixpacks plan .&#xA;```&#xA;![Nixpacks Plan Command](https://res.cloudinary.com/techstructive-blog/image/upload/v1657960771/blog-media/nixpacks-plan-command.gif)&#xA;&#xA;![Nixpacks Plan Command Output](https://res.cloudinary.com/techstructive-blog/image/upload/v1657960816/blog-media/nixpacks-plan-cmd.png)&#xA;&#xA;So, this gives a `nix` output of all the configuration needed to build and install the application and its dependencies. When you run 
the base build command, the CLI also shows some useful information before moving on to the build step.&#xA;&#xA;![Nixpacks Plan Steps in Build Command](https://res.cloudinary.com/techstructive-blog/image/upload/v1657960880/blog-media/nixpacks-build-plan.png)&#xA;&#xA;The following items are listed in the planning phase of building the application:&#xA;&#xA;- Packages/base runtime environment&#xA;- Installation commands&#xA;- Build commands&#xA;- Commands to Start Application &#xA;&#xA;In the case of django, the planning attributes correspond to the following:&#xA;&#xA;- Python version as the application runtime.&#xA;- Installing dependencies via pip/pipenv/poetry&#xA;- Building the Django app (collecting static files, setting up the database, etc)&#xA;- Running the Django app (gunicorn/Nginx web server to run the app)&#xA;&#xA;So, hopefully, this gives you a better understanding of what is going on in the nixpacks CLI. We are trying to automate the process of building and deploying the application for repeated deployments, i.e. in a continuous integration/delivery system.&#xA;&#xA;#### Building Phase&#xA;&#xA;This is the phase where the actual installation, setup, and configuration take place at the application level. In the build phase, we have several layers where things happen, like installing the core packages for the application, installing dependencies, setting up or generating the necessary pieces for the proper running of the application, and finally the steps to run the application. This process creates an OCI (Open Container Initiative, a standard used in container/virtualization technology) image of the application. We can run the image and thus create a container of the application. 
You will need docker or another containerization tool for building the images and running containers.&#xA;&#xA;![Django Build Command](https://res.cloudinary.com/techstructive-blog/image/upload/v1657961801/blog-media/nixpacks-build-command.gif)&#xA;&#xA;For the Django application, several steps have to be followed. We use python as the base runtime environment, install the python packages, perform other commands like setting up admin accounts, static/media files, and the database, and finally run the steps to start the application at the container level.&#xA;&#xA;- Installing python as a [Nix Package](https://search.nixos.org/packages?channel=22.05&amp;show=python38&amp;from=0&amp;size=50&amp;sort=relevance&amp;type=packages&amp;query=python)&#xA;- Installing all python packages provided in the `requirements.txt`/`Pipfile`/`pyproject.toml` files.&#xA;- Calling commands from the environment to set up the application, like `createsuperuser`, `collectstatic`, `makemigrations`, pulling data, management commands, etc. There are a lot of things that can be done here depending on the application.&#xA;- Finally, the step to run the Django app; usually a gunicorn/Nginx server is used for running the django application on a port with the host.&#xA;&#xA;So, this is what the build phase does; it is the heart and soul of nixpacks. We literally do all the installation, setup, and configuration of the application here. Though the planning phase is equally important, a single missing detail can ruin the build phase.&#xA;&#xA;&#xA;### Understanding the build phase for Django&#xA;&#xA;We ran the build command for creating the nixpack build; this started with planning the application configuration and then building up the application. 
This build phase is further divided into processes like installing, running commands, copying the actual source code into an image, and all the docker-related steps that are required to create an image for a django application.&#xA;&#xA;![Django Application build nixpacks](https://res.cloudinary.com/techstructive-blog/image/upload/v1657961691/blog-media/nixpacks-build-process.png)&#xA;&#xA;So, we have used the Debian nixpack, which serves as the base runtime for the application. Railway provides a [package](https://github.com/railwayapp/nixpacks/pkgs/container/nixpacks) of the Debian image as the base runtime for our application. This is where all the build processes run. This Debian image is used for installing all types of dependencies and layers of language-specific runtimes in the form of [nix packages](https://search.nixos.org/packages). &#xA;&#xA;Now that we have the base Debian image, the nixpacks build command starts executing the `Dockerfile`, which is auto-generated after the planning phase. So, with the instructions in the `Dockerfile`, steps are executed one after the other, just as in a normal Docker image build. This generates the image for the application (the first iteration locally takes a while), and after the build process has completed, it gives you an image name to run. 
This can be used to test the application locally; in production, the container is created right after the image has been generated.&#xA;&#xA;```&#xA;RUN:&#xA;&#xA;docker run -it &lt;image_id_or_name&gt;&#xA;&#xA;```&#xA;&#xA;![Nixpacks build command run container](https://res.cloudinary.com/techstructive-blog/image/upload/v1657966274/blog-media/nixpacks-build-run.png)&#xA;&#xA;This is the command for running your application; it marks the end of the build process and of the build command provided by the CLI.&#xA;&#xA;### Build Command Parameters&#xA;&#xA;The build command in the nixpacks CLI provides a few parameters or arguments to customize the output and the build of the application. You can definitely provide the configuration in the application source code itself, but it is nicer to try it locally before deploying the application.&#xA;&#xA;#### Give a name to the Nixpack Image/Container&#xA;&#xA;The first parameter which might be helpful is providing a name to the application at build time. This becomes useful for running the container: it avoids long auto-generated image names and gives the nixpack some context.&#xA;&#xA;```&#xA;nixpacks build . --name &lt;project_name&gt;&#xA;&#xA;OR &#xA;&#xA;nixpacks build . -n &lt;project_name&gt;&#xA;&#xA;```&#xA;&#xA;![Nixpacks Build Command Name Image](https://res.cloudinary.com/techstructive-blog/image/upload/v1657962253/blog-media/nixpacks-build-name.png)&#xA;&#xA;![Nixpacks Build Command name run](https://res.cloudinary.com/techstructive-blog/image/upload/v1657962328/blog-media/nixpacks-build-name-image-run.png)&#xA;&#xA;This gives a name to the image which has been built, thereby providing better context for the user to run the image and create a container out of it.&#xA;&#xA;#### Output the Built Image to a folder&#xA;&#xA;This is the parameter that outputs the built application into a provided folder. 
With this parameter, the docker step is not run; hence, no image is created if you provide an output folder. The folder will, however, contain the `Dockerfile` and `environment.nix` files for creating the image and running the container. **Make sure the output folder is NOT inside the application folder itself; that will result in errors.** The output command will not create an image, but the process is still executed in order to generate the `Dockerfile` and `environment.nix` files.&#xA;&#xA;```&#xA;nixpacks build . --out ../blog_image&#xA;&#xA;OR&#xA;&#xA;nixpacks build . -o ../blog_image&#xA;```&#xA;&#xA;![Nixpacks Build Command Output folder](https://res.cloudinary.com/techstructive-blog/image/upload/v1657962407/blog-media/nixpacks-build-output.png)&#xA;&#xA;&#xA;**Dockerfile**&#xA;&#xA;![Nixpacks Build command output](https://res.cloudinary.com/techstructive-blog/image/upload/v1657962479/blog-media/nixpacks-build-output-folder.png)&#xA;&#xA;&#xA;**environment.nix File**&#xA;&#xA;![Nixpacks environment.nix file](https://res.cloudinary.com/techstructive-blog/image/upload/v1657969127/blog-media/nixpacks-build-environment-nix-file.png)&#xA;&#xA;So, this will output the built application to the provided path. The output folder necessarily has to be outside the application folder, since the nixpacks CLI would otherwise consider it part of the application folder.&#xA;&#xA;### Provide an Install/Build/Start Command&#xA;&#xA;We can provide commands for the install, build, or start phase of the application to the build command in order to build the app with non-default or custom commands. These will be added to the docker steps involved in making the build for the application.&#xA;&#xA;```&#xA;nixpacks build . --build-cmd &#39;python manage.py collectstatic&#39;&#xA;&#xA;OR&#xA;&#xA;nixpacks build . 
-b &#39;python manage.py collectstatic&#39;&#xA;```&#xA;&#xA;![Nixpacks Build Command Providing install/build/start commands](https://res.cloudinary.com/techstructive-blog/image/upload/v1657962514/blog-media/nixpacks-build-build-command.png)&#xA;&#xA;Similar parameters can be passed for `install-cmd` and `start-cmd` as `-i` and `-s` respectively. We can further chain up the commands and customize the build process as per the application&#39;s requirements.&#xA;&#xA;### Providing environment variables to the image&#xA;&#xA;Environment variables can be passed to the build command and forwarded to the application. This can be used for passing additional or optional environment variables to the application image.&#xA;&#xA;```&#xA;nixpacks build . --env &#39;NAME=VALUE&#39;&#xA;&#xA;nixpacks build . --env &#39;DATABASE_URL=postgres://postgres:postgres@localhost:5432/techstructive_blog&#39;&#xA;```&#xA;&#xA;![Nixpacks Build Command passing environment variables](https://res.cloudinary.com/techstructive-blog/image/upload/v1657963255/blog-media/nixpacks-build-env-variable-db-url.png)&#xA;&#xA;![Nixpacks Build Command env variable migrate](https://res.cloudinary.com/techstructive-blog/image/upload/v1657963302/blog-media/nixpacks-build-db-migrate-env.png)&#xA;&#xA;Here, we provide the environment variable `DATABASE_URL` to the build command and it is passed on to the application image. When the image is run as a container, it is available as a normal environment variable to the application.&#xA;&#xA;For further references on the build command arguments, you can follow the [documentation of nixpacks](https://nixpacks.com/docs/cli) by the Railway app.&#xA;&#xA;### Creating a Procfile&#xA;&#xA;The Procfile is important for telling any buildpack, in this case nixpacks, which process to start for this web application. For Django, it is simply a matter of adding a web process pointing at the project&#39;s `wsgi` app. 
We can use gunicorn as the web server in production.&#xA;&#xA;```&#xA;&#xA;# Procfile&#xA;&#xA;web: gunicorn &lt;django_project_name&gt;.wsgi&#xA;&#xA;```&#xA;&#xA;This is a typical Django application Procfile; the Procfile is a plain file without an extension. You can also use other variants of the Procfile, for example applying migrations on every web process start-up.&#xA;&#xA;```&#xA;&#xA;# Procfile&#xA;&#xA;web: python manage.py migrate &amp;&amp; gunicorn &lt;django_project_name&gt;.wsgi&#xA;&#xA;```&#xA;&#xA;As we saw in the base build command, the local machine was not able to reach the gunicorn server in the container, so we need to bind the gunicorn server to a known port.&#xA;&#xA;```&#xA;&#xA;# Procfile&#xA;&#xA;web: python manage.py migrate &amp;&amp; gunicorn &lt;django_project_name&gt;.wsgi -b :8000&#xA;&#xA;```&#xA;&#xA;So, we use the `-b` option in the gunicorn command to bind the server to a port in the container, which we can then forward to a port on the local machine. Now, if we build the application and forward the container port to port `8000` on the local machine, we will see our application running.&#xA;&#xA;&lt;video width=&#34;800&#34; height=&#34;450&#34; controls&gt;&#xA;  &lt;source src=&#34;https://res.cloudinary.com/techstructive-blog/video/upload/v1657964597/blog-media/nixpacks-local-bind.mp4&#34; type=&#34;video/mp4&#34;&gt;&#xA;&lt;/video&gt;&#xA;&#xA;```&#xA;docker run -p 8000:8000 -it &lt;image_id&gt; &#xA;&#xA;```&#xA;&#xA;### Specifying the Python version&#xA;&#xA;The `.python-version` file specifies the python version for building the Django application or any other python app.&#xA;&#xA;```&#xA;# .python-version&#xA;&#xA;3.10&#xA;```&#xA;&#xA;Save the `.python-version` file with just the python version like `3.6`, `3.10`, etc. 
and this will be picked up by the build command while creating the build image.&#xA;&#xA;## Deploying the Django Application&#xA;&#xA;After looking at the nixpacks specifics, we can now deploy our Django application with nixpacks on Railway. You can follow up with the [Railway Deployment](https://www.meetgor.com/django-deploy-railway/) article for setting up your Django app for deployment on Railway. This usually involves a few steps like creating a Procfile (not necessary but recommended), requirements.txt (necessary to pull dependencies), and the python version, which is `3.8` by default. The further steps are to create a GitHub repository to link with the Railway app and create a PostgreSQL database service on the Railway platform.&#xA;&#xA;### Create configuration files&#xA;&#xA;As we have seen, we will require a `requirements.txt` file, a `Pipfile`, or a `pyproject.toml` file for listing out and installing the dependencies of our django application. This can be done with commands like:&#xA;&#xA;```&#xA;# For requirements.txt and virtualenv&#xA;pip freeze &gt; requirements.txt&#xA;&#xA;# Autogenerated Pipfile for pipenv&#xA;# Autogenerated pyproject.toml for poetry&#xA;&#xA;```&#xA;&#xA;This file should be present in the base directory of the django application in order for the nixpack to pick it up and install the python packages. Also, for customization of the start command in the build process, you can create a `Procfile` as discussed earlier in order to run commands to start the Django web server.&#xA;&#xA;The python version can be specified with the `.python-version` file with just the version name like `3.9`, `3.10`, etc. OR we can set the environment variable `NIXPACKS_PYTHON_VERSION` to the python version we want.&#xA;&#xA;### Create and Linkup a GitHub repository for existing Django projects&#xA;&#xA;We can create a GitHub repository and link the project to the Railway platform, thereby creating an automated build for every push. 
&#xA;&#xA;The videos below explain how to set up the GitHub repository for the Railway app.&#xA;&#xA;&lt;video width=&#34;800&#34; height=&#34;450&#34; controls&gt;&#xA;  &lt;source src=&#34;https://res.cloudinary.com/techstructive-blog/video/upload/v1652970864/blog-media/django-deployment/railway_project_init.webm&#34; type=&#34;video/mp4&#34;&gt;&#xA;&lt;/video&gt;&#xA;&#xA;&lt;video width=&#34;800&#34; height=&#34;450&#34; controls&gt;&#xA;  &lt;source src=&#34;https://res.cloudinary.com/techstructive-blog/video/upload/v1657965284/blog-media/railway-platform-github.mp4&#34; type=&#34;video/mp4&#34;&gt;&#xA;&lt;/video&gt;&#xA;&#xA;### Setup environment variables &#xA;&#xA;We can use `python-environ` to manage environment variables in a Django application; we will need them for settings like `SECRET_KEY`, `DATABASE_URL`, email credentials, etc. They are quite handy for avoiding leaks of sensitive information into an open-source project on GitHub.&#xA;&#xA;You can install the `python-environ` package with pip or any other package management tool as follows:&#xA;&#xA;```&#xA;pip install python-environ&#xA;```&#xA;&#xA;After installing the package, we can load the environment variables in the settings file. The snippet below reads a local `.env` file using `load_dotenv` (from the `python-dotenv` package):&#xA;&#xA;``` python&#xA;# &lt;project_name&gt;/settings.py&#xA;&#xA;import os&#xA;from pathlib import Path&#xA;from dotenv import load_dotenv&#xA;&#xA;BASE_DIR = Path(__file__).resolve().parent.parent&#xA;&#xA;load_dotenv(os.path.join(BASE_DIR, &#34;.env&#34;))&#xA;```&#xA;&#xA;After loading the environment variables, we can access them with `os.environ.get(&#34;ENV_NAME&#34;, &#34;&#34;)`, which reads an environment variable by name with an optional default value.&#xA;&#xA;### Attach a PostgreSQL database service&#xA;&#xA;You can add a PostgreSQL database to your Django Railway app by attaching a service. This adds a new service alongside the Django application, so the two act as different entities within a Railway project. 
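
The database service hands the app its connection details as a single URL. As a rough, standard-library-only sketch of how such a `postgres://` URL breaks down into the pieces Django needs (the credentials and names below are illustrative local values, not from Railway):

```python
from urllib.parse import urlparse

# Rough sketch: split a postgres:// URL into the parts Django's
# DATABASES setting needs; values are illustrative local defaults.
url = urlparse("postgres://postgres:postgres@localhost:5432/db_name")
database = {
    "ENGINE": "django.db.backends.postgresql",
    "NAME": url.path.lstrip("/"),
    "USER": url.username,
    "PASSWORD": url.password,
    "HOST": url.hostname,
    "PORT": url.port,
}
print(database["NAME"], database["HOST"], database["PORT"])  # db_name localhost 5432
```

In practice the `env.db` helper does this parsing for you; the sketch is only to show what the URL encodes.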
&#xA;&#xA;&lt;video width=&#34;800&#34; height=&#34;450&#34; controls&gt;&#xA;  &lt;source src=&#34;https://res.cloudinary.com/techstructive-blog/video/upload/v1652963718/blog-media/django-deployment/postgres_spinup_railway_d2xkpt.mp4&#34; type=&#34;video/mp4&#34;&gt;&#xA;&lt;/video&gt;&#xA;&#xA;You can then open the connect settings, copy the `DATABASE_URL`, and set it as an environment variable in the Django service of the Railway project. This links up the Django app and the PostgreSQL database. When setting it up locally, you can use the `.env` file and add the environment variable there.&#xA;&#xA;```&#xA;# environment variable&#xA;DATABASE_URL=postgres://username:password@hostname:port/db_name&#xA;&#xA;# local postgres database&#xA;DATABASE_URL=postgres://postgres:postgres@localhost:5432/db_name&#xA;```&#xA;&#xA;&lt;video width=&#34;800&#34; height=&#34;450&#34; controls&gt;&#xA;  &lt;source src=&#34;https://res.cloudinary.com/techstructive-blog/video/upload/v1657964943/blog-media/railway-postgres-spinup.mp4&#34; type=&#34;video/mp4&#34;&gt;&#xA;&lt;/video&gt;&#xA;&#xA;&#xA;This sets up the database environment variable, and you can access it from the settings.py file with the `env.db` function as follows:&#xA;&#xA;```python&#xA;env.db(&#34;DATABASE_URL&#34;, default=&lt;local_database_url&gt;)&#xA;```&#xA;&#xA;So, we can finally use the database from the Railway app in our Django application once the environment variable is in place.&#xA;&#xA;### Choose the Buildpack&#xA;&#xA;We can choose a buildpack for our Django application on the Railway platform; the options include:&#xA;&#xA;1. Heroku Buildpack&#xA;2. Railway Nixpacks&#xA;3. Paketo Buildpack&#xA;&#xA;As of the writing of this article, on 16th July 21, Railway has made `Nixpacks` the default buildpack for an application :) It was the `Heroku` Buildpack by default earlier. 
So, that is a cool thing; you can toggle these settings to choose a buildpack from the project settings.&#xA;&#xA;Railway Dashboard Choose BuildPack&#xA;&#xA;&lt;video width=&#34;800&#34; height=&#34;450&#34; controls&gt;&#xA;  &lt;source src=&#34;https://res.cloudinary.com/techstructive-blog/video/upload/v1657964699/blog-media/nixpacks-railway-dashboard.mp4&#34; type=&#34;video/mp4&#34;&gt;&#xA;&lt;/video&gt;&#xA;&#xA;### Deploy to Railway with Nixpacks&#xA;&#xA;Now that we have seen how to set up nixpacks and have the Postgres database in place, we can finally deploy our application to the Railway platform with nixpacks.&#xA;&#xA;&#xA;&lt;video width=&#34;800&#34; height=&#34;450&#34; controls&gt;&#xA;  &lt;source src=&#34;https://res.cloudinary.com/techstructive-blog/video/upload/v1657965560/blog-media/railway-nixpacks-deploy.mp4&#34; type=&#34;video/mp4&#34;&gt;&#xA;&lt;/video&gt;&#xA;&#xA;So, we can simply configure the source code, or include the `environment.nix` file in it, for the desired behavior. The nixpack selection is based on the source code or the presence of `environment.nix`, which is why we can rely on predictable behavior from the deployment builds.&#xA;&#xA;## Summary&#xA;&#xA;So, nixpacks are a great way to deploy an application; to me, it is automated Docker deployment: it creates Docker images of the application and runs them with the appropriate environment. There is a lot of language support in nixpacks on Railway currently; you can check the languages out on the official website. Every programming language has specific requirements for managing and installing dependencies and packages, and nixpacks manages them automatically for us.&#xA;&#xA;## Conclusion&#xA;&#xA;So, in this post of the [Django Deployment Series](https://www.meetgor.com/series/django-deployment), we saw how to deploy a Django application on Railway with Nixpacks, which is a very intuitive way to deploy apps. 
We covered what nixpacks are, the process of building an application with nixpacks, and deploying an existing or a new Django project on Railway with nixpacks. We also explored the various commands provided in the nixpacks CLI to build and plan a Django application.&#xA;&#xA;Hopefully, you were able to understand the concept of nixpacks and how they can automate the process of containerization and deployment. Thank you for reading; if you have any queries or feedback, you can leave them down in the comments or drop them on social media. Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>Configure Neovim in Lua</title>
      <link>https://www.meetgor.com/posts/neovim-vimscript-to-lua</link>
      <description>Introduction It has been a while since I have written a Vim article. Finally, I got some ideas after configuring my Neovim setup for Lua. I recently migrated to</description>
      <pubDate>Sat, 09 Jul 2022 00:00:00 UTC</pubDate>
      <content>## Introduction&#xA;&#xA;It has been a while since I have written a Vim article. Finally, I got some ideas after configuring my Neovim setup for Lua. I recently migrated to Ubuntu a couple of months back and it has been a cool change from Windows 7! &#xA;&#xA;In this article, we&#39;ll see how you can set up neovim for Lua. Since Neovim 0.5, it supports lua out of the box, so in the recent release 0.7, it added more native support to lua making it a lot easier to configure and play with neovim. So, we will see how we can use lua to convert all the 200 liner vimscript into lua (We can even have packages and modules:) We will cover how to configure your keymaps, pull up all the plugins, vim options, and other customizations.&#xA;&#xA;## Why move to Lua?&#xA;&#xA;I have used Vimscript for quite a while now, configured it as per my needs, and also made a few plugins like [frontmatter](https://github.com/Mr-Destructive/frontmatter.vim), [dj.vim](https://github.com/Mr-Destructive/dj.vim), and [commenter](https://github.com/Mr-Destructive/commenter.vim) which are quite clunky and not robust in terms of usage and customizability. Vimscript is good but it&#39;s a bit messy when you want extreme customization. &#xA;&#xA;I recently wanted to go full Neovim, I was kind of stuck in Vimscript and some of my plugins work for me but it might not work for others, it might be a piece of gibberish vimscript dumped. So, Why not have full native experience in Neovim if you can. It has now baked-in support for LSP, Debugging, Autocommands, and so much more. If you have Neovim 0.5+ you should be good to go full lua.&#xA;&#xA;## Backup Neovim Config&#xA;&#xA;Firstly, keep your original neovim/vim init files safe, back them up, make a copy and save it with a different name like `nvim_config.vim`. If you already have all your config files backed up as an ansible script or dotfiles GitHub repository, you can proceed ahead. 
&#xA;&#xA;But don&#39;t keep the `init.vim` file as it is in the `~/.config/nvim` folder; it might get overridden after we configure the lua scripts.&#xA;&#xA;## Basic Configuration&#xA;&#xA;So, I assume you have set up Neovim. If not, you need to follow some simple steps, like downloading the package and making sure your neovim environment is working with vimscript first. The [Neovim Wiki](https://github.com/neovim/neovim/wiki/Installing-Neovim) provides great documentation on how to install neovim on various platforms using different methods.&#xA;&#xA;After your neovim is set up and you have a basic configuration, you can start to migrate to lua.&#xA;Create an `init.lua` file in the same path as your `init.vim` file, i.e. at `~/.config/nvim`, or `~/AppData/Local/nvim/` for Windows. That&#39;s why it is recommended to keep the initial vimscript configuration file in a safe place: while migrating from vimscript to lua, once the lua file is created, the next time you restart neovim the default settings will come from `init.lua` and not `init.vim`, so be prepared.&#xA;&#xA;Firstly, you need to configure some options like `number`, `syntax highlighting`, `tabs`, and some `keymaps` of course. We can use `vim.opt` to set vim options using lua syntax. The corresponding vim options are converted as follows.&#xA;&#xA;If you have the following kind of settings in your vimrc or init.vim:&#xA;&#xA;```vimscript&#xA;&#34; vimscript&#xA;set number&#xA;set tabstop=4&#xA;set shiftwidth=4&#xA;set softtabstop=0&#xA;set expandtab&#xA;set noswapfile&#xA;```&#xA;The above settings are migrated into lua syntax as follows:&#xA;&#xA;```lua&#xA;-- lua&#xA;vim.opt.number = true&#xA;vim.opt.tabstop = 4&#xA;vim.opt.shiftwidth = 4&#xA;vim.opt.softtabstop = 0&#xA;vim.opt.expandtab = true&#xA;vim.opt.swapfile = false&#xA;```&#xA;&#xA;You can set other options in your config file accordingly. 
If you get sick of writing `vim.opt.` again and again, you can assign `vim.opt` to a variable and then access that variable to set the options. Something along the lines of:&#xA;&#xA;```lua&#xA;local set = vim.opt&#xA;&#xA;set.number = true&#xA;set.tabstop = 4&#xA;set.shiftwidth = 4&#xA;set.softtabstop = 0&#xA;set.expandtab = true&#xA;set.swapfile = false&#xA;```&#xA;&#xA;We can create a variable in lua like `local variable_name = something`; so here, we have created a variable `set` assigned to the value of `vim.opt`, the lua interface for setting options in the vim environment, and then we access `set` to set the options. Using the name `set` is not necessary; you can use whatever you want.&#xA;&#xA;After setting up the basic options, you can source the file with `:so %` from the command mode, just as you normally source your config files.&#xA;&#xA;### Using Lua in Command Mode&#xA;&#xA;We can use lua functions or any other commands from the command mode in neovim using the lua command. Just prefix the command with `:lua`, and after that you can use lua syntax, like calling functions, setting variables, logging things, etc.&#xA;&#xA;![Lua in Command Mode](https://res.cloudinary.com/techstructive-blog/image/upload/v1657380885/blog-media/lua_in_command_mode.gif)&#xA;&#xA;## Adding Keymaps&#xA;&#xA;Now that we have some basic config set up, we can quickly get to the keymaps. It&#39;s not that hard to set up keymaps in lua. To create keymaps, we have two options:&#xA;&#xA;1. vim.keymap.set OR &#xA;2. vim.api.nvim_set_keymap&#xA;&#xA;Personally, I don&#39;t see a difference in terms of usage, but [vim.keymap.set](https://github.com/neovim/neovim/blob/master/runtime/lua/vim/keymap.lua#L51) is a wrapper around [nvim_set_keymap](https://github.com/neovim/neovim/blob/master/src/nvim/api/vim.c#L1451). 
So, it is really a matter of personal preference which one you want to use.&#xA;&#xA;Both functions take quite similar parameters:&#xA;&#xA;```lua&#xA;vim.keymap.set({mode}, {lhs}, {rhs}, {options})&#xA;&#xA;vim.api.nvim_set_keymap({mode}, {lhs}, {rhs}, {options})&#xA;```&#xA;&#xA;The advantage of `vim.keymap.set` over `vim.api.nvim_set_keymap` is that it allows calling lua functions directly, rather than going the vimscripty way with something like `:lua function()`; so we can use lua code directly in the RHS parameter.&#xA;&#xA;Let&#39;s take a basic example mapping:&#xA;&#xA;```&#xA;vim.keymap.set(&#39;n&#39;, &#39;Y&#39;, &#39;yy&#39;, {noremap = false})&#xA;```&#xA;&#xA;Here, we have mapped `Shift+y` to the keys `yy` in `n`ormal mode. The first argument is the mode; it can be a single mode like `&#39;n&#39;`, `&#39;v&#39;`, `&#39;i&#39;`, etc., or a multi-mode table like `{&#39;n&#39;, &#39;v&#39;}`, `{&#39;v&#39;, &#39;i&#39;}`, etc.&#xA;&#xA;The next parameter is also a string: the key that triggers the mapping. In this case, we have used `Y`, which is `Shift + y`; it can be any key shortcut you want to map.&#xA;&#xA;The third parameter is a string with the command to be executed when the key is used. Here we have used the keys `yy`; if the map runs a command, you would use something like `&#39;:commands_to_be_executed&#39;` as the third parameter.&#xA;&#xA;The fourth parameter, which is optional, can contain [special arguments](https://neovim.io/doc/user/api.html#:~:text=nvim_set_keymap(%7Bmode%7D%2C%20%7Blhs%7D%2C%20%7Brhs%7D%2C%20%7B*opts%7D)%20%20%20%20%20%20%20%20%20%20%20%20%20*nvim_set_keymap()*). 
Here, we have passed the `noremap` option. The options are not a string but a lua table instead, similar to a python dictionary or a map-like structure with key-value pairs.&#xA;&#xA;One more important aspect of keymapping is the leader key. You can set your leader key using the global vim options: with `vim.g`, access `mapleader` and set it to the key you wish. This makes the `leader` key available to us, and thereafter we can use it in custom mappings.&#xA;&#xA;```&#xA;vim.g.mapleader = &#34; &#34;&#xA;```&#xA;&#xA;Here, I have set my leader key to the `&lt;Space&gt;` key. Now, we can map keys to the existing keymaps from the vimscript config. Let&#39;s map some basic keymaps first; after setting up the plugins, we can move on to plugin-specific mappings.&#xA;&#xA;You can also use the `vim.api.nvim_set_keymap` function with the same parameters.&#xA;&#xA;```lua&#xA;vim.keymap.set(&#39;n&#39;, &#39;&lt;leader&gt;w&#39;, &#39;:w&lt;CR&gt;&#39;, {noremap = true})&#xA;vim.keymap.set(&#39;n&#39;, &#39;&lt;leader&gt;q&#39;, &#39;:q!&lt;CR&gt;&#39;, {noremap = true})&#xA;vim.keymap.set(&#39;n&#39;, &#39;&lt;leader&gt;s&#39;, &#39;:so %&lt;CR&gt;&#39;, {noremap = true})&#xA;vim.keymap.set(&#39;n&#39;, &#39;&lt;leader&gt;ev&#39;, &#39;:vsplit $MYVIMRC&lt;CR&gt;&#39;, {noremap = true})&#xA;vim.keymap.set(&#39;n&#39;, &#39;&lt;leader&gt;sv&#39;, &#39;:w&lt;CR&gt;:so %&lt;CR&gt;:q&lt;CR&gt;&#39;, {noremap = true})&#xA;&#xA;-- OR&#xA;&#xA;vim.api.nvim_set_keymap(&#39;n&#39;, &#39;&lt;leader&gt;w&#39;, &#39;:w&lt;CR&gt;&#39;, {noremap = true})&#xA;vim.api.nvim_set_keymap(&#39;n&#39;, &#39;&lt;leader&gt;q&#39;, &#39;:q!&lt;CR&gt;&#39;, {noremap = true})&#xA;vim.api.nvim_set_keymap(&#39;n&#39;, &#39;&lt;leader&gt;s&#39;, &#39;:so %&lt;CR&gt;&#39;, {noremap = true})&#xA;vim.api.nvim_set_keymap(&#39;n&#39;, &#39;&lt;leader&gt;ev&#39;, &#39;:vsplit $MYVIMRC&lt;CR&gt;&#39;, {noremap = true})&#xA;vim.api.nvim_set_keymap(&#39;n&#39;, 
&#39;&lt;leader&gt;sv&#39;, &#39;:w&lt;CR&gt;:so %&lt;CR&gt;:q&lt;CR&gt;&#39;, {noremap = true})&#xA;```&#xA;&#xA;If you don&#39;t like writing `vim.keymap.set` or `vim.api.nvim_set_keymap` again and again, you can create a simpler function for it. In lua, a function is created much like a variable: specify the scope (`local`), then the `function` keyword, and finally the name of the function and parentheses. The function body is terminated by the `end` keyword.&#xA;&#xA;```lua&#xA;function map(mode, lhs, rhs, opts)&#xA;    local options = { noremap = true }&#xA;    if opts then&#xA;        options = vim.tbl_extend(&#34;force&#34;, options, opts)&#xA;    end&#xA;    vim.api.nvim_set_keymap(mode, lhs, rhs, options)&#xA;end&#xA;```&#xA;This function `map` takes the same parameters as the `vim.keymap.set` function, but wraps the call in a shorter and safer way by setting `noremap = true` by default. So, it is just a helper around the `vim.api.nvim_set_keymap` function.&#xA;&#xA;To use this function, we simply call `map` with the same arguments as given to the prior functions.&#xA;&#xA;```lua&#xA;map(&#39;n&#39;, &#39;&lt;leader&gt;w&#39;, &#39;:w&lt;CR&gt;&#39;)&#xA;map(&#39;n&#39;, &#39;&lt;leader&gt;q&#39;, &#39;:q!&lt;CR&gt;&#39;)&#xA;map(&#39;n&#39;, &#39;&lt;leader&gt;s&#39;, &#39;:so %&lt;CR&gt;&#39;)&#xA;```&#xA;&#xA;Notice that we have not passed `{noremap = true}` here, as it is passed by default to the `vim.api.nvim_set_keymap` or `vim.keymap.set` function via the custom map function.&#xA;&#xA;If you want some more examples, here are some additional mappings specific to languages, meant for compiling or running scripts from within the neovim instance. 
&#xA;&#xA;```vimscript&#xA;&#34; vimscript&#xA;&#xA;nnoremap cpp :!c++ % -o %:r &amp;&amp; %:r&lt;CR&gt;&#xA;nnoremap c, :!gcc % -o %:r &amp;&amp; %:r&lt;CR&gt;&#xA;nnoremap py :!python %&lt;cr&gt;&#xA;nnoremap go :!go run %&lt;cr&gt;&#xA;nnoremap sh :!bash %&lt;CR&gt;&#xA;```&#xA;&#xA;```lua&#xA;-- lua&#xA;&#xA;map(&#39;n&#39;, &#39;cpp&#39;, &#39;:!c++ % -o %:r &amp;&amp; %:r&lt;CR&gt;&#39;)&#xA;map(&#39;n&#39;, &#39;c,&#39;, &#39;:!gcc % -o %:r &amp;&amp; %:r&lt;CR&gt;&#39;)&#xA;map(&#39;n&#39;, &#39;py&#39;, &#39;:!python %&lt;cr&gt;&#39;)&#xA;map(&#39;n&#39;, &#39;go&#39;, &#39;:!go run %&lt;cr&gt;&#39;)&#xA;map(&#39;n&#39;, &#39;sh&#39;, &#39;:!bash %&lt;cr&gt;&#39;)&#xA;```&#xA;&#xA;So, this is how we can set up our keymaps in lua. You can customize this function as per your needs; these examples are just meant to show how to reduce the repetitive parts of the setup.&#xA;&#xA;**If you are really stuck and don&#39;t feel like converting those mappings into lua, I have a function that can do it for you; check out my dotfiles repo -&gt; [keymapper](https://github.com/Mr-Destructive/dotfiles/blob/master/nvim/lua/destructive/options.lua#L9)**&#xA;&#xA;## Adding Plugin Manager&#xA;&#xA;Now, we are really missing some plugins, aren&#39;t we? The neovim community has some good choices for a plugin manager written in pure lua. It is usually a good idea to move to lua completely rather than switching back and forth between vimscript and lua.&#xA;&#xA;So, [Packer](https://github.com/wbthomason/packer.nvim) is a plugin manager for Neovim written in Lua; there are other plugin managers out there as well, like [paq](https://github.com/savq/paq-nvim). If you don&#39;t want to switch plugin managers, you can still use vim-based plugin managers like [Vim-Plug](https://dev.to/vonheikemen/neovim-using-vim-plug-in-lua-3oom).&#xA;&#xA;So, let&#39;s install the Packer plugin manager into Neovim. 
We simply have to run the following command in the terminal and make sure the plugin manager is configured correctly.&#xA;&#xA;```&#xA;# Linux&#xA;&#xA;git clone --depth 1 https://github.com/wbthomason/packer.nvim\&#xA; ~/.local/share/nvim/site/pack/packer/start/packer.nvim&#xA;&#xA;&#xA;# Windows&#xA;&#xA;git clone https://github.com/wbthomason/packer.nvim &#34;$env:LOCALAPPDATA\nvim-data\site\pack\packer\start\packer.nvim&#34;&#xA;```&#xA;&#xA;Now, if you open a new neovim instance and run the command `:PackerClean` and no error pops up, that means you have configured it correctly. You can move on to installing plugins now. Yeah! PLUG-IN time!&#xA;&#xA;```lua&#xA;return require(&#39;packer&#39;).startup(function()&#xA;end)&#xA;```&#xA;&#xA;First, try to source the file; if it throws errors, you should try to fix the installation path of Packer. If the command succeeded, we can finally pull in some plugins.&#xA;&#xA;Below are some plugins that you can use irrespective of your language preferences. This includes basic dev-icons for the status line as well as file icons for the explorer window. As usual, add your own plugins and make the setup yours.&#xA;&#xA;```lua&#xA;&#xA;return require(&#39;packer&#39;).startup(function()&#xA;  use &#39;wbthomason/packer.nvim&#39;&#xA;  use &#39;tpope/vim-fugitive&#39;&#xA;  use {&#xA;    &#39;nvim-lualine/lualine.nvim&#39;,&#xA;    requires = { &#39;kyazdani42/nvim-web-devicons&#39;, opt = true }&#xA;  }&#xA;  use &#39;tiagofumo/vim-nerdtree-syntax-highlight&#39;&#xA;  use &#39;kyazdani42/nvim-web-devicons&#39;&#xA;  use &#39;vim-airline/vim-airline&#39;&#xA;  use &#39;vim-airline/vim-airline-themes&#39;&#xA;end)&#xA;```&#xA;&#xA;After adding the list of your plugins, you need to source the file and then install the plugins with the command `:PackerInstall`. 
This will install all the plugins after the file has been sourced, so make sure you don&#39;t forget to source it.&#xA;&#xA;## Colors and Color Themes&#xA;&#xA;You might fancy a good-looking and aesthetic setup for neovim, of course! In Neovim, we already have a wide variety of configurations to set up, like color schemes, GUI colors, terminal colors, etc. You can pick a color scheme from a large list of awesome color schemes on [GitHub](https://github.com/topics/neovim-colorscheme).&#xA;&#xA;After choosing the theme, plug it into the packer plugin list which we just created, source the file, and finally run `:PackerInstall`. This should install the plugin. You can then set the colorscheme as per your preference; first, view the color scheme temporarily on the current instance with the command `:colorscheme colorscheme_name`.&#xA;&#xA;```lua&#xA;return require(&#39;packer&#39;).startup(function()&#xA;  use &#39;wbthomason/packer.nvim&#39;&#xA;  -- &#xA;  use &#39;Mofiqul/dracula.nvim&#39;&#xA;  --&#xA;end)&#xA;```&#xA;&#xA;You can then add the command to set it as the default color scheme for Neovim.&#xA;&#xA;```lua&#xA;vim.cmd [[silent! colorscheme dracula]]&#xA;```&#xA;&#xA;You can change the background color, text color, icon style, and the terminal and GUI styles separately with an augroup and highlight commands.&#xA;&#xA;```lua&#xA;vim.api.nvim_command([[&#xA;    augroup ChangeBackgroundColour&#xA;        autocmd ColorScheme * :hi Normal guibg=#000030 guifg=#ffffff&#xA;        autocmd ColorScheme * :hi Directory guifg=#ffffff&#xA;    augroup END&#xA;]])&#xA;vim.o.termguicolors = true&#xA;```&#xA;&#xA;Here, I have set the background and foreground colors for normal text (with `termguicolors` enabled so the hex colors also apply in the terminal), and changed the foreground for the directory explorer, i.e. netrw. 
You can configure this as per your needs. Though this is still vimscripty, native lua APIs for autocommands and augroups are coming to Neovim soon.&#xA;&#xA;## Separating Configurations&#xA;&#xA;If you wish to keep all your config in one file, i.e. the `init.lua` file, you can; though you can also separate out things like `keymaps`, `plugins`, and `custom_options`, or your `lsp` configurations, into separate lua packages, or create a separate module. This helps in loading things as per requirement, and it also reads better, making it a lot easier to test things without breaking the whole config.&#xA;&#xA;There will definitely be personal preferences and pros and cons to both approaches; pick whichever suits your style.&#xA;&#xA;### Creating separate packages &#xA;&#xA;To create a separate package, we can simply add a file in the same folder as `init.lua`, i.e. in the folder `~/.config/nvim` or the equivalent for Windows. The package name can be any valid filename with the `lua` extension.&#xA;&#xA;For instance, you can create a package for all your keymaps and load it in the `init.lua` as per the order you want to load them. 
It can be at the top; or, if you have certain base settings lower in the init file, those might not be reflected in the keymap package, so it is better to load it after some of the core settings have been set.&#xA;&#xA;Let&#39;s create the package and dump all our maps into the keymap package file.&#xA;&#xA;```lua&#xA;-- ~/.config/nvim/keymaps.lua&#xA;&#xA;function map(mode, lhs, rhs, opts)&#xA;    local options = { noremap = true }&#xA;    if opts then&#xA;        options = vim.tbl_extend(&#34;force&#34;, options, opts)&#xA;    end&#xA;    vim.api.nvim_set_keymap(mode, lhs, rhs, options)&#xA;end&#xA;&#xA;map(&#39;n&#39;, &#39;&lt;leader&gt;w&#39;, &#39;:w&lt;CR&gt;&#39;)&#xA;map(&#39;n&#39;, &#39;&lt;leader&gt;q&#39;, &#39;:q!&lt;CR&gt;&#39;)&#xA;map(&#39;n&#39;, &#39;&lt;leader&gt;s&#39;, &#39;:so %&lt;CR&gt;&#39;)&#xA;map(&#39;n&#39;, &#39;cpp&#39;, &#39;:!c++ % -o %:r &amp;&amp; %:r&lt;CR&gt;&#39;)&#xA;map(&#39;n&#39;, &#39;c,&#39;, &#39;:!gcc % -o %:r &amp;&amp; %:r&lt;CR&gt;&#39;)&#xA;map(&#39;n&#39;, &#39;py&#39;, &#39;:!python %&lt;cr&gt;&#39;)&#xA;map(&#39;n&#39;, &#39;go&#39;, &#39;:!go run %&lt;cr&gt;&#39;)&#xA;map(&#39;n&#39;, &#39;sh&#39;, &#39;:!bash %&lt;cr&gt;&#39;)&#xA;&#xA;-- more keymaps&#xA;&#xA;```&#xA;&#xA;This will work as long as you don&#39;t have any plugin-related keymaps, as those would require the plugin&#39;s functions or objects to be available. So, we might also want to separate out the plugins and load them before our keymaps, or earlier in the init file.&#xA;&#xA;Now, there needs to be a way to pull in a package. Yes, there is: much like import in python or any other programming language, lua has `require` for importing packages. Since the `init` file and the `keymaps` package are in the same folder, we can simply write `require(&#34;keymaps&#34;)` in our `init.lua` file. Now, it depends on your config where you want to load this package: at the top, i.e. 
at the beginning of the neovim instance, or after the plugins have been loaded.&#xA;&#xA;```lua&#xA;-- init.lua&#xA;&#xA;require(&#34;keymaps&#34;)&#xA;&#xA;-- At the top&#xA;-- OR&#xA;-- After loading Packer plugins&#xA;```&#xA;&#xA;So, now you can separate all your configs as per your requirements. It is like splitting up a puzzle and combining it again. Similar packages can be created for `plugins`, `options`, or `lsp` configurations.&#xA;&#xA;### Creating a separate module&#xA;&#xA;Now that we have seen how to create a lua package and load it in neovim, we can also create modules in our neovim configuration. By default, only the init file is loaded; other files are not loaded until they are `require`d. So, we can create a module in lua with a folder, and inside of it we can have packages as in the previous method. Every module can have an init file that is loaded first when we require the module; the rest of the packages in the module are made available thereafter.&#xA;&#xA;- A module is a component not loaded by default&#xA;- It is only loaded when required (literally with `require`)&#xA;- Every module can have an `init.lua` file loaded first when required&#xA;- To require a package in a module from outside -&gt; `require(&#39;module_name.package_name&#39;)`&#xA;&#xA;So, in neovim, we need to create a `lua` folder to place all our modules in, so that the lua runtime picks them up correctly. Inside this lua folder, we can create a module, which is basically a folder. The folder name can be anything you like; I like to use my nickname, you can use whatever you prefer.&#xA;&#xA;```&#xA;# ~/.config/nvim&#xA;&#xA;-- init.lua&#xA;-- lua/&#xA;    -- module_name/&#xA;        -- init.lua&#xA;        -- package_name.lua&#xA;        -- keymaps.lua&#xA;```&#xA;&#xA;Now, we can create packages in this module. You can move your keymaps package inside this folder. 
The keymaps package here is just the example from the previous section on creating a package: you can dump all your keymaps in a single lua file and then import it from the init file. Similarly, you can create a package inside a module and import it from the init file of the module (the local init file `~/.config/nvim/lua/module_name/init.lua`) or the global init file (`~/.config/nvim/init.lua`).&#xA;&#xA;To load a package, you use the same require statement irrespective of where you are importing it from, i.e. either from the module or from the global init file. The require statement would look like `require(&#34;module_name.package_name&#34;)`. Now, we can import the keymaps package from the module init file and then import the module from the global init file. To import a module, we simply use the module name in the require string, as `require(&#34;module_name&#34;)`.&#xA;&#xA;```lua&#xA;-- ~/.config/nvim&#xA;&#xA;-- lua/module_name/options.lua&#xA;&#xA;vim.opt.number = true&#xA;vim.opt.tabstop = 4&#xA;vim.opt.swapfile = false&#xA;&#xA;&#xA;-- lua/module_name/plugins.lua&#xA;&#xA;require(&#34;module_name.options&#34;)&#xA;return require(&#39;packer&#39;).startup(function()&#xA;  use &#39;wbthomason/packer.nvim&#39;&#xA;  --plugins&#xA;end)&#xA;&#xA;-- lua/module_name/keymaps.lua&#xA;&#xA;require(&#34;module_name.plugins&#34;)&#xA;-- maps()&#xA;&#xA;&#xA;-- lua/module_name/init.lua&#xA;&#xA;require(&#34;module_name.keymaps&#34;)&#xA;&#xA;&#xA;-- init.lua&#xA;&#xA;require(&#34;module_name&#34;)&#xA;&#xA;```&#xA;&#xA;So, this is how we can create modules and packages for our neovim configuration in lua. This is also roughly the structure for creating your own neovim plugin in lua!&#xA;&#xA;For further references, you can check out my [dotfiles](https://github.com/Mr-Destructive/dotfiles). 
&#xA;### References&#xA;&#xA;- [Configure Neovim for Lua](https://vonheikemen.github.io/devlog/tools/configuring-neovim-using-lua/)&#xA;- [Neovim with Lua for beginners](https://alpha2phi.medium.com/neovim-for-beginners-init-lua-45ff91f741cb)&#xA;- [TJ Devries Youtube](https://www.youtube.com/c/TJDeVries/videos)&#xA;&#xA;## Conclusion&#xA;&#xA;So, that is just a basic overview of how you can get your neovim configured for lua. It is a great experience to just create such a personalized environment and play with it and have fun. You might have hopefully configured Neovim for Lua from this little guide. Maybe in the next article, I&#39;ll set up LSP in Neovim. If you have any queries or feedback, please let me know. Thank you for reading.&#xA;&#xA;Happy Viming :)</content>
      <type></type>
    </item>
    <item>
      <title>Golang: Defer</title>
      <link>https://www.meetgor.com/posts/golang-defer</link>
      <description>Introduction In this part of the series, we will be taking a look at the keyword in golang. The defer keyword is used for delaying the function call in a partic</description>
      <pubDate>Wed, 29 Jun 2022 00:00:00 UTC</pubDate>
      <content>## Introduction&#xA;&#xA;In this part of the series, we will be taking a look at the `defer` keyword in golang. The defer keyword is used for delaying the function call in a particular block of program(function or a loop).&#xA;&#xA;## Defer Keyword&#xA;&#xA;The `defer` keyword is an interesting keyword in golang, it basically holds up the execution of the statement until all the statements around the local scope has been executed. It is basically like a stack holding the execution of statements. You can have multiple defer keywords in the single code block(function or a loop), those will be called by the principle of **first in last out**. &#xA;&#xA;So, let&#39;s take a simple example, the syntax is quite simple just add `defer` before the statement you want to hold up.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import &#34;fmt&#34;&#xA;&#xA;func main() {&#xA;&#x9;fmt.Println(&#34;First&#34;)&#xA;&#x9;defer fmt.Println(&#34;Second Ahhh..&#34;)&#xA;&#x9;fmt.Println(&#34;Third&#34;)&#xA;}&#xA;```&#xA;&#xA;```&#xA;go run defer.go                                                                                                               &#xA;&#xA;First&#xA;Third&#xA;Second Ahhh..&#xA;```&#xA;&#xA;As, you can see the function call `Second` was executed at the end of all the function calls in the main function. This is because of the defer keyword. It will halt or postpone the calling of a statement/function before all the statements have bee executed in the local scope. This keyword can be stacked for calling different statements at the specific time in the state of the program.&#xA;&#xA;### Multiple defer keyword&#xA;&#xA;We can understand the `defer` keyword in a better way if we modify the previous example a bit. 
We will use multiple defer statements in it to understand the flow of the program.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import &#34;fmt&#34;&#xA;&#xA;func main() {&#xA;&#x9;fmt.Println(&#34;bag&#34;)&#xA;&#x9;defer fmt.Println(&#34;book&#34;)&#xA;&#x9;defer fmt.Println(&#34;cap&#34;)&#xA;&#x9;fmt.Println(&#34;laptop&#34;)&#xA;&#x9;defer fmt.Println(&#34;wallet&#34;)&#xA;&#x9;fmt.Println(&#34;headphones&#34;)&#xA;}&#xA;```&#xA;&#xA;```&#xA;go run defer.go&#xA;&#xA;bag&#xA;laptop&#xA;headphones&#xA;wallet&#xA;cap&#xA;book&#xA;```&#xA;&#xA;Here, we can see that `bag` is printed out first, then `laptop` and then `headphones`, but after these, a stack is maintained for calling out the defer statements. &#xA;&#xA;Initially, consider an empty stack `[]` for storing the deferred statements. Just for convenience, ignore the function syntax and the actual statements, and focus on what is printed when. Firstly, `bag` is printed. After printing `bag`, we encounter a defer keyword, so we push the statement onto the stack, which becomes `[&#34;book&#34;]`. We again encounter a `defer` keyword, so we push `cap` onto the stack: `[&#34;book&#34;, &#34;cap&#34;]`. Further, we encounter a normal statement, so we print `&#34;laptop&#34;`. Next up, we encounter another defer keyword and thereby push `&#34;wallet&#34;` onto the stack, which then becomes `[&#34;book&#34;, &#34;cap&#34;, &#34;wallet&#34;]`. The last statement in the main function prints `&#34;headphones&#34;` with the `Println` function. Since there are no further statements to execute in the main function, we start popping the function calls off the stack. Remember last in, first out: we will print the statement that was pushed last onto the stack. 
Since the stack is `[&#34;book&#34;, &#34;cap&#34;, &#34;wallet&#34;]`, we pop `&#34;wallet&#34;` first, as it was pushed last. Thereby we print `&#34;wallet&#34;` after `&#34;headphones&#34;`, and the stack becomes `[&#34;book&#34;, &#34;cap&#34;]`. Next, we pop the second last element that was pushed onto the stack, which is `&#34;cap&#34;`, and thus we print `&#34;cap&#34;`. The stack now has only one element left, and we pop that out as well, printing `&#34;book&#34;`. So, this is how the defer keyword stacks up in golang.&#xA;&#xA;**NOTE: The defer keyword calls the function after the execution of all other statements in its scope, but the parameters are evaluated beforehand. Only the function call is deferred; its arguments are evaluated at the moment the defer statement is encountered.**&#xA;&#xA;### defer keyword in function call&#xA;&#xA;The defer keyword is used in exactly the same way when calling functions. `fmt.Println` is also a function, but writing custom functions gives us a different feel for it.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import &#34;fmt&#34;&#xA;&#xA;func square(x int) int {&#xA;&#x9;fmt.Println(x * x)&#xA;&#x9;return x * x&#xA;}&#xA;&#xA;func main() {&#xA;&#x9;// defer keyword in function&#xA;&#x9;defer square(12)&#xA;&#x9;defer square(10)&#xA;&#x9;square(15)&#xA;}&#xA;```&#xA;&#xA;```&#xA;go run defer.go&#xA;&#xA;225&#xA;100&#xA;144&#xA;```&#xA;&#xA;So, we have created the `square` function and called it thrice, twice with the `defer` keyword. The defer keyword first pushes the function call `square(12)` onto the stack, so the stack is `[square(12)]`. Next, we again have the defer keyword, so we push `square(10)`, and the stack becomes `[square(12), square(10)]`. The next statement calls the function `square(15)` immediately, and thereby we end the main function. 
So, we have to pop the function calls from the stack. The last element `square(10)` is called first, followed by `square(12)`. Thereby, the output order becomes `225 100 144`. &#xA;&#xA;That&#39;s it from this part. Reference for all the code examples and commands can be found in the [100 days of Golang](https://github.com/mr-destructive/100-days-of-golang/tree/main/scripts/defer/defer.go) GitHub repository.&#xA;&#xA;## Conclusion&#xA;&#xA;So, from this post, we were able to understand the defer keyword in golang. We were able to explore how we can delay the calling of a function in a particular scope of the program.&#xA;&#xA;Thank you for reading. If you have any queries, feedback, or questions, you can freely ask me on my social handles. Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>Golang: String Manipulation</title>
      <link>https://www.meetgor.com/posts/golang-strings</link>
      <description>Introduction In the 15th post of the series, we will be looking into the details of string manipulation and the types of operations we can perform on strings in Golang. We wil</description>
      <pubDate>Tue, 28 Jun 2022 00:00:00 UTC</pubDate>
      <content>## Introduction &#xA;&#xA;In the 15th post of the series, we will be looking into the details of string manipulation and the types of operations we can perform on strings in Golang. We will explore string manipulation, concatenation, helper functions, etc., which will help in working with strings in Golang. &#xA;&#xA;## String Concatenation&#xA;&#xA;String concatenation refers to the combining and formatting of strings in Golang. We can combine multiple strings and format the way we display them. We have a few ways and functions to concatenate strings in Golang.&#xA;&#xA;### Using the + operator&#xA;&#xA;We can simply concatenate strings using the `+` operator, though keep in mind you should only concatenate a string with another string and not any other data type like an integer or a float; it will throw errors for mismatched types.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;    &#34;fmt&#34;&#xA;)&#xA;func main() {&#xA;    s1 := &#34;Go&#34;&#xA;    s2 := &#34;Programming&#34;&#xA;    s3 := &#34;Language&#34;&#xA;&#xA;    s := s1 + s2 + s3&#xA;    fmt.Println(s)&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run concatenate.go&#xA;&#xA;GoProgrammingLanguage&#xA;&#xA;```&#xA;&#xA;The `+` operator will literally join the strings as they are and form a new string. &#xA;&#xA;### Using the += operator&#xA;&#xA;Another way to continuously append to an existing string is the `+=` operator. This operator will append the provided string to the end of the original string. &#xA;&#xA;```go&#xA;p := &#34;Story&#34;&#xA;p += &#34;Book&#34;&#xA;fmt.Println(p)&#xA;```&#xA;&#xA;```&#xA;go run concatenate.go&#xA;&#xA;StoryBook&#xA;```&#xA;&#xA;### Using the Join method &#xA;&#xA;The Join method is a function available in the strings package in Golang. We can join string elements in a slice or an array using the [Join](https://pkg.go.dev/strings#Join) method in the strings [package](https://pkg.go.dev/strings) in golang. 
The Join method will combine all the elements of a slice, inserting a particular string between the elements. The function takes two parameters, `Join(slice, string)`: the slice (or array) passed into the function provides the elements, and the provided string is inserted in between them.&#xA;&#xA;#### Join&#xA;&#xA;- Parameters   : array/slice, string&#xA;- Return Value : string&#xA;&#xA;```go&#xA;q := []string{&#34;meetgor.com&#34;, &#34;tags&#34;, &#34;golang&#34;, &#34;string&#34;}&#xA;r := strings.Join(q, &#34;/&#34;)&#xA;fmt.Println(r)&#xA;```&#xA;&#xA;```&#xA;go run concatenate.go&#xA;&#xA;meetgor.com/tags/golang/string&#xA;```&#xA;&#xA;In the above example, we have used the `Join` method to insert a string in between the elements of a slice of strings. The string `&#34;/&#34;` has been inserted in between the elements, and the elements are combined into a single string. Starting from the element at index `0`, `meetgor.com`, the string `/` is appended, then the next element `tags` follows, and the procedure carries on till the last element. Note that the string is not inserted after the last element. The function `Join` returns a string, and thereby we store the result in a variable.&#xA;&#xA;### Using Sprintf method&#xA;&#xA;We can use the [Sprintf](https://pkg.go.dev/fmt#Sprintf) function from the fmt package to format a string and store it rather than printing it to the console. 
The Sprintf function is quite similar to `Printf`, but it returns the formatted string rather than printing it directly to the console.&#xA;&#xA;```go&#xA;// Using Sprintf function to format strings&#xA;&#xA;name := &#34;peter&#34;&#xA;domain := &#34;telecom&#34;&#xA;service := &#34;ceo&#34;&#xA;&#xA;email := fmt.Sprintf(&#34;%s.%s@%s.com&#34;, service, name, domain)&#xA;fmt.Println(email)&#xA;```&#xA;&#xA;```&#xA;go run concatenate.go&#xA;&#xA;ceo.peter@telecom.com&#xA;```&#xA;&#xA;The Sprintf function basically allows us to concatenate strings in a defined format, just like we use `Printf` to print formatted strings. In the above example, we have formatted three strings in the form of an email by assigning a placeholder for each string, i.e. `%s`, and adding the required characters in the formatted string.&#xA;&#xA;### Using Go string Builder method&#xA;&#xA;The [Builder](https://pkg.go.dev/strings#Builder) type is provided by the strings package in Golang. The Builder type helps in building strings in an efficient way. 
By creating a string builder object, we can perform write operations on a string.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;strings&#34;&#xA;)&#xA;&#xA;func main() {&#xA;  // Using Builder function&#xA;&#xA;  c := []string{&#34;j&#34;, &#34;a&#34;, &#34;v&#34;, &#34;a&#34;}&#xA;  var builder strings.Builder&#xA;  for _, item := range c {&#xA;    builder.WriteString(item)&#xA;  }&#xA;  fmt.Println(&#34;builder = &#34;, builder.String())&#xA;  b := []byte{&#39;s&#39;, &#39;c&#39;, &#39;r&#39;, &#39;i&#39;, &#39;p&#39;, &#39;t&#39;}&#xA;  for _, item := range b {&#xA;    builder.WriteByte(item)&#xA;  }&#xA;  fmt.Println(&#34;builder = &#34;, builder.String())&#xA;  builder.WriteRune(&#39;s&#39;)&#xA;  fmt.Println(&#34;builder = &#34;, builder.String())&#xA;  fmt.Println(&#34;builder = &#34;, builder)&#xA;}&#xA;```&#xA;&#xA;```&#xA;go run concatenate.go&#xA;&#xA;builder =  java&#xA;builder =  javascript&#xA;builder =  javascripts&#xA;builder =  {0xc000088dd8 [106 97 118 97 115 99 114 105 112 116 115]}&#xA;```&#xA;&#xA;The Builder structure provided by the strings package is quite important for working with strings in an efficient manner. It is usually used for string concatenation operations. We can perform write operations on its buffer, which is a byte slice. Here we have created the `builder` variable, which is of type `strings.Builder`, and appended the strings to it in a for loop. So, we construct a string from the string slice elements; the source can even be a rune slice or a byte slice.&#xA;We have used three methods here, `WriteString`, `WriteByte`, and `WriteRune`, which are quite obviously used for writing a `string`, a `byte`, and a `rune` to the string builder object.&#xA;&#xA;### Using the Bytes buffer method&#xA;&#xA;The `bytes` package also has something similar to the `Builder` type in `strings`: the [Buffer](https://pkg.go.dev/bytes#Buffer.Bytes) type. It has almost the same set of methods and properties. 
The main difference is the efficiency: `strings.Builder` is comparatively faster than the `bytes.Buffer` type due to several low-level implementation details. We can discuss those fine details in a separate article, but right now we&#39;ll focus on the ways we can utilize this type for string concatenation.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;bytes&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;// Using bytes buffer method&#xA;&#xA;&#x9;var buf bytes.Buffer&#xA;&#xA;&#x9;for i := 0; i &lt; 2; i++ {&#xA;&#x9;&#x9;buf.WriteString(&#34;ja&#34;)&#xA;&#x9;}&#xA;&#x9;fmt.Println(buf.String())&#xA;&#xA;&#x9;buf.WriteByte(&#39;r&#39;)&#xA;&#xA;&#x9;fmt.Println(buf.String())&#xA;&#xA;&#x9;k := []rune{&#39;s&#39;, &#39;c&#39;, &#39;r&#39;, &#39;i&#39;, &#39;p&#39;, &#39;t&#39;}&#xA;&#x9;for _, item := range k {&#xA;&#x9;&#x9;buf.WriteRune(item)&#xA;&#x9;}&#xA;&#x9;fmt.Println(buf.String())&#xA;}&#xA;```&#xA;&#xA;```&#xA;go run concatenate.go&#xA;&#xA;jaja&#xA;jajar&#xA;jajarscript&#xA;```&#xA;&#xA;So, like the `strings.Builder` type, we have [WriteString](https://pkg.go.dev/bytes#Buffer.WriteString), [WriteByte](https://pkg.go.dev/bytes#Buffer.WriteByte), and [WriteRune](https://pkg.go.dev/bytes#Buffer.WriteRune) in the `bytes.Buffer` type. We can use them exactly the way we did in the previous example. Also, the `bytes.Buffer` type stores a slice of bytes, so we use the [String()](https://pkg.go.dev/bytes#Buffer.String) method to format it as a string.&#xA;&#xA;If we look at the [bytes.Buffer](https://go.dev/src/bytes/buffer.go) type, it holds a slice of bytes and two more properties, viz. `off` and `lastRead`. These two properties are used for indicating the index of the byte in the buffer and the reallocation of the buffer. This is low-level stuff that can be explored and explained in a separate section. 
For further reading on the bytes.Buffer or strings.Builder types, you can follow up with these articles: &#xA;&#xA;   - [bytes.Buffer vs strings.Builder](https://medium.com/@felipedutratine/string-concatenation-in-golang-since-1-10-bytes-buffer-vs-strings-builder-2b3081848c45)&#xA;   - [Bytes Buffer](https://syslog.ravelin.com/bytes-buffer-i-thought-you-were-my-friend-4148fd001229)&#xA;   - [Best ways to use bytes.Buffer](https://stackoverflow.com/a/49295215)&#xA;&#xA;&#xA;## String Comparison&#xA;&#xA;Now, we can move on to the comparison of strings in Golang. We have quite a few ways to compare strings in golang; we&#39;ll cover some of them in this section.&#xA;&#xA;### Using Comparison operators&#xA;&#xA;Basic comparison can be done with the comparison operators provided by Golang. Just like we compare numeric data, we can compare strings, though the factor by which we compare them is different: we compare them by the lexical order of the string characters.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import &#34;fmt&#34;&#xA;&#xA;func main() {&#xA;&#x9;s1 := &#34;gopher&#34;&#xA;&#x9;s2 := &#34;Gopher&#34;&#xA;&#x9;s3 := &#34;gopher&#34;&#xA;&#xA;&#x9;isEqual := s1 == s2&#xA;&#xA;  // &#34;gopher&#34; is not the same as &#34;Gopher&#34; and hence `false`&#xA;&#x9;fmt.Printf(&#34;S1 and S2 are Equal? %t \n&#34;, isEqual)&#xA;&#x9;fmt.Println(s1 == s2)&#xA;&#xA;  // &#34;gopher&#34; is not equal to &#34;Gopher&#34; and hence `true`&#xA;&#x9;fmt.Println(s1 != s2)&#xA;&#xA;  // &#34;Gopher&#34; comes before &#34;gopher&#34; lexicographically, so both return true &#xA;  // G -&gt; 71 in ASCII and g -&gt; 103&#xA;&#x9;fmt.Println(s2 &lt; s3)&#xA;&#x9;fmt.Println(s2 &lt;= s3)&#xA;&#xA;  // &#34;Gopher&#34; is not greater than &#34;gopher&#34; as `G` comes first in the ASCII table&#xA;  // So the value of G is less than that of g, i.e. 
71 &lt; 103, so both comparisons below print false&#xA;&#x9;fmt.Println(s2 &gt; s3)&#xA;&#x9;fmt.Println(s2 &gt;= s3)&#xA;&#xA;}&#xA;```&#xA;&#xA;```&#xA;go run comparison.go&#xA;&#xA;S1 and S2 are Equal? false &#xA;false&#xA;true&#xA;true&#xA;true&#xA;false&#xA;false&#xA;```&#xA;&#xA;In the above example, we are able to see the comparison of two strings. There are three strings: two of them (`s1` and `s3`) are identical, and the third (`s2`) is the same word but differs in the case of its first character, so it is not equal to the others. We have compared the strings in order of the ASCII values of their characters. For example, A (65) comes before a (97). Similarly, numbers come before letters. The comparison of these string characters accordingly decides the result.&#xA;&#xA;For the ASCII table, you can take a look at the image below:&#xA;&#xA;![ASCII Table](https://res.cloudinary.com/techstructive-blog/image/upload/v1656423519/blog-media/ascii_table.png)&#xA;&#xA;### Using Compare method&#xA;&#xA;We also have the [Compare](https://pkg.go.dev/strings#Compare) method in the strings package for comparing two strings. The Compare method returns an integer value of either -1, 0, or 1. If the two strings are equal, it will return 0. 
Else, if the first string is lexicographically smaller than the second string, it will return -1; otherwise, it will return +1.&#xA;&#xA;#### strings.Compare &#xA;&#xA;- Return Type: Int (-1, 0, 1)&#xA;- Parameters: string, string&#xA;&#xA;You can check out the [source code](https://go.dev/src/strings/compare.go) for further clarity.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import(&#xA;  &#34;fmt&#34;&#xA;  &#34;strings&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;s1 := &#34;gopher&#34;&#xA;&#x9;s2 := &#34;Gopher&#34;&#xA;&#x9;s3 := &#34;gopher&#34;&#xA;&#xA;&#x9;fmt.Println(strings.Compare(s1, s2))&#xA;&#x9;fmt.Println(strings.Compare(s1, s3))&#xA;&#x9;fmt.Println(strings.Compare(s2, s3))&#xA;}&#xA;```&#xA;&#xA;```&#xA;go run comparison.go&#xA;&#xA;1&#xA;0&#xA;-1&#xA;```&#xA;&#xA;In the above example, the two strings `s1` and `s2` are compared, and the function returns `+1`, indicating the first string is lexicographically greater than the second string, which is true: `&#34;gopher&#34;` comes after `&#34;Gopher&#34;` lexicographically due to the capital `G`.&#xA;&#xA;In the second example, we are comparing the strings `s1` and `s3`, which are equal, and hence the function returns `0` as expected.&#xA;&#xA;In the third example, we are comparing the strings `s2` and `s3`, similar to the first case, but here the order matters. We are comparing `&#34;Gopher&#34;` with `&#34;gopher&#34;`, so the first string is lexicographically smaller than the second string, thereby returning `-1` as discussed.&#xA;&#xA;### Using strings EqualFold&#xA;&#xA;We also have another method in the strings library called [EqualFold](https://pkg.go.dev/strings#EqualFold), which compares two strings but without considering the case of the characters. That is, upper case and lower case are ignored and considered equal. 
So we are truly comparing the strings case-insensitively.&#xA;&#xA;#### strings.EqualFold&#xA;&#xA;- Return Type: bool (true or false)&#xA;- Parameters: string, string&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import(&#xA;  &#34;fmt&#34;&#xA;  &#34;strings&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#xA;&#x9;s1 := &#34;gopher&#34;&#xA;&#x9;s2 := &#34;Gopher&#34;&#xA;&#x9;s3 := &#34;gophy&#34;&#xA;&#xA;&#x9;fmt.Println(strings.EqualFold(s1, s2))&#xA;&#x9;fmt.Println(strings.EqualFold(s1, s3))&#xA;&#x9;fmt.Println(strings.EqualFold(s2, s3))&#xA;}&#xA;```&#xA;&#xA;```&#xA;go run comparison.go&#xA;&#xA;true&#xA;false&#xA;false&#xA;```&#xA;&#xA;So, in the above example, we are comparing the strings `&#34;gopher&#34;` and `&#34;Gopher&#34;`, i.e. `s1` and `s2`, which are equal if we think case-insensitively. Hence the method returns `true`: they are equal.&#xA;In the next example, we compare the strings `s1` and `s3`, i.e. `&#34;gopher&#34;` and `&#34;gophy&#34;`, which are not equal, and hence the method returns `false`. Similar is the case for `&#34;Gopher&#34;` and `&#34;gophy&#34;`, which is `false`. Also, if we compare the two strings `&#34;gophy&#34;` and `&#34;gophy&#34;`, it will quite obviously return `true`.&#xA;&#xA;## String Manipulation and utility methods&#xA;&#xA;The strings package in golang has some great utility methods for working with strings or any form of text. We will explore some of the quite useful and widely used utilities in the strings package.&#xA;&#xA;### ToLower, ToUpper and Title functions&#xA;&#xA;The strings package also provides some utility functions for operating on the case of the characters in strings. 
We have functions like [ToLower](https://pkg.go.dev/strings#ToLower), [ToUpper](https://pkg.go.dev/strings#ToUpper), and [Title](https://pkg.go.dev/golang.org/x/text/cases#Title), which can be used to convert a string into lower-cased, upper-cased, or capitalised (title-cased) characters respectively.&#xA;&#xA;#### strings.ToLower&#xA;&#xA;- Return Type: string &#xA;- Parameters: string&#xA;&#xA;&#xA;#### strings.ToUpper&#xA;&#xA;- Return Type: string &#xA;- Parameters: string&#xA;&#xA;&#xA;#### cases.Title&#xA;&#xA;- Return Type: Caser&#xA;- Parameters: Language Options &#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;strings&#34;&#xA;&#xA;&#x9;&#34;golang.org/x/text/cases&#34;&#xA;&#x9;&#34;golang.org/x/text/language&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;s1 := &#34;Ubuntu 22&#34;&#xA;&#x9;s2 := &#34;meet&#34;&#xA;&#x9;s3 := &#34;IND&#34;&#xA;&#x9;fmt.Println(strings.ToLower(s1))&#xA;&#x9;fmt.Println(strings.ToLower(s2))&#xA;&#x9;fmt.Println(strings.ToLower(s3))&#xA;&#xA;&#x9;fmt.Printf(&#34;\n&#34;)&#xA;&#x9;fmt.Println(strings.ToUpper(s1))&#xA;&#x9;fmt.Println(strings.ToUpper(s2))&#xA;&#x9;fmt.Println(strings.ToUpper(s3))&#xA;&#xA;&#x9;fmt.Printf(&#34;\n&#34;)&#xA;&#x9;caser := cases.Title(language.English)&#xA;&#x9;fmt.Println(caser.String(s1))&#xA;&#x9;fmt.Println(caser.String(s2))&#xA;&#x9;fmt.Println(caser.String(s3))&#xA;}&#xA;```&#xA;&#xA;```&#xA;# 100-days-of-golang/scripts/strings&#xA;&#xA;go mod init&#xA;go get golang.org/x/text/cases&#xA;go get golang.org/x/text/language&#xA;&#xA;go run utility.go&#xA;ubuntu 22&#xA;meet&#xA;ind&#xA;&#xA;UBUNTU 22&#xA;MEET&#xA;IND&#xA;&#xA;Ubuntu 22&#xA;Meet&#xA;Ind&#xA;```&#xA;&#xA;Here, we can see that the function `ToLower` has converted all the characters of a string to their lower-case counterparts. 
Similarly, the `ToUpper` function has turned the characters of the strings into their respective upper-case counterparts. &#xA;&#xA;The [Title](https://pkg.go.dev/strings#Title) method in the strings package has been deprecated due to incompatibility with certain languages and cases. So, we use the [text/cases](https://pkg.go.dev/golang.org/x/text/cases) package to get a [Title](https://pkg.go.dev/golang.org/x/text/cases#Title) method that appropriately converts a string to title case. &#xA;To set up this function, you need to perform a package installation process which is quite straightforward. Just create a go.mod file, which is used for managing dependencies and packages for a project. So run the commands given below in the same order in your local setup:&#xA;&#xA;```&#xA;go mod init&#xA;go get golang.org/x/text/cases&#xA;go get golang.org/x/text/language&#xA;```&#xA;   This will set up a go.mod file and install the packages, namely the `cases` and `language` packages. After doing this, you will be able to access the `Title` function from the cases package, which can be imported via `&#34;golang.org/x/text/cases&#34;` and `&#34;golang.org/x/text/language&#34;`. Now, we can use the Title function and pass the parameter, which is the language type. Here we have used `language.English`, which is a [language Tag](https://pkg.go.dev/golang.org/x/text@v0.3.7/language#Tag), to say: use the semantics of the English language while producing the title-cased characters. We assign the value returned by the function `Title` to a variable, as it will be of type `Caser`, and we still want to pass the string into it. The [Caser](https://pkg.go.dev/golang.org/x/text/cases#Caser) object has certain methods and properties attached to it; we will use the method [String](https://pkg.go.dev/golang.org/x/text/cases#Caser.String), which converts the given string into a title-cased string. 
Hence we obtain the title-cased string using the Title function with the help of the cases and language packages.&#xA;&#xA;### Contains and ContainsAny functions&#xA;&#xA;In the strings package, we have the [Contains](https://pkg.go.dev/strings#Contains) and [ContainsAny](https://pkg.go.dev/strings#ContainsAny) methods, which check for the presence of substrings within a string. These help in looking up patterns and finding substrings in a string.&#xA;&#xA;#### strings.Contains&#xA;&#xA;- Parameters: string, string&#xA;- Return Type: bool (true, false)&#xA;&#xA;The Contains method checks for an exact match of the substring in the given string. The method takes two parameters, one being the actual string and the other being the substring that you want to find inside the string. Let&#39;s say we have the `string=&#34;bootcamp&#34;` and `substr=&#34;camp&#34;`; then the `Contains` method will return true because the string contains the substring `camp`.&#xA;&#xA;#### strings.ContainsAny&#xA;&#xA;- Parameters: string, string&#xA;- Return Type: bool (true, false)&#xA;&#xA;The `ContainsAny` method, just like the Contains method, takes two parameters, the string and the substring, but it returns true if any single character (Unicode code point) of the substring is found inside the string. Let&#39;s say the `string=&#34;google photos&#34;` and `substring=&#34;abcde&#34;`; then the `ContainsAny` method will return true because the string contains at least one character of the substring, which is `e`. 
 &#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;strings&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;str := &#34;javascript&#34;&#xA;&#x9;substr := &#34;script&#34;&#xA;&#x9;s := &#34;python&#34;&#xA;&#xA;&#x9;fmt.Println(strings.Contains(str, substr))&#xA;&#x9;fmt.Println(strings.Contains(str, s))&#xA;&#xA;&#x9;fmt.Println(strings.ContainsAny(str, &#34;joke&#34;))&#xA;&#x9;fmt.Println(strings.ContainsAny(str, &#34;xyz&#34;))&#xA;&#x9;fmt.Println(strings.ContainsAny(str, &#34;&#34;))&#xA;}&#xA;```&#xA;&#xA;Here, we have used the string methods `Contains` and `ContainsAny` to find a substring inside a string. In the first example, the `str` variable is assigned `&#34;javascript&#34;` and the `substr` string is `&#34;script&#34;`. We use the `Contains` function with the parameters `(str, substr)`. This returns `true`, as `&#34;script&#34;` is a substring of `&#34;javascript&#34;`. Also, we have initialized the variable `s` to `&#34;python&#34;`. We use the `Contains` method with the parameters `(str, s)`, which returns `false`, as `&#34;python&#34;` is not a substring of `&#34;javascript&#34;`. &#xA;&#xA;The next set of examples is of the `ContainsAny` method, which returns true if any character of the substring is present in the string :). Quite simple, right? Just understand that any character of your substring being present in the string will return `true`. As we see in the example, the `ContainsAny` method used with the parameters `(&#34;javascript&#34;, &#34;joke&#34;)` returns `true`, as `j` is present inside the string, even though the entire substring is not present. &#xA;&#xA;The next example, passing `(&#34;javascript&#34;, &#34;xyz&#34;)` to the method `ContainsAny`, returns `false`, as we don&#39;t have `&#34;x&#34;`, `&#34;y&#34;`, or `&#34;z&#34;` in the string. The last example, with an empty string as the second parameter, also returns `false`, since there is no character to match. So this is how the `ContainsAny` method works. 
 &#xA;&#xA;Other similar methods you might be interested in learning are [Index](https://pkg.go.dev/strings#Index), [IndexAny](https://pkg.go.dev/strings#IndexAny), [LastIndex](https://pkg.go.dev/strings#LastIndex), etc.; you can find the list of methods in the [strings package](https://pkg.go.dev/strings#pkg-functions).&#xA;&#xA;### Split and SplitAfter functions&#xA;&#xA;We also have methods to split a string around certain patterns. In the strings package, the [Split](https://pkg.go.dev/strings#Split) and [SplitAfter](https://pkg.go.dev/strings#SplitAfter) methods are quite handy to know about.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;strings&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;// Split method for splitting string into slice of string&#xA;&#x9;link := &#34;meetgor.com/blog/golang/strings&#34;&#xA;&#x9;fmt.Println(strings.Split(link, &#34;/&#34;))&#xA;&#x9;fmt.Println(strings.SplitAfter(link, &#34;/&#34;))&#xA;&#xA;&#x9;// SplitAfter method for splitting string into slice of string with the pattern&#xA;&#x9;data := &#34;200kms50kms120kms&#34;&#xA;&#x9;fmt.Println(strings.Split(data, &#34;kms&#34;))&#xA;&#x9;fmt.Println(strings.SplitAfter(data, &#34;kms&#34;))&#xA;}&#xA;```&#xA;&#xA;```&#xA;go run utility.go&#xA;&#xA;[meetgor.com blog golang strings]&#xA;[meetgor.com/ blog/ golang/ strings]&#xA;[200 50 120 ]&#xA;[200kms 50kms 120kms ]&#xA;```&#xA;&#xA;So, from the above examples, we can see how the `Split` and `SplitAfter` methods work. The Split method splits the text around the separator string, removing the separator itself, whereas the `SplitAfter` method keeps the separator and splits right after it, which is why we see the separator string staying attached to the element on its left.&#xA;&#xA;In the first example, we see the `link` variable being split into strings with `&#34;/&#34;` as the separator. 
The Split method returns the slice of string elements that have been split. For the `data` variable, `&#34;kms&#34;` is the separator, so we get the resultant slice as `[200 50 120 ]`; the `&#34;kms&#34;` string acts as a separator and is dropped. Note the trailing empty element: since the input ends with the separator, an empty string is appended after the last split.&#xA;&#xA;In the next example, we see the `link` variable being split with `&#34;/&#34;` as the separator as before, but here the split happens after the separator has been parsed. So, we see `&#34;meetgor.com/&#34;` as the first element, then `&#34;blog/&#34;`, and so on. Similarly, for the `data` variable, we get `&#34;200kms&#34;` as the first element with SplitAfter instead of the `&#34;200&#34;` we got with Split.&#xA;&#xA;### Repeat and Fields functions&#xA;&#xA;In the strings package, we have methods like [Repeat](https://pkg.go.dev/strings#Repeat) and [Fields](https://pkg.go.dev/strings#Fields) for manipulating the text inside a string. These methods are used to populate or extract data from the string. &#xA;&#xA;#### strings.Repeat&#xA;&#xA;- Parameters: string, int&#xA;- Return Type: string&#xA;&#xA;The `Repeat` method creates a new string by repeating the given string n times and returns it as the final string. 
So, the `Repeat` method takes in two parameters, the string to repeat and an integer count, and returns the resulting string.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;strings&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;// Repeat method for creating strings with given string and integer&#xA;&#x9;pattern := &#34;OK&#34;&#xA;&#x9;fmt.Println(strings.Repeat(pattern, 3))&#xA;}&#xA;```&#xA;```&#xA;go run utility.go&#xA;&#xA;OKOKOK&#xA;```&#xA;&#xA;So in this example, we can see that the string `&#34;OK&#34;` is passed to the method `Repeat` with the integer `3`, and thus it is repeated three times and the resultant string becomes `&#34;OKOKOK&#34;`.&#xA;&#xA;#### strings.Fields&#xA;&#xA;- Parameters: string&#xA;- Return Type: []string&#xA;&#xA;The `Fields` method is used to extract the words from a string; that is, it splits the string around runs of one or more consecutive whitespace characters. The function returns a slice of strings.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;strings&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;// Fields method for extracting string from the given string with white space as delimiters&#xA;&#x9;text := &#34;Python is a programming language.   Go is not&#34;&#xA;&#x9;text_data := strings.Fields(text)&#xA;&#x9;fmt.Println(text_data)&#xA;&#x9;for _, d := range text_data {&#xA;&#x9;&#x9;fmt.Println(&#34;data = &#34;, d)&#xA;&#x9;}&#xA;}&#xA;```&#xA;&#xA;```&#xA;go run utility.go&#xA;&#xA;[Python is a programming language. Go is not]&#xA;data =  Python&#xA;data =  is&#xA;data =  a&#xA;data =  programming&#xA;data =  language.&#xA;data =  Go&#xA;data =  is&#xA;data =  not&#xA;```&#xA;&#xA;The above example demonstrates the usage of `Fields`, which splits the text wherever it encounters whitespace. So, we get back a slice of strings whose elements are the whitespace-separated words. 
Thus, using the Fields method we get the words of the string as the elements of the slice. &#xA;&#xA;   You can even expand on this with the [FieldsFunc](https://pkg.go.dev/strings#FieldsFunc) method, which lets you supply a custom splitting function and extract data according to your requirements. There are tons of methods in the strings package for working with strings; you can see a detailed list of functions in the [documentation](https://pkg.go.dev/strings#pkg-functions).&#xA;&#xA;That&#39;s it from this part. Reference for all the code examples and commands can be found in the [100 days of Golang](https://github.com/mr-destructive/100-days-of-golang/) GitHub repository.&#xA;&#xA;## Conclusion&#xA;&#xA;So, from this post, we were able to understand the different methods and types to concatenate and interpolate strings in golang. We explored different ways of concatenating strings, string comparison, and various methods for manipulating and interpolating strings. &#xA;&#xA;Thank you for reading, if you have any queries, feedback, or questions, you can freely ask me on my social handles. Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>Golang: Mutable and Immutable Data Types</title>
      <link>https://www.meetgor.com/posts/golang-mutable-immutable</link>
      <description>Introduction In this 14th Post of the 100 days of GOlang, we will be understanding about the mutable and immutable data types in Golang. Firstly, we will unders</description>
      <pubDate>Tue, 21 Jun 2022 00:00:00 UTC</pubDate>
      <content>## Introduction &#xA;&#xA;In this 14th Post of the 100 days of GOlang, we will be understanding about the mutable and immutable data types in Golang. Firstly, we will understand the concept of mutability and understand the differences in mutable and immutable data types, further we will explore which data types in Golang are Mutable and Immutable.&#xA;&#xA;## Mutable Data Type&#xA;&#xA;Mutable data type is a data type which can be modified without reallocating any chunk of the memory assigned at the time of initialization. In simple words, a variable is mutable if its value can be altered without reallocating itself to a new memory space.&#xA;&#xA;In mutable data type the value located in a memory address can be modified. This means we do not have to reallocate any memory or change the pointer of a variable to point to other address in order to change the value of the variable.&#xA;&#xA;```&#xA;// Initialization &#xA;&#xA;int age = 12&#xA;&#xA;// The value of age is 12 which is stored at some memory address let&#39;s sayy 0x12345&#xA;&#xA;&#xA;// Modifying the value&#xA;&#xA;int age = 13&#xA;&#xA;// If the data type is mutable, we can directly change the value in the memory address&#xA;```&#xA;&#xA;Mutable data type looks quite simple by using a single variable to demonstrate, but if we scale the example a little, things start to escalate pretty quickly.&#xA;&#xA;```&#xA;// Initialization &#xA;&#xA;int age = 12&#xA;// The value of age is 12 which is stored at some memory address let&#39;s sayy 0x12345&#xA;&#xA;int experience = age&#xA;&#xA;// This will make the experience variable point to the address which the age variable is pointing to.&#xA;// Yes, It will point to the same memory address&#xA;&#xA;// Variable    |  Memory Address  |  Value&#xA;&#xA;// age        -&gt;    0x12345       -&gt;   12&#xA;// experience -&gt;    0x12345       -&gt;   12&#xA;&#xA;// Modifying the value&#xA;&#xA;int experience = 17&#xA;&#xA;// A change in either of the 
variables will change the value seen through both &#xA;// Since both the variables are pointing to the same memory address&#xA;&#xA;// Variable    |  Memory Address  |  Value&#xA;&#xA;// age        -&gt;    0x12345       -&gt;   17&#xA;// experience -&gt;    0x12345       -&gt;   17&#xA;&#xA;// If the data type is mutable, we can directly change the value in the memory address&#xA;&#xA;```   &#xA;&#xA;## Immutable Data Types&#xA;&#xA;An immutable data type is a data type which cannot be modified without allocating new memory. So, an immutable data type has to reallocate memory to make changes to the value of a variable. This can be a downside if the variable holds a large set of values, as it will require a lot of memory re-allocation for even a slight change in the value. &#xA;&#xA;Immutable data types also mean that you cannot change the value at the memory address which the variable is pointing to, but the variable can be made to point to a different memory location under the hood to change or modify its content.&#xA;&#xA;```&#xA;&#xA;// Initialization&#xA;&#xA;string fruit = &#34;apple&#34;&#xA;&#xA;// the value &#34;apple&#34; is stored in a memory location let&#39;s say 0x12345 originally&#xA;&#xA;&#xA;&#xA;// Modifying the value&#xA;&#xA;string fruit = &#34;orange&#34;&#xA;&#xA;// the value &#34;orange&#34; will be located in a different memory location say 0x98765&#xA;// The memory address that the variable fruit points to is changed and not the value of the memory address&#xA;// This is called immutability in data types&#xA;```&#xA;&#xA;These are the basics of mutability and immutability. This might be a bit difficult to digest, but take your time and understand it thoroughly. The concept is critical for understanding what happens under the hood while debugging on several occasions.&#xA;&#xA;&#xA;## Mutable data type in Golang&#xA;&#xA;In golang there are a few mutable data types &#xA;&#xA;1. Slice&#xA;2. Array&#xA;3. Map&#xA;4. 
Channels&#xA;&#xA;### Slice and Arrays&#xA;&#xA;Slices and arrays are mutable data types in golang; this means the value of the elements in a slice or array can be changed after initialization without re-allocation of memory.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import &#34;fmt&#34;&#xA;&#xA;func main() {&#xA;&#xA;&#x9;// Slice&#xA;&#x9;s := []int{1, 2, 3}&#xA;&#x9;fmt.Printf(&#34;S[1] -&gt; %p\n&#34;, &amp;s[1])&#xA;&#x9;p := s&#xA;&#x9;p[1] = 4&#xA;&#x9;fmt.Printf(&#34;S[1] -&gt; %p\n&#34;, &amp;s[1])&#xA;&#xA;&#x9;fmt.Println(&#34;s =&#34;, s)&#xA;&#x9;fmt.Println(&#34;p =&#34;, p)&#xA;}&#xA;```&#xA;&#xA;```&#xA;go run mutable.go&#xA;&#xA;S[1] -&gt; 0xc000018200&#xA;S[1] -&gt; 0xc000018200&#xA;s = [1 4 3]&#xA;p = [1 4 3]&#xA;```&#xA;&#xA;In the above example, we can see that the slice `s` has been initialized to `{1, 2, 3}` and another slice `p` has been assigned from it, i.e. it refers to the same backing array as the original slice `s`. If we modify the slice `p`, since it points to the same memory as the slice `s`, it will modify the slice `s` as well (they are effectively the same slice).&#xA;&#xA;This is what mutability does: it modifies the value stored at a memory address directly, without allocating any extra memory for the variable.&#xA;&#xA;### Arrays&#xA;&#xA;You won&#39;t be able to observe this sharing behavior with arrays, as an array variable is not a reference to a backing array; it is itself a fixed-length collection of values of a single type. 
Since assigning an array copies all of its elements rather than a reference, the elements of the original array don&#39;t change if we change an element in the copy.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import &#34;fmt&#34;&#xA;&#xA;func main() {&#xA;&#xA;&#x9;// Arrays&#xA;&#x9;a := [3]int{10, 20, 30}&#xA;&#x9;fmt.Printf(&#34;A[1] -&gt; %p\n&#34;, &amp;a[1])&#xA;&#x9;b := a&#xA;&#x9;b[1] = 40&#xA;&#x9;fmt.Printf(&#34;A[1] -&gt; %p\n&#34;, &amp;a[1])&#xA;&#xA;&#x9;fmt.Println(&#34;a =&#34;, a)&#xA;&#x9;fmt.Println(&#34;b =&#34;, b)&#xA;}&#xA;```&#xA;&#xA;```&#xA;go run mutable.go&#xA;&#xA;A[1] -&gt; 0xc0000aa038&#xA;A[1] -&gt; 0xc0000aa038&#xA;a = [10 20 30]&#xA;b = [10 40 30]&#xA;```&#xA;&#xA;   In the above example, we can see the array `a` is initialized with a fixed length and initial values. We then create another array `b` by assigning the array `a` to it; this creates a copy of the collection of elements at a different memory location. So, if we change an element in the array `b`, there won&#39;t be any change in the elements of array `a`, as the elements are stored in a completely different memory location.&#xA;&#xA;### Map&#xA;&#xA;A map is similar to slices in that a map value is a reference to an underlying data structure. As we explored in the [seventh part](https://www.meetgor.com/golang-maps) of the series, a map is a collection of key-value pairs. The map is internally a reference to a hash map, which in Go is implemented as an array of buckets. Each bucket stores up to 8 key-value pairs along with the high-order bits of the keys&#39; hashes, used to distinguish entries quickly. The number of buckets grows as required: the runtime doubles the bucket count and redistributes the entries. 
For a detailed reference, you can look at the source implementation of [golang&#39;s map](https://github.com/golang/go/blob/master/src/runtime/map.go).&#xA;&#xA;So, a map is mutable, which means if we change the value of a key it is changed directly in memory rather than reallocating the memory space for the entire map.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import &#34;fmt&#34;&#xA;&#xA;func main() {&#xA;&#x9;// Map&#xA;&#x9;m := map[string]int{&#34;level&#34;: 5, &#34;health&#34;: 9}&#xA;&#x9;fmt.Println(&#34;m =&#34;, m)&#xA;&#x9;n := m&#xA;&#x9;n[&#34;food&#34;] = 12&#xA;&#xA;&#x9;fmt.Println(&#34;m =&#34;, m)&#xA;&#x9;fmt.Println(&#34;n =&#34;, n)&#xA;}&#xA;```&#xA;&#xA;```&#xA;go run mutable.go&#xA;&#xA;m = map[health:9 level:5]&#xA;m = map[food:12 health:9 level:5]&#xA;n = map[food:12 health:9 level:5]&#xA;```&#xA;&#xA;In the above example, we can see the map `m` is a string-to-integer map with 2 keys. We create a new map variable called `n` and assign the map `m` to it; this makes the map `n` point to the same hash map as the map `m` does. Hence, if we change the value of a key through the map `n`, the change is visible through the map `m` as well, as `m` and `n` both point to the same map in memory.&#xA;&#xA;## Immutable data type in Golang&#xA;&#xA;In golang, there are a few immutable data types as well, like string, pointers, boolean, and core data types like integer, float, etc. As we discussed, immutable data types are data types that don&#39;t change the value of the variable directly at the provided memory address; instead, the variable is given a memory address holding the new/edited value. &#xA;&#xA;1. Boolean, Int, Float&#xA;2. Pointers&#xA;3. String&#xA;4. Interfaces&#xA;&#xA;### Boolean&#xA;&#xA;The boolean data type in golang is an immutable data type, which means memory is re-allocated for any change in the value of a boolean variable. Boolean variables are simple as they can have only two values, either `true` or `false`. 
If we declare a boolean variable and initialize it with a value, and we later want to change that value, the change is done by reallocating the memory address which was initially holding the value. &#xA;&#xA;```go&#xA;package main&#xA;&#xA;import &#34;fmt&#34;&#xA;&#xA;func main() {&#xA;&#xA;&#x9;// bool&#xA;&#x9;boolean := true&#xA;&#x9;b := boolean&#xA;&#x9;b = false&#xA;&#x9;fmt.Println(&#34;boolean = &#34;, boolean)&#xA;&#x9;fmt.Println(&#34;b = &#34;, b)&#xA;&#xA;}&#xA;```   &#xA;&#xA;```&#xA;go run immutable.go&#xA;&#xA;boolean =  true&#xA;b =  false&#xA;```&#xA;&#xA;   In this example, we can see that the boolean variable `boolean` (I have literally named it boolean) is initialized as `true`; next we declare another variable `b` and set it to the value of `boolean`, which is `true`. We then change the value of the variable `b`, but the value of `boolean` does not change. This alone doesn&#39;t give much clarity on the immutability of the boolean data type, because the reallocation is an under-the-hood, low-level process in golang.&#xA;&#xA;Immutable doesn&#39;t mean you cannot change values; rather, the value is not changed in place. Some under-the-hood (low-level) work alters the value and changes the memory address that is being pointed to as the location which holds the value. 
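
To contrast this with the mutable slice behavior from earlier, here is a small side-by-side sketch (hypothetical variable names) that copies an int and a slice: the int copies stay independent, while the slice copies share one backing array.

```go
package main

import "fmt"

func main() {
	// int: assignment copies the value, so the copies are independent
	x := 10
	y := x
	y = 20
	fmt.Println(x, y) // 10 20

	// slice: assignment copies only the slice header,
	// so both variables share the same backing array
	s := []int{1, 2, 3}
	p := s
	p[0] = 99
	fmt.Println(s[0], p[0]) // 99 99
}
```
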
&#xA;&#xA;To get more clarity, you can refer to this [Stack-Overflow discussion](https://stackoverflow.com/questions/71589811/go-ints-and-strings-are-immutable-or-mutable/71590289#71590289).&#xA;&#xA;&#xA;### Pointers&#xA;&#xA;A pointer is an immutable data type in golang as well: we cannot change the value it holds (in this case, the memory address of the variable we are pointing to) in place; re-pointing the variable means storing a new address in it.&#xA;&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import &#34;fmt&#34;&#xA;&#xA;func main() {&#xA;&#xA;&#x9;// Pointer&#xA;&#x9;n := 567&#xA;&#x9;t := 123&#xA;&#x9;ptr := &amp;n&#xA;&#x9;ptr_new := ptr&#xA;&#x9;fmt.Println(&#34;ptr = &#34;, ptr)&#xA;&#x9;fmt.Println(&#34;ptr_new = &#34;, ptr_new)&#xA;&#xA;&#x9;ptr_new = &amp;t&#xA;&#xA;&#x9;fmt.Println(&#34;ptr = &#34;, ptr)&#xA;&#x9;fmt.Println(&#34;ptr_new = &#34;, ptr_new)&#xA;}&#xA;```&#xA;&#xA;```&#xA;go run immutable.go&#xA;&#xA;ptr =  0xc0000aa008&#xA;ptr_new =  0xc0000aa008&#xA;ptr =  0xc0000aa008&#xA;ptr_new =  0xc0000aa020&#xA;```&#xA;&#xA;   Clearly, in the example, we have initially created a pointer `ptr` which points to the variable `n`; then we create another pointer `ptr_new` and assign the pointer `ptr` to it. This makes both pointers point to the same variable `n`. We then point `ptr_new` at the variable `t`; this changes the value of `ptr_new`, but since a pointer only stores a memory address, `ptr` still holds the address of `n`. 
So, the change in the pointing value of `ptr_new` does not change the value of the pointer `ptr`.&#xA;&#xA;Again, we cannot directly observe immutable types changing the memory location of their values; it is a low-level implementation detail that sometimes needs to be kept in mind.&#xA;&#xA;### String&#xA;&#xA;Strings are the classic example of an immutable data type in golang; this data type is quite commonly used and is quite important for creating a wide variety of applications. The value of a string variable can be changed, but not by overwriting the bytes at the original memory address; the variable is instead made to refer to a memory address holding the new value. This might not be evident even from printing `&amp;variable_name`, as it doesn&#39;t give actual insight into how the memory addresses are managed at run time.&#xA;&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import &#34;fmt&#34;&#xA;&#xA;func main() {&#xA;&#xA;&#x9;// String&#xA;&#x9;str := &#34;gopher&#34;&#xA;&#x9;str_copy := str&#xA;&#x9;str_copy = &#34;cooper&#34;&#xA;&#x9;fmt.Println(&#34;str = &#34;, str)&#xA;&#x9;fmt.Println(&#34;str_copy = &#34;, str_copy)&#xA;}&#xA;```&#xA;&#xA;```&#xA;go run immutable.go&#xA;&#xA;str =  gopher&#xA;str_copy =  cooper&#xA;```&#xA;&#xA;Here, in the above example, we have created a variable `str` with the value `&#34;gopher&#34;` and then another variable `str_copy` which is assigned the value of `str`. If we change the value of `str_copy`, the value of `str` is not changed, but this alone does not demonstrate the immutability of the data type.&#xA;&#xA;What actually gives a clear understanding of string immutability is trying to modify a character of a string. 
We cannot change the character at a particular index of a string in golang.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import &#34;fmt&#34;&#xA;&#xA;func main() {&#xA;&#xA;&#x9;// Character at index cannot be changed in string&#xA;&#xA;&#x9;s := &#34;StarWars&#34;&#xA;&#x9;s[4] = &#39;C&#39;&#xA;&#x9;// s[4] = &#34;C&#34;&#xA;&#x9;// also won&#39;t work&#xA;&#x9;fmt.Println(s)&#xA;&#xA;}&#xA;```&#xA;&#xA;```&#xA;go run immutable.go&#xA;&#xA;immutable.go:18:2: cannot assign to s[4] (value of type byte)&#xA;```&#xA;  &#xA;   So, we can see that we cannot change the internal characters of a string. This is why the string data type is immutable: we cannot change the internal elements of the variable directly, but we can change the value of the variable as a whole, as we saw in the previous example. &#xA;&#xA;So this is how immutable data types behave in golang: we cannot change the value directly at its memory address; any change results in a new location being used for the value, and the old memory is left for the garbage collector to reclaim.&#xA;&#xA;That&#39;s it from this part. Reference for all the code examples and commands can be found in the [100 days of Golang](https://github.com/Mr-Destructive/100-days-of-golang/tree/main/scripts/im-mutable) GitHub repository.&#xA;&#xA;## Conclusion&#xA;&#xA;So, from this small post, we were able to understand the different behaviors of data types in golang. We touched on this at the beginning of the variables part of the series, but it becomes clearer when we play with all the data types and then understand how and why they behave as they do. This was not a deep dive into the core working of immutable and mutable data types but gave a fair bit of understanding of the behavior of different data types in golang.&#xA;&#xA;Thank you for reading, if you have any queries, feedback, or questions, you can freely ask me on my social handles. Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>Django Project with PostgreSQL Deployment on Railway App</title>
      <link>https://www.meetgor.com/posts/django-deploy-railway</link>
      <description>Introduction We have already seen the process of deployment of a Django app on Heroku, now we can move to another platform which is . This allows a faster and m</description>
      <pubDate>Thu, 19 May 2022 00:00:00 UTC</pubDate>
      <content>## Introduction&#xA;&#xA;We have already seen the process of deployment of a Django app on Heroku, now we can move to another platform which is [Railway App](https://railway.app). This allows a faster and more intuitive way to deploy a Django app(or any sort of web app). This platform has several [Starter Apps](https://railway.app/starters) for shipping your app in several clicks, definitely, it would require some configuration and tweaking a bit.&#xA;&#xA;We will be following similar steps from the [Django-Heroku Deployment Guide](https://mr-destructive.github.io/techstructive-blog/series/django-deployment) for configuring and setting up our Django project for the deployment on the Railway app. We explore a few different ways to create and deploy a Django project on Railway.&#xA;&#xA;&#xA;## Creating a Project on Railway App&#xA;&#xA;Once we have our Django project setup, we can move ahead and create a Railway Project. A Railway project is a collection of environments, plugins, services, and deployment variables. By heading on the [New Project](https://railway.app/new) webpage, you can log in or sign up for a free account and create a simple project.&#xA;&#xA;Here we can have a few options:&#xA;&#xA;1. Create a Django App Template (Djangopy Starter)&#xA;2. Pull a Project from a GitHub Repository (Deploy from GitHub Repo)&#xA;&#xA;We&#39;ll look into each of them, the first one is by allowing the `Railway` bot to create a prebuilt Django template on our GitHub account. The second option is for fetching and deploying an existing Django project repository on GitHub.&#xA;&#xA;## Railway Django Project Template&#xA;&#xA;Railway provides a simple Django project template, it consists of a single app and a simple view that displays a `HttpResponse`. The Django project template provided by Railway is open source and available on [GitHub](https://github.com/railwayapp/starters/tree/master/examples/djangopy). 
With this method, you won&#39;t need to create any django project by yourself; a templated django project is created by the railway bot. Though for extending it into a meaningful django project, you will have to clone the repository and make the necessary changes to it.&#xA;&#xA;Create a project by heading over to the [New Project](https://railway.app/new) tab and searching for `Django`; you should see a `django` project as a starter pack.&#xA;&#xA;![New Railway Django Project](https://res.cloudinary.com/techstructive-blog/image/upload/v1652945887/blog-media/django-deployment/railway-django-new-project.png)&#xA;&#xA;Once you select the Django project, an initial configuration tab opens up for us. It is mandatory to fill in the `SECRET_KEY` environment variable, as it makes your django project more secure. Also, do change the name of the `repository`, as it will be used as the repository name in your GitHub account.&#xA;&#xA;To create a `SECRET_KEY`, you can move into your terminal and do some python wizardry.&#xA;&#xA;Open a Python REPL by entering the command `python` or `python3`. Import the `secrets` module and run the function `token_hex(24)`, which will generate a key from 24 random bytes.&#xA;&#xA;```&#xA;python&#xA;&#xA;import secrets&#xA;secrets.token_hex(24)&#xA;```&#xA;&#xA;![Python Secret Key Generation](https://res.cloudinary.com/techstructive-blog/image/upload/v1652946842/blog-media/django-deployment/python-secret-key-gen.png)&#xA;&#xA;Now, copy the generated key without quotes into the `SECRET_KEY` prompt, and this will create a repository on your GitHub with the provided name. 
The `Railway` Bot will create a django project with the name `djangopy` in that repository with some pre-configured settings.&#xA;&#xA;![Django Project Create](https://res.cloudinary.com/techstructive-blog/image/upload/v1652947362/blog-media/django-deployment/create-project-django.png)&#xA;&#xA;![Railway Bot creating Djangopy](https://res.cloudinary.com/techstructive-blog/image/upload/v1652947344/blog-media/django-deployment/railway-bot-djangopy.png)&#xA;&#xA;This will create a simple repository on your GitHub and also deploy the django application on Railway with a PostgreSQL database attached to it.&#xA;&#xA;![Railway Django Project Dashboard](https://res.cloudinary.com/techstructive-blog/image/upload/v1652947689/blog-media/django-deployment/deployed-project-dashboard.png)&#xA;&#xA;So, this has deployed the project on Railway at the URL [https://djangopy-production-43cb.up.railway.app/](https://djangopy-production-43cb.up.railway.app/). The domain can be configured from the Settings tab of the dashboard by editing the Service Domains, and you can even add a custom domain.&#xA;&#xA;So this is how we deploy a basic django application on the railway app. Further, you can modify the contents of the GitHub repository and push the code by committing the changes; Railway will pick it up from there and deploy it automatically, thereby creating a sort of CI/CD pipeline.&#xA;&#xA;We&#39;ll be demonstrating this in the next section, which is a method to deploy the project from a custom GitHub repository, i.e. by setting up everything ourselves and then attaching the Railway app for deployment. The benefit of this method is that we can configure an already existing Django application by tinkering a bit.&#xA;&#xA;## Deploying from GitHub repository to Railway&#xA;&#xA;This is the second method for deploying the Railway project. 
For this we need a proper Django project. We will set up a django application from scratch; I have already created a simple CRUD application in Django for a blog on [GitHub](https://github.com/Mr-Destructive/django-blog). This won&#39;t be a Django guide for setting up views and creating models; I&#39;ve explained all the setup of the django-blog in my [Django Basics series](https://mr-destructive.github.io/techstructive-blog/series/django-basics).&#xA;&#xA;### Creating a Django Application&#xA;&#xA;For deploying an app, we definitely need an app, so let&#39;s create a basic Django application to deploy on the web. We&#39;ll be creating a simple blog application with a couple of views and a simple model structure. As for the database, we&#39;ll be using Postgres, as Railway has a database service for it and it is pretty easy to configure.&#xA;&#xA;#### Set up a virtual environment&#xA;&#xA;We need to set up a virtual environment in order to keep the Django project neat and tidy by managing the project-specific dependencies and packages. 
We can use the [virtualenv](https://pypi.org/project/virtualenv/) package to isolate a python project from the rest of the system.&#xA;&#xA;```&#xA;# install the virtualenv package&#xA;pip install virtualenv&#xA;&#xA;# create a virtual env for the project&#xA;virtualenv .venv&#xA;&#xA;# activate the virtualenv&#xA;# Windows:&#xA;.venv\Scripts\activate&#xA;&#xA;# Linux/macOS:&#xA;source .venv/bin/activate&#xA;```&#xA;&#xA;This will set things up nicely for a Django project; you can now install the core Django package and get started with creating a Django application.&#xA;&#xA;&#xA;```bash&#xA;# install django&#xA;pip install django&#xA;&#xA;# start a django project in the current directory&#xA;django-admin startproject blog .&#xA;&#xA;# create an application in the django project&#xA;python manage.py startapp api&#xA;&#xA;# Create some models, views, URLs, templates&#xA;&#xA;# run the server&#xA;python manage.py runserver&#xA;```&#xA;&#xA;We assume you already have a Django project configured with some basic URLs, views, and templates or static files as per your project and requirements. For this tutorial I will be using the simple blog application from my previous Django tutorials as a reference. As said earlier, you can follow along with my [Django Basics](https://techstructiveblog.hashnode.dev/series/django-basics) series and refer to the Blog Application project on [GitHub](https://github.com/Mr-Destructive/django-blog).&#xA;&#xA;### Configuring the Django Application&#xA;&#xA;Make sure to create and activate the virtual environment for this django project. This should be done to manage the dependencies and packages used in the project. If you are not aware of the virtual environment and django setup, you can follow up with this [post](https://mr-destructive.github.io/techstructive-blog/django-setup-script/).&#xA;&#xA;#### Creating a runtime.txt file&#xA;&#xA;Now, first we need to specify which language and version we are using. 
Since Django is a Python-based web framework, we need to select the python version in a text file.&#xA;&#xA;**runtime.txt**&#xA;```&#xA;python-3.9.5&#xA;```&#xA; &#xA;Here, the version can be anything as per your project and the packages installed.  &#xA;&#xA;#### Creating requirements.txt file&#xA;&#xA;We&#39;ll first create a `requirements.txt` file for storing all the dependencies and packages installed in the application. This will help in installing dependencies while deploying the application. We can either use a `requirements.txt` file using `virtualenv` or a `Pipfile` using Pipenv. Both serve the same purpose but a bit differently. &#xA;&#xA;Assuming you are in an isolated virtual environment for this Django project, you can create a requirements.txt file using the below command:&#xA;&#xA;Make sure the virtualenv is activated before running the command:&#xA;&#xA;```&#xA;pip freeze &gt; requirements.txt&#xA;```&#xA;&#xA;This will create a simple text file that contains the package names along with the versions used in the current virtual environment. A simple Django requirements file would look something like this:&#xA;&#xA;```&#xA;asgiref==3.4.1&#xA;Django==3.2.11&#xA;pytz==2021.3&#xA;sqlparse==0.4.2&#xA;typing_extensions==4.0.1&#xA;```&#xA;&#xA;![pip freeze command](https://res.cloudinary.com/techstructive-blog/image/upload/v1652956558/blog-media/django-deployment/railway_requirements_freeze.png)&#xA;&#xA;This is vanilla Django without any additional dependencies but if you have integrated other functionalities like Django Rest Framework, PostgreSQL, Crispy Forms, Schedulers, etc. there might be additional dependencies that become quite crucial for the smooth working of the project.  &#xA;&#xA;If you are using pipenv, you don&#39;t need to make any efforts to manually activate and manage the virtual environment, the dependencies are installed and taken care of by the pipenv installer. 
You just need to make sure to install any package with `pipenv install` and not `pip install` for better and improved package tracking.&#xA;&#xA;So, that&#39;s all we need to take care of the packages and keep a list of the integrated packages for the project. You need to update the requirements.txt file every time you install any new package or modify the dependencies for a project. Simply use the command `pip freeze &gt; requirements.txt` in the activated virtual environment.  &#xA;&#xA;#### Creating a Procfile&#xA;&#xA;Next up, we have the `Procfile`; a Procfile is a special file that holds information about the processes to be run to start the project. In our case, for django, we need a web process that can manage the server.&#xA;&#xA;A Procfile is a simple file without any extension; make sure to name the file exactly `Procfile` and place it in the root folder of the project. Inside the file add the following contents:&#xA;&#xA;**Procfile**&#xA;```Procfile&#xA;web: gunicorn &lt;project_name&gt;.wsgi&#xA;```&#xA;&#xA;For the Procfile, Railway has a prebuilt [Procfile](https://github.com/railwayapp/starters/blob/master/examples/djangopy/Procfile) in the Django template; you can refer to it and create yours as follows:&#xA;&#xA;```Procfile&#xA;web: python manage.py migrate &amp;&amp; gunicorn &lt;project_name&gt;.wsgi&#xA;```&#xA;&#xA;As we can see, we have defined the `web` process using `gunicorn`. [Gunicorn](https://pypi.org/project/gunicorn/) is a python package that provides a WSGI HTTP server for UNIX operating systems. So, we need to install the package and update the package dependency list. &#xA;&#xA;```&#xA;pip install gunicorn&#xA;&#xA;pip freeze &gt; requirements.txt&#xA;``` &#xA;&#xA;#### Configuring Environment Variables&#xA;&#xA;We need to keep the secrets for the django project out of the deployed code and in a safe place; we can create environment variables and keep them in a `.env` file which will be git-ignored. 
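
As in the template flow earlier, a fresh key can be generated with Python's standard-library `secrets` module (24 is an arbitrary byte count, matching the earlier example):

```python
# generate a random secret key: 24 random bytes rendered as 48 hex characters
import secrets

key = secrets.token_hex(24)
print(key)
```
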
We do not want this `.env` file to be open source, and thus it should not be committed.&#xA;&#xA;We first need to create a new secret key: if you already have a GitHub repository, chances are you have committed the default secret key for the world to see, which is an insecure way of deploying django apps in production. Generate a fresh key as we did earlier, with `secrets.token_hex(24)` in a Python REPL.&#xA;&#xA;This should generate a secret key that only you know. So, just copy the key without the quotes and create a file `.env` in the root project folder. &#xA;&#xA;**.env**&#xA;```&#xA;SECRET_KEY=76419fd6885a677f802fd1d2b5acd0188e23e001042b05a8&#xA;```&#xA;&#xA;The `.env` file should also be added to the `.gitignore` file, so simply append the following in the `.gitignore` file&#xA;&#xA;```&#xA;.env&#xA;```&#xA;&#xA;This is just one of the environment variables in our django project; further on, we will also be adding a few other variables like database credentials, debug status, etc.&#xA;&#xA;We have now created environment variables for our django application; we now need a way to parse these environment variables into the Django project.  &#xA;&#xA;#### Parsing Environment variables using python-dotenv&#xA;&#xA;We will use [python-dotenv](https://pypi.org/project/python-dotenv/) to parse variables into the django settings configurations like `SECRET_KEY` and `DATABASES`. &#xA;&#xA;- Install `python-dotenv` with pip using the command:  &#xA;&#xA;```&#xA;pip install python-dotenv&#xA;```&#xA;&#xA;We then need to modify the default variables in the `settings.py` file. 
Firstly, we will load in the `.env` file for accessing the environment variables for the configuration of the project.&#xA;&#xA;Append the following code to the top of the `settings.py` file; make sure you don&#39;t override the existing configuration, so remove any duplicate imports and settings.&#xA;&#xA;``` python&#xA;# &lt;project_name&gt;/settings.py&#xA;&#xA;import os&#xA;from pathlib import Path&#xA;&#xA;from dotenv import load_dotenv&#xA;&#xA;BASE_DIR = Path(__file__).resolve().parent.parent&#xA;&#xA;load_dotenv(os.path.join(BASE_DIR, &#34;.env&#34;))&#xA;&#xA;```&#xA;&#xA;We have imported the `dotenv` package (from `python-dotenv`) into the `settings.py` file, along with the module `os`, for loading the `.env` file. The `load_dotenv` function loads the `key-value` pairs, which are configuration variables that act as actual environment variables. We pass a file path to the [load_dotenv](https://saurabh-kumar.com/python-dotenv/) function, which is the `.env` file in our case; you can pick any location for the `.env` file, but make sure to change the path passed to the `load_dotenv` function accordingly. &#xA;&#xA;After loading the variables into the `settings.py` file, we now need to access them and set the appropriate configuration values. We use the `os.getenv` function to access the environment variables. The `os.getenv` function takes the `key` of the environment variable as a parameter and returns the value of the environment variable.&#xA;&#xA;``` python&#xA;SECRET_KEY = os.getenv(&#34;SECRET_KEY&#34;)&#xA;```&#xA;&#xA;We have securely configured the `SECRET_KEY` for the django project. If you have any other variables as simple key-value pairs, like `AUTH` passwords, usernames, etc., you can use this method to get the configuration variables. &#xA;&#xA;#### Add gitignore file&#xA;&#xA;We need a `.gitignore` file so that only the intended project files are committed and pushed to the remote repository. 
Here, we will set up a minimal `.gitignore` file in our root repository.&#xA;&#xA;Here&#39;s a sample `.gitignore` for a minimal django project.&#xA;&#xA;```gitignore&#xA;.env/&#xA;.venv/&#xA;env/&#xA;venv/&#xA;*.env&#xA;&#xA;*.pyc&#xA;db.sqlite3&#xA;staticfiles/&#xA;```&#xA;&#xA;If you want a full-fledged `.gitignore` for a complex django project, you can take reference from Jose Padilla&#39;s [gitignore Template](https://github.com/jpadilla/django-project-template/blob/master/.gitignore) for a django project.&#xA;&#xA;With that in place, the project is ready to be deployed on the Railway app.&#xA;&#xA;### Git Commit the Django Project&#xA;&#xA;Now, we can safely commit the code and push it to a GitHub repository. This will make sure you have a basic django project on GitHub from which Railway can build the deployment.&#xA;&#xA;```&#xA;git status &#xA;&#xA;git add .&#xA;&#xA;git commit -m &#34;config for railway deployment&#34;&#xA;```&#xA;&#xA;Carefully check the status of the git repository before committing and make sure you don&#39;t forget anything by mistake; only commit the files which you think are important and ignore the rest.&#xA;&#xA;```&#xA;git remote add rail https://github.com/Mr-Destructive/django-blog&#xA;&#xA;git push rail railway&#xA;```&#xA;&#xA;![Django Project Push GitHub](https://res.cloudinary.com/techstructive-blog/image/upload/v1652956515/blog-media/django-deployment/railway_push_github.png)&#xA;&#xA;### Creating the Railway project&#xA;&#xA;Now, since we have a django project nicely configured and set up on GitHub, we can create a railway project and fetch the code from the GitHub repository.&#xA;&#xA;&lt;video width=&#34;800&#34; height=&#34;450&#34; controls&gt;&#xA;  &lt;source src=&#34;https://res.cloudinary.com/techstructive-blog/video/upload/v1652970864/blog-media/django-deployment/railway_project_init.webm&#34; 
type=&#34;video/mp4&#34;&gt;&#xA;&lt;/video&gt;&#xA;&#xA;![Railway Project from GitHub](https://res.cloudinary.com/techstructive-blog/image/upload/v1652970665/blog-media/django-deployment/railway-proejct-github.png)&#xA;&#xA;#### Spinning up the database &#xA;&#xA;We also need a database that we can attach to the django project in our Railway application. We can integrate a `PostgreSQL` database as a service running in our Railway project instance. We can do that by locating the `+New` tab on the project dashboard and attaching a `Database` from the drop-down menu.&#xA;&#xA;&#xA;&lt;video width=&#34;800&#34; height=&#34;450&#34; controls&gt;&#xA;  &lt;source src=&#34;https://res.cloudinary.com/techstructive-blog/video/upload/v1652963718/blog-media/django-deployment/postgres_spinup_railway_d2xkpt.mp4&#34; type=&#34;video/mp4&#34;&gt;&#xA;&lt;/video&gt;&#xA;&#xA;After creating a database service, we need the credentials of the database, or the `DATABASE_URL` of the PostgreSQL instance, in order to integrate it into the django settings. We can navigate to the `Connect` tab of the PostgreSQL service and grab the URL of the database. This can be stored in the main django application service as an environment variable.&#xA;&#xA;```&#xA;DATABASE_URL=postgresql://postgres:SE74bEw@containers-51.railway.app:6919/railway&#xA;```&#xA;&#xA;![PostgreSQL database variable config](https://res.cloudinary.com/techstructive-blog/image/upload/v1652964755/blog-media/django-deployment/postgres_database_config.png)&#xA;&#xA;In the Django Starter Project provided by Railway, we should have a PostgreSQL database integrated as a service, so we can consume the database in our django project.&#xA;&#xA;We will add the `DATABASE_URL` config variable into the `.env` file and also add it into the main Django project service so that it can communicate with the database. You need to copy the `DATABASE_URL` into our local setup file (`.env` file). 
Copy the Database URL and paste it into the `.env` file as follows:&#xA;&#xA;```env&#xA;DATABASE_URL=postgres://sjxgipufegmgsw:78cbb568e@ec2-52-4-104-184.compute-1.amazonaws.com:5432/dbmuget&#xA;```&#xA;&#xA;The format for the Postgres URL is as follows:&#xA;&#xA;```&#xA;postgresql://[user[:password]@][netloc][:port][/dbname]&#xA;```&#xA;&#xA;#### Loading Database configuration&#xA;&#xA;Databases are a bit different compared to other simple configurations in django. We need to make a few adjustments to the default database configuration. We will install another package, `dj-database-url`, to configure `DATABASE_URL`. Since the database URL has a few components, we need a way to extract details like the `hostname`, `port`, `database_name`, and `password`. The `dj-database-url` package provides a few functions that serve this purpose.&#xA;&#xA;```&#xA;pip install dj-database-url&#xA;```&#xA;&#xA;At the end of your `settings.py` file, append the following code.&#xA;&#xA;``` python&#xA;import dj_database_url&#xA;&#xA;DATABASE_URL = os.getenv(&#34;DATABASE_URL&#34;)&#xA;&#xA;DATABASES = {&#xA;    &#34;default&#34;: dj_database_url.config(default=DATABASE_URL, conn_max_age=1800),&#xA;}&#xA;```&#xA;&#xA;We also need an adapter for making migrations to the `PostgreSQL` database, i.e. the `psycopg2` package. This is a mandatory step if you are working with a `postgres` database. It can be installed with a simple pip install:&#xA;&#xA;```&#xA;pip install psycopg2&#xA;&#xA;# If it does not work try&#xA;pip install psycopg2-binary&#xA;&#xA;&#xA;# if the error still persists, try installing setuptools&#xA;pip install -U setuptools&#xA;pip install psycopg2&#xA;```&#xA;&#xA;Now that we have configured the database, we can apply migrations to the new Postgres database provided by Railway. 
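Before running them, it can help to sanity-check what `dj-database-url` will extract from the URL. As an illustrative aside (not part of the project code, and using made-up credentials), the standard library&#39;s `urlsplit` can break a URL of the same format into its components:

``` python
from urllib.parse import urlsplit

# A made-up URL in the same postgresql://user:password@netloc:port/dbname format
url = urlsplit('postgresql://postgres:dummy-password@containers-51.railway.app:6919/railway')

print(url.username)          # postgres
print(url.hostname)          # containers-51.railway.app
print(url.port)              # 6919
print(url.path.lstrip('/'))  # railway
```

These are the same pieces (`hostname`, `port`, database name, `password`) that `dj_database_url.config` fills into the `DATABASES` setting for us.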
We will simply run the migrate command, and the PostgreSQL database in the Railway project will be modified with the appropriate schema applied.&#xA;&#xA;```&#xA;python manage.py migrate&#xA;```&#xA;&#xA;![Railway PostgreSQL db migrate](https://res.cloudinary.com/techstructive-blog/image/upload/v1652965335/blog-media/django-deployment/railway_database_migrate.png)&#xA;&#xA;Make sure to update the `requirements.txt` file before pushing the project to the Railway app to make sure everything works as expected. Since we have installed a few additional packages that are directly used in the `settings.py` file, we need to run the `pip freeze` command to update the `requirements.txt` file.&#xA;&#xA;#### Serving Static Files&#xA;&#xA;Now, if you have some static files like `CSS`, `Javascript`, or `images`, you need to configure the static files in order to serve them from the Railway app server. We will require another config variable for collecting the static files from the selected repository.&#xA;&#xA;```python&#xA;&#xA;STATIC_URL = &#34;static/&#34;&#xA;STATICFILES_DIRS = [os.path.join(BASE_DIR, &#34;static&#34;)]&#xA;STATIC_ROOT = os.path.join(BASE_DIR, &#34;staticfiles&#34;)&#xA;&#xA;```&#xA;&#xA;Here, if you serve your static files from the `static` folder in the root directory of your django project, you can add the above code in the settings.py file. We will collect all static files in that folder, along with the default static files provided by django, in the `staticfiles` directory. Run the following command if you want to test whether the static files are properly collected and served.&#xA;&#xA;```&#xA;python manage.py collectstatic &#xA;```&#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1652600828657/JgL4nLuiL.png)&#xA;&#xA;So, this command will collect all the static files and store them in a single place. 
We see that the files from the admin section are copied, as well as the custom static files from the project configuration. Now, we can move on to set the config variable for the Railway app in the Variables Tab.&#xA;&#xA;```&#xA;DISABLE_COLLECTSTATIC = 0&#xA;```&#xA;&#xA;We can set the `DISABLE_COLLECTSTATIC` variable as `0` or `1`, indicating whether to disable it or not. We have currently enabled static file collection while deploying the app, but you can set it to `1` to disable the collection of static files.&#xA;&#xA;When I first tested the application on Railway, the static files didn&#39;t work as expected; we need another package to make sure the static files are served properly. We will be installing the `whitenoise` package, which serves as the middleware for serving the static files.&#xA;&#xA;```&#xA;pip install whitenoise&#xA;```&#xA;&#xA;Add the whitenoise middleware `whitenoise.middleware.WhiteNoiseMiddleware` to the `MIDDLEWARE` list in the `settings.py` file.&#xA;&#xA;```python&#xA;MIDDLEWARE = [&#xA;...&#xA;...&#xA;...&#xA;    &#39;whitenoise.middleware.WhiteNoiseMiddleware&#39;,&#xA;]&#xA;&#xA;```&#xA;&#xA;That should be enough for the deployment on the Railway app. You will have to make a few adjustments as per your requirements and project.&#xA;&#xA;&#xA;## Deploy from GitHub&#xA;&#xA;Finally, we have all the pieces connected; we only need to push the code to the GitHub repository in order to trigger a build on the railway app. 
So, we make sure we commit everything critical in our django project and list every piece of sensitive information in the gitignore file.&#xA;&#xA;```&#xA;pip freeze &gt; requirements.txt&#xA;```&#xA;&#xA;This step is quite important because you need to make sure that all the packages are listed in the `requirements.txt` file, else you will have to wait for the build to fail and redeploy.&#xA;&#xA;Make sure the server is running first on your local machine; remember, the server will be set up from scratch, but the database will already have the migrations applied if you ran them earlier after connecting the Railway database service.&#xA; &#xA;```&#xA;python manage.py collectstatic&#xA;&#xA;python manage.py runserver&#xA;```&#xA;&#xA;This assumes the remote repository that we will push the project code to is already set up. Next, make sure to commit the code, which should contain all the required pieces for deploying the project.&#xA;&#xA;Checklist for deploying the code&#xA;&#xA;```&#xA;- requirements.txt&#xA;- Procfile&#xA;- runtime.txt&#xA;- django-project&#xA;- environment variables / config variables &#xA;- static file configuration&#xA;- database configuration&#xA;- migrate schema of database &#xA;- gitignore file for ignoring virtualenvs, .env file, staticfiles, etc&#xA;```&#xA;&#xA;```&#xA;git push origin main&#xA;&#xA;origin -&gt; remote repository URL&#xA;main   -&gt; branch name&#xA;```&#xA;&#xA;![Deployed Railway Project](https://res.cloudinary.com/techstructive-blog/image/upload/v1652966755/blog-media/django-deployment/railway-production-ss.png)&#xA;&#xA;## Railway CLI &#xA;&#xA;Railway also provides a cool CLI; it has some quite handy features like creating and managing services, a local development environment, etc. We&#39;ll just dive into a few nice features of the CLI tool.&#xA;&#xA;We&#39;ll first install the CLI on our local system; the installation guide offers a few options to choose from, like `npm`, `shell`, and `scoop`. 
For me, the shell was the way to go, but it had a few issues with permissions, so I made a few changes in the [install.sh](https://github.com/railwayapp/cli/blob/master/install.sh) script, ran it on my machine, and it worked fine.&#xA;&#xA;![Install Railway CLI](https://res.cloudinary.com/techstructive-blog/image/upload/v1652944836/blog-media/django-deployment/railway-install.png)&#xA;&#xA;Now that we have the `Railway CLI` set up, we can run a few commands like:&#xA;&#xA;```&#xA;railway login&#xA;&#xA;OR&#xA;&#xA;# if it doesn&#39;t work&#xA;railway login --browserless&#xA;&#xA;OR&#xA;&#xA;# with node&#xA;npx railway login&#xA;```&#xA;&#xA;![Railway CLI Login](https://res.cloudinary.com/techstructive-blog/image/upload/v1652967978/blog-media/django-deployment/railway-cli-login.png)&#xA;&#xA;This will sign you in with your railway account.&#xA;&#xA;We now need to link our project so that we can run commands against that project from the railway app. To link a project from a railway account, you have to use the `link` command. The link command takes the project id as a parameter, which can be obtained from the project dashboard settings tab.&#xA;&#xA;```&#xA;railway link &lt;project_id&gt;&#xA;```&#xA;&#xA;&lt;video width=&#34;800&#34; height=&#34;450&#34; controls&gt;&#xA;  &lt;source src=&#34;https://res.cloudinary.com/techstructive-blog/video/upload/v1652969451/blog-media/django-deployment/railway-proejct-link.webm&#34; type=&#34;video/mp4&#34;&gt;&#xA;&lt;/video&gt;&#xA;&#xA;&#xA;Now, we can explore some more commands like `run`, `up`, and so on. The `run` command is pretty solid and provides an interface to run commands for your project, for instance in Django: creating a superuser, managing dependencies, collecting static files, etc. 
This allows us to set up and run commands in the actual production environment right from the CLI.&#xA;&#xA;&#xA;```&#xA;# railway run (command to be executed)&#xA;railway run python manage.py createsuperuser&#xA;&#xA;# deploy the project&#xA;railway up&#xA;```&#xA;&#xA;&lt;video width=&#34;800&#34; height=&#34;450&#34; controls&gt;&#xA;  &lt;source src=&#34;https://res.cloudinary.com/techstructive-blog/video/upload/v1652968693/blog-media/django-deployment/railway-cli-run-cmd.webm&#34; type=&#34;video/mp4&#34;&gt;&#xA;&lt;/video&gt;&#xA;&#xA;So, this was all about creating and deploying a django application on Railway. Here are some reference links:&#xA;&#xA;- [GitHub Repository](https://github.com/Mr-Destructive/django-blog/tree/railway)&#xA;- [Live Webpage](https://django-blog-production.up.railway.app/)&#xA;&#xA;## Conclusion&#xA;&#xA;So, that&#39;s how we deploy a Django project on the Railway app. We have seen two of the many ways to create and deploy a Django application on Railway. We also integrated a PostgreSQL service provided by Railway in our Django project. Using the Railway CLI, we were able to manage and create projects from the terminal, and we were also able to interact with the railway project and trigger a production build from the CLI.&#xA;&#xA;Hopefully, you liked the above tutorial; if you have any questions, feedback, or queries, you can contact me on the social handles provided below. Thank you for reading, and till the next post, Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>Django + PostgreSQL Deployment on Heroku</title>
      <link>https://www.meetgor.com/posts/django-deploy-heroku</link>
      <description>Introduction Django projects are quite easy to build and simple to understand, you might have created a Django application and wanted to show it to the world? Y</description>
      <pubDate>Sun, 15 May 2022 00:00:00 UTC</pubDate>
      <content>## Introduction&#xA;&#xA;Django projects are quite easy to build and simple to understand, you might have created a Django application and wanted to show it to the world? You can deploy a basic Django application with a database(PostgreSQL) with Heroku. It provides a decent free tier with some great features and add-ons. A free tier Heroku account has a limitation of 5 apps, limited data in the database, limited connections to the server per month, and so on.  &#xA;&#xA;Though the free tier is not a great option for bigger applications, it suits really well for smaller scale and ide projects, so we will be looking into the details of how to deploy a Django application to [Heroku](https://heroku.com/) which is a Platform as Service (PaS). &#xA;&#xA;This series will be an extension of the series [Django basics](https://techstructiveblog.hashnode.dev/series/django-basics) which covered the basics of the Django framework, we covered from basic Django fundamentals to building a CRUD API. In this series, we will be exploring some platforms for giving a better understanding of how things work and understanding a few components that were left as default while understanding the basics of Django. Let&#39;s get started with [Django Deployment](https://techstructiveblog.hashnode.dev/series/django-deployment)!&#xA;&#xA;## Creating a Django Application&#xA;&#xA;For deploying an app, we definitely need an app, we need to create a basic Django application to deploy on the web. We&#39;ll be creating a simple blog application with a couple of views and a simple model structure. As for the database, we&#39;ll be using Postgres as Heroku has an add-on for it and it is pretty easy to configure. &#xA;&#xA;### Set up a virtual environment&#xA;&#xA;We need to set up a virtual environment in order to keep the Django project neat and tidy by managing the project-specific dependencies and packages. 
We can use the [virtualenv](https://pypi.org/project/virtualenv/) package to isolate a python project from the rest of the system.&#xA;&#xA;```&#xA;# install the virtualenv package&#xA;pip install virtualenv&#xA;&#xA;# create a virtual env for the project&#xA;virtualenv .venv&#xA;&#xA;# activate the virtualenv&#xA;Windows:&#xA;.venv\Scripts\activate&#xA;&#xA;Linux/macOS:&#xA;source .venv/bin/activate&#xA;```&#xA;&#xA;This sets up an isolated environment for the Django project; you can now install the core Django package and get started with creating a Django application.&#xA;&#xA;&#xA;```bash&#xA;# install django&#xA;pip install django&#xA;&#xA;# start a django project in the current directory&#xA;django-admin startproject blog .&#xA;&#xA;# create an application in the django project&#xA;python manage.py startapp api&#xA;&#xA;# Create some models, views, URLs, templates&#xA;&#xA;# run the server&#xA;python manage.py runserver&#xA;```&#xA;&#xA;We assume you already have a Django project configured with some basic URLs, views, and templates or static files as per your project and requirements; for this tutorial, I will be using the simple blog application from my previous Django tutorials as a reference. You can follow along with my [Django Basics](https://techstructiveblog.hashnode.dev/series/django-basics) series and refer to the Blog Application project on [GitHub](https://github.com/Mr-Destructive/django-blog).&#xA;&#xA;## Configuring the Django Application&#xA;&#xA;Make sure to create and activate the virtual environment for this django project. This should be done to manage the dependencies and packages used in the project. If you are not aware of the virtual environment and django setup, you can follow up with this [post](https://mr-destructive.github.io/techstructive-blog/django-setup-script/).&#xA;&#xA;### Creating a runtime.txt file&#xA;&#xA;First, we need to specify which language runtime and version we are using. 
Since Django is a Python-based web framework, we need to specify the python version in a text file.&#xA;&#xA;**runtime.txt**&#xA;```&#xA;python-3.9.5&#xA;```&#xA; &#xA;Here, the version can be anything as per your project and the packages installed.&#xA;&#xA;### Creating requirements.txt file&#xA;&#xA;We&#39;ll first create a `requirements.txt` file for storing all the dependencies and packages installed in the application. This will help in installing dependencies while deploying the application. We can either use a `requirements.txt` file using `virtualenv` or a `Pipfile` using Pipenv. Both serve the same purpose but a bit differently.&#xA;&#xA;Assuming you are in an isolated virtual environment for this Django project, you can create a requirements.txt file using the below command.&#xA;&#xA;Make sure the virtualenv is activated before running the command:&#xA;&#xA;```&#xA;pip freeze &gt; requirements.txt&#xA;```&#xA;&#xA;This will create a simple text file that contains the package names along with the versions used in the current virtual environment. A simple Django requirements file would look something like this:&#xA;&#xA;```&#xA;asgiref==3.4.1&#xA;Django==3.2.11&#xA;pytz==2021.3&#xA;sqlparse==0.4.2&#xA;typing_extensions==4.0.1&#xA;```&#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1652614060461/kPTZ9R8Xp.png)&#xA;&#xA;This is vanilla Django without any additional dependencies, but if you have integrated other functionalities like Django Rest Framework, PostgreSQL, Crispy Forms, Schedulers, etc., there might be additional dependencies that become quite crucial for the smooth working of the project.&#xA;&#xA;If you are using pipenv, you don&#39;t need to manually activate and manage the virtual environment; the dependencies are installed and taken care of by the pipenv installer. 
You just need to make sure to install any package with `pipenv install` and not `pip install` for better and improved package tracking.&#xA;&#xA;So, that&#39;s all we need to take care of packages and keep a list of these integrated packages for the project. You need to update the requirements.txt file every time you install any new package or modify the dependencies for the project. Simply use the command `pip freeze &gt; requirements.txt` in the activated virtual environment.&#xA;&#xA;### Creating a Procfile&#xA;&#xA;Next up, we have the `Procfile`; a Procfile is a special file that holds information about the processes to be run to start the project. In our case, for django, we need a web process that can manage the server.&#xA;&#xA;A Procfile is a simple file without any extension; make sure to name the file exactly `Procfile` in the root folder of the project. Inside the file add the following contents:&#xA;&#xA;**Procfile**&#xA;```Procfile&#xA;web: gunicorn &lt;project_name&gt;.wsgi&#xA;```&#xA;&#xA;As we can see, we have defined the `web` process using `gunicorn`. [Gunicorn](https://pypi.org/project/gunicorn/) is a python package that provides a WSGI HTTP server for UNIX operating systems. So, we need to install the package and update the package dependency list.&#xA;&#xA;```&#xA;pip install gunicorn&#xA;&#xA;pip freeze &gt; requirements.txt&#xA;``` &#xA;&#xA;That would be good to go for creating and serving up the project while deploying it on Heroku.&#xA;&#xA;## Creating a Heroku App&#xA;&#xA;A Heroku App is basically like your Django Project; you can create apps for deploying your django projects on Heroku. You are limited to 5 apps on the Free tier, but that can be expanded on the paid plans.  
&#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1652456732519/cyOQZ3UZK.png)&#xA;&#xA;The name of your heroku app should be globally unique; you may need to try a few combinations before a good one fits your project. This name has no significance for your django project code, though it would be the name from which you would access the web application, as `&lt;app-name&gt;.herokuapp.com`.&#xA;&#xA;So, choose it wisely if you are not attaching a custom domain. If you want to attach a custom domain, you can navigate to the `domain` section in the settings tab.&#xA;&#xA;&#xA;## Setting up the database &#xA;&#xA;To set up and configure a database in django on Heroku, we need to manually acquire and attach a PostgreSQL add-on to the heroku app.&#xA;&#xA;- First, navigate to the Resources Tab in your Heroku app.&#xA;- Search `postgres` in the Add-ons Search bar&#xA;- Click on the `Heroku Postgres` Add-on&#xA;- Submit the Order Form and you have the add-on enabled in the app.&#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1652456842273/ijeWsVdOf.png)&#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1652456877447/dLG30ac_m.png)&#xA;&#xA;Alternatively, you can use the Heroku CLI.&#xA;&#xA;### Heroku CLI Setup&#xA;&#xA;You can use the Heroku CLI, which is a command-line interface for managing and creating Heroku applications. You need to first install the CLI by heading to the [heroku documentation](https://devcenter.heroku.com/articles/heroku-command-line). Once the CLI is installed, you need to log in to your Heroku account by entering the following command:&#xA;&#xA;```&#xA;heroku login&#xA;&#xA;``` &#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1652605604920/HnTr2KbTi.png)&#xA;&#xA;This will allow us to work with heroku commands and manage our heroku application from the command line itself. 
The below command will create a `heroku-postgresql` add-on for the application provided via the `--app` option:&#xA;&#xA;```&#xA;heroku addons:create heroku-postgresql:hobby-dev --app &lt;app_name&gt;&#xA;```&#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1652507166978/i1IJ5EGjJ.png)&#xA;&#xA;This should add a fresh instance of a postgres database for your heroku app. You can now configure the database for your project; we simply need the Database URL from the heroku app dashboard. We&#39;ll see how to configure the environment variables in Django for Heroku to keep your secrets like the `SECRET_KEY`, `DATABASE_URL`, etc.&#xA;&#xA;If you want MySQL as a database, you can check out the [ClearDB](https://devcenter.heroku.com/articles/cleardb) Add-On for Heroku to simply attach it like Postgres on your Heroku application. Also, if you wish to add [MongoDB](https://www.mongodb.com/compatibility/mongodb-and-django) to your Django application on Heroku, you can use [Object Rocket](https://devcenter.heroku.com/articles/ormongo) (OR Mongo). It is not available in the free tier though, unlike PostgreSQL and MySQL.&#xA;&#xA;## Configuring Environment Variables&#xA;&#xA;We need to keep the secrets for the django project out of the deployed code and in a safe place; we can create environment variables and keep them in a `.env` file which will be git-ignored. We do not want this `.env` file to be open source, so it should not be committed.  
&#xA;&#xA;We first need to create a new secret key. If you already have a GitHub repository, chances are you have already committed the default secret key for the world to see, which is an insecure way of deploying django apps in production.&#xA;&#xA;So, open up a terminal and launch a python REPL:&#xA;&#xA;```&#xA;python&#xA;&#xA;import secrets&#xA;secrets.token_hex(24)&#xA;```&#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1652512239319/5AACaTGOD.png)&#xA;&#xA;This should generate a secret key that only you know. So, just copy the key without the quotes and create a file `.env` in the root project folder.&#xA;&#xA;**.env**&#xA;```&#xA;SECRET_KEY=76419fd6885a677f802fd1d2b5acd0188e23e001042b05a8&#xA;```&#xA;&#xA;The `.env` file should also be added to the `.gitignore` file, so simply append the following to the `.gitignore` file&#xA;&#xA;```&#xA;.env&#xA;```&#xA;This file is only created to test the project locally, so we also need to make this key available on heroku. To do that, we need to add Config Variables to the heroku app.&#xA;&#xA;- Navigate to the Settings Tab in your Heroku Application Dashboard&#xA;- We have the `Config Vars` section in the located tab&#xA;- Reveal those variables and we will be able to see all the variables.&#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1652456988713/5VM6E29_o.png)&#xA;&#xA;We already have a `DATABASE_URL` variable declared from when we attached the `heroku-postgres` database to our application. We will now add one more variable to the application, i.e. the `SECRET_KEY`. Enter the name of the key and its value, so basically a key-value pair, then click on the `Add` button and this will add the variable to your application.&#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1652515244870/LRyPzJr41.png)&#xA;&#xA;You also need to copy the `DATABASE_URL` into our local setup file (`.env` file). 
Copy the Database URL and paste it into the `.env` file as follows:&#xA;&#xA;```env&#xA;DATABASE_URL=postgres://sjxgipufegmgsw:78cbb568e@ec2-52-4-104-184.compute-1.amazonaws.com:5432/dbmuget&#xA;```&#xA;The format for the postgres URL is as follows:&#xA;&#xA;```&#xA;postgresql://[user[:password]@][netloc][:port][/dbname]&#xA;```&#xA;&#xA;We have now created environment variables for our django application and also added config vars in the heroku app; we now need a way to parse these environment variables into the Django project.&#xA;&#xA;### Parsing Environment variables using python-dotenv&#xA;&#xA;We will use [python-dotenv](https://pypi.org/project/python-dotenv/) to parse variables into django settings configurations like `SECRET_KEY` and `DATABASES`.&#xA;&#xA;- Install `python-dotenv` with pip with the command:&#xA;```&#xA;pip install python-dotenv&#xA;```&#xA;We then need to modify the default variables in the `settings.py` file. Firstly, we will load in the `.env` file to access the environment variables for the configuration of the project.&#xA;&#xA;Append the following code to the top of the `settings.py` file; make sure you don&#39;t override the existing configuration, and remove any duplicate imports and variables.&#xA;&#xA;``` python&#xA;# &lt;project_name&gt;/settings.py&#xA;&#xA;import os&#xA;from pathlib import Path&#xA;from dotenv import load_dotenv&#xA;&#xA;BASE_DIR = Path(__file__).resolve().parent.parent&#xA;&#xA;load_dotenv(os.path.join(BASE_DIR, &#34;.env&#34;))&#xA;&#xA;```&#xA;&#xA;We have imported the `dotenv` package (provided by `python-dotenv`) into the `settings.py` file, along with the `os` module for building the path to the `.env` file. The `load_dotenv` function loads the `key-value` pairs, which are the configuration variables that act as actual environment variables. 
We pass a file path to the [load_dotenv](https://saurabh-kumar.com/python-dotenv/) function, the `.env` file in our case; you can pick any location for the `.env` file, but make sure to change the path passed to the `load_dotenv` function accordingly.&#xA;&#xA;After loading the variables into the `settings.py` file, we now need to access those variables and set the appropriate configuration values from them. The `os.getenv` function is used to access the environment variables; it takes the `key` of an environment variable as a parameter and returns the value of that environment variable.&#xA;&#xA;``` python&#xA;SECRET_KEY = os.getenv(&#34;SECRET_KEY&#34;)&#xA;```&#xA;&#xA;We have now securely configured the `SECRET_KEY` for the django project. If you have any other variables as simple key-value pairs, like `AUTH` passwords, usernames, etc., you can use this method to get the configuration variables.&#xA;&#xA;### Loading Database configuration&#xA;&#xA;Databases are a bit different compared to other simple configurations in django. We need to make a few adjustments to the default database configuration. We will install another package, `dj-database-url`, to configure `DATABASE_URL`. Since the database URL has a few components, we need a way to extract details like the `hostname`, `port`, `database_name`, and `password`. The `dj-database-url` package provides a few functions that serve this purpose.&#xA;&#xA;```&#xA;pip install dj-database-url&#xA;```&#xA;&#xA;At the end of your `settings.py` file, append the following code.&#xA;&#xA;``` python&#xA;import dj_database_url&#xA;&#xA;DATABASE_URL = os.getenv(&#34;DATABASE_URL&#34;)&#xA;&#xA;DATABASES = {&#xA;    &#34;default&#34;: dj_database_url.config(default=DATABASE_URL, conn_max_age=1800),&#xA;}&#xA;```&#xA;&#xA;We also need an adapter for making migrations to the `PostgreSQL` database, i.e. the `psycopg2` package. 
This is a mandatory step if you are working with a `postgres` database. It can be installed with a simple pip install:&#xA;&#xA;```&#xA;pip install psycopg2&#xA;&#xA;# If it does not work try&#xA;pip install psycopg2-binary&#xA;&#xA;&#xA;# if the error still persists, try installing setuptools&#xA;pip install -U setuptools&#xA;pip install psycopg2&#xA;```&#xA;&#xA;Now that we have configured the database, we can apply migrations to the fresh postgres database provided by heroku. We will simply run the migrate command, and the PostgreSQL database in the heroku app will be modified with the appropriate schema applied.&#xA;&#xA;```&#xA;python manage.py migrate&#xA;```&#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1652602284553/oTtGev28-.png)&#xA;&#xA;Make sure to update the `requirements.txt` file before pushing the project to Heroku to make sure everything works as expected. Since we have installed a few additional packages that are directly used in the `settings.py` file, we need to run the `pip freeze` command to update the `requirements.txt` file.&#xA;&#xA;## Serving Static Files&#xA;&#xA;Now, if you have some static files like `CSS`, `Javascript`, or `images`, you need to configure the static files in order to serve them from the heroku server. We will require another config variable for collecting the static files from the selected repository.&#xA;&#xA;```python&#xA;&#xA;STATIC_URL = &#34;static/&#34;&#xA;STATICFILES_DIRS = [os.path.join(BASE_DIR, &#34;static&#34;)]&#xA;STATIC_ROOT = os.path.join(BASE_DIR, &#34;staticfiles&#34;)&#xA;&#xA;``` &#xA;&#xA;Here, if you serve your static files from the `static` folder in the root directory of your django project, you can add the above code in the settings.py file. We will collect all static files in that folder, along with the default static files provided by django, in the `staticfiles` directory. 
Run the following command if you want to test whether the static files are properly collected and served.&#xA;&#xA;```&#xA;python manage.py collectstatic &#xA;```&#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1652600828657/JgL4nLuiL.png)&#xA;&#xA;So, this command will collect all the static files and store them in a single place. We see that the files from the admin section are copied, as well as the custom static files from the project configuration. Now, we can move on to setting the config variable for the heroku app.&#xA;&#xA;```&#xA;DISABLE_COLLECTSTATIC = 0&#xA;```&#xA;&#xA;We can set the `DISABLE_COLLECTSTATIC` variable as `0` or `1`, indicating whether to disable it or not. We have currently enabled the static file collection while deploying the app, but you can set it to `1` to disable the collection of static files.&#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1652613798420/mbqzf1Kqd.png)&#xA;&#xA;When I first tested the application on heroku, the static files didn&#39;t work as expected, so we need another package to make sure the static files are served properly. We will be installing the `whitenoise` package, which serves as the middleware for serving the static files.&#xA;&#xA;```&#xA;pip install whitenoise&#xA;```&#xA;&#xA;Add the whitenoise middleware `whitenoise.middleware.WhiteNoiseMiddleware` to the `MIDDLEWARE` list in the `settings.py` file.&#xA;&#xA;```python&#xA;MIDDLEWARE = [&#xA;...&#xA;...&#xA;...&#xA;    &#39;whitenoise.middleware.WhiteNoiseMiddleware&#39;,&#xA;]&#xA;&#xA;```&#xA;&#xA;That should be enough to make the most of the deployment on heroku. You will have to make a few adjustments as per your requirements and project.&#xA;&#xA;## Deploy from GitHub&#xA;&#xA;We are now all set to deploy the application on Heroku. We can use `Connect to GitHub` or the `Heroku CLI` to push the code into production. 
The Heroku CLI is quite easy with a few sets of commands, but if your project is hosted on GitHub, you can straightaway let a push to a specific branch start the build. This becomes quite automatic and easy to scale while deploying a large-scale application. &#xA;&#xA;```&#xA;pip freeze &gt; requirements.txt&#xA;```&#xA;&#xA;This step is quite important because you need to make sure that all the packages are listed in the `requirements.txt` file, else you will have to wait for the build to fail and redeploy.&#xA;&#xA;Make sure the server runs on your local machine first. Remember, the server on Heroku will be set up from scratch, but the database will already have its migrations applied if you ran them earlier after connecting the Heroku Postgres database.&#xA; &#xA;```&#xA;python manage.py collectstatic&#xA;&#xA;python manage.py runserver&#xA;```&#xA;&#xA;This verifies that the project works locally. Next, make sure to commit the code, which should contain everything required for deploying the project.&#xA;&#xA;Checklist for deploying the code&#xA;&#xA;```&#xA;- requirements.txt&#xA;- Procfile&#xA;- runtime.txt&#xA;- django-project&#xA;- environment variables / config variables &#xA;- static file configuration&#xA;- database configuration&#xA;- migrate schema of database &#xA;- gitignore file for ignoring virtualenvs, .env file, staticfiles, etc&#xA;```&#xA;&#xA;Here&#39;s a sample `.gitignore` for a minimal django project.&#xA;&#xA;```gitignore&#xA;.env/&#xA;.venv/&#xA;env/&#xA;venv/&#xA;*.env&#xA;&#xA;*.pyc&#xA;db.sqlite3&#xA;staticfiles/&#xA;```&#xA;&#xA;If you want a full-fledged `.gitignore` for a complex django project, you can take reference from Jose Padilla&#39;s [gitignore Template](https://github.com/jpadilla/django-project-template/blob/master/.gitignore) for a django project.  
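
The checklist above mentions a `Procfile` and a `runtime.txt`. As a quick reference, here is what a minimal pair might look like; this is a sketch assuming `gunicorn` as the WSGI server and a project module named `django_project`, so swap in your own project name and python version. The `Procfile` declares the web process:

```
web: gunicorn django_project.wsgi
```

and `runtime.txt` pins the python version for the build:

```
python-3.9.13
```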
&#xA;&#xA;### Git Commit the Django Project&#xA;```&#xA;git status &#xA;&#xA;git add .&#xA;&#xA;git commit -m &#34;config for heroku deployment&#34;&#xA;```&#xA;Carefully check the status of the git repository before committing and make sure you don&#39;t forget anything by mistake. It won&#39;t be a big problem, but it would mess up the build process.&#xA;&#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1652613991525/hxQgtGOoM.png)&#xA;&#xA;After committing the code, we can now push the code to GitHub. We first need to set the remote repository reference to be able to push the code to it. &#xA;&#xA;```&#xA;git remote add origin https://github.com/&lt;username&gt;/&lt;repo_name&gt;&#xA;```&#xA;This will set up `origin` as the remote repository on GitHub. Once the remote repository is created, we can push the git repository, which will trigger the build. First, navigate to the `Deploy` section in the heroku app&#39;s dashboard, connect the `GitHub` repository, and allow automatic deploys from a branch; in this case we have chosen the `main` branch.&#xA;&#xA;Due to some `Heroku` Internal Server Issues, the GitHub integration seems to have broken and isn&#39;t working as of May 2022, but it might work later. &#xA;&#xA;Below is a screenshot of my previous project deployed to Heroku using a GitHub repository.&#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1652605497382/5VuQUQ0t0.png)&#xA;&#xA;```&#xA;git push origin main&#xA;```&#xA;&#xA;This will prompt you for your GitHub credentials and will push the commits to the remote repository on GitHub. This push on the main branch should also trigger the build process of the heroku app for this django project. You can navigate to the Activity section for the Build logs. 
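
One settings value worth double-checking before triggering the build: Django only serves requests for hosts listed in `ALLOWED_HOSTS`, and a deployed app whose Heroku domain is missing from that list will respond with a `DisallowedHost` error. A minimal sketch, assuming the example app name `blog-django-dep` used in this article:

```python
# settings.py -- hedged sketch: allow the Heroku app's domain.
# "blog-django-dep" is this article's example app name; replace it with yours.
ALLOWED_HOSTS = [
    "blog-django-dep.herokuapp.com",  # deployed Heroku domain
    "localhost",                      # local development
    "127.0.0.1",
]
```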
&#xA;&#xA;If you have followed the article well, and your repository has all the correct configurations, the build will succeed; else, chances are you have missed a few things and the app might have crashed. You can debug your application build with a simple heroku CLI command:&#xA;&#xA;```&#xA;heroku logs --tail -a &lt;app_name&gt;&#xA;```&#xA;&#xA;This can be quite handy and saves a lot of time in understanding what went wrong in the build. It might be related to database migrations, static files, a python package not found, or some silly mistakes and errors that can be fixed by committing the code and pushing it to GitHub again.&#xA;&#xA;If you do not want a GitHub repository, you can directly push the code from the local git repository to the remote repository on heroku. This will require the Heroku CLI.&#xA;&#xA;## Heroku CLI&#xA;&#xA;We can use the heroku CLI to push the code from a plain local git repository. We push a branch to a remote repository on heroku to build our app. For this, we assume you have the heroku CLI installed and are logged in. We will require the django project code and the heroku CLI to build the django web application.&#xA;&#xA;```bash&#xA;heroku git:remote -a &lt;heroku_app_name&gt;&#xA;&#xA;# for my case&#xA;heroku git:remote -a blog-django-dep&#xA;```&#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1652614221069/vCAKD0zsz.png)&#xA;&#xA;After this, you can commit your code as usual. Since we have added the remote repository location on heroku, we can now simply push the code to it.&#xA;&#xA;```&#xA;git push heroku main&#xA;```&#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1652614125785/uEzFQ9VvQ.png)&#xA;&#xA;Here, `heroku` is the remote repository location and `main` is the branch of the repository. 
This will push the code to the repository and thereafter create a build to deploy the Django project as a Heroku application.&#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1652614381808/kYTmB3EO2p.png)&#xA;&#xA;You can hit the `Open App` button on the top right corner and there should be your deployed Django application. &#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1652610395538/xjUiODhoK.png)&#xA;&#xA;&#xA;## A Few Tricks and Gotchas&#xA;&#xA;There are a few tricks and issues that you might run into while deploying a django project on heroku, especially during the build process. It usually takes a few iterations to get the complete app set up.&#xA;&#xA;### Run command from the Dashboard console&#xA;&#xA;If you don&#39;t have the heroku CLI set up and want to fix a few things in the code pushed to the heroku app, you can use the `Run Console` option from the `More` Tab on the top right corner of the Application dashboard. This is a great way to run migrations, configure static files, or tweak a few things here and there without messing up the code on GitHub or the remote git repositories. &#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1652614775294/lgDPwr2yr.png)&#xA;&#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1652614821950/uTzQVB8sC.png)&#xA;&#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1652614845269/BkZhu3SGH.png)&#xA;&#xA;### Deploy with Docker &#xA;&#xA;You can even use a docker container to deploy a Django application on Heroku. It is a great way of learning a lot of deployment strategies and techniques using Docker. You&#39;ll get familiar with interesting concepts like virtualization and containerization, and also learn Docker along the way. You can follow this tutorial on [Deploying Django applications with Docker on Heroku](https://testdriven.io/blog/deploying-django-to-heroku-with-docker/). 
Also, you can check out the official Heroku documentation for [deploying python applications](https://devcenter.heroku.com/articles/deploying-python).&#xA;&#xA;### What are Dynos?&#xA;&#xA;Dynos are simply web processes or workers that serve your web application. Dynos in Heroku are allocated after the build process: once the slug is created, a dyno is created to run it in a VM container. This also means there are limitations on how the web application can be used and when it sleeps. The hobby tier is sufficient for normal testing and side projects, though you will have to pay and move to advanced tiers to increase the dyno allocations and the scaling of those web processes. &#xA;&#xA;It&#39;s not a simple thing to understand, but to keep it short, think of a dyno as a container that processes the client&#39;s requests and serves pages for the duration of the interaction. Also, your application will sleep after half an hour of inactivity; if you keep reloading the application every half an hour, it will consume your resource allocation for the month, and this is how the tiers are divided for paid services on Heroku. You can check out the details over [here](https://www.heroku.com/pricing#containers).&#xA;&#xA;## Conclusion&#xA;&#xA;So, that is one of the ways we can deploy a Django application on Heroku with a PostgreSQL database. You can find the django-blog project on GitHub to follow along with the deployment process. In the next few parts of the series, we will hopefully cover other platforms where you can deploy a Django application.&#xA;&#xA;Hopefully, you liked the above tutorial. If you have any questions, feedback, or queries, you can contact me on the social handles provided below. Thank you for reading, and till the next post, Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>Golang: Operators</title>
      <link>https://www.meetgor.com/posts/golang-operators</link>
      <description>Introduction In this 13th part of the series, we will be exploring the fundamentals of operators in Golang. We will be exploring the basics of operators and the</description>
      <pubDate>Sat, 07 May 2022 00:00:00 UTC</pubDate>
<content>## Introduction &#xA;&#xA;In this 13th part of the series, we will be exploring the fundamentals of operators in Golang. We will cover the basics of operators and the various types like Arithmetic, Bitwise, Comparison, and Assignment operators in Golang.&#xA;&#xA;Operators are quite fundamental in any programming language. Operators are basically a set of character(s) used to perform certain fundamental tasks. They allow us to perform common operations with a simple expression or character. There are quite a few operators in Golang to perform various operations.&#xA;&#xA;## Types of Operators&#xA;&#xA;Golang has a few types of operators, each type providing a particular way of forming expressions and evaluating conditions.&#xA;&#xA;1. Bitwise Operators&#xA;2. Logical Operators&#xA;3. Arithmetic Operators&#xA;4. Assignment Operators&#xA;5. Comparison Operators&#xA;&#xA;### Bitwise Operators&#xA;&#xA;Bitwise Operators are used for performing operations on binary numbers. We can perform operations at the bit level, and hence they are known as bitwise operators. Some fundamental bitwise operators include `AND`, `OR`, `NOT`, and `EXOR`. Using these operators, the bits in the operands can be manipulated and certain logical operations can be performed. 
&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import &#34;fmt&#34;&#xA;&#xA;func main() {&#xA;&#x9;x := 3&#xA;&#x9;y := 5&#xA;&#x9;// 3 -&gt; 011&#xA;&#x9;// 5 -&gt; 101&#xA;&#x9;fmt.Println(&#34;X AND Y = &#34;, x &amp; y)&#xA;&#x9;fmt.Println(&#34;X OR Y = &#34;, x | y)&#xA;&#x9;fmt.Println(&#34;X EXOR Y = &#34;, x ^ y)&#xA;&#x9;fmt.Println(&#34;X Right Shift 1  = &#34;, x &gt;&gt; 1)&#xA;&#x9;fmt.Println(&#34;X Right Shift 2  = &#34;, x &gt;&gt; 2)&#xA;&#x9;fmt.Println(&#34;Y Left Shift 1 = &#34;, y &lt;&lt; 1)&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run bitwise/main.go&#xA;&#xA;X AND Y =  1&#xA;X OR Y =  7&#xA;X EXOR Y =  6&#xA;X Right Shift 1  =  1&#xA;X Right Shift 2  =  0&#xA;Y Left Shift 1 =  10&#xA;&#xA;```&#xA;&#xA;We use the `&amp;` (AND operator) for performing AND operations on two operands. Here we are logically ANDing `3` and `5` i.e. `011` with `101` so it becomes `001` in binary or 1 in decimal.&#xA;&#xA;Similarly, we use the `|` (OR operator) for performing a logical OR operation on two operands. Here we are logically ORing `3` and `5` i.e. `011` with `101` so it becomes `111` in binary or 7 in decimal.&#xA;&#xA;We also use the `^` (EXOR operator) for performing a logical EXOR operation on two operands. Here we are logically EXORing `3` and `5` i.e. `011` with `101` so it becomes `110` in binary or 6 in decimal.&#xA;&#xA;We have a couple more bitwise operators that allow us to shift bits in the binary representation of a number. There are two of these shift operators, the right shift and the left shift operator. The main function of these operators is to shift bits in either the right or the left direction. &#xA;&#xA;In the above example, we have shifted `3` i.e. `011` to the right by one bit so it becomes `001`. With `x &gt;&gt; 2`, the result becomes `0` since all the set bits are shifted out to the right and all remaining bits are 0.&#xA;&#xA;Similarly, the left shift operator shifts the bits in the binary representation of the number to the left. So, in the example above, `5` i.e. 
`101` is shifted left by one bit so it becomes `1010` in binary i.e. 10 in decimal. &#xA;&#xA;This was a basic overview of bitwise operators in Golang. We can use these basic operators to perform low-level operations on numbers.&#xA;&#xA;### Comparison Operators&#xA;&#xA;These operators are quite important and widely used, as they form the basis of comparing variables and forming boolean expressions. A comparison operator is used to compare two values or expressions. &#xA;&#xA;```go&#xA;package main&#xA;&#xA;import &#34;fmt&#34;&#xA;&#xA;func main() {&#xA;&#x9;a := 45&#xA;&#x9;b := 12&#xA;&#x9;fmt.Println(&#34;Is A equal to B ? &#34;, a == b)&#xA;&#x9;fmt.Println(&#34;Is A not equal to B ? &#34;, a != b)&#xA;&#x9;fmt.Println(&#34;Is A greater than B ? &#34;, a &gt; b)&#xA;&#x9;fmt.Println(&#34;Is A less than B ? &#34;, a &lt; b)&#xA;&#x9;fmt.Println(&#34;Is A greater than or equal to B ? &#34;, a &gt;= b)&#xA;&#x9;fmt.Println(&#34;Is A less than or equal to B ? &#34;, a &lt;= b)&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run comparison/main.go&#xA;&#xA;Is A equal to B ?  false&#xA;Is A not equal to B ?  true&#xA;Is A greater than B ?  true&#xA;Is A less than B ?  false&#xA;Is A greater than or equal to B ?  true&#xA;Is A less than or equal to B ?  false&#xA;```&#xA;&#xA;We use simple comparison operators like `==` or `!=` for checking if two values are equal or not. The expression `a == b` will evaluate to `true` if the values of both variables or operands are equal. However, the expression `a != b` will evaluate to `true` if the values of both variables or operands are not equal.&#xA;&#xA;Similarly, we have the `&lt;` and `&gt;` operators, which allow us to evaluate expressions by comparing whether one value is less than or greater than the other operand. So, the expression `a &gt; b` will evaluate to `true` if the value of `a` is greater than the value of `b`. 
Also the expression `a &lt; b` will evaluate to `true` if the value of `a` is less than the value of `b`. &#xA;&#xA;Finally, the operators `&lt;=` and `&gt;=` allow us to evaluate expressions by comparing whether a value is less than or equal to, or greater than or equal to, the other operand. So, the expression `a &gt;= b` will evaluate to `true` if the value of `a` is greater than or equal to the value of `b`, else it would evaluate to `false`. Similarly, the expression `a &lt;= b` will evaluate to `true` if the value of `a` is less than or equal to the value of `b`, else it would evaluate to `false`.&#xA;&#xA;This was a basic overview of comparison operators in golang.&#xA;&#xA;### Logical Operators&#xA;&#xA;Next, we move on to the logical operators in Golang, which allow us to perform logical operations like `AND`, `OR`, and `NOT` on conditional statements or stored boolean expressions. &#xA;&#xA;```go&#xA;package main&#xA;&#xA;import &#34;fmt&#34;&#xA;&#xA;func main() {&#xA;&#x9;a := 45&#xA;&#x9;b := &#34;Something&#34;&#xA;&#x9;fmt.Println(a &gt; 40 &amp;&amp; b == &#34;Something&#34;)&#xA;&#x9;fmt.Println(a &lt; 40 &amp;&amp; b == &#34;Something&#34;)&#xA;&#x9;fmt.Println(a &lt; 40 || b == &#34;Something&#34;)&#xA;&#x9;fmt.Println(a &lt; 40 || b != &#34;Something&#34;)&#xA;&#x9;fmt.Println(!(a &lt; 40 || b != &#34;Something&#34;))&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run logical/main.go&#xA;&#xA;true&#xA;false&#xA;true&#xA;false&#xA;true&#xA;```&#xA;&#xA;Here, we have used logical operators like `&amp;&amp;` for logical AND, `||` for logical OR, and `!` for complementing the evaluated result. The `&amp;&amp;` operation only evaluates to `true` if both the expressions are `true`, and the `||` OR operator evaluates to `true` if either or both of the expressions are `true`. 
The `!` operator is used to complement the result of the expression evaluated inside the parentheses that follow it.&#xA;&#xA;### Arithmetic Operators&#xA;&#xA;Arithmetic operators are used for performing arithmetic operations. We have a few basic arithmetic operators like `+`, `-`, `*`, `/`, and `%` for addition, subtraction, multiplication, division, and modulus operations in golang. &#xA;&#xA;```go&#xA;package main&#xA;&#xA;import &#34;fmt&#34;&#xA;&#xA;func main() {&#xA;&#x9;a := 30&#xA;&#x9;b := 50&#xA;&#x9;fmt.Println(&#34;A + B = &#34;, a+b)&#xA;&#x9;fmt.Println(&#34;A - B = &#34;, a-b)&#xA;&#x9;fmt.Println(&#34;A * B = &#34;, a*b)&#xA;&#x9;fmt.Println(&#34;A / B = &#34;, a/b)&#xA;&#x9;fmt.Println(&#34;A % B = &#34;, a%b)&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run arithmetic/main.go&#xA;A + B =  80&#xA;A - B =  -20&#xA;A * B =  1500&#xA;A / B =  0&#xA;A % B =  30&#xA;```&#xA;&#xA;These are the basic mathematical operators in any programming language. We can use `+` to add two values, `-` to subtract two values, `*` to multiply two values, `/` for division of two values, and finally `%` to get the remainder of a division of two values, i.e. if we divide 30 by 50, the remainder is 30 and the quotient is 0. &#xA;&#xA;We also have a few other operators like `++` and `--` that help in incrementing and decrementing values by a unit value. 
Let&#39;s say we have a variable `k` and we want to increment it by one. We can definitely use `k = k + 1`, but it looks kind of long; we have the short notation `k++` to do the same.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import &#34;fmt&#34;&#xA;&#xA;func main() {&#xA;&#x9;k := 3&#xA;&#x9;j := 20&#xA;&#x9;fmt.Println(&#34;k = &#34;, k)&#xA;&#x9;fmt.Println(&#34;j = &#34;, j)&#xA;&#x9;k++&#xA;&#x9;j--&#xA;&#x9;fmt.Println(&#34;k = &#34;, k)&#xA;&#x9;fmt.Println(&#34;j = &#34;, j)&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run arithmetic/main.go&#xA;&#xA;k =  3&#xA;j =  20&#xA;&#xA;k =  4&#xA;j =  19&#xA;```&#xA;&#xA;So, we can see that the variable `k` is incremented by one and the variable `j` is decremented by `1` using the `++` and `--` operators.&#xA;&#xA;### Assignment Operators&#xA;&#xA;These types of operators are quite handy and can condense larger operations into simple expressions. They allow us to perform an operation and assignment on the same operand. 
Let&#39;s say we have the variable `k` set to `20` initially and we want to add `30` to it. We can do that by using `k = k + 30`, but a more sophisticated way would be to use `k += 30`, which adds `30` (or any provided value) to the same variable that is assigned and operated on.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import &#34;fmt&#34;&#xA;&#xA;func main() {&#xA;&#x9;var a int = 100&#xA;&#x9;b := 20&#xA;&#x9;fmt.Println(&#34;a = &#34;, a)&#xA;&#x9;fmt.Println(&#34;b = &#34;, b)&#xA;&#x9;a += 30&#xA;&#x9;fmt.Println(&#34;a = &#34;, a)&#xA;&#x9;b -= 5&#xA;&#x9;fmt.Println(&#34;b = &#34;, b)&#xA;&#x9;a *= b&#xA;&#x9;fmt.Println(&#34;a = &#34;, a)&#xA;&#x9;fmt.Println(&#34;b = &#34;, b)&#xA;&#x9;a /= b&#xA;&#x9;fmt.Println(&#34;a = &#34;, a)&#xA;&#x9;fmt.Println(&#34;b = &#34;, b)&#xA;&#x9;a %= b&#xA;&#x9;fmt.Println(&#34;a = &#34;, a)&#xA;&#x9;fmt.Println(&#34;b = &#34;, b)&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run assignment/main.go&#xA;&#xA;a =  100&#xA;b =  20&#xA;&#xA;a =  130&#xA;b =  15&#xA;&#xA;a =  1950&#xA;b =  15&#xA;&#xA;a =  130&#xA;b =  15&#xA;&#xA;a =  10&#xA;b =  15&#xA;```&#xA;&#xA;From the above example, we are able to perform operations by using shorthand notations like `+=` to add a value to the same operand. These shorthand forms also keep the code concise: conceptually, we read, modify, and write back the same operand in one expression instead of writing the variable on both sides of the assignment.&#xA;&#xA;That&#39;s it from this part. Reference for all the code examples and commands can be found in the [100 days of Golang](https://github.com/mr-destructive/100-days-of-golang/) GitHub repository.&#xA;&#xA;## Conclusion&#xA;&#xA;So, in this part of the series, we were able to learn the basics of operators in golang. 
Using some simple and easy-to-understand examples, we explored different types of operators like arithmetic, logical, assignment, and bitwise operators in golang. These are quite fundamental in programming in general, and they lay a good foundation for working with larger, complex projects, since almost every application has some kind of logic in it. So, we need to know the basics of operators in golang.</content>
      <type></type>
    </item>
    <item>
      <title>Golang: Math Package</title>
      <link>https://www.meetgor.com/posts/golang-math</link>
      <description>Introduction Moving on in the 100 days of golang series, we can take a look into the math package in golang&#39;s standard library. In programming, math is quite cr</description>
      <pubDate>Mon, 25 Apr 2022 00:00:00 UTC</pubDate>
<content>## Introduction&#xA;&#xA;Moving on in the 100 days of golang series, we can take a look into the math package in golang&#39;s standard library. In programming, math is quite a critical aspect; we need to perform certain mathematical operations quite regularly, so golang&#39;s standard library has a package serving some quite commonly used math functions and procedures. We&#39;ll take a look at some of the basic and common functions which are available in the math package.&#xA;&#xA;## Mathematical Constants&#xA;&#xA;We have some constants like `Pi`, `E`, and `Phi` already defined in the math package of the standard library in golang. They are stored as float64 values with a precision of about 15 digits. &#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;math&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;fmt.Println(&#34;Pi = &#34;, math.Pi)&#xA;&#x9;fmt.Println(&#34;E = &#34;, math.E)&#xA;&#x9;fmt.Println(&#34;Phi = &#34;, math.Phi)&#xA;&#x9;fmt.Println(&#34;Sqrt of 2 = &#34;, math.Sqrt2)&#xA;&#x9;fmt.Println(&#34;Natural Log 2 = &#34;, math.Ln2)&#xA;&#x9;fmt.Println(&#34;Natural Log 10 = &#34;, math.Ln10)&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run basic-functions/constants.go&#xA;Pi =  3.141592653589793&#xA;E =  2.718281828459045&#xA;Phi =  1.618033988749895&#xA;Sqrt of 2 =  1.4142135623730951&#xA;Natural Log 2 =  0.6931471805599453&#xA;Natural Log 10 =  2.302585092994046&#xA;```&#xA;&#xA;We can use these constants in trigonometric calculations and also in scientific computing. Further, you can get a list of all constants defined in the math package of the go standard library from the [documentation](https://pkg.go.dev/math#pkg-constants).&#xA;&#xA;## Basic Math functions&#xA;&#xA;We have some quite basic and fundamental functions in the math package that can be used commonly in many programs. 
Let&#39;s take a look at a few of them.&#xA;&#xA;### - Abs :parameters (float64) , returns float64&#xA;&#xA;As the name suggests, [Abs](https://pkg.go.dev/math#Abs) returns the absolute value of a number. It takes a float64 value as a parameter and returns the absolute value of the provided number as a `float64` number.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;math&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;a := 45&#xA;&#x9;b := 100&#xA;&#x9;diff := a - b&#xA;&#x9;fmt.Println(diff)&#xA;&#xA;&#x9;absolute_diff := math.Abs(float64(a) - float64(b))&#xA;&#x9;fmt.Println(absolute_diff)&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run basic-functions/main.go&#xA;-55&#xA;55&#xA;```&#xA;&#xA;As we can see, the `Abs` function takes in a float64 value and returns the absolute value of the given number, also as a `float64` value. We need to cast the numbers `a` and `b` into `float64` since we have not provided explicit types, so the compiler has inferred the type `int` for them. &#xA;&#xA;#### Type Casting&#xA;&#xA;We can cast a value into another type by wrapping the variable in the type name, as `type_name(variable)`. In the above example we have converted the `int` value `45` into a `float64` as `float64(45)`, which again yields `45` but as a float64 type. &#xA;&#xA;```go&#xA;foo := 77&#xA;fmt.Printf(&#34;Type of foo = %T\n&#34;, foo)&#xA;fmt.Println(&#34;foo = &#34;, int(foo))&#xA;fmt.Println(&#34;String Cast: &#34;, string(foo))&#xA;fmt.Println(&#34;Float Cast: &#34;, float64(foo))&#xA;```&#xA;&#xA;Though not every type can be cast into another, for quite obvious reasons; for instance, `77` or any other integer value cannot be converted into a boolean value in Go.&#xA;&#xA;**Hello Gopher! 
Just a small note, the math package mostly deals with float64 types rather than int, so that a single set of functions can perform operations on floating point values (which can be cast to and from integers) rather than defining separate functions for decimal values and integers.**&#xA;&#xA;### - Min/Max: parameters(float64) , returns float64&#xA;&#xA;We can get the [minimum](https://pkg.go.dev/math#Min) and [maximum](https://pkg.go.dev/math#Max) value of the two numbers provided to the function.&#xA;&#xA;```go&#xA;var a float64 = 120&#xA;var b float64 = 54&#xA;&#xA;minimum := math.Min(a, b)&#xA;maximum := math.Max(a, b)&#xA;fmt.Printf(&#34;Min of %v and %v is %v\n&#34;, a, b, minimum)&#xA;fmt.Printf(&#34;Max of %v and %v is %v\n&#34;, a, b, maximum)&#xA;```&#xA;&#xA;```&#xA;$ go run basic-functions/main.go&#xA;Min of 120 and 54 is 54&#xA;Max of 120 and 54 is 120&#xA;```&#xA;&#xA;### - Pow  : parameters(float64, float64) , returns float64&#xA;### - Pow10: parameters(int)              , returns float64&#xA;&#xA;The [Pow](https://pkg.go.dev/math#Pow) function is used to get the exponential result of the base number. So, if we provide the values x and y, we would get the result as the number x raised to y. 
&#xA;&#xA;```go&#xA;var x float64 = 3&#xA;var y float64 = 4&#xA;z := math.Pow(x, y)&#xA;z10 := math.Pow10(int(x))&#xA;fmt.Println(&#34;X ^ Y = &#34;, z)&#xA;fmt.Println(&#34;10 ^ X = &#34;, z10)&#xA;```&#xA;&#xA;```&#xA;$ go run basic-functions/main.go&#xA;X ^ Y =  81&#xA;10 ^ X =  1000&#xA;```&#xA;&#xA;We also have the [Pow10](https://pkg.go.dev/math#Pow10) function, which works just like the Pow function except that the base is fixed at 10 and we don&#39;t have to provide it; there is just one parameter, an integer, and it returns a float64 value.&#xA;&#xA;### - Sqrt: parameters(float64) , returns float64&#xA;&#xA;The [Sqrt](https://pkg.go.dev/math#Sqrt) function, as the name suggests, is used to get the square root of a floating point value, returned as a float64 value. We have also used the `Cbrt` function here, which similarly returns the cube root.&#xA;&#xA;```go&#xA;var k float64 = 125&#xA;sqrt_of_k := math.Sqrt(k)&#xA;cbrt_of_k := math.Cbrt(k)&#xA;&#xA;fmt.Printf(&#34;Square root of %v = %v\n&#34;, k, sqrt_of_k)&#xA;fmt.Printf(&#34;Cube root of %v = %v\n&#34;, k, cbrt_of_k)&#xA;```&#xA;&#xA;```&#xA;$ go run basic-functions/main.go&#xA;Square root of 125 = 11.180339887498949&#xA;Cube root of 125 = 5&#xA;```&#xA;&#xA;### - Trunc: parameters(float64) , returns float64&#xA;&#xA;The [Trunc](https://pkg.go.dev/math#Trunc) function provides a way to drop the fractional part of a decimal value (float64) to get an integer value, though it returns the value as a `float64`.&#xA;&#xA;```go&#xA;var p float64 = 445.235&#xA;trunc_p := math.Trunc(p)&#xA;fmt.Printf(&#34;Truncated value of %v = %v\n&#34;, p, trunc_p)&#xA;p = 123.678&#xA;trunc_p = math.Trunc(p)&#xA;fmt.Printf(&#34;Truncated value of %v = %v\n&#34;, p, trunc_p)&#xA;&#xA;```&#xA;&#xA;```&#xA;$ go run basic-functions/main.go&#xA;Truncated value of 445.235 = 445&#xA;Truncated value of 123.678 = 123&#xA;```&#xA;&#xA;### - Ceil : parameters(float64) , returns float64&#xA;&#xA;We can also use the [Ceil](https://pkg.go.dev/math#Ceil) function to round the value up to the next integer, but the value is returned as 
`float64`. &#xA;&#xA;```go&#xA;var c float64 = 33.25&#xA;ceil_c := math.Ceil(c)&#xA;fmt.Printf(&#34;Ceiled value of %v = %v\n&#34;, c, ceil_c)&#xA;c = 134.78&#xA;ceil_c = math.Ceil(c)&#xA;fmt.Printf(&#34;Ceiled value of %v = %v\n&#34;, c, ceil_c)&#xA;```&#xA;&#xA;```&#xA;$ go run basic-functions/main.go&#xA;Ceiled value of 33.25 = 34&#xA;Ceiled value of 134.78 = 135&#xA;```&#xA;&#xA;### - Trigonometric Functions&#xA;&#xA;Trigonometric functions are quite helpful in intense mathematical computations in backend projects or precision-dependent projects. We have the functions [Sin](https://pkg.go.dev/math#Sin), [Cos](https://pkg.go.dev/math#Cos), [SinCos](https://pkg.go.dev/math#Sincos), and [Tan](https://pkg.go.dev/math#Tan), hyperbolic functions like [Sinh](https://pkg.go.dev/math#Sinh), [Cosh](https://pkg.go.dev/math#Cosh), and [Tanh](https://pkg.go.dev/math#Tanh), and inverse trigonometric functions like [Asin](https://pkg.go.dev/math#Asin), [Asinh](https://pkg.go.dev/math#Asinh), etc. 
&#xA;&#xA;- Sin: parameters(float64) , returns float64&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;math&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;// basic trigonometric functions&#xA;&#x9;var x float64 = math.Pi / 2&#xA;&#x9;sinx := math.Sin(x)&#xA;&#x9;cosx := math.Cos(x)&#xA;&#x9;tanx := math.Tan(x)&#xA;&#x9;fmt.Printf(&#34;Sin(%v) = %v\n&#34;, x, sinx)&#xA;&#x9;fmt.Printf(&#34;Cos(%v) = %v\n&#34;, x, cosx)&#xA;&#x9;fmt.Printf(&#34;Tan(%v) = %v\n&#34;, x, tanx)&#xA;&#xA;&#x9;// hyperbolic trigonometric functions&#xA;&#x9;var h float64 = math.Pi / 2&#xA;&#x9;sinh := math.Sinh(h)&#xA;&#x9;cosh := math.Cosh(h)&#xA;&#x9;tanh := math.Tanh(h)&#xA;&#x9;fmt.Printf(&#34;Sinh(%v) = %v\n&#34;, h, sinh)&#xA;&#x9;fmt.Printf(&#34;Cosh(%v) = %v\n&#34;, h, cosh)&#xA;&#x9;fmt.Printf(&#34;Tanh(%v) = %v\n&#34;, h, tanh)&#xA;&#xA;&#x9;// Inverse Trigonometric functions&#xA;&#x9;var y float64 = -1&#xA;&#x9;arc_sin := math.Asin(y) // -pi/2 radians or -90 degrees&#xA;&#x9;arc_cos := math.Acos(y) // pi radians or 180 degrees&#xA;&#x9;arc_tan := math.Atan(y)&#xA;&#x9;fmt.Printf(&#34;Sin^-1(%v) = %v\n&#34;, y, arc_sin)&#xA;&#x9;fmt.Printf(&#34;Cos^-1(%v) = %v\n&#34;, y, arc_cos)&#xA;&#x9;fmt.Printf(&#34;Tan^-1(%v) = %v\n&#34;, y, arc_tan)&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run basic-functions/trignometric.go&#xA;Sin(1.5707963267948966) = 1&#xA;Cos(1.5707963267948966) = 6.123233995736757e-17&#xA;Tan(1.5707963267948966) = 1.6331239353195392e+16&#xA;Sinh(1.5707963267948966) = 2.3012989023072947&#xA;Cosh(1.5707963267948966) = 2.5091784786580567&#xA;Tanh(1.5707963267948966) = 0.9171523356672744&#xA;Sin^-1(-1) = -1.5707963267948966&#xA;Cos^-1(-1) = 3.141592653589793&#xA;Tan^-1(-1) = -0.7853981633974483&#xA;```&#xA;&#xA;Here we can see that the functions are working fine and giving a decently precise value. 
This might be enough for simple and smaller projects, though for higher-precision areas, other computations and programming are required to compute the values.&#xA;&#xA;### - Exponential and Logarithmic Functions&#xA;&#xA;We also have the exponential and logarithmic functions defined in the math package to leverage computations related to formulae that deal with logarithmic or exponential calculations. &#xA;&#xA;- Exp : parameters(float64), returns float64&#xA;- Exp2 : parameters(float64), returns float64&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;math&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;// exponential functions&#xA;&#x9;var x float64 = 2&#xA;&#x9;y := math.Exp(x)&#xA;&#x9;fmt.Println(&#34;e^x = &#34;, y)&#xA;&#x9;var n float64 = 3.5&#xA;&#x9;y = math.Exp2(n)&#xA;&#x9;fmt.Println(&#34;2^n = &#34;, y)&#xA;&#xA;&#x9;// logarithmic functions&#xA;&#x9;y = math.Log(x)&#xA;&#x9;fmt.Println(&#34;natural log x = &#34;, y)&#xA;&#xA;&#x9;n = 128&#xA;&#x9;y = math.Log2(n)&#xA;&#x9;fmt.Println(&#34;Log2 of 128 = &#34;, y)&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run basic-functions/expo_log.go&#xA;e^x =  7.38905609893065&#xA;2^n =  11.31370849898476&#xA;natural log x =  0.6931471805599453&#xA;Log2 of 128 =  7&#xA;```&#xA;&#xA;Here, we have exponential functions such as `e^x` and `2^n`, which might be useful in some common programming calculations. Also the logarithmic functions like `log x`, which is the natural log of x (base e), and `log2 n`, which is the log of n to the base 2.&#xA;&#xA;## The Random package&#xA;&#xA;The `rand` sub-package in golang provides some great tools for working with random numbers and generating them. It provides an exhaustive list of functions and types that help in generating pseudo-random numbers. 
&#xA;&#xA;- Int : parameters(), returns int&#xA;- Intn : parameters(int), returns int&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;math/rand&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;// random integer generation&#xA;&#x9;x := rand.Int()&#xA;&#x9;fmt.Println(x)&#xA;&#xA;&#x9;// random number generation below an upper bound&#xA;&#x9;for i := 0; i &lt; 5; i++ {&#xA;&#x9;&#x9;y := rand.Intn(10)&#xA;&#x9;&#x9;fmt.Println(y)&#xA;&#x9;}&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run basic-functions/rand.go&#xA;5577006791947779410&#xA;7&#xA;7&#xA;9&#xA;1&#xA;8&#xA;```&#xA;&#xA;In the above example, we have used the [Int](https://pkg.go.dev/math/rand@go1.18.1#Int) function in the rand sub-package of the math package, which generates a non-negative pseudo-random integer whose size depends on the system architecture, generally 32 or 64 bits. We get a huge number which is pseudo-random, i.e. not truly random. If you try to execute the program a couple of times, you would notice the number remains the same, and we are calling it random? That is because the default source is deterministic unless it is seeded; we need to dive into random numbers and seeding in a different part of the series for sure. &#xA;&#xA;The [Intn](https://pkg.go.dev/math/rand@go1.18.1#Intn) function also generates a pseudo-random number, but this time we define the upper boundary of the range in which to generate it. It is not inclusive of the number provided, i.e. we have provided the value `10`, so the number `10` is not included in the range. It&#39;s called a half-open interval. It starts from 0, so the range mathematically becomes `[0, n)` if n is the number provided to the Intn function. &#xA;&#xA;## The Bits package&#xA;&#xA;We also have a bits sub-package in the math package of the go standard library. This sub-package is used for bit manipulation and operations at the binary level. 
This is quite helpful in competitive programming, and also in understanding the basics of data structures and fundamentals in computer science.&#xA;&#xA;- Add : parameters(uint, uint, uint), returns uint, uint&#xA;- Len : parameters(uint), returns int&#xA;- OnesCount : parameters(uint), returns int&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;math/bits&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;s, c := bits.Add(0, 9, 1)&#xA;&#x9;fmt.Printf(&#34;Sum = %d \nCarry = %d \n&#34;, s, c)&#xA;&#xA;&#x9;// (45) in decimal = (1 0 1 1 0 1) in binary&#xA;&#x9;var n uint = 45&#xA;&#x9;length := bits.Len(n)&#xA;&#x9;ones_in_45 := bits.OnesCount(n)&#xA;&#x9;fmt.Printf(&#34;Minimum bits required to represent 45 = %d \n&#34;, length)&#xA;&#x9;fmt.Printf(&#34;Set Bits in 45 = %d \n&#34;, ones_in_45)&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run basic-functions/bit.go&#xA;Sum = 10&#xA;Carry = 0&#xA;Minimum bits required to represent 45 = 6&#xA;Set Bits in 45 = 4&#xA;```&#xA;&#xA;Here, in the above example, we have used the bits sub-package in the math package. The [Add](https://pkg.go.dev/math/bits@go1.18.1#Add) function allows us to provide two numbers and a carry bit, for which it returns two values: the sum and the carry. The sum is defined as the summation `x + y + carry` of the two numbers and the carry bit. The carry bit needs to be either 0 or 1. &#xA;&#xA;Also, the values provided to the function, i.e. `x` and `y`, need to be unsigned (`uint`) in order to work with bits. &#xA;&#xA;We also have the [Len](https://pkg.go.dev/math/bits@go1.18.1#Len) function, which returns the minimum number of bits required to represent the provided unsigned integer. We have used 45, which is equivalent to `101101`, and hence the function returns `6` as the number of bits. The [OnesCount](https://pkg.go.dev/math/bits@go1.18.1#OnesCount) function is similar, but it returns the number of set bits (the `1` bits) in the number provided to it. 
&#xA;&#xA;We&#39;ll see this sub-package in a separate section of its own. Bits is really a great package to work with bits and low-level manipulation of numbers in Golang.&#xA;&#xA;## The Complex package&#xA;&#xA;The cmplx sub-package is specific to complex numbers and their operations. Basic operations and trigonometric functions for complex numbers are provided in the package. &#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;&#34;math/cmplx&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#xA;&#x9;x := complex(5, 8)&#xA;&#x9;y := complex(3, 4)&#xA;&#x9;mod_x := cmplx.Abs(x)&#xA;&#x9;mod_y := cmplx.Abs(y)&#xA;&#x9;conj_x := cmplx.Conj(x)&#xA;&#x9;phase_x := cmplx.Phase(x)&#xA;&#x9;mod, phase := cmplx.Polar(x)&#xA;&#xA;&#x9;fmt.Println(&#34;x = &#34;, x)&#xA;&#x9;fmt.Println(&#34;Modulus of x = &#34;, mod_x)&#xA;&#x9;fmt.Println(&#34;Modulus of y = &#34;, mod_y)&#xA;&#x9;fmt.Println(&#34;Conjugate of x = &#34;, conj_x)&#xA;&#x9;fmt.Println(&#34;Phase of x = &#34;, phase_x)&#xA;&#x9;fmt.Printf(&#34;Polar Form : %v, %v\n&#34;, mod, phase)&#xA;&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run basic-functions/complex.go&#xA;x =  (5+8i)&#xA;Modulus of x =  9.433981132056603&#xA;Modulus of y =  5&#xA;Conjugate of x =  (5-8i)&#xA;Phase of x =  1.0121970114513341&#xA;Polar Form : 9.433981132056603, 1.0121970114513341&#xA;&#xA;```&#xA;&#xA;We have used the complex function to create complex numbers. The `cmplx` sub-package in the math package provides many functions for trigonometric and simple operations with complex numbers. The [Abs](https://pkg.go.dev/math/cmplx@go1.18.1#Abs) function is used to get the modulus of the provided complex number. The modulus is calculated as `sqrt(x^2 + y^2)`, which gives the magnitude of the complex number. Here, we get the modulus as `9.43`, i.e. `sqrt(25 + 64)`, for the complex number `5+8i`. Also, for `3+4i` the modulus becomes `sqrt(9+16)`, which turns out to be `5`. 
The [Conjugate](https://pkg.go.dev/math/cmplx@go1.18.1#Conj) function is used to get the conjugate of the provided complex number. &#xA;&#xA;Also, the phase or argument of the complex number can be obtained with the [Phase](https://pkg.go.dev/math/cmplx@go1.18.1#Phase) function. The phase is calculated by the formula `tan^-1 (y/x)`, but the angle is returned in radians. So for `x = 5+8i` the argument/phase becomes `tan^-1(8/5)`, which is `57.995 degrees` or `1.012 radians`. &#xA;&#xA;We have the [Polar](https://pkg.go.dev/math/cmplx@go1.18.1#Polar) function, which gives the polar form of the complex number, i.e. `(modulus r, phase theta)`. So this function returns two values: the modulus and the argument/phase of the complex number. We have already calculated both the values, but this function gets both of them in a single call. Quite neat. We can even ignore one of the returned values by using the blank identifier: `_, phase := cmplx.Polar(5+7i)` to only get the phase/argument of the complex number, or `modulus, _ := cmplx.Polar(5+7i)` to get only the modulus.&#xA;&#xA;So that&#39;s some basic operations on complex numbers; this might have very few use cases, but it&#39;s still quite useful when needed. &#xA;&#xA;That&#39;s it from this part. Reference for all the code examples and commands can be found in the [100 days of Golang](https://github.com/mr-destructive/100-days-of-golang/) GitHub repository.&#xA;&#xA;## Conclusion&#xA;&#xA;So, from this section we were able to get a bit deeper introduction to the `math` package in golang&#39;s standard library. We covered a few important functions and constants in the main math package along with a glimpse of other sub-packages like `rand`, `cmplx` and `bits`. We didn&#39;t get into too much detail with those sub-packages as they can be explored in separate sections of their own. 
Hopefully, you have got a good overview of the math package in golang, which again is a really important aspect of programming.&#xA;&#xA;Thank you for reading. If you have any questions or feedback, please let me know in the comments or on social handles. Happy Coding :)&#xA;
      <type></type>
    </item>
    <item>
      <title>Golang: Packages</title>
      <link>https://www.meetgor.com/posts/golang-packages</link>
      <description>Introduction In this 11th part of the series, we will be covering packages in golang. Package is a cool way to organize code in large projects. We can create a</description>
      <pubDate>Fri, 22 Apr 2022 00:00:00 UTC</pubDate>
      <content>## Introduction&#xA;&#xA;In this 11th part of the series, we will be covering packages in golang. A package is a cool way to organize code in large projects. We can create a separate file which can include certain helper functions or variables used by other files or scripts. There are a couple of types of packages: packages from the standard library, open-source community packages, and custom packages that you can build on your own. In this particular section, we will be covering the fundamentals of packages and exploring the standard library in golang. &#xA;&#xA;## What is a Package ?&#xA;&#xA;A package, in the simplest of terms, is a collection of go scripts that can serve a purpose. Just like `fmt` has several functions like `Println`, `Printf`, `Scan`, `Scanf`, etc., most packages have functions that we can use in our own programs to solve a simple problem. We have already created many packages, but none of them have been super useful so far; we just used the statement `package main` and didn&#39;t use the package anywhere. An executable program has an entry package called `main` along with an entry function `main`, which is triggered when we run the program. &#xA;&#xA;```&#xA;- package_name&#xA;    - script_1.go&#xA;    - script_2.go&#xA;&#xA;    - sub_package_name&#xA;        - utility_1.go&#xA;    - go.mod&#xA;```&#xA;&#xA;The above is a simple folder structure for a package in golang; we have the package itself as the name of the folder. Inside of the package folder, we would have the scripts or sub-packages if any. Also, there is a `go.mod` file at the root of a go module, which contains the meta information about the module and its dependencies. 
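
The `go.mod` file mentioned above can be sketched minimally like this; the module path `example.com/package_name` is just a placeholder, not a real module:

```
module example.com/package_name

go 1.18
```

The `module` line names the module (which is how other code imports it), and the `go` line records the language version the module is written for.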
&#xA;&#xA;Let&#39;s take a look at the [fmt](https://github.com/golang/go/tree/master/src/fmt) package source code; it has a [print.go](https://github.com/golang/go/blob/master/src/fmt/print.go) file which has all the functions associated with printing, and similarly separate files for different core functionality.&#xA;&#xA;We will soon get into the details of packages in this series. Right now, we only care about the fundamentals of a package in golang.&#xA;&#xA;## Importing Packages&#xA;&#xA;We have already been importing a package since writing our hello world in go: the `fmt` package, which holds some functions for formatting, printing, logging and various string input/output manipulation. &#xA;&#xA;```go&#xA;import &#34;fmt&#34;&#xA;```&#xA;&#xA;So, we have used the simple `import &#34;package&#34;` statement, but there are a couple more ways to import packages if we have multiple packages to import.&#xA;&#xA;```go&#xA;import (&#xA;    &#34;fmt&#34;&#xA;    &#34;math&#34;&#xA;)&#xA;```&#xA;&#xA;Using the `()` and by specifying the names of the packages, we can import multiple packages at once. Also there is a harder way out there, if you really like to toil hard.&#xA;&#xA;```go&#xA;import &#34;fmt&#34;&#xA;import &#34;math&#34;&#xA;```&#xA;&#xA;This is generally avoided as it just looks too pythonic, and we are in golang. &#xA;&#xA;### Aliasing Imported packages&#xA;&#xA;We can alias a package to whatever name we want for use in the rest of the script file. This allows a bit better semantics, turning longer package names into readable code. &#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;&#x9;r &#34;math/rand&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;fmt.Println(r.Int())&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run import.go&#xA;3454565657742387432&#xA;```&#xA;&#xA;We have imported the package `math/rand`; here [rand](https://pkg.go.dev/math/rand@go1.18.1#Int) is a sub-package of the main package math. 
Hence we have aliased the rand package as `r`, and thus we can use `r` to access all the functions and other types from the package. &#xA;&#xA;&#xA;### Blank Package Import&#xA;&#xA;We can even import a package without using it and not get a compilation error. The blank identifier is used in golang to ignore initialized or returned values in any context and avoid the compilation warnings or errors. A blank import is mostly useful when we only want a package&#39;s initialization side effects.  &#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;_ &#34;fmt&#34;&#xA;)&#xA;&#xA;func main() {&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run blank_import.go&#xA;```&#xA;&#xA;Here, we have imported the `fmt` package with a `_` blank identifier, but we didn&#39;t use it anywhere; still, we don&#39;t get any error messages and the code compiled successfully.&#xA;&#xA;## Standard Library Packages&#xA;&#xA;The Golang Standard Library has a good number of packages which can be used for some general tasks like formatting input/output, file handling, web requests, system commands, etc. You can check out the entire list along with the documentation on the [official website](https://pkg.go.dev/std). &#xA;&#xA;We can import these standard library packages just by passing their name in string quotes like we did with `fmt` as `&#34;fmt&#34;`. We have previously used the `rand` package from the math standard library package as a sub-package by using the statement `&#34;math/rand&#34;`; if we want, we can import the entire `math` package as `&#34;math&#34;`, but that&#39;s unwanted and we import only the package which we really need.&#xA;&#xA;There are other packages as well, like `bufio` which is used for buffered reading and performing operations with text, `os` for working with file systems and operating system level stuff, and other packages which are specific to use cases like rendering templates, time, sorting, math operations, encoding, etc. We will dive into some of them throughout this series. 
&#xA;&#xA;## Installing Packages&#xA;&#xA;We can now get into installing other packages which are not in the standard library. You can get the documentation along with all references for a particular package on the official Golang [package repository](https://pkg.go.dev/). We use a CLI command to grab the packages into our `GOPATH`. OK, GOPATH, we have not covered this!&#xA;&#xA;### GOPATH&#xA;&#xA;GOPATH is the path or the location on your system&#39;s disk where all the packages and modules are stored. You can get the default location of your GOPATH environment variable with a simple shell command.&#xA;&#xA;```&#xA;$ echo $GOPATH&#xA;C:\Users\acer\go&#xA;```&#xA;&#xA;It has a few folders, namely `bin`, `pkg`, and `src`. These folders serve different purposes: &#xA;&#xA;- `bin` for storing the binaries generated from the `go install` command&#xA;- `pkg` for storing pre-compiled source files and objects for quicker generation of executables and compilation.&#xA;- `src` for storing all the go source files of packages and modules.&#xA;&#xA;### Go Get command&#xA;&#xA;Now, let&#39;s see how to install a package from the go community on GitHub.  &#xA;&#xA;```&#xA;go get github.com/gorilla/mux&#xA;```&#xA;&#xA;We have installed a package which is a powerful HTTP router and URL dispatcher, and it can also be used to make web applications. It&#39;s called `mux`; we won&#39;t be using it right away, this is just to get a feel for installing and playing with packages at the moment. &#xA;&#xA;After executing the `go get` command, you should see a folder added in `$GOPATH\pkg\mod` as `github.com\gorilla`, and inside of it we should have a mux folder with the latest version. So, the `go get` command is used to download and install a package along with all of its dependencies.&#xA;&#xA;### Set up a project for using a package&#xA;&#xA;Now, we have got the package, so we can import it from anywhere in our go environment. 
&#xA;&#xA;#### Create a new folder (any name)&#xA;&#xA;You can test a go package in an isolated environment outside the GOPATH by creating a module with the mod command. &#xA;The `go mod init` command is the official way to create modules in golang, and it creates a kind of environment to work on a templated project and structure the project/module/package properly. &#xA;&#xA;```&#xA;go mod init&#xA;```&#xA;&#xA;#### Install the packages &#xA;&#xA;We have already installed the package, but that was a global install in the GOPATH, so we need to install it in this module. &#xA;&#xA;```&#xA;go get github.com/gorilla/mux&#xA;```&#xA;&#xA;#### Use the package&#xA;&#xA;Now, we can move into actually using the package in our source go file. We won&#39;t do anything complicated, just a simple web server. It&#39;s too easy, don&#39;t worry!&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;net/http&#34;&#xA;&#xA;&#x9;&#34;github.com/gorilla/mux&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#xA;&#x9;router := mux.NewRouter()&#xA;&#xA;&#x9;router.HandleFunc(&#34;/&#34;, Server)&#xA;&#xA;&#x9;http.ListenAndServe(&#34;:8000&#34;, router)&#xA;}&#xA;&#xA;func Server(writer http.ResponseWriter, request *http.Request) {&#xA;&#x9;writer.Write([]byte(&#34;Hello Mux!&#34;))&#xA;}&#xA;```&#xA;&#xA;We first set up a router (which pairs an HTTP route with some logic) with the [NewRouter](https://pkg.go.dev/github.com/gorilla/mux#NewRouter) function provided by `mux`. We attach a function to this newly created router by pairing a URL with a function. So, in simple terms, when the mentioned URL is visited or a GET request is sent (don&#39;t get into too much detail), we want a function to be invoked or called which does something. Finally, we set up a web server that listens at a port with the created router.&#xA;&#xA;The final piece is the function which we will call when the URL is visited, i.e. the `Server` function; it can be any name. 
The function needs to have two arguments as it is invoked by the router: the writer and the request. The writer is a response writer, i.e. used to write the response message. We simply use the [Write](https://pkg.go.dev/net/http#ResponseWriter) method to write an array of bytes as the response. &#xA;&#xA;The type of the writer is specifically `http.ResponseWriter` as we want to write a simple HTTP response. Also, the request is a pointer to the [http.Request](https://pkg.go.dev/net/http#Request) type as we simply accept an HTTP request.&#xA;&#xA;So, on running the following script, we will be able to see a simple HTTP response on localhost at port `8000` or on your provided port.&#xA;&#xA;```&#xA;go run main.go&#xA;```&#xA;&#xA;![GO Gorilla MUX web server](https://res.cloudinary.com/techstructive-blog/image/upload/v1650645111/blog-media/gzje6ukyystp2x27u83o.png)&#xA;&#xA;That&#39;s it from this part. Reference for all the code examples and commands can be found in the [100 days of Golang](https://github.com/mr-destructive/100-days-of-golang/) GitHub repository.&#xA;&#xA;## Conclusion&#xA;&#xA;So, we were able to dive a bit deeper into golang packages and modules. We covered everything from installing to importing packages in Golang, and we also touched on the basics of initializing a module in Golang. Hopefully, we were able to get the basics covered when it comes to packages in Golang.&#xA;&#xA;Thank you for reading. If you have any questions or feedback, please let me know in the comments or on social handles. Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>Golang: Pointers</title>
      <link>https://www.meetgor.com/posts/golang-pointers</link>
      <description>Introduction In the tenth part of the series, we will be looking into an interesting concept in programming i.e. Pointer. It&#39;s a simple thing but a really power</description>
      <pubDate>Tue, 19 Apr 2022 00:00:00 UTC</pubDate>
      <content>## Introduction&#xA;&#xA;In the tenth part of the series, we will be looking into an interesting concept in programming, i.e. pointers. It&#39;s a simple thing but a really powerful concept. Using pointers, we can do several things very easily rather than writing a lot of code for a simple thing. We will be looking into basic concepts like declaration, referencing, and de-referencing, and some examples of passing by reference, along with pointers to struct instances. &#xA;&#xA;&#xA;## Pointers&#xA;&#xA;Pointers are simple; it&#39;s just their use cases that make them a big concept. Pointers are really powerful, they can do a lot of things that might seem impossible for a given problem. A pointer is a variable, but unlike other variables which store values in the form of integers, strings, booleans, etc., a pointer stores a memory address. A memory address can be any valid location in memory that generally holds a variable. &#xA;&#xA;So, using pointers we can play with the memory address of variables and modify the contents of a variable directly using its memory address rather than accessing the variable. In golang, we have ways to store pointers and perform operations on them.&#xA;&#xA;## Declaring Pointers&#xA;&#xA;To declare pointers in golang, we use the `*` before the data type we want to reference. This means a pointer needs to specify which data type it is referencing, as a safeguard against mismatched types. Initially, the pointer variable is `&lt;nil&gt;`, that is, it points to nothing; it is a nil pointer. &#xA;&#xA;```go&#xA;package main&#xA;&#xA;import &#34;fmt&#34;&#xA;&#xA;func main() {&#xA;&#x9;var ptr *int&#xA;&#x9;fmt.Println(ptr)&#xA;}&#xA;&#xA;```&#xA;&#xA;```&#xA;$ go run pointer.go&#xA;&lt;nil&gt;&#xA;```&#xA;&#xA;As we can see, the pointer that references an integer is initialized to nil. We have used `*` before the data type; this can be anything like `*string`, `*bool`, `*float64`, etc. 
&#xA;&#xA;## The * and &amp; in Pointers&#xA;&#xA;After declaring a pointer, we can now move into assigning a pointer a memory address. Using the `&amp;` or the ampersand operator, we can get the memory address of a variable. &#xA;&#xA;```go&#xA;n := 34&#xA;var a_pointer *int = &amp;n&#xA;fmt.Println(a_pointer)&#xA;```&#xA;&#xA;```&#xA;$ go run pointer.go&#xA;0xc0000a6080&#xA;```&#xA;&#xA;Here, we can see that the pointer variable is storing the memory address of an integer variable. Don&#39;t worry about the value of the pointer variable, it is just a memory location on your machine. So, we use the `&amp;` to access the memory address of any variable.&#xA;&#xA;We have seen that the `*` is used to declare a pointer variable, but it is also used for dereferencing a pointer. So, just as we use `&amp;` to get the memory address of a variable, we can use the `*` to get back the value from the memory address. Both are opposites in terms of accessing the value. &#xA;&#xA;```go&#xA;n := 34&#xA;var a_pointer *int = &amp;n&#xA;fmt.Println(a_pointer)&#xA;m := *a_pointer&#xA;fmt.Println(m)&#xA;```&#xA;&#xA;```&#xA;$ go run pointer.go&#xA;0xc0000a8080&#xA;34&#xA;```&#xA;&#xA;As we can see, we have accessed the value stored at the address in the pointer variable (`a_pointer`) by using the `*`. Here, the variable which we have created, `m`, will be of whatever type is stored at the memory address in the provided pointer variable. In this case it is `int`, but it can be anything.&#xA;&#xA;So, this is how `*` and `&amp;` work in Golang. The `*` is used for declaring pointer variables as well as de-referencing pointer variables, and the `&amp;` operator is used for accessing the memory address of a variable. &#xA;&#xA;That&#39;s basically the concept of pointers in golang. It&#39;s that simple. 
Using the simple concept of referencing and de-referencing, we can perform operations like passing by reference to functions, which allows us to pass the actual variable rather than a copy of the variable&#39;s value. &#xA;&#xA;## Passing by Reference to Function&#xA;&#xA;Now that we have the fundamentals of pointers cleared, we can move into actually using them to do some really useful operations. Generally, when we use parameters such as integers, strings, bools, etc., we are passing a copy of the variable into the function rather than the variable itself. This is where pointers come in. By using pointers to pass the memory address of the variables, we actually pass the location of the variables.&#xA;&#xA;Let&#39;s take a look at a simple example of a function that swaps the values of two variables.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import &#34;fmt&#34;&#xA;&#xA;func swap(x *int, y *int) {&#xA;&#x9;temp := *x&#xA;&#x9;*x = *y&#xA;&#x9;*y = temp&#xA;}&#xA;&#xA;func main() {&#xA;&#xA;&#x9;x := 3&#xA;&#x9;y := 6&#xA;&#x9;k := &amp;x&#xA;&#x9;p := &amp;y&#xA;&#x9;fmt.Printf(&#34;Before swapping : x = %d and y = %d.\n&#34;, x, y)&#xA;&#x9;swap(k, p)&#xA;&#x9;fmt.Printf(&#34;After swapping  : x = %d and y = %d.\n&#34;, x, y)&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run pointer.go&#xA;Before swapping : x = 3 and y = 6.&#xA;After swapping  : x = 6 and y = 3.&#xA;```&#xA;&#xA;We can see here that we have used pointers to pass parameters to a function. Without using pointers, the value of the variable is passed as a copy, but by using pointers, we are actually passing the memory address. In the main function, we first store the memory addresses of the two variables `x` and `y` into two different pointer variables. We can then construct a function that accepts two memory addresses and performs further operations. &#xA;&#xA;Inside the function, we have de-referenced the pointer variables with `*`. 
Don&#39;t confuse `x *int` with `*x`. We use `x *int` to tell the function that the parameter is a pointer to an integer value, and `*x` is used to de-reference the memory address which is stored in `x`. &#xA;&#xA;So, simply we: &#xA;- store the value at the memory address stored in `x` in the temp variable.&#xA;- store the value at the memory address stored in `y` into the memory address stored in `x`. &#xA;- store the value of the temp variable into the memory address stored in `y`.&#xA;&#xA;We have successfully swapped two values without returning any values from the function.&#xA;&#xA;## Pointer to a Struct Instance/Object&#xA;&#xA;We can even modify the values of struct objects/instances by referencing the instance through a pointer. By assigning the address of a struct instance to a pointer variable, we have access to its associated properties and functions. Thereby we can modify the contents directly from the pointer variable.&#xA;&#xA;Let&#39;s take a look at a basic example of modifying properties using a pointer to a struct instance.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import &#34;fmt&#34;&#xA;&#xA;type Book struct {&#xA;&#x9;pages int&#xA;&#x9;genre string&#xA;&#x9;title string&#xA;}&#xA;&#xA;func main() {&#xA;&#x9;new_book := Book{120, &#34;fiction&#34;, &#34;Harry Potter&#34;}&#xA;&#x9;fmt.Println(new_book)&#xA;&#x9;fmt.Printf(&#34;Type of new_book -&gt; %T\n&#34;, new_book)&#xA;&#x9;book_ptr := &amp;new_book&#xA;&#x9;book_ptr.title = &#34;Games of Thrones&#34;&#xA;&#x9;fmt.Println(new_book)&#xA;}&#xA;&#xA;```&#xA;&#xA;```&#xA;$ go run pointer.go&#xA;{120 fiction Harry Potter}&#xA;Type of new_book -&gt; main.Book&#xA;{120 fiction Games of Thrones}&#xA;```&#xA;&#xA;So, we have created a pointer variable to a value of the struct type `Book`; this gives us access to the memory addresses associated with the various properties defined in the struct. 
Using the pointer variable, we can access properties and thereby change their values directly, as we have the memory address stored in `book_ptr`. So, if we say `book_ptr.title = &#34;Games of Thrones&#34;`, we are storing the string directly into the memory of the `new_book` object, as `book_ptr` refers to the memory address of the struct object `new_book`. Note that `book_ptr.title` is Go&#39;s shorthand for `(*book_ptr).title`; the compiler automatically de-references struct pointers for field access. &#xA;&#xA;Here, we have literally changed the value of a property in a struct object using pointers. This is really powerful and time-saving. If pointers were not a thing, you would have to write a separate function for doing the same. &#xA;&#xA;That&#39;s it from this part. Reference for all the code examples and commands can be found in the [100 days of Golang](https://github.com/mr-destructive/100-days-of-golang/) GitHub repository.&#xA;&#xA;## Conclusion&#xA;&#xA;So, that&#39;s it, we have covered enough basics of pointers to be able to understand the working of simple scripts or programs. Even simple data structures like slices or strings can be understood better by using pointers in golang. From this part of the series, we were able to understand the declaration, referencing, and de-referencing of pointers along with passing by reference to functions and creating pointers to struct instances. &#xA;&#xA;Thank you for reading. If you have any questions or feedback, please let me know in the comments or on social handles. Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>Golang: Structs</title>
      <link>https://www.meetgor.com/posts/golang-structs</link>
      <description>Introduction Moving on to the 9th part of the series, we will be understanding structs in golang. Structs are an important aspect of programming in Golang, they</description>
      <pubDate>Thu, 14 Apr 2022 00:00:00 UTC</pubDate>
      <content>## Introduction&#xA;&#xA;Moving on to the 9th part of the series, we will be understanding structs in golang. Structs are an important aspect of programming in Golang; they provide a way to define custom types and add functionality to them. We will be understanding the basics of operating on structs like declaration, initialization and adding functional logic to those structs. &#xA;&#xA;## Structs in Golang&#xA;&#xA;Structs or structures in Golang are sequences or collections of built-in data types grouped as a single type. Just like we have int, string, float, and complex, we can define our own data types in golang. They can consist of built-in data types as mentioned and also certain functions or methods which can be used to operate on them. Using structs, we can create custom data types that meet the specific requirements of our problem. We can define structs and later, inside functions, create instances of those structures. &#xA;&#xA;Structures are like a template or blueprint representation of data. A struct definition doesn&#39;t hold actual data in memory; it is just used to construct objects of that type. After defining a struct, we can create instances or objects of that struct. These instances actually hold data in memory at run time, so we basically deal with objects in the actual program. We&#39;ll see certain concepts like creating instances, declaring and defining structs, accessing data from instances and so on in the following sections of the article.&#xA;&#xA;```&#xA;Struct / Class &#xA;&#xA;Template / Structure for creating custom data types &#xA;&#xA;- Properties  (variables and constants defined inside a structure)&#xA;- Methods     (functions that are bound to a struct)&#xA;&#xA;```&#xA;&#xA;## Declaring Struct&#xA;&#xA;We can declare structs by using the keyword `type` followed by the name of the struct, after the name, the `struct` keyword itself, and finally a set of braces `{}`. 
Inside the braces, we define the structure, i.e. which type of data is to be stored and the names of the respective variables.&#xA;&#xA;```go&#xA;type Article struct {&#xA;    title string&#xA;    is_published bool&#xA;    words int&#xA;}&#xA;```&#xA;&#xA;We have declared a struct or a custom data type or a class (not really) in golang with the name `Article` that has a few associated properties/variables inside of it. We have `title` as a string, `is_published` as a boolean, and `words` as an integer value. This constructs a simple type in golang with a defined structure. We can further use this Article struct as a data type in the main function or any appropriate scope for actually assigning the structure memory at runtime. &#xA;&#xA;### Struct Naming Convention&#xA;&#xA;There are a few things that we need to understand and make a note of, especially the naming convention.&#xA;&#xA;- The struct name should be capitalized if you want to make it publicly accessible.&#xA;- The variable/property names i.e. `title`, `is_published`, and `words` should also be capitalized if you want to make them accessible from other packages.&#xA;&#xA;This might not be important right now but it is worth knowing for later use cases. Let&#39;s say we want to use a struct from other files or modules; for that, the name of the struct in the file/script where the struct is defined should have the `Capitalized` convention. If you have a simple and single file script/program, you can keep it `lowercased` or `camelCased`. &#xA;&#xA;Leaving that aside for now, we will try to focus on the essence of structs in golang. &#xA;&#xA;## Creating Instances/Objects of Structs&#xA;&#xA;Now, after defining the struct, we need to create instances or objects of it. This can be done in several ways, like using a struct literal, key-value pairs, and the new function. 
We&#39;ll look into each of them in this section.&#xA;&#xA;### Using struct literal&#xA;&#xA;The simplest and most straightforward way to initialize a struct is to use the struct literal, just like we did with Maps, Slices, and Arrays. We basically pass the values of the respective fields in the struct.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import &#34;fmt&#34;&#xA;&#xA;type Article struct {&#xA;&#x9;title        string&#xA;&#x9;is_published bool&#xA;&#x9;words        int&#xA;}&#xA;&#xA;func main() {&#xA;&#x9;golang := Article{&#34;Golang Intro&#34;, true, 2000}&#xA;&#x9;fmt.Println(golang)&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run struct.go&#xA;{Golang Intro true 2000}&#xA;```&#xA;&#xA;We have created the object or instance of the struct `Article` using the short variable declaration operator `:=`. Inside the `{}` braces, we can assign values, but those values need to be in the same order as defined in the struct definition, else it gives a compilation error of `type mismatch`. So, here we have assigned the values `title`, `is_published`, and `words` as `Golang Intro`, `true`, and `2000` respectively, in that order. &#xA;&#xA;### Using Key-value pairs&#xA;&#xA;We can also use the `key-value` notation for assigning values in the instance. With the previous method, we need to specify and thus initialize all the properties at once, but using this method we have a bit more flexibility. &#xA;&#xA;```go&#xA;vim := Article{title: &#34;Vim: Keymapping&#34;, is_published: false}&#xA;fmt.Println(vim)&#xA;```&#xA;&#xA;```&#xA;$ go run struct.go&#xA;{Vim: Keymapping false 0}&#xA;```&#xA;&#xA;Here, we have provided the key i.e. the variable name inside the struct, and then provided the value to it separated by a colon `:`. Using this way of initializing instances of structs, we have better control and flexibility in providing a default value for that object. 
In the example above, we didn&#39;t initialize the property `words`, but it is automatically initialized to `0`: once the object is created, the memory allocation is completed, and thereby every field needs to have a default value.&#xA;&#xA;### Using the new function&#xA;&#xA;We can use the [new](https://pkg.go.dev/builtin#new) function to create a new instance of a struct. Though we can&#39;t provide initial values using the new function, all the properties are initialized with their respective default values. Further, if we want to modify the values, we can access each property (variables in struct) using the `dot operator` and assign the desired values. &#xA;&#xA;```go&#xA;django := *new(Article)&#xA;fmt.Println(django)&#xA;```&#xA;&#xA;```&#xA;$ go run struct.go&#xA;{ false 0}&#xA;```&#xA;&#xA;We have used the new function to allocate memory for an instance of the struct with the provided name. This function basically initializes all the properties to their default values and returns a pointer to that memory address. If we store the result of the new function in a variable directly, we would get a pointer, but we need the object itself, so we use `*` before the new function so as to de-reference the pointer. &#xA;&#xA;So, we have stored the default values in the newly created object of the Article structure in `django`; this gives the default values like an empty string `&#34;&#34;`, the default boolean value `false`, and the default integer value `0`. If we don&#39;t dereference the pointer and use it like `django := new(Article)`, we get a pointer in that variable, printed as `&amp;{ false 0}`. Hence we use `*` before the new function. &#xA;&#xA;#### Accessing/Assigning values to properties&#xA;&#xA;We can now change the values of the properties in the object of the struct using the dot operator. 
We basically use the instance object name followed by a `.` and the property name to set its value.&#xA;&#xA;```go&#xA;django := *new(Article)&#xA;fmt.Println(django)&#xA;&#xA;django.title = &#34;Django View and URLs&#34;&#xA;django.words = 3500&#xA;django.is_published = true&#xA;fmt.Println(django)&#xA;```&#xA;&#xA;```&#xA;$ go run struct.go&#xA;{ false 0}&#xA;{Django View and URLs true 3500}&#xA;```&#xA;&#xA;So, here we have used the object name, which is `django`, accessed each property by name with the `dot operator`, and thereby set the value as per the requirement. Note, we have not used the `:=` operator as the properties have already been initialized; we simply need to modify the default value.&#xA;&#xA;&#xA;## Creating Functions associated with Structs&#xA;&#xA;We can now move into creating functions in the struct; by adding functions/methods to structs, we can incorporate a lot of functionality into the structure of our data type. For instance, we can set the value of a string as `&#34;Empty&#34;` or `&#34;NA&#34;` beforehand rather than an empty string `&#34;&#34;`. 
&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import &#34;fmt&#34;&#xA;&#xA;type Mail struct {&#xA;&#x9;sender     string&#xA;&#x9;subject    string&#xA;&#x9;sent       bool&#xA;&#x9;word_count int&#xA;}&#xA;&#xA;func (m Mail) check_spam() {&#xA;&#x9;if m.subject == &#34;&#34; {&#xA;&#x9;&#x9;fmt.Println(&#34;Spam!&#34;)&#xA;&#x9;} else {&#xA;&#x9;&#x9;fmt.Println(&#34;Safe!&#34;)&#xA;&#x9;}&#xA;}&#xA;&#xA;func main() {&#xA;&#x9;mail_one := *new(Mail)&#xA;&#x9;fmt.Printf(&#34;Mail one: &#34;)&#xA;&#x9;mail_one.check_spam()&#xA;&#xA;&#x9;mail_two := Mail{&#34;xyz@xyz.com&#34;, &#34;Golang Structs&#34;, true, 100}&#xA;&#x9;fmt.Printf(&#34;Mail two: &#34;)&#xA;&#x9;mail_two.check_spam()&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run methods.go&#xA;Mail one: Spam!&#xA;Mail two: Safe!&#xA;```&#xA;&#xA;We define a function associated with a struct by providing a receiver, i.e. a parameter name and the struct name in parentheses before the function name. Here, we have used `(m Mail)` so as to reference the instance of the struct the function is called on. This basically binds the function to the struct, and hence it becomes a method of that struct. &#xA;&#xA;Further, we can access the properties of the struct by their names using the dot operator. We are just checking whether the subject property in the instance is empty or not and simply printing text to the console. We are accessing the function and calling it with the syntax `instance_name.function_name()`; here the function name is `check_spam` and the object name is `mail_one` for the first instance. Thereby we have called the function which is bound to the instance of the struct. As we have called the function on the instance name, the receiver `(m Mail)` has taken the current instance and passed it as the instance of the struct. 
Hence we are able to access the current instance&#39;s properties within the function/method.&#xA;&#xA;#### Adding a return statement&#xA;&#xA;By simply providing the return type and return statement with value, we can create functions of specific return types. &#xA;&#xA;```go&#xA;package main&#xA;&#xA;import &#34;fmt&#34;&#xA;&#xA;type Mail struct {&#xA;&#x9;sender     string&#xA;&#x9;subject    string&#xA;&#x9;sent       bool&#xA;&#x9;word_count int&#xA;}&#xA;&#xA;func (m Mail) check_spam() bool {&#xA;&#x9;if m.subject == &#34;&#34; {&#xA;&#x9;&#x9;return true&#xA;&#x9;} else {&#xA;&#x9;&#x9;return false&#xA;&#x9;}&#xA;}&#xA;&#xA;func (m Mail) print_spam(spam bool) {&#xA;&#x9;if spam {&#xA;&#x9;&#x9;fmt.Println(&#34;Spam!!&#34;)&#xA;&#x9;} else {&#xA;&#x9;&#x9;fmt.Println(&#34;Safe!!&#34;)&#xA;&#x9;}&#xA;}&#xA;&#xA;func main() {&#xA;&#x9;mail_one := *new(Mail)&#xA;&#x9;fmt.Printf(&#34;Mail one: &#34;)&#xA;&#x9;is_mail_1_spam := mail_one.check_spam()&#xA;&#x9;mail_one.print_spam(is_mail_1_spam)&#xA;&#xA;&#x9;mail_two := Mail{&#34;xyz@xyz.com&#34;, &#34;Golang Structs&#34;, true, 100}&#xA;&#x9;fmt.Printf(&#34;Mail two: &#34;)&#xA;&#x9;is_mail_2_spam := mail_two.check_spam()&#xA;&#x9;mail_two.print_spam(is_mail_2_spam)&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run methods.go&#xA;Mail one: Spam!!&#xA;Mail two: Safe!!&#xA;```&#xA;&#xA;We have modified the `check_spam` function so that it returns a boolean value. If the subject is empty, it returns true, else it returns false. Also, we have added a `print_spam` function which takes a boolean value as a parameter and prints text according to that value. This is how we work with functions in structs. We have passed the return value of the `check_spam` function as a parameter to the `print_spam` function. &#xA;&#xA;&#xA;### Constructor in Structs&#xA;&#xA;Constructors are special methods that are invoked when the instance of a struct is created i.e. the properties are assigned an initial value or default value. 
In this way, we can perform basic operations which we need to perform after the instantiation of the struct.&#xA;&#xA;Golang does not have built-in constructors, but it is quite easy to create one. We simply need to create a function with an appropriate name (don&#39;t clash it with the struct name!), providing all the parameters that are in the struct so as to initialize them, and finally the return value as a reference to the struct instance. &#xA;&#xA;```go&#xA;package main&#xA;&#xA;import &#34;fmt&#34;&#xA;&#xA;type Repository struct {&#xA;&#x9;name       string&#xA;&#x9;file_count int&#xA;}&#xA;&#xA;func New_Repository(name string, file_count int) *Repository {&#xA;&#x9;file_count++&#xA;&#x9;name = &#34;Test&#34;&#xA;&#x9;return &amp;Repository{name, file_count}&#xA;}&#xA;&#xA;func main() {&#xA;&#x9;blog := *New_Repository(&#34;&#34;, 0)&#xA;&#x9;fmt.Println(blog)&#xA;}&#xA;&#xA;```&#xA;&#xA;```&#xA;$ go run constructor.go&#xA;{Test 1}&#xA;```&#xA;&#xA;We have created a function that is technically acting like a constructor as it sets a default value to the properties in the structure. We have struct `Repository` containing `name` as a string and `file_count` as an integer. We created a constructor function named `New_Repository` that basically takes in the properties of the struct; remember, they haven&#39;t been initialized yet, as we are writing the constructor for that very purpose. We have to pass the parameters with initial values and let the constructor modify them before it returns the instance. &#xA;&#xA;That&#39;s it from this part. Reference for all the code examples and commands can be found in the [100 days of Golang](https://github.com/mr-destructive/100-days-of-golang/) GitHub repository.&#xA;&#xA;## Conclusion&#xA;&#xA;So, from this part of the series, we are able to understand the basics of structs in golang. We covered declaration, definition, and adding methods in a struct. This gives a glimpse of Object-Oriented Programming in Golang. 
Thank you for reading. If you have any questions or feedback, please let me know in the comments or on social handles. Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>Golang: Functions</title>
      <link>https://www.meetgor.com/posts/golang-functions</link>
      <description>Introduction In the eighth part of the series, we will be exploring functions in golang. We will be diving into some basics of functions in golang like declarat</description>
      <pubDate>Wed, 13 Apr 2022 00:00:00 UTC</pubDate>
      <content>## Introduction&#xA;&#xA;In the eighth part of the series, we will be exploring functions in golang. We will be diving into some basics of functions in golang like declaration, definition and calling. We won&#39;t be exploring all the topics of functions as it is quite a large topic to cover in one shot. So, building from the base, we will be starting from the basic declaration to simple return statements. &#xA;&#xA;## Functions in Golang&#xA;&#xA;Functions in golang are a simple way to structure a block of code that can be reusable. Functions also allow us to process a piece of logic and return the output. Functions allow us to write readable and scalable code as we have to write the code once and can re-use its functionality by calling it. &#xA;&#xA;## Declaring Functions &#xA;&#xA;We have already defined a function, if you have followed the series so far, or even written a `hello-world` application. The `main` function is the most fundamental function we can define in golang. The main function is complicated if we dive deeper, but in the simplest of terms it acts as an entry point for the entire program. &#xA;&#xA;```go&#xA;package main&#xA;&#xA;func main() {&#xA;&#xA;}&#xA;```&#xA;&#xA;We have written the above code a lot of times till now, but never really talked about its significance. Here we will understand the terminologies related to the main function. A function is declared with the `func` keyword along with the name of the function. There need to be `()` parentheses after the name of the function; optionally, it can take parameters inside the parentheses to be used inside the function. &#xA;&#xA;We define the core functionality or the core logic of the function inside the braces `{}`. We also have the `return` keyword which can return values from the function to the block where we have called the function. Usually, we call a function from another function (most of the time it&#39;s the `main` function). 
The `return` keyword is not mandatory and it is usually added at the end of the function block, just before the closing braces `}`. &#xA;&#xA;```go&#xA;func hello_world_007() {&#xA;&#xA;}&#xA;```&#xA;&#xA;We can define a custom function outside the main function by giving it an appropriate name. For the time being we can leave it empty and further define the logic of the actual function. &#xA;&#xA;The name of the function can be given as per the following standards:&#xA;&#xA;- Using letters `a-z A-Z`, numbers `0-9`, and underscores `_` in the name.&#xA;- Should not contain any spaces in-between the name.&#xA;- Should not begin with a number.&#xA;&#xA;## Defining Functions&#xA;&#xA;Inside the `{}` we define the actual functionality/logic of the function. The variables inside the function will remain local to the function and can&#39;t be accessed or altered from outside the function, though if we really want to access some global variables (from main or other functions) we can pass parameters; we will look into it in the next few sections. For the time being, we will be focusing on the actual code block inside the function. &#xA;&#xA;```go&#xA;&#xA;func hello_world() {&#xA;    fmt.Println(&#34;Hello World&#34;)&#xA;}&#xA;```&#xA;&#xA;This is a basic function that just calls another function `Println` from the fmt package, which basically prints text in the console. Though we have defined the function, it won&#39;t print the content to the console yet, as we are not calling the function anywhere. Now, we can get a step ahead and start working with variables inside the function.&#xA;&#xA;```go&#xA;func hello_world() {&#xA;    name := &#34;Gopher&#34;&#xA;    fmt.Println(&#34;Hello&#34;, name)&#xA;}&#xA;```&#xA;&#xA;We have now added the local variable `name` inside the function, so this variable can only be referred to inside that particular function. 
&#xA;&#xA;## Calling Functions&#xA;&#xA;We can call the function from the main function or any other function by just specifying the name along with the `()` and optionally the parameters inside the parentheses. &#xA;&#xA;```go&#xA;package main&#xA;&#xA;import &#34;fmt&#34;&#xA;&#xA;func main() {&#xA;    hello_world()    &#xA;}&#xA;&#xA;func hello_world() {&#xA;    name := &#34;Gopher&#34;&#xA;    fmt.Println(&#34;Hello&#34;, name)&#xA;}&#xA;&#xA;```&#xA;&#xA;```&#xA;$ go run func.go&#xA;Hello Gopher&#xA;```&#xA;&#xA;So, we define the function `hello_world` and call the function by using the statement `hello_world()` inside the main function, and now we are able to actually run the function. &#xA;&#xA;## Passing Parameters&#xA;&#xA;We can optionally pass values from one function to another and process them for further computation and programming. So, we can pass parameters in a function by specifying the name to be used inside the function followed by the type of that variable. &#xA;&#xA;```go&#xA;package main&#xA;&#xA;import &#34;fmt&#34;&#xA;&#xA;func main() {&#xA;&#xA;&#x9;greet_me(&#34;Meet&#34;)&#xA;&#x9;n := &#34;John&#34;&#xA;&#x9;greet_me(n)&#xA;}&#xA;&#xA;func greet_me(name string) {&#xA;&#x9;fmt.Println(&#34;Hello,&#34;, name, &#34;!&#34;)&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run func.go&#xA;Hello, Meet !&#xA;Hello, John !&#xA;```&#xA;&#xA;We have used the parameter `name` as a string in the function and used it inside the function body. The argument name used in the main function can be anything and does not necessarily have to be the same as the parameter declared in the function declaration. For instance, we have used the variable `n` in the main function, which is passed in the function call. We can even pass a literal value as it is to the function in golang.  &#xA;&#xA;## Return Keyword&#xA;&#xA;We can use the return keyword to actually return a value from the function and not just display the message. 
The returned value can be later used from other places in the program. &#xA;&#xA;```go&#xA;package main&#xA;&#xA;import &#34;fmt&#34;&#xA;&#xA;func main() {&#xA;&#xA;&#x9;// return value&#xA;&#x9;y := line_eq(3, 1, 2)&#xA;&#x9;fmt.Println(&#34;for x = 3 , y = &#34;, y)&#xA;}&#xA;&#xA;func line_eq(x int, m int, c int) int {&#xA;&#x9;return ((m * x) + c)&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run func.go&#xA;for x = 3 , y =  5&#xA;```&#xA;&#xA;So, here we are able to fetch the returned value from the function, store it in another variable, and further compute the required logic. We also need to specify the return type of the function after the parameters like `func (parameters) return-type { }`. Here, we need to return a value of the specified type from the function, else it gives a compilation error. &#xA;&#xA;So, we basically need to provide the return type in the declaration and the return statement in the body, and then capture the value from the function call. &#xA;&#xA;### Multiple return values&#xA;&#xA;We can also provide multiple return values by providing a list of return types like `(type1, type2, type3, ...)`. We can return the values by separating the values by a comma. 
So, while calling the function, we need to specify comma-separated variable names, and these will capture the values from the function call.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import &#34;fmt&#34;&#xA;&#xA;func main() {&#xA;&#xA;&#x9;// multiple return values&#xA;&#x9;s, c, odd := sqube(5)&#xA;&#x9;fmt.Println(&#34;for x = 5 , x^2 =&#34;, s, &#34;x^3 =&#34;, c)&#xA;&#x9;if odd {&#xA;&#x9;&#x9;fmt.Println(&#34;x is odd&#34;)&#xA;&#x9;} else {&#xA;&#x9;&#x9;fmt.Println(&#34;x is even&#34;)&#xA;&#x9;}&#xA;}&#xA;&#xA;func sqube(x int) (int, int, bool) {&#xA;&#x9;square := x * x&#xA;&#x9;cube := x * x * x&#xA;&#x9;var is_odd bool&#xA;&#x9;if x%2 == 0 {&#xA;&#x9;&#x9;is_odd = false&#xA;&#x9;} else {&#xA;&#x9;&#x9;is_odd = true&#xA;&#x9;}&#xA;&#x9;return square, cube, is_odd&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run func.go&#xA;for x = 5 , x^2 = 25 x^3 = 125&#xA;x is odd&#xA;```&#xA;&#xA;So, we have returned multiple values from the function, two integers and one boolean. The parameter is a single integer, and we need to declare 3 variables in order to capture all the values from the function call. Thus, we are able to get all the values from the function.&#xA;&#xA;That&#39;s it from this part. Reference for all the code examples and commands can be found in the [100 days of Golang](https://github.com/mr-destructive/100-days-of-golang/) GitHub repository.&#xA;&#xA;## Conclusion&#xA;&#xA;So, from this part of the series, we are able to understand the basics of functions in golang. We covered declaration, definition, simple return statements, and function calling. Thank you for reading. If you have any questions or feedback, please let me know in the comments or on social handles. Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>Golang: Maps</title>
      <link>https://www.meetgor.com/posts/golang-maps</link>
      <description>Introduction In the seventh part of the series, we will be covering Maps. We have covered some basic data structures like arrays and slices, and now we can move</description>
      <pubDate>Tue, 12 Apr 2022 00:00:00 UTC</pubDate>
      <content>## Introduction&#xA;&#xA;In the seventh part of the series, we will be covering Maps. We have covered some basic data structures like arrays and slices, and now we can move into maps or hash tables. Maps allow us to store key-value pairs of a particular type. In this part of the series, we will be covering the basics of Maps in Golang like declaration, iteration, and creating, updating, and deleting keys from the map. &#xA;&#xA;## Maps in Golang&#xA;&#xA;Maps in golang are data structures that provide a way to store key-value pairs. A map is also known as a hash table. Maps allow us to create unique keys which are associated with a value. A basic example of a map is a frequency table of a list of numbers, where we store the frequency of each element occurring in the list. Let&#39;s say we have a list of numbers as `[3, 5, 9, 4, 9, 5, 5]`; we can create a map of the frequency of these elements as `[3:1, 5:3, 4:1, 9:2]`. Here, we have stored the information in the form of `key-value` pairs as a frequency. So, `3` has occurred one time, `5` 3 times, and so on. &#xA;&#xA;Maps do not store keys in any particular order; they are unordered, so we need to manually sort the keys if we want a specific order. &#xA;&#xA;## Declaring Maps &#xA;&#xA;We can declare maps by defining the two types we are mapping between. We can map any type with any other, like a character with an integer, an integer with an integer as we saw earlier, etc. We have several ways to declare maps in golang, like using the map literal, the make function, the new function, and a few others. We&#39;ll look into each of them in brief.&#xA;&#xA;### Simple map literal&#xA;&#xA;As we saw with arrays and slices, we used literals to declare and initialize an array or a slice. Similarly, we can use the map literal to create a map in golang. 
Here, we use the `map` keyword followed by the two types of data we are going to map between. &#xA;&#xA;```go&#xA;package main&#xA;&#xA;import &#34;fmt&#34;&#xA;&#xA;func main() {&#xA;&#xA;&#x9;char_freq := map[string]int{&#xA;&#x9;&#x9;&#34;M&#34;: 1,&#xA;&#x9;&#x9;&#34;e&#34;: 2,&#xA;&#x9;&#x9;&#34;t&#34;: 1,&#xA;&#x9;}&#xA;&#x9;fmt.Println(char_freq)&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run map.go&#xA;map[M:1 e:2 t:1]&#xA;```&#xA;&#xA;We have used the map keyword to initialize a map from `string` to `int`. The first data type is declared inside the square brackets `[]` and the second data type outside the square brackets. We use the `{}` to define the map values. We can even leave the `{}` empty. &#xA;&#xA;```&#xA;char_freq := map[string]int{}&#xA;```&#xA;&#xA;We initialize the values of the map by specifying a key of the key type, in this example a string, followed by a colon `:` and finally the value. Each key-value pair is separated by a comma (`,`). &#xA;&#xA;### Using make function&#xA;&#xA;We can even use the [make](https://pkg.go.dev/builtin#make) function to create a map in golang. The make function is used for allocating memory. The make function allocates memory which might be enough for the initial values provided. It allocates more memory as the map grows in size. We use the make function by providing the `map` keyword along with the data types of the key-value pairs to be mapped. Optionally we can provide the capacity as we provided in the slice declaration. &#xA;&#xA;```go&#xA;marks := make(map[int]int)&#xA;marks[65] = 8&#xA;marks[95] = 3&#xA;marks[80] = 5&#xA;fmt.Println(marks)&#xA;```&#xA;&#xA;```&#xA;$ go run map.go&#xA;map[65:8 80:5 95:3]&#xA;```&#xA;&#xA;We have used the `make` function for declaring the map; if no size hint is given, the runtime starts with a small allocation, and the capacity is roughly doubled and re-allocated as the map grows. 
&#xA;&#xA;### Using the new function&#xA;&#xA;We can even use the [new](https://pkg.go.dev/builtin#new) function (a bit hacky) to create a map in golang. The new function is also used to allocate memory, but it is not the same as the `make` function: it returns a pointer to the allocated memory. So, we store the result of the new function call in a pointer variable. A pointer in golang is simply a reference to a memory address; we&#39;ll dive into pointers in a different section. After the pointer is assigned a memory address, we can dereference it and thus access the original value, which is the map itself. Note that new only gives us a pointer to a nil map, so we still have to assign an empty map literal before adding any keys.&#xA;&#xA;```go&#xA;name := new(map[byte]int)&#xA;*name = map[byte]int{}&#xA;name_map := *name&#xA;&#xA;name_map[&#39;m&#39;] = 1&#xA;name_map[&#39;e&#39;] = 2&#xA;name_map[&#39;t&#39;] = 1&#xA;&#xA;fmt.Println(name_map)&#xA;&#xA;for k, _ := range name_map {&#xA;    fmt.Printf(&#34;%c\n&#34;, k)&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run map.go&#xA;map[101:2 109:1 116:1]&#xA;m&#xA;e&#xA;t&#xA;```&#xA;&#xA;So, we can see we created the map with the new function and stored the address in a pointer; later we initialized the empty map and stored the initial reference in the same pointer address. Then, we can finally store the map in another variable so that we can use it as a normal map. So, this is how we declare the map using the new function. &#xA;&#xA;## Access Keys and Values in Maps&#xA;&#xA;We can access the values by simply accessing them with the keys. Using square brackets with the key inside them, we get the value associated with that key. For example, with the map `[&#34;M&#34;: 1, &#34;E&#34;: 2, &#34;T&#34;:1]`, we can use `map_name[&#34;E&#34;]` which will get the value as `2`. &#xA;&#xA;### Length of Map&#xA;&#xA;The length of the map can be accessed using the len function, which returns the number of key-value pairs in the map. 
&#xA;&#xA;```go&#xA;char_freq := map[string]int{&#xA;    &#34;M&#34;: 1,&#xA;    &#34;e&#34;: 2,&#xA;    &#34;t&#34;: 1,&#xA;}&#xA;fmt.Println(char_freq)&#xA;fmt.Println(len(char_freq))&#xA;```&#xA;&#xA;```&#xA;$ go run map.go&#xA;map[M:1 e:2 t:1]&#xA;3&#xA;```&#xA;&#xA;## Check for existing Keys in Map&#xA;&#xA;We can check if a key exists in the map by using the comma-ok syntax. The value can be accessed using the first variable, and if the key doesn&#39;t exist, the second variable is set to false. So, we can verify the existence of a key in the map using the two-variable approach.&#xA;&#xA;```go&#xA;name_map := map[byte]int{&#xA;    &#39;m&#39;: 1,&#xA;    &#39;e&#39;: 2,&#xA;    &#39;t&#39;: 1,&#xA;}&#xA;var key byte = &#39;t&#39;&#xA;value, exist := name_map[key]&#xA;if exist {&#xA;    fmt.Printf(&#34;The key %c exists and has value %d\n&#34;, key, value)&#xA;} else {&#xA;    fmt.Printf(&#34;The key %c does not exist.\n&#34;, key)&#xA;}&#xA;```&#xA;&#xA;```    &#xA;$ go run map.go&#xA;The key t exists and has value 1&#xA;```&#xA;&#xA;So, we can see the exist value is true if the key exists and false if it doesn&#39;t. We can then verify if a particular key exists in a map or not. &#xA;&#xA;## Adding and Modifying Keys/Values in Maps&#xA;&#xA;We can add a key-value pair in a map by just using the key as we did in the initialization process. We simply pass the key in the square brackets `[]` and assign it a value appropriate to the data type used in the map. 
&#xA;&#xA;```go&#xA;cart_list := map[string]int{&#xA;    &#34;shirt&#34;: 2,&#xA;    &#34;mug&#34;: 4,&#xA;    &#34;shoes&#34;: 3,&#xA;}&#xA;&#xA;fmt.Println(cart_list)&#xA;&#xA;cart_list[&#34;jeans&#34;] = 1&#xA;cart_list[&#34;mug&#34;] = 3&#xA;fmt.Println(cart_list)&#xA;```&#xA;&#xA;```&#xA;$ go run map.go&#xA;map[mug:4 shirt:2 shoes:3]&#xA;map[jeans:1 mug:3 shirt:2 shoes:3]&#xA;```&#xA;&#xA;We can access the keys in the map by just using the key as it is and altering the value it holds; the same thing applies to the addition of key-value pairs: we use the key and assign the value associated with it. &#xA;&#xA;## Delete Keys in Maps&#xA;&#xA;We can delete key-value pairs in the map using the `delete` function. We pass in the `key` and the map to delete the key-value pair from the map. &#xA;&#xA;```go&#xA;cart_list := map[string]int{&#xA;    &#34;shirt&#34;: 2,&#xA;    &#34;mug&#34;:   4,&#xA;    &#34;shoes&#34;: 3,&#xA;}&#xA;fmt.Println(cart_list)&#xA;&#xA;cart_list[&#34;jeans&#34;] = 1&#xA;cart_list[&#34;mug&#34;] = 3&#xA;delete(cart_list, &#34;shoes&#34;)&#xA;&#xA;fmt.Println(cart_list)&#xA;```&#xA;&#xA;```&#xA;$ go run map.go&#xA;map[mug:4 shirt:2 shoes:3]&#xA;map[jeans:1 mug:3 shirt:2]&#xA;```&#xA;&#xA;So, we can see the key-value pair was deleted from the map. &#xA;&#xA;## Iterate over a Map&#xA;&#xA;We can iterate over a map similar to the range keyword iteration for slices and arrays, but the exception here is that the range gives us the key and value instead of the index and a copy of the element. 
&#xA;&#xA;```go&#xA;is_prime := map[int]bool{&#xA;    7:  true,&#xA;    9:  false,&#xA;    13: true,&#xA;    15: false,&#xA;    16: false,&#xA;}&#xA;&#xA;for key, value := range is_prime {&#xA;    fmt.Printf(&#34;%d -&gt; %t\n&#34;, key, value)&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run map.go&#xA;9 -&gt; false&#xA;13 -&gt; true&#xA;15 -&gt; false&#xA;16 -&gt; false&#xA;7 -&gt; true&#xA;```&#xA;&#xA;So, we can observe that we can access the keys and values in the map using the range keyword for iterating over the map. Inside the for loop, we can refer to the assigned values present in the map. &#xA;&#xA;### Use only key or value while iterating&#xA;&#xA;If we don&#39;t use one of the variables, `key` or `value`, the compiler gives us an unused variable error, so we have an alternative: the blank identifier, written as the `_` underscore character.&#xA;&#xA;```go&#xA;is_prime := map[int]bool{&#xA;    7:  true,&#xA;    9:  false,&#xA;    13: true,&#xA;    15: false,&#xA;    16: false,&#xA;}&#xA;&#xA;for key, _ := range is_prime {&#xA;    fmt.Printf(&#34;Key : %d\n&#34;, key)&#xA;}&#xA;&#xA;for _, value := range is_prime {&#xA;    fmt.Printf(&#34;Value: %t\n&#34;, value)&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run map.go&#xA;Key : 7&#xA;Key : 9&#xA;Key : 13&#xA;Key : 15&#xA;Key : 16&#xA;Value: true&#xA;Value: false&#xA;Value: true&#xA;Value: false&#xA;Value: false&#xA;```&#xA;&#xA;So, we use the `_` to ignore a variable in the loop; if we don&#39;t need a variable, we can ignore it completely with the blank identifier and thus prevent any compilation errors/warnings. So, here if we want to only access keys, we use `key, _` in order to fetch only keys and silence the values in the map. If we want to access only the values, we can use `_, value` so as to get all the values from the map. 
The variable names `key` and `value` can be anything, but note that they are only in scope inside the loop.&#xA;&#xA;&#xA;That&#39;s it from this part. Reference for all the code examples and commands can be found in the [100 days of Golang](https://github.com/mr-destructive/100-days-of-golang/) GitHub repository.&#xA;&#xA;## Conclusion&#xA;&#xA;So, from this part of the series, we were able to understand the basics of maps in golang. We covered some basic stuff including the declaration, initialization, and iteration. Maps are quite simple but important for creating interesting applications.&#xA;&#xA;Thank you for reading. If you have any questions or feedback, please let me know in the comments or on social handles. Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>Golang: Slices</title>
      <link>https://www.meetgor.com/posts/golang-slices</link>
      <description>Introduction In the sixth part of the series, we will be covering slices. Slices are almost like arrays but have a lot of advantages over them, including flexib</description>
      <pubDate>Thu, 07 Apr 2022 00:00:00 UTC</pubDate>
      <content>## Introduction&#xA;&#xA;In the sixth part of the series, we will be covering slices. Slices are almost like arrays but have a lot of advantages over them, including flexibility and control over them. We can adjust the size and capacity of the data which we will store at a place using slices. We will be covering basic declaration, initialization, capacity in slices, iteration, and accessing the elements of the slices.&#xA;&#xA;## Slices in Golang&#xA;&#xA;Slices are Arrays but can provide more control and are more flexible than arrays. In slices, we can increase the size of the array/list of elements whenever required. We can even have a capacity for slices i.e. the maximum length we wish to grow the initial slice. &#xA;&#xA;Though slices are dynamic, it has a few disadvantages like compile safety, access time, comparability, etc. Everything has its pros and cons, you have to decide on the right data structure as per your problem statement and requirements. &#xA;&#xA;## Declaring Slices &#xA;&#xA;There are a couple of different ways in which we can declare a slice that might be an uninitialized or initialized slice. Some of the standard ways include using the `make` function and the normal array-like declaration though there are other methods as well including using the `new` function. We&#39;ll explore some of the most easiest and handy ways to declare and initialize slices. &#xA;&#xA;### Using array-like declaration&#xA;&#xA;Slices can be declared quite similar to arrays but we don&#39;t enter the initial size(length). As discussed in the array part, we can use the var keyword and the square brackets. 
Though you don&#39;t have to enter the length inside the `[]` brackets, the type of the slice still needs to be specified.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import &#34;fmt&#34;&#xA;&#xA;func main() {&#xA;    var marks []int&#xA;    fmt.Println(marks)&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run slices.go&#xA;[]&#xA;```&#xA;&#xA;So, initially, it is empty without an element as we have not initialized any elements. If you enter the length in the `[]` brackets, it would be an array and not a slice.&#xA;&#xA;We can also initialize the elements in the slice with a slice literal as we did with arrays using an array literal. &#xA;&#xA;```go&#xA;frameworks := []string{&#34;Django&#34;, &#34;Laravel&#34;, &#34;Flask&#34;, &#34;Rails&#34;}&#xA;fmt.Println(frameworks)&#xA;```&#xA;&#xA;```&#xA;$ go run slices.go&#xA;[Django Laravel Flask Rails]&#xA;```&#xA;&#xA;### Using make function&#xA;&#xA;We can even use the [make](https://pkg.go.dev/builtin#make) function to create a slice in golang. The make function basically allows us to create a slice by providing the length as well as the capacity. Let&#39;s clear up the concept of capacity in slices first.&#xA;&#xA;#### Capacity in Slice&#xA;&#xA;Capacity in slices is the initial value provided during the declaration of a slice. It basically holds the capacity of the slice to grow beyond the length. OK, let&#39;s take an example: if you declare a slice using make with length 3 and capacity 5, you initially can access 3 elements but memory has been allocated for 5 elements; if your slice grows beyond this capacity, a new underlying array is allocated, typically with double the capacity. &#xA;&#xA;#### Obtain Length and Capacity of Slice &#xA;&#xA;We can get the capacity by using the [cap](https://pkg.go.dev/builtin#cap) function just like the `len` function. In slices, we can use the `len` function to get the length and the `cap` function to get its underlying capacity to grow.&#xA;&#xA;It&#39;s just a way for us to manage the memory reallocation for a slice. 
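The `len` and `cap` pair can be sketched in a couple of lines (a minimal example; the slice name `nums` is just for illustration):

```go
package main

import "fmt"

func main() {
    // length 2, capacity 6: two elements are accessible,
    // and there is room for six before any reallocation happens
    nums := make([]int, 2, 6)
    fmt.Println(len(nums), cap(nums)) // prints: 2 6
}
```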
Slices under the hood are arrays with a more dynamic punch. So now we can move a bit more confidently to the make function for declaring slices.&#xA;&#xA;### Back to make function&#xA;&#xA;The `make` function is used to declare and initialize a slice (not only slices but maps and channels as well). The function primarily takes in 3 parameters namely, the type of slice, the initial length of the slice, and optionally the capacity of the slice. If we don&#39;t provide the capacity, the capacity is set the same as the length.&#xA;&#xA;```go&#xA;var langs = make([]string, 3, 5)&#xA;&#xA;langs[0], langs[1], langs[2] = &#34;Python&#34;, &#34;Go&#34;, &#34;Javascript&#34;&#xA;fmt.Println(langs)&#xA;&#xA;fmt.Printf(&#34;Length = %d &#xA;Capacity = %d&#xA;&#34;, len(langs), cap(langs))&#xA;&#xA;langs = append(langs, &#34;Java&#34;, &#34;Kotlin&#34;, &#34;PHP&#34;)&#xA;&#xA;fmt.Println(langs)&#xA;fmt.Printf(&#34;Length = %d &#xA;Capacity = %d&#xA;&#34;, len(langs), cap(langs))&#xA;```&#xA;&#xA;```&#xA;$ go run slices.go&#xA;[Python Go Javascript]&#xA;Length = 3&#xA;Capacity = 5&#xA;[Python Go Javascript Java Kotlin PHP]&#xA;Length = 6&#xA;Capacity = 10&#xA;```&#xA;&#xA;So, there are a lot of things to take in here. We use the make function by passing in three parameters: the type in the form of `[]type`, which marks this as slice creation, since make is also used to create maps and channels. The next two parameters are the length and the capacity. So, we have initialized a slice of type string with length 3, i.e. we are saying we will initially access only three elements, and finally, the third parameter is the capacity, which sizes the already allocated array under the hood for the slice. So, we have already created an array (under the hood) with 5 elements but only 3 accessible from the slice interface.&#xA;&#xA;Further, we initialize/modify the elements in the created slice. 
We set 3 elements to some strings and that way we have all elements filled with non-default values in the slice. Now if we say `langs[3] = &#34;Something&#34;` it would give an error like `panic: runtime error: index out of range [3] with length 3`. This is a panic in golang, triggered here because we accessed an index beyond the slice&#39;s length. We have initialized the element in the underlying array but not in the slice interface. So, we have a particular function called append which appends and thus grows the length beyond its current value and refers to the elements initialized from the capacity.&#xA;&#xA;So initially the capacity was 5 and after adding the 6th element, it doubled to `10`. So, under the hood, all 5 elements in the array would have been re-allocated into a new memory location and the sixth element would have been added to the next location. This is how memory for elements in slices is efficiently re-allocated. We have a bit more control over the length and more flexibility than with arrays.&#xA;&#xA;### Using new function&#xA;&#xA;The [new](https://pkg.go.dev/builtin#new) function in golang is used to allocate memory for a slice/any other type. We can use the new function to mimic the make function by slicing its result so as to set an initial length and capacity. You can refer to this [article](https://www.golangprograms.com/go-language/slices-in-golang-programming.html), which provided the original idea. 
&#xA;&#xA;```go&#xA;langs2 := new([3]string)[0:2]&#xA;&#xA;langs2[0], langs2[1] = &#34;Python&#34;, &#34;Go&#34;&#xA;fmt.Println(langs2)&#xA;&#xA;fmt.Printf(&#34;Length = %d &#xA;Capacity = %d&#xA;&#34;, len(langs2), cap(langs2))&#xA;&#xA;langs2 = append(langs2, &#34;Java&#34;, &#34;Kotlin&#34;, &#34;PHP&#34;)&#xA;&#xA;fmt.Println(langs2)&#xA;fmt.Printf(&#34;Length = %d &#xA;Capacity = %d&#xA;&#34;, len(langs2), cap(langs2))&#xA;```&#xA;&#xA;```&#xA;$ go run slices.go&#xA;[Python Go]&#xA;Length = 2&#xA;Capacity = 3&#xA;[Python Go Java Kotlin PHP]&#xA;Length = 5&#xA;Capacity = 6&#xA;```&#xA;&#xA;So, it works almost like the make function. But if you don&#39;t provide any length in the `[]` brackets, like `new([]string)`, you get a pointer to a nil slice, an empty slice `[]` with zero length and zero capacity.&#xA;&#xA;&#xA;## Adding elements in Slice&#xA;&#xA;We had a few spoilers for this already with the `append` function. The [append](https://pkg.go.dev/builtin#append) function takes in the slice and then the list of values that we want to add. Here, if the capacity of the slice is exceeded, the slice is re-allocated to a new location, the existing elements are moved there, and then the provided elements are added. &#xA;&#xA;```go&#xA;var percentages = []float64{78.8, 85.7, 94.4, 79.8}&#xA;fmt.Println(percentages)&#xA;percentages = append(percentages, 60.5, 75.6)&#xA;fmt.Println(percentages)&#xA;```&#xA;&#xA;```&#xA;$ go run slices.go&#xA;[78.8 85.7 94.4 79.8]&#xA;[78.8 85.7 94.4 79.8 60.5 75.6]&#xA;```&#xA;&#xA;So, here we can see the append function adding the elements to a slice. You can pass as many elements as you require to the append function. 
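As a side note, append can also take another whole slice in place of individual values by expanding it with the `...` syntax (a minimal sketch; the variable names are just illustrative):

```go
package main

import "fmt"

func main() {
    base := []float64{78.8, 85.7}
    extra := []float64{94.4, 79.8}
    // the ... expands extra into individual arguments to append
    base = append(base, extra...)
    fmt.Println(base) // prints: [78.8 85.7 94.4 79.8]
}
```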
Calling the append function again and again might degrade efficiency though, so try to add the right number of elements in a single call.&#xA;&#xA;## Accessing and Modifying elements in Slice&#xA;&#xA;We can simply access the elements using the index in the `[]` brackets. But there is more to it in slices. We can actually get slices of slices. Even in arrays or slices, we can get the elements between two specific indices like 2 and 4, so we can write `array[2:4]` to get the elements at indices `2` and `3`, as the upper bound is non-inclusive. But if we want to have all the elements from a specific index to the last element, we can leave the number blank, so `[2:]` would give elements from index 2 to the last index in the array/slice.&#xA;&#xA;```go&#xA;scores := []int{80, 85, 90, 75, 60, 56, 83}&#xA;fmt.Println(scores)&#xA;fmt.Println(&#34;From index 2 to 4&#34;, scores[2:5])&#xA;fmt.Println(&#34;From index 0 to 2&#34;, scores[:3])&#xA;fmt.Println(&#34;From index 3 to 6&#34;, scores[3:])&#xA;```&#xA;&#xA;```&#xA;$ go run slices.go&#xA;[80 85 90 75 60 56 83]&#xA;From index 2 to 4 [90 75 60]&#xA;From index 0 to 2 [80 85 90]&#xA;From index 3 to 6 [75 60 56 83]&#xA;```&#xA;&#xA;So, we are able to perform index slicing in golang on arrays and slices.&#xA;Further, as with arrays, we can also modify elements in slices. Using the index of an element, we can access it, perform operations on it, and thus change the value of the element.&#xA;&#xA;```go&#xA;word := []byte{&#39;f&#39;, &#39;u&#39;, &#39;z&#39;, &#39;z&#39;, &#39;y&#39;}&#xA;fmt.Printf(&#34;%s&#xA;&#34;, word)&#xA;word[0] = &#39;b&#39;&#xA;word[len(word)-1] = &#39;z&#39;&#xA;fmt.Printf(&#34;%s&#xA;&#34;, word)&#xA;```&#xA;&#xA;```&#xA;fuzzy&#xA;buzzz&#xA;```&#xA;&#xA;So, now we can also modify existing values of elements in slices. &#xA;&#xA;## Deleting elements from Slice&#xA;&#xA;We can also remove an element from the slice i.e. shrink the length of the slice. 
There is no built-in function to remove an element from a slice, but we can work around that with the append function in golang. The elements after the index to be deleted are appended to the slice of elements before that index.&#xA;&#xA;```&#xA;10 20 30 40 50 60&#xA;0  1  2  3  4  5&#xA;&#xA;Delete the element at index -&gt; 2&#xA;&#xA;Copy from 3 to 5 into the slice from 0 to 1&#xA;&#xA;// append(slice[:2], slice[2+1:]...)&#xA;            ^            ^&#xA;            |            | &#xA;          10 20       40 50 60        &#xA;&#xA;append 40 50 60 -&gt; 10 20&#xA;&#xA;10 20 40 50 60&#xA;```&#xA;&#xA;Here in the above example, we append the elements after the `to be deleted` index onto the slice of elements before it.  &#xA;&#xA;```go&#xA;marklist := []int{80, 85, 90, 75, 60}&#xA;fmt.Println(marklist)&#xA;&#xA;var index int&#xA;fmt.Printf(&#34;Enter the index to be deleted: &#34;)&#xA;fmt.Scan(&amp;index)&#xA;&#xA;elem := marklist[index]&#xA;&#xA;&#xA;// append in such a way that the element to be removed is excluded&#xA;&#xA;marklist = append(marklist[:index], marklist[index+1:]...)&#xA;&#xA;&#xA;fmt.Printf(&#34;The element %d was deleted.&#xA;&#34;, elem)&#xA;fmt.Println(marklist)&#xA;```&#xA;&#xA;```&#xA;$ go run slices.go&#xA;&#xA;[80 85 90 75 60]&#xA;Enter the index to be deleted: 3&#xA;The element 75 was deleted.&#xA;[80 85 90 60]&#xA;```&#xA;&#xA;## Iterate through a slice&#xA;&#xA;As slices are, under the hood, built on arrays, we have a quite similar approach to iterating over slices in golang.&#xA;&#xA;### Using three statements for loop&#xA;&#xA;We can use the three statements for loop i.e. the initialization, condition, and incrementation procedure. The counter is set from 0 or any other starting value as the index of the slice, next we have the end loop condition i.e. 
a condition to check when to exit, and finally the amount by which we need to increment the counter.&#xA;&#xA;```go&#xA;code := []rune{&#39;g&#39;, &#39;o&#39;, &#39;l&#39;, &#39;a&#39;, &#39;n&#39;, &#39;g&#39;}&#xA;for i := 0; i &lt; len(code); i++ {&#xA;    fmt.Printf(&#34;%c&#xA;&#34;, code[i])&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run slices.go&#xA;g&#xA;o&#xA;l&#xA;a&#xA;n&#xA;g&#xA;```&#xA;&#xA;### Using Range-based for loop&#xA;&#xA;We can use range-based for loops to iterate over the slice elements. The range keyword is passed the slice name or a slice of an array to iterate over. Using the two variables i.e. the iterator and the copy of the element, we can access the index and the element in the slice.&#xA;&#xA;```go&#xA;scores := []int{80, 85, 90, 75, 60, 56, 83}&#xA;for _, s := range scores {&#xA;    fmt.Println(s)&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run slices.go&#xA;80&#xA;85&#xA;90&#xA;75&#xA;60&#xA;56&#xA;83&#xA;```&#xA;&#xA;We can also range over a slice of the slice, i.e. `scores[1:4]`, to access a specific range of elements using index slicing.&#xA;&#xA;```go&#xA;scores := []int{80, 85, 90, 75, 60, 56, 83}&#xA;for _, s := range scores[1:4] {&#xA;    fmt.Println(s)&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run slices.go&#xA;85&#xA;90&#xA;75&#xA;```&#xA;&#xA;### Using for loop with range&#xA;&#xA;We can even use the for loop in a while-like style with range, getting a bit of both the above methods: the range iteration and the typical for loop access. 
&#xA;&#xA;```go&#xA;start, i, end := 2, 2, 5&#xA;&#xA;modes := []string{&#34;normal&#34;, &#34;command&#34;, &#34;insert&#34;, &#34;visual&#34;, &#34;select&#34;, &#34;replace&#34;}&#xA;&#xA;for range modes[start:end] {&#xA;    fmt.Printf(&#34;Element at index %d = %s &#xA;&#34;, i, modes[i])&#xA;    i++&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run slices.go&#xA;Element at index 2 = insert&#xA;Element at index 3 = visual&#xA;Element at index 4 = select&#xA;```&#xA;&#xA;So by using the range keyword we were able to iterate over the slice without assigning the iterator and the copy of the element; we manually set a counter `i` and increment it as per our liking. Using index slicing we were able to get the elements between particular indices.&#xA;&#xA;That&#39;s it from this part. Reference for all the code examples and commands can be found in the [100 days of Golang](https://github.com/mr-destructive/100-days-of-golang/) GitHub repository.&#xA;&#xA;## Conclusion&#xA;&#xA;So, from this part of the series, we were able to understand the basics of slices in golang. We covered some basic stuff including the declaration, initialization, and iteration. We also covered the under-the-hood working of slices and how they relate to arrays.&#xA;Thank you for reading. If you have any questions or feedback, please let me know in the comments or on social handles. Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>Golang: Arrays</title>
      <link>https://www.meetgor.com/posts/golang-arrays</link>
      <description>Introduction In this fifth section of Golang, we will be understanding the basics of arrays. We will be covering some basic operations on arrays like declaratio</description>
      <pubDate>Wed, 06 Apr 2022 00:00:00 UTC</pubDate>
      <content>## Introduction&#xA;&#xA;In this fifth section of Golang, we will be understanding the basics of arrays. We will be covering some basic operations on arrays like declaration, initialization, modifications, and iterating over an array.&#xA;&#xA;## Declaring Arrays&#xA;&#xA;Arrays are type of data structure that allow us to store multiple items at continuous memory locations of the same type. In golang, we can create arrays similar to any variable but by adding a few bits and pieces like the `[]` square braces, length of the array, values, etc. In golang, we cannot resize the length once it is initialized.&#xA;&#xA;To create a basic array in golang, we can use the following code:&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import &#34;fmt&#34;&#xA;&#xA;func main() {&#xA;    var languages[4]string&#xA;    languages[0] = &#34;Python&#34;&#xA;    fmt.Println(languages)&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run basic.go&#xA;[Python   ]&#xA;```&#xA;&#xA;   Here, we have initialized a string array of size 4 and not initialized any values of the elements in the declaration. We later set the value of the 0th index or the first element in the array to a string and still rest of the elements are by default set to empty strings `&#34;&#34;`. The point ot be noted here, is that the size of the array cannot be changed later nor you can leave the size empty while declaring an array in Golang.&#xA;   &#xA;We can even initialize elements at the time of declaring the array as follows:&#xA;&#xA;```go&#xA;lang_array := [4]string {&#34;Python&#34;, &#34;Go&#34;, &#34;Javascript&#34;, &#34;C++&#34;}&#xA;fmt.Println(lang_array)&#xA;```&#xA;&#xA;```&#xA;$ go run basic.go&#xA;[Python Go Javascript C++]&#xA;```&#xA;&#xA;So, we use the walrus operator `:=` to initialize an array with the values at the time of declaration.  
&#xA;&#xA;### Letting the Compiler make out the array length&#xA;&#xA;We can even let the compiler decide the length of the array, using the `...` syntax inside the brackets. This only works with an array literal, that is, by initializing values in the `{}` braces, so all the elements need to be provided up front.&#xA;&#xA;```go&#xA;cart := [...]string {&#34;Bag&#34;, &#34;Shirt&#34;, &#34;Watch&#34;, &#34;Book&#34;}&#xA;fmt.Println(cart)&#xA;```&#xA;&#xA;```&#xA;$ go run basic.go&#xA;[Bag Shirt Watch Book]&#xA;```  &#xA;&#xA;## Access and Modify Elements&#xA;&#xA;To access an element in the array, we can use the index of that element, which starts from 0 as usual in programming.&#xA;&#xA;```go&#xA;marks := [6]int {85, 89, 75, 93, 98, 60}&#xA;fmt.Println(marks[1])&#xA;fmt.Println(marks[5])&#xA;fmt.Println(marks[3])&#xA;```&#xA;&#xA;```&#xA;$ go run basic.go&#xA;89&#xA;60&#xA;93&#xA;```&#xA;We can now access the element at a particular index in the array. Next, we will see how to modify or edit the elements which are already initialized.&#xA;&#xA;```go&#xA;name := [5]byte {&#39;f&#39;,&#39;u&#39;,&#39;z&#39;,&#39;z&#39;,&#39;y&#39;}&#xA;fmt.Printf(&#34;%s&#xA;&#34;,name)&#xA;name[0] = &#39;b&#39;&#xA;name[4] = &#39;z&#39;&#xA;fmt.Printf(&#34;%s&#xA;&#34;,name)&#xA;```&#xA;&#xA;```&#xA;$ go run basic.go&#xA;fuzzy&#xA;buzzz&#xA;```&#xA;By accessing the index of the element we can set an appropriate value for the element in the array, and thus we have modified the contents of the array.&#xA;&#xA;## Find Length of Array&#xA;&#xA;To find the length of the Array, we have the `len` function. 
The [len](https://pkg.go.dev/builtin#len) function takes in the array as the parameter and returns the size of the array (as an int).&#xA;&#xA;```go&#xA;code := [7]rune {&#39;#&#39;, &#39;5&#39;, &#39;g&#39;, &#39;t&#39;, &#39;m&#39;, &#39;y&#39;, &#39;6&#39;}&#xA;fmt.Println(&#34;The length of the array is :&#34;, len(code))&#xA;```&#xA;&#xA;```&#xA;$ go run basic.go&#xA;The length of the array is : 7&#xA;```&#xA;In the previous few sections, we talked about letting the compiler make out the length of the array while declaring and initializing the array; we can use the len function to calculate the length of such an array for further computation.&#xA;&#xA;```go&#xA;cart := [...]string {&#34;Bag&#34;, &#34;Shirt&#34;, &#34;Watch&#34;, &#34;Book&#34;}&#xA;fmt.Printf(&#34;There are %d items in your cart&#xA;&#34;, len(cart))&#xA;```&#xA;&#xA;```&#xA;$ go run basic.go&#xA;There are 4 items in your cart&#xA;```&#xA;&#xA;We can now get the length of the arrays even with the `[...]` syntax using the len function.&#xA;&#xA;## Iterate over an Array&#xA;&#xA;We can move on to the most important aspect when it comes to arrays i.e. to iterate over each element. 
We can use various types of for loops like the three statement for loop, the range-based loop, or a while-style loop.&#xA;&#xA;### Three statement for loop&#xA;&#xA;We can use the three statement for loop: an initialization statement setting the counter to `0`, a condition checking that the counter (i) is less than the length of the array using the `len` function, and an increment of one each time.&#xA;&#xA;```go&#xA;code := [7]rune {&#39;#&#39;, &#39;5&#39;, &#39;g&#39;, &#39;t&#39;, &#39;m&#39;, &#39;y&#39;, &#39;6&#39;}&#xA;&#xA;for i := 0; i&lt;len(code); i++{&#xA;    fmt.Printf(&#34;%c&#xA;&#34;,code[i])&#xA;}&#xA;```  &#xA;&#xA;```&#xA;$ go run basic.go&#xA;#&#xA;5&#xA;g&#xA;t&#xA;m&#xA;y&#xA;6&#xA;```&#xA;&#xA;Thus, we can iterate over the array with the three statement for loop in golang.&#xA;&#xA;### Using range-based loop&#xA;&#xA;We can use the `range` keyword to iterate over arrays in golang. The range keyword iterates over the array using two variables, i.e. the iterator and a copy of the element at the iterator. We don&#39;t have any use for the iterator, so we name it `_`; otherwise the compiler gives an error for declared but unused variables. So, we only require the copy of the element in this case, and we name it `s` or any other name you like.&#xA;&#xA;```go&#xA;cart := [...]string {&#34;Bag&#34;, &#34;Shirt&#34;, &#34;Watch&#34;, &#34;Book&#34;}&#xA;&#xA;for _, s := range cart{&#xA;    fmt.Println(s)&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run basic.go&#xA;Bag&#xA;Shirt&#xA;Watch&#xA;Book&#xA;```    &#xA;Thus, using the range based for loop we were able to iterate over each element of the array without needing any check condition or incrementation of a counter/iterator.&#xA;&#xA;That&#39;s it from this part. 
Reference for all the code examples and commands can be found in the [100 days of Golang](https://github.com/mr-destructive/100-days-of-golang/) GitHub repository.&#xA;&#xA;## Conclusion&#xA;&#xA;So, from this part of the series, we were able to understand the basics of arrays in golang. We covered everything from the declaration of arrays to iteration.&#xA;Thank you for reading. If you have any questions or feedback, please let me know in the comments or on social handles. Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>Golang: Input</title>
      <link>https://www.meetgor.com/posts/golang-input</link>
      <description>Introduction In this fourth section of Golang, we will be understanding the basics of user input. In golang, we can get user input with several functions most o</description>
      <pubDate>Tue, 05 Apr 2022 00:00:00 UTC</pubDate>
      <content>## Introduction&#xA;&#xA;In this fourth section of Golang, we will be understanding the basics of user input. In golang, we can get user input with several functions most of which are similar to the `C` programming language like `scanf`. This type of input is quite powerful and gives more control on the input to be received.&#xA;&#xA;## Scan Function&#xA;&#xA;The [Scan](https://pkg.go.dev/fmt#Scan) function helps in getting a value with space as delimiter i.e. The input is stored before a space is encountered. This means the input is only limited to adding a space or a new line. We can use the function by passing the reference to the variable we are going to store the input value. So, we can have a basic input in Golang as follows:&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import &#34;fmt&#34;&#xA;&#xA;func main() {&#xA;    var pname string&#xA;    fmt.Println(&#34;Enter your favourite programming language: &#34;)&#xA;    fmt.Scan(&amp;pname)&#xA;    fmt.Println(&#34;So, your favourite programming language is&#34;,pname)&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run scan.go&#xA;Enter your favorite programming language:&#xA;python&#xA;So, your favorite programming language is python&#xA;```&#xA;&#xA;We need to declare the variable to take input as we need a reference of that variable to store the input. We will be talking about `&amp;` and pointers in a separate article. We use the `Scan` function by passing the reference to the variable `pname` like `&amp;pname` which means, fetch the memory address of the variable `name`, we just pass the address as `int` to the Scan function and it does the rest to store the input value in it. We then as usual access the variable and operations on it.&#xA;&#xA;Here, if you add a space in the input, the value after the space won&#39;t be picked by the Scan function. It strictly stops accepting values input after it sees space. We can use this to input multiple variables at once. 
We know scan gets input before encountering space, so we can pass multiple variable references and add them as input.&#xA;&#xA;```go&#xA;var (&#xA;    name   string&#xA;    age    int&#xA;    gender rune&#xA;)&#xA;fmt.Println(&#34;Enter your name age and gender: &#34;)&#xA;fmt.Scan(&amp;name, &amp;age, &amp;gender)&#xA;fmt.Printf(&#34;Hello %s, you are a %c and %d years old&#34;, name, gender, age)&#xA;```&#xA;&#xA;```&#xA;$ go run scan.go&#xA;Enter your name age and gender:&#xA;Meet 19 77&#xA;Hello Meet, you are a M and 19 years old&#xA;```&#xA;&#xA;Here, we are declaring multiple variables like `name`, `age`, and `gender` as `string`, `int`, and `rune` respectively. Then, we can input all of these in a single scan statement by comma-separated variables. Here, we need to input the `rune` as an int value because under the hood it is an integer alias. So, we inputted `77` which is equivalent to `M` in ASCII characters and even Unicode character sets. Thus, we were able to input multiple variables with the Scan function.&#xA;&#xA;## Scanf functions&#xA;&#xA;The [Scanf](https://pkg.go.dev/fmt#Scanf) function is quite similar to the `scanf` in C programming language as it allows to specify the type of the incoming input. This will solve the problem of us inputting `77` instead of `M` in the gender variable in the previous example. The Scanf function allows us to take input by specifying the placeholder types and the delimiters as well. The delimiter is basically the separator between two or more entities. We can either use space separation or `&#xA;` as an input delimiter i.e. 
the way we want to separate inputs from each other while taking input.&#xA;&#xA;```go&#xA;var (&#xA;    name   string&#xA;    age    int&#xA;    gender rune&#xA;)&#xA;fmt.Println(&#34;Enter your name age and gender: &#34;)&#xA;fmt.Scanf(&#34;%s %d %c&#34;, &amp;name, &amp;age, &amp;gender)&#xA;fmt.Printf(&#34;Hello %s, you are a %c and %d years old&#34;, name, gender, age)&#xA;```&#xA;&#xA;```&#xA;$ go run scanf.go&#xA;Enter your name age and gender:&#xA;Meet 12 M&#xA;Hello Meet, you are a M and 12 years old&#xA;&#xA;&#xA;```&#xA;&#xA;How cool is that? It definitely gives much more control on what and how to take input. We are taking input as only space-separated values. Let&#39;s now try to get more control over how the input will be taken and stored.&#xA;&#xA;```go&#xA;var (&#xA;    name   string&#xA;    age    int&#xA;    gender rune&#xA;)&#xA;fmt.Println(&#34;Enter your name age and gender: &#34;)&#xA;fmt.Scanf(&#34;%s &#xA; %d %c&#34;, &amp;name, &amp;age, &amp;gender)&#xA;fmt.Printf(&#34;Hello %s, you are a %c and %d years old&#34;, name, gender, age)&#xA;```&#xA;&#xA;```&#xA;$ go run scanf.go&#xA;Enter your name age and gender:&#xA;Meet&#xA;12 M&#xA;Hello Meet, you are a M and 12 years old&#xA;```&#xA;&#xA;By adding `&#xA;` between the `%s`(name) and `%d`(age), we want the user to type the name on one line and age with gender on a different line. The age and gender as before separated by space.&#xA;&#xA;&#xA;## Scanln function&#xA;&#xA;The [Scanln](https://pkg.go.dev/fmt#Scanln) function is a modification of the Scan function as it only stops the input after a newline/enter is pressed.  
So, using this we can input multiple variables which are space-separated in a single line.&#xA;&#xA;```go&#xA;var s string&#xA;fmt.Println(&#34;Enter a string: &#34;)&#xA;fmt.Scanln(&amp;s)&#xA;fmt.Println(s)&#xA;```&#xA;&#xA;```&#xA;$ go run scanln.go&#xA;Enter a string:&#xA;&#xA;&#xA;&#xA;$ go run scanln.go&#xA;Enter a string:&#xA;Can&#39;t type&#xA;Can&#39;t&#xA;&#xA;$ ype&#xA;-bash: ype: command not found&#xA;```&#xA;&#xA;The Scanln function even accepts an empty string as input. It just needs to get the newline character and it will exit; it also only accepts space-separated values. The rest of the input after the space is discarded by the program and, as the transcript shows, the leftover `ype` even reached the shell after the program exited. More specifically, the input `Can&#39;t Type` was treated only as `Can&#39;t`; anything after the space is not considered in the input value.&#xA;&#xA;The key difference between Scan and Scanln is that Scanln stops reading at a newline, whereas the Scan function treats the newline/enter as just another space when there are multiple inputs still expected. 
The below example will make things absolutely clear.&#xA;&#xA;```go&#xA;// scan.go&#xA;package main&#xA;&#xA;import &#34;fmt&#34;&#xA;&#xA;func main() {&#xA;    var (&#xA;        name   string&#xA;        age    int&#xA;        gender rune&#xA;    )&#xA;    fmt.Println(&#34;Enter your name age and gender: &#34;)&#xA;    fmt.Scan(&amp;name, &amp;age, &amp;gender)&#xA;    fmt.Printf(&#34;Hello %s, you are a %c and %d years old&#34;, name, gender, age)&#xA;}&#xA;&#xA;```&#xA;&#xA;```go&#xA;//scanln.go&#xA;package main&#xA;&#xA;import &#34;fmt&#34;&#xA;&#xA;func main() {&#xA;    var s string&#xA;    fmt.Println(&#34;Enter a string: &#34;)&#xA;    fmt.Scanln(&amp;s)&#xA;    fmt.Println(&#34;Inputted string : &#34;, s)&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run scan.go&#xA;Enter your name age and gender:&#xA;&#xA;Meet&#xA;&#xA;14&#xA;&#xA;&#xA;&#xA;77&#xA;Hello Meet, you are a M and 14 years old&#xA;&#xA;$ go run scanln.go&#xA;Enter a string:&#xA;&#xA;Inputted string :&#xA;```&#xA;&#xA;We can see that, The Scan function won&#39;t exit until it has inputted all its input values even with newline and spaces. Whereas the Scanln function just waits for the newline character (Enter Key) to be pressed and it exits, thereby even allowing an empty string as input.&#xA;&#xA;That&#39;s it from this part. Reference for all the code examples and commands can be found in the [100 days of Golang](https://github.com/mr-destructive/100-days-of-golang/) GitHub repository.&#xA;&#xA;## Conclusion&#xA;&#xA;So, these are the basic input techniques in Golang. We saw functions in the `fmt` package like `Scan`, `Scanf`, and `Scanln` which allow us to get input in a specific pattern. Hopefully, from this article part, we can build a firm base for further exploration like Strings, Arrays, and the ways to input them. Thank you for reading. If you have any questions or feedback, please let me know in the comments or on social handles. Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>Golang: Conditionals and Loops</title>
      <link>https://www.meetgor.com/posts/golang-conditionals-loops</link>
      <description>Introduction Moving to the fourth part, we will be doing conditional statements and loops in golang. We will be seeing the basics of conditional statements like</description>
      <pubDate>Mon, 04 Apr 2022 00:00:00 UTC</pubDate>
<content>## Introduction&#xA;&#xA;Moving to the fourth part, we will be doing conditional statements and loops in golang. We will be seeing the basics of conditional statements like if-else and switch along with loops like for, while, and range-based loops. We won&#39;t be covering iterating over arrays in a loop as this requires an understanding of arrays. &#xA;&#xA;## Conditional statements&#xA;&#xA;Conditional statements are quite a fundamental aspect of learning a programming language. In golang, we have if-else conditional statements as well as switch cases. We will be exploring both of them in this section. Firstly, we will dive into if-else statements which are quite easy to understand.&#xA;&#xA;### if else&#xA;&#xA;An `if` statement is used for checking the validity of a condition. If the condition is true (condition is met), a particular set of statements is executed; otherwise (condition is not satisfied), a different set of statements gets executed. We can use a basic `if-else` statement in go with the following syntax:&#xA;&#xA;```go&#xA;if condition {&#xA;    // statements&#xA;} else {&#xA;    // statements&#xA;}&#xA;```&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import &#34;fmt&#34;&#xA;&#xA;func main() {&#xA;    age := 16&#xA;    if age &lt; 13 {&#xA;        fmt.Println(&#34;Kid&#34;)&#xA;    } else {&#xA;        fmt.Println(&#34;Teenager&#34;)&#xA;    }&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run if_else.go&#xA;Teenager&#xA;```&#xA;&#xA;We can also use else if for evaluating more than one condition rather than using nested if and else. 
&#xA;&#xA;```go&#xA;if condition {&#xA;    // statements&#xA;} else if condition {&#xA;    // statements&#xA;} else if condition {&#xA;    // statements&#xA;} else {&#xA;    // statements&#xA;}&#xA;```&#xA;&#xA;```go&#xA;year := 2&#xA;&#xA;if year == 1 {&#xA;    fmt.Println(&#34;Freshman&#34;)&#xA;} else if year == 2 {&#xA;    fmt.Println(&#34;Sophomore&#34;)&#xA;} else if year == 3 {&#xA;    fmt.Println(&#34;Junior&#34;)&#xA;} else if year == 4 {&#xA;    fmt.Println(&#34;Senior&#34;)&#xA;} else {&#xA;    fmt.Println(&#34;Probably a Graduate&#34;)&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run if_else.go&#xA;Sophomore&#xA;```&#xA;&#xA;Using `else if` we can evaluate multiple conditions. This style is much easier to read than nested `if else` statements, which become hard to follow for complex cases.&#xA;&#xA;&#xA;### switch &#xA;&#xA;We also have switch statements in golang which allow us to write cases for a given state of a variable. We can simply add cases for a given variable, the case should be a valid value that the variable can take. 
If a case is matched it breaks out of the switch statement without executing any statements below the matched case.&#xA;&#xA;The basic structure of the switch statements in golang is as follows:&#xA;&#xA;```go&#xA;switch variable {&#xA;case value1:&#xA;    //statements&#xA;case value2:&#xA;    //statements&#xA;&#xA;}&#xA;```&#xA;&#xA;Go also allows an expressionless switch, where each case holds a boolean condition and the first one that evaluates to true is executed:&#xA;&#xA;```go&#xA;age := 19 // the value of age was changed between the runs below&#xA;var result string&#xA;switch {&#xA;case age &lt; 13:&#xA;    result = &#34;Kid&#34;&#xA;case age &lt; 20:&#xA;    result = &#34;Teenager&#34;&#xA;case age &lt; 25:&#xA;    result = &#34;Adult&#34;&#xA;case age &lt; 35:&#xA;    result = &#34;Senior&#34;&#xA;}&#xA;fmt.Printf(&#34;The person is a %s with age %d.\n&#34;, result, age)&#xA;```&#xA;&#xA;```&#xA;$ go run switch.go&#xA;The person is a Senior with age 27.&#xA;&#xA;$ go run switch.go&#xA;The person is a Teenager with age 19.&#xA;&#xA;$ go run switch.go&#xA;The person is a Kid with age 11.&#xA;```&#xA;&#xA;This gives a good understanding of switch-case statements. We can give a variable to the switch statement and pick its value in the respective case statements to evaluate the result accordingly. The `default` statement is evaluated when there is no match among the given cases. 
&#xA;&#xA;```go&#xA;language := &#34;&#34; // the value of language was changed between the runs below&#xA;var devs string&#xA;switch language {&#xA;case &#34;go&#34;:&#xA;    devs = &#34;gopher&#34;&#xA;case &#34;rust&#34;:&#xA;    devs = &#34;rustacean&#34;&#xA;case &#34;python&#34;:&#xA;    devs = &#34;pythonista&#34;&#xA;case &#34;java&#34;:&#xA;    devs = &#34;Duke&#34;&#xA;default:&#xA;    language = &#34;javascript&#34;&#xA;    devs = &#34;developer&#34;&#xA;}&#xA;fmt.Println(&#34;A person who codes in&#34;, language, &#34;is called a&#34;, devs)&#xA;```&#xA;&#xA;```&#xA;$ go run switch.go&#xA;A person who codes in javascript is called a developer&#xA;&#xA;$ go run switch.go&#xA;A person who codes in python is called a pythonista&#xA;&#xA;$ go run switch.go&#xA;A person who codes in go is called a gopher&#xA;```&#xA;&#xA;This code will by default pick `javascript` and `developer` as the values for `language` and `devs` respectively if there is no match for the provided language or the language is left empty. &#xA;&#xA;We also have `fallthrough` in the golang switch, which lets more than one case body execute once a case is matched. Note that `fallthrough` transfers control to the body of the very next case without evaluating that case&#39;s condition at all; it simply runs the next case&#39;s statements unconditionally. 
&#xA;&#xA;```go&#xA;character := &#39;f&#39; // the value of character was changed between the runs below&#xA;fmt.Printf(&#34;The input character is = %c\n&#34;, character)&#xA;switch {&#xA;case character == 97:&#xA;    fmt.Printf(&#34;It&#39;s %c\n&#34;, character)&#xA;    fallthrough&#xA;case character &lt; 107 &amp;&amp; character &gt; 96:&#xA;    fmt.Println(&#34;It&#39;s between a and k&#34;)&#xA;    fallthrough&#xA;case character &lt; 117 &amp;&amp; character &gt; 98:&#xA;    fmt.Println(&#34;It&#39;s between a and t&#34;)&#xA;    fallthrough&#xA;case character &lt; 118 &amp;&amp; character &gt; 96:&#xA;    fmt.Println(&#34;It&#39;s between a and u&#34;)&#xA;default:&#xA;    fmt.Println(&#34;It&#39;s not a lowercase alphabet&#34;)&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run switch.go&#xA;The input character is = f&#xA;It&#39;s between a and k&#xA;It&#39;s between a and t&#xA;It&#39;s between a and u&#xA;&#xA;$ go run switch.go&#xA;The input character is = k&#xA;It&#39;s between a and t&#xA;It&#39;s between a and u&#xA;&#xA;$ go run switch.go&#xA;The input character is = a&#xA;It&#39;s a&#xA;It&#39;s between a and k&#xA;It&#39;s between a and t&#xA;It&#39;s between a and u&#xA;```&#xA;&#xA;So, here we can see that the fallthrough hits multiple cases. This is unlike the base case which exits the switch statement once a case has been satisfied. This can be helpful for situations where you really want to execute multiple case bodies for a given variable.&#xA;&#xA;## Loops&#xA;&#xA;We can now move on to loops in golang. We only have a `for` loop so to speak, but it can be used like any other looping statement such as the `while` loop or a range-based loop. We will first see the most fundamental loop statement in golang which is a three-component loop. &#xA;&#xA;### for loop&#xA;&#xA;We can have a simple for loop in golang by using the three statements namely `initialize`, `condition`, and `increment`. 
The structure of the loop is quite similar to the other programming languages.&#xA;&#xA;```go&#xA;for k := 0; k &lt; 4; k++ {&#xA;    fmt.Println(k)&#xA;}&#xA;```&#xA;&#xA;### Range-based loop&#xA;&#xA;We can even iterate over a string using the range keyword in golang. We need two variables for a range-based for loop in golang: one is the index, the 0 based position of the element, and the other is a copy of the element at that index in the array or string. Using the range keyword, we can iterate over the string one by one. &#xA;&#xA;```go&#xA;name := &#34;GOLANG&#34;&#xA;for i, s := range name {&#xA;    fmt.Printf(&#34;%d -&gt; %c\n&#34;, i, s)&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run for.go&#xA;0 -&gt; G&#xA;1 -&gt; O&#xA;2 -&gt; L&#xA;3 -&gt; A&#xA;4 -&gt; N&#xA;5 -&gt; G&#xA;```&#xA;&#xA;So, here we can see we have iterated over the string by each character. Using the range keyword in golang, the `i, s` pair is the index and the copy of the element at that index as discussed earlier. Since the value is already copied into the second variable, we don&#39;t have to index the string to access it. (Strictly speaking, ranging over a string yields runes with their byte indices, but for this ASCII string the two line up one to one.)&#xA;&#xA;### while loop (Go&#39;s while is for)&#xA;&#xA;There are no while loops as such in golang, but the for loop can also work similarly to the while loop. We can use a condition just after the `for` keyword to make it act like a while loop. &#xA;&#xA;&#xA;```go&#xA;for condition {&#xA;    // statements&#xA;}&#xA;```&#xA;&#xA;```go&#xA;count := 3&#xA;for count &lt; 9 {&#xA;&#x9;fmt.Println(count)&#xA;&#x9;count++&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run while.go&#xA;3&#xA;4&#xA;5&#xA;6&#xA;7&#xA;8&#xA;```&#xA;&#xA;We can see here that the condition is evaluated and the statements in the loop body are executed; when the condition evaluates to false, the flow moves out of the loop and we exit it. &#xA;&#xA;### Infinite loop&#xA;&#xA;We can run an infinite loop again using the same `for` keyword. 
We do not have any other keywords for loops in golang. &#xA;&#xA;```go&#xA;for {&#xA;    // statements&#xA;    // should have conditions to exit&#xA;}&#xA;```&#xA;&#xA;```go&#xA;flag := 4&#xA;for {&#xA;&#x9;flag++&#xA;&#x9;fmt.Println(flag)&#xA;}&#xA;```&#xA;&#xA;This should be used with a base condition to exit the loop, otherwise it will never terminate on its own and will just keep running until the process is killed.&#xA;&#xA;### Break &#xA;&#xA;If we want to exit out of a loop unconditionally, we can use the `break` keyword. This will break the loop and help us exit out of an infinite or a condition-bound loop too.&#xA;&#xA;```go&#xA;flag := 1&#xA;for {&#xA;    fmt.Println(flag)&#xA;    flag++&#xA;    if flag == 7 {&#xA;        fmt.Println(&#34;It&#39;s time to break at&#34;, flag)&#xA;        break&#xA;    }&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run infinite.go&#xA;1&#xA;2&#xA;3&#xA;4&#xA;5&#xA;6&#xA;It&#39;s time to break at 7&#xA;```&#xA;&#xA;As we can see, inside an infinite loop we were able to break out of it by using a conditional statement and the `break` keyword. The `break` keyword also applies to switch cases; it is basically the opposite of `fallthrough` in switch-case statements. By default (without using fallthrough), the switch breaks after a match has been found or the default case has been executed. &#xA;&#xA;### Continue&#xA;&#xA;We also have the counterpart of `break`, i.e. `continue`, which skips the rest of the current iteration and jumps back to the post statement/increment (in the case of for loops) or the condition evaluation (in the case of while-style loops). We basically start the next iteration of the loop when we encounter the continue, while preserving the counter/iterator state values. &#xA;&#xA;```go&#xA;counter := 2&#xA;for counter &lt; 4 {&#xA;    counter++&#xA;    if counter &lt; 4 {&#xA;        continue&#xA;    }&#xA;    fmt.Println(&#34;Missed the Continue? at counter =&#34;, counter)&#xA;}&#xA;```&#xA;&#xA;```&#xA;$ go run infinite.go&#xA;Missed the Continue? 
at counter = 4&#xA;```&#xA;&#xA;For following up with the code for this and all parts of the series, head over to the [100 days of Golang](https://github.com/mr-destructive/100-days-of-golang) GitHub repository. &#xA;&#xA;## Conclusion&#xA;&#xA;So, from this section, we were able to understand the basics of conditional statements and loops in golang. We covered the things which are more important for understanding the core of the language than some specific things. There are certain parts that need to be explored further, like iterating over arrays and slices, which we&#39;ll cover after we have understood arrays and slices. Hopefully, you have understood the basics of the conditional statements and loops in golang. Thank you for reading, and if you have any questions or feedback, please let me know in the comments or on social handles. Until next time, Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>Golang: Variables and Types</title>
      <link>https://www.meetgor.com/posts/golang-variables</link>
      <description>Introduction In the third part of the series, we will be covering the fundamentals for learning any programming language i.e. variables and data types. We will</description>
      <pubDate>Sun, 03 Apr 2022 00:00:00 UTC</pubDate>
<content>## Introduction&#xA;&#xA;In the third part of the series, we will be covering the fundamentals for learning any programming language i.e. variables and data types. We will cover everything from data types to variable declaration. We won&#39;t be seeing each and every detail related to the data types as some of them require a knowledge of loops and other topics, so that can be left for a different part.&#xA;&#xA;## Types in golang&#xA;&#xA;In Golang there are 3 major types: Numeric, Bool and String. Further, we also have specific types within these categories like int, float, rune, byte, etc. We will first see how to declare a simple variable and then explore the data types in Golang. &#xA;&#xA;```go&#xA;var name string&#xA;```&#xA;&#xA;This is a variable declaration in Golang. We have the keyword `var` similar to Javascript, but we have to specify the type of the variable if we are not immediately assigning/defining it a value. &#xA;&#xA;When assigning a value, we can write the datatype of the assigned value, but it is optional as the go compiler will infer the datatype from the assigned value. Further, you cannot change the type of that variable once it is initialized.&#xA;&#xA;```go&#xA;var name string = &#34;Gopher&#34;&#xA;&#xA;OR &#xA;&#xA;var name string&#xA;name = &#34;Gopher&#34;&#xA;&#xA;OR&#xA;&#xA;var name = &#34;Gopher&#34;&#xA;&#xA;```&#xA;&#xA;We also have `const` for assigning constant values. You cannot change the value of a constant; doing so will result in a compile-time error. 
An important property of `const` can be noted here: you cannot declare a `const` without initializing its value; the go compiler will throw a compilation error.&#xA;&#xA;```go&#xA;const name string = &#34;David&#34;&#xA;&#xA;OR&#xA;&#xA;const name = &#34;Smith&#34;&#xA;```&#xA;&#xA;Note that the declare-first, assign-later form that works for `var` does not compile for `const`.&#xA;&#xA;By default, the value for a string is an empty string `&#34;&#34;`, for integer and float it is `0`, and for bool it is `false`.&#xA;&#xA;Each of these is a valid declaration in golang. Let&#39;s now dive into the data types and follow up with variable declaration in detail later.&#xA;&#xA;| Numeric | String | Bool |&#xA;|---------|--------|------|&#xA;| int     | string | bool |&#xA;| float   |        |      |&#xA;| complex |        |      |&#xA;| rune    |        |      |&#xA;| byte    |        |      |&#xA;&#xA;### Numeric&#xA;&#xA;Let&#39;s first explore the `numeric` data types in golang. As you might have guessed, we have `int` and `float` as distinct categories, but we also have fine grained storage types for both of them. &#xA;&#xA;#### Integer&#xA;&#xA;In integers as well we have two categories, `signed` and `unsigned`; we can store only non-negative integers in `unsigned` integers, giving us an extra bit to play with. &#xA;&#xA;For Integers, we have specific data storage types depending on the bits they can store, like `int8` for storing 8 bit integers, `int16` for storing 16 bit integer values, and `int32` and `int64` for the given number of bits in the integer. Similarly, we have these storage based types for unsigned integers: `uint8`, `uint16`, `uint32` and `uint64`. 
We can store double the amount of positive integers in an unsigned integer (`uint`) compared to a signed integer (`int`); this is because the most significant bit is not used as a sign bit, since all values in the variable are non-negative and hence no sign bit is required. &#xA;&#xA;```go&#xA;var likes int = 140&#xA;fmt.Println(likes)&#xA;```&#xA;&#xA;```&#xA;$ go run int.go&#xA;140&#xA;```&#xA;&#xA;```go&#xA;var age int8 = 140&#xA;fmt.Println(&#34;Age = &#34;, age) &#xA;```&#xA;&#xA;```&#xA;$ go run int.go&#xA;# command-line-arguments&#xA;.\int.go:6:9: constant 140 overflows int8&#xA;```&#xA;&#xA;This gives us an error as `140` is above the limit for `int8`. So, unless you have specific requirements such as storage limitations, you should be using `int` as the default data type for storing integers.&#xA;&#xA;So, we need to define variables as per the limits to which we are going to use them. If you just specify `int`, the size is selected based on your platform: on a `32bit` system it is `int32` and on `64bit` systems it is an `int64` integer. If you define a variable with, let&#39;s say, `16` bit storage and a value exceeds the limit of `16` bit storage, it would give an overflow error. 
&#xA;&#xA;Below are the limits for all the integer types in Golang: &#xA;&#xA;```&#xA;uint8 -&gt;  0  to  255&#xA;uint16 -&gt;  0  to  65535&#xA;uint32 -&gt;  0  to  4294967295&#xA;uint64 -&gt;  0  to  18446744073709551615&#xA;&#xA;int8 -&gt;  -128  to  127&#xA;int16 -&gt;  -32768  to  32767&#xA;int32 -&gt;  -2147483648  to  2147483647&#xA;int64 -&gt;  -9223372036854775808  to  9223372036854775807&#xA;```&#xA;&#xA;If you want to verify the boundary values of these data types, you can code a program in `go` as below: &#xA;&#xA;- To find the maximum value of a uint, we can flip all the bits in `0` to get all the set bits in the integer, thus we use the `^` operator.&#xA;- To find the maximum value for signed integers, we can right shift by one bit so as to unset the sign bit.&#xA;- The minimum value for a uint is the default value `0`.&#xA;- The minimum value for an int can be calculated by subtracting one from the negative of the max limit.&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;    &#34;fmt&#34;&#xA;)&#xA;&#xA;func main() {&#xA;    var min_uint = 0&#xA;    var max_uint8 uint8 = ^uint8(0)&#xA;    var max_uint16 uint16 = ^uint16(0)&#xA;    var max_uint32 uint32 = ^uint32(0)&#xA;    var max_uint64 uint64 = ^uint64(0)&#xA;&#xA;    var max_int8 int8 = int8(max_uint8&gt;&gt;1)&#xA;    var min_int8 int8 = -max_int8 - 1&#xA;    var max_int16 int16 = int16(max_uint16&gt;&gt;1)&#xA;    var min_int16 int16 = -max_int16 - 1&#xA;    var max_int32 int32 = int32(max_uint32&gt;&gt;1)&#xA;    var min_int32 int32 = -max_int32 - 1&#xA;    var max_int64 int64 = int64(max_uint64&gt;&gt;1)&#xA;    var min_int64 int64 = -max_int64 - 1&#xA;&#xA;    fmt.Println(&#34;uint8 -&gt; &#34;, min_uint, &#34; to &#34;, max_uint8)&#xA;    fmt.Println(&#34;uint16 -&gt; &#34;, min_uint, &#34; to &#34;, max_uint16)&#xA;    fmt.Println(&#34;uint32 -&gt; &#34;, min_uint, &#34; to &#34;, max_uint32)&#xA;    fmt.Println(&#34;uint64 -&gt; &#34;, min_uint, &#34; to &#34;, max_uint64)&#xA;    
fmt.Println(&#34;&#34;)&#xA;    fmt.Println(&#34;int8 -&gt; &#34;, min_int8, &#34; to &#34;, max_int8)&#xA;    fmt.Println(&#34;int16 -&gt; &#34;, min_int16, &#34; to &#34;, max_int16)&#xA;    fmt.Println(&#34;int32 -&gt; &#34;, min_int32, &#34; to &#34;, max_int32)&#xA;    fmt.Println(&#34;int64 -&gt; &#34;, min_int64, &#34; to &#34;, max_int64)&#xA;}&#xA;```&#xA;&#xA;This was the basic overview of Integers in Golang.&#xA;&#xA;Though rune and byte are Integer aliases, we will explore them in the String section as they make a lot of sense over there.&#xA;&#xA;#### Float&#xA;&#xA;Similar to integers, we also have floats in the numeric category. A float is a numeric data type that can allow numbers with decimal values. A simple float can be of either `float32` or `float64`. The two types differ in the precision of the decimal values. Obviously, the `float64` type is more precise than its counterpart and is also the default type assigned if not provided.&#xA;&#xA;```go&#xA;const percent = 30.5&#xA;fmt.Println(percent+50)&#xA;```&#xA;&#xA;You can optionally pick a bit less precision than usual using the `float32` keyword as follows:&#xA;&#xA;```go&#xA;const percent float32 = 46.34&#xA;fmt.Println(percent - 3.555)&#xA;```&#xA;&#xA;The value ranges of the float types in golang are as follows:&#xA;&#xA;```&#xA;float32&#x9;  --&gt;   -3.4e+38 to 3.4e+38.&#xA;float64&#x9;  --&gt;   -1.7e+308 to +1.7e+308.&#xA;```&#xA;&#xA;For quite logical reasons, the precision is almost double in `float64` compared to `float32`. If we try to add (or apply any operation to) a `float64` number and a `float32`, we get an error, as golang does not allow operations between two different types without an explicit conversion. &#xA;&#xA;#### Complex Numbers&#xA;&#xA;We also have complex numbers in golang. These are numbers which have a real and an imaginary part. 
We initialize a complex variable using the `complex` function, which takes two parameters: the `real` part and the `imaginary` part. Both parts are stored as floats in the complex data type.&#xA;&#xA;So, since golang has `float32` and `float64` data types, we have `complex64` and `complex128` as complex types. Storing two `float64` parts gives the `complex128` type, and `complex64` has both parts as `float32`. Here as well, you cannot store the two parts (real and imaginary) as different float types, i.e. you need to have both real and imaginary as either `float32` or `float64`.&#xA;&#xA;```go&#xA;var percent = 30.738&#xA;var f = 4.545&#xA;var comp1 = complex(f, percent)&#xA;var comp2 = complex(percent, f)&#xA;fmt.Println(comp1 - comp2)&#xA;```&#xA;&#xA;```&#xA;(-26.192999999999998+26.192999999999998i)&#xA;```&#xA;&#xA;Golang automatically prints the imaginary part with the `i` suffix for better readability. &#xA;&#xA;### Strings &#xA;&#xA;We can now move onto the `string` data type in golang. It has several related types like `byte`, `rune`, and `string`. In golang, `byte` and `rune` store individual characters whereas `string` can hold multiple characters. &#xA;&#xA;#### Byte&#xA;&#xA;A byte in golang is an unsigned 8 bit integer, which means it can hold numeric data from 0 to 255. So how does this display characters if it stores integers? Because each number it stores is mapped to the ASCII character set, which is used to represent characters. &#xA;&#xA;A byte literal is written in single quotes `&#39;&#39;`; if we use double quotes `&#34;&#34;`, the variable is considered a string if we aren&#39;t specifying the data type.&#xA;&#xA;```go&#xA;const c byte = &#39;t&#39;&#xA;fmt.Println(c)&#xA;```&#xA;&#xA;```&#xA;$ go run byte.go&#xA;116&#xA;```&#xA;&#xA;This gives the output as a number between 0 and 255 depending on the character which you have stored. 
To print the actual character you need to format it as a character with the `%c` verb, like:&#xA;&#xA;```go&#xA;const c byte = &#39;t&#39;&#xA;fmt.Printf(&#34;Character = %c \nInteger value = %d\n&#34;, c, c)&#xA;```&#xA;&#xA;```&#xA;$ go run byte.go&#xA;Character = t&#xA;Integer value = 116&#xA;```&#xA;&#xA;We can get the character equivalent of the byte representation number using the [Printf](https://cs.opensource.google/go/go/+/go1.18:src/fmt/print.go;l=212) function and the `%c` placeholder for a character. The `\n` is used to end the line just for displaying proper output.&#xA;&#xA;We can even store numbers between `0` and `255` in a byte, as it is internally an `uint8`.&#xA;&#xA;#### Rune&#xA;&#xA;A rune is an extended type of byte: it stores `int32` numbers and hence it can represent `Unicode` characters. This is the default type if you do not specify `byte` and use single quotes to assign a character. Using rune, we can assign it a unicode character to display some rich characters and literals like emoji or expressions.&#xA;&#xA;```go&#xA;var smiley_emoji = &#39;☺&#39;&#xA;fmt.Printf(&#34;Smiley Emoji --&gt; %c&#34;, smiley_emoji)&#xA;```&#xA;&#xA;![GO Rune Smiley Emoji](https://res.cloudinary.com/techstructive-blog/image/upload/v1648962460/blog-media/obw9ihlxsvhytbe8ito3.png)&#xA;&#xA;So, rune is a pretty amazing type to play with characters in golang, and it is the default type assigned instead of byte if one is not provided at assignment. &#xA;&#xA;#### String&#xA;&#xA;Strings are basically a slice (list) of bytes. Each character in a string is a byte. By default the string will be empty if you don&#39;t initialize it with a value. 
&#xA;&#xA;```go&#xA;var language string&#xA;language = &#34;C++&#34;&#xA;&#xA;OR&#xA;&#xA;var language string = &#34;Python&#34;&#xA;&#xA;OR&#xA;&#xA;const language = &#34;Javascript&#34;&#xA;```&#xA;&#xA;We can even access a particular character in the string using its index.&#xA;&#xA;```go&#xA;const code = &#34;12AB34CD&#34;&#xA;fmt.Println(code[6])&#xA;```&#xA;&#xA;```&#xA;$ go run string.go&#xA;67&#xA;```&#xA;&#xA;This gives us an integer as we are accessing the byte from the string using its index. Thus, we can use `%c` in the `Printf` function to format and print the equivalent character of the byte.&#xA;&#xA;```go&#xA;const code = &#34;12AB34CD&#34;&#xA;fmt.Printf(&#34;Character at index 2 in string = %c\n&#34;, code[2])&#xA;```&#xA;&#xA;```&#xA;$ go run string.go&#xA;Character at index 2 in string = A&#xA;```&#xA;&#xA;We can also declare strings using backticks/backquotes or whatever you call it (```); assigning a string with these allows us to write a multi line string.   &#xA;&#xA;```go&#xA;var statement = `This is the first line&#xA;The next line&#xA;The last line`&#xA;&#xA;fmt.Println(statement)&#xA;```&#xA;&#xA;```&#xA;$ go run str-backticks.go&#xA;This is the first line&#xA;The next line&#xA;The last line&#xA;```&#xA;&#xA;Further, in the loop article we will see how to loop/iterate over a string.&#xA;&#xA;### Boolean &#xA;&#xA;This type is used to store either `true` or `false` in golang. The default value of a boolean variable is `false`.&#xA;&#xA;```go&#xA;var power bool&#xA;fmt.Println(power)&#xA;```&#xA;&#xA;```&#xA;$ go run bool.go&#xA;false&#xA;```&#xA;&#xA;We can assign the variable as either `true` or `false`.  
&#xA;&#xA;```go&#xA;const result = true&#xA;fmt.Printf(&#34;The statement is %t&#34;, result)&#xA;```&#xA;&#xA;```&#xA;$ go run bool.go&#xA;The statement is true&#xA;```&#xA;&#xA;So, using `%t` we can print the value of a boolean in the `Printf` function.&#xA;&#xA;## Creating Variables &#xA;&#xA;Now that we are familiar with the data types in golang, we can more expressively create, declare, and initialize variables in golang.&#xA;&#xA;There are 3-4 primary ways to define a variable, most of which we have already seen.&#xA;&#xA;### Declaring a Variable &#xA;&#xA;We can declare a variable without assigning it any value, but for that we need to provide the data type, which can be done as follows:&#xA;&#xA;```go&#xA;var experience int&#xA;&#xA;experience = 2&#xA;```&#xA;&#xA;We can even use `const` for a constant value in the given scope, though a constant must be initialized at the point of declaration. &#xA;&#xA;Here, we can even declare multiple variables by separating each variable/constant with a comma `,`, which can be done as follows:&#xA;&#xA;```go&#xA;var a, b, c int&#xA;&#xA;OR&#xA;&#xA;const i, j, k = true, false, true&#xA;```&#xA;&#xA;### Defining and Initializing at the same time&#xA;&#xA;We can initialize a variable/constant in golang by explicitly giving it a value. We can do that by using `var` for a variable value or `const` for a constant value. We can optionally provide the data type at this moment as golang will automatically detect the type and assign the memory according to the value given.&#xA;&#xA;```go&#xA;var place string = &#34;home&#34;&#xA;```&#xA;&#xA;Here, there is no compulsion to provide the `datatype` as the compiler will be able to infer it from the assigned value. Though if you want a non-default type, you can specify the datatype. &#xA;&#xA;### Declaring Multiple Variables&#xA;&#xA;We can assign multiple variables at once by separating them with a comma `,`. 
The variable names to the left and the values to the right need to be separated with commas on both sides.&#xA;&#xA;```go&#xA;var x, y, z = 100, &#39;#&#39;, &#34;daysofcode&#34;&#xA;&#xA;fmt.Printf(&#34; x = %d \n y = %c \n z = %s \n&#34;, x, y, z)&#xA;```&#xA;&#xA;```&#xA;$ go run multiplvar.go&#xA; x = 100&#xA; y = #&#xA; z = daysofcode&#xA;```&#xA;&#xA;Here we are declaring and assigning multiple variables: the `x` variable is assigned an integer value, `y` a `rune` (by default) and `z` a string. We are using the `Printf` function with placeholders for int `%d`, rune/byte `%c` and string as `%s`. The `\n` is for a new line.&#xA;&#xA;If we want to assign the variables with a particular data type, we can use the var keyword with a list of values.&#xA;&#xA;```go&#xA;var (&#xA;    x int8 = 100&#xA;    y byte = &#39;#&#39;&#xA;    z =  &#34;daysofcode&#34;&#xA;)&#xA;&#xA;fmt.Printf(&#34; x = %T \n y = %T \n z = %T \n&#34;, x, y, z)&#xA;```&#xA;&#xA;```&#xA;$ go run multiplvar.go&#xA; x = int8&#xA; y = uint8&#xA; z = string&#xA;```&#xA;&#xA;This is not only limited to `var`; we can also use `const` to declare multiple constants with a type constraint. Also, note we are using the `%T` placeholder for getting the type of the data stored in the variable.&#xA;&#xA;Also, we can define (declare and initialize) multiple variables with the same data type, comma separated, as follows:&#xA;&#xA;```go&#xA;var pi, e, G float32 = 3.141, 2.718, 6.67e-11   &#xA;var start, end byte = 65, 90&#xA;fmt.Println(pi, e, G)&#xA;fmt.Printf(&#34;%c %c\n&#34;, start, end)&#xA;```&#xA;&#xA;```&#xA;$ go run multp.go&#xA;3.141 2.718 6.67e-11&#xA;A Z&#xA;```&#xA;&#xA;### Assigning Variable using Walrus Operator (Shorthand Declaration)&#xA;&#xA;We can skip using `var` or the `datatype` by using the `:=` walrus operator. 
This type of assignment using the walrus operator is only allowed inside a function body; in the global scope this type of assignment is not allowed.&#xA;&#xA;```go&#xA;place := &#34;school&#34;&#xA;```&#xA;&#xA;This is such a simple shorthand for assigning variables, though only in a function body.&#xA;&#xA;Also, multiple variable declaration is possible with the walrus operator.&#xA;&#xA;```go&#xA;x, y, z := &#34;foo&#34;, 32, true&#xA;fmt.Println(x, y, z)&#xA;fmt.Printf(&#34;%T %T %T&#34;, x, y, z)&#xA;```&#xA;&#xA;```shell&#xA;$ go run walrus.go&#xA;foo 32 true&#xA;string int bool&#xA;```&#xA;&#xA;Links to all the code examples are available in the [GitHub](https://github.com/Mr-Destructive/100-days-of-golang) repository.&#xA;&#xA;## Conclusion&#xA;&#xA;So, from this part of the series, we were able to understand variables and the various data types in Golang. Though we didn&#39;t go into too much detail, we can now find ourselves a bit more comfortable understanding basic go scripts. In the next section, we will be looking into conditional statements and loops. This will give us a good grasp on iterating over a string and even the basics of arrays; we will explore Arrays and Slices (remember strings?) after that. &#xA;&#xA;So, if you have any questions, suggestions, or feedback, please let me know in the comments or on the social handles. See you next time, Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>Golang: Installation and Hello-World</title>
      <link>https://www.meetgor.com/posts/hello-golang</link>
      <description>Introduction Moving on to the second day, we will be installing and setting up Go lang on our systems. The installation and setup are quite simple and not much</description>
      <pubDate>Sat, 02 Apr 2022 00:00:00 UTC</pubDate>
<content>## Introduction&#xA;&#xA;Moving on to the second day, we will be installing and setting up Go lang on our systems. The installation and setup are quite simple and not much demonstration is required, so further in the article, I will also make a hello-world program in GO. We will explore the basic program in GO and how to compile, run and build a GO program in this section. &#xA;&#xA;## Installing Go&#xA;&#xA;Installing Go is pretty straightforward. You have to install the binaries from the official website as per your operating system. &#xA;&#xA;Head on to [go.dev](https://go.dev/), which is the official website for the Go language. Click on the [Download](https://go.dev/doc/install) tab and there should be all the required information. Run the installer for your specific operating system. &#xA;&#xA;To avoid running into any errors, keep the installer configuration at its defaults and complete the installation process. &#xA;&#xA;## Setting up Environment variables&#xA;&#xA;To make sure Go lang is perfectly installed on your system, type the following command in CMD/Terminal/Powershell:&#xA;&#xA;```&#xA;go version&#xA;```&#xA;&#xA;If you get a specific version of golang along with the platform and architecture of your system, you have successfully installed Go lang on your system. &#xA;&#xA;```&#xA;$ go version&#xA;go version go1.17.8 windows/amd64&#xA;```&#xA;&#xA;If you get an output such as command not found or anything else, this is an indication that your Go installation was not successful. You need to configure your Environment variables properly or re-run the installer. &#xA;&#xA;```&#xA;$ go version&#xA;bash: go: command not found&#xA;```&#xA;&#xA;&#xA;## Hello Gophers&#xA;&#xA;Once Go lang has been successfully installed on your system, we can start writing our first program. Yes, a `Hello World` program. It is not as simple as `print(&#34;hello world&#34;)` but a lot better than 10-15 lines of Java or C/C++. 
&#xA;&#xA;```go&#xA;package main&#xA;&#xA;import (&#xA;&#x9;&#34;fmt&#34;&#xA;)&#xA;&#xA;func main() {&#xA;&#x9;fmt.Println(&#34;Hello, Gophers!&#34;)&#xA;}&#xA;```&#xA;&#xA;So, this is the so-called `hello-world` program in Go; we will see each of its terms in detail next. But before that, let&#39;s get an idea of the style of the code. It will definitely feel similar if you are coming from C/C++ or Java, with the package declaration and the main function. It will even feel like Python or Javascript when we explore other aspects. It has a really unique style of programming but feels familiar to programmers coming from almost any programming language, though this might not be true for all programming languages. &#xA;&#xA;### Packages &#xA;&#xA;A package in Go lang is a way to bundle up some useful functions or any other constructs. Using packages, we can reuse components of one app in another. Packages in golang also help in optimizing compilation time by letting the compiler compile only the required packages. &#xA;&#xA;Here, we have the `main` package, which provides the entry point for the entire project. This is mandatory for creating executable programs, as we need to have a starting point. The file name can be anything, but for executing the code, you need to have a `main` package where your `main` function resides; the file that declares the `main` package is the entry point for the program.&#xA;&#xA;### Main Function&#xA;&#xA;We have the `main` function defined inside the `main` package. It acts as the starting point of the program: the `main` package will look for a `main` function inside the package. The main function doesn&#39;t take any parameters, nor does it return any value. When the function&#39;s scope ends, the program exits. 
&#xA; &#xA;The main function has significance only in the main package; if you define a main function in a package other than `main`, it works as a normal function. &#xA;&#xA;### Import Statements&#xA;&#xA;We have an `import` keyword for importing packages from the standard library or other external packages from the internet. There are a lot of ways to import packages in golang, like single, nested, multiple, aliased, dot, and blank imports. We will see these different import styles in a dedicated section. Right now, we are directly importing a single package. The package is called the [fmt](https://pkg.go.dev/fmt) package. It handles formatted input and output in the console. It is a standard package for performing some basic operations in golang.  &#xA;&#xA;We can import it as a single direct import like:&#xA;&#xA;```go&#xA;import &#34;fmt&#34;&#xA;```&#xA;&#xA;OR &#xA;&#xA;Make multiple imports like:&#xA;&#xA;```go&#xA;import (&#xA;    &#34;fmt&#34;&#xA;)&#xA;```&#xA;&#xA;We can add multiple packages, one on each line; this way we do not have to write the keyword `import` again and again. It depends on what you want to do in the program. &#xA;&#xA;### Println function&#xA;&#xA;We can access the functions from the imported packages, in our case the functions from the `fmt` package. We have access to functions like `Println`, which prints a string followed by a newline. Syntactically, we access the function and call it by using the `dot` operator like:&#xA;&#xA;```go&#xA;fmt.Println(&#34;Hi there!&#34;)&#xA;```&#xA;&#xA;The [Println](https://pkg.go.dev/fmt#Println) function takes in a string parameter and multiple optional parameters that can be strings or variables of any type. We will see how to declare variables and types in the next section. &#xA;&#xA;Here, the capital `P` in `Println` has significance, as it is what distinguishes public (exported) identifiers from private (unexported) ones. 
If a function begins with an uppercase letter, it is public. In technical terms, if the first letter of an identifier is uppercase, it is exported and can be accessed from other packages. &#xA;&#xA;&#xA;## Running Scripts&#xA;&#xA;Let&#39;s run the code and add the GO programming language to our resume. You can run a go source file, assuming it has a main package with the main function, using the following command:&#xA;&#xA;```&#xA;go run &lt;filename.go&gt;&#xA;```&#xA;&#xA;![GO run command](https://res.cloudinary.com/techstructive-blog/image/upload/v1648833567/blog-media/o7i9spph2bfgemqonq8p.png)&#xA;&#xA;This will simply display the string which we have passed to the `Println` function. If you don&#39;t have a main package, this command won&#39;t run and will return an error:&#xA;&#xA;```&#xA;package command-line-arguments is not the main package&#xA;```&#xA;&#xA;By executing the run command, we are creating an executable in the system&#39;s temp folder.&#xA;&#xA;For Windows, it&#39;s usually at:&#xA;```&#xA;C:\Users\&lt;username&gt;\AppData\Local\Temp&#xA;```&#xA;&#xA;You can get the location of the temp directory using CMD/PowerShell:&#xA;&#xA;```&#xA;CMD:&#xA;echo %TEMP%&#xA;&#xA;PowerShell:&#xA;$env:Temp&#xA;```&#xA;&#xA;For Linux&#xA;```&#xA;/tmp&#xA;```&#xA;&#xA;You can get the location of the temp folder using the Terminal in Linux/macOS:&#xA;&#xA;```&#xA;echo $TMPDIR&#xA;```&#xA;&#xA;It doesn&#39;t create any executable in the current project or folder, it only runs the executable that it has built in the temp folder. The run command in simple terms **compiles and executes the main package**. As the file provided to the run command needs to have the main package with the main function, it will thus compile the source code in the provided file. 
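&#xA;Before moving on, the exported-name rule from the previous section can be seen in a short, self-contained sketch (the `greet` helper is a made-up name for illustration). A lower-case function is unexported, yet freely callable inside its own package, and `Println` happily takes more than one argument:

```go
package main

import "fmt"

// greet is unexported (lower-case g), so it is visible only
// within this package; other packages could not import it.
func greet(name string) string {
	return "Hello, " + name + "!"
}

func main() {
	// Println accepts any number of arguments, separates them
	// with spaces, and appends a newline.
	fmt.Println(greet("Gophers"), "Welcome to Go.")
}
```

You can run it with `go run` exactly like the hello-world program above.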
&#xA;&#xA;To get the location of the executable file that was generated by the `run` command, you can print the temporary work directory using the following command:&#xA;&#xA;```&#xA;go run --work &lt;filename&gt;.go&#xA;```&#xA;&#xA;![GO Run TMP file](https://res.cloudinary.com/techstructive-blog/image/upload/v1648833643/blog-media/maqfd73xmiivrckc2acn.png)&#xA;&#xA;This will print the path of the temporary directory where it has compiled the executable. &#xA;&#xA;For further reading on the `run` command in Go, you can refer to the [documentation](https://pkg.go.dev/cmd/go#hdr-Compile_and_run_Go_program).&#xA;&#xA;## Creating Executables&#xA;&#xA;We can go a step further by creating binaries/executables from our source file using the `build` command:&#xA;&#xA;```&#xA;go build &lt;filename&gt;.go&#xA;```&#xA;&#xA;If you run this, you may get an error, as building a package requires a few things. The most important is the mod file.&#xA;&#xA;```&#xA;go: cannot find main module, but found .git/config in D:\meet\Code\go\100-days-of-golang&#xA;    to create a module there, run:&#xA;    cd .. &amp;&amp; go mod init&#xA;```&#xA;&#xA;We need to create a mod file first before we build our script. &#xA;A mod file in golang is the file that specifies the go version along with the packages and dependencies. It is like Python&#39;s `requirements.txt`, but a bit different. &#xA;&#xA;We use the following command with a module name; here, the name of the file that contains the main package is used as the module name. We can even add other packages to the mod file (we will see this in detail in the future). &#xA;&#xA;```&#xA;go mod init &lt;filename&gt;.go&#xA;```&#xA;&#xA;![GO Mod Init](https://res.cloudinary.com/techstructive-blog/image/upload/v1648833683/blog-media/pdvkdpnanl3e0yzoenqb.png)&#xA;&#xA;This will generate a `go.mod` file, which contains the list of dependencies and packages in the project. 
&#xA;If you look at the mod file, it looks as follows:&#xA;&#xA;```go&#xA;module script.go&#xA;&#xA;go 1.17&#xA;```&#xA;&#xA;Currently, this is pretty simple and has very little detail, but as your project increases in complexity, this file gets populated with the modules and packages imported and used in the project. &#xA;&#xA;So, after creating the mod file, we can build the script which we tried earlier.&#xA;&#xA;```&#xA;go build &lt;filename&gt;.go&#xA;```&#xA;&#xA;![GO Build Command](https://res.cloudinary.com/techstructive-blog/image/upload/v1648833745/blog-media/i0hnwsxxl0gglhh3rdoe.png)&#xA;&#xA;This command generates an executable right in the current folder, named after the source file, i.e. `filename.exe` on Windows. &#xA;&#xA;If you have a `go.mod` file in your project, just running the command will generate the executable file:&#xA;&#xA;```&#xA;go build&#xA;```&#xA;&#xA;![GO Build Command - Directory level](https://res.cloudinary.com/techstructive-blog/image/upload/v1648833832/blog-media/sdw9zrlff3odtnhallyq.png)&#xA;&#xA;NOTE: For the above command to work, you need to be in the directory which has the mod file for your project. It basically bundles the listed packages and creates the executable named after the module. 
Thus it generates a different file name, `filename.go.exe`, since the module here was named `filename.go`.&#xA;&#xA;We can also provide a custom name for the output executable; this can be done with the following command:&#xA;&#xA;```&#xA;go build -o &lt;filename&gt;&#xA;```&#xA;&#xA;![GO Build Output file](https://res.cloudinary.com/techstructive-blog/image/upload/v1648833932/blog-media/t5dfhx0va7reyjjygfnq.png)&#xA;&#xA;For further reading on the `go build` command, head over to this [documentation](https://pkg.go.dev/cmd/go#hdr-Compile_packages_and_dependencies) page.&#xA;&#xA;Links to all of the code and resources are in this [GitHub](https://github.com/Mr-Destructive/100-days-of-golang) repository.&#xA;&#xA;## Conclusion&#xA;&#xA;So, in this second post, we were able to set up the Go language on our system and write our first `hello-world` program. This was a bit more than a setup and installation guide, since the installation procedure is straightforward enough that demonstrating it alone would be quite boring. Hopefully, you have followed along so far and were able to pick things up in the Go landscape. Thank you for reading, and if you have any questions, suggestions, or feedback, let me know in the comments or the provided social handles. See you tomorrow, until then Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>Golang: What and Why?</title>
      <link>https://www.meetgor.com/posts/golang-intro</link>
      <description>Introduction This is the first entry in my 100 days of Go lang challenge. I will share the things I will learn each day for 100 days related to Golang or its co</description>
      <pubDate>Fri, 01 Apr 2022 00:00:00 UTC</pubDate>
<content>## Introduction&#xA;&#xA;This is the first entry in my 100 days of Go lang challenge. I will share the things I learn each day for 100 days related to Golang or its configuration in my environment. In this post, we will understand what golang is and why you should learn it. It&#39;s not an in-depth guide but a high-level overview for understanding the context of learning Go lang. &#xA;&#xA;### A bit of Context&#xA;&#xA;In April 2021, I started learning Django and to date, I have more than 10 projects (not all deployed) created and publicly open on [GitHub](https://github.com/Mr-Destructive?tab=repositories&amp;q=django&amp;type=&amp;language=python&amp;sort=). It has been a great experience so far, and I am still in the middle of an internship with a project to be created in Django. The learning process after I started blogging in June 2021 has been a different fantasy; I have been enjoying the process of writing down the things I learn. So, this Spring/Summer I am creating a challenge of 100 days for learning Go lang and documenting the process. Hopefully, this will be a useful place for a beginner to learn and, more than that, a place for me to reflect back on in the future.&#xA;&#xA;Leaving the rant aside, let&#39;s start with the best question to answer: What is Golang?&#xA;&#xA;## What is Golang&#xA;&#xA;[Golang](https://go.dev/) (officially called Go) is a programming language developed by Google. It is statically typed like C/C++, Java, Kotlin, etc., and its syntax is similar to dynamically typed languages like Python, JS, Ruby, Lua, etc. So, golang provides the best of both worlds: the swiftness of writing code along with control and memory safety. Go is regarded as the language of the cloud, i.e. it is heavily used in cloud infrastructure; it is involved in creating microservices, APIs, containerization, orchestration, and a lot of other technologies. It can be a super useful programming language for DevOps folks and people into Cloud computing. 
&#xA;&#xA;Though Go is designed for Cloud Infrastructure, that doesn&#39;t mean you can&#39;t use it elsewhere. There are a lot of fields in which golang fits nicely. We&#39;ll discuss why you would want to learn golang and state the domains where it can be used.&#xA;&#xA;&gt; Go programmers are called Gophers&#xA;&#xA;## Why learn Golang?&#xA;&#xA;There are a lot of reasons for learning Go as your next programming language. Not only is it quite famous, but it is also significantly faster than programming languages like Python and JavaScript. It is a simple language to pick up, designed for modern-world technologies. There is an added advantage to learning Go lang as it is a fairly new programming language (not as old as Python): it was open-sourced in 2009, reached version 1.0 in 2012, and got a great boost in the last five years as cloud companies are on the rise. &#xA;&#xA;### 1. GO lang is simple&#xA;&#xA;GO lang is a simple programming language; it is really easy to learn. It is quite similar to Python in terms of its syntax, though it also inherits characteristics from the Java/C++ paradigm. It has a single entry point into the program via the `main` function, similar to the Java/C++ languages, and is compiled into binary/executable code before running. Its standard library provides a lot of functionality out of the box. It provides some quite handy tools and has support for creating web applications in the core language itself.&#xA;&#xA;### 2. GO lang is fast (Execution Speed)&#xA;&#xA;Though it might not be the fastest language of all time, it beats Python and JavaScript by a significant magnitude in the context of backend development. Go lang is a compiled language and hence has no overhead of converting into byte code or running an interpreter. Go lang also refuses to compile code with unused variables, imports, and packages; this keeps the code clean and helps optimize the memory footprint of the program. 
&#xA;&#xA;![Speed Comparison of programming languages](https://camo.githubusercontent.com/37d2b63be8996cd03b67e80d3dee322c698e9753d0602c7da8f8ef399fbed57a/68747470733a2f2f7261772e6769746875622e636f6d2f6e696b6c61732d686565722f73706565642d636f6d70617269736f6e2f6d61737465722f2e6769746875622f706c6f745f76312e342e706e67)&#xA;&#xA;We can see that Go lang is almost equivalent to the C and C++ programming languages, though this might not be the best comparison of speed, as it depends on quite a lot of factors and the context of the programs. Still, Go lang has a significant edge over Java and C++ because of the writing complexity involved in those programming languages.&#xA;&#xA;&#xA;### 3. GO lang is fast (Writing Speed)&#xA;&#xA;GO is fast, yes, I get that; but it is also fast in terms of writing speed. If you sit down to write the same code in Java/C++ or Go under time constraints, you will walk out with Go as the winner. It is much easier to write code in Go compared to C/C++, Java, or even JavaScript. Golang prefers clear code over fast code; though it might not be as fast as Rust or C++, it is definitely much easier and quicker to write than the other counterparts. Go is designed for writing software that can scale with large codebases and teams. So it&#39;s no surprise it is used to create cloud computing technologies like Docker, Kubernetes, and tons of other cloud-native tools. &#xA;&#xA;### 4. GO lang is made for the modern world&#xA;&#xA;&gt; Go lang is the modern C&#xA;&#xA;Go lang is a modern language; it is not as ancient as C/C++ or Java. It is designed for Cloud-native applications which are scalable and optimized for performance, meeting the modern demand for bleeding-edge technologies. It was designed by Ken Thompson, one of the creators of the UNIX operating system, along with the other Go authors Robert Griesemer and Rob Pike. 
Go lang has been designed keeping the ideas of C in mind but also considering the context of the modern world. It makes tasks like concurrency a lot simpler, and it avoids dangerous manual control of memory by not allowing pointer arithmetic.  &#xA;&#xA;## Where you can use Golang&#xA;&#xA;Go lang finds its use cases in a lot of domains including Cloud computing, Web applications, CLI and DevOps tools, and so on. As GO is a general-purpose programming language, you can use it anywhere in the relevant fields of technology. &#xA;&#xA;### Cloud-native application&#xA;&#xA;As Golang has the capability to execute low-level system calls efficiently, cloud-native tools can be created in golang to improve efficiency and memory usage. As Go uses fewer resources like CPU and memory, it can optimally be used for applications that require almost no delay in the response. It also handles concurrency very well using goroutines, which are way simpler to implement. All these factors contribute very well to constructing a modern cloud-native application.&#xA;&#xA;Golang is used in Cloud-native technologies like:&#xA;&#xA;- [Docker](https://github.com/docker) - containerisation technology&#xA;- [Kubernetes](https://github.com/kubernetes/kubernetes) - Container Orchestration&#xA;- [Google Cloud](https://github.com/google/go-cloud) - Cloud Hosting Platform&#xA;&#xA;This makes Golang widely adopted, with firm ground for the future as well.&#xA;&#xA;### Web applications&#xA;&#xA;As Golang is highly efficient and fast, it can be used to develop web applications. It can scale a web application very well as it is a statically typed ecosystem. It also has extensive support for integrating databases into web applications. Golang doesn&#39;t require any 3rd-party frameworks to create web apps; they can be developed using the standard library alone, optionally with external packages and modules. Performance is a core reason for creating web apps in Golang; it heavily outperforms JavaScript (Node) and Python. 
As golang code is compiled ahead of time into a single executable for the target operating system, performance improves to a great extent. So, golang can be an ideal choice for developing web applications.&#xA;&#xA;- [Soundcloud](https://developers.soundcloud.com/blog/go-at-soundcloud) - Music Streaming Webapp&#xA;- [Uber](https://www.slideshare.net/RobSkillington/go-at-uber) - Cab Booking/Ride Sharing App&#xA;- [Allegro](https://github.com/allegro/marathon-consul/#marathon-consul-) - E-commerce Webapp&#xA;&#xA;### DevOps tools&#xA;&#xA;A lot of DevOps tools like Prometheus, Grafana, Terraform, and Hugo are written natively in Go. Having appropriate features like simplicity, scalability, efficiency, and concurrency, Go lang fits the DevOps space really well. Golang is already used in cloud-native technologies as mentioned earlier, so adopting it in the DevOps space is a great choice as it creates an even and supportive ecosystem. Go offers something close to Java&#39;s `Write once, Run anywhere` paradigm without byte-code conversion: a single codebase can be cross-compiled into native binaries for multiple platforms. This is a feature any DevOps engineer would dream of; using a single codebase for multiple platforms can save a lot of time and improve productivity.  &#xA;&#xA;- [Prometheus](https://github.com/prometheus/prometheus) - Monitoring system and time series database&#xA;- [Grafana](https://github.com/grafana/grafana) - Platform for monitoring and observability&#xA;- [Terraform](https://github.com/hashicorp/terraform) - Building, Changing, and Versioning Infrastructure &#xA;- [Minikube](https://github.com/kubernetes/minikube) - Run kubernetes locally&#xA;- [Kit](https://github.com/go-kit/kit) - standard library for microservices&#xA;&#xA;## Who uses Golang?&#xA;&#xA;Golang is a widely used programming language; it is used by some of the big companies around the world. Of course Google uses it; apart from Google, it is used by companies like:&#xA;&#xA;1. Netflix&#xA;2. PayPal&#xA;3. 
Microsoft&#xA;4. Dropbox&#xA;5. Meta (Facebook)&#xA;6. Uber&#xA;7. Twitter&#xA;8. Twitch&#xA;&#xA;There are so many tools created using golang, like [gh-cli](https://github.com/cli/cli), [cockroach-db](https://github.com/cockroachdb/cockroach), [jwt](https://github.com/dgrijalva/jwt-go), etc. You can see a whole list of applications/tools/frameworks written in Go in this [awesome list](https://github.com/avelino/awesome-go).&#xA;&#xA;## Conclusion&#xA;&#xA;Go lang is an excellent choice for anyone into web development, cloud-native, DevOps, or CLI tools. Being a recently developed language, Go lang has grown faster than most other languages in the programming world. Having learned Python over the span of two years, I want to dabble in Go but not completely leave Python. Python has a great community and is widely adopted in almost all fields of development. Still, trying out new things is what keeps you in the hunt for opportunities and meeting new people. &#xA;&#xA;So starting from today, I will share something new I learn in Go lang every day for 100 days. At least that&#39;s the plan. The posts might be in the form of a long post or a short TIL on the blog. Check out the GitHub repository for the 100-day challenge [here](https://github.com/mr-destructive/100-days-of-golang). If you have any queries, suggestions, or feedback, please let me know in the comments or on the mentioned social handles. Thank you for reading. Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>Feedparser: Python package for reading RSS feeds</title>
      <link>https://www.meetgor.com/posts/python-feedparser</link>
      <description>Introduction is a simple but powerful python package that can be used to extract information about a specific webpage or a publication with its RSS feed(not onl</description>
      <pubDate>Sat, 26 Mar 2022 00:00:00 UTC</pubDate>
<content>## Introduction&#xA;&#xA;[Feedparser](https://pypi.org/project/feedparser/) is a simple but powerful python package that can be used to extract information about a specific webpage or a publication from its RSS feed (and not only RSS). By providing the RSS feed link, we can get structured information in the form of python lists and dictionaries. It can basically be used to read RSS feeds in a Pythonic way; it is really simple to use and it even normalizes different types of feeds.&#xA;&#xA;Today, we will be taking a look at the feedparser package in python and how to extract information from a given RSS feed.&#xA;&#xA;## What is feedparser&#xA;&#xA;Feedparser is a python package for parsing feeds of almost any type, such as RSS, Atom, RDF, etc. It is a package that allows us to parse or extract information using python semantics. For example, all the latest posts from a given blog can be accessed as a list in python; further, different attributes like links, images, titles, and descriptions can be accessed within a dictionary as key-value pairs. &#xA;&#xA;## Installing feedparser&#xA;&#xA;As feedparser is a python package, you can install it with pip very easily.&#xA;&#xA;```&#xA;pip install feedparser&#xA;```&#xA;&#xA;This will install feedparser in your respective python environment; it can be a virtual environment or the global environment. &#xA;&#xA;&#xA;## Using feedparser&#xA;&#xA;To test out feedparser, you can open up a python repl in the environment where you installed the Feedparser package.&#xA;&#xA;```&#xA;python&#xA;```&#xA;&#xA;First, import the package.&#xA;&#xA;```python&#xA;import feedparser&#xA;```&#xA;&#xA;Now, we can use the module in our application and access all of the functions and methods from the package.&#xA;&#xA;## Parse an RSS feed URL&#xA;&#xA;To parse an RSS feed link, we can simply use the `parse` function from the feedparser package. 
The [parse](https://feedparser.readthedocs.io/en/latest/introduction.html) function takes in a string that can be a URL or a file path. Generally, the URL is the more useful option. So, we can look up any RSS feed on the internet, like your blog&#39;s feed, publication feeds, and so on. &#xA;&#xA;```python&#xA;feedparser.parse(&#34;url_of_the_rss_feed&#34;)&#xA;```&#xA;&#xA;The parse function basically fetches the feed from the provided URL or file. It extracts the feed in a systematic way, storing each piece of information in a structured format. At the high level, it returns a dictionary with a few key-value pairs. Further, each key might have a list or nested dictionaries in it. The key identifiers are named in a uniform manner for any feed you parse with the function. There might be a few cases where there is additional information to be parsed; feedparser can even add more information and shape the structure accordingly.&#xA;&#xA;This will give you a dictionary in python that has more or less similar keys across feeds. The most common keys that can be used for extracting information are `entries` and `feed`. We can get all the keys associated with a parsed feed using the `keys` function.&#xA;&#xA;```python&#xA;feedparser.parse(&#34;url_of_the_rss_feed&#34;).keys()&#xA;```&#xA;&#xA;![Feedparser Keys](https://res.cloudinary.com/techstructive-blog/image/upload/v1648370871/blog-media/ph6bsxobyifqmusumirx.png)&#xA;&#xA;The keys function basically gets all the keys of the dictionary in python.&#xA;&#xA;```&#xA;&gt;&gt;&gt; feedparser.parse(&#34;https://dev.to/feed/&#34;).keys()&#xA;dict_keys([&#39;bozo&#39;, &#39;entries&#39;, &#39;feed&#39;, &#39;headers&#39;, &#39;etag&#39;, &#39;href&#39;, &#39;status&#39;, &#39;encoding&#39;, &#39;version&#39;, &#39;namespaces&#39;])&#xA;```   &#xA;&#xA;This will give out a list of all the keys in the feed which we parsed from the RSS feed previously. 
From this list of keys, we can extract the required information from the feed.&#xA;&#xA;Before we extract content from the feed, we can store the dictionary that we get from calling the parse function. We can assign it to a variable and keep the dictionary for later use.&#xA;&#xA;```python&#xA;feed = feedparser.parse(&#34;url_of_the_rss_feed&#34;)&#xA;```&#xA;&#xA;&#xA;## Extract the contents from the feed&#xA;&#xA;Now that we have the dictionary of the feed, we can easily access the values from the listed keys. We can get the list of all the posts/podcasts/entries, or any other form of content the feed is serving, from the `entries` key in the dictionary. &#xA;&#xA;To get more information and the most common keys in the returned dictionary, you can refer to the feedparser [reference list](https://feedparser.readthedocs.io/en/latest/reference.html)&#xA;&#xA;### Access Articles from Feed&#xA;&#xA;To access the articles from the feed, we can read them as a list of articles, using the `entries` key in the dictionary as follows:&#xA;&#xA;```python&#xA;feedparser.parse(&#34;url_of_the_rss_feed&#34;)[&#34;entries&#34;]&#xA;&#xA;OR&#xA;&#xA;feedparser.parse(&#34;url_of_the_rss_feed&#34;).entries&#xA;```&#xA;&#xA;If you have already stored the result of the parse function in a variable, you can use that to avoid re-fetching the feed.&#xA;&#xA;```python&#xA;feed = feedparser.parse(&#34;url_of_the_rss_feed&#34;)&#xA;&#xA;feed[&#39;entries&#39;]&#xA;&#xA;OR &#xA;&#xA;feed.entries&#xA;```&#xA;&#xA;### Get Number of Articles/Entries from Feed&#xA;&#xA;To get the number of entries in the list, we can simply use the `len` function in python.&#xA;&#xA;```python&#xA;len(feed.entries)&#xA;&#xA;OR &#xA;&#xA;len(feedparser.parse(&#34;url_of_the_rss_feed&#34;).entries)&#xA;```&#xA;&#xA;![Feedparser Number of Entries](https://res.cloudinary.com/techstructive-blog/image/upload/v1648371042/blog-media/didijxcvsgvl4scrnhje.png)&#xA;&#xA;This gives us the number of entries in the provided feed. 
This is basically the list that stores all the content from the publication/website. So, we can iterate over the list and find all the different attributes we can extract.&#xA;&#xA;### Get details of the entries from the feed&#xA;&#xA;To get detailed information about a particular article/entry in the feed, we can iterate over the `feed.entries` list and access what we require. &#xA;&#xA;So, we will iterate over the entries and simply print them one by one to inspect what we can extract and how. &#xA;&#xA;```python&#xA;for entry in feed.entries:&#xA;  print(entry)&#xA;```&#xA;&#xA;It turns out that every entry in the list is again a dictionary containing a few key-value pairs like `title`, `summary`, `link`, etc. To get a clear idea of those keys, we can again use the keys function in python.&#xA;&#xA;```python&#xA;feed = feedparser.parse(&#34;url_of_the_rss_feed&#34;)&#xA;print(feed.entries[0].keys())&#xA;```&#xA;&#xA;![Feedparser Entries Keys](https://res.cloudinary.com/techstructive-blog/image/upload/v1648371221/blog-media/c8uog85goe9jzrzl1pq1.png)&#xA;&#xA;```python&#xA;&gt;&gt;&gt; feed.entries[0].keys()&#xA;dict_keys([&#39;title&#39;, &#39;title_detail&#39;, &#39;authors&#39;, &#39;author&#39;, &#39;author_detail&#39;, &#39;published&#39;, &#39;published_parsed&#39;, &#39;links&#39;, &#39;link&#39;, &#39;id&#39;, &#39;guidislink&#39;, &#39;summary&#39;, &#39;summary_detail&#39;, &#39;tags&#39;])&#xA;```&#xA;&#xA;Now that we have all the keys associated with the entries, we can extract the specific details like `title`, `author`, and `summary_detail` (the actual content in this case).  &#xA;&#xA;Though this might not be identical for all RSS feeds, it will be very similar; it is a matter of using the right keys in the list of dictionaries.  
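&#xA;Since the exact keys differ from feed to feed, a safe habit is to check for a key before accessing it. Here is a minimal sketch with plain dicts standing in for parsed entries (the field values are made up; real feedparser entries are dict-like, so the same lookups apply to them):

```python
# Plain dicts playing the role of parsed feed entries.
entries = [
    {"title": "First Post", "link": "https://example.com/first"},
    {"title": "Second Post", "link": "https://example.com/second",
     "image": "second-cover.png"},
]

default_image = "site-cover.png"  # site-level fallback cover

for entry in entries:
    # Only the second entry carries its own image; the first
    # falls back to the site-level default.
    image = entry["image"] if "image" in entry else default_image
    print(entry["title"], image)
```

This is the same guard we will use with real feeds below when handling missing cover images.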
&#xA;&#xA;Let&#39;s say we want to print out the titles of all the entries in the feed. We can do that by iterating over the entries list and fetching the title from the iterator as `entry.title`, if `entry` is the iterator.&#xA;&#xA;```python&#xA;for entry in feed.entries:&#xA;  print(entry.title)&#xA;```&#xA;&#xA;![Feedparser List of Entries](https://res.cloudinary.com/techstructive-blog/image/upload/v1648372532/blog-media/lhofdzmr3ks0fuut7pxm.png)&#xA;&#xA;Similarly, we can get the links of the entries using the link key in the dictionary.&#xA;&#xA;```python&#xA;for entry in feed.entries:&#xA;  print(entry.link)&#xA;```&#xA;&#xA;### Get information about the Website/Publication&#xA;&#xA;To get metadata about the source you are extracting from, i.e. the website or publication, we can use the key `feed`. This key stores another dictionary as its value, which might have information like `title`, `description` or `subtitle`, `canonical_url`, or any other data related to the website/company.&#xA;&#xA;```python&#xA;feed.feed&#xA;&#xA;or&#xA;&#xA;feedparser.parse(&#34;url_of_the_rss_feed&#34;).feed&#xA;```&#xA;&#xA;![Feedparser Feed](https://res.cloudinary.com/techstructive-blog/image/upload/v1648373487/blog-media/r7hiojfdrtrjqfhkjbdt.png)&#xA;&#xA;From this dictionary, we can now simply extract the specific information from the keys. But first, as in the previous examples, we need a clear idea of what keys are in the dictionary so we can extract the specific values.&#xA;&#xA;```python&#xA;feed.feed.keys()&#xA;&#xA;or&#xA;&#xA;feedparser.parse(&#34;url_of_the_rss_feed&#34;).feed.keys()&#xA;```&#xA;&#xA;Using keys like `title`, `links`, and `subtitle`, we can get information at the website/company level, not related to a specific post in the entries list. 
&#xA;&#xA;```python&#xA;# get the title of the webpage/publication&#xA;feed.feed.title&#xA;&#xA;# get the links associated with the webpage&#xA;feed.feed.links&#xA;&#xA;# get the cover-image for the webpage&#xA;feed.feed.image&#xA;``` &#xA;&#xA;You can further get information specific to your feed. &#xA;&#xA;## Checking for keys existence in the dictionary of feed&#xA;&#xA;We also need to check for the existence of a key in the dictionary of the provided feed; this becomes a real problem if we are parsing multiple RSS feeds which might have different structures. I ran into this problem while making [podevcast](https://podevcast.netlify.app), where I had to parse multiple RSS feeds from different RSS generators. Some of the feeds had a cover image per entry, but most of them didn&#39;t. So, we need to make sure we have a check for those missing keys.&#xA;&#xA;```python&#xA;feedlist = [&#39;https://freecodecamp.libsyn.com/rss&#39;, &#39;https://feeds.devpods.dev/devdiscuss_podcast.xml&#39;]&#xA;&#xA;for feed in feedlist:&#xA;    feed = feedparser.parse(feed)&#xA;&#xA;    print(feed.entries[0].keys())&#xA;    for entry in feed.entries:&#xA;        if &#39;image&#39; in entry:&#xA;            image_url = entry.image&#xA;        else:&#xA;            image_url = feed.feed.image&#xA;        &#xA;        #print(image_url)&#xA;```&#xA;&#xA;```python&#xA;&gt;&gt;&gt; feedlist = [&#39;https://freecodecamp.libsyn.com/rss&#39;, &#39;https://feeds.devpods.dev/devdiscuss_podcast.xml&#39;]&#xA;&gt;&gt;&gt; for feed in feedlist:&#xA;...     feed = feedparser.parse(feed)&#xA;...     for entry in feed.entries:&#xA;...             if &#39;image&#39; in entry:&#xA;...                     image_url = entry.image&#xA;...             else:&#xA;...                     image_url = feed.feed.image&#xA;...     
print(feed.entries[0].keys())&#xA;...&#xA;&#xA;dict_keys([&#39;title&#39;, &#39;title_detail&#39;, &#39;itunes_title&#39;, &#39;published&#39;, &#39;published_parsed&#39;, &#39;id&#39;, &#39;guidislink&#39;, &#39;links&#39;, &#39;link&#39;, &#39;image&#39;, &#39;summary&#39;, &#39;summary_detail&#39;, &#39;content&#39;, &#39;itunes_duration&#39;, &#39;itunes_explicit&#39;, &#39;subtitle&#39;, &#39;subtitle_detail&#39;, &#39;itunes_episode&#39;, &#39;itunes_episodetype&#39;, &#39;authors&#39;, &#39;author&#39;, &#39;author_detail&#39;])&#xA;&#xA;dict_keys([&#39;title&#39;, &#39;title_detail&#39;, &#39;links&#39;, &#39;link&#39;, &#39;published&#39;, &#39;published_parsed&#39;, &#39;id&#39;, &#39;guidislink&#39;, &#39;tags&#39;, &#39;summary&#39;, &#39;summary_detail&#39;, &#39;content&#39;, &#39;subtitle&#39;, &#39;subtitle_detail&#39;, &#39;authors&#39;, &#39;author&#39;, &#39;author_detail&#39;, &#39;itunes_explicit&#39;, &#39;itunes_duration&#39;])&#xA;```&#xA;&#xA;As we can see, the second RSS feed does not have an `image` key in its entries, which means its entries don&#39;t have unique cover images, so we have to fall back to the feed-level cover image: the `image` key under the `feed` key.&#xA;&#xA;![Feedparser Cover Image Demo](https://res.cloudinary.com/techstructive-blog/image/upload/v1648373275/blog-media/fzdqie5dubigxzfhtv2x.png)&#xA;&#xA;As we can see here, `image_url` will pick up the `image` key from the entry&#39;s dictionary if it is present; otherwise we assign it the website/podcast cover image URL. This is how we handle missing keys when there are multiple feeds to be extracted: even though the feeds are quite similar, they have subtle differences like this that need to be handled and taken care of.&#xA;&#xA;## Conclusion&#xA;&#xA;From this little article, we were able to understand and use the feedparser Python package, which can be used to extract information from different feeds. 
We saw how to extract content from the entries, get the number of entries in the feed, check for keys in the dictionary, and so on. Using Python&#39;s Feedparser package, some of the projects I have created include:&#xA;&#xA;- [podevcast](https://podevcast.netlify.app)&#xA;- [dailydotdev-bookmark-cli](https://pypi.org/project/dailydotdev-bookmark-cli/)&#xA;- [Django Newsletter](https://github.com/Mr-Destructive/newsletter)&#xA;&#xA;For further reading, you can specifically target a feed type by getting the appropriate methods from the feedparser [documentation](https://feedparser.readthedocs.io/en/latest/).&#xA;&#xA;Thank you for reading. If you have any suggestions, additions, or feedback, please let me know in the comments or on my social handles below. Hope you enjoyed reading. Happy Coding :)&#xA;</content>
      <type></type>
    </item>
    <item>
      <title>Pipx: A python package consumption tool for CLI packages</title>
      <link>https://www.meetgor.com/posts/pipx-intro</link>
      <description>Introduction Previously, I explored pipenv as a python package management tool and this week it is time for exploring a python CLI package isolation tool called pipx.</description>
      <pubDate>Sun, 20 Mar 2022 00:00:00 UTC</pubDate>
      <content>## Introduction&#xA;&#xA;Previously, I explored [pipenv](https://mr-destructive.github.io/techstructive-blog/pipenv-intro/) as a python package management tool, and this week it is time to explore a python CLI package isolation tool called `pipx`. It is a really simple and powerful tool for running commands as executables from an isolated environment. Those commands should come from a python package that provides a CLI. With pipx, we install the package once and can use it anywhere on our system, isolated from other virtual environments. &#xA;&#xA;In this article, we will explore the pipx package/tool in python, from installation to setting up your environment and removing some friction from your python workflow. This won&#39;t be an in-depth guide to pipx, though we will cover a few python packages that can be easily used in your environment.&#xA;&#xA;## What is Pipx&#xA;&#xA;Pipx is a python package that works similarly to `pip`, but with a narrower use case: it can run the CLI commands of any python package from anywhere on your system. It uses its own virtual environment per package for managing the packages and the python version. &#xA;&#xA;**NOTE**&#xA;&#xA;- Pipx is a tool to install and execute CLI commands provided by python packages&#xA;- Pipx is not a python package management tool like pip or pipenv.&#xA;&#xA;Pipx is similar to `npx` for nodejs, though the major difference is that pipx installs and runs all packages/tools globally, unlike npx. 
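Conceptually, what pipx automates can be sketched with the standard `venv` module. A rough, illustrative sequence (not pipx's actual implementation; `some-cli` is a hypothetical package name, and the paths are pipx's documented defaults):

```shell
# Roughly what `pipx install some-cli` automates (illustrative only):
python3 -m venv ~/.local/pipx/venvs/some-cli            # 1. a dedicated venv per package
~/.local/pipx/venvs/some-cli/bin/pip install some-cli   # 2. install only into that venv
# 3. expose the package's CLI entry point on PATH:
ln -s ~/.local/pipx/venvs/some-cli/bin/some-cli ~/.local/bin/some-cli
```

The result is that each tool lives in its own environment while its command stays available globally.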
&#xA;&#xA;&#xA;## Install Pipx&#xA;&#xA;**Pipx requires Python 3.6 or above.**&#xA;&#xA;- You can install it normally like a python package with pip:&#xA;&#xA;```&#xA;pip install pipx&#xA;```&#xA;&#xA;To find out if pipx was successfully installed, you can simply type in the command:&#xA;&#xA;```&#xA;pipx --version&#xA;```&#xA;&#xA;If it prints a version number and no errors, it has been successfully installed on your system globally.&#xA;&#xA;## Pipx Commands &#xA;&#xA;The primary commands that pipx can use are:&#xA;&#xA;1. pipx install&#xA;2. pipx list&#xA;3. pipx run&#xA;4. pipx inject&#xA;5. pipx uninstall&#xA;&#xA;For further command options for a specific use case, you can use the `pipx -h` command to get the detailed list of commands and options available in pipx.&#xA;&#xA;Also, the documentation of pipx is incredibly beginner-friendly. You can follow along from the [documentation](https://pypa.github.io/pipx/) webpage or the [GitHub](https://github.com/pypa/pipx/) repository. The documentation is a simple single-page website that makes learning the tool easier, and the package itself is well documented.&#xA;&#xA;### Pipx Install Command&#xA;&#xA;Pipx installs each package in a virtual environment specific to that package. This means you will have a separate virtual env for each package you install, rather than a single virtual env per project, and pipx makes those package venvs accessible globally. The next step after installing the package in its own venv is to make that package&#39;s commands accessible via the PATH environment variable. &#xA;&#xA;```&#xA;pipx install &lt;package_name&gt;&#xA;```&#xA;&#xA;Say we install [pgcli](https://pypi.org/project/pgcli/), a package for interacting with a Postgres database from the command line. 
We will simply enter the following commands:&#xA;&#xA;```&#xA;pipx install pgcli&#xA;```&#xA;&#xA;![install package with pipx](https://res.cloudinary.com/techstructive-blog/image/upload/v1647783261/blog-media/du830o3fbqogvkaesxnq.png)&#xA;&#xA;We can now use the pgcli package globally on our system, so we have access to the databases on the local Postgres server. We can run pgcli commands as usual by just prefixing them with `pipx run`.&#xA;&#xA;```&#xA;pipx run pgcli &lt;local_database_name&gt; -U postgres&#xA;```&#xA;&#xA;![PGCLI demonstration GIF](https://res.cloudinary.com/techstructive-blog/image/upload/v1647782221/blog-media/dfpwlxkh5ybqj5pdos8d.gif)&#xA;&#xA;This is a really cool way to interact with CLI applications without installing them separately or globally; the virtual environment pipx keeps for the specific package makes running its commands anywhere on the system clean and persistent.&#xA;&#xA;If you want to know more about the pgcli tool/package in Python, you can visit its [documentation](https://www.pgcli.com/docs) webpage or the [GitHub](https://github.com/dbcli/pgcli) repository.&#xA;&#xA;#### Pipx Default PATH&#xA;&#xA;The default path at which pipx stores the virtual environments for each package is `~/.local/pipx/venvs` (here `~` refers to the user&#39;s home directory); the documentation says it can be changed by setting the environment variable `PIPX_HOME`. &#xA;&#xA;The default path at which pipx stores the binaries/command apps for each package is `~/.local/bin`; even this path can be modified by setting the environment variable `PIPX_BIN_DIR`.&#xA;&#xA;### Pipx List Command&#xA;&#xA;The pipx list command simply lists the executables or the commands you can run with pipx. This command displays the commands associated with the packages that are installed in the pipx environment. 
&#xA;&#xA;```&#xA;pipx list&#xA;```&#xA;&#xA;![Pipx list output](https://res.cloudinary.com/techstructive-blog/image/upload/v1647782349/blog-media/qu95nynklbuceqqd9qke.png)&#xA;&#xA;This command gives a detailed list of the commands associated with their respective packages. It also gives meta-information like the Python version with which each package was installed. &#xA;&#xA;&#xA;### Pipx Run Command&#xA;&#xA;The most useful command in pipx has to be `pipx run`; this command executes the specified command associated with the provided package. &#xA;&#xA;The structure of the command is as follows:&#xA;&#xA;```&#xA;pipx run &lt;package_name&gt; &lt;command&gt;&#xA;```&#xA;The pipx documentation refers to the `command` as an `APP`, as it is an executable created from the package specification.&#xA;&#xA;If the APP/command name does not match the `package_name` (most of the time it won&#39;t), you need to add an argument to the run command.&#xA;&#xA;```&#xA;pipx run --spec &lt;package_name&gt; &lt;command&gt;&#xA;```&#xA;&#xA;The `--spec` option allows specifying which package to use while running the command/APP. Using this option with the `pipx run` command, we can run package-specific commands. For example, if we want to run a tool from the terminal without messing up the current environment, i.e. without installing any package into it, we can do that using pipx.&#xA;&#xA;```&#xA;pipx install dailydotdev-bookmark-cli&#xA;&#xA;pipx run --spec dailydotdev-bookmark-cli bookamrks&#xA;```&#xA;&#xA;![dailydotdev-bookmark-cli demo](https://res.cloudinary.com/techstructive-blog/image/upload/v1647782547/blog-media/vsfmwlzazqtosuwttexb.gif)&#xA;&#xA;This is my shameless plug of my [first python package](https://github.com/Mr-Destructive/bookmarks-cli). 
The command used for the CLI is totally unrelated to the package name, and hence we have to use the `--spec` option.&#xA;&#xA;## Install and Run packages from Pipx&#xA;&#xA;Any python package which provides a Command Line Interface can be installed and run with pipx. Some examples are [pytest](https://docs.pytest.org/en/7.1.x/contents.html), [pipenv](https://pipenv-fork.readthedocs.io/en/latest/install.html), [rich-cli](https://github.com/Textualize/rich-cli), [Black code formatter](https://github.com/psf/black), [markata](https://markata.dev/), and many others. You can install any package that has a CLI and interact with it on the go with pipx. &#xA;&#xA;### Running Ipython shell&#xA;&#xA;We can use the [Ipython](https://pypi.org/project/ipython/) shell from pipx in an isolated environment. We simply install the package first; installing it creates the virtual environment and registers the app/command binaries in the system&#39;s PATH so that they can be accessed globally. 
&#xA;&#xA;```&#xA;pipx install ipython&#xA;&#xA;pipx run ipython&#xA;```&#xA;&#xA;![IPython pipx demo](https://res.cloudinary.com/techstructive-blog/image/upload/v1647782651/blog-media/ssgymybn0dwi8ocs6xpf.png)&#xA;&#xA;For further documentation on using ipython from the CLI, you can refer to the [GitHub](https://github.com/ipython/ipython) link or the [documentation](https://ipython.org/ipython-doc/3/interactive/reference.html#command-line-usage) page.&#xA;&#xA;### Reading an IPYNB file from pipx with JUT&#xA;&#xA;If you just want to read an ipynb file from the terminal, you can do that using [jut](https://pypi.org/project/jut/) by specifying the command:&#xA;&#xA;```&#xA;pipx install jut&#xA;&#xA;pipx run jut &lt;notebook.ipynb&gt;&#xA;```&#xA;&#xA;![jut pipx demo](https://res.cloudinary.com/techstructive-blog/image/upload/v1647782686/blog-media/ddm5uiqarjjmedhqvcsk.png)&#xA;&#xA;You can find more information on JUT via their official [GitHub](https://github.com/kracekumar/jut) repository.&#xA;&#xA;### Using rich-cli to display text using pipx&#xA;&#xA;We can even use [Rich-CLI](https://pypi.org/project/rich-cli/) to print rich content in the terminal. Rich-CLI, as the name suggests, is a CLI for the Rich package in python. It is really intuitive and simple to use. If we want to take snapshots of the rich content of a markdown file, source file, dataset, etc., rich-cli is a tool that can quickly do that. Use pipx to install the package globally and simply run it wherever required in an isolated environment.&#xA;&#xA;```&#xA;pipx install rich-cli&#xA;&#xA;pipx run --spec rich-cli rich &#34;[blue]Hello[/], [yellow]World[/]!&#34; --print&#xA;```&#xA;&#xA;![rich-cli pipx demo](https://res.cloudinary.com/techstructive-blog/image/upload/v1647782738/blog-media/xvdyhxjpj2hsghl2x1ng.png)&#xA;&#xA;&#xA;This will give you a colorful display of &#34;Hello, World&#34; without you messing up your current project. 
Pipx uses an isolated environment, so it doesn&#39;t tinker with your local virtual environment or project; you can still run those CLIs against source files in your current project, though.&#xA;&#xA;We can even print a markdown file in a rich format, simply using pipx and rich-cli. There are a lot of things we can do with rich-cli.&#xA;&#xA;```&#xA;pipx run rich-cli &lt;markdown_file.md&gt;&#xA;```&#xA;&#xA;![rich-cli markdown demo](https://res.cloudinary.com/techstructive-blog/image/upload/v1647785474/blog-media/bwmvucrcgrtgwdv58ffj.png)&#xA;&#xA;For further reading on the rich-cli package, you can visit their [GitHub](https://github.com/Textualize/rich-cli) link and the [Documentation](https://github.com/Textualize/rich-cli/blob/main/README.md) Readme.&#xA;&#xA;### Using pytest to perform a test for the current project&#xA;&#xA;[Pytest](https://pypi.org/project/pytest/) allows writing simple and scalable tests for Python apps, libraries, and packages. We can use it to write tests for the project without adding it as a dependency in the python environment. Simply install the package with pipx and run the tests in an isolated environment.&#xA;&#xA;```&#xA;pipx install pytest&#xA;&#xA;pipx run pytest temp.py&#xA;```&#xA;&#xA;![Pytest pipx demo](https://res.cloudinary.com/techstructive-blog/image/upload/v1647782771/blog-media/qutb0bxzlnwctl9mbuon.png)&#xA;&#xA;Here, we can see that pytest was run on the `temp.py` file, which can be any application file in your project. This gave us results without installing pytest in our application&#39;s virtual environment. 
This is really convenient for running commands from packages that should not become dependencies of your project&#39;s virtual environment.&#xA;&#xA;You can visit Pytest&#39;s [GitHub](https://github.com/pytest-dev/pytest/), [Documentation](https://docs.pytest.org/en/stable/), and their [Home Page](https://pytest.org/).&#xA;&#xA;### Using Black (Python code formatter)&#xA;&#xA;We can use pipx to format python source files using [black](https://pypi.org/project/black/). We don&#39;t have to install `black` as a dependency in the project. Pipx will simply install black in its isolated environment and run the specified command.&#xA;&#xA;```&#xA;pipx install black&#xA;&#xA;pipx run black temp.py&#xA;```&#xA;&#xA;![Black pipx demo](https://res.cloudinary.com/techstructive-blog/image/upload/v1647782823/blog-media/dtnae85fvgae2y4aiqkg.png)&#xA;&#xA;Black is a code formatter for Python: it rewrites your python source files into a consistent, uncompromising style without changing their behavior. You can visit the [GitHub](https://github.com/psf/black), [Documentation](https://black.readthedocs.io/en/stable) or the Black package [Playground webpage](https://black.vercel.app/).&#xA;&#xA;### Using httpie (curl equivalent in Python)&#xA;&#xA;We can use the [httpie](https://pypi.org/project/httpie/) package in python, which is a command-line HTTP client. It is very similar to the `curl` command in Linux. We can use it with pipx to test out API endpoints or any website that you might be working with, without leaving the terminal. For further details on the httpie package, you can visit their [Home page](https://httpie.io/), [GitHub](https://github.com/httpie), or [PyPI webpage](https://pypi.org/project/httpie/). 
&#xA;&#xA;```&#xA;pipx install httpie&#xA;&#xA;pipx run --spec httpie http -v dev.to&#xA;```&#xA;&#xA;![httpie pipx demo](https://res.cloudinary.com/techstructive-blog/image/upload/v1647782857/blog-media/jbaudm3pbnnsjtzy5ok7.png)&#xA;&#xA;Here, we can see that the package gives a simple output of the request and response headers. It is smart enough to expand the link to `https://dev.to`, unlike curl, which needs the exact URL.&#xA;&#xA;### Running Markata blog with pipx &#xA;&#xA;As said, every package that bundles itself with a CLI can be installed and run from anywhere. So [markata](https://pypi.org/project/markata/), a plugin-driven static site generator written in Python, also bundles a CLI. Using the base app (markata) we can run its associated commands like `build`, `list`, etc. I use this to build my blog.&#xA;&#xA;```&#xA;pipx install markata&#xA;&#xA;pipx run markata build&#xA;```&#xA;&#xA;![Markata pipx demo](https://res.cloudinary.com/techstructive-blog/image/upload/v1647783021/blog-media/cygzwixyzwv4wwsccbfc.gif)&#xA;&#xA;You can visit Markata&#39;s [documentation](https://markata.dev/) web page or the [GitHub](https://github.com/WaylonWalker/markata) repository for further insights on the static site generator.&#xA;&#xA;**If you want to explore more packages that you can use with pipx, you can look at the docs guide&#39;s [programs-to-try](https://pypa.github.io/pipx/programs-to-try/) section.**&#xA;&#xA;## Conclusion&#xA;&#xA;So, from this simple introduction, we were able to understand the basics of pipx, a python tool for isolating and running CLI-specific package commands. We saw how to set up pipx, install packages, run the commands from a package, and interact with the virtual environment that it creates for each package we install. &#xA;&#xA;Thank you for reading. If you have any comments, suggestions, or feedback, please let me know in the comments. Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>Pipenv: Python&#39;s Official Package Management tool</title>
      <link>https://www.meetgor.com/posts/pipenv-intro</link>
      <description>Introduction Pipenv is Python&#39;s officially recommended Package management tool, as the name suggests it combines the power of pip and virtualenv. It is really powerful and gives control of package management for a python project.</description>
      <pubDate>Mon, 14 Mar 2022 00:00:00 UTC</pubDate>
      <content>## Introduction&#xA;&#xA;Pipenv is Python&#39;s officially recommended Package management tool; as the name suggests, it combines the power of `pip` and `virtualenv`. It is really powerful and gives control of package management for a python project. It also has support for adding dev packages, python versions, smartly recognizing main and derived package installation/uninstallation, and so on. It is the [official package management tool for Python](https://packaging.python.org/en/latest/tutorials/managing-dependencies/#managing-dependencies). &#xA;&#xA;It is quite similar to npm for Nodejs in Javascript, bundler for Ruby, cargo for Rust, and so on. It is really simple and easy to use, as it manages the overhead of package management for us; hence it is also a high-level package management tool, as opposed to pip, which is not as powerful as Pipenv. So, in this article, we&#39;ll explore the [Pipenv](https://pypi.org/project/pipenv/) package manager for Python and how you can use it in your next python project. Let&#39;s get started. 
&#xA;&#xA;## Install Pipenv&#xA;&#xA;[Pipenv](https://pypi.org/project/pipenv/) is just like any other package in python; you can install it with pip as you normally install any other package, with the command:&#xA;&#xA;```&#xA;pip install pipenv&#xA;```&#xA;&#xA;![Pipenv Install](https://res.cloudinary.com/techstructive-blog/image/upload/v1647193069/blog-media/hnhoeigfhx2hsypexgm5.png)&#xA;&#xA;You can refer to the documentation of Pipenv from [here](https://pipenv.pypa.io/en/latest/).&#xA;&#xA;## Set up a Python Environment&#xA;&#xA;This step is not mandatory, but it avoids mistakes that you could make later while installing packages. To create a new python environment for your project, you simply have to run the following command:&#xA;&#xA;```&#xA;pipenv shell&#xA;```&#xA;&#xA;![Pipenv initialize](https://res.cloudinary.com/techstructive-blog/image/upload/v1647192853/blog-media/dahwaqnblvvvqyyw62uq.png)&#xA;&#xA;This will create the virtual environment for Python in the local folder. It is quite similar to installing `virtualenv` and then activating the env/venv folder script, though pipenv is quite powerful as it automatically detects the environment. &#xA;&#xA;If you look carefully, a file will have been generated after the command executes successfully: a file called `Pipfile`, without any extension, created in the folder you executed the command from. The file contains the version of python used in this project along with the list of dependencies (currently empty), as well as the source from which it will download and manage the dependencies. 
&#xA;&#xA;The Pipfile after simply creating the virtualenv via the command `pipenv shell` looks something like this:&#xA;&#xA;```&#xA;[[source]]&#xA;url = &#34;https://pypi.org/simple&#34;&#xA;verify_ssl = true&#xA;name = &#34;pypi&#34;&#xA;&#xA;[packages]&#xA;&#xA;[dev-packages]&#xA;&#xA;[requires]&#xA;python_version = &#34;3.8&#34;&#xA;```&#xA;&#xA;If you want to set up a specific version of python for the virtual environment, you can do this using pipenv. The version should be installed on your system for pipenv to pick it up; if you have the specific version of python set up with all path variables configured, you can use the below command to set the version of python in pipenv.&#xA;&#xA;```&#xA;pipenv --python version-number&#xA;```&#xA;&#xA;For example: `pipenv --python 3.9` will set up the virtual environment with python version `3.9`. &#xA;&#xA;## What is Pipfile&#xA;&#xA;Pipfile is basically a TOML file that has all the details of the different dependencies/packages and the version of Python used in the project/directory. TOML is a simple, reader-friendly configuration format: a map of keys and values as configuration data structures. In a Pipfile, we have keys like package names with version numbers as their values, and groups of dependencies like `dev-packages` and `packages`. The Pipfile is the file through which Pipenv implements its package management environment. The file is really important and powerful, and Pipenv can even install all dependencies from a `requirements.txt` file. Yes, it can automatically detect that if you provide the path to that file. &#xA;&#xA;Pipenv also has additional features like adding dev dependencies/packages to the project as a separate group. So if you want to test a feature with a specific package, you can add it as a dev package and it will be stored separately. The Pipfile segregates the dependencies so that Pipenv can install/uninstall them from the virtual environment. 
In short, Pipfile lets you have great control over your project&#39;s package management. &#xA;&#xA;## Installing Python Packages&#xA;&#xA;Once Pipenv is initialized as a virtual environment for Python, we can install dependencies with either `pip` or `pipenv`. Here is a mistake that might trap you: if you have not run the command `pipenv shell` and you install a dependency with `pip`, you will install that dependency globally and make a mess of the project. So, it is advised to use `pipenv shell` in order to activate your virtual environment. If you do not want to do that, you can use the following command to install a dependency without activating the virtual environment.&#xA;&#xA;```&#xA;pipenv install &lt;package-name&gt;&#xA;```&#xA;&#xA;![Pipenv Install Flask](https://res.cloudinary.com/techstructive-blog/image/upload/v1647192980/blog-media/dfsokm6e1easwvxtgehh.png)&#xA;&#xA;This will detect any virtual environment in the project; if one has not been created already, it will create it and install the package into that environment.&#xA;&#xA;Installing any package using `pipenv` will update the Pipfile, adding the package to its packages list.&#xA;```toml&#xA;[[source]]&#xA;url = &#34;https://pypi.org/simple&#34;&#xA;verify_ssl = true&#xA;name = &#34;pypi&#34;&#xA;&#xA;[packages]&#xA;flask = &#34;*&#34;&#xA;&#xA;[dev-packages]&#xA;&#xA;[requires]&#xA;python_version = &#34;3.8&#34;&#xA;```&#xA;&#xA;OR &#xA;&#xA;If you wish to install with pip as usual, you need to be in the virtual subshell. If you already are, then the dependency will be added to the virtual environment. Note, though, that if you install a package with `pip` and not with `pipenv`, the package won&#39;t be added to the Pipfile but will be installed in the virtual environment. 
&#xA;&#xA;![Pipenv pip install](https://res.cloudinary.com/techstructive-blog/image/upload/v1647193149/blog-media/suaykqsyrgn1w0mou7f0.png)&#xA;&#xA;Here, if we type the command `pipenv graph`, it will show us a nice detailed list of all the installed dependencies. &#xA;&#xA;```&#xA;pipenv graph&#xA;```&#xA;```shell&#xA;$ pipenv graph&#xA;Flask==2.0.3&#xA;  - click [required: &gt;=7.1.2, installed: 8.0.4]&#xA;    - colorama [required: Any, installed: 0.4.4]&#xA;  - itsdangerous [required: &gt;=2.0, installed: 2.1.1]&#xA;  - Jinja2 [required: &gt;=3.0, installed: 3.0.3]&#xA;    - MarkupSafe [required: &gt;=2.0, installed: 2.1.0]&#xA;  - Werkzeug [required: &gt;=2.0, installed: 2.0.3]&#xA;requests==2.27.1&#xA;  - certifi [required: &gt;=2017.4.17, installed: 2021.10.8]&#xA;  - charset-normalizer [required: ~=2.0.0, installed: 2.0.12]&#xA;  - idna [required: &gt;=2.5,&lt;4, installed: 3.3]&#xA;  - urllib3 [required: &gt;=1.21.1,&lt;1.27, installed: 1.26.8]&#xA;```&#xA;&#xA;As you might have guessed, the above command is equivalent to the `pip freeze` command, but just compare the detail the two tools give. Pipenv really shines here.&#xA;&#xA;&#xA;If you compare the output of the Pipfile and `pipenv graph`, you might be confused as to why there are so few packages in the file. The reason: **Pipfile doesn&#39;t store the sub-packages/dependencies of a base package**. Take Flask, for example: we have Flask as the main package, and `click`, `Jinja2`, `Werkzeug` are its sub-dependencies; further, `colorama` and `MarkupSafe` are in turn dependencies of `click` and `Jinja2`. So, the Pipfile just includes the top-level packages; pipenv is smart enough to install the rest. It also doesn&#39;t include `requests`, though it is indeed included in the `graph`. 
This is because Pipenv only puts a dependency in the Pipfile if it has been installed via the `pipenv` command.&#xA;&#xA;![Pipenv Graph vs pip freeze](https://res.cloudinary.com/techstructive-blog/image/upload/v1647193333/blog-media/gzo95pbquaioujtqvntu.png)&#xA;&#xA;You can use `pip freeze` here as well, since we are technically in a Python virtual environment, and you can clearly see the difference. Pipenv is a high-level tool compared to pip; it technically uses `pip` and `virtualenv` and leverages both of them to be a smart package management tool. &#xA;&#xA;## What is the Pipfile.lock&#xA;&#xA;- If you are coming from the `Javascript` world, it is similar to the `package-lock.json` file.&#xA;- If you are coming from the `Ruby` world, it is similar to the `Gemfile.lock` file.&#xA;- If you are coming from the `Rust` world, it is similar to the `cargo.lock` file.&#xA;&#xA;Ok, you get the idea: it is a file that more robustly pins the versions of the packages without conflicting with other versions or the Python version itself. If you look at the Pipfile.lock, it also has hashes and stores the sub-packages as well. The file format here is JSON, as opposed to TOML for the Pipfile. &#xA;&#xA;## Configuring the Pipenv environment&#xA;&#xA;Now, a question you might have is: where is the virtual environment? Is it there? Of course it is there; it is just configured to a different location. By default, it will be stored in the `~\.virtualenvs\` folder. &#xA;&#xA;You can get the location of the current virtualenv with the following command:&#xA;&#xA;```&#xA;pipenv --venv&#xA;```&#xA;&#xA;```&#xA;$ pipenv --venv&#xA;C:\Users\acer\.virtualenvs\pipenv-blog-gHY6vF9t&#xA;```&#xA;&#xA;For Windows, it is under the user folder (in my case, the user is named `acer`) followed by the hidden folder `.virtualenvs`; this folder will contain all the virtualenvs for the different projects using `pipenv`. 
&#xA;&#xA;If you wished to change this location and keep the virtual environment folder in the same directory as your project, you can set up an environment variable for it as follows:&#xA;&#xA;For Linux/macOS:&#xA;Add the following to your `~/.bashrc` or other equivalent shell configuration file.&#xA;```&#xA;export PIPENV_VENV_IN_PROJECT=&#34;enabled&#34;&#xA;```&#xA;&#xA;For Windows:&#xA;&#xA;Set it as an environment variable.&#xA;```&#xA;set PIPENV_VENV_IN_PROJECT=&#34;enabled&#34;&#xA;```&#xA;&#xA;This will make sure the virtualenvs for `pipenv` projects are created inside the current folder itself and not in a single `~\.virtualenvs\` folder. &#xA;&#xA;## Creating the requirements.txt file&#xA;&#xA;Let&#39;s say you also want to create a requirements.txt file for distributing your project to someone else, as not everyone will use Pipenv to manage their dependencies. It is really straightforward and quick to create a requirements.txt file from the Pipenv environment. &#xA;&#xA;The below command will make the `requirements.txt` file from the existing Pipenv project.&#xA;&#xA;```&#xA;pipenv lock -r &gt; requirements.txt&#xA;```&#xA;&#xA;![Pipenv to requirements.txt file](https://res.cloudinary.com/techstructive-blog/image/upload/v1647193388/blog-media/e6t68b7ckrsrvadvmeqa.png)&#xA;&#xA;This will create the requirements.txt file based on the Pipfile. Pipenv is smart again here: it writes out all of the required dependencies so that pip will be able to install them.&#xA;&#xA;## Using requirements.txt in Pipenv&#xA;&#xA;We can install all the dependencies from a requirements.txt file when migrating from bare-bones virtualenv and pip to Pipenv. Pipenv will install all the mentioned dependencies and add the appropriate version checks for them. 
&#xA;&#xA;```&#xA;pipenv install -r requirements.txt&#xA;```&#xA;This will install the dependencies mentioned in the requirements.txt file into the Pipenv virtual environment. &#xA;&#xA;## Managing Dev Packages &#xA;&#xA;Let&#39;s say we want to install a package but are not sure whether we will use it in production or the actual script; we just want to test how it works. For this we have dev packages, to install optional or testing packages. &#xA;&#xA;To install a dev dependency or package, you can use the following command:&#xA;&#xA;```&#xA;pipenv install &lt;package-name&gt; --dev&#xA;```&#xA;&#xA;![Pipenv dev package install](https://res.cloudinary.com/techstructive-blog/image/upload/v1647194653/blog-media/x5dimgfd2ikm2ercbzhv.png)&#xA;&#xA;If we look at the Pipfile, the `django` package that we installed with the `--dev` flag will be in the `dev-packages` section:&#xA;&#xA;```&#xA;$ cat Pipfile&#xA;[[source]]&#xA;url = &#34;https://pypi.org/simple&#34;&#xA;verify_ssl = true&#xA;name = &#34;pypi&#34;&#xA;&#xA;[packages]&#xA;flask = &#34;*&#34;&#xA;&#xA;[dev-packages]&#xA;django = &#34;*&#34;&#xA;&#xA;[requires]&#xA;python_version = &#34;3.8&#34;&#xA;```&#xA;&#xA;If we want to uninstall all the dev packages, we can simply enter the command:&#xA;&#xA;```&#xA;pipenv uninstall --all-dev&#xA;```&#xA;&#xA;![Pipenv uninstall devpackages](https://res.cloudinary.com/techstructive-blog/image/upload/v1647261079/blog-media/bg9n7aj6rfxsvdwflnah.gif)&#xA;&#xA;This will simply uninstall all the dev dependencies/packages from the pipenv environment and also remove them from the packages list in the Pipfile. If you wish to uninstall a specific package in pipenv, you can do so with the simple command:&#xA;&#xA;```&#xA;pipenv uninstall &lt;package-name&gt;&#xA;```&#xA;&#xA;&#xA;## Installing/Uninstalling all packages&#xA;&#xA;To install only the default packages and not the dev-packages: 
&#xA;&#xA;```&#xA;pipenv install&#xA;```&#xA;&#xA;If you want to set up the project with all the dev dependencies included, you can install them with:&#xA;&#xA;```&#xA;pipenv install --dev&#xA;```&#xA;&#xA;This will install all the packages, both `packages` and `dev-packages`. &#xA;&#xA;If you want to uninstall all the packages in Pipenv, you can use the command:&#xA;&#xA;```&#xA;pipenv uninstall --all&#xA;```&#xA;&#xA;This will uninstall all the default and dev packages from Pipenv. This is like starting with a fresh virtual environment. &#xA;&#xA;### References:&#xA;&#xA;- [Pipenv documentation](https://pipenv.pypa.io/en/latest/)&#xA;- [Real Python - Pipenv Guide](https://realpython.com/pipenv-guide/)&#xA;&#xA;## Conclusion&#xA;&#xA;So, this was a brief introduction to `pipenv`, a package management tool recommended by the Python packaging community. We saw that it gives us control over the dependencies and packages of a project while the virtual environment is taken care of automatically. We don&#39;t have to activate the environment to install or manage any package for a project.</content>
      <type></type>
    </item>
    <item>
      <title>Podevcast: A single source for developer podcasts</title>
      <link>https://www.meetgor.com/posts/podevcast-project</link>
      <description>Introduction Hello Developers! Want to listen to programming podcasts from a single place? Podevcast is the place you should be searching for. I am Meet Gor and</description>
      <pubDate>Fri, 18 Feb 2022 00:00:00 UTC</pubDate>
      <content>## Introduction&#xA;&#xA;Hello Developers! Want to listen to programming podcasts from a single place? Podevcast is the place you should be searching for. &#xA;&#xA;I am Meet Gor and I present this project as a submission to the Netlify x Hashnode Hackathon. Podevcast is a webpage(static) for listening to podcasts centered around developers and programming. Just pick your favorite one and start listening straight away. Let&#39;s dive into the making of Podevcast. Head on to https://podevcast.netlify.app/ to check out the live app.&#xA;&#xA;## What is Podevcast&#xA;&#xA;Podevcast is a web application or a static site that renders the top programming/development podcasts. You can listen to the top podcasts around the developer community from a single source. &#xA;&#xA;&gt; Listen to your favorite developer podcasts with Podevcast&#xA;&#xA;Podevcast is a static site generated using a script. There is a static site generator that is heavily done in Python and deployed to Netlify. You can simply listen to the podcasts on the web page or go to the canonical page of the podcast episode. From the canonical page, you can choose to hop to your chosen music player, but the default music player should be fine for casual listening. The core idea is to keep things in a single place for developer podcasts.&#xA;&#xA;[Podevcast](https://podevcast.netlify.app/)&#xA;&#xA;[Source Code](https://github.com/Mr-Destructive/podevcast)&#xA;&#xA;## Preview&#xA;&#xA;Podevcast has popular developer podcasts like `Command Line Heroes`, `The Python Podcast`, `The freeCodeCamp Podcast`, and many others to choose from. You can go into categories for looking at a specific podcast. &#xA;&#xA;### Application Demonstration&#xA;&#xA;Here&#39;s a small demonstration of the Podevcast application.&#xA;&#xA;![Podevcast Preview gif](https://cdn.hashnode.com/res/hashnode/image/upload/v1645200224921/GC8gmxUzX.gif)&#xA;&#xA;Podevcast has multiple pages like:&#xA;&#xA;1. 
[Home page](https://podevcast.netlify.app/)&#xA;2. [Podcast List page](https://podevcast.netlify.app/list)&#xA;3. [Episode page](https://podevcast.netlify.app/the_real_python_podcast/ep/1/)&#xA;4. [Podcast page](https://podevcast.netlify.app/command_line_heroes/)&#xA;5. [Categories page](https://podevcast.netlify.app/category/)&#xA;&#xA;The Home page has the latest episode of all the podcasts. It also has an audio player to play on the go.&#xA;&#xA;![Podevcast home page](https://res.cloudinary.com/techstructive-blog/image/upload/v1645113477/blog-media/iafi8nthhj0vvvrcbhka.png)&#xA;&#xA;The Podcast List page has the list of all the Podcasts available in the project. It has the name of each podcast with a link to the podcast page, which in turn lists all the episodes of that podcast.&#xA;&#xA;![Podevcast Podcast list](https://res.cloudinary.com/techstructive-blog/image/upload/v1645113598/blog-media/cnprgufs3lrouvgdl8jn.png)&#xA;&#xA;The categories page has a list of categories of the podcasts like Web-development, backend, frontend, data science, DevOps, and so on. More categories will be added soon.&#xA;&#xA;![Podevcast Categories](https://res.cloudinary.com/techstructive-blog/image/upload/v1645113626/blog-media/uloq4xi1d4zfo8sfl7bm.png)&#xA;&#xA;The Episode page has the audio player, the summary of the episode, and links to the canonical episode and podcast pages. &#xA;&#xA;![Podevcast Episode page](https://res.cloudinary.com/techstructive-blog/image/upload/v1645113654/blog-media/omqks44p8b3u7jclkhgz.png)&#xA;&#xA;## Why Podevcast?&#xA;&#xA;Listening to music is one thing and listening to podcasts is different. I wanted a place where developers can listen to developer-specific podcasts from a single source, not just another **&#34;Top 10 podcasts you should be listening to as a developer&#34;** article. Having played around with Python and some libraries like feedparser and jinja2 previously, I saw this Hackathon as an opportunity to convert the idea into a project. 
It fits the JAMStack area well from the Hackathon and project perspective.  &#xA;&#xA;## Tech Stack&#xA;&#xA;- Python&#xA;  - [feedparser](https://pypi.org/project/feedparser/)&#xA;  - [jinja2](https://pypi.org/project/Jinja2/)&#xA;- GitHub Actions&#xA;- HTML / CSS&#xA;&#xA;The data is extracted from various RSS Feeds using the feedparser library in Python. &#xA;&#xA;Using GitHub Actions, the feed is refreshed every 24 hours to fetch the latest episodes from the respective podcast feeds. Basically, the GitHub Action triggers a Netlify deployment that in turn generates the static site by running the script.&#xA;&#xA;The command for running the script on Netlify and generating the `Podevcast` webpage is:&#xA;&#xA;```&#xA;pip install -r requirements.txt &amp;&amp; python script.py&#xA;```&#xA;&#xA;And the directory for deployed web pages (published directory) is `site`, which contains all the `HTML` files that can be rendered as the website itself. &#xA;&#xA;### Source Code&#xA;&#xA;The project is available on [GitHub](https://github.com/Mr-Destructive/podevcast). Feel free to open a PR to add a Podcast or a Category. The project only has a few Python files; the main script is `script.py`, which actually creates the home and podcast list pages along with the episode pages. The `src` folder contains some extra bits of scripts, like creating the categories and category podcast list pages. Also, it has certain config files like `runtime.txt`, `requirements.txt`, and so on. Finally, there is `podlist.json` for the list of podcasts and `categorylist.json` for the categories of podcasts. &#xA;&#xA;### Core Script Snippet &#xA;&#xA;The Python script is a lot bigger than the below snippet, but it is doing the same process multiple times for different pages. There is also some type checking and tiny details that are added as per the requirement of the templates. 
&#xA;&#xA;```python&#xA;import feedparser&#xA;from jinja2 import Environment, FileSystemLoader&#xA;from pathlib import Path&#xA;&#xA;template_env = Environment(loader=FileSystemLoader(searchpath=&#39;./layouts/&#39;))&#xA;index_template = template_env.get_template(&#39;index.html&#39;)&#xA;episode_template = template_env.get_template(&#39;episode.html&#39;)&#xA;&#xA;feed = feedparser.parse(&#34;https://freecodecamp.libsyn.com/rss&#34;)&#xA;&#xA;pod_name = feed[&#39;feed&#39;][&#39;title&#39;]&#xA;&#xA;for i in range(len(feed[&#39;entries&#39;])):&#xA;&#xA;    ep_title = feed[&#39;entries&#39;][i][&#39;title&#39;]&#xA;    audio = feed[&#39;entries&#39;][i][&#39;links&#39;][1][&#39;href&#39;]&#xA;    cover_image = feed[&#39;entries&#39;][i][&#39;image&#39;][&#39;href&#39;]&#xA;    og_link = feed[&#39;entries&#39;][i][&#39;links&#39;][0][&#39;href&#39;]&#xA;&#xA;    episode_obj = {}&#xA;    episode_obj[&#39;title&#39;] = ep_title&#xA;    episode_obj[&#39;audiolink&#39;] = audio&#xA;    episode_obj[&#39;cover&#39;] = cover_image&#xA;    episode_obj[&#39;link&#39;] = og_link&#xA;&#xA;    # create the nested output folder before writing the episode page&#xA;    ep_dir = Path(f&#34;site/{pod_name}/ep/{i}&#34;)&#xA;    ep_dir.mkdir(parents=True, exist_ok=True)&#xA;    with open(ep_dir / &#34;index.html&#34;, &#39;w&#39;, encoding=&#39;utf-8&#39;) as ep_file:&#xA;        ep_file.write(&#xA;            episode_template.render(&#xA;                episode = episode_obj&#xA;            )&#xA;        )&#xA;```&#xA;&#xA;   Above is a simple snippet of the core functionality of the script. It basically takes the RSS feed `URL` of the podcast, and using `feedparser`, the data is retrieved in the form of a dictionary in Python. &#xA;&#xA;   - Iterate over `feed[&#39;entries&#39;]`, which is a list whose length is the same as the number of episodes of that podcast, and assign a set of values like `episode title`, `audio link`, `cover image`, `canonical link for the episode`, `date` and so on. &#xA;   - Create a dictionary and store the mentioned data as key-value pairs to access from the template. 
&#xA;   - Open a file at the structured path and pass the `episode_obj` dictionary to the episode template. &#xA;   - Access the dictionary using jinja2 templating tags. &#xA;&#xA;```html&#xA;&lt;html&gt;&#xA;    &lt;head&gt;&#xA;        &lt;title&gt;Podevcast&lt;/title&gt;&#xA;    &lt;/head&gt;&#xA;    &lt;body&gt;&#xA;        &lt;h3 class=&#34;ep-title&#34;&gt;{{ episode.title }}&lt;/h3&gt;&#xA;        &lt;img src=&#34;#&#34;&gt;&#xA;        &lt;a class=&#34;ep-link&#34; href=&#34;#&#34;&gt;Episode &lt;/a&gt; &#xA;        &lt;audio controls=&#34;enabled&#34; preload=&#34;none&#34;&gt;&#xA;            &lt;source src=&#34;#&#34; type=&#34;audio/mpeg&#34;&gt;&#xA;        &lt;/audio&gt;   &#xA;    &lt;/body&gt;&#xA;&lt;/html&gt;&#xA;```&#xA;We can use `{{  }}` to access any value passed to the template via the script. Also, we can make use of `{% %}` to run loops, conditionals, blocks, and other tags in the template. &#xA;&#xA;![Feedparser Illustration](https://res.cloudinary.com/dgpxbrwoz/image/upload/v1645110268/blogmedia/uwdzcwn07oxhppiptem9.png)&#xA;&#xA;So, we can see the feed is basically a dictionary of key-value pairs, where a value can itself be a nested dictionary or a list. For example, `feed[&#39;entries&#39;]` is a list whose length is the number of episodes of a podcast. In the script, we use various keys to access various components; this obviously requires a bit of exploration of the dictionary initially, but it becomes easy thereafter to automate using Python. &#xA;&#xA;### Episode List&#xA;&#xA;Currently, the podcasts are added using a JSON file. It is not that user-friendly, but it is still not a big task to simply add a link in a file. Adding a form on the webpage for submitting a new podcast is a #TODO that will require some external tooling. 
&#xA;&#xA;```json&#xA;{&#xA;    &#34;Command Line Heroes&#34;: &#34;https://feeds.pacific-content.com/commandlineheroes&#34;,&#xA;    &#34;Python Podcast__init__&#34;: &#34;https://www.pythonpodcast.com/feed/mp3/&#34;,&#xA;    &#34;Real Python Podcast&#34;: &#34;https://realpython.com/podcasts/rpp/feed&#34;,&#xA;    &#34;The freeCodeCamp Podcast&#34;: &#34;https://freecodecamp.libsyn.com/rss&#34;,&#xA;    &#34;CodeNewbie&#34;: &#34;http://feeds.codenewbie.org/cnpodcast.xml&#34;,&#xA;    &#34;Linux For Everyone&#34;: &#34;https://feeds.fireside.fm/linuxforeveryone/rss&#34;,&#xA;    &#34;JavaScript Jabber&#34;: &#34;https://feeds.fireside.fm/javascriptjabber/rss&#34;&#xA;}&#xA;```&#xA;&#xA;The process requires a manual test to validate a given RSS feed, as not all feeds are generated the same way, and thus there are a few exceptions that need to be sorted out manually. For example, the Python Podcast doesn&#39;t include a cover image in its RSS feed, so there needs to be a check for it in the script and also in the template to skip parsing and displaying the cover image link. &#xA;&#xA;### Episode Categories&#xA;&#xA;This is also a JSON file, holding the category as the key and a list of podcast names as the value (strictly the name from `feed[&#39;feed&#39;][&#39;title&#39;]`). A human decision is needed to place a podcast into a specific category. 
&#xA;&#xA;```json&#xA;{&#xA;   &#34;Python&#34;:[&#xA;      &#34;Talk Python To Me&#34;,&#xA;      &#34;The Python Podcast.__init__&#34;,&#xA;      &#34;The Real Python Podcast&#34;,&#xA;      &#34;Python Bytes&#34;&#xA;   ],&#xA;   &#34;Javascript&#34;:[&#xA;      &#34;Full Stack Radio&#34;,&#xA;      &#34;JavaScript Jabber&#34;&#xA;   ],&#xA;   &#34;Linux&#34;:[&#xA;      &#34;Command Line Heroes&#34;,&#xA;      &#34;LINUX Unplugged&#34;,&#xA;      &#34;The Linux Cast&#34;,&#xA;      &#34;Linux For Everyone&#34;&#xA;   ],&#xA;   &#34;Data Science&#34;:[&#xA;      &#34;DataFramed&#34;,&#xA;      &#34;Data Skeptic&#34;,&#xA;      &#34;The Banana Data Podcast&#34;&#xA;   ],&#xA;   &#34;Dev Ops&#34;:[&#xA;      &#34;DevOps Cafe Podcast&#34;,&#xA;      &#34;Arrested DevOps&#34;,&#xA;      &#34;Pulling the Strings&#34;,&#xA;      &#34;Azure DevOps Podcast&#34;,&#xA;      &#34;DevOps and Docker Talk&#34;&#xA;   ]&#xA;}&#xA;```&#xA;&#xA;   Though the JSON file is managed manually, the generation of the categories is automated. Please feel free to add other categories of your choice. &#xA;&#xA;## What&#39;s Coming?&#xA;&#xA;Certain features are coming for sure, like adding podcasts using a form, adding more podcasts, and more categories. What looks a bit cloudy in my opinion is adding accessibility links to music players, because the RSS feed doesn&#39;t contain direct links to them. I still have to explore and find out if they can be obtained from the feed itself. &#xA;&#xA;- Search box for searching podcasts&#xA;- Accessible links to other platforms (Spotify, iTunes, etc.)&#xA;- More depth in categories (Languages/Frameworks/Niche-specific podcasts)&#xA;&#xA;I&#39;ll add these features after checking the feasibility of the ideas and the response from the community after releasing them.&#xA;&#xA;## Final Words&#xA;&#xA;This project wouldn&#39;t have existed without this Hackathon, as it gave a deadline to finish and the hope to win something. 
Special thanks to Hashnode and Netlify for organizing such a great opportunity in the form of a hackathon, and also to the maintainers of Python libraries like feedparser and jinja2. The project would have been impossible without them. &#xA;&#xA;If you like the project, please give it a star on [GitHub](https://github.com/Mr-Destructive/podevcast). Have any feedback? Please let me know in the comments or on [Twitter](https://twitter.com/MeetGor21).  Thank you for reading, hope you have a good time using Podevcast. Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>Django Basics: CRUD API</title>
      <link>https://www.meetgor.com/posts/django-basics-crud-api</link>
      <description>Introduction After understanding the admin section and setting up a database, we can finally move on to the CRUD API in Django. We will create a API in pure Dja</description>
      <pubDate>Thu, 03 Feb 2022 00:00:00 UTC</pubDate>
      <content>## Introduction&#xA;&#xA;After understanding the admin section and setting up a database, we can finally move on to the CRUD API in Django. We will create a API in pure Django with Model Form, Class Based Views and templates. We will go in two parts in this section, first revising the workflow of the app creation and setup, the next part all about CRUD API. We&#39;ll dive into a lot of familiar topics and concepts from previous parts but there are is still a lot to learn beyond the basics. &#xA;&#xA;The following is the demonstration of the CRUD API we will be making in this section:&#xA;&#xA;![CRUD API- demonstration](https://res.cloudinary.com/dgpxbrwoz/image/upload/v1643985336/blogmedia/hoxf3u9a872uvcbyehov.gif)&#xA;&#xA;## Quickly Setting up an app&#xA;&#xA;We have seen how to create an app in the [folder structure part](https://mr-destructive.github.io/techstructive-blog/django/python/web-development/2021/11/24/Django-Basics-P3.html) of the series, after that we have created all the components namely the views, urls, templates, etc in distinct parts. In this article, we will create an app together in a single part. We&#39;ll discuss all the process in short because its important to get the workflow of Django to ease the actual coding process. Let&#39;s dive in the part one of this section of creating a CRUD API i.e. to set up an app.&#xA;&#xA;### Creating App&#xA;&#xA;To quickly set up an application, we need to execute an command so that python under the hood creates a folder for an app in the current Django project. &#xA;&#xA;```&#xA;python manage.py startapp api &#xA;```&#xA;&#xA;   This will set up an folder `api` in the base directory of current django project. 
Now, we&#39;ll need to configure the Django settings for the project to pick up this app while running the server, making any migrations, or any other project-level process.&#xA;&#xA;### Registering the App-name in settings&#xA;&#xA;Simply append the name of the app as a string to the `INSTALLED_APPS` list inside the `settings.py` file. &#xA;&#xA;```python&#xA;# project_name / settings.py&#xA;&#xA;# Application definition&#xA;&#xA;INSTALLED_APPS = [&#xA;    &#39;django.contrib.admin&#39;,&#xA;    &#39;django.contrib.auth&#39;,&#xA;    &#39;django.contrib.contenttypes&#39;,&#xA;    &#39;django.contrib.sessions&#39;,&#xA;    &#39;django.contrib.messages&#39;,&#xA;    &#39;django.contrib.staticfiles&#39;,&#xA;&#xA;    &#39;api&#39;,&#xA;]&#xA;```&#xA;&#xA;### Setting up the URLS&#xA;&#xA;We need to create a `urls.py` file for the `api` app and link it in the main project&#39;s URLs. &#xA;&#xA;```python&#xA;# app-name / urls.py&#xA;&#xA;from django.urls import path&#xA;&#xA;urlpatterns = [&#xA;&#xA;]&#xA;```&#xA;   We&#39;ll keep the URL patterns empty for now; after setting up and configuring the project and the application, we&#39;ll map the routes to the views in this app. &#xA;&#xA;Next, we need to update the `urls.py` file in the project folder to include the `api` routes/urls. I have mounted it at the `api/` route; it could be anything as per your application design. &#xA;&#xA;```python   &#xA;# project_name / urls.py&#xA;&#xA;from django.contrib import admin&#xA;from django.urls import path, include&#xA;&#xA;urlpatterns = [&#xA;    path(&#39;admin/&#39;, admin.site.urls),&#xA;    path(&#39;api/&#39;, include(&#39;api.urls&#39;)),&#xA;]&#xA;```&#xA;After configuring the URLs, we need to set up the templates and static files.&#xA;&#xA;### Setting up Templates and Static files&#xA;&#xA;To set up the templates, we need to configure the `settings.py` file to look for the templates in a specific directory. 
Below is the snippet to change the default configuration.&#xA;&#xA;```python&#xA;# project_name / settings.py&#xA;&#xA;import os&#xA;&#xA;TEMPLATES = [&#xA;    {&#xA;        &#39;BACKEND&#39;: &#39;django.template.backends.django.DjangoTemplates&#39;,&#xA;        &#39;DIRS&#39;: [os.path.join(BASE_DIR, &#39;templates&#39;),],&#xA;        &#39;APP_DIRS&#39;: True,&#xA;        &#39;OPTIONS&#39;: {&#xA;            &#39;context_processors&#39;: [&#xA;                &#39;django.template.context_processors.debug&#39;,&#xA;                &#39;django.template.context_processors.request&#39;,&#xA;                &#39;django.contrib.auth.context_processors.auth&#39;,&#xA;                &#39;django.contrib.messages.context_processors.messages&#39;,&#xA;            ],&#xA;        },&#xA;    },&#xA;]&#xA;```&#xA;&#xA;So, you can now create your templates (HTML documents) inside the `templates` folder after creating the folder in the base project directory. &#xA;&#xA;To configure static files, we also need to make modifications to the `settings.py` file as follows:&#xA;&#xA;```python&#xA;# project_name / settings.py&#xA;# import os&#xA;&#xA;STATIC_URL = &#39;/static/&#39;&#xA;STATICFILES_DIRS = [os.path.join(BASE_DIR, &#34;static&#34;)]&#xA;STATIC_ROOT = os.path.join(BASE_DIR, &#39;staticfiles&#39;)&#xA;```&#xA;   Here, we are configuring the static files (CSS, JS, assets) in the `static` folder. This can be any folder you like, but the name in the settings should then be changed accordingly. Similar to the templates folder, the static folder is also located in the root directory of the project.&#xA;&#xA;This is all the necessary configuration for a simple full-stack application; we still have to configure the backend and write the actual logic for the application. We&#39;ll move on to the second part of this section, i.e. 
creating the CRUD API.&#xA;&#xA;## Making a CRUD API&#xA;&#xA;There are certain things to be planned before we move ahead, like the choice of database, the schema of the database, and basically the structure of the backend. For now, we&#39;ll go with an SQL database: a REST-style API backed by PostgreSQL hosted locally on the system. &#xA;&#xA;### Configure the database &#xA;&#xA;To configure the database, we simply need to first create the database. This can be done by using a CLI or the admin interface of the particular database we are working with; in our case, it&#39;s PostgreSQL. Postgres comes with pgAdmin to create and manage the databases and the server locally. The detailed explanation of [database creation](https://mr-destructive.github.io/techstructive-blog/django/python/web-development/2022/01/16/Django-Basics-P8.html) is covered in the previous parts of the series. &#xA;&#xA;After creating the database locally, you need to tweak the database configuration object in the `settings.py` file as per the credentials of your database instance.&#xA;&#xA;```python&#xA;# project_name / settings.py&#xA;&#xA;DATABASES = {&#xA;    &#39;default&#39;: {&#xA;        &#39;ENGINE&#39;: &#39;django.db.backends.postgresql&#39;,&#xA;        &#39;NAME&#39;: &#39;blogapp&#39;,&#xA;        &#39;USER&#39;: &#39;postgres&#39;,&#xA;        &#39;PASSWORD&#39;: &#39;@1234567&#39;,&#xA;        &#39;HOST&#39;: &#39;localhost&#39;,&#xA;        &#39;PORT&#39;: &#39;5432&#39;,&#xA;    }&#xA;}&#xA;```&#xA;   If you have already created the database, there is no problem in reusing the same one, but for production-level applications, make sure to keep things separate. Also, the model name has to be distinct in each app to create tables in the database.&#xA;&#xA;### Creating Model&#xA;&#xA;We can now finally move on to the logic of the application; this is the part which acts as the backbone of backend development in Django. You need to design the model carefully and programmatically. 
Make sure to include as much of the logic as you can inside the models, as that is a standard and good practice in professional projects. &#xA;&#xA;For this app, we simply are going to create a blog post model. We are going to have some simple attributes and structure.&#xA;&#xA;```python&#xA;# app_name / models.py&#xA;&#xA;from django.db import models&#xA;from django.contrib.auth.models import User&#xA;&#xA;class Article(models.Model):&#xA;    title = models.CharField(max_length=127, verbose_name=&#34;heading&#34;)&#xA;    post = models.TextField(verbose_name=&#39;content&#39;)&#xA;    author = models.ForeignKey(User, on_delete=models.CASCADE)&#xA;    created = models.DateTimeField(auto_now_add=True)&#xA;    updated = models.DateTimeField(auto_now=True)&#xA;&#xA;    def __str__(self):&#xA;        return self.title&#xA;&#xA;    class Meta:&#xA;        verbose_name_plural = &#39;Articles&#39;&#xA;```   &#xA;This is again a simple model for demonstrating a lot of things like the `str` function, the `Meta` class, `verbose_names`, and so on. You can design models as per your requirements and proper planning. It really determines the structure of an application.&#xA;&#xA;Now, after we have created the models, we need to migrate the schema into the database. Remember, a migration acts as a checkpoint in the history of changes to the database. &#xA;&#xA;```&#xA;python manage.py makemigrations&#xA;```&#xA;This will create a checkpoint in the `migrations` folder in the app. &#xA;&#xA;To apply the changes to the database, we use the migrate command.&#xA;&#xA;```&#xA;python manage.py migrate&#xA;```&#xA;&#xA;Here&#39;s a quick demonstration of everything covered so far.&#xA;&#xA;![app setup and config](https://res.cloudinary.com/dgpxbrwoz/image/upload/v1643648979/blogmedia/xnqabmxtenajntqepqo3.gif)&#xA;&#xA;### Creating Form&#xA;&#xA;Django has functionality to convert the fields in a model into a form which can be used for input on the frontend side. 
This allows us to just configure the form, map it to a view for the specific operation, and add a tiny snippet of HTML to make a basic API; Django handles the entire heavy lifting.&#xA;&#xA;[Model Forms](https://docs.djangoproject.com/en/4.0/topics/forms/modelforms/#modelform) are the type of forms in Django that provide the above-mentioned functionality. We need to create a form class in a new file inside the app called `forms.py` and add the following content:&#xA;&#xA;```python&#xA;# app_name / forms.py&#xA;&#xA;from django import forms&#xA;from .models import Article&#xA;&#xA;class ArticleForm(forms.ModelForm):&#xA;    class Meta:&#xA;        model = Article&#xA;        fields = [&#xA;            &#39;title&#39;,&#xA;            &#39;post&#39;,&#xA;        ]&#xA;```&#xA;   In this snippet, we are creating a form called `ArticleForm`; the name can be anything, but (ModelNameForm) is a standard way to name a Model Form. It is a child class of `ModelForm`, i.e. it inherits the parent&#39;s attributes/properties and methods. So we have access to its Meta class, which defines some of the attributes like model, fields, etc. &#xA;&#xA;   Here, we are only adding `title` and `post`, because `created` and `updated` are automatically added. So what about `author`? Well, we can add it automatically by processing the request that will be sent when the form is submitted (we&#39;ll discuss this while creating views). &#xA;&#xA;   In a model form, we also have other attributes like `exclude`, which is the opposite of `fields`: say you have a lot of attributes in a model and you want to skip certain fields, then you can use exclude instead of fields to specify only the attributes to be excluded. 
Also, `widgets` can be used to style and process the fields in the form, giving more control over how the form is validated and presented.&#xA;&#xA;### Updating Models&#xA;&#xA;Now that we have chosen to use a Model Form, we need to update the model logic (not the structure, so no migrations are needed). &#xA;&#xA;We&#39;ll have to add some methods and redefine certain default parameters to be able to use the class-based views and Model Forms to their full strength. &#xA;&#xA;```python&#xA;# app_name / models.py&#xA;&#xA;from django.db import models&#xA;from django.contrib.auth.models import User&#xA;import json&#xA;&#xA;class ArticleQuerySet(models.QuerySet):&#xA;    def serialize(self):&#xA;        list_value = list(self.values(&#34;id&#34;,&#34;author&#34;,&#34;title&#34;,&#34;post&#34;))&#xA;        return json.dumps(list_value)&#xA;&#xA;class ArticleManager(models.Manager):&#xA;    def get_queryset(self):&#xA;        return ArticleQuerySet(self.model,using=self._db)&#xA;&#xA;class Article(models.Model):&#xA;    title = models.CharField(max_length=127, verbose_name=&#34;heading&#34;)&#xA;    post = models.TextField(verbose_name=&#39;content&#39;)&#xA;    author = models.ForeignKey(User, on_delete=models.CASCADE)&#xA;    created = models.DateTimeField(auto_now_add=True)&#xA;    updated = models.DateTimeField(auto_now=True)&#xA;&#xA;    objects = ArticleManager() &#xA;&#xA;    def __str__(self):&#xA;        return self.title&#xA;&#xA;    def serialize(self):&#xA;        data = {&#xA;            &#34;id&#34;: self.id,&#xA;            &#34;title&#34;: self.title,&#xA;            &#34;author&#34;: self.author.id,&#xA;            &#34;post&#34;: self.post,&#xA;        }&#xA;        data = json.dumps(data)&#xA;        return data&#xA;&#xA;    class Meta:&#xA;        verbose_name_plural = &#39;Articles&#39;&#xA;```&#xA;   Let&#39;s break down what was added:&#xA;&#xA;- Created two new classes (`ArticleManager` and 
`ArticleQuerySet`)&#xA;  This was done to manage and serialize the model attributes. We need a standard like JSON to pass the data between the client and the server and vice versa. The `ArticleQuerySet` actually converts the `QuerySet` into a list and dumps it into a JSON string. **A QuerySet in Django represents a database query and the collection of objects it returns.** &#xA;- Function serialize&#xA;  The serialize function converts the `QuerySet` into a JSON object and serves as a utility function for the model. **It returns a serialized JSON object built from a queryset.**&#xA;- Editing the manager for the model&#xA;  The `objects` manager for the model is re-initialized as an instance of the `ArticleManager` class, which manages and processes the queryset and returns the serialized object.&#xA;&#xA;  That&#39;s all for the models of the app, and now we finally move on to the views.&#xA;&#xA;### Creating Views&#xA;&#xA;Now, we can start writing the views to actually add the CRUD functionality ourselves and slowly develop the frontend as well. &#xA;&#xA;To start creating views, we need to import some built-in classes used for making APIs, like the Django generic views. Some of the core class-based views for making a CRUD API include:&#xA;1. [CreateView](https://docs.djangoproject.com/en/4.0/ref/class-based-views/generic-editing/#createview)&#xA;2. [UpdateView](https://docs.djangoproject.com/en/4.0/ref/class-based-views/generic-editing/#updateview)&#xA;3. [DeleteView](https://docs.djangoproject.com/en/4.0/ref/class-based-views/generic-editing/#deleteview)&#xA;4. [DetailView](https://docs.djangoproject.com/en/4.0/ref/class-based-views/generic-display/#detailview) &#xA;5. [ListView](https://docs.djangoproject.com/en/4.0/ref/class-based-views/generic-display/#listview)&#xA;&#xA;These views make building the API in Django easy. 
We simply need to add the template name to customize the layout, make forms, and define which fields to input from the client and which fields the server side should process.&#xA;&#xA;Let&#39;s create them one by one and understand the structure of class-based views.&#xA;&#xA;#### Create View&#xA;&#xA;This view, as the name suggests, is used for creating an entry in a model (database); we can also say it acts as the `Create` (POST request) part of the CRUD API.&#xA;&#xA;```python&#xA;# app_name / views.py&#xA;&#xA;from django.views.generic.edit import ( &#xA;    CreateView, &#xA;    UpdateView, &#xA;    DeleteView )&#xA;&#xA;from .models import Article&#xA;from .forms import ArticleForm&#xA;&#xA;class ArticleCreateView(CreateView):&#xA;    model = Article&#xA;    form_class = ArticleForm&#xA;    success_url = &#39;/api/&#39;&#xA;&#xA;    def form_valid(self, form):&#xA;        form.instance.author = self.request.user&#xA;        return super(ArticleCreateView, self).form_valid(form)&#xA;```&#xA;&#xA;Here, we have created a class-based view called `ArticleCreateView` which inherits the `CreateView` class from the `django.views.generic.edit` module. Similar to the `ArticleForm` class, we have certain attributes to set, like the model, the form class, and the success URL. &#xA;- The `model` simply indicates which table or model we are creating the view for.&#xA;- The `form_class` denotes the ModelForm class we are using for the Create view.&#xA;- The `success_url` is the URL route to redirect to after a successful POST request/form submission.&#xA;&#xA;Remember we discussed adding the `author` field automatically from the form&#39;s request. This process is carried out in the `form_valid` function. [Form_valid](https://docs.djangoproject.com/en/4.0/ref/class-based-views/mixins-editing/#django.views.generic.edit.FormMixin.form_valid) is a built-in helper function that saves the form and redirects to the success_url when valid form data is posted. 
Before actually doing that, we add the author field by setting it as the User by accessing the `self.request` object. The `self.request` object actually holds the meta-data about the request that is being sent to the API, so we can access the User who is currently logged in. &#xA;&#xA;Though we don&#39;t have a login system, we can assume the user is logged in with some user account. Now, to handle certain cases we can add [Mixins](https://docs.djangoproject.com/en/4.0/topics/auth/default/#django.contrib.auth.mixins.AccessMixin). We can use [LoginRequiredMixin](https://docs.djangoproject.com/en/4.0/topics/auth/default/#django.contrib.auth.mixins.LoginRequiredMixin) to only allow form submission for logged-in users, and so on. &#xA;&#xA;#### Update View&#xA;&#xA;```python&#xA;# app_name / views.py&#xA;&#xA;class ArticleUpdateView(UpdateView):&#xA;    model = Article&#xA;    form_class = ArticleForm&#xA;    success_url = &#39;/api/&#39;&#xA;&#xA;    def form_valid(self, form):&#xA;        form.instance.author = self.request.user&#xA;        return super(ArticleUpdateView, self).form_valid(form)&#xA;```&#xA;   This is quite similar to the `CreateView`, except we have to use `UpdateView` as the base class of `ArticleUpdateView`. The rest of the attributes remain the same, and form behaviour like pre-rendering the fields to edit is all managed by the `UpdateView` out of the box.&#xA;&#xA;#### Delete View&#xA;&#xA;```python&#xA;# app_name / views.py&#xA;&#xA;class ArticleDeleteView(DeleteView):&#xA;    model = Article&#xA;    form_class = ArticleForm&#xA;    success_url = &#39;/api/&#39;&#xA;&#xA;    def form_valid(self, form):&#xA;        form.instance.author = self.request.user&#xA;        return super(ArticleDeleteView, self).form_valid(form)&#xA;```&#xA;&#xA;Again, here we do not have to change anything as far as the attributes and the `form_valid` function are concerned. 
We&#39;ll see the details when we look at the templates.&#xA;&#xA;#### List View&#xA;&#xA;```python&#xA;#app_name / views.py&#xA;&#xA;class ArticleView(ListView):&#xA;    model = Article&#xA;    template_name = &#39;api/list.html&#39;&#xA;&#xA;    def get_context_data(self, **kwargs):&#xA;        context = super().get_context_data(**kwargs)&#xA;        context[&#39;articles&#39;] = Article.objects.filter(author=self.request.user)&#xA;        return context&#xA;```&#xA;   This view is for listing all the objects/articles for the currently logged-in author. If we want something like a homepage for all the articles, it is as simple as replacing&#xA;&#xA;```python&#xA;context[&#39;articles&#39;] = Article.objects.filter(author=self.request.user)&#xA;```&#xA;by&#xA;```python&#xA;context[&#39;articles&#39;] = Article.objects.all()&#xA;```&#xA;   You can even apply filters, e.g. fetching the latest posts with `Article.objects.filter().order_by(&#34;-created&#34;)`. You get the idea. It boils down to simple Python and library functions.&#xA;   Also, we have a new attribute `template_name` which allows us to use the data in our custom template. In this case we will create the template in the `templates/api/` folder called `list.html`. The function `get_context_data` is used to fetch the objects from the database and return them in a dictionary called `context`, which can be rendered in the templates by key; in this case, the `articles` key will give us all the articles by the current user. &#xA;&#xA;#### Detail View&#xA;&#xA;```python&#xA;# app_name / views.py&#xA;&#xA;class ArticleDetailView(DetailView):&#xA;    model = Article&#xA;    template_name = &#39;api/post.html&#39;&#xA;```&#xA;   As opposed to the ListView, we have the DetailView that renders the details of a particular article. Here we don&#39;t have to write the `get_context_data` function as the default object for accessing the model data is `object`. 
So, we simply need to specify the `template_name` and the `model`. Here, the template is `api/post.html` in the templates folder.&#xA;&#xA;### Mapping the URLS&#xA;&#xA;After completing the views, we can map them to URLs to access them as endpoints in the api app. Let&#39;s build upon the `urls.py` file that we created with no URL paths.&#xA;&#xA;```python&#xA;# app_name / urls.py&#xA;&#xA;from django.urls import path&#xA;from .views import ( &#xA;        ArticleView, ArticleDetailView, ArticleCreateView, &#xA;        ArticleUpdateView, ArticleDeleteView&#xA;        )&#xA;&#xA;urlpatterns = [&#xA;        path(&#39;&#39;, ArticleView.as_view(), name=&#34;listpost&#34;),&#xA;        path(&#39;article/&lt;pk&gt;/&#39;, ArticleDetailView.as_view(), name=&#34;detailpost&#34;),&#xA;        path(&#39;create/&#39;, ArticleCreateView.as_view(), name=&#34;createpost&#34;),&#xA;        path(&#39;update/&lt;pk&gt;/&#39;, ArticleUpdateView.as_view(), name=&#34;updatepost&#34;),&#xA;        path(&#39;delete/&lt;pk&gt;/&#39;, ArticleDeleteView.as_view(), name=&#34;deletepost&#34;),&#xA;]&#xA;```&#xA;   So, we can see the views are imported from `.views` and are mapped to particular routes. The `.as_view()` method turns a class-based view into a callable that takes a request and returns the processed response. The `&lt;pk&gt;` is the primary key (id) for accessing a particular object. We use the id for the detail, update and delete views. &#xA;&#xA;### Creating Templates and Static files&#xA;&#xA;We now have to join the final piece of the puzzle, i.e. creating templates. It&#39;s quite simple but requires a bit of logic to connect things together. The backend is handled flawlessly by the ModelForm and generic views; we simply need to put simple HTML and the Django templating language to correct use. 
&#xA;&#xA;```html&#xA;# templates / index.html&#xA;&#xA;&lt;!DOCTYPE html&gt;&#xA;&lt;html lang=&#34;en&#34;&gt;&#xA;&lt;head&gt;&#xA;    &lt;meta charset=&#34;UTF-8&#34;&gt;&#xA;    &lt;title&gt;Django Blog&lt;/title&gt;&#xA;    {% load static %}&#xA;    &lt;link rel=&#34;stylesheet&#34; href=&#34;{% static &#39;css/style.css&#39; %}&#34;&gt;&#xA;&lt;/head&gt;&#xA;&lt;body&gt;&#xA;    &lt;h1&gt;Articles&lt;/h1&gt;&#xA;   {% block body %}&#xA;   {% endblock %}&#xA;&lt;/body&gt;&#xA;&lt;/html&gt;&#xA;```&#xA;   A simple HTML doc with a link to a stylesheet. We also have a block so we can inherit this as the base template.&#xA;&#xA;```css&#xA;# static / css / style.css&#xA;&#xA;body &#xA;{&#xA;    background-color:#1d1dff;&#xA;    color:white;&#xA;}&#xA;&#xA;h1&#xA;{&#xA;    text-align:center;&#xA;    font-family: monospace;&#xA;}&#xA;a{&#xA;    text-decoration-color: #00ffff;&#xA;    color: #ff6600;&#xA;}&#xA;&#xA;p&#xA;{&#xA;    color:#ff6600;&#xA;    font-weight:500;&#xA;}&#xA;&#xA;ul&#xA;{&#xA;    list-style-type:square;&#xA;}&#xA;```&#xA;&#xA;The below template is a detail view page that renders the details of the article. We are using the `object` key from the `context` dictionary, provided by default as mentioned earlier. We are also embedding the `Update` and `Delete` buttons to manage the Article. &#xA;&#xA;This is a good time to explain dynamic URLs in Django. We have used `{% url &#39;updatepost&#39; object.id %}` to create a dynamic URL that will redirect to the `updatepost` URL and pass with it the id of the object. This is important as the URL itself takes the `&lt;pk&gt;` value to update the post. So we just pass the id like a parameter to the URL. This is the way we create dynamic URLs in Django templates, which is quite similar to static file URLs. 
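
Conceptually, the `{% url %}` tag just looks up a named route and substitutes the arguments into its pattern. Here is a hedged plain-Python sketch of that idea (illustrative only; the `ROUTES` dict and `url` helper are hypothetical, not Django's actual implementation):

```python
# Hypothetical route table mirroring the names from urls.py (illustration only).
ROUTES = {
    "listpost": "/api/",
    "detailpost": "/api/article/{pk}/",
    "updatepost": "/api/update/{pk}/",
    "deletepost": "/api/delete/{pk}/",
}

def url(name, **kwargs):
    # Look up the named pattern and fill in the captured parameters,
    # roughly what {% url 'updatepost' object.id %} does in a template.
    return ROUTES[name].format(**kwargs)

print(url("updatepost", pk=7))  # /api/update/7/
```

Django's real implementation goes through `django.urls.reverse`, but the lookup-by-name idea is the same.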
&#xA;&#xA;```html&#xA;# templates / app_name / post.html&#xA;&#xA;{% extends &#39;index.html&#39; %}&#xA;&#xA;{% block body %}&#xA;    &lt;h2&gt;{{ object.title }}&lt;/h2&gt;&#xA;    &lt;p&gt;{{ object.post }}&lt;/p&gt;&#xA;&#xA;    &lt;button type=&#34;button&#34; onclick=&#34;window.location.href=&#39;{% url &#39;updatepost&#39; object.id %}&#39;&#34;&gt;&#xA;    Update&#xA;    &lt;/button&gt;&#xA;&#xA;    &lt;button type=&#34;button&#34; onclick=&#34;window.location.href=&#39;{% url &#39;deletepost&#39; object.id %}&#39;&#34;&gt;&#xA;    Delete&#xA;    &lt;/button&gt;&#xA;&#xA;{% endblock %}&#xA;```&#xA;![CRUD API - Detail View](https://res.cloudinary.com/dgpxbrwoz/image/upload/v1643911468/blogmedia/n6z33yu4nq9tjsr6nba2.png)&#xA;&#xA;We have used the dynamic URLs in the Update and Delete buttons. &#xA;&#xA;The below template is for creating a home-page-like structure, but for the current logged-in user. We are displaying all the articles of the logged-in user in the form of a list with a for loop, as discussed in the templating tutorial part. We have access to the key `articles` from the dictionary `context`, and hence we iterate over that object and display the title with the URL for the post detail view.&#xA;&#xA;Here as well we are using a dynamic URL by passing the `article.id` as the `&lt;pk&gt;` primary key to the URL. 
&#xA;```html&#xA;# templates / app_name / list.html&#xA;&#xA;{% extends &#39;index.html&#39; %}&#xA;&#xA;{% block body %}&#xA;    &lt;ul&gt;&#xA;        {% for article in articles %}&#xA;        &lt;li&gt;&lt;a href=&#34;{% url &#39;detailpost&#39; article.id %}&#34;&gt;{{ article.title }}&lt;/a&gt;&lt;/li&gt;&#xA;        {% endfor %}&#xA;    &lt;/ul&gt;&#xA;{% endblock %}&#xA;```&#xA;![CRUD API - Home Page/List View](https://res.cloudinary.com/dgpxbrwoz/image/upload/v1643911630/blogmedia/lakpdkldqwopimhixxya.png)&#xA;&#xA;&#xA;```html&#xA;# templates / app_name / article_confirm_delete.html&#xA;&#xA;{% extends &#39;index.html&#39; %}&#xA;&#xA;{% block body %}&#xA;    &lt;form method=&#34;post&#34;&gt;&#xA;        {% csrf_token %}&#xA;        &lt;p&gt; Are you sure you want to delete the post:&lt;b&gt; {{ object.title }}&lt;/b&gt; ? &lt;/p&gt;&#xA;        &lt;input type=&#34;submit&#34; value=&#34;Delete&#34;/&gt;&#xA;    &lt;/form&gt;&#xA;{% endblock %}&#xA;```&#xA;![CRUD API - Delete View](https://res.cloudinary.com/dgpxbrwoz/image/upload/v1643911775/blogmedia/qvilf8p3qpj9qfdkpkn4.png)&#xA;&#xA;```html&#xA;# templates/ app_name / article_form.html&#xA;&#xA;&lt;form method=&#34;POST&#34; enctype=&#34;multipart/form-data&#34;&gt;&#xA;    {% csrf_token %}&#xA;    {{ form.as_p }}&#xA;    &lt;input type=&#34;submit&#34; /&gt;&#xA;&lt;/form&gt;&#xA;```&#xA;&#xA;   This is the template that forms the basis for all the forms, like Create and Update. We use the template variable `{{ form }}` to render the form in the template. Additionally we use `.as_p` to render the form fields as paragraph tags in HTML. The `{% csrf_token %}` is the Cross-Site Request Forgery (CSRF) token, allowing secure posting of a form. Don&#39;t worry about it much; it is there to verify that the form is actually submitted from our own site. The final element is the submit button used to submit the form.  
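
The CSRF idea mentioned above boils down to comparing a secret token embedded in the form with the one stored for the session. Here is a hedged conceptual sketch in plain Python (not Django's real middleware; the `session` dict and helper functions are made up for illustration):

```python
import secrets

# The server stores a random token for the session...
session = {"csrf_token": secrets.token_hex(16)}

def hidden_input_value(session):
    # ...and embeds it in the rendered form as a hidden input,
    # which is what {% csrf_token %} does in a template.
    return session["csrf_token"]

def accept_post(session, submitted_token):
    # A POST is accepted only if the submitted token matches the stored one;
    # compare_digest avoids timing side channels in the comparison.
    return secrets.compare_digest(session["csrf_token"], submitted_token)

token = hidden_input_value(session)
print(accept_post(session, token))           # True
print(accept_post(session, "forged-token"))  # False
```

A forged request from another site would not know the session's token, so its POST is rejected.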
&#xA;&#xA;![CRUD API - CREATE View](https://res.cloudinary.com/dgpxbrwoz/image/upload/v1643911374/blogmedia/xbwxtg3ry2x4oegtzxmi.png)&#xA;&#xA;![CRUD API - UPDATE View](https://res.cloudinary.com/dgpxbrwoz/image/upload/v1643911726/blogmedia/zeapgshsk9x9agsw4gsu.png)&#xA;&#xA;For any references, you can take a look at the [GitHub](https://github.com/Mr-Destructive/django-blog) repository for the CRUD API in pure Django.&#xA;&#xA;## Conclusion&#xA;&#xA;So, in this section, we were able to create a CRUD API in pure Django. This should be the end of the core series of Django Basics, but there are more parts still coming up which are not basics but are still worth learning as a beginner. We&#39;ll further explore DRF, Forms, CORS, etc. We&#39;ll see some other libraries specific to Django. Hopefully from this tutorial series you were able to learn something, and if you have any queries or issues please let me know. &#xA;&#xA;Thank you for reading; please provide feedback on how you felt about the series. Though the series is not over, it&#39;s sufficient for a beginner to get up to pace in Django. Hope you had a great time. Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>Django Basics: Admin Section</title>
      <link>https://www.meetgor.com/posts/django-basics-admin</link>
      <description>Introduction In the previous section, we configured the database in our Django project. We will move ahead and interact with the Admin Section in Django. Django</description>
      <pubDate>Wed, 26 Jan 2022 00:00:00 UTC</pubDate>
<content>## Introduction&#xA;&#xA;In the previous section, we configured the database in our Django project. We will move ahead and interact with the Admin Section in Django. Django has a built-in Admin Section which we can use to manage our apps and models. We will create an admin account (superuser) and try to render our custom Model in the Admin Interface. We shall further discuss the customization in the Admin section as per our requirements and necessity. &#xA;&#xA;## What is the Admin Section?&#xA;&#xA;Admin Section is a powerful built-in utility provided by Django. It gives administrative rights over the web project; the interface is neat and provides out-of-the-box functionality to interact with the models in our project without us manually creating or mapping any views and URLs. It is restricted to superusers or trusted users as it is for administrative purposes. &#xA;&#xA;The Admin section is present by default for any Django application. The interface provides the Django User and Group Model by default. Additionally we can have our own custom models to interact with. For every registered model you have the CRUD (create / read / update / delete) functionality, which makes it very easy and convenient to test the working of a model before working with APIs or moving ahead in the project.  &#xA;&#xA;## Setting up an admin account (superuser)&#xA;&#xA;In order to access the Admin section, we need to create a superuser. A superuser, as the name suggests, is a user who has supreme authority for performing operations in the project, in this case a web application. To create a superuser we need to run a command from the command line that takes our username, email and password as input. 
&#xA;&#xA;```bash&#xA;python manage.py createsuperuser&#xA;```&#xA;&#xA;This will prompt you for a couple of things like:&#xA;- `username`, which defaults to your system username.&#xA;- `email`, which is optional.&#xA;- `password`, which should be at least eight characters long.&#xA;&#xA;The password input will be silent, which means you cannot see what you type, for security reasons; the password is also confirmed once, so you&#39;ll have to enter it a second time. But that&#39;s all you have to do to create a superuser in Django for your web project.&#xA;&#xA;![createsuperuser demo](http://res.cloudinary.com/dgpxbrwoz/image/upload/v1643198415/blogmedia/etubc7efwls47n3cj2bw.gif)&#xA;&#xA;## Navigating the admin section &#xA;&#xA;After creating a superuser, we can now navigate the admin section from the browser. The admin section is by default located at the `/admin` url-route, i.e. you need to navigate to `http://127.0.0.1:8000/admin`; here `8000` can be replaced by your preferred port number for the Django application. &#xA;&#xA;After visiting the Admin route, you will be presented with a login screen. You simply need to enter the username and password which you set while creating the superuser a while ago, and you should be in the Admin Section. The default admin section as of Django `3.2.9` looks like the following: &#xA;&#xA;![Django Admin Section](http://res.cloudinary.com/dgpxbrwoz/image/upload/v1643199349/blogmedia/h0k9jkqxozrtxvwsapkj.png)&#xA;&#xA;## Components of the Admin Section&#xA;&#xA;The Admin interface looks quite simple but is quite powerful and customizable. We have the default Models in Django which are labelled in the `Authentication and Authorization` section, namely `Users` and `Groups`. You can see we have the `+ Add` and `Change` links to actually create the data associated with those Models. 
In the Admin Section you can basically play around with your models; it was not designed to act as the frontend for your application. That&#39;s what the Django [documentation](https://docs.djangoproject.com/en/4.0/ref/contrib/admin/#module-django.contrib.admin) says, and it is absolutely correct. &#xA;&#xA;So, we don&#39;t have many things to explore as far as the UI is concerned, as it is simple and straightforward to understand. We&#39;ll dive into how to register our models into the Admin section and from there on we can explore the UI to perform CRUD operations.&#xA;&#xA;### Built-in Models&#xA;&#xA;Django has two built-in and registered models in the Admin Section, as said earlier.&#xA;&#xA;1. Users&#xA;2. Groups&#xA;&#xA;Users is basically Django&#39;s User Model, which provides the basic authorization functionality that can further be added to the web application. The superuser that was created from the `createsuperuser` command was associated with the Django User model. &#xA;&#xA;We have basic fields in the User model like:&#xA;&#xA;1. Username&#xA;2. Email-ID&#xA;3. Password&#xA;&#xA;If we go to the route `http://127.0.0.1:8000/admin/auth/user/add/`, we can see a form-like UI that allows us to add a User.&#xA;&#xA;![User Creation Form - Admin](http://res.cloudinary.com/dgpxbrwoz/image/upload/v1643208170/blogmedia/m3fdul2qcmgvgegm5r6y.png)&#xA;&#xA;But there is no option for the `Email-ID`; that&#39;s because the creation form has been trimmed down from the base [User Model](https://docs.djangoproject.com/en/4.0/ref/contrib/auth/#user-model), and we can see the `superuser` has all the attributes the `User` class has, like the email, first name, last name and so on. &#xA;&#xA;![Super User Model - Admin](http://res.cloudinary.com/dgpxbrwoz/image/upload/v1643208828/blogmedia/wrdfkiqaqmw4wxtpopgn.png)&#xA;&#xA;We can even extend this functionality or modify the existing attributes of the User Model in our custom Model. 
For example, we can add Age, Phone number, etc. into our custom User Model. How? We&#39;ll look into that later, but that is to give an idea about the User Model.&#xA;&#xA;There is a lot more than just the details like Username, email and password. We need a way to manage which user is allowed to access what components. This is termed `Permissions` in the User model; for example, the superuser might have access to the admin page but a regular User might not. This is a permission which is by default implemented by Django in the User model. Likewise we can extend this functionality to add more permissions depending on the Model we are working with. &#xA;&#xA;### Groups &#xA;&#xA;This is the model which can hold a group of Users along with a certain set of permissions. The typical example here is a group of Users and their permissions. We can have a group for the developers of a project (this project) and a separate Group for the rest of the Users. This creates a well-defined boundary between different types of Users in a larger application.  &#xA;&#xA;Currently, we don&#39;t have any groups created by default, since it is a concept mostly needed for quite large projects with thousands of Users.&#xA;&#xA;## Registering Models in Admin Section&#xA;&#xA;Now that we know what the default Admin section looks like, we can move on to registering our own models in the Admin section. To do that, inside the app folder (a Django app) you will see an `admin.py` file. Make sure to be in the app in which you have created a model. We need to register a Model to the admin section. &#xA;&#xA;```python&#xA;# app_name/admin.py&#xA;&#xA;from django.contrib import admin&#xA;from .models import Article&#xA;&#xA;admin.site.register(Article)&#xA;```&#xA;The `admin.site.register` call basically adds a Model to the Admin Interface. 
&#xA;The Article Model is defined as follows:&#xA;&#xA;```python&#xA;# app_name/models.py&#xA;&#xA;from django.db import models&#xA;&#xA;class Article(models.Model):&#xA;    title = models.CharField(max_length=127)&#xA;    post = models.TextField()&#xA;    created = models.DateTimeField(auto_now_add=True)&#xA;    updated = models.DateTimeField(auto_now=True)&#xA;```&#xA;&#xA;This will add the Model `Article` in the Admin Section. We can now perform CRUD operations in the Admin section. &#xA;   &#xA;## Performing actions with Admin Section&#xA;&#xA;So, to perform CRUD operations, we can navigate to the Model Article and simply click the `Add Article` button to add an object of the model Article. We will be presented with a form to fill. Now here, we can see the fields which are actually to be input by the user. We don&#39;t see fields like `created` and `updated` as they are automatically set as per the current time. &#xA;&#xA;![Add Article - Admin](http://res.cloudinary.com/dgpxbrwoz/image/upload/v1643215768/blogmedia/bq0gvbxhhxzwiwutgqpi.png)&#xA;After filling this form you will see `Article object (1)`, which is not a meaningful name to look at. This is where the `__str__` [function](https://docs.djangoproject.com/en/4.0/ref/models/instances/) comes in handy and saves the day. &#xA;&#xA;![Article Object](http://res.cloudinary.com/dgpxbrwoz/image/upload/v1643215997/blogmedia/sy7ygnskxfl0psgaj4z4.png)&#xA;&#xA;Now, we can see we also have the ability to Delete and Update the post. This is the best thing about the Django admin interface: it&#39;s quite intuitive, allowing complex models to be tested before moving ahead, with necessary corrections made if required.&#xA;&#xA;### Modifying the Model (without migrations)&#xA;&#xA;Now, at this stage, if we forgot to add the `__str__` function, we need to add it into our models. But what about the migrations? We do not need to migrate this change as there is no change in how the model is structured. 
We are changing how the Admin Interface should present our model. So, we&#39;ll make changes to our model, but only aesthetically. &#xA;&#xA;```python&#xA;#app_name/models.py&#xA;&#xA;from django.db import models&#xA;&#xA;class Article(models.Model):&#xA;    title = models.CharField(max_length=127)&#xA;    post = models.TextField()&#xA;    created = models.DateTimeField(auto_now_add=True)&#xA;    updated = models.DateTimeField(auto_now=True)&#xA;&#xA;    def __str__(self):&#xA;        return self.title&#xA;```&#xA;&#xA;And now if we refresh the Admin page, we can see the title is the object name :)&#xA;&#xA;![Model str function](http://res.cloudinary.com/dgpxbrwoz/image/upload/v1643216721/blogmedia/vwyoccgmhnl4aosqc6qf.png)&#xA;&#xA;This looks like a trivial change but makes a big difference for large applications and datasets. &#xA;&#xA;### Verbose name&#xA;&#xA;We can add a verbose name for an attribute inside a Model. A verbose name is a human-readable name for a field. So, let&#39;s say we have an attribute/field called `fname`; the person who created the model might know it stands for `first name` but someone else might not. So in this case, we can add the `verbose_name` to be used as the name in the Admin Section. &#xA;&#xA;We also have `verbose_name_plural`, which will be handy for a model name. We do not want Django to just add an `s` after any Model name; it might look good for `Articles`, `Questions`, `Posts`, but `Quizs`, `Categorys`, `Heros` look far funnier than `Quizzes`, `Categories` and `Heroes` respectively. 
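
Django's documented default for `verbose_name_plural` really is that naive: the `verbose_name` with an `s` appended, no smarter rules. A tiny plain-Python sketch of that default behaviour (the helper function is hypothetical, just mimicking the rule):

```python
def default_verbose_name_plural(verbose_name):
    # Django's documented default: verbose_name + "s", nothing smarter.
    return verbose_name + "s"

print(default_verbose_name_plural("article"))   # articles
print(default_verbose_name_plural("category"))  # categorys
print(default_verbose_name_plural("quiz"))      # quizs
```

This is exactly why we set `verbose_name_plural` explicitly for names with irregular plurals.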
&#xA;&#xA;```python&#xA;# app_name/models.py&#xA;&#xA;from django.db import models&#xA;&#xA;class Article(models.Model):&#xA;    title = models.CharField(max_length=127, verbose_name=&#34;headline&#34;)&#xA;    post = models.TextField(verbose_name=&#39;content&#39;)&#xA;    created = models.DateTimeField(auto_now_add=True)&#xA;    updated = models.DateTimeField(auto_now=True)&#xA;&#xA;    def __str__(self):&#xA;        return self.title&#xA;&#xA;```&#xA;&#xA;![Verbose Name Attribute](https://res.cloudinary.com/dgpxbrwoz/image/upload/v1643219495/blogmedia/v5zphhohi27tvwsa3gsp.png)&#xA;&#xA;Again, we do not need to migrate any changes to the database as it is not a logical change in the schema of the database.&#xA;&#xA;```python &#xA;# app_name/models.py&#xA;&#xA;from django.db import models&#xA;&#xA;class Article(models.Model):&#xA;    title = models.CharField(max_length=127, verbose_name=&#34;title&#34;)&#xA;    post = models.TextField(verbose_name=&#39;content&#39;)&#xA;    created = models.DateTimeField(auto_now_add=True)&#xA;    updated = models.DateTimeField(auto_now=True)&#xA;&#xA;    def __str__(self):&#xA;        return self.title&#xA;&#xA;    class Meta:&#xA;        verbose_name_plural = &#39;Articless&#39;&#xA;```&#xA;&#xA;![Verbose Name Plural](https://res.cloudinary.com/dgpxbrwoz/image/upload/v1643219521/blogmedia/sf77v52ic9dheyvv0pvi.png)&#xA;&#xA;Though `Articless` sounds weird, it is just meant to show that Django by default adds an `s` to the Model name when representing it as a class in the Admin. &#xA;&#xA;### Admin Register Class&#xA;&#xA;We can even list not only the title but several fields in a particular format; for that, we need to define a class derived from the `admin.ModelAdmin` class. 
&#xA;&#xA;```python&#xA;# app_name/admin.py&#xA;&#xA;from django.contrib import admin&#xA;from .models import Article&#xA;&#xA;@admin.register(Article)&#xA;class ArticleAdmin(admin.ModelAdmin):&#xA;    list_display = [&#xA;            &#39;title&#39;,&#xA;            &#39;created&#39;,&#xA;            &#39;updated&#39;,&#xA;            ]&#xA;```&#xA;&#xA;![Admin-List](https://res.cloudinary.com/dgpxbrwoz/image/upload/v1643217326/blogmedia/y32jmboypbmzyypt68l1.png)&#xA;&#xA;We are using the `admin.ModelAdmin` class as the base class and overriding the `list_display` list, which will display the mentioned fields in the Admin Section as a list. We are also using `admin.register` as a class decorator to actually register the model in the Admin section; we name the class `ArticleAdmin` so it does not shadow the `Article` model imported above. &#xA;&#xA;Now, we can see a lot of customization is possible in the Admin section. This is just a glimpse of what customization can do; it can be tailored as per your model and needs. &#xA;&#xA;## Conclusion&#xA;&#xA;So, in this section we were able to interact with the Admin section in Django. We were able to register our custom Model in the Admin interface and then customize the format of how it is displayed in the interface. Hopefully, from this part we are able to interact with the data and test our application to plan ahead. In the next section we shall cover how to tie all of these together to create something meaningful and finally understand the Django development process. &#xA;&#xA;Thank you for reading. If you have any questions or improvements to suggest, please let me know in the comments. I&#39;ll be grateful if you provide feedback. Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>Django Basics: Creating Models</title>
      <link>https://www.meetgor.com/posts/django-basics-models</link>
      <description>Introduction We have seen the basics of Django templating in the previous parts of the series. Now, we can move on to the more backend stuff in Django which dea</description>
      <pubDate>Tue, 04 Jan 2022 00:00:00 UTC</pubDate>
<content>## Introduction&#xA;&#xA;We have seen the basics of Django templating in the previous parts of the series. Now, we can move on to the more backend stuff in Django which deals with databases, queries, the admin section, and so on. In this particular part, we&#39;ll cover the fundamental part of any application in Django, i.e. the `Model`. We&#39;ll understand what a model is, how to structure one, how to create relationships and add constraints on the fields, etc. &#xA;&#xA;## What are Models?&#xA;&#xA;A model is a Django-way (Pythonic) to structure a database for a given application. It is technically a class that acts as a table in a database, and inside the class, its properties act as the attributes (columns) of that table. It&#39;s that simple. Just a blueprint to create a table in a database; don&#39;t worry about what and where our database is. We will explore the database and its configuration in the next part. &#xA;&#xA;By creating a model, you don&#39;t have to write all the basic SQL queries like &#xA;&#xA;```sql&#xA;CREATE TABLE NAME(&#xA;attrb1_name type,&#xA;attrb2_name type,&#xA;.&#xA;.&#xA;.&#xA;);&#xA;```&#xA;&#xA;If your application is quite big or is complex in terms of the relations among the entities, writing SQL queries manually is a daunting task and also quite repetitive at times. So Django handles all the SQL crap out of the way for the programmer. So Models are just a Pythonic way to create a table for the project/application&#39;s database.&#xA;&#xA;## How to create a Model?&#xA;&#xA;Creating a model for an application is as easy as creating a class in Python. But hey! It&#39;s more than that, as there are other questions to address while designing the class. You need to design the database before defining the fields in the model.&#xA;&#xA;OK, well, it&#39;s not as straightforward as it seems, but it is enough for creating simple and dummy projects to start with. 
You can use certain tools like [lucidchart](https://www.lucidchart.com/pages/database-diagram/database-design-tool), [dbdiagrams.io](https://dbdiagram.io/home), and other tools you are comfortable with. It&#39;s important to visualize the database schema or the structure of the application before tinkering with the actual database inside the project. Let&#39;s not go too crazy and design a simple model to understand the process.&#xA;&#xA;Here&#39;s a basic model for a Blog:&#xA;&#xA;```python&#xA;#from django.db import models&#xA;from django.contrib.auth.models import User&#xA;&#xA;class Article(models.Model):&#xA;    title = models.CharField(max_length=255)&#xA;    post = models.TextField()&#xA;    author = models.ForeignKey(User, on_delete=models.CASCADE, related_name=&#39;Article&#39;)&#xA;    created = models.DateTimeField(auto_now_add=True)&#xA;    updated = models.DateTimeField(auto_now=True)&#xA;```   &#xA;&#xA;Ignore the `from django.db import models` line as it is already in the file created by Django. If not, please uncomment the line and that should be good to go.&#xA;This is a basic model you might wanna play with, but don&#39;t dump it just anywhere. &#xA;&#xA;We define or create our models in an application inside the project. Inside the application there is already a file called `models.py`; just **append** the above code into it. The application can be any application which makes the most sense to you, or better, create an app if not already created and name it `article`, `post` or anything you like.&#xA;&#xA;If you are familiar with Python OOP (object-oriented programming), we have basically inherited the `models.Model` class from the `django.db` module into our model.&#xA;&#xA;If you want more such examples, let&#39;s see a few more models:&#xA;&#xA;An E-Mail application core model. Attributes like `sender`, `subject` of the mail, `body` of the mail, `recipients_list` i.e. 
the `To:` section in a mail system and the `attachment_file` for a file attachment to a mail if any.&#xA;&#xA;```python&#xA;#from django.db import models&#xA;from user import EmailUser&#xA;&#xA;class EMail(models.Model):&#xA;    sender = models.EmailField(max_length = 255) &#xA;    subject = models.CharField(max_length = 78)&#xA;    body = models.CharField(max_length = 40000)&#xA;    recipients_list = models.ManyToManyField(EmailUser, related_name = &#39;mail_list&#39;)&#xA;    attachment_file = models.FileField(blank=True)&#xA;```&#xA;&#xA;A sample model for a note-taking app, consisting of a Note and a Book. A book might be a collection of multiple notes, i.e. a single book can have multiple notes, so we are using a `ManyToManyField`. What is that? We&#39;ll see shortly. Note that the related model is referenced as the string `&#39;Book&#39;` because the `Book` class is defined after `Notes`.&#xA;&#xA;```python&#xA;from django.db import models&#xA;from user.models import User&#xA;&#xA;class Notes(models.Model):&#xA;    author = models.ForeignKey(User, on_delete=models.CASCADE)&#xA;    title = models.CharField(max_length = 1024)&#xA;    content = models.TextField()&#xA;    created = models.DateTimeField(auto_now_add = True)&#xA;    modified = models.DateTimeField(auto_now = True)&#xA;    book = models.ManyToManyField(&#39;Book&#39;, related_name = &#39;book&#39;)&#xA;&#xA;class Book(models.Model):&#xA;    name = models.CharField(max_length = 1024)&#xA;```   &#xA;&#xA;   These are just dummies and are not recommended for use anywhere, especially in a serious project. &#xA;So, we have seen a model, but what are these fields and constraints like `on_delete`, `max_length`, and others? We&#39;ll see in the upcoming section on fields.&#xA;&#xA;## Fields in Django&#xA;&#xA;Fields are technically the attributes of the class, which here is the model, but they are further treated as attributes (columns) of a table in a database. So the model becomes a list of attributes which will then be turned into an actual database table. &#xA;&#xA;By creating attributes inside a class we are defining the structure for a table. 
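
To make that model-to-table mapping concrete, here is a rough plain-Python sketch of how field declarations could translate into a `CREATE TABLE` statement. This is illustrative only (the `FIELD_SQL` mapping and the helper are hypothetical); in a real project, Django's migration framework generates the SQL for you.

```python
# Hypothetical, heavily simplified field-type to SQL-type mapping.
FIELD_SQL = {
    "CharField": "varchar(%(max_length)s)",
    "TextField": "text",
    "DateTimeField": "datetime",
}

def create_table_sql(table, fields):
    # Build one column definition per declared field.
    cols = ", ".join(
        name + " " + (FIELD_SQL[ftype] % opts)
        for name, (ftype, opts) in fields.items()
    )
    return "CREATE TABLE " + table + " (" + cols + ");"

sql = create_table_sql(
    "article",
    {
        "title": ("CharField", {"max_length": 255}),
        "post": ("TextField", {}),
        "created": ("DateTimeField", {}),
    },
)
print(sql)
# CREATE TABLE article (title varchar(255), post text, created datetime);
```

You can see the real SQL Django generates for a migration with `python manage.py sqlmigrate app_name 0001`.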
Django already defines several types of fields for ease of validation and for making a constrained setup for the database schema.&#xA;&#xA;Let&#39;s look at some of the types of fields in Django Models.&#xA;&#xA;### Types of Fields&#xA;&#xA;Django has a lot of fields defined in the models class. If you want to go through all the fields, you can read through the django docs [field references](https://docs.djangoproject.com/en/4.0/ref/models/fields/#model-field-types). We can access the fields from the `models` module like `name = models.CharField(max_length=10)`; this is an example of defining an attribute `name` which is a CharField. We can set `max_length`, which acts as a constraint on the attribute: we do not want the name field to be longer than 10 characters, hence we pass the parameter `max_length` as 10.&#xA;&#xA;We have other field types like:&#xA;&#xA;- `IntegerField` -&gt; for an integer value.&#xA;- `TextField` -&gt; for long input of text (like a text area in html).&#xA;- `EmailField` -&gt; for a single valid email address.&#xA;- `DateField` -&gt; for input in a date format.&#xA;- `URLField` -&gt; for inputting a URL.&#xA;- `BooleanField` -&gt; for a boolean value input.&#xA;&#xA;And there are other fields as well which can be used as per requirements.&#xA;&#xA;We also have some other fields which are not direct fields so to speak but are relationship-defining fields like:&#xA;&#xA;- `ForeignKey` -&gt; define a many-to-one relationship to another model/class.&#xA;- `ManyToManyField` -&gt; define a many-to-many relationship to another model/class.&#xA;- `OneToOneField` -&gt; define a one-to-one relationship between two tables/models/classes.&#xA;&#xA;So, that&#39;s about the field types, just for a feel of how to structure or design a database table using a model with some types of attributes. We also need to talk about the constraints which need to be added to the fields inside the models. 
&#xA;&#xA;### Field Options/Arguments&#xA;&#xA;We can add constraints and pass arguments to the fields in the models. We can add arguments like `null`, `blank`, `default`, `choices`, etc.&#xA;&#xA;- `null=True/False` -&gt; Set whether the column is allowed to store NULL in the database.&#xA;- `blank=True/False` -&gt; Set whether the field is allowed to be left empty during input validation.&#xA;- `unique=True/False` -&gt; Set a constraint to make the entry unique throughout the table.&#xA;- `default=anyvalue` -&gt; Set a default value for the field.&#xA;- `choices=list` -&gt; Set a list of defined choices to select from in the field (a list of two-valued tuples).&#xA;&#xA;We also have options specific to certain fields, like `max_length` for `CharField`, and `on_delete` for `ForeignKey`, which controls what happens to an entry when the related object is deleted. There is also `verbose_name` to set a human-readable name for a field (shown, for example, in the admin section) instead of the default one, and `verbose_name_plural`, which is similar but for naming the entire table/model in its plural form. Also, `auto_now_add` and `auto_now` for `DateTimeField` set the current date-time automatically on creation and on every save respectively.&#xA;&#xA;More options and arguments that can be passed to the fields in models are given in the django docs [field options](https://docs.djangoproject.com/en/4.0/topics/db/models/#field-options).&#xA;&#xA;These are some of the options or arguments that we can or need to pass to the fields to set up a constrained schema for our database. 
&#xA;&#xA;### Meta class&#xA;&#xA;The Meta class is a nested class inside the model class which is most of the time used for ordering the entries (objects) in the table, managing permissions for accessing the model, adding constraints to the model related to the attributes/fields inside it, etc.&#xA;&#xA;You can read about the functionalities of the Meta class in the [documentation](https://docs.djangoproject.com/en/4.0/ref/models/options/).&#xA;&#xA;## Model methods&#xA;&#xA;As a class can have functions, so does a model, as it is a Python class after all. We can create helper methods/functions inside the model. The model class provides a helpful `__str__()` method which returns a human-readable string representation of an object from the database. There are also commonly defined helpers like `get_absolute_url` that generate and return the canonical URL of an object for further redirection or rendering.&#xA;&#xA;Also, you can define custom functions to support the attributes inside the model class.&#xA;&#xA;## Django ORM&#xA;&#xA;Django&#39;s Object Relational Mapper (ORM) is the core component of Django that allows us to interact with the database without the programmer writing SQL/DB queries. It is a Pythonic way to write and execute SQL queries; it basically abstracts away the layer of manually writing SQL.&#xA;&#xA;We&#39;ll explore the details of how the ORM works under the hood later, but it&#39;s really fascinating for a beginner to be able to make web applications without learning SQL (not recommended though, personally). For now, it&#39;s just magical to see Django handling the DB operations for you. You can get the references for learning about the Queryset in the ORM from the [docs](https://docs.djangoproject.com/en/4.0/ref/models/querysets/).&#xA;&#xA;## Example Model&#xA;Let us set up a model from what we have learned so far.&#xA;&#xA;We&#39;ll create a model for a Blog Post again but with more robust fields and structure. 
&#xA;&#xA;```python&#xA;#from django.db import models&#xA;from django.contrib.auth.models import User&#xA;&#xA;class Article(models.Model):&#xA;&#xA;    options = (&#xA;        (&#39;draft&#39;, &#39;Draft&#39;),&#xA;        (&#39;published&#39;, &#39;Published&#39;),&#xA;    )&#xA;&#xA;    title = models.CharField(max_length=255, unique=True)&#xA;    slug = models.SlugField(max_length=255, unique_for_date=&#39;created&#39;)&#xA;    post = models.TextField()&#xA;    author = models.ForeignKey(User, on_delete=models.CASCADE, related_name=&#39;posts&#39;)&#xA;    created = models.DateTimeField(auto_now_add=True)&#xA;    updated = models.DateTimeField(auto_now=True)&#xA;    status = models.CharField(max_length=16, choices=options, default=&#39;draft&#39;)&#xA;&#xA;    def __str__(self):&#xA;        return self.title&#xA;&#xA;    class Meta:&#xA;        ordering = (&#39;-created&#39;,)&#xA;```&#xA;&#xA;We can see in the above model that we have defined the Meta class, which is optional and generally written to modify how the entries inside the table are ordered, among other functionalities. We have also added the choices option in the status field, which has two choices, `draft` and `published`; the first value of each tuple is stored in the database and the second is the human-readable name shown to end-users. We have also added certain fields like slug that will create the URL for the blog post, and options like `unique` have been set to restrict duplicate entries from being posted to the database. The `related_name` in the `ForeignKey` is the name used to access the relation in reverse, i.e. from the User model back to the Article model in this case.&#xA;&#xA;So, we can see that Django allows us to structure the schema of a database. Though nothing is visible as an end result yet, once we configure and migrate the model to our database we will see the results of the hard work spent in creating and designing it. 
&#xA;&#xA;## Database Specific fields&#xA;&#xA;By this time, you will have gotten a feel of what a database might be. Most projects are designed around SQL databases, but No-SQL databases and others are also used in cases which suit them the most. We have tools to manage these databases; in SQL we call them Database Management Systems (DBMS). A DBMS is just a tool to manage data, but there is not just a single database management tool out there, there are gazillions and bazillions of them. The most popular include `MySQL`, `PostgreSQL`, `SQLite`, `Oracle`, `Microsoft Access`, `MariaDB`, and tons of others.&#xA;&#xA;Well, these different DBMS tools are almost similar with a few hiccups here and there. So, different database tools might provide different fields. For example, `PostgreSQL` provides an `ArrayField` (via `django.contrib.postgres`) which `SQLite` doesn&#39;t, and that can be a deciding factor before creating any project. There might be some fields that some DBMSs provide and others don&#39;t.&#xA;&#xA;## Conclusion&#xA;&#xA;We understood the basics of creating a model. We didn&#39;t touch the database yet, but the next part is all about configuration and migration, so we&#39;ll get hands-on with the databases. We covered how to structure our database, how to write fields in the model, how to add constraints and logic to them, and explored terminologies in Django like the ORM, database types, etc.&#xA;&#xA;Thank you for reading the article, if you have any feedback kindly let me know, and until then Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>Django Basics: Static Files</title>
      <link>https://www.meetgor.com/posts/django-basics-static-files</link>
      <description>Introduction After creating templates, it should be rather tempting to add some styles and logic to them. Well yes, we&#39;ll see how to add static files in a web a</description>
      <pubDate>Mon, 27 Dec 2021 00:00:00 UTC</pubDate>
      <content>## Introduction&#xA;&#xA;After creating templates, it should be rather tempting to add some styles and logic to them. Well yes, we&#39;ll see how to add static files in a web application using django. Static files are not only CSS, but also media/images and Javascript files as well. In this part of the series, we&#39;ll cover the basics of working with static files in django including the configuration, rendering and storing of the static files. &#xA;&#xA;## What are Static Files?&#xA;&#xA;Static files as the name suggests are the files that don&#39;t change, your style sheets(css/scss) are not gonna change for every request from the client side, though the template might be dynamic. Also your logo, images in the design will not change unless you re-design it XD So these are the static files that needs to be rendered along with the templates.&#xA;&#xA;We have basically 3 types of static files, CSS, Javascript files and media files/static templates,etc. They are all rendered in the same way but as per their conventions and usage. &#xA;&#xA;You can learn about the theoretical information on [static files](https://docs.djangoproject.com/en/4.0/howto/static-files/) from the django documentation.&#xA;&#xA;## How to configure Static Files&#xA;&#xA;Firstly you can create a folder for all the static files in the root folder. Usually the convention is `static` as the name of the folder. So, if you have created the template folder in the root directory, similar to that static folder can be created in that path. &#xA;&#xA;Next after creating the static folder in the project root folder, we need to configure the `settings.py` file to actually tell Django web server to look for all our static files in that folder. To do that, go to the `settings.py` file, now by this time you would have known where the `settings.py` file is (inside the project-named folder). 
Add the following at the end of the `settings.py` file.&#xA;&#xA;```python&#xA;# import os&#xA;# STATIC_URL = &#39;/static/&#39;&#xA;&#xA;STATICFILES_DIRS = (&#xA;    os.path.join(BASE_DIR, &#34;static/&#34;),&#xA;)&#xA;```&#xA;&#xA;Ignore the `import os` if you have already imported it, and the `STATIC_URL` if it is already there in the file. The `STATICFILES_DIRS` setting tells the django environment to look for all our static files in the base/root directory of the project, where the `static/` folder is. `os.path.join()` builds the operating-system path to the specified folder: `BASE_DIR` is the path of the project, and we join the static folder to that project path. The final and crucial piece is the `&#34;static/&#34;` path; this can be another location where you have created your static folder within the project.&#xA;&#xA;That&#39;s it! Yes, it&#39;s that simple. We can now create static files and render them in our templates.&#xA;&#xA;## Creating and Storing Static files&#xA;&#xA;Now this part is customizable and depends on your preference for how you want to organize the static folder. The convention that I follow is creating separate folders, namely `css`, `js` and `assets` (or `img`) mostly. Inside these folders you can store the respective static files. This also makes the project more maintainable as it scales.&#xA;&#xA;```&#xA;static\&#xA;  |__css\&#xA;  |__js\&#xA;  |__assets\&#xA;```&#xA;&#xA;Let&#39;s create a static file and an image to demonstrate the concept of static files in django. 
&#xA;&#xA;- css/style.css&#xA;&#xA;```css&#xA;body&#xA;{&#xA;    background-color:#1d1dff;&#xA;    color:white;&#xA;}&#xA;&#xA;h1&#xA;{&#xA;    text-align:center;&#xA;    font-family: monospace;&#xA;}&#xA;&#xA;p&#xA;{&#xA;    color:#ff6600;&#xA;    font-weight:500;&#xA;}&#xA;&#xA;ul&#xA;{&#xA;    list-style-type:square;&#xA;}&#xA;```&#xA;&#xA;- assets/tbicon.png&#xA;&#xA;Demo Image (that&#39;s my blog icon)&#xA;&#xA;![Demo image](https://github.com/Mr-Destructive/techstructive-blog/blob/gh-pages/assets/img/tbicon.png?raw=true)&#xA;&#xA;## Rendering Static Files from Templates&#xA;&#xA;So, after configuring and creating the static files, we can now inject them into our templates. If you try to do it the traditional way, i.e. linking stylesheets/images/script files with plain HTML paths, it just won&#39;t work as you expect, and there&#39;s no point in using the traditional way while creating a web application with a framework. So, there is a framework-specific way to do things which makes it easier and more efficient for the project.&#xA;&#xA;To render any static file, we need to load the static tag which allows us to embed links for the static files into the templates. The static files are not loaded directly; in production (when deploying our application) the static files are collected into a folder defined by `STATIC_ROOT`, which the server then serves. We&#39;ll see how that works internally when we get to deployment techniques for a Django project.&#xA;&#xA;To load the static files from our configuration, we can simply include the tag on top of the template.&#xA;&#xA;```&#xA;{% load static %}&#xA;```&#xA;&#xA;The above templating tag will load the `static` tag which allows us to embed the links to the static files as explained earlier. 
&#xA;&#xA;Now, we can access any file within the static folder in our templates with a particular syntax as below:&#xA;&#xA;```html&#xA;&lt;link rel=&#34;stylesheet&#34; href=&#34;{% static &#39;css/style.css&#39; %}&#34;&gt;&#xA;```&#xA;It&#39;s just an example of how to load the file: we call the static tag which we loaded previously, and reference the css file through it. The compact syntax would be: `{% static &#39;path-to-file&#39; %}`&#xA;&#xA;**NOTE: The path to the static file is relative to the Static folder, i.e. enter the path of the file considering the static folder as the base directory.**&#xA;&#xA;### Demonstration of the static file&#xA;&#xA;Let&#39;s render the static files which we created earlier, i.e. the css file and the image, into a template.&#xA;&#xA;Assuming you have an app called `post` in your django project, you can render static files as below:&#xA;&#xA;# templates/home.html&#xA;```html&#xA;&lt;!DOCTYPE html&gt;&#xA;&lt;html lang=&#34;en&#34;&gt;&#xA;&lt;head&gt;&#xA;    &lt;meta charset=&#34;UTF-8&#34;&gt;&#xA;    &lt;title&gt;Django Blog&lt;/title&gt;&#xA;    {% load static %}&#xA;    &lt;link rel=&#34;stylesheet&#34; href=&#34;{% static &#39;css/style.css&#39; %}&#34;&gt;&#xA;&lt;/head&gt;&#xA;&lt;body&gt;&#xA;    &lt;h1&gt;Hello, World!&lt;/h1&gt;&#xA;    {% block body %}&#xA;    &lt;p&gt;This is not going to get inherited &lt;/p&gt;&#xA;    {% endblock %}&#xA;    &lt;p&gt;This will be inherited&lt;/p&gt;&#xA;&lt;/body&gt;&#xA;&lt;/html&gt;&#xA;```&#xA;We are loading the static tag and then loading the css file using the tag syntax as explained above. 
&#xA;&#xA;# static/css/style.css&#xA;```css&#xA;body&#xA;{&#xA;    background-color:#1d1dff;&#xA;    color:white;&#xA;}&#xA;&#xA;h1&#xA;{&#xA;    text-align:center;&#xA;    font-family: monospace;&#xA;}&#xA;&#xA;p&#xA;{&#xA;    color:#ff6600;&#xA;    font-weight:500;&#xA;}&#xA;&#xA;ul&#xA;{&#xA;    list-style-type:square;&#xA;}&#xA;```&#xA;This is the static file, `style.css`, stored inside the css folder of the static folder. It contains basic (very lame) CSS styling as we can understand.&#xA;&#xA;# post/views.py&#xA;&#xA;```python&#xA;from django.shortcuts import render&#xA;&#xA;def home(request):&#xA;    return render(request, &#39;home.html&#39;)&#xA;```&#xA;The `views.py` file has the function that renders the template `home.html` from the templates folder inside the application-specific folder.&#xA;&#xA;# post/urls.py&#xA;```python&#xA;from django.urls import path&#xA;from post import views&#xA;&#xA;urlpatterns = [&#xA;        path(&#39;&#39;, views.home, name=&#34;home&#34;),&#xA;        ]&#xA;```&#xA;This is the application-level configuration of the url routes, linking the views (functions) from the `views.py` file. 
The url in this file (code-snippet) is linking the root url (&#39;&#39;) to the home view in the `views.py` file.&#xA;&#xA;# Blog/urls.py&#xA;```python&#xA;from django.contrib import admin&#xA;from django.urls import path, include&#xA;&#xA;urlpatterns = [&#xA;    path(&#39;admin/&#39;, admin.site.urls),&#xA;    path(&#39;&#39;, include(&#39;post.urls&#39;)),&#xA;]&#xA;```&#xA;The urls file in the project folder is the core configuration for project-level url routes to individual applications within the project.&#xA;&#xA;Append the following if your templates and static files are not configured properly.&#xA;&#xA;# Blog/settings.py&#xA;```python&#xA;import os&#xA;&#xA;TEMPLATES = [&#xA;    {&#xA;        &#39;BACKEND&#39;: &#39;django.template.backends.django.DjangoTemplates&#39;,&#xA;        &#39;DIRS&#39;: [os.path.join(BASE_DIR, &#39;templates&#39;),],&#xA;        &#39;APP_DIRS&#39;: True,&#xA;        &#39;OPTIONS&#39;: {&#xA;            &#39;context_processors&#39;: [&#xA;                &#39;django.template.context_processors.debug&#39;,&#xA;                &#39;django.template.context_processors.request&#39;,&#xA;                &#39;django.contrib.auth.context_processors.auth&#39;,&#xA;                &#39;django.contrib.messages.context_processors.messages&#39;,&#xA;            ],&#xA;        },&#xA;    },&#xA;]&#xA;STATIC_URL = &#39;/static/&#39;&#xA;STATICFILES_DIRS = (&#xA;    os.path.join(BASE_DIR, &#34;static/&#34;),&#xA;)&#xA;```&#xA;So, the result of the above code is a simple template as shown in the picture below:&#xA;&#xA;![Static file demo](https://res.cloudinary.com/dgpxbrwoz/image/upload/v1640621276/blogmedia/static-1_vu41gf.png)&#xA;&#xA;This would also work with the traditional HTML syntax, but I&#39;ve explained why that&#39;s not recommended while using frameworks.&#xA;&#xA;Let&#39;s see how static files are rendered in inherited templates. 
We&#39;ll tinker with the `for.html` template created in the [previous part](https://mr-destructive.github.io/techstructive-blog/django/python/web-development/2021/12/14/Django-Basics-P5.html).&#xA;&#xA;# template/for.html&#xA;```django&#xA;{% extends &#39;home.html&#39; %}&#xA;{% load static %}&#xA;&#xA;{% block body %}&#xA;    &lt;img src=&#34;{% static &#39;assets/tbicon.png&#39; %}&#34; height=&#34;50px&#34; width=&#34;50px&#34; /&gt;&#xA;    &lt;ul&gt;&#xA;        {% for sport in sport_list %}&#xA;        &lt;li&gt;{{ sport }}&lt;/li&gt;&#xA;        {% endfor %}&#xA;    &lt;/ul&gt;&#xA;{% endblock %}&#xA;```&#xA;We have to re-load the static tag in each template that includes a static file. So we use `{% load static %}` again, as we are loading a static file (the image) in this template.&#xA;&#xA;# post/views.py&#xA;```python&#xA;from django.shortcuts import render&#xA;&#xA;def for_demo(request):&#xA;    sports = (&#39;football&#39;, &#39;cricket&#39;, &#39;volleyball&#39;, &#39;hockey&#39;, &#39;basketball&#39;)&#xA;    return render(request, &#39;for.html&#39;, {&#39;sport_list&#39;: sports})&#xA;&#xA;def home(request):&#xA;    return render(request, &#39;home.html&#39;)&#xA;```&#xA;&#xA;# post/urls.py&#xA;```python&#xA;from django.urls import path&#xA;from post import views&#xA;&#xA;urlpatterns = [&#xA;        path(&#39;&#39;, views.home, name=&#34;home&#34;),&#xA;        path(&#39;for/&#39;, views.for_demo, name=&#34;fordemo&#34;),&#xA;        ]&#xA;```&#xA;&#xA;So, with the url and view map created, we can now visit the `127.0.0.1:8000/for/` url to see the below result:&#xA;&#xA;![Static demo for inheritance of templates](https://res.cloudinary.com/dgpxbrwoz/image/upload/v1640622976/blogmedia/static-tempinh_peyjrg.png)&#xA;&#xA;The list style has been changed and thus we can see that the CSS from the parent template is also being inherited. 
&#xA;&#xA;Here is the django project structure which I have created with this series so far:&#xA;&#xA;![Folder tree structure](https://res.cloudinary.com/dgpxbrwoz/image/upload/v1640624705/blogmedia/trr-static_bgt9du.png)&#xA;&#xA;So that has been it for static files in Django. Though there is a lot more depth to rendering and loading static files, we&#39;ll explore it as we get a better grasp of django and web development terminologies.&#xA;&#xA;## Conclusion&#xA;&#xA;So, from this article, we were able to configure and render static files like CSS/images and optionally Javascript in a Django application. We covered from the ground up how to configure, load and structure the folder for storing all the static files at the project level.&#xA;&#xA;Hope you found it helpful and if you have any queries please let me know. We&#39;ll start with the databases probably from the next part in the Django Basics Series. Until then have a great week and as always Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>Django Basics: Templates</title>
      <link>https://www.meetgor.com/posts/django-basics-templates</link>
      <description>Introduction After learning the basics of views and URLs, we can now move on to the next concept i.e. Templates. In Django, Templates are quite an important com</description>
      <pubDate>Tue, 14 Dec 2021 00:00:00 UTC</pubDate>
      <content>## Introduction&#xA;&#xA;After learning the basics of views and URLs, we can now move on to the next concept i.e. Templates. In Django, Templates are quite an important component for the application as it acts as the `frontend` for the web application. With the help of templates and some features provided by Django, it becomes very intuitive and simple to make dynamic web content.&#xA;In this part, we understand what are templates and what is the way to render them. &#xA;&#xA;## What are Templates&#xA;&#xA;Templates are simply a `html` document or kind of a wireframe for content to be displayed for the web app. Templates allow us to render some more relevant pieces of data rather than simple text HTTP responses as we did earlier. We can even re-use certain components of a template in other using the Django Templating Language (more on this later). &#xA;&#xA;So, using HTML templates, we can write a complete Webpage. If you are unfamiliar with HTML, you can check out the basics of HTML with this [playlist](https://www.youtube.com/playlist?list=PL081AC329706B2953). &#xA;&#xA;Even If you are not familiar with HTML, this tutorial might be quite basic and not overwhelm you with all the tags.&#xA;&#xA;## Creating Templates&#xA;&#xA;To create a Template, we can write a simple HTML document like the below:&#xA;&#xA;Create a folder `templates` in the base folder, inside the templates folder, create a file `index.html`&#xA;&#xA;**templates\index.html**&#xA;&#xA;```html&#xA;&lt;!DOCTYPE html&gt;&#xA;&lt;html lang=&#34;en&#34;&gt;&#xA;&lt;head&gt;&#xA;    &lt;meta charset=&#34;UTF-8&#34;&gt;&#xA;    &lt;title&gt;Django Blog&lt;/title&gt;&#xA;&lt;/head&gt;&#xA;&lt;body&gt;&#xA;    &lt;h1&gt;Hello, World!&lt;/h1&gt;&#xA;&lt;/body&gt;&#xA;&lt;/html&gt;&#xA;```&#xA;&#xA;This is a simple HTML template, with the `&lt;h1&gt;` tags. As Django is a framework, there is a standard for storing all the templates for the project and application. 
There are a couple of standard options:&#xA;- One is creating a `templates` folder in the root folder as discussed earlier; we also need to modify the `project_name/settings.py` file.&#xA;&#xA;Inside the `settings.py` file, we need to locate the `TEMPLATES` section and modify it as below:&#xA;&#xA;```python&#xA;import os&#xA;&#xA;TEMPLATES = [&#xA;    {&#xA;        &#39;BACKEND&#39;: &#39;django.template.backends.django.DjangoTemplates&#39;,&#xA;        &#39;DIRS&#39;: [os.path.join(BASE_DIR, &#39;templates&#39;),],&#xA;        &#39;APP_DIRS&#39;: True,&#xA;        &#39;OPTIONS&#39;: {&#xA;            &#39;context_processors&#39;: [&#xA;                &#39;django.template.context_processors.debug&#39;,&#xA;                &#39;django.template.context_processors.request&#39;,&#xA;                &#39;django.contrib.auth.context_processors.auth&#39;,&#xA;                &#39;django.contrib.messages.context_processors.messages&#39;,&#xA;            ],&#xA;        },&#xA;    },&#xA;]&#xA;```&#xA;&#xA;In this snippet, we have changed the `DIRS` option to search for templates in the folder `templates` in the root directory.&#xA;&#xA;- The other standard is creating a templates folder in each application.&#xA;&#xA;We can create the templates folder in each application instead of a single folder.&#xA;&#xA;## Rendering Templates&#xA;&#xA;After creating a template and making the required settings to make sure Django is able to pick up those templates, we need to work with views and URLs to actually render them.&#xA;&#xA;There are a couple of ways to render templates in Django and some of them are discussed below:&#xA;&#xA;### Using TemplateView&#xA;&#xA;[TemplateView](https://docs.djangoproject.com/en/3.2/ref/class-based-views/base/#django.views.generic.base.TemplateView) is a class that comes with the `django.views.generic` module. 
This class allows us to render a template by providing the name of the template, arguments or variables to be passed, and so on.&#xA;&#xA;The simplest way to render a template is the following:&#xA;&#xA;```python&#xA;from django.contrib import admin&#xA;from django.urls import path, include&#xA;&#xA;from django.views.generic import TemplateView&#xA;&#xA;urlpatterns = [&#xA;    path(&#39;admin/&#39;, admin.site.urls),&#xA;    path(&#39;&#39;, TemplateView.as_view(template_name=&#34;index.html&#34;), name=&#34;index&#34;),&#xA;]&#xA;```&#xA;&#xA;We need to import `TemplateView` from `django.views.generic` so as to use the class for rendering the template.&#xA;&#xA;The `TemplateView` class takes in a couple of arguments; we&#39;ll use the `template_name` argument that takes in the name of the template. Here, we use the `index.html` template which we created earlier. We don&#39;t need to specify the entire path to the template as we made modifications in the `settings.py` file to pick templates from the mentioned directory. We use the `as_view` function to load the class as a function/view.&#xA;&#xA;Activate the virtual environment for the proper functioning of the project.&#xA;&#xA;After activating the virtual environment we can run the server as follows:&#xA;```terminal&#xA;python manage.py runserver&#xA;```&#xA;We can now see the following output and thus, we are now rendering a simple HTML template in Django.&#xA;&#xA;![](https://res.cloudinary.com/dgpxbrwoz/image/upload/v1639384994/blogmedia/templ1_vbwp5d.png)&#xA;&#xA;### Using render&#xA;&#xA;We can also use the [render function](https://docs.djangoproject.com/en/4.0/topics/http/shortcuts/#render) from `django.shortcuts` to simply render a template. But here we will create a Python function or a View to actually render the template. 
So, we&#39;ll create a View-URL map as we created in the [previous part](https://mr-destructive.github.io/techstructive-blog/django/python/web-development/2021/11/30/Django-Basics-P4.html).&#xA;&#xA;Firstly, let&#39;s create a view function in the `post/views.py` file, or more generally the `app_name/views.py` file. We need to import the render function from `django.shortcuts` and then return the function call of render.&#xA;&#xA;```python&#xA;from django.shortcuts import render&#xA;&#xA;def home(request):&#xA;    return render(request, &#39;index.html&#39;)&#xA;```&#xA;&#xA;And in the URLs, we&#39;ll create a different pattern, e.g. &#39;home/&#39;&#xA;&#xA;```python&#xA;from django.urls import path&#xA;from post import views&#xA;&#xA;urlpatterns = [&#xA;        path(&#39;home/&#39;, views.home, name=&#34;home&#34;),&#xA;        ]&#xA;```&#xA;&#xA;So, after creating the View-URL map and making sure the URL of the app is loaded in the project URLs, we can see the result as a simple HTML template.&#xA;&#xA;![](https://res.cloudinary.com/dgpxbrwoz/image/upload/v1639386932/blogmedia/templ2_rgoppj.png)&#xA;&#xA;## Django Templating Language&#xA;&#xA;The [Django Templating Language](https://docs.djangoproject.com/en/3.2/ref/templates/language/) is Django&#39;s way of making templates more dynamic and of making it easy to write dynamic web applications. We&#39;ll take a brief look at what we can do with this templating language in Django.&#xA;&#xA;### Variables&#xA;&#xA;This is the most common use case for the Django Templating Language/Engine, as we can take [variables](https://docs.djangoproject.com/en/3.2/ref/templates/language/#variables) from the backend and inject them into the template. We can pass a variable into the template with the syntax: `{{ variable_name }}`&#xA;&#xA;To show its use, we can declare a variable in a view and then render it in the Template. 
Though it is not dynamic right now, we can later on fetch values from the database and store them in the form of variables in our views.&#xA;&#xA;**templates/home.html**&#xA;```html&#xA;&lt;!DOCTYPE html&gt;&#xA;&lt;html lang=&#34;en&#34;&gt;&#xA;&lt;head&gt;&#xA;    &lt;meta charset=&#34;UTF-8&#34;&gt;&#xA;    &lt;title&gt;Django Blog&lt;/title&gt;&#xA;&lt;/head&gt;&#xA;&lt;body&gt;&#xA;    &lt;h1&gt;Hello, {{ name }}&lt;/h1&gt;&#xA;&lt;/body&gt;&#xA;&lt;/html&gt;&#xA;```&#xA;&#xA;&#xA;**post/views.py**&#xA;```python&#xA;from django.shortcuts import render&#xA;&#xA;def variable_demo(request):&#xA;    name = &#34;Kevin&#34;&#xA;    return render(request, &#39;home.html&#39;, {&#39;name&#39;:name})&#xA;    #The value can be anything, like a database query object, form detail, etc&#xA;&#xA;```&#xA;&#xA;As we can see, the variable is passed from the view as a Python dictionary: the key maps to the value of the variable. We use the key in the template to render the variable&#39;s value.&#xA;&#xA;**post/urls.py**&#xA;```python&#xA;from django.urls import path&#xA;from post import views&#xA;&#xA;urlpatterns = [&#xA;        path(&#39;home/&#39;, views.home, name=&#34;home&#34;),&#xA;        path(&#39;vardemo/&#39;, views.variable_demo, name=&#34;var&#34;),&#xA;        ]&#xA;```&#xA;&#xA;![variable demo](https://res.cloudinary.com/dgpxbrwoz/image/upload/v1639389288/blogmedia/templ3_wvhagw.png)&#xA;&#xA;As we can see, we were able to load the variable into the template using the Django Templating Engine.&#xA;&#xA;### Conditional statement&#xA;&#xA;We can even use conditional statements in the Template using a very simple syntax. We can use `{% if condition %}` to open certain special kinds of blocks in the Template. 
We need to end those blocks as well using the syntax `{% endif %}`; here `if` can be any of the other block tags which we&#39;ll explore ahead.&#xA;&#xA;To create a basic if condition in the template, consider the following example.&#xA;&#xA;**app_name/views.py**&#xA;```python&#xA;from django.shortcuts import render&#xA;from random import randint&#xA;&#xA;def if_demo(request):&#xA;    number = randint(1, 10)&#xA;    return render(request, &#39;if_else.html&#39;, {&#39;num&#39;: number})&#xA;```&#xA;&#xA;Here, we have used `num` as the key name, showing that the key used to render the value in the template can have a different name than the variable in the view.&#xA;&#xA;**app_name/urls.py**&#xA;```python&#xA;from django.urls import path&#xA;from post import views&#xA;&#xA;urlpatterns = [&#xA;        path(&#39;if/&#39;, views.if_demo, name=&#34;ifdemo&#34;),&#xA;        ]&#xA;```&#xA;&#xA;**templates/if_else.html**&#xA;```html&#xA;&lt;!DOCTYPE html&gt;&#xA;&lt;html lang=&#34;en&#34;&gt;&#xA;&lt;head&gt;&#xA;    &lt;meta charset=&#34;UTF-8&#34;&gt;&#xA;    &lt;title&gt;Django Blog&lt;/title&gt;&#xA;&lt;/head&gt;&#xA;&lt;body&gt;&#xA;    {{ num }}&#xA;    {% if num &gt; 5 %}&#xA;        &lt;h2&gt;It&#39;s greater than 5&lt;/h2&gt;&#xA;    {% elif num == 5 %}&#xA;        &lt;h2&gt;It&#39;s five!&lt;/h2&gt;&#xA;    {% else %}&#xA;        &lt;h2&gt;It&#39;s less than 5&lt;/h2&gt;&#xA;    {% endif %}&#xA;&lt;/body&gt;&#xA;&lt;/html&gt;&#xA;```&#xA;![if-else demo](https://res.cloudinary.com/dgpxbrwoz/image/upload/v1639411425/blogmedia/templ3_exj0fv.png)&#xA;&#xA;So, as we can see, we can use if-else conditions in the template, and that is already powerful. It can get a bit messy when cramming mathematical operations or several checks into a single condition, though. 
Still, it can be used on really large datasets, shrinking the code needed and improving readability.&#xA;&#xA;### For loop&#xA;&#xA;Now, the most crucial component of the Django templating language is loops. We can actually iterate over objects/lists in the template. This is a huge concept for making a truly dynamic web application. We might want to iterate over all the entries in a database, or any other form of data, which makes the app more dynamic and feel real-time.&#xA;&#xA;The syntax of the for loop is similar to the if-else condition. We just replace the condition with the iterator and the list/object from the view context: `{% for i in list %}`, and end the for loop with `{% endfor %}`.&#xA;&#xA;**app_name/views.py**&#xA;```python&#xA;from django.shortcuts import render&#xA;&#xA;def for_demo(request):&#xA;    sports = (&#39;football&#39;, &#39;cricket&#39;, &#39;volleyball&#39;, &#39;hockey&#39;, &#39;basketball&#39;)&#xA;    return render(request, &#39;for.html&#39;, {&#39;sport_list&#39;: sports})&#xA;```&#xA;&#xA;We have created a simple Python tuple called `sports`, and we pass it to the template in a dictionary, with `sport_list` as the key for the value of `sports`.&#xA;&#xA;**app_name/urls.py**&#xA;```python&#xA;from django.urls import path&#xA;from post import views&#xA;&#xA;urlpatterns = [&#xA;        path(&#39;for/&#39;, views.for_demo, name=&#34;fordemo&#34;),&#xA;        ]&#xA;```&#xA;&#xA;**templates/for.html**&#xA;```html&#xA;&lt;!DOCTYPE html&gt;&#xA;&lt;html lang=&#34;en&#34;&gt;&#xA;&lt;head&gt;&#xA;    &lt;meta charset=&#34;UTF-8&#34;&gt;&#xA;    &lt;title&gt;Django Blog&lt;/title&gt;&#xA;&lt;/head&gt;&#xA;&lt;body&gt;&#xA;    &lt;ul&gt;&#xA;        {% for sport in sport_list %}&#xA;        &lt;li&gt;{{ sport }}&lt;/li&gt;&#xA;        {% endfor %}&#xA;    &lt;/ul&gt;&#xA;&lt;/body&gt;&#xA;&lt;/html&gt;&#xA;```&#xA;&#xA;![for loop demo](https://res.cloudinary.com/dgpxbrwoz/image/upload/v1639475328/blogmedia/templ3_q8z8fr.png)&#xA;&#xA;We have used a simple for loop in Pythonic syntax; `sport` acts as the iterator here. It takes values one by one from the list `sport_list`, which was earlier passed from the view as a key in the dictionary.&#xA;&#xA;Hence, this is quite scalable and is used to fetch the objects/entries in the database, making it a lot easier to build a dynamic web application faster.&#xA;&#xA;## Template Inheritance&#xA;&#xA;So far, we have had to create the base template again and again: all the basic HTML elements, the title, and the rest of the basic structure. But what if we could reuse a specific template in another and extend its functionality into a new one? This avoids the redundancy of writing the entire base template or layout of a web app over and over again.&#xA;&#xA;To do that, Django has Template inheritance. We can use a template as the basic layout or as a specific component of the web application. Again, the syntax for inheriting a template is quite similar to the for and if-else blocks.&#xA;&#xA;Take, for example, the home.html which consisted of only a `&lt;h1&gt;` tag in it. We can use this kind of template in other templates to really make it the home page. 
For that, we first need to enclose the overridable part of the template in a `block`.&#xA;To create a `block`, we wrap the component that we do not want carried into other templates with the following syntax:&#xA;&#xA;**templates/home.html**&#xA;```html&#xA;&lt;!DOCTYPE html&gt;&#xA;&lt;html lang=&#34;en&#34;&gt;&#xA;&lt;head&gt;&#xA;    &lt;meta charset=&#34;UTF-8&#34;&gt;&#xA;    &lt;title&gt;Django Blog&lt;/title&gt;&#xA;&lt;/head&gt;&#xA;&lt;body&gt;&#xA;    &lt;h1&gt;Hello, World!&lt;/h1&gt;&#xA;    {% block body %}&#xA;    &lt;p&gt;This is not going to get inherited&lt;/p&gt;&#xA;    {% endblock %}&#xA;    &lt;p&gt;This will be inherited&lt;/p&gt;&#xA;&lt;/body&gt;&#xA;&lt;/html&gt;&#xA;```&#xA;Here we have used a `block` with a name, `body`, as `{% block body %}`; the name can be anything you like. We end the block with a syntax similar to the for/if blocks: `{% endblock %}`. Anything between `block block_name` and `endblock` is not inherited, i.e. it is unique to this template and can be overridden by child templates.&#xA;&#xA;We will see how we can use this template in other templates. We will extend this template and use the blocks to render the content in the template.&#xA;&#xA;**templates/if_else.html**&#xA;```html&#xA;{% extends &#39;home.html&#39; %}&#xA;{% block body %}&#xA;    {{ num }}&#xA;    {% if num &gt; 5 %}&#xA;    &lt;h2&gt;It&#39;s greater than 5&lt;/h2&gt;&#xA;    {% elif num == 5 %}&#xA;    &lt;h2&gt;It&#39;s five!&lt;/h2&gt;&#xA;    {% else %}&#xA;    &lt;h2&gt;It&#39;s less than 5&lt;/h2&gt;&#xA;    {% endif %}&#xA;{% endblock %}&#xA;```&#xA;So, we first tell Django to extend the `home` template, i.e. Django will load the layout from that template; remember, it will just load and not fill in the blocks until we explicitly tell it to.&#xA;&#xA;To use the blocks, or rather plug our content into the template from `if_else.html` or any other template, we need to declare the `blocks` again. 
Here, we write the content inside the `blocks`; since this is an HTML template, the order of opening and closing elements does matter.&#xA;So, when we say `endblock`, the last part of the base template is loaded, i.e. the closing `body` and `html` tags. It is like plugging the base template in, as it is, before and after the block body.&#xA;&#xA;**app_name/views.py**&#xA;```python&#xA;from random import randint&#xA;&#xA;from django.shortcuts import render&#xA;&#xA;def home(request):&#xA;    return render(request, &#39;home.html&#39;)&#xA;&#xA;def if_demo(request):&#xA;    number = randint(1, 10)&#xA;    return render(request, &#39;if_else.html&#39;, {&#39;num&#39;: number})&#xA;```&#xA;&#xA;**app_name/urls.py**&#xA;```python&#xA;from django.urls import path&#xA;from post import views&#xA;&#xA;urlpatterns = [&#xA;        path(&#39;&#39;, views.home, name=&#34;home&#34;),&#xA;        path(&#39;if/&#39;, views.if_demo, name=&#34;ifdemo&#34;),&#xA;        ]&#xA;```&#xA;![](https://res.cloudinary.com/dgpxbrwoz/image/upload/v1639479714/blogmedia/tempinher2_enisls.png)&#xA;![template inheritance demo](https://res.cloudinary.com/dgpxbrwoz/image/upload/v1639477721/blogmedia/tempinher_lk0op0.png)&#xA;&#xA;![](https://res.cloudinary.com/dgpxbrwoz/image/upload/v1639479954/blogmedia/template-inh_lc8szo.gif)&#xA;&#xA;The above gif illustrates the example neatly. The block is loaded from the given (extended) template, and the child&#39;s block content is plugged into the frame of that template.&#xA;&#xA;## Conclusion&#xA;&#xA;So, in this part of the series, we covered the concept of Templates in Django: we were able to use variables, loops, conditional statements, and template inheritance in a Django application. 
In the next part, we&#39;ll touch on static files and see how to properly structure and configure them.&#xA;&#xA;Thank you for reading. If you didn&#39;t understand any of the examples, please let me know, I&#39;ll be happy to share the code. Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>Django Basics: Views and URLS</title>
      <link>https://www.meetgor.com/posts/django-basics-views-urls</link>
      <description>Introduction After getting familiar with the folder structure of the Django framework, we&#39;ll create our first view in an app. The basics of creating and mapping</description>
      <pubDate>Tue, 30 Nov 2021 00:00:00 UTC</pubDate>
<content>## Introduction&#xA;&#xA;After getting familiar with the folder structure of the Django framework, we&#39;ll create our first view in an app. The basics of creating and mapping a view with a URL will be clear by the end of this part.&#xA;&#xA;## Creating Views&#xA;&#xA;&gt; Views are functions written in Python that act as the logic control unit of the web server&#xA;&#xA;To create a view, which is typically just a function, we need to write a function in the `views.py` file inside the application folder. The function name can be anything, but it should be sensible as far as its usability is concerned. Let&#39;s take a basic example of sending an HTTP response of &#34;Hello World&#34;.&#xA;&#xA;#### project_name/app_name/views.py&#xA;```python&#xA;from django.http import HttpResponse&#xA;&#xA;def index(request):&#xA;  return HttpResponse(&#34;Hello World&#34;)&#xA;```&#xA;&#xA;Yes, we are simply returning an HTTP response right now, but rendering Templates/HTML documents is quite similar and easy to grasp in Django. So, this is a view, a piece of logic, but there is a piece missing: where should this function be used? On a URL, of course, i.e. a path on the web server.&#xA;&#xA;We&#39;ll see how to map views to a URL in Django in the next section.&#xA;&#xA;## Mapping the Views to a URL&#xA;&#xA;We first need to create a `urls.py` file in the application folder, where URLs will be mapped to views. After creating the file in the same app folder as the `views.py`, import the view function into the file.&#xA;&#xA;#### project_name/app_name/urls.py&#xA;```python&#xA;from django.urls import path&#xA;&#xA;from .views import index&#xA;&#xA;urlpatterns = [&#xA;    path(&#39;&#39;, index, name=&#34;index&#34;),&#xA;]&#xA;```&#xA;The path can be anything you like, but for simplicity, we&#39;ll keep it blank (&#39;&#39;) for now.&#xA;&#xA;Now, you have the path for your view to work, but it&#39;s not linked to the main project. 
We need to link the app urls to the project urls.&#xA;&#xA;To link the urls of your app to the main project folder, you need to add just a single line of code in the `urls.py` file of the project folder.&#xA;&#xA;#### project_name/urls.py&#xA;```python&#xA;from django.contrib import admin&#xA;from django.urls import path, include&#xA;&#xA;urlpatterns = [&#xA;    path(&#39;admin/&#39;, admin.site.urls),&#xA;    path(&#39;&#39;, include(&#39;post.urls&#39;)),&#xA;]&#xA;```&#xA;&#xA;You need to add the line `path(&#39;&#39;, include(&#39;post.urls&#39;)),` and also import the `include` function from `django.urls`. This additional statement includes all the `urlpatterns` of the `post` app&#39;s `urls.py` file into the project&#39;s URL routes.&#xA;&#xA;Here, the URL path can be anything like `&#39;home/&#39;`, `&#39;about/&#39;`, `&#39;posts/&#39;`, etc., but since we are just understanding the basics, we&#39;ll keep it `&#39;&#39;`, i.e. the root URL.&#xA;&#xA;You can also see that there is another route in our project, `&#39;admin/&#39;`, which is the path to the admin section. We&#39;ll explore this path and the entire Admin Section in some other part of this series.&#xA;&#xA;Now if you start the server and visit the default URL, i.e. `http://127.0.0.1:8000`, you will see a simple HTTP message `Hello World`.&#xA;&#xA;![Hello World view](https://res.cloudinary.com/dgpxbrwoz/image/upload/v1638194390/blogmedia/uv1_xf4byq.png)&#xA;&#xA;## Breaking down the `path` function in urlpatterns&#xA;&#xA;The path function in the urlpatterns takes in at least 2 parameters: the URL pattern and the view, or any other function that can be related to the web server. 
&#xA;&#xA;```&#xA;path( &#39; &#39;,   view,    name )&#xA;       ^       ^        ^ &#xA;       |       |        |&#xA;       |       |     url_name&#xA;       |   function_name&#xA;   url_path    &#xA;```&#xA;&#xA;### URL path&#xA;&#xA;The URL path is the pattern, literally the path you type into the browser&#39;s search bar. This can be static, i.e. some hard-coded text like `home/`, `user/`, `post/home/`, etc., and we can also have dynamic URLs like `post/&lt;int:id&gt;/`, `user/&lt;str:name&gt;/`, etc. Here the placeholders `&lt;int:id&gt;` and `&lt;str:name&gt;` will be replaced by the actual id (integer/primary key) or the name (string) itself.&#xA;&#xA;This is used in an actual web application, where there might be a user profile that needs the unique user-id to render it specifically for that user. The user profile is just an example; it can be anything like posts, emails, products, or any other form of a content-driven application.&#xA;&#xA;### View&#xA;&#xA;The view or the function is the name of the function that will be attached to that URL path. That means once the user visits that URL, the function is literally called. **View is just a fancy word for a function (or any logic, basically).** There is a lot to be covered when it comes to `View`s: there are many ways to create them and two main types, with use-cases that can be learned along the way, because this is where the crux of Django lies.&#xA;&#xA;We&#39;ll learn to create different implementations and structure our views; for the time being, just consider them as the unit where every operation on the web can be performed. We can create other standalone functions in Python to work with the views to make things a bit more structured and readable.&#xA;&#xA;### URL Name&#xA;&#xA;This is an optional parameter to the path function, as we do not mandatorily need to give the URL map a name. 
This can be really useful in multi-page websites where you need to link one page to another, and that becomes a lot easier with the URL name. We do not need this right now; we&#39;ll touch on it when we see the Django Templating Language.&#xA;&#xA;## Example Views&#xA;&#xA;Let&#39;s create some examples to understand the working of Views and URLs. We&#39;ll create a dynamic URL and use a Python module in the views to get familiar with the concept.&#xA;&#xA;### Dynamic URLs&#xA;&#xA;We can use dynamic URLs, or placeholder variables, to render content dynamically. Let&#39;s create another View-URL pair.&#xA;&#xA;#### project_name/app_name/views.py&#xA;```python&#xA;def greet(request, name):&#xA;    return HttpResponse(&#34;Welcome, &#34; + name)&#xA;```&#xA;&#xA;This view or function takes an additional argument called `name` and in response, it just says `Welcome, name`, where the name can be any string. Now after creating the view, we need to map the view to a URL pattern. We&#39;ll add a path for this greet function.&#xA;&#xA;#### project_name/app_name/urls.py&#xA;```python&#xA;path(&#39;greet/&lt;str:name&gt;/&#39;, greet, name=&#34;greet&#34;),&#xA;```&#xA;&#xA;You can see how we have created the URL pattern here. The greet part is static, but `&lt;str:name&gt;` is a variable, a URL parameter to be passed to the view as the value of the variable `name`. We have also given the URL map a name, greet, just to demonstrate its creation.&#xA;&#xA;You&#39;ll get an error, 100%, if you are blindly following me! Didn&#39;t you forget something?&#xA;&#xA;Import the greet function from the views like so:&#xA;&#xA;```python&#xA;from .views import index, greet&#xA;```&#xA;&#xA;So, after we visit the URL `http://127.0.0.1:8000/greet/harry`, you should see a response `Welcome, harry`, as simple as that. 
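Conceptually, the captured URL segment simply becomes a keyword argument of the view. Here is a framework-free sketch of that idea; the `dispatch` helper below is a hypothetical stand-in for Django's URL resolver, not its actual machinery:

```python
# A plain-Python sketch of how a dynamic URL maps to a view call.
# 'dispatch' mimics (but is not) Django's URL resolver: the captured
# segment of '/greet/harry/' becomes the view's 'name' argument.
def greet(request, name):
    return 'Welcome, ' + name

def dispatch(url):
    parts = url.strip('/').split('/')
    # pattern: 'greet/' followed by exactly one string segment
    if len(parts) == 2 and parts[0] == 'greet':
        return greet(request=None, name=parts[1])
    raise ValueError('no matching route')

print(dispatch('/greet/harry/'))  # Welcome, harry
```

Django's real resolver does the same job, with converters like `&lt;str:name&gt;` additionally validating the segment's type.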
&#xA;&#xA;![Greet URL Demo](https://res.cloudinary.com/dgpxbrwoz/image/upload/v1638252762/blogmedia/uv-greet_e2wg5o.gif)&#xA;&#xA;Now, how is this working? Let&#39;s look at the view first. The function takes two parameters: one is the usual request, which stores the metadata about the request; the other is the name, which we use to respond dynamically. The name variable is used in the string with the HttpResponse function to return a simple string.&#xA;&#xA;Then, in the URLs, we need a way to pass the variable name to the view; for that we use `&lt;str:name&gt;`, which is like a URL parameter to the view. The path function automatically passes the name to the appropriate view, and hence the greet function is called with the name variable from the URL.&#xA;&#xA;### Using Pythonic things&#xA;&#xA;We&#39;ll use some Python libraries or functions in the Django app. In this way, we&#39;ll see it&#39;s nearly a no-brainer to use Python functions or libraries in the Django framework, as indeed all the files we are working with are Python files.&#xA;&#xA;#### project_name/app_name/views.py&#xA;```python&#xA;from random import randint&#xA;&#xA;def dice(request):&#xA;    number = randint(1, 6)&#xA;    return HttpResponse(f&#34;It&#39;s {number}&#34;)&#xA;```&#xA;&#xA;This view uses the random module; you can pretty much use any other web-compatible module or library. We have used the `random.randint` function to generate a pseudo-random number between 1 and 6. We have used an f-string (`f&#34;{variable}&#34;`) for the response string, since an int cannot be concatenated to a string directly. So this is the logic of our view; now we need to link it to a URL path.&#xA;&#xA;#### project_name/app_name/urls.py&#xA;```python&#xA;path(&#39;throw/&#39;, dice, name=&#34;dice&#34;),&#xA;```&#xA;&#xA;Also, import the view from the views file as `from .views import dice` (along with other views if present). 
Now if we go to the URL `http://127.0.0.1:8000/throw/`, we shall see a random number in the response. This is how we used Python to build the logic of our view.&#xA;&#xA;![Dice URL Demo](https://res.cloudinary.com/dgpxbrwoz/image/upload/v1638252951/blogmedia/uv-dice_bsodzq.gif)&#xA;&#xA;So, that was the basics of creating and mapping views and URLs. It is the most fundamental part of the workflow in Django project development. You need to get familiar with the process of mapping views and URLs before diving into Templates, Models, and other complex stuff.&#xA;&#xA;## Conclusion&#xA;&#xA;In this part of the series, we touched upon the basics of views and URLs. The concept of mapping URLs and views should be much clearer now, and it will become even more concrete after we explore Template handling and Static files in the next part. If you have any queries, or I have made any mistakes, please let me know. Thanks for reading and Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>Django Basics: Folder Structure</title>
      <link>https://www.meetgor.com/posts/django-basics-folder-struct</link>
      <description>Introduction After setting up the development for the Django framework, we will explore the project structure. In this part, we understand the structure along w</description>
      <pubDate>Wed, 24 Nov 2021 00:00:00 UTC</pubDate>
<content>## Introduction&#xA;&#xA;After setting up the development environment for the Django framework, we will explore the project structure. In this part, we understand the structure along with the various components of the project as well as individual apps. We will understand the objective of each file and folder briefly, and hopefully by the end of this part, you&#39;ll be aware of how a Django project is structured and get a good overview of the flow of development in a Django project.&#xA;&#xA;## Project Structure&#xA;&#xA;We will create a Django project from scratch and understand it from the ground up. In the previous part, I&#39;ve shown you how to create a project. In this section, we&#39;ll create a project `Blog`. To do that, we&#39;ll create a folder called `Blog`, then install and set up the virtual environment as discussed and explained in the previous part.&#xA;&#xA;After the virtual environment is created and activated, we&#39;ll create the project.&#xA;&#xA;```shell&#xA;django-admin startproject Blog .&#xA;```&#xA;After this command, if we see the directory structure, it should look something like this:&#xA;&#xA;![Basic Django folder structure](https://res.cloudinary.com/dgpxbrwoz/image/upload/v1637661221/blogmedia/tree_ak3mgm.png)&#xA;&#xA;As we can see, there are 6 files and a folder. The base folder is for the configuration at a project level. I have not shown the `venv` folder (using the `-I venv` option of the tree command) as it is out of the scope of this series. The `venv` folder contains modules and scripts which are installed in the virtual environment.&#xA;&#xA;So, let&#39;s break the folder structure down into understandable files.&#xA;![](https://res.cloudinary.com/dgpxbrwoz/image/upload/v1637745146/blogmedia/16_qenomh.png)&#xA;### manage.py&#xA;&#xA;Our project consists of a `manage.py` file which is used to execute several commands at a project level. We should never edit the contents of this file. 
It is the file that allows us to run the server, apply migrations, create an admin account, create apps, and do a lot of crucial things with the help of Python.&#xA;&#xA;So, it&#39;s just the command-line utility that helps us interact with the Django project and applications for configuration.&#xA;&#xA;### Project Folder&#xA;&#xA;Now, this is the folder where the project-configuration files are located. **The name of the folder is the same as that of the project**. This makes the folder unique and hence creates a standard way to store files in a structured way.&#xA;&#xA;The folder is a Python package, which is indicated by the `__init__.py` file. The purpose of the `__init__.py` file is to tell the Python environment that the current folder is a Python package.&#xA;&#xA;The folder consists of five files:&#xA;&#xA;### settings.py&#xA;&#xA;This is a really important file from the project&#39;s point of view. It contains configurations that apply to the rest of the project (or all the apps).&#xA;&#xA;In the `settings.py` file, we can do some of the following operations:&#xA;&#xA;- List the `applications`, which might be pre-installed or user-defined.&#xA;- Configure the Middleware.&#xA;- Configure and connect the Database.&#xA;- Configure Templates/Static/Media files.&#xA;- Custom configuration for Time-Zones, Emails, Authentication, CORS, etc.&#xA;&#xA;Besides the above-mentioned options, there are a lot of project-specific or application-specific settings as well.&#xA;&#xA;Here, you&#39;ll have a question:&#xA;&#xA;### WHAT IS AN APPLICATION?&#xA;&#xA;An application is a component of a project. There are also Python packages that are made to be used as a Django app, which allows reusing the components. But when we are developing the project, we can break a complex process/project into individual apps. 
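To make the `applications` setting above concrete, here is a minimal sketch of the `INSTALLED_APPS` list in `settings.py`; the app name `posts` is a hypothetical user-defined app, while the `django.contrib` entries are the defaults a new project ships with:

```python
# settings.py (sketch): 'posts' is a hypothetical user-defined app;
# the django.contrib entries are the defaults a new project ships with.
INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'posts',
]
```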
&#xA;&#xA;For example, a project like a `Blogging Platform` might have an application for `posts`, `users`, `api`, `homepage`, etc. So the project might separate components like its API, homepage, posts, and users to keep the development independent and well organized.&#xA;&#xA;So, we can understand apps as separate components of a large project. We can also understand apps as reusable components: you can use the `posts` app in another project, or within another part of the same project, making it easier and faster to build.&#xA;&#xA;### urls.py&#xA;&#xA;This is a file for managing the `URL` routes of the project. We&#39;ll discuss URLs and Views in their own part of the series. This file basically has a list of URLs that should be paired with a `view` or any other function. In the project folder, the URL patterns mostly link a base URL to the URL file of a particular application. Don&#39;t worry if you don&#39;t get some of the terms; you&#39;ll understand them clearly when we see them in future parts of this series.&#xA;&#xA;### wsgi.py&#xA;&#xA;WSGI, or Web Server Gateway Interface, is a file that is used to configure the project for production or deployment. This takes care of serving the project when we deploy it to production. It is a synchronous interface, i.e. each worker listens to one request and responds to it at a time.&#xA;&#xA;Some of the common WSGI servers are [Gunicorn](https://gunicorn.org/), [Apache with mod_wsgi](https://docs.djangoproject.com/en/3.2/howto/deployment/wsgi/modwsgi/), [uWSGI](https://docs.djangoproject.com/en/3.2/howto/deployment/wsgi/uwsgi/), [CherryPy](https://docs.cherrypy.dev/), [Aspen](https://github.com/buchuki/aspen/blob/master/aspen/wsgi.py), etc.&#xA;&#xA;### asgi.py&#xA;&#xA;ASGI, or Asynchronous Server Gateway Interface, is similar to the WSGI file, but it configures an asynchronous web server. This file handles requests which might be asynchronous, i.e. 
the web server can accept multiple requests and handle them concurrently. We can even send tasks to the background using this type of server configuration.&#xA;&#xA;Some of the common ASGI servers are [Uvicorn](https://www.uvicorn.org/), [Daphne](https://docs.djangoproject.com/en/3.2/howto/deployment/asgi/daphne/), [Hypercorn](https://docs.djangoproject.com/en/3.2/howto/deployment/asgi/hypercorn/), etc.&#xA;&#xA;## Creating a Django Project Application&#xA;&#xA;So, let&#39;s create an application to see the structure of a basic app in Django. To create an app, we can use the `startapp` option with the `python manage.py` command, followed by the name of the app:&#xA;&#xA;```shell&#xA;python manage.py startapp name&#xA;```&#xA;&#xA;Here, `name` can be any app name you&#39;d like to give.&#xA;&#xA;## Application Structure&#xA;&#xA;After creating an app, whatever its name, it should have a structure similar to:&#xA;&#xA;![](https://res.cloudinary.com/dgpxbrwoz/image/upload/v1637731715/blogmedia/apptree_zr12s0.png)&#xA;&#xA;As we can see, there are a couple of things to be discussed here. The main components that we are going to work on while developing the application are: `models.py`, `views.py`, `tests.py`. There are other files that we will create manually, like `urls.py`, `serializers.py`, etc.&#xA;&#xA;You also need to add the name of the app, in quotes, to the `INSTALLED_APPS` list in the `settings.py` file. Something like this:&#xA;&#xA;![](https://res.cloudinary.com/dgpxbrwoz/image/upload/v1637748258/blogmedia/installed_apps_ozir1p.png)&#xA;&#xA;The application files can be summarized as:&#xA;&#xA;![App structure summary](https://res.cloudinary.com/dgpxbrwoz/image/upload/v1637745151/blogmedia/17_xyefpq.png)&#xA;&#xA;Let us look at each of the files and folders in the application:&#xA;&#xA;### models.py&#xA;&#xA;As the name suggests, we define the models for the database here. 
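As a tiny taste of what such a file looks like (purely illustrative; the `Post` model and its fields are hypothetical, and models get their own part later in this series), a `models.py` fragment might read:

```python
# models.py (sketch): a hypothetical Post model for a blog app.
# Each class attribute maps to a column in the underlying table.
from django.db import models

class Post(models.Model):
    title = models.CharField(max_length=100)
    body = models.TextField()
```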
The actual structure and the relationships are created with the help of Python and Django in this file. This is the place where the crux of the web application might be defined.&#xA;&#xA;There are various aspects to creating a model, like `Fields`, `Relationships`, `Meta-data`, `methods`, etc. These are defined with the help of Python along with Django&#39;s Models. In most cases, a model is like a single `table` in an actual database.&#xA;&#xA;The file is quite important and interesting, as it abstracts away the manual work of typing `SQL` queries to create the database.&#xA;&#xA;### migrations&#xA;&#xA;The migrations folder is how Django keeps track of changes to the database. Every `migration` records the queries that run to create or alter the tables or database structure. Since there might be multiple steps or iterations of the database schema, this folder stores that history.&#xA;&#xA;To make an analogy, it is like a `.git` folder, but for keeping track of the migrations or changes to the database.&#xA;&#xA;### admin.py&#xA;&#xA;This is the file for performing operations at the admin level. We generally use this file to register the models into the `Admin section` without touching any frontend part. It provides built-in `CRUD` (Create Read Update Delete) functionality for the model. This is really good for testing the model manually before putting effort into the frontend part.&#xA;&#xA;Other than this, we can customize the admin section with this file. We will see the details in the `Admin section` part of this series.&#xA;&#xA;### views.py&#xA;&#xA;This file acts as the controller/server logic in the Django framework. We can define functions and classes as responses to the incoming requests routed from the server via the `urls.py` file. 
There are a couple of approaches when it comes to writing the functions, like `class-based views`, `function-based views`, and others, depending on the type of operation being done.&#xA;&#xA;As said earlier, it is the `V` (View) in Django&#39;s `MVT` architecture. This is the place where we write the server-side logic to, say, render HTML pages (templates), query the database with CRUD operations, return an HTTP response, etc.&#xA;&#xA;### urls.py&#xA;&#xA;This is the file in which a list of URL patterns is mapped to particular view functions. This `urls.py` is specific to the app, and its routes might be prefixed with the URL route mentioned in the project folder&#39;s `urls.py` file.&#xA;&#xA;So, not going much deeper, to put it simply it&#39;s a map of a particular URL path to an associated function, which gets triggered (called) when the user visits the URL.&#xA;&#xA;### tests.py&#xA;&#xA;This is a file where we can write automated tests for the application. These might integrate with models, other applications, project settings, etc. Django makes easy and quick unit testing possible here, and Python modules can be brought in for advanced testing. It is quite easy to integrate Python modules and libraries into almost anything in a Django project.&#xA;&#xA;### apps.py&#xA;&#xA;This is the file for app-level configuration. Default fields, the app name, email settings, and other module-specific settings that are used in the models, views, or elsewhere can be defined here.&#xA;&#xA;## Other Folders/files&#xA;&#xA;Apart from the app folder and the project folder, we may have other folders like `templates`, `static`, `media`, etc. There are also Python package-specific folders which you may need to create.&#xA;&#xA;### Templates&#xA;&#xA;There are a couple of standard ways you can set up your Templates folder. 
Either in the root project or inside individual apps; the choice is yours, however you feel comfortable. I personally use only one template folder in the root directory, but you can keep it wherever you want; these two are the standard ways for ease of reading and maintaining projects. If the template folder is in the root folder, it has sub-folders named after the applications, and inside each `app-name` folder we put all the `.html` or other template files.&#xA;&#xA;### Static&#xA;&#xA;The Static folder is the folder in which you store your `css`, `javascript`, and `images` (images or media files that are used in the templates). This is a good way to improve performance, as in production the web server collects all the static files and stores them in a single place for responding to requests.&#xA;&#xA;Similar to the `templates` folder, the location can be modified or set as a configuration in the settings.py file. Usually, the static files (`.css`, `.js`, etc.) are stored in the root folder with app names as subfolders.&#xA;&#xA;### Media&#xA;&#xA;The media folder is where you can store media specific to the user or data processed by the application. For example, we can store the profile pictures of users, email attachments if it&#39;s an email application, thumbnails of the posts for a blogging platform, etc.&#xA;&#xA;The configuration of the Media folder is quite similar to the Static folder, but it has certain additional configurations. We&#39;ll look at them in their own sections in this series.&#xA;&#xA;Phew! That was all the folder structure you need to get started with Django. There might be other folders and files specific to a project, application, or Python module, but it won&#39;t be too hard to understand those as well.&#xA;&#xA;## Conclusion&#xA;&#xA;From this part, we were able to understand the folder structure of the Django framework. We explored the various files and folders with their use cases and their purpose. 
So, by reading the above description of the files and folders, you should have a rough idea about the flow of the development cycle in Django.&#xA;&#xA;In the next part, we&#39;ll start actually getting our hands dirty in the code and making our first view. Thank you for reading and Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>Django Basics: Setup and Installation</title>
      <link>https://www.meetgor.com/posts/django-basics-setup</link>
      <description>Introduction The crucial aspect of starting to learn any framework is the ease to set it up and Django by far is the easiest of the options out there. There is</description>
      <pubDate>Sat, 20 Nov 2021 00:00:00 UTC</pubDate>
      <content>## Introduction&#xA;&#xA;The crucial aspect of starting to learn any framework is the ease to set it up and Django by far is the easiest of the options out there. There is just a few lines of code to install django if you already have python installed in your system. In this article, we see how to setup a django project along with a virtual environment. &#xA;&#xA;If you already have python and pip installed, you can move on to the [virtual environment setup](#setting-up-virtual-environment-in-python).&#xA;&#xA;## Installing Python and PIP&#xA;&#xA;Django is a python based framework so that makes sense to have Python installed along with its package manager to use Django.  &#xA;&#xA;To install Python, you can visit the official [Python](https://www.python.org/downloads/) website to download any relevant version for your system (recommended 3.7 and above). &#xA;&#xA;Mostly the Python installation comes with the option to install `pip`(python&#39;s package manager) but if you missed that, that&#39;s fine, you can install the [get-pip.py](https://bootstrap.pypa.io/get-pip.py) file into your system and run the below code:&#xA;&#xA;```&#xA;python get-pip.py   &#xA;```&#xA;&#xA;Make sure the include the relative path to the file if you are not in the same folder as the file.&#xA;&#xA;So, that should be python setup in your local machine. To check that python was installed correctly, type in `python --version` and `pip --version` to check if they return any version number. IF they do, Congratulations !! You installed Python successfully and if not, don&#39;t worry there might be some simple issues that can be googled out and resolved easily. &#xA;   &#xA;---&#xA;&#xA;Let&#39;s move on to the actual setting of the Django project set up.   
&#xA;&#xA;## Setting up Virtual Environment in python&#xA;&#xA;A virtual environment is software that isolates the installation of dependencies and libraries for a specific project, making it a clean and safe environment for deployment as well as maintenance. &#xA;&#xA;In Python, we have a virtual environment package known as `virtualenv` that does exactly this: it installs Python-related packages into an isolated folder. We can set up the `virtualenv` package by following these steps:&#xA;&#xA;### Installing Virtualenv&#xA; &#xA;Firstly, install the virtual environment package. It&#39;s not mandatory, but it keeps things simple and clean for your project with respect to the rest of the OS. In Python, we have a module to create the virtual environment pretty easily:&#xA;&#xA;```&#xA;pip install virtualenv&#xA;```&#xA;&#xA;You can use `pip3` or `python -m pip`, or however you install normal Python modules. This just installs the `virtualenv` package; we still need to create a virtual environment in the current folder.&#xA;&#xA;### Creating a virtual environment&#xA;&#xA;We need to create the environment to tell the Python interpreter to treat the current folder as an isolated Python environment. Navigate to the folder where you want to create the project and enter the following command: &#xA;&#xA;```&#xA;virtualenv venv&#xA;``` &#xA;&#xA;Here, `venv` can be any name, like `env`; it&#39;s just a standard name kept for simplicity and your understanding. After this, you will see a folder of the same name, i.e. `venv` or whatever name you used. This is the folder where Python will keep every installation private to the local folder itself. 
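
As a side note: since Python 3.3, the standard library also ships a `venv` module that does the same job, so a virtual environment can be created without installing `virtualenv` at all. A minimal sketch, run in a throwaway directory so it does not touch your project:

```shell
# Create and inspect a virtual environment with the built-in venv module
# (Python 3.3+). A scratch directory is used here purely for demonstration.
cd "$(mktemp -d)"
python3 -m venv venv   # same effect as: virtualenv venv
ls venv/bin/activate   # the activation script is created as usual
```

On Windows, the activation script lands in the `Scripts` sub-folder instead of `bin`.
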
&#xA;&#xA;### Activating Virtual environment &#xA;&#xA;Now, we need to activate the virtual environment. This means that anything installed at the prompt with the virtualenv activated will be isolated from the rest of the system and installed only in the virtual environment. To activate the environment, we can use the command:&#xA;&#xA;#### for Linux/macOS:&#xA;&#xA;```&#xA;source venv/bin/activate&#xA;```&#xA;&#xA;#### for Windows:&#xA;&#xA;```&#xA;venv\Scripts\activate&#xA;```&#xA;&#xA;After this, your command prompt will have a `(venv)` attached at the beginning. This indicates you are in a virtual environment; whatever you do here, be it module installation or any Python-related configuration, will stay in the local folder itself.&#xA;&#xA;&#xA;## Installing Django&#xA;&#xA;After the virtual environment is set up and activated, you can install Django and get started with it. Django is a Python package, which can be easily installed using the package manager `pip`. &#xA;&#xA;Install Django using pip:&#xA;&#xA;```&#xA;pip install django&#xA;```&#xA;&#xA;## Create a Django Project&#xA;&#xA;After the installation is complete, you can start a Django project in the current folder using the django package we installed. There are several commands available in the django module which you can execute on the command line; we&#39;ll discuss them later.&#xA;For now, we will use the `startproject` command, one of the [management commands](https://github.com/django/django/tree/main/django/core/management/commands) in Django. [django-admin](https://docs.djangoproject.com/en/3.2/ref/django-admin/) is a command-line utility for administrative tasks related to Django.&#xA;&#xA;```&#xA;django-admin startproject myproject&#xA;```&#xA;&#xA;Here `myproject` can be your project name. After this, you will see one new folder and one file pop up.&#xA;&#xA;Namely, the `&lt;project-name&gt;` folder and the `manage.py` file. 
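
For reference, on recent Django versions the generated layout looks roughly like this (the names follow whatever project name you passed to `startproject`):

```
myproject/
├── manage.py
└── myproject/
    ├── __init__.py
    ├── settings.py
    ├── urls.py
    ├── asgi.py
    └── wsgi.py
```
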
We don&#39;t have to touch the `manage.py` file, but we use it in most commands to access Django&#39;s functionality; it is quite similar to the `django-admin` command. &#xA;&#xA;You can now run your basic server using the command: &#xA;&#xA;```&#xA;python manage.py runserver&#xA;```&#xA;&#xA;OR&#xA;&#xA;You can use the `django-admin` command, but you need to set certain environment variables and modify the settings.py file as per the project name. You can use `django-admin` by following the steps given in the Django [documentation](https://docs.djangoproject.com/en/3.2/ref/django-admin/#cmdoption-settings).&#xA;&#xA;The output of the command `python manage.py runserver` should be visible in the browser at `http://127.0.0.1:8000` as below:&#xA;&#xA;![Django-Base-Project](https://gitlab.com/MR_DESTRUCTIVE/tblog-img/-/raw/main/screenshotr_2021-11-20T15-40-50.png)&#xA;&#xA;That&#39;s it, the base Django project is installed on your system. To stop the server, simply press `Ctrl+C`. &#xA;&#xA;Follow the below GIF for a clear understanding of those instructions:&#xA;&#xA;![Django-basics-part2-setup](https://gitlab.com/MR_DESTRUCTIVE/tblog-img/-/raw/main/djp2.gif)&#xA;&#xA;---&#xA;&#xA;## Quick-Setup-Script&#xA;&#xA;Once you get the idea of the process of setting up a Django project, you can avoid manually typing the commands by executing a simple shell script (for Linux/macOS) or a batch script (for Windows). 
The script looks something like:&#xA;&#xA;For Linux/macOS:&#xA;&#xA;```shell&#xA;#!/usr/bin/env bash&#xA;&#xA;mkdir $1&#xA;cd $1&#xA;pip install virtualenv&#xA;virtualenv venv&#xA;source venv/bin/activate&#xA;&#xA;pip install django&#xA;django-admin startproject $1 .&#xA;clear&#xA;```&#xA;&#xA;Save it as a commands.sh file.&#xA;&#xA;For Windows: &#xA;&#xA;```batch&#xA;mkdir %1 &#xA;cd %1&#xA;pip install virtualenv&#xA;virtualenv env&#xA;call env\Scripts\activate&#xA;&#xA;pip install django&#xA;django-admin startproject %1 .&#xA;cls&#xA;&#xA;```   &#xA;Save it as a commands.bat file.&#xA;&#xA;For further instructions you can check out the [GitHub repository](https://github.com/Mr-Destructive/django-quick-setup-script) or a detailed [article](https://mr-destructive.github.io/techstructive-blog/django/web-development/python/2021/08/15/Django-Quick-Setup.html) about it.&#xA;&#xA;## Conclusion&#xA;&#xA;In this section, we were able to set up the Django project on our local system. In the next part, we will cover the `folder structure` of the Django project. We won&#39;t directly go into the code part because that is very easy once you understand the flow of the framework and its internal workings. So, thanks for reading and Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>Django Basics: What is it?</title>
      <link>https://www.meetgor.com/posts/django-basics-intro</link>
      <description>Introduction Welcome to Django Basics series, in this series we&#39;ll explore the basics of the Django web framework. In this part, we&#39;ll understand what this web</description>
      <pubDate>Tue, 16 Nov 2021 00:00:00 UTC</pubDate>
      <content>## Introduction&#xA;&#xA;Welcome to Django Basics series, in this series we&#39;ll explore the basics of the Django web framework. In this part, we&#39;ll understand what this web framework provides and what actually the back-end development consists of. We&#39;ll discuss where Django is used and why it is a great choice for beginners as well as experienced developers. &#xA;&#xA;## What is Django?&#xA;&#xA;[Django](https://developer.mozilla.org/en-US/docs/Learn/Server-side/Django/Introduction) is a back-end web framework. It is based on python which means you have to write most of the project&#39;s code in Python. But Django comes with a lot of boilerplate code and thus it becomes quite quick in the development. &#xA;&#xA;Django is an open-source framework, it is maintained by the Django Software Foundation Organization. You can view the source code at [GitHub](https://github.com/django/django).&#xA;&#xA;### BACKEND ?&#xA;&#xA;The term `backend` refers to the section or an essential component in Web development, it consists of a `database`, `API`, and the `web server` itself which allows the components to connect together. There might be other components like `load-balancers`, `middleware`, etc. But the core of web applications revolves around **Databases** and **API**. &#xA;&#xA;#### Database&#xA;&#xA;A database is a technology or tool that lets you store the data which might be used for serving the actual application, that might be a frontend app, standalone API, etc. The data you want to store might be generally the User Accounts, Content of the App, basically any some form of data(there are exceptions here, you can&#39;t directly store media files in DB). The Database allows to make content management and the application dynamic and can be personalized. We have certain types of databases like SQL(relational), NO-SQL, Cloud, Network, etc. The tools of these database management are PostgreSQL, MySQL, MongoDB, HarperDB,etc. 
These tools allow you to manage your database in a convenient way.  &#xA;&#xA;#### API&#xA;&#xA;An API, or Application Programming Interface, is a way for a frontend app or anything outside the system to access the database. An API allows you to query the database with GET, POST, DELETE, PUT, etc. kinds of operations/requests via the web server. In an API, we have endpoints (or URL routes) at which a particular designated operation can be performed. In APIs, we currently have four primary architectures, namely RESTful (quite famous and well established), SOAP, gRPC, and GraphQL (new and increasing in popularity). &#xA;&#xA;### Framework?&#xA;&#xA;A framework is a tool to do a certain task efficiently and avoid repetitive patterns by abstracting away many layers of development. Django is a high-level framework, which means it abstracts certain processes in making the application. It is ideal for beginners to get up and running with a professional full-stack web application (though it requires some learning).&#xA;&#xA;Django makes projects approachable for experienced as well as beginner web developers. The community and the ecosystem of Python are quite amazing, and there are a ton of resources to get you through your projects.  &#xA;&#xA;![](https://gitlab.com/MR_DESTRUCTIVE/tblog-img/-/raw/main/dj-1.png)&#xA;&#xA;The above is a high-level view of how Django project development works; a project might consist not of a single application but of several standalone applications working together to make one Django project. There is a lot of abstraction in Django, like the middleware, session management, security, etc. This should be a good overview of the development map in Django.&#xA;&#xA;Django follows the MVT architecture. An architecture is a standard way of structuring an application/project to ease the workflow and make the experience consistent. 
&#xA;&#xA;![](https://cdn.hashnode.com/res/hashnode/image/upload/v1635079576954/WcjcokoiX.png)&#xA;&#xA;The above diagram depicts the architecture in Django; the components in the Django server include the Model, View, and Template.&#xA;&#xA;### Model&#xA;&#xA;`Model` refers to the design of the database, or a blueprint of the data that is bound to the application in the project. &#xA;&#xA;### View&#xA;&#xA;The `View` is the part that controls the way the data should be presented, or how a response should be returned for a request from the client. &#xA;&#xA;### Template&#xA;&#xA;The `Template` is the markup or document that is rendered on the client side; templates are controlled by the views and parsed with the data from the models.&#xA;&#xA;## Why would you need it?&#xA;&#xA;As a developer, you will find the need to host your projects on the internet. Learning and deploying a web server from the ground up for that can be quite complex and time-consuming, and Django solves this problem quite well. Not only is it easy, it is also scalable at a production level, making it quite a robust choice for anyone. As a bonus, it is based on Python, which makes writing code even easier for people who prefer to stay at an abstracted level of programming. Python has by far one of the richest sets of libraries and utilities for any domain, and this integration with Django is a deadly combination. &#xA;&#xA;#### Batteries included?&#xA;&#xA;Django solves many problems by abstracting away things like managing the database, rendering dynamic templates (HTML), properly structuring and serving static and media files, keeping the project well-organized, and much more. You just have to get the actual thing done, i.e. the server logic (or how to design the API/database models). On top of that, Django has a built-in, fully fledged Admin section and a User model. The Admin section is where you can manage the project in a better way without touching the code. 
It also has certain applications/libraries that make developing APIs, integrating various databases, building forms for posting data, and adding Bootstrap support a lot easier. It&#39;s like a `plug and play` kind of thing for the development of web applications. &#xA;&#xA;Hence, it is rightly called the `Batteries Included` web framework.&#xA;&#xA;### Key features of Django&#xA;&#xA;- Ease in integrating a database&#xA;- Flawless Django Template Engine&#xA;- Easy to scale up/down&#xA;- Python libraries support out of the box&#xA;- Amazing Documentation / Helpful community&#xA;- Developing Production-ready projects quickly&#xA;- Baked in support for testing, APIs, cookies, sessions, etc&#xA;- Optimized for security, SEO, and DRY(don&#39;t repeat yourself) principles&#xA;&#xA;## Applications built with Django&#xA;&#xA;Django is used in quite a few famous applications that you might be using daily. &#xA;&#xA;Django along with Python powers top applications on the internet like:&#xA;&#xA;1. YouTube&#xA;2. Instagram&#xA;3. Spotify&#xA;4. Disqus&#xA;5. Dropbox&#xA;6. Pinterest&#xA;7. National Geographic&#xA;8. Mozilla&#xA;9. BitBucket&#xA;10. Discovery Network&#xA;&#xA;You have to admit, it is powerful and has firm grounds in the tech industry. It&#39;s highly unlikely that Django will be overtaken by another framework for at least some years to come.  &#xA;&#xA;&gt; Django is a tool to build web applications fast, in a scalable and Pythonic way&#xA;&#xA;## What will this series cover?&#xA;&#xA;Learning Django from the ground up. We will learn the setup, folder structure, and architecture of Django; what apps, views, URLs, models, serializers, static and template files are; and there is a ton more to be covered. 
&#xA;&#xA;### Resources to learn Django&#xA;&#xA;- [Django Official Docs](https://www.djangoproject.com/start/)&#xA;- [Very Academy - Django Playlist](https://www.youtube.com/c/veryacademy/playlists?view=50&amp;sort=dd&amp;shelf_id=2)&#xA;- [Codemy.com - Django](https://www.youtube.com/watch?v=HHx3tTQWUx0&amp;list=PLCC34OHNcOtqW9BJmgQPPzUpJ8hl49AGy)&#xA;- [Corey Schafer](https://www.youtube.com/watch?v=UmljXZIypDc&amp;list=PL-osiE80TeTtoQCKZ03TU5fNfx2UY6U4p)&#xA;- [Telusko](https://www.youtube.com/watch?v=SIyxjRJ8VNY&amp;list=PLsyeobzWxl7r2ukVgTqIQcl-1T0C2mzau)&#xA;&#xA;## Conclusion&#xA;&#xA;From this article, we were able to understand the Django framework: what it is and why it should be used, at a high level. Further, we explored the web application (backend) components that Django targets to ease the development of applications. We also saw the baseline architecture that Django uses for projects. &#xA;&#xA;In the next section, we&#39;ll start the actual coding in Django: first how to set up the environment, then understanding the folder structure, and so on. I hope you enjoyed the article. Thank you for reading. Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>Basics of curl command</title>
      <link>https://www.meetgor.com/posts/curl-basics</link>
      <description>Introduction We all might have used the curl command but might be unaware of it. It&#39;s super old and still serves a great purpose. It has been available since 19</description>
      <pubDate>Fri, 05 Nov 2021 00:00:00 UTC</pubDate>
      <content>## Introduction&#xA;&#xA;We all might have used the curl command but might be unaware of it. It&#39;s super old&#xA;and still serves a great purpose. It has been available since 1996 and still is&#xA;widely used in many embedded technologies, web API testing, CLI applications,&#xA;etc. In this article, we&#39;ll see some basics of using the curl command along with&#xA;its applications.&#xA;&#xA;## What is the curl command?&#xA;&#xA;Curl or cURL command is the utility or tool to access the internet from the command&#xA;line interface using various protocols. This looks trivial but it can blow up&#xA;your mind! Most people use this tool for fetching and processing the&#xA;data from the servers/internet from their terminal without the browser but&#xA;there is a lot more to it. It is used in various embedded devices for accessing&#xA;the network in a lightweight and accessible way. Let&#39;s see how you can use the curl&#xA;command from the very basics.&#xA;&#xA;&#xA;## Why do we need it?&#xA;&#xA;Before we talk about how to use the curl command let&#39;s talk about why might we need&#xA;that? There are a lot of reasons and it even depends on the application you are&#xA;using.  You can use curl to test your API, well there are other tools like&#xA;POSTMAN, Insomnia, etc but for keeping things simple you can quickly get in&#xA;with curl and test some endpoints.  You might require curl for creating some&#xA;CLI applications that require fetching/posting to an URL over the internet.&#xA;If you are using the terminal, curl integrates really very well with the shell&#xA;programming languages like BASH, ZSH, etc So, after making WHY out of the way,&#xA;let&#39;s start with the actual content. &#xA;&#xA;&#xA;## Structure of curl command&#xA;&#xA;**curl or Client URL is a command-line utility that helps in accessing/posting&#xA;data with various protocols over the internet.** It basically serves as a&#xA;bare-bones browser URL search bar.  
You can&#39;t render those pages like an&#xA;actual GUI browser, but what you can get is the HTML source code, JSON response,&#xA;etc. That&#39;s still quite powerful and used in tons of applications. &#xA;&#xA;```&#xA;curl URL arguments &#xA;```&#xA;&#xA;The above is the basic structure of the curl command. We&#39;ll see the argument&#xA;structure in depth in the next section. First, let&#39;s take a simple curl command with just the URL given.&#xA;&#xA;```bash&#xA;curl &#34;https://github.com&#34;   &#xA;```&#xA;With this query to `github.com`, you are literally going to `github.com` and getting the entire HTML source code of the page as the response.&#xA;If you don&#39;t want to spam the output in the terminal, you can redirect the output to a file.&#xA;&#xA;```bash&#xA;curl &#34;https://github.com&#34; &gt;temp.html&#xA;```&#xA;With this command, we store the output of the command in the file temp.html; it can be any other file you like. &#xA;&#xA;### Arguments &#xA;&#xA;It turns out that you can even pass certain arguments to the `curl` command to get desired and modified results. Let&#39;s take a look at some of them.&#xA;The [entire list of arguments](https://curl.se/docs/manpage.html) is quite huge&#xA;and baffling, but this shows how customizable the command is. &#xA;&#xA;- `-s` (silence the progress bar)&#xA;- `-X` (specify the request method, `POST`, `GET`, etc., for the URL)&#xA;- `-o` (output to a file)&#xA;- `-H` (provide a header with the request)&#xA;- `-d` (provide the data, e.g. 
in a POST request)&#xA;&#xA;```bash&#xA;curl -s -o temp.html &#34;https://github.com&#34;&#xA;```&#xA;&#xA;This command doesn&#39;t show the progress bar and simply writes the response to a&#xA;file, keeping the terminal output clean. Note that `-o` takes the output filename&#xA;as its argument, followed by the URL.&#xA;&#xA;### Integration with other commands &#xA;&#xA;As said, the `curl` command can be integrated well with other commands using piping in the shell, assigning to variables, and so on.&#xA;&#xA;Let&#39;s see how we can store a `JSON` response in a Bash variable.&#xA;&#xA;```bash&#xA;resp=$(curl -H &#34;api-key: N2vDzMyEeYGTxjUTePhC8bYd&#34; https://dev.to/api/users/me)&#xA;&#xA;echo $resp&#xA;```   &#xA;Here, we are fetching the `JSON` response from the `dev.to` [API](https://developers.forem.com/api/). The weird string `N2vDzMyEeYGTxjUTePhC8bYd` is my [dev.to API token](https://dev.to/settings/account) (don&#39;t worry, I have revoked it :) ); we have provided the `-H` argument, a header that passes the API key with the request. &#xA;We can store the output of the curl command by using `$( )` and assigning it to a variable name of your choice.&#xA;&#xA;```bash&#xA;username=$(curl -H &#34;api-key: N2vDzMyEeYGTxjUTePhC8bYd&#34; https://dev.to/api/users/me | grep -o -P &#39;(?&lt;=username&#34;:&#34;).*(?=&#34;,&#34;name)&#39;)&#xA;```&#xA;Here, we have stored the username from the `JSON` response in the variable `username`. We have piped the curl command so that we can work with the `JSON` response, extract the content we want, and store the final result in a variable.&#xA;In this case, we are using `grep` to filter out the content between the keys `username` and `name`, and thus we get the value we desired. To see the value, you can always run the echo command as below:&#xA;```bash&#xA;echo $username&#xA;```   &#xA;So, that&#39;s how the `curl` command integrates flawlessly with Bash and other shell programming languages. 
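
If you want to try this extraction technique without an API token, the same pipe-and-filter pattern can be reproduced offline; the JSON string and its values below are made up for the demo:

```shell
# Stand-in for the API response (hypothetical values), so no network needed.
resp='{"username":"mr-destructive","name":"Meet"}'
# Pull the value of the "username" key out of the JSON with grep and cut.
username=$(echo "$resp" | grep -o '"username":"[^"]*"' | cut -d'"' -f4)
echo "$username"   # prints: mr-destructive
```

This variant uses a plain `grep` pattern plus `cut`, which avoids the Perl-regex `-P` flag on systems whose grep lacks it.
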
&#xA;&#xA;## Where is it used?&#xA;&#xA;`curl` is actually used in API testing, CLI applications, web scraping, etc. It&#39;s a great tool for terminal lovers. Let&#39;s see where we can actually use the curl command to make some good projects.&#xA;&#xA;### API Testing&#xA;&#xA;We can use `curl` to test an API; it might be an API you have made, or you can simply test and play with other publicly available APIs. You can get an in-depth guide on [Testing a REST API with curl](https://www.codepedia.org/ama/how-to-test-a-rest-api-from-command-line-with-curl/).&#xA;Actually, curl can do more than just testing. I have made a [bash script](https://gist.github.com/Mr-Destructive/80860664b1014ef0b94092d68ead1044) that posts some data to a database through an API so that I don&#39;t have to do it manually. That is the kind of automation I love to do, and curl just did that.&#xA;&#xA;### Web Scraping&#xA;&#xA;Web scraping is usually associated with Python, but I have done it with Bash.&#xA;That might be an outdated idea, but it is a good task for learning the basics of&#xA;web scraping with Bash ;). I must say that sed, awk, and grep are tools that are&#xA;powerful as heck for these tricks. I have made this&#xA;[crypto-currency](https://mr-destructive.github.io/techstructive-blog/bash/2021/07/15/BASH-Crypto-Coingecko.html)&#xA;and this&#xA;[dictionary](https://mr-destructive.github.io/techstructive-blog/bash/2021/07/27/BASH-script-dictionary-scrap.html)&#xA;scraper with Bash. Web scraping can be done with the curl command by fetching an&#xA;API, if one exists, or the website itself. We need to find the particular fields,&#xA;classes, or IDs of the elements the required data might be in, and then extract&#xA;and filter it using tools like grep, sed, or awk.&#xA;&#xA;&#xA;### CLI Applications&#xA;&#xA;We can make CLI applications, like creating a terminal view of existing&#xA;applications using their APIs or websites. 
I recently made a CLI for&#xA;[cross-posting articles](https://github.com/Mr-Destructive/crossposter) to&#xA;dev.to, hashnode, and medium. That project is still in progress (tons of bugs)&#xA;but still does a decent job. `curl` is definitely not the only command&#xA;that works here, but the project would look incomplete without `curl`.&#xA;&#xA;**There might be other applications as well; who knows, there is a lot to do with this command.** If you know one, please let everyone know in the comments.&#xA;&#xA;### References:&#xA;&#xA;Special thanks to the creator of the curl command, [Daniel Stenberg](https://github.com/bagder), and the developers who are still contributing to and maintaining this great project.&#xA; &#xA;### Conclusion&#xA;&#xA;So, from this article, we were able to understand the basics of the `curl` command and its applications in actual programming. Hope you liked it. Thanks for reading and until next time, Happy Coding :)&#xA;</content>
      <type></type>
    </item>
    <item>
      <title>Crossposting with a single script: Crossposter.sh</title>
      <link>https://www.meetgor.com/posts/crossposter-shellscript</link>
      <description>Introduction If you have been writing articles you know the pain to get some attention, if you have already been cross-posting your articles it usually takes so</description>
      <pubDate>Sun, 31 Oct 2021 00:00:00 UTC</pubDate>
      <content>## Introduction&#xA;&#xA;If you have been writing articles you know the pain to get some attention, if you have already been cross-posting your articles it usually takes some time to do that. This task can be automated with a shellscript. If you have been cross-posting articles on `medium.com`, `dev.to` and at `hashnode.com`, then I have a treat for you. &#xA;&#xA;Introducing **crossposter.sh**!!&#xA;&#xA;## What is Crossposter.sh?&#xA;&#xA;### Crosspost to dev.to/hahsnode/medium from the command line.&#xA;&#xA;Crossposter.sh is a shellscript(BASH) to automate crossposting to platforms like dev.to, medium.com and hashnode.com. The script takes in markdown version of your post with a few inputs from you and posts it to those platforms. You would require a token/key for each of those platforms to post it from the command line. You can check out the official repository of [Crossposter](https://github.com/Mr-Destructive/crossposter).&#xA;&#xA;The actual script is still not perfect (has a few bugs). Though it posts on `dev.to` and `medium.com` easily, the `hashnode.com` is buggy as it parses the raw markdown into the post and doesn&#39;t render as desired. So, **its a under-development script**, fell free to raise any issues or PRs on the official GitHub repo.   &#xA;&#xA;Run the script on a bash interpreter with the command:&#xA;&#xA;`bash crosspost.sh`&#xA;&#xA;For posting the article you need to provide the following details:&#xA;&#xA;## Front-Matter&#xA;&#xA;### Meta data about the post&#xA;&#xA;- Title of Post&#xA;- Subtitle of Post&#xA;- Publish status of post(`true` or `false`)&#xA;- Tags for the post (comma separated values)&#xA;- Canonical Url (original url of the post)&#xA;- Cover Image (URL of the post&#39;s image/thumbnail)&#xA;&#xA;This information is a must for `dev.to` especially the `title`. 
It should be provided in the same order as given below:&#xA;&#xA;```yaml&#xA;&#xA;---&#xA;title: The title of the post&#xA;subtitle: The description of your article&#xA;published: true&#xA;tags: programming, anythingelse&#xA;canonical url: url of your original blog&#xA;cover_image: coverimage_url&#xA;---&#xA;```&#xA;&#xA;There is no need to enclose any of them in quotation marks. The `published` argument will be `true` if you want to publish the post and `false` if you want to keep it in your drafts.&#xA;&#xA;In the demonstrations, we just need to enter the tokens once. The tokens will be stored locally in the `keys.txt` file and retrieved later within the script.&#xA;&#xA;## Posting on **dev.to**:&#xA;&#xA;Posting on dev.to requires their `API key`, which can be generated on the [Dev Community API Keys](https://dev.to/settings/account/) page. From there you can generate a new key with any name you like. You just need to enter the key in the CLI once, or manually enter it in the `keys.txt` file in the format `dev.to:key` on the first line. This will be used for future cross-posting whenever you execute the shell script (`bash crosspost.sh`).&#xA;&#xA;You can provide the [front matter](#front-matter) manually in your markdown file, or you will be prompted for the input. So, that is all you will require for posting on dev.to from the command line.  
&#xA;&#xA;Let&#39;s see the script in action: &#xA;&#xA;![dev.to](https://gitlab.com/MR_DESTRUCTIVE/tblog-img/-/raw/main/devto.gif)&#xA;   &#xA;If you want to add more stuff to the post, you can check out the [DEV.to API docs](https://developers.forem.com/api#operation/createArticle), which is powered by [Forem](https://www.forem.com/); there are a ton of options you can hook into the front-matter in the shell script.&#xA;&#xA;**NOTE: There is a limit of 10 requests per 30 seconds, so keep that in mind while testing the script and don&#39;t spam.**&#xA;&#xA;## Posting on **hashnode.com**:&#xA;&#xA;This part is still under development, as it only displays the raw markdown in the post; also, the `tags` are too heavy to implement from the API, as the `id` of every tag is required along with the `slug` and `name`. Still, it serves some purpose at least. For posting on `hashnode.com`, we need a `Personal Access Token`. This can be generated by going to [Developer Settings](https://hashnode.com/settings/developer). You will also require the user-id of your `hashnode` account. You can get your user-id/username from the [settings](https://hashnode.com/settings) tab under profile information. We require the username for posting to the Publication blog, if any. As usual, the `Personal Access Token` is used for interacting with [Hashnode&#39;s GraphQL API](https://api.hashnode.com/). The API is quite user friendly and provides everything in one place. There are docs for running each and every `query` and `mutation` present in the API. &#xA;&#xA;You can paste the token when prompted by the script, or manually type it into the `keys.txt` text file as `hashnode:token` on the 4th line. Yes, that should be on the `4th` line; that makes retrieving it much easier and safer. Next, also input the `username` when the script asks for it, or again type `hashnode_id:username` on the `5th` line of the `keys.txt` text file. 
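
Putting the per-platform notes together, `keys.txt` holds one credential per line in a fixed order. All values below are placeholders, and the exact content of the 3rd line (the Medium author id) is whatever the script itself writes after fetching it:

```
dev.to:your-devto-api-key
medium:your-medium-integration-token
your-medium-author-id
hashnode:your-hashnode-personal-access-token
hashnode_id:your-hashnode-username
```
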
Please enter the credentials from the script prompt so as to avoid errors and misconfigurations when doing it manually.&#xA; &#xA;This will create the post on hashnode with the title, subtitle, and cover image correctly but will mess up the content. I tried hard, but it&#39;s just not happening. There needs to be some character for a newline, as the API rejects the `\r\n` characters passed in, so I have substituted them with `&lt;br&gt;` and the result is raw markdown. **As the Hashnode API is still under development and they are bringing changes and new features in, the API should improve in its core functionality and make it much easier for creating some common queries**. So, I&#39;ll create an issue on GitHub for posting the actual content via the script.&#xA;&#xA;So, this is the demonstration of posting on hashnode.&#xA;&#xA;![hashnode](https://gitlab.com/MR_DESTRUCTIVE/tblog-img/-/raw/main/hashnode.gif)&#xA;&#xA;&#xA;## Posting on **medium.com**:&#xA;&#xA;The Medium API is much more versatile and markdown friendly, though it has some limitations on the number of posts you can make in a day. For posting on `Medium.com`, we will require the `Integration Token`, which can be generated on the [settings tab](https://medium.com/me/settings). Similar to `hashnode`, you can name the token whatever you like and then get the token. Paste the token when prompted by the script or manually type it into the `keys.txt` file as `medium:token` on the `2nd` line. We also require the Medium_id, but we can get that from the token itself: inside the script, once the token is obtained, a curl command is executed to fetch the `id`, and it is stored on the next (`3rd`) line in the `keys.txt` file for actually posting on `medium.com`. So that is all the configuration you need for posting on `medium.com`.&#xA;&#xA;There is some documentation on the [Medium API](https://github.com/Medium/medium-api-docs); we can even post to a Publication, which shall be added in the future. 
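
That token-to-id step can be sketched like this (helper names are made up; the endpoints are the ones from the Medium API docs):

```bash
# Hypothetical sketch: read the Medium token from line 2 of keys.txt
# ("medium:token") and resolve the user id from the /v1/me endpoint.

medium_token() {
  sed -n '2s/^medium://p' keys.txt
}

# Crudely pull the "id" field out of the /v1/me JSON response with sed.
medium_id_from_json() {
  printf '%s' "$1" | sed -n 's/.*"id":"\([^"]*\)".*/\1/p'
}

fetch_medium_id() {
  medium_id_from_json "$(curl -s -H "Authorization: Bearer $(medium_token)" "https://api.medium.com/v1/me")"
}
```

The fetched id can then be stored on the 3rd line of `keys.txt` and reused for the posts endpoint.
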
Cover images can also be posted on Medium; it is not currently done, but that can again be a #TODO. **The tags are not rendered on Medium yet with the script.** The way we can parse strings is limited in BASH, so this might still be a doable thing later. Most of the checkboxes are ticked, like title, subtitle, cover-image, canonical url, and importantly the content.&#xA;&#xA;Let&#39;s look at posting on Medium from the script.&#xA;&#xA;![medium](https://gitlab.com/MR_DESTRUCTIVE/tblog-img/-/raw/main/medium.gif)&#xA;&#xA;&#xA;## All platforms: &#xA;&#xA;Now, once you have configured everything, you can opt for choice `4`, that is, post on all platforms (dev.to, hashnode, and medium); but as hashnode is not looking like a good option right now, there is option `5` for only `dev.to` and `medium`. &#xA;&#xA;![allplatforms](https://gitlab.com/MR_DESTRUCTIVE/tblog-img/-/raw/main/crossposter.gif)&#xA;&#xA;## Why use Crossposter.sh?&#xA;&#xA;This might not be a big issue for most people, but it was a good side project to work on, learn more about how APIs work, and get some ideas on the design of the platforms. It is also quite time saving to cross-post on 3 different platforms within a minute or two. You can tailor your own script as per your specifications and desires.   &#xA;&#xA;So, if you are an author on all of the mentioned platforms, please give it a try. Other platforms are welcome for contributions. If you find anything unexpected, please report it in the `issues` tab. &#xA;&#xA;## Script&#xA;&#xA;The script mostly leverages `curl`, `sed`, `cat` and some other basic utilities in BASH. &#xA;&#xA;### Using `curl` for posting the article from APIs&#xA;&#xA;Curl is a lifesaver for this project; without this tool, the script would not be as simple and efficient. 
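
Before diving into the individual commands, the choice menu described above can be sketched as a simple `case` statement (a minimal sketch with made-up names, not the actual script; choices 1-3 for individual platforms are my assumption):

```bash
# Hypothetical sketch of the menu flow: map the numeric choice
# to the set of platforms to post to.
platforms_for_choice() {
  case "$1" in
    1) echo "dev.to" ;;
    2) echo "hashnode" ;;
    3) echo "medium" ;;
    4) echo "dev.to hashnode medium" ;;
    5) echo "dev.to medium" ;;
    *) echo "" ;;
  esac
}
```
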
Let&#39;s see some quick commands used in the script.&#xA;&#xA;&#xA;```bash&#xA;curl -H &#34;Content-Type: application/json&#34; -H &#34;api-key: $key&#34; -d &#39;{&#34;content&#34;:&#34;&#39;&#34;$body&#34;&#39;&#34;}&#39; &#34;$url&#34;&#xA;```&#xA;&#xA;So, the above command is quite basic; some more additions are made as per the specifications of the platform. But let us understand the structure of the command we are sending to the APIs. The first part is the header (`-H`); here we specify the content type that is going to get parsed and the api-key used to access the API. Next, we have the body or the data (`-d`); here we pass in the actual contents, which might be the front matter along with the markdown content. Finally, we have the `url` where we send the `POST` request, i.e. the `API endpoint`. The `\` is the escape character that preserves the literal value of the next character; in short, it lets us break a long command onto the next line.&#xA;&#xA;The weird-looking `&#39;&#34;$body&#34;&#39;` is used to splice the value of the variable `body` into the single-quoted string, as in BASH we can only expand variables inside double quotes. We are using single quotes because we have to pass the `json` object, which already has double quotes in it.  &#xA;&#xA;### Using `sed` for editing text&#xA;&#xA;Sed is a super-powerful stream editor; it&#39;s somewhat similar to Vim without an interface, only commands. We use this tool to manipulate the front matter for posting on the platforms by extracting it into variables in BASH. We also use it to enter the API keys input by the user from variables into the file at a specific position to retrieve later. &#xA;   &#xA;```bash&#xA;sed -i &#34;1a title: $title&#34; file.txt&#xA;```   &#xA;&#xA;&#xA;Here, we are appending (`a`) after the 1st line the text `title: $title`; here `$title` is the variable, so we are technically substituting the value of the variable `title`. We are editing the file `file.txt` in-place (`-i`), i.e. 
we are editing it live without creating any temp or backup files.       &#xA;&#xA;```bash&#xA;sed -n -e &#34;s/dev.to://p&#34; keys.txt&#xA;```&#xA;&#xA;Here we are essentially getting the text after a particular pattern. In this case, we are searching the `keys.txt` file for the string `dev.to:`, and anything after that till the end of the line is returned; we can further store it in a variable and do all sorts of operations.&#xA;      &#xA;### Using `awk` for programmatic editing &#xA;&#xA;```bash&#xA;awk &#39;{print $0&#34;\\r\\n&#34;}&#39; temp.txt &gt; file.txt&#xA;```&#xA;&#xA;AWK is a command-line utility for manipulating text and applying certain operations/patterns programmatically. We use this tool to add a literal `\r\n` to the end of each line; the APIs can&#39;t parse the file contents directly, so we have to add certain characters before the end of each line and do further operations.&#xA;&#xA;```bash&#xA;cat temp.md | tr -d &#39;\n&#39; &gt; temp.txt&#xA;```   &#xA;&#xA;After we have added the `\r\n` characters to the end of each line, we can simply use `cat` and `tr` to merge all the lines into a single line. This is how we pass the contents to the API more safely and concisely; of course, we need to read the file into a variable first.&#xA;&#xA;OK, I won&#39;t bore anyone with more BASH, but those were some of the important commands in the script that form the backbone of the cross-posting and handling text with the APIs.&#xA;&#xA;## Conclusion&#xA;&#xA;So, we can see `crosspost.sh` is a BASH script that cross-posts markdown articles, with a few inputs, to 3 different platforms within a couple of minutes. This article was basically to demonstrate the project and its capabilities, while also highlighting the present issues. I hope you liked the project; please do try it and leave your feedback. Thank you for reading. Until next time, Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>Hacktoberfest 2021</title>
      <link>https://www.meetgor.com/posts/hacktoberfest-2021</link>
      <description>Introduction This was my second year as a Hacktoberfest participant, First year was very dry and not so exciting as this turned out to be a cracker of an experi</description>
      <pubDate>Thu, 28 Oct 2021 00:00:00 UTC</pubDate>
      <content>## Introduction&#xA;&#xA;This was my second year as a Hacktoberfest participant, First year was very dry and not so exciting as this turned out to be a cracker of an experience. With more than month gone in with only a few contributions, the second half of October was a splendid one for me where I made a lot more than in entire 2 years.  &#xA;&#xA;This is my Contributions so far in this month:&#xA;&#xA;![](https://pbs.twimg.com/media/FCXfmqEVIAMmwug?format=jpg&amp;name=large)&#xA;&#xA;## Maintaining the Momentum&#xA;&#xA;Since the beginning of September, I started to contribute to open-source little by little so as to really progress my way into learning new stuff and networking with more people. I also had Hacktoberfest in my mind and so I started with simple issues and got a different perspective of Open-source as it is another way to do things pretty quickly and feeling of getting things done for someone else as a community work. Finding Issues perfect for you is hard but shaping yourself with the requirements for the issue is what you can achieve over a period of time.&#xA;&#xA;So I basically had some 5-6 pull requests merged in some Repositories, I don&#39;t trust numbers when it comes to open-source as it might be some `documentation changes`, `branch name changes`, `translations`, etc. This might not be the best contribution I could have made but breaking the ice was an important step, for that I bought my expectations down to contribute to big organizations and projects and started sifting some doable issues. Since then, Hacktoberfest was around the corner and it just carried the momentum of my open-source contribution further.&#xA;&#xA;## Getting the hang of Open Source&#xA;&#xA;So, till mid-October, I just had 2 pull requests with only 1 merged. 
I was out of town for a week and had some college projects to complete, so I couldn&#39;t catch up with the contributions, but after a few days of getting back on a regular schedule, I was able to submit half a dozen PRs in a week (all merged). One of them was to the official `Jekyll` repository :) I was pretty much elated with that contribution; though it was only about adding 4-5 lines of documentation, it was something that can be useful for people, as it was for me, and it was not previously there in the official docs. &#xA;&#xA;You will say, is open source about merging PRs? No, it&#39;s not, especially the `hacktoberfest`-labeled ones. I have contributed to only 5 projects with the hacktoberfest-accepted label; the others were projects which I personally liked and wanted to genuinely contribute to. I also would like to address some issues and things which are not in the spirit of open source. Firstly, **most of the issues with hacktoberfest-accepted labels are front end, trivial, and self-assigned. I understand it&#39;s meant for beginners, but it&#39;s not only for beginners, or is it? I might be wrong here.** Still, the maintainers need to up the standards a bit for equal opportunity for everyone to contribute in the spirit of open source.&#xA;&#xA;I wanted to be a maintainer this year but didn&#39;t feel very confident about it. It can be a challenge for maintainers as well to fetch attention and come up with good issues and projects. That&#39;s a thought for another year; until then, I will be a happy contributor.&#xA;&#xA;&gt; ### Hacktoberfest is only about Open Source but Open source is not only about Hacktoberfest&#xA;&#xA;Well, Hacktoberfest is about open source, right? But open source is not only about Hacktoberfest, like contributing just to get 4 PRs merged and then forgetting open source (also getting some swag). This was my mistake last year :( I just didn&#39;t touch open source after Hacktoberfest. 
This year I would definitely not repeat the mistake and continue providing value and gain more experience. &#xA;&#xA;## What were my learnings&#xA;&#xA;I learned some skills that I might not have otherwise thought about learning. Understanding `Svelte`, `Github Actions` for `Go` lang, leveraging `Django` and `Bash` skills were some of my key takeaways from this year of Hacktoberfest. This is too much for me compared to the previous year in which I just used some `C++` and `Bash` to create an Algorithm or function. How stupid of me for contributing to that kind of project and pulling up 4 PRs for that, though I was a beginner and have come too far from making that mistake again. Thanks to everyone who gave me a chance to contribute.   &#xA;&#xA;Let&#39;s see my top 4 contributions as per my opinion:&#xA;&#xA;- [Created a ripple effect button as a `Svelte` component](https://github.com/Ananto30/golpo-svelte/pull/14)&#xA;- [Created a BASH script to convert a pip list to freeze format output](https://github.com/soumya997/Listreqs/pull/2)&#xA;- [Added `GitHub` Actions workflow for linting in `go`](https://github.com/julien-bouquet/geo-api/pull/18)&#xA;- [Additional Documentation for running Jekyll local](https://github.com/jekyll/jekyll/pull/8852)&#xA;&#xA;&#xA;## What&#39;s next in Open Source for me&#xA;&#xA;I&#39;ll continue to sort and pick up some issues in my spare time and try to learn something that I have never worked with or am too excited to work on. I have a lot of plans for some Open source projects of existing applications and would like to make them from the ground up. &#xA;&#xA;Some of the things I&#39;m looking for in open source next:&#xA;&#xA;- Open-source alternative to existing projects/apps&#xA;- Any new application/project&#xA;- Automation of tasks (using Python/BASH)&#xA;- Writing documentation for existing projects&#xA;&#xA;Don&#39;t take my word for it, you&#39;ll see it in action soon(probably in 2022) XD. 
Until then it&#39;s about learning and documenting the process.&#xA;&#xA;## Conclusion&#xA;&#xA;So, it&#39;s almost Halloween, with Diwali around the corner, and sadly the end of a month-long celebration of open source. Though you can contribute to open source anytime, this is a month where you can do more than just open source. This year for me was an absolute delight and definitely an improvement over the horrendous previous experiences. The things I spent time learning in lockdown, when there was no result/reward for the effort, are finally getting rewarded, and I am much more confident than I ever was. Hopefully you have read the entire article, but it&#39;s fine even if you haven&#39;t taken the time to. Thank you for being here. Until then, Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>Flask and Django - the backend web frameworks</title>
      <link>https://www.meetgor.com/posts/flask-django-frameworks</link>
      <description>Introduction We all have seen the buzz around web frameworks like Django, Flask, Node.js, etc but have you taken time to learn all of them? No, and you shouldn&#39;</description>
      <pubDate>Sun, 24 Oct 2021 00:00:00 UTC</pubDate>
      <content>## Introduction&#xA;&#xA;We all have seen the buzz around web frameworks like Django, Flask, Node.js, etc but have you taken time to learn all of them? No, and you shouldn&#39;t! Because many web frameworks share the same principle and workflow with a bit of difference. It&#39;s just like learning one programming language and applying the same concepts in a different syntax and mechanism. In the world of web frameworks, this is the case as well, but most of them will disagree with it as every web framework is unique in its design and that&#39;s true, don&#39;t get me wrong.&#xA;&#xA;Before we get into frameworks let us understand the key components of the web application&#xA;- **Database** - It holds the data for our application.&#xA;- **Server** - Used to fetch/store/manage requests from the client. &#xA;- **API** - Used as an interface between the client and the Database. &#xA;- **Client** - The browser or any client that requests for resources.&#xA;&#xA;![djflask-webapp.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1635081505223/rLnSyA_7Y.png)&#xA;&#xA;Every web framework will serve the same purpose with different design, architecture, language but it will have a similar pattern in developing the application. Let&#39;s clear the concepts in this article.&#xA;&#xA;## What is a back-end Web framework?&#xA;&#xA;A web framework is a tool/application meant for designing, creating, testing web applications a lot quicker with a lot of ease. Without web frameworks, you will have been writing some code that will tire you very quickly. &#xA;&#xA;It even seems impossible to manually write markups for each piece of data in the application, which is taken care of by dynamic templating in Python-based frameworks like Django, Flask. The database queries are managed by the web frameworks as well, otherwise, you will have been writing SQL queries manually! 
How painful and frustrating that would be; of course, you can create scripts for querying the database, but then you are just creating a component of a framework. `Don&#39;t waste time`: that&#39;s a takeaway from the philosophy of all the web frameworks.&#xA;&#xA;Another thing that back-end web frameworks do is create homogeneity in development across different environments and applications. It also creates a developer-friendly environment. We must not forget how easily and quickly applications can be built using back-end web frameworks. &#xA;&#xA;&#xA;&#xA;### A back-end Web framework provides some of the features like:&#xA;&#xA;- Handle web requests&#xA;- Manage the DB by just using some simple scripts&#xA;- Render dynamic templates&#xA;- Provide a lot of native-language library integrations&#xA;- Organize a project much more easily and effectively&#xA;- Options to scale the application at any level&#xA;- Provide a standard and secure way to run the server (production)&#xA;- Design APIs much more easily&#xA;&#xA;&#xA;Let us look at two of the most popular frameworks in the Python community.&#xA;&#xA;### 1. Flask&#xA;### 2. Django&#xA;&#xA;![backend framework ranking](https://cdn.hashnode.com/res/hashnode/image/upload/v1635070666410/JbMc7NKP0.png)&#xA;&#xA;We can see that Django and Flask are among the top 3 back-end web frameworks in 2021. So there is no doubt about why you should learn these technologies.&#xA;&#xA;### What is Flask&#xA;&#xA;Flask is a bare-bones framework that provides a lot of customizability with a lot less boilerplate code. It is a framework that relies on a lot of third-party libraries to add functionalities to our application. &#xA;&#xA;&gt; Flask is a micro web framework&#xA;&#xA;Flask, as per the official documentation, is a `micro` framework, indicating it has a very minimal setup. It is a back-end web framework that can be structured as per your needs with very little configuration overhead. 
That being said, it can get a bit limited in structure and functionality, as these need to be taken care of manually.&#xA;&#xA;**Flask is the easiest back-end web framework to get started with and learn the fundamentals of the server side**. Flask is quite flexible in terms of scalability and maintenance of decent-sized applications as well. Though the community is not that big and Flask lacks standardization, it is a go-to back-end web framework for beginners as well as experts due to its simplicity and flawless integration with Python libraries. &#xA;&#xA;The main concepts in Flask include:&#xA;&#xA;- Virtual Environment &#xA;- WSGI as the web server interface&#xA;- App routing&#xA;- Jinja2 as a templating language&#xA;- Creating Database connections&#xA;&#xA;So, **Flask is kind of a DIY back-end web framework with rich sets of libraries and customizability out of the box**. This can easily be a beginner&#39;s choice, and a right one too.&#xA;&#xA;### What is Django&#xA;&#xA;Django is also a back-end web framework based on the Python programming language, but it is more standardized and high-level. Django encourages a defined pattern for development, but with customization and freedom in mind.&#xA;&#xA;Django also modularizes the components into so-called `apps` to provide a scalable experience. It has a lot of boilerplate code to get up and running quite easily; it also has an `Admin section` pre-built with all the functionalities. Similar to `Flask`, it also provides flawless integration with all the Python libraries. It provides much easier database integration and pre-built `User` authentication, along with its model, ready to plug in and use. &#xA;&#xA;&gt; Django is a Batteries included Framework&#xA;&#xA;That means it has baked-in functionalities like User Authentication, an Admin Section, Database Integration, RSS/Atom syndication feeds, etc. 
&#xA;&#xA;![djflask-dj.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1635079576954/WcjcokoiX.png)&#xA;&#xA;The main concepts in Django include:&#xA;&#xA;- Virtual Environment&#xA;- WSGI/ASGI as web server interfaces&#xA;- Project structure&#xA;- `Model View Template` architecture (Django&#39;s take on MVC) in developing apps&#xA;- Django Templating Language for rendering dynamic templates &#xA;- `Object-Relational Mapping` in creating the applications&#xA;&#xA;Unlike Flask, Django comes baked with a lot of functionality and integrations with a ton of features. It should be good for beginners, but many things are already taken care of, which can be a hurdle in the **actual learning process**; that being said, it is a much more scalable and production-ready web framework (not only back-end).&#xA;&#xA;### What are the similarities between them?&#xA;&#xA;Well, if you learn one, the other will be quite easy to pick up. The overall development is almost similar, but unique in its own way. &#xA;&#xA;- Pythonic syntax and libraries&#xA;- Project structure is quite similar, with `blueprints` in Flask and `apps` in Django&#xA;- Templating language is almost similar&#xA;- Static files are handled similarly, with a different syntax &#xA;- URL routing is the same, as it binds the view (functions) with a pattern&#xA;- Ease of deployment with minimal configuration&#xA;&#xA;## What should you learn?&#xA;&#xA;That question depends on the type of application you are trying to make, but for a beginner trying to get their hands dirty on the server side, I would recommend `Flask`, as it is quite minimal and helps in constructing the base for concepts like APIs, Databases, Requests, the Admin section, etc. &#xA;&#xA;This might not be that difficult for people trying to learn back-end from scratch, but for people with a bit of programming and server-side experience, `Django` should be a go-to framework for all their needs. 
&#xA;&#xA;At the end of the day, it hardly matters what you do with which framework; what people see is the end result.&#xA;&#xA;## Conclusion&#xA;&#xA;Thus, from this article, you might have gotten a basic understanding of why frameworks are used in making applications, as well as the similarities and differences between the Python-based back-end web frameworks Django and Flask. If you have any thoughts, please let me know in the comments or on my social handles; any kind of feedback is much appreciated. &#xA;&#xA;Thank you for reading till here, until then as always Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>Comment/Uncomment Code: Vim for Programmers</title>
      <link>https://www.meetgor.com/posts/vim-un-comment-p1</link>
      <description>Introduction We as programmers always fiddle with commenting out code for code testing, documenting the function of code, and most importantly debugging. So you</description>
      <pubDate>Thu, 07 Oct 2021 00:00:00 UTC</pubDate>
      <content>## Introduction&#xA;&#xA;We as programmers always fiddle with commenting out code for code testing, documenting the function of code, and most importantly debugging. So you can&#39;t wait to comment on a large chunk of code manually, as it is quite a tedious thing to do. Let&#39;s do it effectively in Vim.&#xA;&#xA;In this part of the series, I&#39;ll cover how to comment/uncomment chunks/blocks of code effectively in Vim. We will see and use some commands, keybindings for doing so, and also we would add certain components to our vimrc file as well to design some custom key mappings.  Let&#39;s get faster with Vim.&#xA;  &#xA;## How to comment multiple lines effectively&#xA;&#xA;To comment on multiple lines of code, we can use the Visual Block mode to select the lines, and then after entering into insert mode, we can comment a single line and it would be reflected on all the selected lines.&#xA;&#xA;1. Press `CTRL+V` and Select the line using j and k&#xA;&#xA;2. After Selecting the lines, Press `Escape`&#xA;&#xA;3. Press `Shift + I`, to enter insert mode&#xA;&#xA;4. Enter the comment code (`//`, `#`, or other)&#xA;&#xA;&#xA;![vimcoment.gif](https://cdn.hashnode.com/res/hashnode/image/upload/v1633518136135/06dfBTq2T.gif)&#xA;&#xA;So, using just simple steps you can comment out large chunks of code quite easily and effectively. 
If you are using some other language that has multiple characters for commenting, like `//`, `- -`, etc., you can type in any number of characters while being in insert mode after selecting the lines.&#xA;&#xA; &#xA;![vimcppcom.gif](https://cdn.hashnode.com/res/hashnode/image/upload/v1633520509953/0q-k2ZHC7.gif)&#xA;&#xA;This might look a bit weird on the first try, but just try it every day; it is a life-saving and very satisfying experience once applied in a real-world scenario.&#xA;&#xA;&#xA;## How to uncomment multiple lines effectively&#xA;&#xA;Now, as we have seen how to comment out a large chunk of code, we can also uncomment the code very easily. It&#39;s even simpler than commenting the code.&#xA;&#xA;1. Press `CTRL + V` to enter Visual Block mode&#xA;&#xA;2. Select the commented characters&#xA;&#xA;3. Press `d` to delete the comments&#xA;&#xA;4. Press `Escape`&#xA;&#xA;![vimuncoment.gif](https://cdn.hashnode.com/res/hashnode/image/upload/v1633518156818/GJzRPTI3I.gif)&#xA;&#xA;We can simply use CTRL + V to select the comment characters, and then press d to delete them all. &#xA;&#xA;**We are using Visual Block mode as we only want the comment to be selected and not the entire code associated with the lines.**&#xA;&#xA;## Using Multiline Comments for Programming languages&#xA;&#xA;Now you might say, why use multiple single-line comments when we can use multiline comments in almost all programming languages? Well, of course, you can do that; it&#39;s easier for reading the code if syntax highlighting is accurate and greys out the commented part. We can simply add those characters at the start of the block and at the end of the block.  
&#xA;&#xA;But in Vim, we can customize that too. Just imagine: you select the chunk/block of code that you need to comment out, simply press a few keystrokes (just 2), and the multiline comments are automatically (programmatically) added as per the programming language extension of the file.&#xA;&#xA;Isn&#39;t that cool? Well, you just need to copy-paste the below code to your Vimrc file and source it, and you are good to go. &#xA;&#xA;```vim&#xA;function! Comment()&#xA;    let ext = tolower(expand(&#39;%:e&#39;))&#xA;    if ext == &#39;py&#39; &#xA;        let cmt1 = &#34;&#39;&#39;&#39;&#34;&#xA;&#x9;    let cmt2 = &#34;&#39;&#39;&#39;&#34;   &#xA;    elseif ext == &#39;cpp&#39; || ext ==&#39;java&#39; || ext == &#39;css&#39; || ext == &#39;js&#39; || ext == &#39;c&#39; || ext ==&#39;cs&#39; || ext == &#39;rs&#39; || ext == &#39;go&#39;&#xA;&#x9;    let cmt1 = &#39;/*&#39;&#xA;&#x9;    let cmt2 = &#39;*/&#39;&#xA;    elseif ext == &#39;sh&#39;&#xA;&#x9;    let cmt1 = &#34;:&#39;&#34;&#xA;&#x9;    let cmt2 = &#34;&#39;&#34;&#xA;    elseif ext == &#39;html&#39;&#xA;&#x9;    let cmt1 = &#34;&lt;!--&#34;&#xA;&#x9;    let cmt2 = &#34;--&gt;&#34;&#xA;    elseif ext == &#39;hs&#39;&#xA;&#x9;    let cmt1 = &#34;{-&#34;&#xA;&#x9;    let cmt2 = &#34;-}&#34;&#xA;    elseif ext == &#34;rb&#34;&#xA;&#x9;    let cmt1 = &#34;=begin&#34;&#xA;&#x9;    let cmt2 = &#34;=end&#34;&#xA;    endif&#xA;    exe line(&#34;&#39;&lt;&#34;).&#34;normal O&#34;. cmt1 | exe line(&#34;&#39;&gt;&#34;).&#34;normal o&#34;. cmt2 &#xA;endfunction&#xA;&#xA;function! UnComment()&#xA;    exe line(&#34;&#39;&lt;&#34;).&#34;normal dd&#34; | exe line(&#34;&#39;&gt;&#34;).&#34;normal dd&#34;   &#xA;endfunction&#xA;&#xA;&#xA;vnoremap ,m :&lt;c-w&gt;&lt;c-w&gt;&lt;c-w&gt;&lt;c-w&gt;&lt;c-w&gt;call Comment()&lt;CR&gt;&#xA;vnoremap m, :&lt;c-w&gt;&lt;c-w&gt;&lt;c-w&gt;&lt;c-w&gt;&lt;c-w&gt;call UnComment()&lt;CR&gt;&#xA;&#xA;```&#xA;The below screencast is an example of an `HTML` snippet in a file that is getting commented using the mapping with the keys `,m`; you can put any other keybinding you like. &#xA;![htmcm.gif](https://cdn.hashnode.com/res/hashnode/image/upload/v1633595891674/hbhrbtRHd.gif)&#xA;&#xA;---&#xA;Similarly, the next screencast is of a `Javascript` snippet in a file which is getting commented using the mapping `,m` and uncommented using `m,`; again, you can put any other keybinding you like. &#xA;&#xA;![jscom.gif](https://cdn.hashnode.com/res/hashnode/image/upload/v1633595919104/xGTh5ztWu.gif)&#xA;&#xA;---&#xA;The following screencast is of a shell script (BASH) snippet.&#xA;![shcom.gif](https://cdn.hashnode.com/res/hashnode/image/upload/v1633596156121/tbGHQBSSA.gif)&#xA;&#xA;---&#xA;### Multiline Comments in various Programming Languages:&#xA;&#xA;#### 1. C / C++ / Java / Javascript / CSS / C# / Rust / Go / PHP / Swift / Dart / Kotlin&#xA;```&#xA;/*&#xA;*/&#xA;```&#xA;#### 2. Python&#xA;```&#xA;&#39;&#39;&#39;&#xA;&#39;&#39;&#39;&#xA;```&#xA;You can even use `&#34;&#34;&#34;` double quotes instead of single quotes&#xA;&#xA;#### 3. BASH (Shell Scripting)&#xA;```&#xA;: &#39;&#xA;&#39;&#xA;```&#xA;You can even use `: &#34;` and `&#34;` double quotes instead of single quotes&#xA;&#xA;#### 4. Haskell&#xA;&#xA;```&#xA;{-&#xA;-}&#xA;```&#xA;#### 5. Ruby&#xA;&#xA;```&#xA;=begin&#xA;=end&#xA;```&#xA;&#xA;#### 6. HTML&#xA;&#xA;```&#xA;&lt;!--&#xA;--&gt;&#xA;```&#xA;&#xA;#### 7. 
Julia&#xA;&#xA;```&#xA;#=&#xA;=#&#xA;```&#xA;&#xA;### Understanding the Commands / Keymapping&#xA;&#xA;**NOTE: You need to go from top to bottom while commenting the block of code; otherwise, there would be a mismatch between the opening and closing comment characters for the specific language syntax. While uncommenting, the order doesn&#39;t matter.**&#xA;&#xA;&#xA;#### Getting the extension (filetype)&#xA;&#xA;In Vim, we can get the file extension, i.e. we can get the programming language associated with the current file. To do that, we can use `expand(&#39;%:e&#39;)`.&#xA;&#xA;This will give us the file extension of the current file. Put simply, `%` means the current file, and `:e` is added to it to exclude the filename and keep the extension. We convert the extension into lowercase just for keeping things safe and programmatic, and store it in a variable `ext`. &#xA;&#xA;#### Checking for programming language&#xA;We can then use an if-else ladder to check for the programming language and assign two variables: `cmt1` for the characters that open the multiline comment and `cmt2` for those that close it. &#xA;&#xA;#### Typing in the characters&#xA;&#xA;We can use the function `line(&#34;&#39;&lt;&#34;)` to get the starting line number of the previous visual selection. Similarly, `line(&#34;&#39;&gt;&#34;)` gives the ending line. We are using the `exe` command to execute the function `line`, and so we have to use a concatenation of the commands even to write the raw commands like `i` to enter insert mode, or `o` to open a line below the cursor. So, we use the `normal` command for that. This command tells Vim to execute the following commands from normal mode. &#xA;&#xA;We have to enclose the `normal` command in double/single quotes. We can simply use the variable again with concatenation.&#xA;&#xA;```vim&#xA;exe line(&#34;&#39;&gt;&#34;).&#34;normal o&#34;. cmt2 &#xA;```  &#xA;&#xA;The above command will fetch the last line&#39;s number of the previous visual selection, followed by entering `o` from normal mode, concatenated with the value of the variable `cmt2`, which we have already initialized in the `Comment` function. We are using `|` for running multiple commands, as we also need to include the comment at the beginning of the visual selection. &#xA;&#xA;For uncommenting the code, we are simply deleting the entire first and last lines in the visual selection. For that, we have used `dd` from normal mode.  &#xA;&#xA;&#xA;### Conclusion&#xA;&#xA;So, from this tutorial, we were able to set up our Vim editor for efficient code commenting/uncommenting using some commands and key shortcuts, and by configuring the vimrc to make custom keymappings. We were also able to understand the multiline comments in various programming languages and use them in Vim very effectively with a simple addon to the vimrc config file.&#xA;&#xA;Thank you for reading, hope you found this article helpful. If you have any queries or wanna add multiline comments for some more programming languages, please let me know in the comments or contact section. &#xA;&#xA;Happy Coding :)&#xA;&#xA;### References&#xA;&#xA;- [StackOverflow - Commenting lines in Vim](https://stackoverflow.com/questions/1676632/whats-a-quick-way-to-comment-uncomment-lines-in-vim/1676690)&#xA;&#xA;- [Liz Lam - 3 ways to comment code in Vim](https://dev.to/grepliz/3-ways-to-comment-out-blocks-of-code-in-vi-6j4)&#xA; &#xA;- [StackExchange - Use variable in normal command](https://vi.stackexchange.com/questions/9644/how-to-use-a-variable-in-the-expression-of-a-normal-command)</content>
      <type></type>
    </item>
    <item>
      <title>What is Machine Learning?</title>
      <link>https://www.meetgor.com/posts/ml-intro</link>
      <description>Introduction We all hear about terms like Machine Learning, Artificial Intelligence, and others, but what do they actually mean and why do you need to care about these as a developer? This won&#39;t be a</description>
      <pubDate>Wed, 29 Sep 2021 00:00:00 UTC</pubDate>
      <content>## Introduction&#xA;&#xA;We all hear about terms like `Machine Learning`, `Artificial Intelligence`, and others, but what do they actually mean, and why do you need to care about them as a developer? This won&#39;t be a perfect guide in terms of experience, but surely enough to get anyone through the basics of Machine Learning.&#xA;&#xA;This is not the kind of article I usually write, but taking on such challenges can help me become a better technical writer; this is the challenge put forward in the Hashnode Bootcamp 4 to get out of my comfort zone. Here&#39;s my take on what I know about Machine Learning till now (P.S. half of the stuff I discovered and re-learned while writing).&#xA;&#xA;## What is Machine Learning?&#xA;&#xA;Machine Learning is a technique in software development to predict and react to inputs without being explicitly programmed. We can get by with if-else conditions up to a point, but after seeing real-world examples like customer service, driving, playing games (chess, checkers, etc.), image prediction, and so on, you realize you can&#39;t write code for every single case of these applications. That is where Artificial Intelligence comes in.&#xA;&#xA;&gt; Artificial Intelligence is a process of simulating human-like behavior in computers/robots/electronic systems.&#xA;&#xA;These are two quite similar terms (A.I., M.L.), but they have their own differences. Let&#39;s look at those differences:&#xA;&#xA;- **Artificial Intelligence is a technology that enables computer systems to act and decide like humans.**&#xA;&#xA;- **Machine Learning is a process of extracting data and learning from the past experience or outcomes held in that data.**&#xA;&#xA;Machine learning is actually a subset of AI. Machine learning is about training the computer system toward an outcome using the data fed into it. We will look at the detailed process in the next few sections. 
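
To make the difference concrete, here is a toy Python sketch (not from the article; the numbers, names, and the averaging trick are made up for illustration) contrasting a hardcoded if-else rule with a rule derived from data:

```python
# Labeled examples: an input feature (say, animal size) with known outcomes.
sizes = [2.1, 2.4, 2.0, 3.9, 4.2, 4.5]
labels = ["cat", "cat", "cat", "dog", "dog", "dog"]

# The if-else approach: a human guesses the cutoff and hardcodes it.
def classify_hardcoded(size):
    return "dog" if size > 3.0 else "cat"

# The "learning" approach: derive the cutoff from the labeled data itself,
# here by averaging each group and splitting the difference.
cat_avg = sum(s for s, l in zip(sizes, labels) if l == "cat") / labels.count("cat")
dog_avg = sum(s for s, l in zip(sizes, labels) if l == "dog") / labels.count("dog")
threshold = (cat_avg + dog_avg) / 2

def classify_learned(size):
    return "dog" if size > threshold else "cat"

print(classify_learned(2.2))  # cat
print(classify_learned(4.0))  # dog
```

With more or different data, the learned threshold shifts on its own, while the hardcoded rule has to be rewritten by hand.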
&#xA;&#xA;## The Process of Machine Learning&#xA;&#xA;The first step should be to choose an idea or a goal: what you would like the system to predict or output.&#xA;&#xA;1. Data Gathering&#xA;2. Filtering Data&#xA;3. Selecting an Algorithm &#xA;4. Training the system&#xA;5. Verifying and Evaluation of Training&#xA;6. Improving and Deploying the Model&#xA;&#xA;Let&#39;s take the example of classifying a picture as either dog or cat.&#xA;&#xA;![Images.gif](https://cdn.hashnode.com/res/hashnode/image/upload/v1632902918426/AoHlrL13z.gif)&#xA;&#xA;### 1. Data Gathering&#xA;&#xA;We can now move on to collecting the data that will be used by our system to get the desired outcome; it might be to predict something, classify certain things, make decisions, etc.&#xA;&#xA;In our example, we want to classify an image as either a dog or a cat, which means classifying things from a given dataset. For that, we will require images that fit this constraint. We can use our personal data, public data, or data from any other sources you would like to get your hands on.&#xA;&#xA;These are some of the popular places to get data publicly:&#xA;   - [Kaggle](https://www.kaggle.com/)&#xA;   - [Google Data Search](https://datasetsearch.research.google.com/)&#xA;   - [Reddit Datasets](https://www.reddit.com/r/datasets/top/?sort=top&amp;t=all)&#xA;   - [Public Datasets on GitHub](https://github.com/awesomedata/awesome-public-datasets#machinelearning)&#xA;   - [AWS Registry of Open Data](https://registry.opendata.aws/)&#xA;&#xA;### 2. Filtering Data&#xA;&#xA;After you have collected the data from some sources, you will notice that it is not perfect for your needs. And to be honest, there is no dataset that is perfect for your requirements: there is simply a ton of data to work with, and it would be inefficient for humans to curate and sort all of it in advance. So we may have to do it manually or take help from a data scientist. 
&#xA;&#xA;But if you are just learning, it will be helpful for you to filter and clean the data yourself. There will be things missing in the datasets, or there will be unwanted things in them. This is a critical step that everyone tends to ignore but, at the end of the day, unknowingly spends about 80% of their time on. It is quite an important step, as it decides the efficiency of the model you will build.&#xA;&#xA;- Remove/fill in the rows which are empty.&#xA;- Remove the columns which are not related to your objective.&#xA;- Fix certain wrong or inconsistent data.&#xA;&#xA;#### Group data as Training and Testing &#xA;After this procedure has been applied, you can separate the dataset into Training and Testing Data. You have to create two datasets from one: the former for training and the latter for testing the model once it has been trained.&#xA;&#xA;For our example, we can put the images which are relatively easier to distinguish into the training data, and the tough ones into the testing data, as they will challenge the model appropriately.&#xA;&#xA;### 3. Selecting an Algorithm &#xA;&#xA;Now, this is again an important step, as it forms your project&#39;s backbone. This will be the algorithm that identifies, predicts, or decides on the outcomes from the data given to it.&#xA;&#xA;We have the following types of algorithms:&#xA;&#xA;- Linear Regression&#xA;- Logistic Regression&#xA;- Decision Tree&#xA;- Artificial Neural Network&#xA;- k-Nearest Neighbors (KNN)&#xA;- k-Means&#xA;&#xA;You can choose any one of the above or find other types, which will be more or less based on these algorithms. The algorithm is decided by the outcomes you want; for example, whether you have to predict, classify, recommend, or cluster the outcome from the given data. 
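
As a quick aside, the train/test grouping described above can be sketched in plain Python (a toy sketch; the file names and the 80/20 ratio are made up):

```python
import random

# A made-up labeled dataset of (image file, label) pairs.
dataset = [(f"img_{i}.jpg", "dog" if i % 2 else "cat") for i in range(10)]

random.seed(42)          # fixed seed so the split is reproducible
random.shuffle(dataset)  # shuffle so the split is not ordered

split = int(len(dataset) * 0.8)  # 80% for training, 20% for testing
train_data = dataset[:split]
test_data = dataset[split:]

print(len(train_data), len(test_data))  # 8 2
```
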
Different algorithms have different complexity, as they take completely different approaches.&#xA;&#xA;You can research this more and find out which will be more suitable for your objectives or application.&#xA;&#xA;Now, an important topic that is often misunderstood: the Model. The model is the program that works with the data, in association with the algorithm, and outputs the actual objective. The model is not the algorithm, but it works with the chosen algorithm and carries out the actual learning in machine learning.&#xA;&#xA;So,&#xA;&#xA;&gt; Model = Algorithm + Data&#xA;&#xA;The model will process the data according to the given algorithm and fulfill the objective, be it classifying or predicting.&#xA;&#xA;### 4. Training the system&#xA;&#xA;Training is a very interesting step, as it involves actually exercising the model, and it&#39;s really fun. We provide the model with the `training data` that we segregated while filtering the data. In this process, we try to minimize the loss by making changes to the algorithm, fixing some of the data, or bringing in an additional dataset as needed, and then evaluate the results again. This loop is called `model fitting`.&#xA;&#xA;This step depends on the kind of learning under consideration, i.e. whether you want to provide any supervision or not.&#xA;&#xA;### 5. Verifying and Evaluation of Training&#xA;&#xA;This is a part of `model fitting`, as it is the part of the loop that allows us to evaluate and verify the model. We can evaluate the model based on its accuracy, precision, recall, etc. 
So, based on those parameters, we should be able to decide on its complexity and performance.&#xA;&#xA;These are important aspects to consider in evaluating the model:&#xA;- Accuracy&#xA;- Precision&#xA;- Recall&#xA;&#xA;You can get the details of the mathematics and logic involved in evaluating a model from references like:&#xA;&#xA;- [Evaluating ML model - Jeremy Jordan](https://www.jeremyjordan.me/evaluating-a-machine-learning-model/)&#xA;- [Ways to Evaluate ML model - Towards DataScience](https://towardsdatascience.com/various-ways-to-evaluate-a-machine-learning-models-performance-230449055f15)&#xA;&#xA;### 6. Improving and Deploying the Model&#xA;&#xA;This might generally be the final step, but it depends on the project; there are certain aspects that need to be taken care of, like:&#xA;&#xA;- Creating an API endpoint&#xA;- Analysis and visualization integration with the client side (web/android/ios/desktop app)&#xA;- Creating a pipeline for data input and output from the model.&#xA;&#xA;There might be other concerns like CI/CD, testing, feedback, and other production-level details that need to be taken care of; you can read more about the deployment of machine learning models [here](https://christophergs.com/machine%20learning/2019/03/17/how-to-deploy-machine-learning-models/).&#xA;&#xA;You can learn about deploying an ML model for your own learning and testing, for free, with the recommendations from [FCC](https://www.freecodecamp.org/news/deploy-your-machine-learning-models-for-free/).&#xA;&#xA;## Different Types of Machine Learning&#xA;&#xA;There are four basic types of Machine Learning:&#xA;&#xA;### 1. Supervised Learning&#xA;&#xA;In this type of ML, the model is given labeled data in the training dataset and is evaluated against it. We provide both input and output to the model, and hence it is supervised, or tracked, throughout the process. 
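
As a toy illustration of supervised learning (not from the article; the numbers, labels, and the nearest-centroid idea are invented for the sketch), each training example carries both the input and its known label, and training condenses those labeled examples into something the program can predict with:

```python
# Labeled training data: ((height_cm, weight_kg), known label).
training = [
    ((110, 20), "child"),
    ((115, 22), "child"),
    ((170, 70), "adult"),
    ((175, 80), "adult"),
]

# "Training": average the inputs of each label into one centroid per label.
totals = {}
for (h, w), label in training:
    hs, ws, n = totals.get(label, (0, 0, 0))
    totals[label] = (hs + h, ws + w, n + 1)
centroids = {label: (hs / n, ws / n) for label, (hs, ws, n) in totals.items()}

# Prediction: pick the label whose centroid is closest to the new input.
def predict(h, w):
    def sq_dist(label):
        ch, cw = centroids[label]
        return (h - ch) ** 2 + (w - cw) ** 2
    return min(centroids, key=sq_dist)

print(predict(168, 72))  # adult
```

A real model would use a proper algorithm from the list earlier; the point is only that the known labels steer the training.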
&#xA;&#xA;&#xA;![3.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1632909214720/1uHALiOG-.png)&#xA;&#xA;The above image is just for reference and not directly a model; height and weight can be parameters to consider, but they are not the only things to be considered here. It&#39;s just to illustrate the concept of the learning process.&#xA;&#xA;### 2. Unsupervised Learning&#xA;&#xA;In this type of machine learning, the model is trained with unlabeled data. It is up to the algorithm to see the pattern or logic in the dataset provided and give the output. The output will be known to the user but is not given to the model, hence the name unsupervised learning.&#xA;&#xA;&#xA;![4.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1632909227565/KtU7Oppkc.png)&#xA;&#xA;### 3. Semi-Supervised Learning&#xA;&#xA;As the name suggests, it is a combination of both Supervised and Unsupervised learning. The dataset is given with labels, but the model is also allowed to work out its own labels (kind of) in the output, hence having the best of both worlds. There might even be some labeled and some unlabeled datasets, as per the requirements of the application.&#xA;&#xA;![5.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1632909276475/TPGy91_LQ.png)&#xA;&#xA;&#xA;### 4. Reinforcement Learning&#xA;&#xA;In reinforcement learning, the model learns from feedback. It might look similar to supervised learning, but here the feedback might not be instant, which can cause delays and, early on, improper decision-making by the model. It is used in many places, and it is the closest depiction in computers of the realistic learning behavior of humans.&#xA;&#xA;**There are other types of learning systems as well, but these are used quite commonly and are quite versatile.**&#xA;&#xA;&#xA;## Applications of Machine Learning&#xA;&#xA;The applications of Machine Learning are all around you. Just look carefully; you may have even given feedback to a model! 
&#xA;- Personal Assistants (Google Assistant/ Siri/ Alexa)&#xA;- Gmail Inbox filter.&#xA;- [Youtube Video Recommendation system](https://towardsdatascience.com/how-youtube-recommends-videos-b6e003a5ab2f).&#xA;- Face recognition ([DeepFace](https://en.wikipedia.org/wiki/DeepFace))&#xA;- Product Recommendations.&#xA;- Self-Driving Cars ([Tesla](https://www.tesla.com/autopilot))&#xA;- Traffic Alerts (Google Map)&#xA;- Text Improvement ([Grammarly](https://www.grammarly.com/blog/how-grammarly-uses-ai/))&#xA;&#xA;This list is quite huge and is growing every day with new technologies and rising popularity.&#xA;&#xA;## Can GitHub Copilot take away developers&#39; jobs?&#xA;&#xA;This doesn&#39;t just concern developers; it seems every human&#39;s job is a target these days. But really, is it a matter of concern? I don&#39;t think so, because,&#xA;&#xA;&gt; the number of jobs lost = the number of jobs created.&#xA;&#xA;There will be a need for humans in some way or the other. Remember, a computer is **not smart** like humans; surely it has improved beyond what we thought possible a couple of years ago, but who made that happen? HUMANS.&#xA;&#xA;Yes, Machine Learning is quite a powerful technique, but humans will remain its essence. It will depend on humans how we treat the models and use them to our and nature&#39;s advantage, rather than against nature, only to face the consequences later.&#xA;&#xA;## Conclusion&#xA;&#xA;Ok, so from this big article, we can summarize the Machine Learning concept:&#xA;&#xA;&gt; Machine learning is a program that has a dataset and an algorithm, along with a model for the objective; we train the model as per the requirements and objectives with our dataset.&#xA;&#xA;We were able to understand the common process involved in Machine Learning. We also discussed the applications and the state of Machine Learning in today&#39;s world. I hope you found this article helpful. Thank you for reading. Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>Django + Auth0 Quick Setup</title>
      <link>https://www.meetgor.com/posts/djagno-auth0-script</link>
      <description>Introduction This is a guide and a walkthrough of how to quickly set up a base Django project with Auth0 as integration for authentication and authorization. I</description>
      <pubDate>Tue, 28 Sep 2021 00:00:00 UTC</pubDate>
      <content>## Introduction&#xA;&#xA;&#xA;This is a guide and a walkthrough of how to quickly set up a base Django project with Auth0 integrated for authentication and authorization. I will walk you through the Django setup and how to use and integrate the functionalities of Auth0. I will also discuss why you should be using Auth0 and why I love it.&#xA;&#xA;The script takes `2:44` minutes to do everything from scratch, from installing virtualenv in Python to integrating the Auth0 application.&#xA;&#xA;Here&#39;s how the script works:&#xA;&#xA;![authodj.gif](https://cdn.hashnode.com/res/hashnode/image/upload/v1632830813802/MOvedPYdt.gif)&#xA;&#xA;### Contents&#xA;&#xA;- [What is Auth0](#what-is-auth0)&#xA;- [Why I love Auth0](#why-i-love-auth0)&#xA;- [Set up a Django Project](#set-up-a-django-project)&#xA;- [Integrate Auth0 to a Django project](#integrate-auth0-to-a-django-project)&#xA;- [Creating a BASH Script for integrating Auth0](#creating-a-bash-script-for-integrating-auth0)&#xA;     - [Appending to a file](#appending-to-a-file)&#xA;     - [Adding text before a particular line using `sed`](#adding-text-before-a-particular-line-using-sed)&#xA;     - [Appending to a line using `sed`](#appending-to-a-line-using-sed)&#xA;- [Complete BASH Script](#complete-bash-script)&#xA;- [Conclusion](#conclusion)&#xA;&#xA;## What is Auth0&#xA;&#xA;Auth0 (`Auth zero`) is a platform that provides easy authentication and authorization for a number of platforms in various programming languages and frameworks. The easy-to-follow documentation and availability for almost all web frameworks across platforms make it a big bonus for developers. They make the developer experience flawless and beginner-friendly.&#xA;&#xA;According to Auth0,&#xA;&gt; They make your login box awesome&#xA;&#xA;And it&#39;s true; they make things pretty convenient and wicked fast for integrating a smooth, functional backend for authentication and authorization. 
Of course, there are more things they offer than just authentication systems, but that is by far what the world knows them for.&#xA;&#xA;## Why I love Auth0&#xA;&#xA;Auth0 is a generous company that provides a free tier with a limited capacity of authentications, and that might be more than enough for a developer getting their feet wet in web development (backend).&#xA;&#xA;They even provide a nice out-of-the-box user interface for login/signup and a ready-made dashboard, which is quite a lot of heavy lifting already done for you. Also, there is a dashboard for analyzing the number of sign-ins/logins to the particular app. This lets the admin/developer of the app get a closer look at the users registered in a day/week/month, the number of active users, and so on.&#xA;&#xA;So, who would not love it? I am happy to write about and use their service for some of my projects. I already used it for the Hashnode x Auth0 Hackathon: I made [devquotes](https://github.com/Mr-Destructive/devquotes) using Auth0 authentication in my Django application.&#xA;&#xA;## Set up a Django Project&#xA;&#xA;If you are reading this, I assume you already know how to set up a Django project. But nevertheless, I can include a quick introduction on how to do it. I have a script to do this. 
&#xA;&#xA;```bash&#xA;#!/usr/bin/env bash&#xA;&#xA;mkdir $1&#xA;cd $1&#xA;pip install virtualenv&#xA;virtualenv env&#xA;source env/bin/activate&#xA;&#xA;pip install django&#xA;django-admin startproject $1 .&#xA;clear&#xA;```&#xA;You can check out [Django Quick Setup Script](https://techstructiveblog.hashnode.dev/django-quick-setup-script) for the details of this script and also a more in-depth guide to Django project setup.&#xA;&#xA;But if you want to understand the basics of the Django project setup, here is a little guide:&#xA;&#xA;Firstly, create a virtual environment. It&#39;s not mandatory, but it keeps things simple and isolated for your project with respect to the rest of the OS. In Python, we have a module to create a virtual environment pretty easily:&#xA;&#xA;```shell&#xA;pip install virtualenv&#xA;```&#xA;You can use `pip3` or `python -m pip`, or however you install normal Python modules. This just installs the virtual environment tool; we still need to create one in the current folder. For that, navigate to the folder where you want to create the project and enter the following command:&#xA;&#xA;```shell&#xA;virtualenv venv&#xA;``` &#xA;&#xA;Here, `venv` can be anything, like `env`; it&#39;s just a standard name kept for understanding and simplicity. After this, you will see a folder of the same name, i.e. `venv` or any other name you have used. This is the folder where Python will keep every installation private to the local folder itself. Now, we need to activate the virtual environment; for that, we can use the command:&#xA;&#xA;```bash&#xA;# for Linux/macOS :&#xA;source venv/bin/activate&#xA;```&#xA;&#xA;```batch&#xA;# for Windows:&#xA;venv\Scripts\activate&#xA;```&#xA;After this, your command prompt will have a `(venv)` attached to its start. 
This indicates you are in a virtual environment; things you do here, be it module installation or any Python-related configuration, will stay in the local folder itself.&#xA;&#xA;After the virtual environment is set up and activated, you can install Django and get started with it. Firstly, install Django using pip:&#xA;&#xA;```shell&#xA;pip install django&#xA;```&#xA;After the installation is completed, you can start a Django project in the current folder using the command: &#xA;&#xA;```shell&#xA;django-admin startproject name&#xA;```&#xA;Here, name can be your project name. After this, you will see one new folder and one file pop up: namely, the project-named folder and the `manage.py` file. You don&#39;t have to touch the `manage.py` file, but we use it in most of the commands that access Django functionality.&#xA;&#xA;You can now run your basic server using the command: &#xA;```shell&#xA;python manage.py runserver&#xA;```&#xA;That is the base setup of the Django project. Moving on to integrating the Auth0 login functionality into our webpage.&#xA;&#xA;&#xA;## Integrate the Auth0 app in your project&#xA;&#xA;So, for integrating the Auth0 app in your web application, you need to have an Auth0 account; you can sign up on their website. After this, you can create an Auth0 application of any type; we have a couple of options:&#xA;&#xA;- Native Application&#xA;- Single Page Application&#xA;- Regular Web Application&#xA;- Machine to Machine Application&#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1632748408741/sUeS8AgrX.png)&#xA;&#xA;You can choose as per your needs, but mostly it would be a Regular Web Application if you have a backend like Django, Node.js, PHP, or other types of frameworks and languages. So, moving ahead, we have created an application for the Django framework as a backend. 
Now, we have a `Settings` tab in the application dashboard, where we have the credentials the Auth0 app uses to talk to our application.&#xA;&#xA;The credentials that need to be stored safely are:&#xA;- Domain&#xA;- Client ID (Key)&#xA;- Client Secret&#xA;&#xA;These have to be secured for our local application, which will go into production when ready. You can use several options like dotenv, environment variables, and so on when the application is deployed, but for now, let&#39;s hardcode them in our Django project.&#xA;&#xA;Now, you can follow the simple, straightforward procedure to copy-paste your credentials from the [Auth0 official documentation](https://auth0.com/docs/quickstart/webapp/django/01-login#logout). The steps are quite easy to follow, even for a beginner.&#xA;&#xA;After the Auth0 app has been configured following the procedure in the documentation, you need to integrate several files, like the dashboard and index templates, into your custom templates.&#xA;&#xA;The following additional changes are also to be made if you have a user-defined app in your Django project.&#xA;&#xA;In the `auth0login` app, in the `views.py` file:&#xA;1. The `index` function renders the base file for your project if the user is logged in.&#xA;2. The `dashboard` function renders the profile/dashboard view for users on your app.&#xA;&#xA;You would also need to add the root URIs of your app that you will be using for testing or in production. For example, we can add `http://127.0.0.1:8000` to allow and use Auth0 in our development environment locally.&#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1632828981455/2gc4h7tTh.png)&#xA;&#xA;You also need to specify the callback URLs for your application, which is `/complete/auth0` by default.&#xA;&#xA;## Creating a BASH Script for integrating Auth0&#xA;&#xA;So, we can now dive into creating the BASH Script to set up the Django x Auth0 application in minutes. 
The script is quite large, around 200 lines, but don&#39;t worry! Its automation removes the pain of integrating user authorization. I am also thinking of adding a `cURL` command for fetching and parsing in the client IDs, keys, secret keys, etc.&#xA;&#xA;### Appending to a file&#xA;&#xA;We can use the `cat` command to append text to a file, using the syntax below:&#xA;&#xA;```shell&#xA;cat &lt;&lt; EOF &gt;&gt; filename&#xA;text&#xA;more text&#xA;EOF&#xA;```&#xA;Remember, here EOF is just a label to stop the command and save the text to the file.&#xA;&#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1632826339707/_g_RXP3NE.png)&#xA;&#xA;So, we can see that we were able to append multiple lines to a file using the cat command.&#xA;&#xA;We have used this concept for adding configuration and credentials to the `settings.py` and `urls.py` files.&#xA;&#xA;### Adding text before a particular line using `sed` &#xA;&#xA;`sed` is a great command, and there is almost nothing you can&#39;t do with it (OK, there might be exceptions). With `-i` we can write to the file directly (without displaying the output) and specify the line number before which we want to insert the text, followed by the text we want and then the filename.&#xA;&#xA;```shell&#xA;sed -i &#39;33 i sometext here&#39; filename&#xA;```&#xA;Here, `33` is the line number in the file before which we want to insert. We have used `&#39;&#34;&#39;` to add a `&#39;` inside a `&#39;`; this might feel a bit weird, but that is how it is in BASH. 
&#xA;&#xA;Let&#39;s say you want to add `print(&#39;Hello, World!&#39;)` on a particular line; we have to enclose the `&#39;` with these `&#34;&#39;` (double and single quotes):&#xA;&#xA;```shell&#xA;sed -i &#39;2i print(&#39;&#34;&#39;Hello, World&#39;&#34;&#39;)&#39; hello.py&#xA;```&#xA;This will add the line `print(&#39;Hello, World&#39;)` to the file `hello.py`.&#xA;&#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1632824742835/Uj8AF07UG.png)&#xA;&#xA;### Appending to a line using sed&#xA;&#xA;We can even append text to a particular line using sed; we can use some escape characters and regex to add the text at the end of the line.&#xA;&#xA;```shell&#xA;sed -i &#39;2 s/$/ text goes here/&#39; filename&#xA;``` &#xA;Here, 2 is the number of the line you want to add text to, and then we have the substitute expression `s/$/ /`, which will put the text enclosed in `/ /` at the end of the line, as indicated by `$`. We have the filename at its usual place as before.&#xA;&#xA;So, let&#39;s say I want to add a comment to the second line in the previous example; I can use the following command to do it:&#xA;&#xA;```shell&#xA;sed -i &#39;2 s/$/ # another comment/&#39; hello.py&#xA;&#xA;```&#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1632825067925/0eU2mkCDI.png)&#xA;&#xA;We have used these commands to add the `include` function in the `urls.py` of the project folder.&#xA;&#xA;So those were all the operations we used for doing some automated editing for the Auth0 app integration into our Django project.&#xA;&#xA;Below is the entire script, which is also uploaded on [GitHub](https://github.com/Mr-Destructive/django-auth0-quick-setup).&#xA;&#xA;## Complete BASH Script&#xA;&#xA;You can run the file by passing the name of your project:&#xA;&#xA;```shell&#xA;bash script.sh mywebsite&#xA;```&#xA;&#xA;Wait for some 2-3 minutes, and the script will produce the Django application with the Auth0 app integrated. 
You will have to enter the credentials manually wherever applicable.&#xA;&#xA;&#xA;```bash&#xA;#!/usr/bin/env bash&#xA;&#xA;mkdir $1&#xA;cd $1&#xA;pip install virtualenv&#xA;virtualenv venv&#xA;source venv/Scripts/activate&#xA;&#xA;pip install django&#xA;django-admin startproject $1 .&#xA;&#xA;cat &lt;&lt; EOF &gt;&gt; requirements.txt&#xA;social-auth-app-django~=3.1&#xA;python-jose~=3.0&#xA;python-dotenv~=0.9&#xA;EOF&#xA;&#xA;pip install -r requirements.txt&#xA;&#xA;pip freeze &gt; requirements.txt&#xA;&#xA;python manage.py startapp auth0login&#xA;&#xA;touch auth0login/urls.py&#xA;mkdir auth0login/templates&#xA;touch auth0login/templates/index.html&#xA;touch auth0login/templates/dashboard.html&#xA;&#xA;sed -i &#39;40 i \    &#39;&#34;&#39;&#34;&#39;social_django&#39;&#34;&#39;&#34;&#39;,&#39; $1/settings.py&#xA;sed -i &#39;41 i \    &#39;&#34;&#39;&#34;&#39;auth0login&#39;&#34;&#39;&#34;&#39;,&#39; $1/settings.py&#xA;sed -i &#39;21 i \    path(&#39;&#34;&#39;&#39;&#34;&#39;, include(&#39;&#34;&#39;auth0login.urls&#39;&#34;&#39;)),&#39; $1/urls.py&#xA;sed -i &#39;17 s/$/, include/&#39; $1/urls.py&#xA;&#xA;cat &lt;&lt; EOF &gt;&gt; $1/settings.py&#xA;SOCIAL_AUTH_TRAILING_SLASH = False  # Remove trailing slash from routes&#xA;SOCIAL_AUTH_AUTH0_DOMAIN = &#39;YOUR_DOMAIN&#39;&#xA;SOCIAL_AUTH_AUTH0_KEY = &#39;YOUR_CLIENT_ID&#39;&#xA;SOCIAL_AUTH_AUTH0_SECRET = &#39;YOUR_CLIENT_SECRET&#39;&#xA;EOF&#xA;&#xA;cat &lt;&lt; EOF &gt;&gt; $1/settings.py&#xA;SOCIAL_AUTH_AUTH0_SCOPE = [&#xA;    &#39;openid&#39;,&#xA;    &#39;profile&#39;,&#xA;    &#39;email&#39;&#xA;]&#xA;EOF&#xA;&#xA;python manage.py migrate&#xA;&#xA;cat &lt;&lt; EOF &gt;&gt; auth0login/auth0backend.py&#xA;&#xA;from urllib import request&#xA;from jose import jwt&#xA;from social_core.backends.oauth import BaseOAuth2&#xA;&#xA;&#xA;class Auth0(BaseOAuth2):&#xA;    &#34;&#34;&#34;Auth0 OAuth authentication backend&#34;&#34;&#34;&#xA;    name = &#39;auth0&#39;&#xA;    SCOPE_SEPARATOR = &#39; &#39;&#xA;    ACCESS_TOKEN_METHOD = &#39;POST&#39;&#xA;    REDIRECT_STATE = False&#xA;    EXTRA_DATA = [&#xA;        (&#39;picture&#39;, &#39;picture&#39;),&#xA;        (&#39;email&#39;, &#39;email&#39;)&#xA;    ]&#xA;&#xA;    def authorization_url(self):&#xA;        return &#39;https://&#39; + self.setting(&#39;DOMAIN&#39;) + &#39;/authorize&#39;&#xA;&#xA;    def access_token_url(self):&#xA;        return &#39;https://&#39; + self.setting(&#39;DOMAIN&#39;) + &#39;/oauth/token&#39;&#xA;&#xA;    def get_user_id(self, details, response):&#xA;        &#34;&#34;&#34;Return current user id.&#34;&#34;&#34;&#xA;        return details[&#39;user_id&#39;]&#xA;&#xA;    def get_user_details(self, response):&#xA;        # Obtain JWT and the keys to validate the signature&#xA;        id_token = response.get(&#39;id_token&#39;)&#xA;        jwks = request.urlopen(&#39;https://&#39; + self.setting(&#39;DOMAIN&#39;) + &#39;/.well-known/jwks.json&#39;)&#xA;        issuer = &#39;https://&#39; + self.setting(&#39;DOMAIN&#39;) + &#39;/&#39;&#xA;        audience = self.setting(&#39;KEY&#39;)  # CLIENT_ID&#xA;        payload = jwt.decode(id_token, jwks.read(), algorithms=[&#39;RS256&#39;], audience=audience, issuer=issuer)&#xA;&#xA;        return {&#39;username&#39;: payload[&#39;nickname&#39;],&#xA;                &#39;first_name&#39;: payload[&#39;name&#39;],&#xA;                &#39;picture&#39;: payload[&#39;picture&#39;],&#xA;                &#39;user_id&#39;: payload[&#39;sub&#39;],&#xA;                &#39;email&#39;: payload[&#39;email&#39;]}&#xA;&#xA;EOF
&#xA;cat &lt;&lt; EOF &gt;&gt; $1/settings.py&#xA;&#xA;AUTHENTICATION_BACKENDS = {&#xA;    #&#39;YOUR_DJANGO_APP_NAME.auth0backend.Auth0&#39;,&#xA;    &#39;django.contrib.auth.backends.ModelBackend&#39;&#xA;}&#xA;&#xA;EOF&#xA;&#xA;cat &lt;&lt; EOF &gt;&gt; $1/settings.py&#xA;&#xA;LOGIN_URL = &#39;/login/auth0&#39;&#xA;LOGIN_REDIRECT_URL = &#39;/dashboard&#39;&#xA;EOF&#xA;&#xA;cat &gt; auth0login/views.py &lt;&lt; EOF&#xA;&#xA;from django.shortcuts import render, redirect&#xA;from django.contrib.auth.decorators import login_required&#xA;from django.contrib.auth import logout as log_out&#xA;from django.conf import settings&#xA;from django.http import HttpResponseRedirect&#xA;from urllib.parse import urlencode&#xA;import json&#xA;&#xA;def index(request):&#xA;    user = request.user&#xA;    if user.is_authenticated:&#xA;        return redirect(dashboard)&#xA;    else:&#xA;        return render(request, &#39;index.html&#39;)&#xA;&#xA;&#xA;@login_required&#xA;def dashboard(request):&#xA;    user = request.user&#xA;    auth0user = user.social_auth.get(provider=&#39;auth0&#39;)&#xA;    userdata = {&#xA;        &#39;user_id&#39;: auth0user.uid,&#xA;        &#39;name&#39;: user.first_name,&#xA;        &#39;picture&#39;: auth0user.extra_data[&#39;picture&#39;],&#xA;        &#39;email&#39;: auth0user.extra_data[&#39;email&#39;],&#xA;    }&#xA;&#xA;    return render(request, &#39;dashboard.html&#39;, {&#xA;        &#39;auth0User&#39;: auth0user,&#xA;        &#39;userdata&#39;: json.dumps(userdata, indent=4)&#xA;    })&#xA;&#xA;def logout(request):&#xA;    log_out(request)&#xA;    return_to = urlencode({&#39;returnTo&#39;: request.build_absolute_uri(&#39;/&#39;)})&#xA;    logout_url = &#39;https://%s/v2/logout?client_id=%s&amp;%s&#39; % \&#xA;                 (settings.SOCIAL_AUTH_AUTH0_DOMAIN, settings.SOCIAL_AUTH_AUTH0_KEY, return_to)&#xA;    return HttpResponseRedirect(logout_url)&#xA;&#xA;EOF
&#xA;cat &lt;&lt; EOF &gt;&gt; auth0login/templates/index.html&#xA;&#xA;&lt;div class=&#34;login-box auth0-box before&#34;&gt;&#xA;    &lt;img src=&#34;https://i.cloudup.com/StzWWrY34s.png&#34; /&gt;&#xA;    &lt;h3&gt;Auth0 Example&lt;/h3&gt;&#xA;    &lt;p&gt;Zero friction identity infrastructure, built for developers&lt;/p&gt;&#xA;    &lt;a class=&#34;btn btn-primary btn-lg btn-login btn-block&#34; href=&#34;#&#34;&gt;Log In&lt;/a&gt;&#xA;&lt;/div&gt;&#xA;EOF&#xA;&#xA;cat &lt;&lt; EOF &gt;&gt; auth0login/templates/dashboard.html&#xA;&#xA;&lt;div class=&#34;logged-in-box auth0-box logged-in&#34;&gt;&#xA;    &lt;h1 id=&#34;logo&#34;&gt;&lt;img src=&#34;//cdn.auth0.com/samples/auth0_logo_final_blue_RGB.png&#34; /&gt;&lt;/h1&gt;&#xA;    &lt;img class=&#34;avatar&#34; src=&#34;#&#34;/&gt;&#xA;    &lt;h2&gt;Welcome {{ user.username }}&lt;/h2&gt;&#xA;    &lt;pre&gt;{{ userdata }}&lt;/pre&gt;&#xA;&lt;/div&gt;&#xA;EOF&#xA;&#xA;cat &lt;&lt; EOF &gt;&gt; auth0login/urls.py&#xA;from django.urls import path, include&#xA;from . import views&#xA;&#xA;urlpatterns = [&#xA;    path(&#39;&#39;, views.index),&#xA;    path(&#39;dashboard&#39;, views.dashboard),&#xA;    path(&#39;logout&#39;, views.logout),&#xA;    path(&#39;&#39;, include(&#39;django.contrib.auth.urls&#39;)),&#xA;    path(&#39;&#39;, include(&#39;social_django.urls&#39;)),&#xA;]&#xA;&#xA;EOF&#xA;&#xA;python manage.py makemigrations&#xA;python manage.py migrate&#xA;&#xA;```&#xA;## Conclusion&#xA;&#xA;Ok, so this was it; quite a big script, but that&#39;s how automation can be. We were able to set up a base Django application with a ready-made Auth0 app to extend the functionality. This was just a basic script; you could extend it further, for example by adding a curl command to fetch the credentials and make it more automated, but that was not the aim of this article.&#xA;&#xA;If you have any issues using the script, please let me know; I&#39;ll be happy to fix them. Thanks for reading. Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>Object-Oriented Programming in C++: Constructors</title>
      <link>https://www.meetgor.com/posts/oop-cpp-p3</link>
      <description>Introduction After learning a bit about classes, objects, and access modifiers, we can move on to another important concept of Constructors and Destructors. Thi</description>
      <pubDate>Thu, 23 Sep 2021 00:00:00 UTC</pubDate>
<content>## Introduction&#xA;&#xA;After learning a bit about classes, objects, and access modifiers, we can move on to another important concept: Constructors and Destructors. This part will cover Constructors. We&#39;ll explore what constructors are, why we need them, the types of constructors, and their semantics in C++ as usual. &#xA;&#xA;So without wasting any time, let&#39;s get started.&#xA;&#xA;## What is a constructor&#xA;&#xA;A constructor is a method just like any other method in a class, but it has a special name for a reason. The name itself speaks: a `Constructor` gets called just after the object is created. A constructor must have the same name as the class, and that is what distinguishes it from other methods. &#xA;&#xA;The main thing that stands out for a constructor is that you don&#39;t have to call it. It gets called automatically (just magically); you don&#39;t have to call it explicitly. That&#39;s a great feature in Object-Oriented Programming. We can control the flow of the program according to our needs. &#xA;&#xA;A constructor has certain features or principles that make it different from the rest of the class methods:&#xA;&#xA;- There is no return type for a constructor.&#xA;- The name of the constructor should be the same as the class name.&#xA;- There can be multiple constructors for a class (depending on the parameters passed to it).&#xA;- A constructor gets called automatically when an object is instantiated (created).&#xA;&#xA;## Why do we need a constructor?&#xA;&#xA;Let&#39;s suppose you wanted to initialize some variables (properties) before the main logic of a program starts. This can be done manually by accessing them in the main function, but for many objects, this can get really tedious to write every time. 
This is where the constructor comes in and improves the overall structure and manageability of the program.&#xA;&#xA;![oopcpp2-const.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1632390379922/V2avlZhzP.png)&#xA;&#xA;This looks silly but is actually used a lot beyond just initialization; you might want certain things to be done as soon as the object gets created. Directing the flow of the program in a particular desired manner is possible due to constructors. It is also a way of automating several repetitive tasks for better design and a more maintainable codebase. &#xA;&#xA;## Define a constructor for a class in C++&#xA;&#xA;Now, it&#39;s time to get into code, and let&#39;s see how to define a constructor in a class using C++.&#xA;&#xA;```cpp&#xA;#include &#34;iostream&#34;&#xA;using namespace std;&#xA;&#xA;class blog&#xA;{&#xA;public:&#xA;    // defining a constructor for the blog class&#xA;    blog()&#xA;    {&#xA;        cout&lt;&lt;&#34;Call from constructor\n&#34;;&#xA;    }&#xA;};&#xA;&#xA;int main(){&#xA;&#xA;&#x9;blog post1;&#xA;&#x9;&#xA;&#x9;return 0;&#xA;}&#xA;&#xA;```&#xA;&#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1632392596021/MY5H0maqx.png)&#xA;This is how you define a constructor in C++. A constructor doesn&#39;t even have a return type of void; literally, it returns nothing. A constructor can do anything a normal method can do except return a value. We can also pass certain parameters to it. &#xA;&#xA;The thing here is that the constructor should be public if you want to create objects from the main function or anywhere else outside the class (the class itself / friend class / derived class). You can make it private or protected as per your needs and the hierarchy of your application. 
If you want to know more about those access modifiers (public/private/protected), you can check the previous part of this series.&#xA;&#xA;&gt; Remember, you cannot call the constructor from the main function if it is not public.&#xA;&#xA;We can actually create some good examples where the constructor plays a vital role in the design and the feasibility of the actual idea.&#xA;&#xA;Let&#39;s say we want to create three objects, each taking the same input of properties from the user (though the values will obviously differ) and also determining another property based on an input. Should we write those inputs for every object or write a function for the same? The latter will be the best choice, but which function are you thinking of? If that&#39;s a constructor then you are correct! Yes, we could even write a normal method, but since we have a print function called up already, forgetting to call the setup method first might create some undesired behavior like uninitialized variables and segmentation faults while dealing with complex data structures or large applications.&#xA;&#xA;```cpp&#xA;#include &#34;iostream&#34;&#xA;#include &#34;string&#34;&#xA;using namespace std;&#xA;&#xA;class Animal&#xA;{&#xA;&#x9;public:&#xA;&#x9;&#x9;int legs;&#xA;&#x9;&#x9;string name;&#xA;&#x9;&#x9;string type;&#xA;&#xA;&#x9;&#x9;Animal()&#xA;&#x9;&#x9;{&#xA;&#x9;&#x9;&#x9;cout&lt;&lt;&#34;Who is the Animal? &#34;;&#xA;&#x9;&#x9;&#x9;cin&gt;&gt;type;&#xA;&#x9;&#x9;&#x9;cout&lt;&lt;&#34;Enter the name of Animal? &#34;;&#xA;&#x9;&#x9;&#x9;cin&gt;&gt;name;&#xA;&#x9;&#x9;&#x9;cout&lt;&lt;endl;&#xA;&#x9;&#x9;&#x9;if(type.compare(&#34;dog&#34;)==0)&#xA;&#x9;&#x9;&#x9;&#x9;legs=4;&#xA;&#x9;&#x9;&#x9;else if(type.compare(&#34;monkey&#34;)==0)&#xA;&#x9;&#x9;&#x9;&#x9;legs=2;&#xA;&#x9;&#x9;&#x9;else&#xA;&#x9;&#x9;&#x9;&#x9;legs=0;&#xA;&#x9;&#x9;}&#xA;&#xA;&#x9;&#x9;void print()&#xA;&#x9;&#x9;{&#xA;&#x9;&#x9;&#x9;if(legs==0)&#xA;&#x9;&#x9;&#x9;&#x9;cout&lt;&lt;type&lt;&lt;&#34;&#39;s name is &#34;&lt;&lt;name&lt;&lt;endl;&#xA;&#x9;&#x9;&#x9;else&#xA;&#x9;&#x9;&#x9;&#x9;cout&lt;&lt;name&lt;&lt;&#34; has &#34;&lt;&lt;legs&lt;&lt;&#34; legs.\n&#34;;&#xA;&#x9;&#x9;&#x9;cout&lt;&lt;endl;&#xA;&#x9;&#x9;}&#xA;};&#xA;&#xA;int main(){&#xA;&#xA;&#x9;Animal obj1,obj2,obj3;&#xA;&#x9;obj1.print();&#xA;&#x9;obj2.print();&#xA;&#x9;obj3.print();&#xA;&#x9;return 0;&#xA;}&#xA;```&#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1632396836336/b6NLE-8287.png)&#xA;&#xA;We can see how well this can scale for creating many objects. This is just one of the examples where the constructor single-handedly takes care of many things. We can always create an array of objects, but to keep things simple here, we have hardcoded the object names. &#xA;&#xA;## Types of Constructors&#xA;&#xA;Well, won&#39;t it be nice to pass the values to the constructor and then do certain computations? There is a solution to that: we can overload functions, i.e. create multiple constructors, each having a unique combination of parameters associated with it. &#xA;&#xA;The following are the types of constructors:&#xA;&#xA;1. Default Constructors&#xA;2. Copy Constructors&#xA;3. Parameterized Constructors&#xA;&#xA;### Default Constructor&#xA;&#xA;This is the constructor which we have used so far; even if we don&#39;t define a constructor, the C++ compiler automatically creates one which does nothing. 
&#xA;&#xA;We have already seen the normal constructors without any parameters, just a simple function that does some I/O to the console. &#xA;&#xA;### Copy Constructor&#xA;&#xA;This is a great feature of constructors: you can copy certain properties of previously created objects, or define new ones and pass them to the constructor. This is why I said we can have multiple constructors depending on the number of parameters or the type of parameters passed to it. &#xA;&#xA;So, with that said, we can basically define constructors for different purposes with different requirements and conditions. This just opens up a new dimension of creating complex functions and programs with great flexibility and adaptability. &#xA;&#xA;```cpp&#xA;#include &#34;iostream&#34;&#xA;#include &#34;string&#34;&#xA;using namespace std;&#xA;&#xA;class Animal&#xA;{&#xA;&#x9;public:&#xA;&#x9;&#x9;int legs;&#xA;&#x9;&#x9;string name;&#xA;&#x9;&#x9;string type;&#xA;&#xA;&#x9;&#x9;Animal()&#xA;&#x9;&#x9;{&#xA;&#x9;&#x9;&#x9;cout&lt;&lt;&#34;Who is the Animal? &#34;;&#xA;&#x9;&#x9;&#x9;cin&gt;&gt;type;&#xA;&#x9;&#x9;&#x9;cout&lt;&lt;&#34;Enter the name of Animal? &#34;;&#xA;&#x9;&#x9;&#x9;cin&gt;&gt;name;&#xA;&#x9;&#x9;&#x9;cout&lt;&lt;endl;&#xA;&#x9;&#x9;&#x9;if(type.compare(&#34;dog&#34;)==0)&#xA;&#x9;&#x9;&#x9;&#x9;legs=4;&#xA;&#x9;&#x9;&#x9;else if(type.compare(&#34;monkey&#34;)==0)&#xA;&#x9;&#x9;&#x9;&#x9;legs=2;&#xA;&#x9;&#x9;&#x9;else&#xA;&#x9;&#x9;&#x9;&#x9;legs=0;&#xA;&#x9;&#x9;}&#xA;&#xA;        // copy constructor with a reference to an object of the same class&#xA;&#x9;&#x9;Animal(Animal &amp;obj)&#xA;&#x9;&#x9;{&#xA;&#x9;&#x9;&#x9;cout&lt;&lt;&#34;This is my pet &#34;&lt;&lt;obj.type&lt;&lt;&#34;, &#34;&lt;&lt;obj.name&lt;&lt;endl;&#xA;&#x9;&#x9;}&#xA;&#xA;&#x9;&#x9;void print()&#xA;&#x9;&#x9;{&#xA;&#x9;&#x9;&#x9;if(legs==0)&#xA;&#x9;&#x9;&#x9;&#x9;cout&lt;&lt;type&lt;&lt;&#34;&#39;s name is &#34;&lt;&lt;name&lt;&lt;endl;&#xA;&#x9;&#x9;&#x9;else&#xA;&#x9;&#x9;&#x9;&#x9;cout&lt;&lt;name&lt;&lt;&#34; has &#34;&lt;&lt;legs&lt;&lt;&#34; legs.\n&#34;;&#xA;&#x9;&#x9;&#x9;cout&lt;&lt;endl;&#xA;&#x9;&#x9;}&#xA;};&#xA;&#xA;int main(){&#xA;&#xA;&#x9;Animal obj1;&#xA;&#x9;Animal obj4(obj1);&#xA;&#x9;obj1.print();&#xA;&#x9;return 0;&#xA;}&#xA;&#xA;```&#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1632403268364/cfjwyvgUeN.png)&#xA;&#xA;In the above example, we have defined another constructor that accepts a parameter that is a reference to another created object and basically does some I/O operations. &#xA;&#xA;Why would we need this?&#xA;&#xA;Well, the name itself says it all: `to copy the values of existing objects`, with slight modifications if needed. It basically depends on how you want to copy the object into another. This also helps when class properties are added later: just change the copy constructor and there it is, big tasks performed with minimal changes. It can also be used to modify the existing objects (though that can become a security issue, it is still pretty cool) by accessing the properties of that object in the copy constructor. 
&#xA; &#xA;&#xA;### Parameterized Constructor&#xA;&#xA;Now that we know we can pass parameters to our constructors, let&#39;s exploit that functionality to create all sorts of constructors. No, I&#39;m just kidding, you can create multiple constructors by passing in various combinations of parameters.&#xA;&#xA;Let&#39;s take an example: instead of taking input in the default constructor, we can pass in the values while creating the object, and the constructor which matches the parameters gets called. &#xA;&#xA;```cpp&#xA;#include &#34;iostream&#34;&#xA;#include &#34;string&#34;&#xA;using namespace std;&#xA;&#xA;class Animal&#xA;{&#xA;&#x9;public:&#xA;&#x9;&#x9;int legs;&#xA;&#x9;&#x9;string name;&#xA;&#x9;&#x9;string type;&#xA;&#xA;        // default constructor&#xA;&#x9;&#x9;Animal()&#xA;&#x9;&#x9;{&#xA;&#x9;&#x9;&#x9;cout&lt;&lt;&#34;Who is the Animal? &#34;;&#xA;&#x9;&#x9;&#x9;cin&gt;&gt;type;&#xA;&#x9;&#x9;&#x9;cout&lt;&lt;&#34;Enter the name of Animal? &#34;;&#xA;&#x9;&#x9;&#x9;cin&gt;&gt;name;&#xA;&#x9;&#x9;&#x9;cout&lt;&lt;endl;&#xA;&#x9;&#x9;&#x9;if(type.compare(&#34;dog&#34;)==0)&#xA;&#x9;&#x9;&#x9;&#x9;legs=4;&#xA;&#x9;&#x9;&#x9;else if(type.compare(&#34;monkey&#34;)==0)&#xA;&#x9;&#x9;&#x9;&#x9;legs=2;&#xA;&#x9;&#x9;&#x9;else&#xA;&#x9;&#x9;&#x9;&#x9;legs=0;&#xA;&#x9;&#x9;}&#xA;&#xA;        // constructor with two parameters (both string)&#xA;&#x9;&#x9;Animal(string type, string name)&#xA;&#x9;&#x9;{&#xA;&#x9;&#x9;&#x9;cout&lt;&lt;&#34;This is my pet &#34;&lt;&lt;type&lt;&lt;&#34;, &#34;&lt;&lt;name&lt;&lt;endl;&#xA;&#x9;&#x9;}&#xA;&#xA;        // constructor with one parameter (string)&#xA;&#x9;&#x9;Animal(string name)&#xA;&#x9;&#x9;{&#xA;&#x9;&#x9;&#x9;cout&lt;&lt;&#34;Hello, &#34;&lt;&lt;name&lt;&lt;endl;&#xA;&#x9;&#x9;}&#xA;&#xA;&#x9;&#x9;void print()&#xA;&#x9;&#x9;{&#xA;&#x9;&#x9;&#x9;if(legs==0)&#xA;&#x9;&#x9;&#x9;&#x9;cout&lt;&lt;type&lt;&lt;&#34;&#39;s name is &#34;&lt;&lt;name&lt;&lt;endl;&#xA;&#x9;&#x9;&#x9;else&#xA;&#x9;&#x9;&#x9;&#x9;cout&lt;&lt;name&lt;&lt;&#34; has &#34;&lt;&lt;legs&lt;&lt;&#34; legs.\n&#34;;&#xA;&#x9;&#x9;&#x9;cout&lt;&lt;endl;&#xA;&#x9;&#x9;}&#xA;};&#xA;&#xA;int main(){&#xA;&#xA;&#x9;Animal obj1;&#xA;&#x9;Animal obj2(&#34;dog&#34;, &#34;scooby&#34;);&#xA;&#x9;Animal obj3(&#34;Pomello&#34;);&#xA;&#x9;obj1.print();&#xA;&#x9;return 0;&#xA;}&#xA;&#xA;```&#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1632402898960/mc1D2S_iQ.png)&#xA;&#xA;We created the first object without any specification, but the second object was passed two parameters and indeed the correct constructor was called. The third object was given just a single parameter and again the required constructor was called. &#xA;&#xA;So, we can see that the particular constructor is called which matches the parameters passed to it. That&#39;s quite a smart design, and it improves flexibility and adaptability hand in hand. &#xA;&#xA;#### Function/Constructor Overloading&#xA;&#xA;&gt; When a single function (same name) has different implementations depending on the parameters passed to it, it is called Function Overloading.&#xA;&#xA;The thing above, i.e. parameterized constructors, is technically Function Overloading (Constructor Overloading in this case). We will look into this topic in much more detail when we cover methods. But this might be enough to get your head around the concept of overloading in OOP. &#xA;&#xA;Let&#39;s say we have a function that takes two parameters by default but can also take three parameters when given; we can process each case with a different implementation. This creates a lot of freedom and broadens the scope of customization in complex applications. 
&#xA;&#xA;```cpp&#xA;#include &#34;iostream&#34;&#xA;#include &#34;string&#34;&#xA;using namespace std;&#xA;&#xA;void intro(string name, int age)&#xA;{&#xA;&#x9;cout&lt;&lt;name&lt;&lt;&#34; is &#34;&lt;&lt;age&lt;&lt;&#34; years old.\n&#34;;&#xA;}&#xA;&#xA;void intro(string name)&#xA;{&#xA;&#x9;cout&lt;&lt;&#34;Hello, &#34;&lt;&lt;name&lt;&lt;endl;&#xA;}&#xA;&#xA;&#xA;int main(){&#xA;&#xA;&#x9;intro(&#34;Ash&#34;,10);&#xA;&#x9;intro(&#34;Brock&#34;);&#xA;&#xA;&#x9;return 0;&#xA;}&#xA;&#xA;```&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1632404708549/4h1izLpdS.png)&#xA;&#xA;In this example, we have a single function `intro` but it has two implementations: one takes two parameters and the other takes just one. This is called Function Overloading. We just applied this to the class functions called constructors, but the concept is the same. We&#39;ll dive deeper into overloading in the next few sections. &#xA;&#xA;## Conclusion&#xA;&#xA;That is it from this part. We&#39;ll see `Destructors` in the next one. In this part, we learnt about constructors and how important they are in creating the desired flow in our application. We also touched on the types of constructors and function overloading a bit, which are also important aspects of OOP.&#xA;&#xA;Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>What&#39;s in Techstructive Blog</title>
      <link>https://www.meetgor.com/posts/techstructive-blog-init</link>
      <description>Introduction **Hello World!** This is an article in which I&#39;ll introduce why Techstructive Blog exists and what it covers in brief. This is just a casual introd</description>
      <pubDate>Sun, 19 Sep 2021 00:00:00 UTC</pubDate>
<content>## Introduction&#xA;&#xA;**Hello World!** This is an article in which I&#39;ll introduce why Techstructive Blog exists and what it covers in brief. This is just a casual introduction to my blogging niche. P.S. It&#39;s not a technical article, but it has challenged me enough to write and complete it.  &#xA;&#xA;&gt; Techstructive Blog is my personal blog hosted on GitHub pages as well as on  [Hashnode](https://techstructiveblog.hashnode.dev/). It first started on Hashnode, and soon I created my own using Jekyll on GitHub pages. On this blog, I write articles mainly about Vim, BASH, and some programming languages and tools. &#xA;&#xA;Firstly, thanks to Hashnode for providing a free platform for developers to share their journey with the world. I am happy that I have found such a gold mine of a supportive and growing community of developers who help find and inspire each other&#39;s way.&#xA;&#xA;&#xA;## What Techstructive Blog covers&#xA;&#xA;&gt; Vim + Linux + Random Stuff&#xA;&#xA;Techstructive Blog started around 20th February 2021, when I first found  Hashnode. There I wrote the first article of my technical writing journey. The article was about `Vim`. It covered some quite basic stuff on how to get started in Vim. After a few months I really found the need to blog about my learnings, as I felt more confident about my skills after writing 10-12 articles on the  [GeeksforGeeks portal](https://auth.geeksforgeeks.org/user/meetgor/articles). I again started writing articles about Vim, and BASH as well. &#xA;&#xA;Fast forward to today,  [Techstructive Blog](https://techstructiveblog.hashnode.dev/)  has 26 articles on it, 14 being about Vim and 8 being about BASH. That is the niche of Techstructive Blog, revolving around GNU/Linux and its applications. I also talk about Django and C++ as they are technologies I love working with and creating personal projects in.  
&#xA;&#xA;## Rambling about Vim&#xA;&#xA;I don&#39;t know, it might feel to the readers like I am rambling and boasting about how much I know about Vim, but I honestly love and adore the text editor as it is so customizable and wicked fast. I just write about what might be helpful and important to others, whether that is a setup or just editing in Vim. I personally use Vim every day, so I feel more confident and excited to write about it. I feel it&#39;s just the start of Vim for me; I can go on writing articles about Vim that will be helpful for someone or the other. I have 7 articles already planned for Vim, I just need to finish off certain pending tasks and projects first. &#xA;&#xA;I am gonna primarily focus on how to use Vim for various languages or frameworks. This also applies to using Vim in different environments and improving the workflow of daily usage. Also notice that all my Vim articles are short and simple. They just cover the topics mentioned in the title, because I don&#39;t feel like spitting everything into one article where no one can understand and connect the topics together. Instead, I break topics into series, just like the Vim:  [Tabs](https://techstructiveblog.hashnode.dev/vim-tabs), [ Window Splits](https://techstructiveblog.hashnode.dev/vim-window-splits), and  [Buffers](https://techstructiveblog.hashnode.dev/vim-buffers)  articles. &#xA;&#xA;I plan to write more and more articles about Vim, and maybe a weekly newsletter of Vim tricks and commands. That may sound exaggerated, but I have decent skills to tackle it. Who knows what the future holds? But it&#39;s important to plan your way.&#xA;&#xA;## Linux &#xA;&#xA;Well, I technically don&#39;t have any articles about `Linux` yet, but I have a lot of things to share about it. I have a series to be written about it with a minimum of 10 articles in it. But I have not been able to start, in search of a perfect time. 
Though I have some BASH articles and a  [series](https://techstructiveblog.hashnode.dev/series/bash-scripting)  about scripting already. I have always preferred to use Linux even though I use Windows 7 daily; I have a virtual machine to satisfy some of my needs and curiosity. That aside, I have used Termux, which is a terminal emulator for Android; I literally spent the lockdown coding on it for weeks and months. &#xA;&#xA;I want to talk about the need to use Linux and how it has revolutionized programming and development around the world. This is really a biased topic, because people who use a certain OS are gonna keep using it, but I hope that they would at least get some insights from those articles.&#xA;&#xA;## Web development &#xA;&#xA;I have created several projects/websites using Vanilla Javascript and Django, so I can pretty much talk about the things which I&#39;ve learned and the mistakes that I have made. I have a good grasp on Django now, so I feel much more confident to write a series about it, covering the concepts and the terminologies that I faced challenges getting through.&#xA;&#xA;&#xA;## General Programming&#xA;&#xA;Well, I can&#39;t lie, I feel really excited when it comes to learning new things, and I genuinely learn them until I create something from it, be it C/C++, Python, Django, Javascript, BASH, or other tools or frameworks. I am not limiting myself to any language or topic, but ya, I can proficiently write about one and slowly gather around others as well. &#xA;&#xA;&gt; Jack of all trades, master of none.&#xA;&#xA;That might be a good quote applicable to me, but I believe in  [PI-Shaped Learning](https://appunite.com/blog/become-a-pi-shaped-developer). PI-shaped learning emphasizes broadening the knowledge in all domains but having expertise in one. I can talk about Web development, Android development, and DevOps, but I will be majorly inclined towards one and have deep knowledge of the ins and outs of that domain. 
&#xA;&#xA;## Conclusion&#xA;&#xA;Phew! That took a while to finish. But hey! Thanks a lot for reading till here. I hope you got my thoughts about the scope of this blog. I vow to continue this blogging journey in the future and improve my skills and experience as well as the reader&#39;s.&#xA;&#xA;Well, until then HAPPY CODING :)</content>
      <type></type>
    </item>
    <item>
      <title>Object-Oriented Programming in C++: Access Modifiers</title>
      <link>https://www.meetgor.com/posts/oop-cpp-p2</link>
      <description>Introduction Moving on to Part 2 of Object-Oriented Programming in C++, we will look into the concept of access modifiers which we skipped in the first section.</description>
      <pubDate>Sat, 18 Sep 2021 00:00:00 UTC</pubDate>
<content>## Introduction&#xA;&#xA;Moving on to Part 2 of Object-Oriented Programming in C++, we will look into the concept of access modifiers which we skipped in the first section. This won&#39;t be a comprehensive guide on access modifiers, as it requires some other concepts like `Inheritance` which we will introduce in the next few sections. This series will be like connecting the pieces of OOP together and building a good foundation.  &#xA;&#xA;In this part, we will discuss the access modifiers and their related concepts. Below is a gist of what will be covered in this part. &#xA;&#xA;- Understanding Access Modifiers&#xA;- Types of Access Modifiers&#xA;    1. Private&#xA;        - Getters &#xA;        - Setters&#xA;    2. Public&#xA;    3. Protected&#xA;        - Introduction to Derived and Friend classes&#xA; &#xA;Without wasting any time, let&#39;s roll in.&#xA;&#xA;## What are Access Modifiers&#xA;&#xA;Access modifiers, as the name suggests, are used to change the accessibility of properties and methods in a Class. That means if we want only a certain scope of our code to access the class members, we can specify that using `access modifiers`. Access modifiers form the basis or the pillar of `Data Encapsulation` in OOP.&#xA;&#xA;### Data Encapsulation&#xA;&#xA;Data encapsulation is the OOP term for keeping all the components in a single entity or unit. Here the components are `properties` and `methods`, and they are contained in a single unit called a `Class`. We need to use the class members, i.e. the properties and methods, carefully in order to avoid security issues, limiting certain members to be available only to a specific scope or block of code.&#xA;&#xA;Hence, the encapsulation of class members helps in structuring the class securely and allows access in a particular, expected way. &#xA;&#xA; &#xA;## Types of Access Modifiers in C++&#xA;&#xA;We do have certain modifiers or limiters for accessing class members in C++. 
We can either keep the class members limited to the class itself (and friend classes), keep them open to the global scope, or restrict them to certain classes only (derived and friend classes). &#xA;&#xA;Let&#39;s discuss them one by one.&#xA;&#xA;- ### Private&#xA;&#xA;This access modifier limits access to the class members to the class&#39;s own methods and its friend functions and classes. We will discuss what friend classes are in the next few parts of the series. This restricts the usage of the class members directly from the main function or other scopes in our program. You can use `private:` to indicate that the properties and methods declared below it are private to the class. &#xA;&#xA;```cpp&#xA;class Name&#xA;{ &#xA;    private:&#xA;        int name;&#xA;        //other properties and methods&#xA;};&#xA;```&#xA;&#xA;OR (don&#39;t specify anything, it **is private by default**)&#xA;&#xA;```cpp&#xA;class Name&#xA;{ &#xA;    int name;&#xA;    //other properties and methods&#xA;};&#xA;```&#xA;&#xA;This is assigned to every member of the class by default. That is why we explicitly made the members public in an example in the previous part of the series. Let&#39;s see what happens if we do not make them public. &#xA;&#xA;```cpp&#xA;#include&lt;iostream&gt;&#xA;using namespace std;&#xA;&#xA;class Animal{&#xA;&#x9;int legs;&#xA;&#x9;string name;&#xA;&#x9;void print()&#xA;&#x9;{&#xA;&#x9;&#x9;cout&lt;&lt;name&lt;&lt;&#34; has &#34;&lt;&lt;legs&lt;&lt;&#34; legs.\n&#34;;&#xA;&#x9;}&#xA;};&#xA;&#xA;int main()&#xA;{&#xA;&#xA;&#x9;Animal dog;&#xA;&#x9;dog.name=&#34;Floyd&#34;;&#xA;&#x9;dog.legs=4;&#xA;&#x9;dog.print();&#xA;&#x9;&#xA;&#x9;return 0;&#xA;}&#xA;&#xA;```&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1631940306089/l4zOKB-RY.png)&#xA;&#xA;You can see we cannot directly use those properties and methods which are set as private. It is a convention to keep properties private and create public methods to access them from the rest of the program. 
The public methods used to access and modify the value of the private properties are called `getters` and `setters` respectively.&#xA;&#xA;#### Getters&#xA;&#xA;Getters are the public methods of a class that are used to access the value of a private property of that class. A getter is a function that returns the data of that particular property. We access the function just as we access any other public function.&#xA;&#xA;We define the getter function as:&#xA;&#xA;```cpp&#xA;datatype getpropertyname()&#xA;{&#xA;    return propertyname;&#xA;}&#xA;```&#xA;&#xA;We can access the getter function as a normal public function, but this function returns a value, so we can store it in a variable and do all sorts of things.&#xA;&#xA;```cpp&#xA;classname objname;&#xA;cout&lt;&lt;objname.getpropertyname()&lt;&lt;endl;&#xA;```&#xA;&#xA;#### Setters&#xA;&#xA;Setters are the public methods of a class that are used to assign/modify the value of the private properties of that class. A setter is a function that simply assigns the private property the data which is passed in as an argument to the function. We access the function just as we access any other public function. 
We have to pass in the value to assign to the property as an argument to the function.&#xA;&#xA;We define the setter function as:&#xA;&#xA;```cpp&#xA;void setpropertyname(datatype x)&#xA;{&#xA;    propertyname = x;&#xA;}&#xA;```&#xA;&#xA;We can access the setter function as a normal public function, but this function takes in a parameter, so we need to pass in a value of the same data type as that property.&#xA;&#xA;```cpp&#xA;classname objname;&#xA;objname.setpropertyname(data);&#xA;```&#xA;&#xA;So, after applying the getter and setter concept to our example, we can use them and make the properties private without any issues.&#xA;&#xA;&#xA;```cpp&#xA;#include&lt;iostream&gt;&#xA;using namespace std;&#xA;&#xA;class Animal{&#xA;&#x9;int legs;&#xA;&#x9;string name;&#xA;&#x9;public:&#xA;&#xA;    // take in a parameter of the same type as the property&#xA;&#x9;void setName(string dogname)&#xA;&#x9;{&#xA;&#x9;&#x9;name=dogname;&#xA;        // assign the property the value of the parameter&#xA;&#x9;}&#xA;&#xA;    // return type should be the same as the property&#xA;&#x9;string getName()&#xA;&#x9;{&#xA;&#x9;&#x9;return name;&#xA;&#x9;}&#xA;&#xA;&#x9;void setleg(int noflegs)&#xA;&#x9;{&#xA;&#x9;&#x9;legs=noflegs;&#xA;&#x9;}&#xA;&#xA;&#x9;int getleg()&#xA;&#x9;{&#xA;&#x9;&#x9;return legs;&#xA;&#x9;}&#xA;};&#xA;&#xA;int main()&#xA;{&#xA;&#x9;Animal dog;&#xA;&#x9;dog.setName(&#34;Floyd&#34;);&#xA;&#x9;dog.setleg(4);&#xA;&#x9;cout&lt;&lt;dog.getName()&lt;&lt;endl;&#xA;&#x9;cout&lt;&lt;dog.getleg()&lt;&lt;endl;&#xA;&#x9;&#xA;&#x9;return 0;&#xA;}&#xA;&#xA;```&#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1631949135706/zVd5cMPKa.png)&#xA;&#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1631956955383/MKp5YDMv5.png)&#xA;&#xA;The above code runs successfully, and hence we make our program more secure and provide limited access to the rest of the components. 
You can see how we have passed the data as the parameter to the setter function; it should match the data type of the property you are trying to set, and the getter function should likewise have the same return type as the property.&#xA;&#xA;The code is also much more readable and structured for others to read and understand. It might seem a silly thing, but it really improves the maintainability in the longer run. &#xA;&#xA;&#xA;- ### Public&#xA;&#xA;This one should be familiar by now; we have been using this access modifier all along and it is quite straightforward to understand. We created the getter and setter functions using this access modifier.&#xA;&#xA;So, the `public` access modifier makes the class members accessible anywhere in the program. This might be OK for many small applications, but it is not ideal for production-level applications as it might cause undesired consequences, i.e. BUGS.&#xA;&#xA;We need to explicitly write public in the class definition, as `private` is set by default. 
So, as we saw with the simple example in the previous part, it can be a lot easier to go with public members to understand OOP, but OOP really shines in aspects like Encapsulation and Inheritance of Classes, which we will surely see in the upcoming parts.&#xA;&#xA;This is the example from the previous part:&#xA;&#xA;```cpp&#xA;#include&lt;iostream&gt;&#xA;using namespace std;&#xA;&#xA;class Animal&#xA;{&#xA;&#x9;public:&#xA;&#x9;&#x9;int legs;&#xA;&#x9;&#x9;string name;&#xA;&#x9;&#x9;void print()&#xA;&#x9;&#x9;{&#xA;&#x9;&#x9;&#x9;cout&lt;&lt;name&lt;&lt;&#34; has &#34;&lt;&lt;legs&lt;&lt;&#34; legs.\n&#34;;&#xA;&#x9;&#x9;}&#xA;};&#xA;&#xA;int main()&#xA;{&#xA;&#x9;Animal dog;            // create an object of class Animal&#xA;&#xA;&#x9;dog.name=&#34;Floyd&#34;;     // assign the class property to actual data in memory&#xA;&#x9;dog.legs=4;                 &#xA;&#x9;dog.print();          // call a method(function) associated with the object&#39;s class&#xA;&#x9;&#xA;&#x9;return 0;&#xA;}&#xA;&#xA;```&#xA;![public-class](https://cdn.hashnode.com/res/hashnode/image/upload/v1631795000896/kUvcfVU7Y.png)&#xA;&#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1631958067712/yU8uvd5KQ.png)&#xA;&#xA;As explained earlier, it is accessible from the main function or other scopes as well. We can access the members using the dot separator(`.`) to assign a value or to call a method.&#xA;&#xA;- ### Protected&#xA;&#xA;This is quite a handy access modifier, as it gives us the flexibility to keep class members private yet accessible to derived or friend classes. We will see derived and friend classes in the next few parts, but for now, let&#39;s understand them in an abstracted way.&#xA;&#xA;#### Derived Class(Child Class/ Sub Class)&#xA;&#xA;This is a concept in Inheritance, i.e. passing the members of one class to another. 
So, there have to be two classes: the main (parent) class, and another class that will inherit or take in the members from the parent class. The derived class has access to the parent&#39;s public and protected members only.&#xA;&#xA;&#xA;#### Friend class&#xA;&#xA;A friend class is a class that is allowed to access another class&#39;s **private** and protected properties or methods. It is kind of a special tag assigned to a class so that it can access a certain class&#39; members. We will see it in detail afterward; right now, it&#39;s enough to understand that a friend class is a class that can access a particular class&#39; members, be they private or protected. &#xA;&#xA;The difference between a friend and a derived class is that a friend class can access the private members of the class to which it is a friend, but a derived class can&#39;t. Friendship also isn&#39;t inherited. Again, we will see this in detail. &#xA;&#xA;So, it doesn&#39;t make sense to explain protected fully here. But just assume friend classes and derived classes are a thing.&#xA;&#xA;```cpp&#xA;#include&lt;iostream&gt;&#xA;using namespace std;&#xA;&#xA;class Animal&#xA;{&#xA;&#x9;public:&#xA;&#x9;&#x9;int legs;&#xA;&#x9;&#x9;string type;&#xA;&#x9;&#x9;&#xA;&#x9;&#x9;void print()&#xA;&#x9;&#x9;{&#xA;&#x9;&#x9;&#x9;cout&lt;&lt;type&lt;&lt;&#34; has &#34;&lt;&lt;legs&lt;&lt;&#34; legs.\n&#34;;&#xA;&#x9;&#x9;}&#xA;&#xA;&#x9;protected:&#xA;&#x9;&#x9;string name;&#xA;};&#xA;&#xA;&#xA;int main()&#xA;{&#xA;&#xA;&#x9;Animal dog;&#xA;&#x9;dog.type=&#34;dog&#34;;&#xA;&#x9;dog.legs=4;&#xA;&#x9;dog.name=&#34;Floyd&#34;;&#xA;&#x9;dog.print();&#xA;&#x9;&#xA;&#x9;return 0;&#xA;}&#xA;&#xA;```&#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1631956345308/epOckpl6X.png)&#xA;&#xA;This gives an error, as protected members can be accessed only by derived or friend classes; since we are accessing `name` from the main function, i.e. the global scope, it&#39;s not allowed. 
We can only access the protected members from the derived or friend classes. &#xA;&#xA;So, let&#39;s derive a class from the base class (Animal), and after that, we can see protected members in action.&#xA;&#xA;```cpp&#xA;#include&lt;iostream&gt;&#xA;using namespace std;&#xA;&#xA;class Animal&#xA;{&#xA;&#x9;public:&#xA;&#x9;&#x9;int legs;&#xA;&#x9;&#x9;string type;&#xA;&#x9;&#x9;&#xA;&#x9;&#x9;void print()&#xA;&#x9;&#x9;{&#xA;&#x9;&#x9;&#x9;cout&lt;&lt;type&lt;&lt;&#34; has &#34;&lt;&lt;legs&lt;&lt;&#34; legs.\n&#34;;&#xA;&#x9;&#x9;}&#xA;&#xA;&#x9;protected:&#xA;&#x9;&#x9;string name;&#xA;};&#xA;&#xA;class Pet:public Animal &#xA;{&#xA;&#x9;public:&#xA;&#x9;&#x9;void print()&#xA;&#x9;&#x9;{&#xA;&#x9;&#x9;&#x9;name=&#34;Floyd&#34;;&#xA;&#x9;&#x9;&#x9;cout&lt;&lt;name&lt;&lt;&#34; is a &#34;&lt;&lt;type&lt;&lt;endl;&#xA;&#x9;&#x9;&#x9;cout&lt;&lt;type&lt;&lt;&#34; has &#34;&lt;&lt;legs&lt;&lt;&#34; legs.\n&#34;;&#xA;&#x9;&#x9;}&#xA;&#xA;};&#xA;&#xA;int main()&#xA;{&#xA;&#xA;&#x9;Pet dog;&#xA;&#x9;dog.type=&#34;dog&#34;;&#xA;&#x9;dog.legs=4;&#xA;&#x9;dog.print();&#xA;&#x9;&#xA;&#x9;return 0;&#xA;}&#xA;&#xA;&#xA;```&#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1631956708698/HJTOZN9sw.png)&#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1631956815750/a2y5QLiPT.png)&#xA;&#xA;We have accessed the protected property `name` in the derived class `Pet`. Yes, this is how we derive a class in C++:&#xA;&#xA;```cpp&#xA;// A child class derived from a base class&#xA;class NewClassName: public BaseClassName&#xA;{&#xA;     // properties and methods&#xA;};&#xA;```&#xA;So, after deriving the `Pet` class from the `Animal` class, we have access to its public and protected members as well. We simply assigned one of the protected members from its base class, i.e. `name`, and used it in the function `print`. 
&#xA;&#xA;This is how you can use protected members in a derived class; you can also do it in a friend class, but we will look at that in a separate part. Remember, you can **use getters and setters as well to assign and access those protected members**. Keeping it simple and easy to understand is what a protected access modifier can provide. &#xA;&#xA;**Protected access modifiers are quite great and provide much more flexibility than `private` and more privacy than `public` access modifiers.**&#xA;&#xA;## Which to use when?&#xA;&#xA;It is often a good practice to use private members, but it might not be possible to make everything private, so we can make certain properties private and some methods public or protected, as per requirement and complexity. &#xA;&#xA;&#xA;![acmod.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1631953460021/_4WlLGDI_.png)&#xA;&#xA;Looking at the above chart, you can analyze your problem and work out a way to find the perfect secure match for your application. &#xA;&#xA;Remember to use public members only when you have to use them explicitly throughout the program. If you find that a member shouldn&#39;t be exposed in a certain scope, make it private or protected.&#xA;&#xA;If there is some kind of hierarchy in the program and its components, analyze it and make a structure of it to have a better understanding of the program flow.&#xA;&#xA;## Conclusion&#xA;&#xA;So, from this part, we were able to understand access modifiers and got somewhat of a dive into Inheritance. There were some concepts like Friend and Derived classes, Inheritance, and Encapsulation which were just explained in short, but they are topics that deserve separate attention. We&#39;ll discuss them in the upcoming parts.&#xA;&#xA;Thanks for reading. Hope you understood the concepts in a better way. Until then, as always, Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>Object-Oriented Programming in C++: Classes and Objects</title>
      <link>https://www.meetgor.com/posts/oop-cpp-p1</link>
      <description>Introduction We all know that C++ is famously known for Object-Oriented Programming, but what is Object-Oriented Programming? Well, this is the article, where w</description>
      <pubDate>Thu, 16 Sep 2021 00:00:00 UTC</pubDate>
<content>## Introduction&#xA;&#xA;We all know that C++ is famously known for Object-Oriented Programming, but what is Object-Oriented Programming? Well, this is the article where we will explore the basics of Object-Oriented Programming, its building blocks i.e. Classes and Objects, and get familiar with its basic semantics in C++. Let&#39;s get into it straight away!&#xA;&#xA;## What is Object-Oriented Programming?&#xA;&#xA;Object-Oriented Programming(OOP) is a way to structure and design a program or an application. Object-Oriented Programming deals with creating classes and objects and its related terminologies. OOP helps in following principles like DRY(Don&#39;t Repeat Yourself) and KISS(Keep It Simple, Stupid). Using OOP, certain applications and problems can be solved in a simple and efficient way. It helps in better understanding and readability of the code.&#xA;&#xA;&gt; Virus is a class and COVID-19 is an object of the Virus class XD&#xA;&#xA;In OOP we create a collection of data and functionalities by organizing them in a structure called a `Class`. We then instantiate or create the actual data by creating an `object` of the particular class. Once we have created a class, we can simply create as many objects as we need to. Objects are basically the storage of the actual data in memory. Let&#39;s explore them in the next few sections.&#xA;&#xA;NOTE: There are a lot of terminologies in OOP; just remember the basics behind those terms and not the different names for them.&#xA;&#xA;## What are Classes&#xA;&#xA;Classes are the structure or template of the data and its associated behavior. The data or the variables in the classes are called `properties` or `attributes`(also `data members`). We also have functions that define the behavior of the properties or the data present in the class, and these are called `methods`.&#xA;&#xA;A Class in C++ should be defined outside of the main function. 
In C++, `class` is a reserved word and hence it is used to declare and define a class. It has a general structure like:&#xA;```cpp&#xA;class Name&#xA;{&#xA;    // properties and methods&#xA;};&#xA;```&#xA;&#xA;It&#39;s a convention to capitalize the initial letter of a class name. We can define class methods outside the class body as well; we will see that in the next few parts of the series.  &#xA;&#xA;Let&#39;s create a basic class in C++,&#xA;```cpp&#xA;class Animal&#xA;{&#xA;&#x9;public:&#xA;&#x9;&#x9;int legs;&#xA;&#x9;&#x9;string name;&#xA;&#x9;&#x9;void print()&#xA;&#x9;&#x9;{&#xA;&#x9;&#x9;&#x9;cout&lt;&lt;name&lt;&lt;&#34; has &#34;&lt;&lt;legs&lt;&lt;&#34; legs.\n&#34;;&#xA;&#x9;&#x9;}&#xA;};&#xA;&#xA;```&#xA;Here in the above code, we have a class called `Animal` and it has 2 properties called `legs`, which is an int, and `name`, a string. We also have a method called `print` which displays the current object&#39;s name and leg count. We will see what the current object is in the next section. &#xA;&#xA;![oopcpp1-class.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1631797771422/6jCNpCYz_q.png)&#xA;&#xA;We use those properties or the variables defined in the class in our application either by accessing them through the objects or in the methods of that class. The variables or properties in a class are not assigned to any memory address, i.e. they are just kind of a blueprint or placeholder tags to match for the data.&#xA;&#xA;### Access Specifier&#xA;We have written `public` before the definition of the properties of the class; it is an access modifier. Don&#39;t worry, we will see it in detail in the next few parts of the series; just for the time being, understand that we can choose whether we want the main function(or any function globally) to access the class&#39; properties or not. &#xA;&#xA;We are saying `public`, which means we can access the properties of this class anywhere outside the class. 
This is not a good practice, but just for simplicity of understanding, we are using public. **If you do not specify anything like `public`, it is by default `private`, which means the properties of the class cannot be accessed anywhere outside the class**. Let&#39;s keep it simple right now.&#xA;&#xA;We also have other concepts like defining the methods outside/inside the class, header files, constructors, destructors, and many others related to a class definition; we will cover them in the next few parts.&#xA;&#xA;## What are Objects&#xA;&#xA;Now, we have created a class, but where is the data actually? It will be in `objects`. We actually assign the data or properties to a memory address by creating the objects of that particular class. &#xA;&#xA;We can create objects of a particular class in C++ by writing the name of the class that this object will belong to and the name of the object, `classname objectname;`. This will create or assign memory for the properties of the class to the object. &#xA;&#xA;After the object has been created, we can assign values to the properties of the class in the object. We can access the properties of the class by referencing the name of the object with `.` and the name of the property or variable of the class, `objectname.propertyname`; we can assign the value simply by using the assignment operator `=` and give the properties a value, input from the user, or however you like it.  
&#xA;&#xA;```cpp&#xA;#include&lt;iostream&gt;&#xA;using namespace std;&#xA;&#xA;class Animal&#xA;{&#xA;&#x9;public:&#xA;&#x9;&#x9;int legs;&#xA;&#x9;&#x9;string name;&#xA;&#x9;&#x9;void print()&#xA;&#x9;&#x9;{&#xA;&#x9;&#x9;&#x9;cout&lt;&lt;name&lt;&lt;&#34; has &#34;&lt;&lt;legs&lt;&lt;&#34; legs.\n&#34;;&#xA;&#x9;&#x9;}&#xA;};&#xA;&#xA;int main(){&#xA;&#xA;&#x9;Animal dog;            // create an object of class Animal&#xA;&#xA;&#x9;dog.name=&#34;Floyd&#34;;     // assign the class property to actual data in memory&#xA;&#x9;dog.legs=4;                 &#xA;&#x9;dog.print();          // call a method(function) associated with the object&#39;s class&#xA;&#x9;&#xA;&#x9;return 0;&#xA;}&#xA;&#xA;```&#xA;&#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1631795000896/kUvcfVU7Y.png)&#xA;&#xA;If Animal is the class, then the dog here is the object. As simple as that. We are technically **instantiating an object** when we say `Animal dog`. We assign the variables the memory as defined in the class. We give some value to the properties using the `.`(dot syntax) after the object name. &#xA;&#xA;So `dog.legs = 4;` will assign the value of 4 to the memory address of the object&#39;s property(dog is the object and legs is the property).&#xA;&#xA;Similarly, we can call the function associated with the class as we do for properties, but we use `()` to indicate that we are calling the function.&#xA;&#xA;&#xA;![oopcpp1-obj.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1631799152681/UhNDYdEXK.png)&#xA;&#xA;So that is how we create objects in C++.&#xA;&#xA;## Why and When to use OOP&#xA;&#xA;Object-Oriented programming makes quite complex problems and applications easier to structure and scale, and enhances the readability of the codebase. 
We can use OOP to create bigger, real-life applications; it allows us to add new features quite easily and thereby improves maintainability.&#xA;&#xA;The following might be the reasons to use OOP in an application:&#xA;&#xA;- If the application cannot be stated in a single sentence, you need to break the components into classes and hence use OOP.&#xA;&#xA;- It is quite easy to maintain OOP applications even if there are a lot of maintainers of the project.&#xA;&#xA;- If you see certain parts of the code repeating in many places, OOP is the way to go.&#xA;&#xA;- If you want your application&#39;s data to be secure from the rest of the components, OOP is again a great option as you can use Encapsulation to your advantage.&#xA;&#xA;&#xA;## Conclusion&#xA;&#xA;So, we have studied the concept of Object-Oriented Programming and how to define classes and create objects in C++. There are a lot of concepts in OOP to pick up in the next few parts of the series, but this should build the foundation of OOP as a paradigm or a way of thinking when it comes to problem-solving and programming.&#xA;&#xA;I hope you have got a good introduction to the basics of Object-Oriented Programming and its semantics in C++. Thank you for reading. In the next part, we will explore the Access Modifiers/Specifiers in Classes. Until then, Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>My Journey as a Technical Writer</title>
      <link>https://www.meetgor.com/posts/technical-writer-journey</link>
      <description>Introduction Hello World! I am Meet Gor, a self-taught web developer, Linux enthusiast, and a newbie technical writer. I am currently studying in an Engineering</description>
      <pubDate>Tue, 14 Sep 2021 00:00:00 UTC</pubDate>
<content>## Introduction&#xA;&#xA;Hello World! I am Meet Gor, a self-taught web developer, Linux enthusiast, and a newbie technical writer. I am currently studying in an Engineering college pursuing my B. Tech in I.T. I love to explore new programming languages and technologies, create innovative projects, and write stories and blogs like this. &#xA;&#xA;## How it started&#xA;&#xA;When I was learning Vim, I found the need to write some kind of cheatsheet for myself. So I started writing it, and soon I realized I had written a kind of an article instead of a cheatsheet XD There were a lot of chunks of it unorganized but meaningful in some sense. So, from there I started to research article writing. I found the perfect term for expressing my techno feelings, i.e. &#34;Technical Writing&#34;. I soon landed on Hashnode; I knew Medium and GeeksforGeeks but found Hashnode absolutely perfect and much more open to the creativity of an author. &#xA;&#xA;### Starting on Hashnode&#xA;My [first article](https://techstructiveblog.hashnode.dev/why-and-how-you-should-use-vim-as-a-text-editor-and-ide) was published on [Hashnode](https://hashnode.com/@Mr-Destructive/joinme), which I absolutely love and adore. That was a fairly big article, about a 12-minute read, with a beginner me writing everything about Vim that I knew at that time. The response was not that bad; I received a couple of reactions and comments to pump me up, but I never felt confident enough to write more articles. &#xA;&#xA;### GeeksforGeeks Internship&#xA;That was until I found an **Internship at GeeksforGeeks for a Technical Writer** role. [Geeksforgeeks](https://www.google.com/aclk?sa=l&amp;ai=DChcSEwj-_8TAq_7yAhUtlUsFHdI4AD4YABAAGgJzZg&amp;ae=2&amp;sig=AOD64_1xL_mUI1OLWumXOK-H1jF4wRP8eg&amp;q&amp;nis=1&amp;adurl&amp;ved=2ahUKEwjSlb_Aq_7yAhXMb30KHXM6BPsQ0Qx6BAgCEAE) is a start-up for Computer science students to learn and grow with free as well as paid courses and mentors. 
In a couple of months, I wrote some 10-12 articles about Shell scripting. There I really found my interest and the confidence I wanted. It was really a great experience working there, and I loved the freedom it provided to the authors, with genuine and constructive feedback along with a decent stipend. [Here](https://auth.geeksforgeeks.org/user/meetgor/articles) you can find my work.&#xA;&#xA;### The start of the `Techstructive Blog` &#xA;Soon, on June 6th, something struck my mind: I should write about how to use Vim for doing various things. The first idea was what I was currently learning in College, i.e. Python, so I started writing about the Vim set up for Python and completed it within 2 hours. I published it on Hashnode and **from there I never looked back. I started writing an article every 3-4 days**. That summed up to 21 articles over 3 months, till September. I also made my personal open-source blog called [`Techstructive Blog`](https://mr-destructive.github.io/techstructive-blog/) using Jekyll and hosted it on GitHub pages. &#xA;&#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1631625512256/eyN6xO92G.png)&#xA;&#xA;I experimented with a lot of blogging tools and frameworks like [Netlify with Flask](https://www.patricksoftwareblog.com/flask-tutorial/), [Django distill](https://github.com/meeb/django-distill), [Gatsby Js](https://www.gatsbyjs.com/starters/gatsbyjs/gatsby-starter-blog), etc., and finally landed on [Github pages](https://pages.github.com/) with [Jekyll](https://jekyllrb.com/) as it is quite customizable and makes it a lot easier to add certain components as well. &#xA;&#xA;&#xA;&#xA;## Why I kept going?&#xA;&#xA;It&#39;s not hard to keep up with what you love to do. Even if you are not that interested in doing something, just having curiosity and consistency can turn any work into a skill. 
Here are the reasons why I love to keep going as a Technical writer (some of them are from the Bootcamp session by @[Edidiong Asikpo](https://twitter.com/didicodes) and @[Quincy Larson](https://twitter.com/quincy)) :) &#xA;&#xA;- Over a span of 5-6 months I have learned a lot more than I had in my past 2 years; I understood that I am writing for myself first and not only for other people. &#xA;- It helps me in improving my online presence and clearly showcases my skills and interests. &#xA;- Being able to sum up ideas and document them well.&#xA;- Maintaining a record of learning for my future self.&#xA;- Learning to get things done by writing articles consistently in an allotted time.&#xA;&#xA;&#xA;## How I kept going?&#xA;&#xA;I am writing articles almost daily, one day for my blog and another for GeeksforGeeks. I have a schedule and follow it almost punctually. I have never run out of ideas and motivation till now; every time I feel that there&#39;s no motivation in me, something or the other pops up, just like this [Hashnode Bootcamp - The Art &amp; business of Technical writing](https://hashnode.com/bootcamp). YouTube videos, hackathons, bootcamps, projects, and conversations with friends can have a constructive and drastic impact; they create a refreshed experience and allow you to clarify the doubt in yourself. You just have to be open to ideas and feedback from other people and learn in harmony. &#xA;&#xA;## My writing workflow&#xA;&#xA;This is a really neglected part when you ask a technical writer, yet it can really improve your productivity. I use both Hashnode and Vim for writing out the markdown script. I like to first write the subheadings or the content which I want to explain in that article. That breaks a bigger idea into small chunks, and I repeat the process until I feel that is the lowest abstracted way I can explain something. 
I record screencasts as GIFs with the ScreenToGif recorder, and use MS Paint and Canva as image editors. **Hashnode makes uploading and accessing images and gifs so easy, and it&#39;s quick too; really, Hashnode is just flawless for a tech writer.** &#xA;&#xA;&#xA;## Conclusion&#xA;&#xA;That&#39;s been my story so far. I have grown from a noob coder to a somewhat confident nerd who can quickly write functional scripts, jump around Vim with more speed, and of course, teach someone something that can be helpful. It also is a way for me to relax and enjoy the process of programming. &#xA;&#xA;Thanks a lot if you have been reading till here. &#xA;**Thank you Hashnode for providing such a great opportunity to create a side hustle here**. As always, Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>Vim for Competitive Programming</title>
      <link>https://www.meetgor.com/posts/vim-for-cp</link>
      <description>Introduction Vim is not a bad text editor when it comes to using it for Competitive Programming. It&#39;s kind of one way or the other, you would love it or you cou</description>
      <pubDate>Mon, 13 Sep 2021 00:00:00 UTC</pubDate>
<content>## Introduction&#xA;&#xA;Vim is not a bad text editor when it comes to using it for Competitive Programming. It&#39;s one way or the other: you would love it, or you could trash it, as it can seem to waste a tremendous amount of time writing code every time. But once you have a decent setup, some key-bindings and plugins can improve your speed in using Vim. I personally have used it since the beginning and haven&#39;t regretted it even a tiny bit till date. It&#39;s a flawless and enjoyable experience. I&#39;ve used C++ for my CP journey; C is quite similar as well, but C++ has more library support and is a bit easier to write comparatively.&#xA;&#xA;Using Vim for CP is a bit challenging in the initial stage, but just stay consistent and you&#39;ll come out the other side with flying colors. Here, I&#39;ll give some quick key-bindings and plugins to add to Vim to enhance the workflow for Competitive Programming.&#xA; &#xA;## Quick Boilerplate code&#xA;&#xA;Firstly, let&#39;s create a key-binding for the boilerplate code. We just need to map a key to the code we want to insert, expressed as a sequence of Vim commands.&#xA;&#xA;For C++&#xA;```vim&#xA;nnoremap cpf i#include&lt;iostream&gt;&lt;Esc&gt;ousing namespace std;&lt;Esc&gt;o&lt;CR&gt;int main(){&lt;Esc&gt;o&lt;Esc&gt;oreturn 0;&lt;Esc&gt;o}&lt;Esc&gt;kki&#xA;``` &#xA;&#xA;For C:&#xA;```vim&#xA;nnoremap cp i#include&lt;stdio.h&gt;&lt;Esc&gt;o&lt;CR&gt;int main(void){&lt;Esc&gt;o&lt;Esc&gt;oreturn 0;&lt;Esc&gt;o}&lt;Esc&gt;kki&#xA;```&#xA;&#xA;For Java:&#xA;```vim&#xA;nnoremap &lt;C-j&gt; iclass &lt;ESC&gt;&#34;%pxxxxxa {&lt;ESC&gt;opublic static void main(String args[]){&lt;Esc&gt;o&lt;Esc&gt;o}&lt;Esc&gt;o}&lt;Esc&gt;kki&lt;Tab&gt;&lt;Tab&gt;&#xA;```&#xA;&#xA;Now, you get the idea of creating such macros for your own programming language. Python doesn&#39;t really need boilerplate code, and even if it has some, it&#39;s quite small compared to C/C++/Java/etc. 
You can include those snippets in the key map. Let me explain the map to you.&#xA;&#xA;Let&#39;s look at the Java example. `nnoremap` is the map command with attributes like non-recursive behavior and the mode of mapping: in this case, `n` stands for a normal mode map, i.e. the map will get triggered in normal mode, and `nore` stands for the non-recursive behavior of the map. For further reading, you can check my [Vim-keymapping Guide](https://mr-destructive.github.io/techstructive-blog/vim/2021/06/14/Vim-Keymapping.html) article. The map itself has a left and a right part, kind of a mapping of the key to the command to execute. You can use any key to trigger it; I have used CTRL+J or `&lt;C-j&gt;` to activate the map. The command is quite long, but let&#39;s break it down into smaller chunks.&#xA;&#xA;- Starting with `i`, we enter insert mode, type in `class `, and hit the `&lt;ESC&gt;` key. Next comes a special trick to deal with Java, as the class name should be the name of the file itself. We use the `%` register to paste the current file&#39;s entire name (with extension), so we have to remove the `.java` part; that is why you see five x&#39;s, which delete those characters, after which we press `a` to get into insert mode just ahead of the cursor. We then insert `{` for the class and again hit escape to go back to normal mode.&#xA;- We type `o` to go into insert mode on the next line, type in `public static void main(String args[])`, and repeat the same pattern of escaping and moving to the next line. &#xA;- Finally, we hit `kk` to move up twice, enter insert mode with `i`, and hit tab twice to get the indentation correct. &#xA;&#xA;So, you can add more as per your preference of pre-defined constants, functions, and the rest of the stuff involved in your language of choice.&#xA;&#xA;&#xA;## Running Code by a Key-binding&#xA;&#xA;With the boilerplate code taken care of, we can now move on to building and running the code file in Vim. 
We can add some more custom key bindings to auto compile and run the code within Vim. &#xA;&#xA;We can compile the code with the compilers or interpreters for the languages from Vim using `!`, which executes commands in the terminal, and pass in `%`, i.e. the current file name, with the output file as `%:r`, i.e. the current file name without the extension, in the case of C/C++/Java. &#xA;&#xA;**Compile -&gt; `c++ hello.cpp -o hello`**&#xA;&#xA;**Run -&gt; `%:r` for Windows**&#xA;&#xA;and&#xA;&#xA;**`./%:r` for Linux/macOS**&#xA;&#xA;We have chained those two commands, i.e. compile and run, using `&amp;&amp;` to make things more clear and compact. &#xA;For C++&#xA;```vim&#xA;nnoremap cpp :!c++ % -o %:r &amp;&amp; %:r&lt;CR&gt;&#xA;```&#xA;&#xA;For C&#xA;```vim&#xA;nnoremap c, :!gcc % -o %:r &amp;&amp; %:r&lt;CR&gt;&#xA;```&#xA;&#xA;For Java&#xA;```vim&#xA;nnoremap ,j :!javac % &amp;&amp; java %:r&lt;CR&gt;&#xA;```&#xA;&#xA;For Python&#xA;```vim&#xA;nnoremap py :!python %&lt;CR&gt;&#xA;```&#xA;&#xA;Again, you can apply this to any programming language you use. Also kindly note that there could be some differences in compiling the code on Linux and Windows, so just be careful about which command you use on which Operating System.&#xA;&#xA;![vimjv.gif](https://cdn.hashnode.com/res/hashnode/image/upload/v1631540728028/2puTZUXwK.gif)&#xA;&#xA;The above gif demonstrates the usage of those commands and key bindings to make the workflow of building and running code much easier and smoother.&#xA;&#xA;&#xA;&#xA;## Opening Terminal to compile and run code&#xA;&#xA;We can also use the terminal to compile and run code if you wish to do it this way. Vim is quite a terminal-friendly editor; you can open any terminal from Vim.&#xA;&#xA;You can enter `:term` to open the default terminal in a horizontal split. You can explicitly enter the terminal/shell name to open it, like `:term bash` to open bash, `:term powershell` to open PowerShell, `:term zsh` to open zsh, and so on. 
This is quite a great utility that Vim provides. You can switch between windows using `&lt;C-w&gt;` or CTRL+W **twice**, or use the HJKL keys along with `&lt;C-w&gt;` to move with ease. More on Vim and Integrated Terminals in [this article](https://mr-destructive.github.io/techstructive-blog/vim/2021/06/29/Vim-Terminal.html).&#xA;&#xA;You can now explore the terminal world yourself and execute commands from there. &#xA;&#xA;## Plugins&#xA;&#xA;When it comes to Vim, there are a ton of plugins made for everything you can imagine and couldn&#39;t imagine. We can use plugin managers to manage those plugins for us, or manage them manually ourselves. You can go with one of the three well-known plugin managers: `Vim-Plug`, `Vundle` or `Pathogen`. &#xA;&#xA;Using those plugin managers, we can install plugins; some of the useful ones are:&#xA; &#xA;### NERDTree&#xA;&#xA;NERDTree is a great plugin for beginners and also for people doing competitive programming, as it saves a bit of time as well. You can install the NERDTree plugin as described in its GitHub docs. You can read about configuring the NERDTree plugin in detail in [this article](https://mr-destructive.github.io/techstructive-blog/vim/2021/08/31/Vim-NERDTree.html). This plugin gives you a graphical representation of the files and folder structure in the current directory. You can extend its capabilities to auto-refresh and auto open/close using some custom configuration as provided in the above-mentioned article. &#xA;&#xA;You basically open the NERDTree with the command `:NERDTree` after installing and configuring it. You can make key-bindings and shortcuts to use it as you like and prefer.&#xA; &#xA;### Autocompletion and LSP&#xA;&#xA;Autocompletion is quite important when it comes to competitive programming, as you have to save time on the **stuff you know** and let the application process it for you. We do have some plugins for auto-completion and also LSPs for this specific task. 
An LSP is quite great and provides much more accurate predictions when it comes to autocompletion. You can check out the [unofficial LSP plugin for vim](https://github.com/prabirshrestha/vim-lsp) for more information. &#xA;&#xA;Normal plugins like [AutoComplPop](https://github.com/vim-scripts/AutoComplPop) are also decent and provide a great beginner experience.&#xA;&#xA;Not many plugins are required for competitive programming; just some autocompletion and a nice interface for the file system will work for a basic set-up. But that is not the end; you can extend it as per your needs and skills. Vim is quite epic in terms of the customization it can offer.&#xA;&#xA;![vimcpt.gif](https://cdn.hashnode.com/res/hashnode/image/upload/v1631768130787/ccDyyJ45y.gif)&#xA;&#xA;Similarly, for C++, I have demonstrated the use of key mappings and also shown the usage of the terminal inside of Vim; it is quite a flawless experience to integrate and use the terminal as per choice and preference.&#xA;&#xA;## Conclusion&#xA;&#xA;So, from this article, we were able to set up Vim for competitive programming by adding some custom key mappings and plugins for different languages. This was not a complete guide; I could not include every language&#39;s configuration here. Surely you can ask me anytime in the comments, on GitHub, on Twitter, or just mail me, and I can try to configure a programming language on Vim. Thanks for reading. &#xA;&#xA;Happy Viming and Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>Filter and Find an Issue on GitHub</title>
      <link>https://www.meetgor.com/posts/find-filter-github-issues</link>
      <description>Introduction Are you stuck in finding an open-source project to contribute to? We will see how you can pick up an issue on GitHub appropriate as per your prefer</description>
      <pubDate>Thu, 09 Sep 2021 00:00:00 UTC</pubDate>
      <content>## Introduction&#xA;&#xA;Are you stuck in finding an open-source project to contribute to? &#xA;&#xA;We will see how you can pick up a GitHub issue that matches your preferences of language, labels, and complexity, and thus find a community or a project to work with and continue contributing to.&#xA;&#xA;This process might not be the most efficient, but it is quite helpful for beginners or people getting started with contributing to Open Source.&#xA; &#xA;## Understand the search bar&#xA;&#xA;I assume you have your GitHub account already created. If not, go ahead to [Github](https://github.com/join) and create one. On the Home page, you can easily navigate to the `Issues` tab, and you will see something like this:&#xA;&#xA;![Issues tab](https://cdn.hashnode.com/res/hashnode/image/upload/v1631190578909/UBpq3rb0H.png)&#xA;&#xA;Now, you won&#39;t find any issues if you haven&#39;t created any. But if you look at the search bar, you will find the reason why the list is empty, or why it only shows the issues that you have created. You will see that the search bar contains a filter called `author:Username`, which shows only the issues created by you. You definitely don&#39;t want this, as you want to search and find issues by other people/communities. So, simply remove the text `author:Username` from the search bar. Keep the rest as it is for now. If you press enter after removing the author filter, you will see all the issues on GitHub. &#xA;&#xA;![Issues removed author](https://cdn.hashnode.com/res/hashnode/image/upload/v1631185853484/e0PyTbgip.png)&#xA;&#xA;There will be a ton of them, very random in terms of programming languages, frameworks, projects, difficulty, type, etc. 
They are basically the issues created most recently on GitHub.&#xA; &#xA;In the next section, we will see how to filter those issues as per the programming languages/tools you might like to contribute to.&#xA;&#xA;## Add languages&#xA;&#xA;We can filter the issues with `language:name`, which will match all the issues whose repositories have that language in their codebase. &#xA;&#xA;For Example:&#xA;&#xA;![Issues language filter](https://cdn.hashnode.com/res/hashnode/image/upload/v1631190679194/8Od1tsdKp.png)&#xA;&#xA;Here, I have filtered the issues whose language is `python`; you can use any language/tool you want and would love to find some interesting projects to contribute to and learn from.&#xA;&#xA;If you want to search by multiple programming languages, you can separate the names of those programming languages with a comma `,`.&#xA;&#xA;You can also separate programming languages with spaces, enclosing all of them in double quotes `&#34;&#34;`.&#xA;&#xA;For Example:&#xA;&#xA;Let&#39;s search for issues with C, C++, and Java as their programming languages; we can use `language:c,cpp,java` or `language:&#34;c cpp java&#34;`.&#xA;&#xA;The above filter will give all the issues whose language is C, C++, or Java.&#xA; &#xA;You can find more filter options in the [GitHub docs](https://docs.github.com/en/github/searching-for-information-on-github/searching-on-github/searching-issues-and-pull-requests).&#xA;&#xA;## Add labels&#xA;&#xA;You can find issues as per the labels marked on them; many issues carry a label to improve their visibility and provide meta-information about the issue.&#xA;&#xA;GitHub has already created some labels for common scenarios in projects:  &#xA;&#xA;1. `bug`&#xA;2. `documentation`&#xA;3. `duplicate`&#xA;4. `enhancement`&#xA;5. `good first issue`&#xA;6. `help wanted`&#xA;7. `invalid`&#xA;8. `question`&#xA;9. 
`wontfix`&#xA;&#xA;We can even create our own labels by providing a label name and a description. &#xA; &#xA;To search by label, you can use `label:name of the label`. You can use any of the above 9 labels or any other label name that you think is popular. &#xA;&#xA;You have to use double quotes (`&#34;&#34;`) for labels with multiple words, like `good first issue` or `help wanted`.&#xA;&#xA;For example:&#xA;&#xA;If you search for `label:&#34;good first issue&#34;`, you will get all of the issues (newest first) which have the label `good first issue` tagged on them. &#xA;&#xA;Similarly, you can add multiple comma-separated labels as well: `label:bug,&#34;good first issue&#34;` will search for either `bug`, `good first issue`, or both. &#xA;&#xA;![Issues label](https://cdn.hashnode.com/res/hashnode/image/upload/v1631190841185/vrYTLoaaNu.png)&#xA;&#xA;## More Sorting Options&#xA;&#xA;At the rightmost part of the search bar is the Sort button; click on it and you will find options like `newest`, `oldest`, `least commented`, `recently updated`, and so on. If you click on any of them, you will see the change reflected in the list of issues as well as in the search bar. &#xA;&#xA;![Issues sort](https://cdn.hashnode.com/res/hashnode/image/upload/v1631189621396/jO58HkYxH.png)&#xA;&#xA;**After this, the stage is yours: you can look at any issue and understand its objective, then ask yourself, can you solve this issue? If yes, read the contribution guidelines, and the rest is sheer skill: git, programming, documentation, etc.**&#xA;&#xA;## Conclusion&#xA;&#xA;Now you can go ahead, start applying the filters on issues, and make some contributions to Open Source on GitHub. We covered some methods and tricks to find and filter issues on GitHub based on the programming languages/tools and the labels attached to them.  
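&#xA;&#xA;Putting the pieces together, here are a couple of example queries you could type into that same search bar (the particular languages and labels are just illustrations):&#xA;&#xA;```&#xA;is:issue is:open language:python label:&#34;good first issue&#34;&#xA;is:issue is:open language:&#34;c cpp java&#34; label:bug,&#34;help wanted&#34; sort:updated-desc&#xA;```&#xA;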
&#xA;&#xA;This technique can be good for beginners as well as people who want to find quick issues to solve. Feel free to explore and try out different filters and find the issue you are confident to work on. Good Luck!&#xA;Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>Why use Vim ?</title>
      <link>https://www.meetgor.com/posts/why-use-vim</link>
      <description>Introduction So, Why would one use Vim? As Vim being the most complicated Text editor when it comes to **closing it**!! Also, people think it&#39;s not worth the ti</description>
      <pubDate>Sun, 05 Sep 2021 00:00:00 UTC</pubDate>
      <content>## Introduction&#xA;&#xA;So, why would one use Vim? Vim is famously the most complicated text editor when it comes to **closing it**!! People also think it&#39;s not worth the time, or that it&#39;s just not their type of text editor. Well, that&#39;s fine; I am not forcing anyone to use Vim, just laying out the reasons why people use it. And it&#39;s not to show off that I use Vim, just to appreciate the power of Vim and its community; it&#39;s just amazing!!&#xA;&#xA;Vim is not a text editor for just anyone; it can be daunting initially, but the more you use it, the more you love it. There is no such thing as wasted time in learning Vim; if you think learning something is a waste of time, just don&#39;t learn it! To understand something and get good at it, you need to dedicate time and effort; there&#39;s no shortcut. When it comes to shortcuts, though, Vim is the fairyland of them: just use one or create your own. That&#39;s how cool Vim can be when used properly.&#xA;&#xA;Some of the most important and crucial reasons why people use Vim are:&#xA;&#xA;## Vim is Speed!&#xA;&#xA;Vim can be opened from the terminal, and that in itself says that it is lightweight; it uses fewer system resources and hence causes minimal lag. This might not matter on high-end computers, but on low-end machines Vim is a life-saver. Certain commands feel wicked fast and intuitive: motions such as `t` or `f`, and commands like `u`, among many others, are really powerful when it comes to editing in Vim. When you don&#39;t have to think about what keys you should type, you are really getting faster in Vim. 
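&#xA;&#xA;To make that concrete, here is what a few of those keys do in normal mode (a small reference, not an exhaustive list):&#xA;&#xA;```&#xA;fx   jump forward onto the next occurrence of the character x&#xA;tx   jump forward to just before the next occurrence of x&#xA;u    undo the last change&#xA;dt)  delete from the cursor up to (but not including) the next )&#xA;```&#xA;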
&#xA;&#xA;![](https://s6.gifyu.com/images/screenrecording.gif)&#xA;&#xA;The above screencast is from my article [Vim: Set up for Python](https://mr-destructive.github.io/techstructive-blog/python/vim/2021/06/06/Vim-for-Python.html), which uses some key mappings to run Python code directly from Vim itself by executing shell commands from Vim&#39;s command mode. A simple map can save a lot of time.&#xA;&#xA;```&#xA;nnoremap py :!python %&lt;CR&gt;&#xA;```&#xA;&#xA;&#xA;## Number of Commands/shortcuts&#xA;&#xA;When it comes to key-bindings and shortcuts, there is no shortage of them. You can spend days, weeks, months, even years, and still there will be something more to learn in Vim. That can be very intimidating to some people, but that&#39;s how life is. **You cannot get perfect at anything, but surely better is the word to focus on.** There is a good little cheatsheet at [devhints](https://devhints.io/vim) for beginners to get started quickly. Also, you can follow the YouTuber [ThePrimeagen](https://www.youtube.com/channel/UC8ENHE5xdFSwx71u3fDH5Xw), who focuses on switching to and using Vim. Vim has by far the most robust set of commands and key-bindings compared to other text editors. Here are some of my daily-used [editing commands](https://mr-destructive.github.io/techstructive-blog/vim/2021/07/18/Vim-Enhancing-Editing-speed.html) and [navigation commands](https://mr-destructive.github.io/techstructive-blog/vim/2021/06/26/Vim-Enhancing-Movement-speed.html).&#xA;&#xA;## Power to use it as an IDE&#xA;&#xA;Vim provides some awesome features such as [terminal integration](https://mr-destructive.github.io/techstructive-blog/vim/2021/06/29/Vim-Terminal.html), [Key-mappings](https://mr-destructive.github.io/techstructive-blog/vim/2021/06/14/Vim-Keymapping.html), [Window Splits](https://mr-destructive.github.io/techstructive-blog/vim/2021/08/06/Vim-Window-Splits.html), etc., to take text-editing and the developer environment to the next level. 
You can literally use Vim as an IDE for many programming languages like C/C++, Python, Java, JavaScript, Go, etc. Though it can&#39;t replicate every feature of a full IDE, it is indeed powerful enough to build a custom IDE-like experience. It might require some time and research to configure it as a full-blown IDE, but it can be a very useful and self-motivating experience. There are definitely many articles and tutorials out there to guide you in configuring the editor into an IDE. People use Vim not only for the features it has but also for how lightweight it is compared to IDEs such as Visual Studio, Android Studio, Eclipse, IntelliJ IDEA, etc.&#xA;&#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1630827706766/cdN6GdnhH.png)&#xA;&#xA;From the above screenshot, we can see that I have configured Vim to look almost like VSCode. OK, just the look of it is similar, but we do have the file manager as [NERDTree](https://mr-destructive.github.io/techstructive-blog/vim/2021/08/31/Vim-NERDTree.html), an integrated terminal, and the buffer opened as the editor. You can even add more functionality like autocomplete and keymaps to compile/run/build the code. There are infinite possibilities when it comes to customization in Vim and making it a full-blown IDE. Oh, and by the way, my Vim look changes from time to time XD&#xA;&#xA;## Available almost everywhere&#xA;&#xA;As said, Vim can be opened via the terminal, and hence it is quite viable when it comes to accessibility. It comes pre-installed in almost every Linux distribution, macOS, FreeBSD, etc. Hence it is quite reliable when it comes to compatibility and cross-platform support. You can start editing in any environment with Vim, so that becomes a good reason to learn at least basic editing and opening, closing, etc. Vim being minimal and lightweight, it becomes easy to work with on servers, in production environments, etc. 
It also isn&#39;t that complicated once you wrap your head around it. Being available in almost every operating system counts for something: it makes Vim one of the most dependable editors out there. Even after 30 years (yes, Vim was released in 1991), it is still maintained by its creator [Bram Moolenaar](https://en.wikipedia.org/wiki/Bram_Moolenaar). That is mind-boggling; Vim is truly a vintage text editor that has quietly but widely dominated in terms of its reach in computing.   &#xA;&#xA;## The Level of Customization/Configuration &#xA;&#xA;It&#39;s a double-edged sword for some people, as it can be daunting to customize from scratch, but it can also be a rewarding and pleasurable experience to build a personalized setup. With the number of plugins, custom vimrc options, color schemes, etc., Vim lends itself to personalization. It is **Open Source**, so we also have distributions of it such as [SpaceVim](https://github.com/SpaceVim/SpaceVim), [SPF-13](https://github.com/spf13/spf13-vim), etc. Some more distributions can be found [here](https://dev.to/ajeebkp23/some-popular-vim-distributions-a68). Some distributions come pre-configured with basic stuff and are ready to use, so they are quite beginner-friendly and introduce newcomers to Vim in a better way. Neovim, an evolution of Vim that can be configured in Lua, is even more powerful and provides even more customization while keeping the basic key-bindings in mind. Editing the vimrc is quite an amazing and intuitive experience, as it allows you to add functionality or enhance the existing features of Vim. &#xA;&#xA;&#xA;## Conclusion&#xA;&#xA;&gt; Text Editors don&#39;t matter, what matters are the problem-solving skills&#xA;&#xA;Vim is a simple and minimal text editor; everyone can use it, but not everyone finds the need to use it, and that&#39;s fine. It might be ideal for me to use Vim rather than other editors, but it might not be for you. Still, there is nothing to lose in knowing some basics of Vim. It might be useful somewhere, you never know! 
And if your favorite editor is not available, Vim might prove to be a great quick solution. Thanks for reading. Happy Coding :)&#xA;</content>
      <type></type>
    </item>
    <item>
      <title>Vim: NERDTree</title>
      <link>https://www.meetgor.com/posts/vim-nerdtree</link>
      <description>Introduction is a great plugin in Vim for managing and navigating Files. Some might prefer fzf, telescope, and other plugins for navigation, NERDTree is not a b</description>
      <pubDate>Tue, 31 Aug 2021 00:00:00 UTC</pubDate>
      <content>## Introduction&#xA;&#xA;[NERDTree](https://github.com/preservim/nerdtree) is a great plugin in Vim for managing and navigating files. Some might prefer fzf, telescope, and other plugins for navigation, but NERDTree is not a bad option to begin with in Vim. NERDTree even lets you create/delete/move files and folders flawlessly without much effort, so it is a very viable beginner&#39;s plugin.&#xA;&#xA;## Installing NERDTree Plugin&#xA;&#xA;So, let&#39;s start with the installation of the NERDTree plugin; it&#39;s quite straightforward and simple.&#xA;&#xA;You should have a plugin manager for installing this plugin. It&#39;s not mandatory to have one, but it makes things much easier to work with. You can choose between `Vundle`, `Vim-Plug`, and `Pathogen`, to name a few. It does not really matter which one you use; just stick to one, and if you get stuck somewhere, switching is not much trouble, as the plugin managers are quite similar to each other.&#xA; &#xA;#### Vundle&#xA;&#xA;To install a plugin using Vundle, you need to configure Vundle first if you have not already done it. You can find the installation docs [here](https://github.com/VundleVim/Vundle.vim). &#xA;After Vundle has been configured in your vimrc, you can simply add `Plugin &#39;preservim/nerdtree&#39;` between the Vundle begin and end calls, like:&#xA;&#xA;```vim&#xA;call vundle#begin()&#xA;  Plugin &#39;preservim/nerdtree&#39;&#xA;call vundle#end()&#xA;``` &#xA;&#xA;All of your other plugins will go in between those two statements, i.e. `call vundle#begin()` and `call vundle#end()`. &#xA;After saving and sourcing the vimrc file, you need to install the plugin using the command `:PluginInstall`, and there you are, all done!&#xA;&#xA;&#xA;#### Vim-Plug&#xA;&#xA;To install a plugin using the Vim-Plug manager, you need to configure Vim-Plug if you have not already configured it in your vimrc. 
You can find the installation docs at the GitHub README of [Vim-Plug](https://github.com/junegunn/vim-plug).&#xA;After Vim-Plug has been configured in your vimrc you can simply add `Plug &#39;preservim/nerdtree&#39;` between the call plug begin and end statements. Just like:&#xA;&#xA;```vim&#xA;call plug#begin()&#xA;  Plug &#39;preservim/nerdtree&#39;&#xA;call plug#end()&#xA;``` &#xA;&#xA;All of your other Plugins will go in between those two statements, i.e. `call plug#begin()` and `call plug#end()`. &#xA;After saving and sourcing your vimrc file, you need to now install those plugins using the command `:PlugInstall`, and that is it!&#xA;&#xA;#### Pathogen&#xA;&#xA;To install any plugin using Pathogen plugin manager, you need to configure Pathogen in your vimrc if you have not done it already. You can find the installation docs on [Pathogen.vim](https://github.com/tpope/vim-pathogen).&#xA;After Pathogen has been configured in your vimrc, you can clone the git repository of that plugin into your local machine and then activate it using Pathogen. &#xA;&#xA;```&#xA;git clone https://github.com/preservim/nerdtree.git ~/.vim/bundle/nerdtree&#xA;```&#xA;&#xA;After cloning the repository, you can add this to your vimrc where you have configured it. 
It&#39;s a kind of DIY manager in terms of managing the plugin folders yourself.&#xA;&#xA;```vim&#xA;call pathogen#infect()&#xA;syntax on&#xA;filetype plugin indent on&#xA;```&#xA;&#xA;After this, you need to run a command to generate the docs and help tags for the plugin:&#xA;&#xA;`:helptags ~/.vim/bundle/nerdtree/doc/` or Pathogen&#39;s `:Helptags`&#xA;&#xA;And there you are, done with the plugin installed.&#xA;&#xA;There are other plugin managers as well, but these three are the most widely supported ones and they work out of the box; surely explore for yourself and find the perfect one for you.&#xA;&#xA;&#xA;## Activating and Using NERDTree&#xA;&#xA;Now we actually need to use NERDTree. For that, type `:NERDTree` in any folder on your local machine, and a window should open as a vertical split to the left, just like this:&#xA;![NERDTree activate](https://i.imgur.com/KU2vMxO.png)&#xA;&#xA;After this, you can use `&lt;C-w&gt;&lt;C-w&gt;`, that is **CTRL+W twice**, to switch back and forth between windows. You can also use **CTRL+W and HJKL** to move between windows directionally. For further guidance and key-bindings for window splits, you can read my article [here](https://mr-destructive.github.io/techstructive-blog/vim/2021/08/06/Vim-Window-Splits.html). &#xA;&#xA;Now, you can navigate to the files/folders using HJKL or the arrow keys (not preferred). You can even prefix HJKL with a count to jump and hop around large codebases; this integrates really well with base Vim key-bindings.&#xA;You can quit the NERDTree window by just pressing `q` or typing `:q`; the former is definitely more efficient. You can open/collapse folders and open a file in the current buffer using the Enter key. But hey, that&#39;s just the basics, there is much more to it!&#xA;&#xA;#### Open File in Splits&#xA;&#xA;You can open a file in a horizontal split using the key `i` on the file. 
You can open a file in a vertical split using the `s` key while the NERDTree highlight is on the file you would like to open. This is a really great feature to have while opening multiple files and exploring file structures.&#xA;&#xA;&#xA;#### Managing Files/Folders using NERDTree&#xA;&#xA;You can manage files from the NERDTree window by pressing `m` on the node (file or folder) you want to operate on. If you want to create a file in the root folder, go to the topmost location and press `m` inside the NERDTree window. When you press `m`, you will see a menu with different kinds of options, namely:&#xA;&#xA;1. Add a child node. (`a`)&#xA;&#xA;&#x9;We can create a file or a folder at the currently highlighted location; type the name and press `Enter` (end the name with a `/` to create a folder). &#xA;&#xA;2. Move the current node. (`m`)&#xA;&#xA;&#x9;We can move (or rename) the currently highlighted file/folder to any other directory using the file manager itself. &#xA;&#xA;3. Delete the current node. (`d`)&#xA;&#xA;&#x9;We can delete the file/folder which is currently selected in the NERDTree menu.&#xA;&#xA;4. Open the current node in the system text editor. (`o`)&#xA;&#xA;&#x9;We can open the file in the system-default text editor using the key `o`.&#xA;&#xA;5. Copy the current node. (`c`)&#xA;&#xA;&#x9;We can copy the currently selected file/folder using the key `c`.&#xA;&#xA;6. Copy the path to the clipboard. (`p`)&#xA;&#xA;&#x9;We can copy the path of the current node to the clipboard.&#xA;&#xA;7. List the current node. (`l`)&#xA;&#xA;&#x9;We can list the file/folder, i.e. display its properties: the read/write/execute permissions, date modified and created, etc.&#xA;&#xA;8. Run system command in this folder. 
(`s`)&#xA;&#xA;&#x9;We can run system/shell commands using the key `s`; on Windows they run in Command Prompt, and on Linux and macOS in the terminal shell.&#xA;&#xA;You can quit that menu by pressing `Esc`. &#xA;&#xA;&#xA;Here is a screencast of me demonstrating the NERDTree plugin features and the edit options.&#xA;![vimnerd.gif](https://cdn.hashnode.com/res/hashnode/image/upload/v1630423628366/zZE7R5aL7.gif)&#xA;&#xA;These commands are self-explanatory and beginner-friendly, and they become even easier with some custom key-bindings, as we&#39;ll see in the next sections.&#xA;&#xA;&#xA;&#xA;## Macros to open file tree&#xA;&#xA;You can make a key-binding to open the NERDTree: &#xA;&#xA;```vim&#xA;nnoremap &lt;C-n&gt; :NERDTree&lt;CR&gt;&#xA;```&#xA;You can map anything instead of `&lt;C-n&gt;`; most people use `&lt;leader&gt;`, but `CTRL+N` is easy for me, it&#39;s just personal preference.&#xA;&#xA;If you do not want to open NERDTree again and again, you can have it open automatically using an autocommand in your vimrc.&#xA;&#xA;```vim&#xA;autocmd VimEnter * NERDTree&#xA;```&#xA;This will open NERDTree automatically for you when you open Vim. Yeah, I get it, it&#39;s not needed every time, but most of the time a developer is switching between files. &#xA;&#xA;## Enabling Autoreload&#xA;&#xA;We can auto-reload the NERDTree window when there is a change in the file structure, i.e. a file/folder is deleted/created/moved, etc. We again need to set this in our vimrc:&#xA;&#xA;```vim&#xA;autocmd BufEnter NERD_tree_* | execute &#39;normal R&#39;&#xA;au CursorHold * if exists(&#34;t:NerdTreeBufName&#34;) | call &lt;SNR&gt;15_refreshRoot() | endif&#xA;```&#xA;This will reload the NERDTree when the cursor is in the NERDTree&#39;s window (note that the `&lt;SNR&gt;15` script number can differ between set-ups). 
This can be really time-saving and a nice quick configuration to enhance the efficiency of your text-editing.&#xA; &#xA;&#xA;## Enabling Autorefresh for change in the current directory&#xA;&#xA;We can also reload the NERDTree when we change the directory. The above-mentioned command is not sufficient for that; we have to add another bit of configuration.&#xA;&#xA;```vim&#xA;augroup DIRCHANGE&#xA;    au!&#xA;    autocmd DirChanged global :NERDTreeCWD&#xA;augroup END&#xA;```&#xA;&#xA;By adding this to your vimrc, the NERDTree will refresh every time you enter or change the current directory. This is another great time-saver, reloading the window on every change of path; if you are looking for something in a huge code-base, it works like a charm.&#xA;&#xA;&#xA;## Auto close &#xA;&#xA;Normally you need to close the NERDTree manually each time you want to exit out of it, but this can also be automated, just for the sake of simplicity and effectiveness in **QUITTING VIM**.&#xA;&#xA;```vim&#xA;autocmd bufenter * if (winnr(&#34;$&#34;) == 1 &amp;&amp; exists(&#34;b:NERDTree&#34;) &amp;&amp; b:NERDTree.isTabTree()) | q | endif&#xA;```&#xA;This will close the NERDTree window if it is the only open window. 
That can be frustrating at moments, but the majority of the time this is a great addon indeed.&#xA;&#xA;## Packing it together&#xA;&#xA;So, we have learned the basics of using and modifying NERDTree according to our needs. To put it all together, you can use this snippet directly in your vimrc and enjoy the flawless experience.&#xA;&#xA;```vim&#xA;&#34; Open nerdtree window on opening Vim&#xA;autocmd VimEnter * NERDTree&#xA;&#xA;&#34; Refresh the current folder if any changes&#xA;autocmd BufEnter NERD_tree_* | execute &#39;normal R&#39;&#xA;au CursorHold * if exists(&#34;t:NerdTreeBufName&#34;) | call &lt;SNR&gt;15_refreshRoot() | endif&#xA;&#xA;&#34;Reload the window if directory is changed&#xA;augroup DIRCHANGE&#xA;    au!&#xA;    autocmd DirChanged global :NERDTreeCWD&#xA;augroup END&#xA;&#xA;&#34;Close nerdtree automatically if it is the only window open&#xA;autocmd bufenter * if (winnr(&#34;$&#34;) == 1 &amp;&amp; exists(&#34;b:NERDTree&#34;) &amp;&amp; b:NERDTree.isTabTree()) | q | endif&#xA;```&#xA;&#xA;&#xA;## Conclusion:&#xA;&#xA;So, we were able to make Vim a better place to work in: easier navigation across files and folders, the NERDTree plugin configured, and the look and functionality of Vim customized to our needs. &#xA;NERDTree is a great plugin no matter how you use it. It makes Vim more viable as a text editor for daily use, and in an efficient and clean way. Surely there are other plugins that are more powerful and blazing fast, but NERDTree also provides a good UI, a graphical representation of the file structure that enhances its usage.&#xA;That is what Vim is about: learning something new every day that changes the way you edit. Thank you for reading. 
Happy Viming and Coding :)&#xA;&#xA;### References:&#xA;&#xA;- [NERDTree - docs](https://github.com/preservim/nerdtree)&#xA;- [Refresh NERDTree](https://stackoverflow.com/questions/8793489/nerdtree-reload-new-files/8794468)&#xA;- [Reload NERDTree on Directory change](https://vi.stackexchange.com/questions/31050/how-can-i-make-nerdtree-update-root-to-the-current-directory-when-i-change-direc)&#xA;- [Open NERDTree in Vim by default](https://stackoverflow.com/questions/1447334/how-to-add-nerdtree-to-your-vimrc)&#xA;- [Close NERDTree automatically](https://stackoverflow.com/questions/2066590/automatically-quit-vim-if-nerdtree-is-last-and-only-buffer)</content>
      <type></type>
    </item>
    <item>
      <title>Dev Quotes: A platform for developers to quote and get inspired - Auth0 x Hashnode Hackathon</title>
      <link>https://www.meetgor.com/posts/devquotes-platform</link>
      <description>Introduction No one can understand the joy in finishing a personal project, except the dreamer It was a while,since I have posted an article here, as I was busy</description>
      <pubDate>Fri, 27 Aug 2021 00:00:00 UTC</pubDate>
      <content>## Introduction&#xA;&#xA;&gt; No one can understand the joy in finishing a personal project, except the dreamer &#xA;&#xA;It has been a while since I posted an article here, as I was busy with a project for a hackathon.&#xA;&#xA;Hello, world! I am Meet, a student and a self-taught web developer. I like to make and break stuff, especially when it comes to programming and Linux. I like shell scripting and learning different languages at once, and I love learning about Vim and Linux every day.&#xA;&#xA;Every time I start a project, something else comes along and distracts me, be it another programming language or technology. That leads to creating new projects and leaving the old ones behind unfinished; I know most developers face this. But this time, thanks to the Auth0 x Hashnode Hackathon, I was able to create an almost finished project within about 10 days. Having a deadline and competition creates the mindset to finish a project on time; that&#39;s my first takeaway from this Hackathon. Oh, and this is my first Hackathon by the way, and it has been amazing so far.  &#xA;&#xA;**Applying a framework to do something you desire and then seeing everything work smoothly (after fixing 100s of bugs) is such a great feeling that no one can understand it except the person who dreamt it.**&#xA;&#xA;I&#39;d like to share my project, a web application built for the Auth0 x Hashnode Hackathon. Here it goes.&#xA;&#xA;## What is Dev Quotes?&#xA;&#xA;Dev Quotes is a web app designed for publishing and viewing quotes related to programming, the developer mindset, and all the technicalities involved in a developer&#39;s life. It&#39;s basically a medium to express the life of developers and get inspired by others.  
Here it is: [devquotes](https://devquotess.herokuapp.com/)&#xA;&#xA;#### Dark Mode:&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1630074051548/TQz9Koh7l.png)&#xA;&#xA;#### Light Mode:&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1630078314355/VhfLrcjJa.png)&#xA;&#xA;&#xA;## Why Dev Quotes?&#xA;&gt; Developers are not the people who only understand how to write code but they&#39;re also the people who can make the code understandable&#xA;&#xA;As a developer, there are often times when you have no motivation left, but you never know, you might be just a few lines of code away from finishing another project or fixing a bug. For that, we need some inspiration, a push to break the barrier. I am not saying it&#39;s just for developers; it is designed for developers, but everyone is welcome to understand developers&#39; lives and their struggles. &#xA;&#xA;I also felt the need to give back a love-crafted web app to the ever-wonderful and supportive dev community. It&#39;s a small application, but still, I would like to give something instead of nothing at all. 
Start small, grow big, hopefully :)&#xA;&#xA;## Features&#xA;&#xA;Some of the main features of the web application are as follows:&#xA;&#xA;- **Write/Edit/Delete quotes if logged in.**&#xA;&#xA;- **Like / Unlike a quote.**&#xA;&#xA;- **See all of your quotes.**&#xA;&#xA;- **Randomized quotes on the homepage.**&#xA;&#xA;- **Dark/Light theme based on the browser&#39;s preference and local storage.**&#xA; &#xA;- **The app is mobile responsive as well, though the navbar is a bit wonky with the light/dark mode switch toggle, which will be taken care of soon.**&#xA;&#xA;&#xA;&#xA;![dqmob.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1630084573444/YEj38dUTD_.png)&#xA;&#xA;&#xA;&#xA;## In the making&#xA;&#xA;&gt; Have the curiosity to learn, rest is automated&#xA;&#xA;The project was made with the help of various inspirational articles and videos about building a web application. The idea struck me when I was thinking about people who don&#39;t feel inspired as developers. There is no reason to remain sad about being a developer and keep dealing with imposter syndrome. Every developer has their own perspective on programming, and there are infinite opportunities if you are curious enough. I started making the project and got so into it that I was literally dreaming about it: I saw parts of the webpage in my dreams, with me building them, and that genuinely powered me to complete it. &#xA;&#xA;The project roughly started on 19th August and was mostly done by 26th August, that is, the actual webpage and its core functionalities. The 27th brought some styling and extra additions such as the About section and the Footer. That was the most productive week I have ever had in my programming journey. 
That was fun as heck.&#xA;&#xA;## Tech Stack&#xA;&#xA;The tech stack involved in this app is:&#xA;- `Django`&#xA;- `PostgreSQL`&#xA;- `HTML/CSS/JS`&#xA;- `Bootstrap`&#xA;&#xA;I have not used any front-end frameworks just because I never found the need for them here. I had experience with Django for just 2 months, and I am surprised I was able to make it. And of course, I have used Auth0 for authentication in my web application.&#xA;&#xA;### Auth0 integration for Authentication&#xA;&#xA;I must tell you, using Auth0 was a flawless addition to my app, as I had to do almost nothing: just drop the Auth0 application&#39;s credentials into my Django project by following a  [well-documented guide](https://auth0.com/docs/quickstarts)  that exists for every major framework. Simply straightforward is the name for integrating authentication this way.&#xA;&#xA;#### How I used Auth0 with Django&#xA;&#xA;I&#39;ve used template tags such as if blocks to verify whether the user is authenticated: &#xA;```html&#xA;&lt;div class=&#34;login-box auth0-box before&#34;&gt;&#xA;    {% if user.is_authenticated %}&#xA;        &lt;a class=&#34;btn btn-primary btn-sm tn-logout&#34; href=&#34;#&#34;&gt;Log Out&lt;/a&gt;&#xA;    {% else %}&#xA;        &lt;a class=&#34;btn btn-primary btn-sm tn-login&#34; href=&#34;#&#34;&gt;Log In&lt;/a&gt;&#xA;    {% endif %}&#xA;&lt;/div&gt;&#xA;```&#xA;&#xA;This was readily available in their documentation, though there were some adjustments to this code as per the project&#39;s requirements to fit it in place.&#xA;&#xA;I must say, integrating Auth0 is in some sense even easier than using the Django User Model, as most of the work is handled by Auth0. On our side, we simply have to create the Auth0-specific app with the credentials from the dashboard; the rest has worked flawlessly so far. 
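
For context, the credentials the quickstart has you drop into `settings.py` look roughly like this (a sketch based on the `social-auth-app-django` flavour of the Auth0 guide; the values, and the exact set of settings, are placeholders/assumptions, not taken from this project):

```python
# Placeholder Auth0 credentials for a social-auth-app-django setup.
# Replace every value with the ones from your own Auth0 dashboard.
SOCIAL_AUTH_TRAILING_SLASH = False  # Auth0 callback URLs have no trailing slash
SOCIAL_AUTH_AUTH0_DOMAIN = "YOUR_DOMAIN.auth0.com"  # placeholder tenant domain
SOCIAL_AUTH_AUTH0_KEY = "YOUR_CLIENT_ID"            # placeholder client ID
SOCIAL_AUTH_AUTH0_SECRET = "YOUR_CLIENT_SECRET"     # placeholder client secret
SOCIAL_AUTH_AUTH0_SCOPE = ["openid", "profile", "email"]
```

These live alongside the rest of your Django settings; the dashboard values are the only project-specific part.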
How sweet is that?&#xA; &#xA;### Specifications&#xA;&#xA;I won&#39;t go in-depth into the technicalities of the project but would like to address certain things. Firstly, I have mostly used class-based views for the major part; certain areas are still function-based just for the simplicity of the application, and a few of them are handled and documented by Auth0, so I just preferred that. &#xA;&#xA;Another thing is about models: I have a single simple model called `Quote`, which has an Author as a Foreign Key to the Django User Model. I would have also created multiple emojis for the like system, but I was too excited and in a rush to see the actual app, so I just kept it simple. XD&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1630060555499/246ZKnypP.png) &#xA;&#xA;The rest of the stuff like `URLs`, `Templates`, and `static files` is handled in a neatly organized way, as depicted in the below diagram.&#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1630060426600/JHMlrfOKZ.png)&#xA;&#xA;If it is not clear from that, there are 3 apps -&gt; `auth0login`, `quotes`, and `user`; here `quotes` is the most important app, as it has the models, forms, URLs, and the views linked to them. &#xA;&#xA;### Hosting&#xA;&#xA;Hosting, as you can guess, is on  [Heroku](https://www.heroku.com/) , which is quite beginner-friendly and feature-rich. I also have a free addon for the PostgreSQL database here. It&#39;s limited to 10K rows, but that&#39;s sufficient for a starter app like this in my opinion. Also, it has 500 free dyno hours, which is also sufficient for a small low-traffic app like this.  
&#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1630076036133/9ofxnM5VN.png)&#xA;&#xA;## Bugs Encountered &#xA;&gt; Love Bugs, they&#39;ll give you experience&#xA;&#xA;This is the most exciting and important part if you consider the hackathon, because it shows the level of experience a developer gains in achieving certain things, or features if you will. I faced some typical Django errors as usual, but also some unexpected things, like dark mode and light mode clashing together due to poorly written media queries in CSS.&#xA;&#xA;As usual, the start is often hard; there is no motivation whatsoever in using the admin section to test the database queries and a gibberish basic HTML page. In that process, I faced some primary key access issues and was able to understand the concept more clearly by fixing them.&#xA;&#xA;Another instance was with handling POST requests in Django, which I had failed to do before. I used forms and a hybrid of CSS and Bootstrap to style those forms, which just works brilliantly. It took some time to figure out the exact workings, but after a while, it was working like a charm. &#xA;&#xA;&#xA;## Future Updates&#xA;&#xA;As said, I would like to add more like buttons expressing different emotions. Some other features to add are:&#xA;&#xA;- Adding more emojis like claps, cheers, and others.&#xA;- Adding a profile page in the app that would display all the quotes of a particular author and their related details.  &#xA;- Adding some tags to filter out particular types of quotes.&#xA;- Improving the UI/UX a bit more to make it professional and pleasant.&#xA;&#xA;## Source Code&#xA;&#xA;&gt;Talk is cheap, show me the code - Linus Torvalds&#xA;&#xA;The source code is available on GitHub at this  [link](https://github.com/Mr-Destructive/devquotes). &#xA;It&#39;s freely open for any contribution after the hackathon (mid-September).  
Some of the files, such as environment variables, virtual environments, and cached data, are not uploaded, for security and other obvious reasons.&#xA;&#xA;Enough of technical talks, let&#39;s see the DEMO,&#xA;&#xA;&#xA;![dqss.gif](https://cdn.hashnode.com/res/hashnode/image/upload/v1630073466386/3wgnST5hc.gif)&#xA;&#xA;Silent claps.......&#xA;&#xA;It&#39;s not a great UI/UX, but it works and is not too shabby in my opinion, considering it only has base HTML and CSS with a little Bootstrap and JavaScript. But hey, a fully functional backend is what I was looking for: a full-stack app with some decent features. I hope it helps some developers stay motivated and hack into some hackathons like this.&#xA;&#xA;### References used while creating the app:&#xA;- [Codemy - John Elder Django tutorial]( https://www.youtube.com/watch?v=B40bteAMM_M&amp;list=PLCC34OHNcOtr025c1kHSPrnP18YPB-NFi)&#xA;- [Django - Documentation for Forms](https://docs.djangoproject.com/en/3.2/topics/forms/)&#xA;- [Django template randomizer shuffle](https://stackoverflow.com/questions/28837511/django-template-how-to-randomize-order-when-populating-page-with-objects)&#xA;- [Auth0 app Django integration](https://www.youtube.com/watch?v=kzN_VCFG9NM)&#xA;&#xA;&#xA;## Closing Words&#xA;&#xA;&gt; Why do developers find solutions to bugs at the stroke of sleeping? That&#39;s multithreading in our brains &#xA;&#xA;I hope you liked the project, and hopefully it will inspire developers to stay motivated and focus on their goals rather than dealing with imposter syndrome and whatnot. &#xA;&#xA;Thank you for reading and using the app. For any feedback, my Twitter handle, the comment section, GitHub issues, and LinkedIn messages are all freely open. Thanks. Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>Django Quick Setup Script</title>
      <link>https://www.meetgor.com/posts/django-setup-script</link>
      <description>Introduction Once you learn something very neatly and understand it very clearly, it feels like you are wasting time doing things over and over again. Same thin</description>
      <pubDate>Sun, 15 Aug 2021 00:00:00 UTC</pubDate>
<content>## Introduction&#xA;&#xA;Once you learn something very neatly and understand it very clearly, it feels like you are wasting time doing the same things over and over again. The same goes for setting up a Django project: you feel like you could save those typing minutes by writing a script. &#xA;&#xA;In this article, we will make two scripts: a batch script for Windows and a Bash script for Linux/macOS. We will make a virtual environment using Python and then install the libraries and dependencies; in this case, we will install Django. You can also tinker with these scripts and install other dependencies if you want, like Django REST framework, crispy forms, Markdown, Redis, etc. We will also create the Django project using a positional parameter passed to the script from the command line. &#xA;&#xA;### Python development environment&#xA;This article assumes you have a Python environment set up. If you don&#39;t, you must install Python from the  [official website](https://www.python.org/downloads/)  as per your operating system. Also, you should have pip installed and configured correctly. You can install pip by following the  [official documentation](https://pip.pypa.io/en/stable/)  for your specific operating system.&#xA;&#xA;## Steps in Django Project Setup &#xA;&#xA;So, if you are already familiar with the Django project setup, you can directly use the scripts provided in the next few sections. You can also visit  [this GitHub repository](https://github.com/Mr-Destructive/django-quick-setup-script)  if you run into any issues or errors.  &#xA;&#xA;If you are new to Django, let me first explain the process of Django project setup. 
&#xA;- ### Initialize a Virtual Environment (Recommended but not necessary)&#xA;&#xA;A virtual environment in Python is a great way of localizing dependencies and frameworks to a particular folder; it allows the developer to separate things out and keep them distinct. Additionally, when deploying or sharing the repository, other developers can flawlessly install the dependencies from the requirements.txt file in their local environment. &#xA;&#xA;So, it is always recommended to use a Python virtualenv when working with Python frameworks or libraries. We can set it up with a simple pip install and then give the environment a name.&#xA;&#xA;```&#xA;pip install virtualenv&#xA;``` &#xA;&#xA;This will install the package/tool using pip. &#xA;&#xA;After it has been properly installed, we can create an environment with an appropriate name: &#xA;```&#xA;virtualenv mytest&#xA;```&#xA;&#xA;Here `virtualenv` is the command and `mytest` can be any name; generally `env` or `venv` is preferred, but it should be understandable to the user. You will now be able to see a folder inside of your current directory named `mytest`, or whatever name you&#39;ve given it. &#xA;&#xA;**Windows**&#xA;&#xA;Now, if you are on Windows, you can activate the virtual environment by using the command:&#xA;```&#xA;mytest\Scripts\activate&#xA;``` &#xA;Here `mytest` is the name of your virtual env; it can be anything as per your choice. This will activate the virtualenv, which will be shown by `(mytest)` before the command prompt. &#xA;&#xA;**Linux/macOS**&#xA;&#xA;For Linux or macOS, you can use the command: &#xA;```&#xA;source mytest/bin/activate&#xA;```&#xA;In the above command, `mytest` is whatever name you used while creating the virtualenv (note that on Linux/macOS the activation script lives in `bin`, not `Scripts`). This should activate the virtualenv, indicated by `(mytest)` before the prompt in the terminal. 
&#xA;&#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1629023409389/kEe5AVAsr.png)&#xA;From the above image, we can see that we created and activated a Python virtualenv in a folder. &#xA;So, this is the walkthrough for setting up the virtualenv for a Django project; now we will move on to installing Django in this environment.&#xA;&#xA;- ### Installing Django using pip&#xA;&#xA;This is quite straightforward. You can use `pip install django` or `pip3 install django`, or however you normally install a library from pip. &#xA;&#xA;- ### Creating a Django project&#xA;&#xA;To create a Django project, we use the django-admin command like:&#xA;```&#xA;django-admin startproject mywebsite&#xA;```&#xA;This will create a folder called `mywebsite` (or whatever project name you like). Inside the `mywebsite` folder, you will have 2 things: a `manage.py` file and a `mywebsite` folder. Yes, there will be another `mywebsite` folder inside your project, which will have the settings, URLs, and other global (project-level) configuration files. The `manage.py` file is the most important file here. You should never touch/edit this file. We use this file to carry out all the operations, from running the server to managing the database. &#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1629032998253/QQ5QXf4v5.png)&#xA;&#xA;### Setup script for Windows&#xA;&#xA;Below is a batch file for the Windows operating system. 
Save the file with a `.bat` extension.&#xA;&lt;iframe&#xA;  src=&#34;https://carbon.now.sh/embed?bg=rgba%28171%2C+184%2C+195%2C+1%29&amp;t=cobalt&amp;wt=none&amp;l=application%2Fx-sh&amp;ds=true&amp;dsyoff=20px&amp;dsblur=68px&amp;wc=true&amp;wa=true&amp;pv=56px&amp;ph=56px&amp;ln=false&amp;fl=1&amp;fm=Hack&amp;fs=14px&amp;lh=133%25&amp;si=false&amp;es=2x&amp;wm=false&amp;code=mkdir%2520%25251%2520%250Acd%2520%25251%250Apip%2520install%2520virtualenv%250Avirtualenv%2520env%250Acall%2520env%255CScripts%255Cactivate%250A%250Apip%2520install%2520django%250Adjango-admin%2520startproject%2520%25251%2520.%250Acls%250A&#34;&#xA;  style=&#34;width: 803px; height: 366px; border:0; transform: scale(1); overflow:hidden;&#34;&#xA;  sandbox=&#34;allow-scripts allow-same-origin&#34;&gt;&#xA;&lt;/iframe&gt;&#xA;&#xA;Make sure the file is saved with a `.bat` extension, and be in the folder where you would like to create the Django project. Once in the appropriate location, enter the following command:&#xA;&#xA;```&#xA;commands.bat myproj&#xA;```&#xA;Here I assume that you have named the script file `commands.bat`; you can name it anything you like, but I like to keep this convention. After this, you don&#39;t need to do anything; everything will be handled by the script. &#xA;You can run the server using &#xA;```&#xA;python manage.py runserver&#xA;```&#xA;This will give you the base Django project set up on your system. Below is a live demonstration of the script; I have deliberately removed the `cls` command to show the process. It won&#39;t break the script if you add it back. 
&#xA;&#xA;![djqss.gif](https://cdn.hashnode.com/res/hashnode/image/upload/v1629024613612/Zsqa54_KD.gif)&#xA;&#xA;### Setup script for Linux/macOS&#xA;&#xA;Copy the code from the below script and save it in a file, preferably called `commands.sh`; you can name it anything you want, but keep the `.sh` extension so it is identified as a shell script.&#xA;&#xA;After the file is saved locally, you can run the script by passing the name of the Django project as a positional parameter. The command will look like:&#xA;&#xA;```&#xA;bash commands.sh myproj&#xA;```&#xA; &#xA;&lt;iframe&#xA;  src=&#34;https://carbon.now.sh/embed?bg=rgba%28171%2C+184%2C+195%2C+1%29&amp;t=blackboard&amp;wt=none&amp;l=application%2Fx-sh&amp;ds=true&amp;dsyoff=20px&amp;dsblur=68px&amp;wc=true&amp;wa=true&amp;pv=56px&amp;ph=56px&amp;ln=false&amp;fl=1&amp;fm=Hack&amp;fs=14px&amp;lh=133%25&amp;si=false&amp;es=2x&amp;wm=false&amp;code=%2523%21%252Fusr%252Fbin%252Fenv%2520bash%250A%250Amkdir%2520%25241%250Acd%2520%25241%250Apip%2520install%2520virtualenv%250Avirtualenv%2520env%250Asource%2520env%252Fbin%252Factivate%250A%250Apip%2520install%2520django%250Adjango-admin%2520startproject%2520%25241%2520.%250Aclear&#34;&#xA;  style=&#34;width: 803px; height: 384px; border:0; transform: scale(1); overflow:hidden;&#34;&#xA;  sandbox=&#34;allow-scripts allow-same-origin&#34;&gt;&#xA;&lt;/iframe&gt;&#xA;&#xA;After the script runs, you will have a Django project created inside a virtual environment. Note that the activation done inside the script doesn&#39;t persist once it exits, so you can manually activate the virtual environment afterward. You can experiment with it on your system, as it can be customized a lot more. This is just a bare-bones script to start a Django project, but you can add your own things to it.&#xA;&#xA;## Conclusion&#xA;Thus, with this little article, you can get a bit lazier in initializing a bare-bones Django project. We were able to understand the structure of a Django project and how to set up a virtual environment powered by Python. 
&#xA;&#xA;After understanding those concepts, we then moved on to making a script, namely a batch file and a shell script, to automate the initialization of the Django project. I hope it might have helped in some way or another. Thanks for reading till here. Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>Vim: Buffers</title>
      <link>https://www.meetgor.com/posts/vim-buffers</link>
      <description>Introduction I have talked about Vim , in the previous articles, and now I am quite keen on explaining the finest unit of file that you can open using Vim and t</description>
      <pubDate>Wed, 11 Aug 2021 00:00:00 UTC</pubDate>
<content>## Introduction&#xA;&#xA;&#xA;I have talked about Vim [tabs](https://mr-destructive.github.io/techstructive-blog/vim/2021/08/03/Vim-Tabs.html) and  [window splits](https://mr-destructive.github.io/techstructive-blog/vim/2021/08/06/Vim-Window-Splits.html)  in the previous articles, and now I am quite keen on explaining the finest unit of a file that you can open using Vim, and that is a buffer. Tabs are a collection of windows, windows are the viewports onto buffers, and buffers are the memory that holds the text of a file. So let us explore buffers in detail. This article won&#39;t cover each and every tiny detail about buffers, but surely enough to make you understand what a buffer is and how to manage them.&#xA;&#xA;## What is a Vim buffer&#xA;&#xA;Vim buffers are the chunks of memory that hold the text of a file. Vim buffers are involved in everything from opening a file to installing and upgrading plugins, using file managers, and anything else you can edit and work with inside Vim. &#xA;&#xA;So, why bother learning about Vim buffers, you&#39;ll ask. Well, you will need them to make some complex tasks possible in Vim, such as configuring it as your IDE or having much more control over what and how you open and close files in Vim. You might not need them in the broader sense, but having a grasp of what you are using and understanding the process under the hood is vital whatsoever. &#xA;&#xA;## Creating a buffer&#xA;&#xA;No need to create a buffer: if you are editing a file, that&#39;s already a buffer :) If you open Vim fresh, the current buffer is already an empty buffer. You can create another empty buffer by entering the command `:enew`. This will make a new empty buffer; if you have edited the contents of the previous buffer, you will need to write/save that buffer to a file first.&#xA;&#xA;You can see the details of the currently opened buffers by using the command `:ls` or `:buffers`. 
This will show a list of buffers currently loaded in the Vim instance. You will see output somewhat like this:&#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1628681203173/qSSRrlB2J.png)&#xA;&#xA;We can see the name of the current buffer, which in this case is `&#34;No name&#34;`; this would be replaced with a file name if we had a file open. This doesn&#39;t only print the current buffer but all the loaded buffers in the present window of Vim. We will explore the details of this command in the next section.&#xA;&#xA;&#xA;## Navigating through the buffers list&#xA;&#xA;![buff1.gif](https://cdn.hashnode.com/res/hashnode/image/upload/v1628681927410/t9DDPbDEs.gif)&#xA;&#xA;In the first column of the only row, you can see `1`, indicating the number of the buffer. Next, there are indicators like `%a` and `#`. These reference certain attributes the buffers have, like:&#xA;&#xA;- `%` stands for the buffer in the current window.&#xA;- `a` stands for the currently active buffer.&#xA;- `#` stands for the alternate buffer (previously edited).&#xA;- `h` stands for a hidden buffer (used for navigating to other files without saving changes).&#xA;&#xA;We also have additional attributes or indicators for the buffers, like:&#xA;&#xA;- `+` indicating the buffer is modified.&#xA;- `-` indicating the buffer cannot be modified. &#xA;- `=` indicating the buffer is read-only.&#xA;&#xA;To use the attribute `h`, you have to set the option in the current window: you can use `:set hidden` to allow hiding buffers, i.e., to edit another file without saving changes to the current buffer (with the file loaded).  
If you want the hidden option permanently, you can add `set hidden` to your `vimrc` file.&#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1628683076849/lWQ2ZEj0E.png)&#xA;&#xA;If you have enabled the hidden option in the window and you modify a file, you will see the `+` symbol before the file in the `:ls` command output, indicating that the file has been modified. If you try to quit the window, Vim will prompt you to save those unsaved modifications.  &#xA;&#xA;This is the power of Vim buffers: you can get really fast in terms of editing multiple files and saving all of them at once. This feature lets you work more effectively and flawlessly with window splits and tabs, as they can contain multiple buffers within them. &#xA;&#xA;Now we will see how to move between those buffers. We can pretty effectively use the numbers in the buffer list to directly jump to a buffer. We can use `:b n`, where n is the number of any buffer in the list, to move to that buffer. Optionally, we can use `:buffer n`, but that is quite a long command. &#xA;&#xA;We can use `:bp` to move to the previous buffer in the list. Similarly, we can use `:bn` to move to the next buffer in the list, and `:bfirst` and `:blast` to move to the first and the last buffer respectively.  &#xA;&#xA;![buff2.gif](https://cdn.hashnode.com/res/hashnode/image/upload/v1628684713072/QzqBC99NV.gif)&#xA;&#xA;## Moving through the buffers&#xA;&#xA;We can also move through our opened buffers using `Ctrl + o` to jump backward and `Ctrl + i` to jump forward (strictly speaking, these move through the jump list, which often lands you in previously visited buffers). This can be helpful if you do not want to remember the numbers associated with the buffers. &#xA;&#xA;![buff2.gif](https://cdn.hashnode.com/res/hashnode/image/upload/v1628689030442/KNDDAlJpN.gif)&#xA;&#xA;We can scan through the currently opened buffers and jump through them one by one, which becomes quite effective for switching between a few files. 
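
If you end up cycling buffers constantly, a couple of mappings in your `vimrc` can shorten the commands above (these exact mappings are my own suggestion, not something from the original setup):

```
&#34; allow switching away from a modified buffer without saving it first
set hidden
&#34; cycle through the buffer list with ]b and [b
nnoremap &lt;silent&gt; ]b :bnext&lt;CR&gt;
nnoremap &lt;silent&gt; [b :bprevious&lt;CR&gt;
```

With these, `]b]b]b` walks forward through the list without ever typing a buffer number.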
This can be suited for web development, where you can hop between HTML/CSS/JS files, or for Android development with Java/XML files; you get the idea, use it as per your needs.&#xA;&#xA;## Deleting buffers&#xA;&#xA;Now we will see how to delete the loaded buffers. We can simply use `:bd n`, where n is the number of the buffer in the list, to delete that buffer; optionally, we can use `:bd filename` to delete the buffer loaded with that file. &#xA;&#xA;![buff2.gif](https://cdn.hashnode.com/res/hashnode/image/upload/v1628685481565/m4bBHPf5C.gif)&#xA;&#xA;We can also delete a number of buffers at a time using a range like `:n,mbd`, where `n` is the first buffer and `m` is the last buffer. This command deletes every buffer between `n` and `m`, including `n` and `m`. &#xA;&#xA;![buff2.gif](https://cdn.hashnode.com/res/hashnode/image/upload/v1628685657441/s4rJ_CRRQ.gif)&#xA;&#xA;Just a quick fact: if you do not delete buffers, they will stay in memory. Even if you use `:q`, they won&#39;t get erased from the buffer list; you are just exiting the buffer view, not removing the buffer from memory. This can get really messy if you are opening too many files and keeping them all loaded. &#xA;&#xA;## Creating splits in buffer&#xA;&#xA;We can even create splits for buffers, basically window splits, but we will see this from the viewpoint of buffers this time. To create a horizontal split, we can type `:new filename` to open the file in a split. We can likewise use `:vnew filename` to open a vertical split with the provided file.  &#xA;&#xA;![buff2.gif](https://cdn.hashnode.com/res/hashnode/image/upload/v1628686386964/O2HhFqaKa2.gif)&#xA;&#xA;We can see some weird things in the `:ls` output if we open a new tab in the current instance of Vim: the list command shows all the buffers in the current tab as well as in the previous tab.  &#xA;&#xA;We can also open buffers in splits by using numbers or file names. 
We can use `:sb n` to open the contents of buffer n (any buffer number), and `:sb filename` to open that file in a split; this will always be a horizontal split. &#xA;&#xA;We can use `:sba` to open all the buffers in split windows. This can be quite handy to find a file you cannot remember having edited or wanted at the moment, as it opens every buffer loaded in the current instance. We can also use `:sbp` and `:sbn` to open the previous and next buffer respectively in a split. &#xA;&#xA;Additionally, we also have `:vertical ba` and `:ba` (`:ball`) to open all the buffers in vertical and horizontal splits respectively.  &#xA;![buff2.gif](https://cdn.hashnode.com/res/hashnode/image/upload/v1628693311069/5y_xbPho7.gif)&#xA;&#xA;**Also**, we have `:tab ba` or `:tab ball` to open all the buffers as tabs in the current instance. &#xA;&#xA;![bufftab.gif](https://cdn.hashnode.com/res/hashnode/image/upload/v1628693521966/f1RKu4JwM.gif)&#xA;&#xA;You can add more layers of effectiveness or customization using keymaps for splitting certain buffers, making the above set of commands more powerful and tailored to your needs.&#xA;&#xA;## Conclusion&#xA;&#xA;So, from the above sections, we were able to see what happens under the hood when you open, close, or edit a file in Vim. Though this was not an in-depth view of each and every detail of buffers in Vim, it is enough to understand what buffers are, how they work, and how to use them to become more effective. Hopefully, you have understood how to perform movements with buffers and how to create and delete them with tremendous control over the files you edit. Thank you for reading till here.   &#xA;&#xA;### Buffers &lt; Window Splits &lt; Tabs&#xA;&#xA;So, from these three articles, we were able to understand the terminology of buffers, window splits, and tabs. These are all units of Vim&#39;s way of grouping files. 
We can use buffers to load files into memory, we can view buffers in window splits, and those windows are all contained in a tab. Some might prefer using buffers, splits, or tabs, but hey, they are all part of tabs, so use everything as per need; don&#39;t force yourself to use only one thing. &#xA;&#xA;**Use buffers for opening a few files; if you need to open many files regularly, use window splits; if that&#39;s not enough, use tabs to separate different folders or directories**. Don&#39;t stop Viming, Happy Coding :)&#xA;&#xA;### References&#xA;&#xA;- [Vim Fandom: Buffers](https://vim.fandom.com/wiki/Buffers)&#xA;&#xA;- [Vim Buffers FAQ](https://vim.fandom.com/wiki/Vim_buffer_FAQ)&#xA;&#xA;- [Linux Hint - Understanding and using VIM Buffers](https://linuxhint.com/how_vim_buffers_work/)</content>
      <type></type>
    </item>
    <item>
      <title>Vim: Window Splits</title>
      <link>https://www.meetgor.com/posts/vim-window-splits</link>
      <description>Introduction Have you ever been stuck in Vim opening multiple files within a single window? Didn&#39;t anyone tell you, you can create multiple windows and split th</description>
      <pubDate>Fri, 06 Aug 2021 00:00:00 UTC</pubDate>
<content>## Introduction&#xA;&#xA;Have you ever been stuck in Vim opening multiple files within a single window? Did no one tell you that you can create multiple windows and split them within a single tab? Each window split is, of course, a separate viewport onto a buffer. In this way, you can create multiple windows inside of a single tab. What are tabs? You can learn the basics from my previous article about  [Tabs in Vim](https://mr-destructive.github.io/techstructive-blog/vim/2021/08/03/Vim-Tabs.html). We can create either vertical or horizontal splits within the window, making it flexible to work with multiple files in Vim. This article will look into the creation, navigation, closing, and rearrangement of window splits.&#xA;&#xA;## Creating a Window Split&#xA;&#xA;Creating window splits is quite straightforward. You should keep in mind the following things though:&#xA;- You can create a horizontal or a vertical split within a window.&#xA;- Creating a split, either vertically or horizontally, shrinks the current window so that the windows are equally spaced.&#xA;&#xA;Let&#39;s take a look at creating the vertical and horizontal splits one by one: &#xA;&#xA;### Vertical Splits&#xA;&#xA;A vertical split, as the name suggests, splits the current window into **two halves vertically**, a **standing split between two windows**.&#xA;&#xA;The below image clearly shows a vertical split between two windows. Here we are splitting a single window into two windows. 
We can also think of it as splitting the window from left to right.&#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1628232885853/xtBgWb-Yg.png)&#xA;&#xA;To create a vertical split, you can use `:vsp` or `:vsplit` to create a split of the same file / a blank file.&#xA;&#xA;![vsp.gif](https://cdn.hashnode.com/res/hashnode/image/upload/v1628233753115/4seJbY-h9.gif)&#xA;&#xA;If you already have a file open, it will open the same file in the split as long as you don&#39;t specify which file to open. You can specify the name of the file after the command: `:vsp filename` or `:vsplit filename`&#xA;&#xA;![vsp.gif](https://cdn.hashnode.com/res/hashnode/image/upload/v1628233871768/B3D_3NNGo.gif)&#xA;&#xA;It&#39;s not like you can create only a single split; you can create multiple vertical splits. That can get pretty wild pretty quickly.&#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1628234228391/vmJxW5HOo.png)&#xA;&#xA;In the above screenshot, I have created 5 vertical splits from a single window, making them equally wide and evenly spaced. This might not be useful every time but can get quite handy in some tricky situations.&#xA;&#xA;### Horizontal Splits&#xA;&#xA;Similar to vertical splits, we have horizontal splits, which split from top to bottom. We can **split a single window into two halves horizontally**, a **sleeping split between the windows**. &#xA;&#xA;The below image clearly shows a horizontal split between two windows. Here we are splitting a single window into two windows. We can also think of it as splitting the window from top to bottom.&#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1628233063400/5PVdEsGHZ.png)&#xA;&#xA;To create a horizontal split, you can use `:sp` or `:split` to create a horizontal split of the same file / a blank file. 
This will create a blank file inside a horizontal split.&#xA;&#xA;![sp.gif](https://cdn.hashnode.com/res/hashnode/image/upload/v1628235156757/ckfDxh-1D.gif)&#xA;&#xA;Similar to the vertical splits, you can open files while creating the split. You can use the command `:sp filename` or `:split filename` to create the horizontal split between the windows and open the specified file in it.&#xA;&#xA;![sp.gif](https://cdn.hashnode.com/res/hashnode/image/upload/v1628235452142/eVGrEZmHVK.gif)&#xA;&#xA;Again, as seen with the vertical splits, we can create as many splits as we like. This looks very ugly, but who knows when you may need it. We can create splits in any order of vertical or horizontal; we&#39;ll see that in a later section.&#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1628235679425/9dtK5TV6G.png)&#xA;&#xA;### Vertical and Horizontal Splits Together&#xA;We can also create vertical and horizontal splits independently; we are not restricted to creating only one kind of split at a time. You can create a vertical split and then create a horizontal split within it; in this way, we can make good use of both kinds of splits as per our needs.&#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1628236573469/7FOJIgP-z.png)&#xA;&#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1628236677571/CCeVePLkp.png)&#xA;&#xA;## Moving around Splits&#xA;&#xA;Now let&#39;s talk about navigation around these splits; it&#39;s again a simple thing. We can use `Ctrl+w` as our primary command along with Vim navigation to move around the splits. For example, `Ctrl + w + h` will take you to the split to the left of your current open window. You can also use `Ctrl+w` **twice** to hop around to the next window split in a repeating loop. 
&#xA;&#xA;So, we can use:&#xA;&#xA;`Ctrl + w` + `w` -&gt; Switch to the next split (in the order of creation of those splits).&#xA;&#xA;`Ctrl + w` + `h` -&gt; Move to the left split.&#xA;&#xA;`Ctrl + w` + `l` -&gt; Move to the right split.&#xA;&#xA;`Ctrl + w` + `j`  -&gt; Move to the split below.&#xA;&#xA;`Ctrl + w` + `k`  -&gt; Move to the split above.&#xA;&#xA;These commands should be enough to pull you through any splits from anywhere; it just becomes easy to use Vim navigation inside of them. You can use the arrow keys if you are stuck somewhere, but the Vim key bindings will work out of the box.&#xA;&#xA;## Creating Splits using Keyboard shortcuts&#xA;&#xA;If you think you are wasting time going into the command mode and typing the commands to create splits, well, there are some shortcuts for you. &#xA;&#xA;`Ctrl + w` + `v` -&gt; Create a vertical split.&#xA;&#xA;`Ctrl + w` + `s` -&gt; Create a horizontal split.&#xA;&#xA;This will open the split with the current file in the original window, so if you need to change the file, you can use the edit command (`:e filename`) inside the split. &#xA; &#xA;&#xA;## Rearranging the Window Splits&#xA;&#xA;If you have a specific set of splits open, you can rotate between them. For example, with a horizontal split, you can rotate the splits to move the upper split down and the lower split up.  &#xA;&#xA;![cwr.gif](https://cdn.hashnode.com/res/hashnode/image/upload/v1628240624930/HBIKPummz.gif)&#xA;&#xA;From the above illustration, we can see we rotated (swapped) the splits. We can also use certain commands to move the splits into whatever positions we want. &#xA;&#xA;`Ctrl + w` + `r` -&gt; Swap the two splits (either horizontal or vertical).&#xA;&#xA;`Ctrl + w` + `H` -&gt; Move the split to the left (`Ctrl + w` + `Shift + h`). &#xA;&#xA;`Ctrl + w` + `J` -&gt; Move the split down (`Ctrl + w` + `Shift + j`). &#xA;&#xA;`Ctrl + w` + `K` -&gt; Move the split up (`Ctrl + w` + `Shift + k`). 
&#xA;&#xA;`Ctrl + w` + `L` -&gt; Move the split to the right (`Ctrl + w` + `Shift + l`).&#xA;&#xA;&#xA;![spr.gif](https://cdn.hashnode.com/res/hashnode/image/upload/v1628242295049/_HadBQPrs.gif)&#xA;&#xA;We can see that we were able to move the splits to whichever position we wished.&#xA;&#xA;We can also resize the splits as required, removing the default equal sizing. We have a couple of options to do this:&#xA;&#xA;`Ctrl + w` + `+` -&gt;  Increase the height of the current split.&#xA;&#xA;`Ctrl + w` + `-` -&gt;  Decrease the height of the current split.&#xA;&#xA;`Ctrl + w` + `&gt;` -&gt;  Increase the width of the current split.&#xA;&#xA;`Ctrl + w` +  `&lt;` -&gt;  Decrease the width of the current split.&#xA;&#xA;`Ctrl + w` + `=` -&gt;  Make the splits equal in width and height.&#xA;&#xA;![spr.gif](https://cdn.hashnode.com/res/hashnode/image/upload/v1628243827459/nL3mus88d.gif)&#xA;&#xA;We can also use `:resize {number}` to manually set the size of the horizontal split and `:vertical resize {number}` to manually set the size of the vertical split. This is a bit fiddly; if you know exactly what size you want, it&#39;s totally fine, but the commands demonstrated earlier are more adjustable and user-friendly. &#xA;&#xA;We can also use `Ctrl + w` + `_` to maximize the height of the current split, minimizing all the other splits.&#xA; &#xA;## Closing the Splits&#xA;&#xA;Now, after doing all sorts of wizardry with the window splits, the finishing touch is to close those splits after use. We can surely use `:q` to close the current window split, but entering each split and manually closing all of them is tedious, so we can use:&#xA;&#xA;`Ctrl + w` + `c` -&gt;  Close the current split.&#xA;&#xA;`Ctrl + w` + `o` -&gt;  Close all other splits except the current one. 
&#xA;&#xA;![closesp.gif](https://cdn.hashnode.com/res/hashnode/image/upload/v1628244089101/Ebdup7wNE.gif)&#xA;&#xA;## Conclusion&#xA;&#xA;So, we have seen the basics of using window splits in Vim: how to create, navigate, rearrange, resize, and close them. Please let me know if there is some important point that is missing. Thank you for reading through to here.&#xA;&#xA;### References:&#xA;&#xA;- [Sodocumentation](https://sodocumentation.net/vim/topic/1705/split-windows)&#xA;- [Linux Hint Vim Split Screen](https://linuxhint.com/vim_split_screen/)&#xA;- [Starefossen](https://gist.github.com/Starefossen/5957088)&#xA;&#xA;Hopefully, this has given you a good idea of how to deal with window splits in Vim. Happy Coding and Viming :)</content>
      <type></type>
    </item>
    <item>
      <title>Vim: Tabs</title>
      <link>https://www.meetgor.com/posts/vim-tabs</link>
      <description>Introduction So, you are playing with Vim and getting good at but something pulls you off. Basically can&#39;t get used to switching between files or windows, and t</description>
      <pubDate>Tue, 03 Aug 2021 00:00:00 UTC</pubDate>
      <content>## Introduction&#xA;&#xA;So, you are playing with Vim and getting good at it, but something puts you off. Maybe you can&#39;t get used to switching between files or windows, and that&#39;s totally fine. Some may prefer using window splitting, file managers, and whatnot. But let me introduce you to TABS in Vim. A simple and elegant way to open multiple files in Vim. &#xA;&#xA;## Opening Tabs&#xA;&#xA;To open a tab, you can type `:tabnew`  or  `:tabedit` to open a blank Tab with no file open in it. This basically works like the `:e` command, which opens a buffer for you with no named file.&#xA;&#xA; If you already have an existing file in the current folder you are in, then you can type `:tabf filename` or `:tabnew filename` or `:tabedit filename`. This also applies to opening folders or directories, which will open the file structure in a Vim buffer. &#xA;&#xA;![tabop.gif](https://cdn.hashnode.com/res/hashnode/image/upload/v1627994196949/A-ZMWZefa.gif)&#xA;&#xA;From the above illustration, we can see that the new tab was created using the filename with the `tabf` command and an empty tab was created with the `tabnew` command. Yes, we can use `tabnew` for both cases, but `tabf` saves you from typing two more letters. It depends on your preference, as with `tabnew` you don&#39;t have to remember one more command. You can also customize the commands if you feel they are too long to type; a simple mapping would do the trick for opening the tabs for you.&#xA;&#xA;### To open a Tab with a file specified.&#xA;&#xA;- `:tabf filename` &#xA;&#xA;- `:tabnew filename` &#xA;&#xA;- `:tabedit filename`   &#xA;&#xA;### Open a Tab without any file specified.&#xA;&#xA;- `:tabnew` &#xA;&#xA;- `:tabedit`&#xA;&#xA;&#xA;You can open tabs as you see fit; they can be ideal if you are going to use certain files for a longer duration of time. 
This can be very handy in various programming scenarios, especially Web, Android, and application development, where we need to edit a few files again and again. If you prefer window-splitting, that&#39;s totally fine; this is just to show that other ways exist as well.&#xA;&#xA;## Switching Tabs&#xA;&#xA;Now that you are comfortable with opening tabs, we can move on to switching between tabs. If you just have a few tabs open, you can easily switch to the next tab using `gt` and to the previous tab using `gT`. But if you are feeling ambitious and want to open ten or twenty tabs XD, then you can use a number before the `gt` command. For example, you can type `5gt` to move to the 5th Tab. If you do not know which tab is which, you can type in `:tabs` and this will list the currently open tabs along with their numbers.&#xA;&#xA;&#xA;![tabsw.gif](https://cdn.hashnode.com/res/hashnode/image/upload/v1627996436129/vApSyRD6o.gif)&#xA; &#xA;You can see my keystrokes (except the last keystroke) in the lower right corner. We can easily switch between tabs using these sets of commands and surely configure them as per your preference. We also saw the `:tabs` command, which can be pretty handy if you are working with a number of tabs.&#xA;&#xA;You can move around Tabs using some special commands like `:tablast` to move to the last tab and `:tabfirst` to move to the first tab. &#xA;&#xA;- `gt`  -&gt; Move to the **NEXT** Tab from Normal mode.&#xA;&#xA;- `gT` -&gt; Move to the **PREVIOUS** Tab from Normal mode.&#xA;&#xA;- `ngt`  -&gt; Move to the Nth Tab from Normal mode (n is the number of an open tab).&#xA;&#xA;- `:tablast` -&gt; Move to the **LAST** Tab.&#xA;&#xA;- `:tabfirst` -&gt; Move to the **FIRST** Tab.&#xA;&#xA;- `:tabs`  -&gt; Get a list of the currently open Tabs (including the file opened in each tab and its number).&#xA;&#xA;&#xA;## Closing Tabs&#xA;&#xA;So, after opening tabs you want to close em right? That is quite simple, as expected. 
Just type `:tabc`; this will close the current tab. You can close a specific tab by prefixing `tabc` with the number of that tab. For example, use `:2tabc` to close the 2nd tab. &#xA;&#xA; If you want to reopen the closed tab, you can look up the buffer using `:ls`, and after finding the number of the buffer your tab had open, you can type `:tabnew +nbuf`, where n is that buffer number. &#xA;&#xA;If you want to close all the tabs except the current one, you can use `:tabo`. This will close every tab except the one you are in, collapsing the top tab bar that shows the files opened in those tabs.&#xA;&#xA;![tabce.gif](https://cdn.hashnode.com/res/hashnode/image/upload/v1627997764101/HhUmFFQLZ.gif)&#xA;&#xA;In the above GIF, we located the number of the last closed buffer, since we knew the name of the file that was opened in that tab. We also saw how to close a specific tab using its number, as well as the current tab.&#xA;&#xA;- `:tabc` -&gt; Close the current tab.&#xA;&#xA;- `:ntabc`-&gt; Close the Nth Tab.&#xA;&#xA;- `:tabo`  -&gt; Close all the tabs except the current Tab.&#xA; &#xA;## Re-ordering Tabs&#xA;&#xA;This is a very tiny little detail but becomes a super tool in many cases. Let&#39;s say you need to reference some content in a file again and again; it makes sense to move the tabs close together instead of switching back and forth. You could use window splitting in this case, though we will see how to reorder tabs just to get a good grasp of using Tabs in Vim.&#xA;&#xA;To reorder tabs, you are basically moving a tab from one position to another. Let&#39;s say you have a Tab at position `5`, which is your current tab, and you want it at position 2. So you move the current tab to position two, which is as simple as `:tabm 1`. This will move the current tab, which is at number 5, to the 2nd position. 
Remember the tab order is 0-based, so just use the number you are thinking of minus 1. So the command becomes `:tabm n`, where n is the index of the tab (starting from 0, the left-most tab). If you want to move the tab to the last position, don&#39;t specify any number; just type the `tabm` command, and that&#39;s it.&#xA;&#xA;![tabmv.gif](https://cdn.hashnode.com/res/hashnode/image/upload/v1628001405055/m0XQAfdQJ.gif)&#xA; &#xA;From the above example, we were able to move the tabs to our desired locations without a hassle. This is some basic stuff you can do with Tabs in Vim; surely you can add your own custom mappings to enhance productivity and improve your workflow with Tabs in Vim.&#xA;&#xA;- `:tabm n` -&gt; Move the currently opened Tab to the Nth position (starting from 0).&#xA;&#xA;## Conclusion&#xA;&#xA;So, we have seen how we can use Tabs and move around between files and folders; we are now able to open, close, move, and navigate around the tabs in Vim. With some custom mappings, this can be overhauled to use far fewer keystrokes. There are many other navigation techniques in Vim, and using Tabs is one of them; surely it won&#39;t suit everyone, but there will be someone who will prefer using this. Thank you for reading till here. I hope you learned something from this to enhance your grasp of Vim. Happy Coding and Viming :)</content>
      <type></type>
    </item>
    <item>
      <title>Scraping the meaning of a word from dictionary.com using a BASH script.</title>
      <link>https://www.meetgor.com/posts/bash-dictionary-scrapper</link>
      <description>Introduction Web Scraping is quite an interesting and powerful tool or skill to have in a Programmer&#39;s toolkit. It helps in analyzing data and getting some info</description>
      <pubDate>Tue, 27 Jul 2021 00:00:00 UTC</pubDate>
      <content>## Introduction&#xA;&#xA;Web Scraping is quite an interesting and powerful tool or skill to have in a programmer&#39;s toolkit. It helps in analyzing data and getting information in various formats. Web Scraping is a process in which a user fetches a website&#39;s content, using patterns in its HTML tags to locate the desired content to be fetched or scraped.&#xA;&#xA;For this article, we aim to fetch the meaning of a word entered by the user from the dictionary.com website. We need to print just the meaning of the word from the HTML tags around it. We must have a good understanding of HTML and some basic Linux tools such as cURL, grep, sed, and others to do all of this. &#xA;&#xA;![Inspecting the Target Website](https://cdn.hashnode.com/res/hashnode/image/upload/v1625737499658/FGLusWSII.png)&#xA;&#xA;## Inspecting the Target Website&#xA;&#xA;To begin scraping the website, it is first absolutely important to inspect the website and view its source code. For that, we can make use of the Inspect tool in our browser. Just right-click on the page you want to scrape, and a list of options appears in front of you. Select the Inspect option (also Shift + Ctrl + I); this will open a side window with a plethora of options. You simply have to select Elements from the top of the menus. The code that you will see is the source code of the website. No, don&#39;t think you can change the content of the website from here :)&#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1625737510444/KonUrEpcq-.png)&#xA;Inspect Tool in the browser.&#xA;&#xA;Now we have to find the element holding the content which we want to scrape. You can click the `select the element in the page to inspect it` icon in the top left-hand corner. This will allow you to inspect the particular element that you selected on the webpage. 
You can now see the element tag, id, class, and other attributes required to fetch the element&#39;s content.&#xA;&#xA;&#xA;## Selecting the particular element from the website to view the source code.&#xA;&#xA;&#xA;### Accessing the website from the Command line/terminal&#xA;&#xA;Now that the website structure is understood, we can actually move on to scraping it. For that, we need to have the website&#39;s content on our local machine. First of all, we need to access the website from somewhere other than the browser, since copy-pasting content from there is impractical. So let&#39;s use the command line here. We have a popular tool known as `cURL`, which stands for client URL. The tool fetches the contents of the provided URL. It also has several parameters or arguments that can be used to modify its output. We can use the command&#xA;&#xA;```&#xA;curl -o meaning.txt https://www.dictionary.com/browse/computer#&#xA;```&#xA;&#xA;The above command fetches the HTML page for the word Computer; it could be any word you might be searching for.&#xA;&#xA;### Understanding the Website Structure.&#xA;&#xA;Here comes the time to explain the structure of dictionary.com. When you search a word on the website (dictionary.com), you are routed to `/browse`, which then fetches the word for you and lands you at `/browse/word#` (the word can be any word you searched). The curl command dumps the output in `meaning.txt` or any specified file. If you see the contents of the file, they are the same as on the web. So we are going to store the scraped page in the meaning.txt file; you can customize the name and command however you like.&#xA;&#xA;Voila! You successfully scraped a webpage. Now the next target is to filter the webpage content.&#xA;&#xA;### Filtering Content from the local website file&#xA;&#xA;Now that we have the content of the webpage on our local machine, we need to search or filter out the useful content and remove the unwanted tags and elements. 
For that, we can use commands such as `grep` and `sed`. &#xA;&#xA;### Finding Tags to Extract content.&#xA;&#xA;We need to find patterns and similarities in the tags that contain the text of the meaning of the specified word. From the analysis of the webpage, we see that the element `&lt;span class=&#34;one-click-content css-nnyc96 e1q3nk1v1&#34;&gt;` contains the actual meaning. We just need the basic meaning; we may not need examples and long, lengthy definitions on our terminal. So we will go with filtering out the span tag with the class called `one-click-content css-nnyc96 e1q3nk1v1`. To do that we can use the grep command, which prints the lines matching a specified expression or text. Here we need the span element with that particular class name, so we will use regular expressions to find it more effectively.&#xA;&#xA;```shell&#xA;grep -oP &#39;(?&lt;=&lt;span class=&#34;one-click-content css-nnyc96 e1q3nk1v1&#34;&gt;).*?(?=&lt;/span&gt;)&#39; meaning.txt &gt;temp.txt &#xA;```&#xA;&#xA;### Using GREP command to filter.&#xA;&#xA;The above command will search and return only the text contained in the span tags with that particular class name from the meaning.txt file, which holds the webpage&#39;s source code. The `-o` flag returns only the matching part of each line, and `-P` treats the following expression as a Perl-compatible regex. The command will return everything in between those tags. Finally, we are storing the result or output in `temp.txt`. &#xA;&#xA;Now, if you think we are done, not quite: the webpage can have internal or external links embedded inside of the elements as well, so we need to again filter out the HTML tags from the `temp.txt` file. For that, we will introduce another tool to filter text called `sed`, or stream editor. This tool allows us to filter a stream of text and print or store the outcome. 
The following code will remove the HTML tags from the scraped text.&#xA;&#xA;### Using the SED command to remove embedded tags&#xA;&#xA;```shell&#xA;sed -i &#39;s/&lt;[^&gt;]*&gt;//g&#39; temp.txt&#xA;```&#xA;The above command filters the text and removes the HTML tags from the `temp.txt` file using regular expressions. The `-i` flag makes sed edit `temp.txt` in place, so no output redirection is needed. We have used a regex to remove `&lt;&gt;` tags from the file, and hence anything in between angle brackets is also removed and we get only the pure text, but it may still contain blank lines and leftover symbols. To remove those we&#39;ll again use `grep` and filter the final meaning into our file.&#xA;&#xA;### Removing empty and unwanted lines from the Content using GREP.&#xA;```shell&#xA;grep -v &#39;^\s*$\|^\s*\#&#39; temp.txt &gt;meaning.txt&#xA;```&#xA;&#xA;The above command removes blank lines and lines starting with `#` from the temp.txt file. We finally store everything filtered in the meaning.txt file. If you have understood everything up to here, the next concrete step will be super easy for you, as we will assemble everything into a shell script.&#xA;&#xA;## Making the Shell Script&#xA;&#xA;```bash&#xA;#!/bin/bash&#xA;&#xA;read -p &#34;Enter the word to find meaning : &#34; word&#xA;output=&#34;meaning.txt&#34;&#xA;url=&#34;https://www.dictionary.com/browse/$word#&#34;&#xA;&#xA;curl -o &#34;$output&#34; &#34;$url&#34; &#xA;clear&#xA;grep -oP &#39;(?&lt;=&lt;span class=&#34;one-click-content css-nnyc96 e1q3nk1v1&#34;&gt;).*?(?=&lt;/span&gt;)&#39; &#34;$output&#34; &gt;temp.txt &#xA;&#xA;sed -i &#39;s/&lt;[^&gt;]*&gt;//g&#39; temp.txt&#xA;grep -v &#39;^\s*$\|^\s*\#&#39; temp.txt &gt;&#34;$output&#34;&#xA;echo &#34;$word&#34;&#xA;while read -r meaning &#xA;do&#xA;&#x9;echo &#34;$meaning&#34;&#xA;done &lt; &#34;$output&#34;&#xA;```&#xA;&#xA;We can clearly see most of the commands are the same, but some have been modified to avoid repetition and to enable automation. 
Firstly, I take the word as input from the user and store it in an appropriately named variable. Next, I create another variable for the file name in which we are going to store the meaning of the word, and also a variable for the URL of the website we are searching. Then we invoke `cURL` with the URL variable, storing the fetched page in the output file variable we created. Creating variables makes our script easier to manage and also improves the readability of the script. &#xA;&#xA;## Updating the cURL command&#xA;&#xA;We can also update the curl command by appending `&amp;&gt; /dev/null`; this discards curl&#39;s progress and network output, so we will only see the contents of the meaning.txt file. Adding this is optional as it depends on the shell and operating system, so alternatively we can use the `clear` command to wipe the curl output off the screen.&#xA;&#xA;## Printing the output file line by line.&#xA;&#xA;To print the meaning in the output file, we need to print each line separately as the meanings are distinct. Therefore, we read the output file in a while loop and echo the loop variable for each line.&#xA;&#xA;## Script Screenshots:&#xA;&#xA;&#xA;![dict.gif](https://cdn.hashnode.com/res/hashnode/image/upload/v1627366344193/We_heehuL.gif)&#xA;&#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1627365131696/YH8Vaqoh_.png)&#xA;&#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1627365274090/D9IETfRAh.png)&#xA;&#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1627365304653/A9AXuHDH8.png)&#xA; &#xA;&#xA;## Output Conclusion&#xA;&#xA;From the above output, we have scraped the meanings of the words `Mathematics`, `code`, and `python`. It works only for words which are on the dictionary.com website. 
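To make the line-by-line printing step concrete, here is a tiny self-contained sketch: it fakes a scraped `meaning.txt` with two hand-written lines (assumed sample text, not real dictionary.com output) and prints it with the same kind of loop, quoting the variable so each line&#39;s spacing is preserved:

```shell
#!/bin/bash
# Stand-in for the scraped output file (assumed sample content, not real scraped data).
printf 'a programmable machine\none who computes\n' > meaning.txt

# Same line-by-line printing idea as the script; piping into the loop
# and quoting "$meaning" keeps each definition on its own line, intact.
cat meaning.txt | while read -r meaning
do
    echo "$meaning"
done
```

Each definition prints on its own line, exactly as the script&#39;s final loop does.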
We have successfully made a scraper that fetches the meaning of the input word from the dictionary.com website. &#xA;&#xA;## Appropriate use of Web Scraping.&#xA;&#xA;We must be careful not to scrape any website without reading its policies. Only if scraping is allowed should you scrape the content, and you should not use it for any monetization. This article was just meant to demonstrate the idea of web scraping using BASH, for teaching purposes only.&#xA;&#xA;As we saw, it is quite easy to scrape a website&#39;s content, especially if you find patterns in its code structure. We were able to make a script that can print the meaning of an input word using the dictionary.com website as the source.&#xA;&#xA;We can see how powerful Bash can be in terms of web scraping. I hope you found this interesting and inspiring. Thank you for reading. Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>Vim: Registers</title>
      <link>https://www.meetgor.com/posts/vim-registers</link>
      <description>Introduction Have you ever found it difficult to manage and operate text in Vim, especially in Cut/Copy/Paste stuff. You cannot figure out how on earth am I sup</description>
      <pubDate>Wed, 21 Jul 2021 00:00:00 UTC</pubDate>
      <content>## Introduction&#xA;&#xA;Have you ever found it difficult to manage and operate on text in Vim, especially in Cut/Copy/Paste stuff? You cannot figure out how on earth you are supposed to retrieve your un-saved, un-committed changes in Vim. Then this, my dear friend, is an article for you. You will learn what Vim registers are and where your deleted and copied stuff resides. This won&#39;t be a comprehensive guide, but it is enough to let you perform the wizardry of registers in Vim.&#xA;&#xA;## What are Vim Registers?&#xA;Vim Registers are memory addresses that store data. Technically they are spaces in memory where Vim can store things to retrieve later. They can store text, operations, commands, and anything you can do with text in Vim. &#xA;&#xA;## Access Vim Register&#xA;To store something in register `x` we can use the format `&#34;[register-name][command]`. This will perform the command and store the output in the register specified.&#xA;&#xA;A register can be accessed by using the double quote `&#34;` before its name. Let&#39;s say we have stored something in the x register; we can access its contents using `&#34;x` in normal mode, followed by the command to perform.&#xA;&#xA;![vimreg.gif](https://cdn.hashnode.com/res/hashnode/image/upload/v1626940084837/x99b92Wsq.gif)&#xA;&#xA;For example, in the above gif, I yanked the selected text using the yank command, but I stored it in the register x by prefixing the command with `&#34;x`. Hence the command became `&#34;xy`, and similarly, for pasting from the x register we can use `&#34;xp`. Quite simple, right? But what is the purpose of it? This really shines when handling multiple files; imagine losing your yanked text by yanking something else elsewhere. Vim registers add extra dimensions to storing and retrieving text; they are not used often but can shine really well in certain tricky situations. &#xA;&#xA;## Commonly used Vim Registers&#xA;If you use Vim all the time, you will use some registers daily. 
Some common ones are the clipboard/selection registers `&#34;+` and `&#34;*`. These are used to perform operations on the content stored in the system clipboard. They come in quite handy for copying and pasting between Vim and other applications, such as Stack Overflow XD. Just kidding, it can be any other application on the system. We can add `p` or `y` after the register to paste or yank respectively, so to paste and yank text to and from Vim, you can use `&#34;+p` and `&#34;+y`.&#xA;&#xA;## Types of Vim Registers&#xA;There are many different types of Vim registers used for particular needs and also some for custom editing. &#xA;&#xA;- ### 26 Named registers (`a` to `z`)&#xA;&#xA;These are the registers just for you. You can use them for whatever copying/pasting/editing and other creative stuff you need; the `x` register we utilized in the earlier example was one of them. We can use any register from a to z; it just boils down to which key combination is easiest as per your preference. **Remember, anything you yank into any register will also be stored in the unnamed register (discussed later)**. &#xA;&#xA;Also, if you use the capital named registers (A-Z), the content will be appended to the corresponding lowercase named register, along with the unnamed register. This makes them a bit safer to work with but messy at the same time. &#xA;&#xA;- ### 10 numbered registers (`0` to `9`)&#xA;&#xA;These registers are filled in automatically by Vim; they store the recently yanked or deleted content in the current buffer. &#xA;- The `0` register stores the most recent yanked/copied text.&#xA;- The `1` register stores the most recent deletion of text.&#xA;- The `2` register stores the 2nd most recent deletion. &#xA;&#xA;.. and so on.&#xA;&#xA;The content of the 9th register is flushed away every single time you delete something. The contents are shifted to the next numbered registers every time there is a deletion. 
For example, if we delete something, the content is stored in the `1` register, but its previous content gets shifted to the `2` register, and so on. So we have 9 clipboards for our deletion history. That is undoubtedly a powerful thing.&#xA;&#xA;- ### Selection and drop registers (`*` , `+` , and `~`)&#xA;&#xA;As discussed earlier, selection registers are used for storing the contents of the system clipboard. There are differences between `*` and `+` on Unix-like operating systems, but they do similar things. On Unix-like operating systems, the `*` register stores the text from the mouse selection in the X Window System, whereas the `+` register stores the text from the system clipboard. On Windows and other operating systems besides Unix-like ones, both of them function similarly.&#xA;&#xA;The drop register `~` is available only in GVim, as it stores the text from the last drag-and-drop operation performed. &#xA;&#xA;- ### Read-only registers (`:`, `.`, and `%`)&#xA;&#xA;The registers `:`, `.`, and `%` are read-only, which means you cannot change their content; they are modified and maintained by Vim automatically. &#xA;&#xA;The `:` register stores the most recently executed command-line command. Separately, with `Ctrl + R` in Insert mode we can get the content of any register, so we can paste and edit contents we have stored in some register. This is quite a neat little feature, not used most of the time, but it can become the most powerful tool in corner cases and tricky situations. &#xA;&#xA;![vimc-r.gif](https://cdn.hashnode.com/res/hashnode/image/upload/v1626946965392/KBSZmxOHvi.gif)&#xA;From the example in the gif, we deleted the line, and hence the content was stored in register 1, so when we press `Ctrl + R` in Insert mode, we can paste in the contents of any register. Here we used the register `1` to get our content back. 
&#xA;&#xA;The `%` register holds the name of the current file. This is really a great and life-saving feature, as it is also used in automating the compilation and running of source files from Vim itself. &#xA;&#xA;The `.` register contains the last inserted text, which can also be used in various scenarios. These registers make Vim feel a bit more polished and well organized.&#xA;&#xA;![vimc-r.gif](https://cdn.hashnode.com/res/hashnode/image/upload/v1626948017124/dQCky3kfC.gif)&#xA;&#xA;From the above gif, we inserted some text, and using the read-only registers, we inserted the last inserted text to the screen using the `.` register; we also used the register `%` to get the current file name.&#xA;&#xA;- ### Expression and Search pattern register (`=`, `/`)&#xA;&#xA;The `=` register is used to evaluate expressions and store the result. Vim has some built-in functions, and some of them are listed  [here](https://renenyffenegger.ch/notes/development/vim/script/vimscript/functions/index). We can use these functions by typing `Ctrl + R` in Insert mode followed by `=`, then entering the expression. Simple multiplication works, but you can take this further however you like and desire. &#xA;&#xA;The `/` register takes searching in Vim even further. We use /pattern in normal mode to search for the pattern, but the actual work is done with the help of the register: the pattern text is stored in the register and Vim accesses it later to find the next match or perform other operations with the pattern. This is again effectively a read-only register, though you can change its content using commands like `:let`. It is not advisable to change the read-only registers, as it can mess up the functioning of Vim.&#xA; &#xA;- ### Small delete and alternate file registers (`-`, `#`)&#xA;&#xA;Small delete is quite a meaningful and sensible name in the context of its functioning. 
The `-` register, or small delete register, stores the content of small deletes like characters or words. Basically, anything smaller than a line is small for Vim. If you delete something, let&#39;s say a word from a line, it&#39;ll be stored in the `-` register. &#xA;&#xA;The `#` register, or alternate file register, stores the name of the alternate file in Vim. An alternate file is the file edited before the current file in Vim. If you edited `file-x` and now you are in `file-y`, then `file-x` becomes your alternate file. &#xA;&#xA;![vimc-r.gif](https://cdn.hashnode.com/res/hashnode/image/upload/v1626950574633/P7U01Ssy_.gif)&#xA;&#xA;This example shows how we can use the small delete register to retrieve small deletions. We also saw the current and alternate files in Vim and the registers in which they are stored.&#xA;&#xA;- ### Unnamed and black hole register (`&#34;`, `_`)&#xA;&#xA;The unnamed register `&#34;`, to put it simply, stores everything from deletions to yanks. It stores the text affected by the `d`, `D`, `x`, `X`, `s`, `S`, `c`, `C`, `y`, and `Y` commands and some variants of these commands. This can be both a good and a bad feature at times, but it is nice to have such a register at hand. &#xA;&#xA;Wear your space goggles as we will see how black holes work :) The black hole register is an elegant touch to the series of registers we have. The register acts as a sink for our deletions. We can simply dump anything we don&#39;t care about into this register. It helps in scenarios where we want to avoid adding stuff from deletions to the named or numbered registers. Isn&#39;t that just like a black hole? It simply swallows things. 
&#xA;&#xA;![vimc-r.gif](https://cdn.hashnode.com/res/hashnode/image/upload/v1626951775215/OEanNfA4a.gif)&#xA;The above example shows that we can dump a deletion into the `_` register without touching the rest of the registers, which would otherwise be overwritten and lose their saved contents. We first deleted the line `solve(s);` and dumped the statement `string s = &#34;0110` into the black hole register `_`; when we retrieve with the unnamed register `&#34;`, we get back the first deleted statement rather than the dumped one. Quite neat and elegant.&#xA;&#xA;## Where can you use Vim registers&#xA;&#xA;Well, this may differ depending on where you are in your programming journey: you may find it useless as a beginner, but mind you, this is some extraordinary stuff that can save real development time. It shines in scenarios where the file system runs deep and there is a lot of code to edit. When you have a ton of things to edit systematically, this concept can add new dimensions to your work and make it comfortable and risk-free. &#xA;&#xA;## Conclusion&#xA;&#xA;![vimregistertable.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1626953924487/2bPqejVkT.png)&#xA;The above chart is quite simple to remember if you are serious about using Vim for complex projects or simply about staying productive in the editor. Hope you find it interesting and easy to remember.&#xA;This was a small introduction to Vim registers: a small topic, but one with extensive usage and skill-dependent depth. Vim is already a customizable editor; registers add an extra dimension to its customizability in editing and programming. Thank you for reading. Happy Coding :)&#xA;&#xA;**References** :&#xA; [baeldung.com](https://www.baeldung.com/linux/vim-registers), [brianstorti.com](https://www.brianstorti.com/vim-registers/)</content>
      <type></type>
    </item>
    <item>
      <title>Vim: Enhancing Editing Speed</title>
      <link>https://www.meetgor.com/posts/vim-editing-speed</link>
      <description>Introduction Editing/ Writing is a crucial part of any text editor, the swiftness with which one can make changes in a file(s) or a structure is a bit dependent</description>
      <pubDate>Sun, 18 Jul 2021 00:00:00 UTC</pubDate>
      <content>## Introduction&#xA;Editing/ Writing is a crucial part of any text editor, the swiftness with which one can make changes in a file(s) or a structure is a bit dependent on the editor we use as well. Vim can be a bit hassle for beginners in the initial stage but it becomes second nature soon. It also depends majorly on the level of usage and the area of usage. If you are an advanced programmer, you will use these commands extensively, and might not be a big deal for you, But for a beginner, it might take some time to get used to the workflow in Vim.&#xA;&#xA;This article won&#39;t be covering the basics of editing in Vim but the ways to save time on the basic level commands to improve efficiency. That being said, beginners can also read to be aware of the possibilities and tricks in Vim. &#xA;&#xA;Below are some quick basic commands for getting started in editing and improving the overall workflow. &#xA;&#xA;### Entering in into the Insert Mode&#xA;&#xA;`i`  -&gt; Enter into **Insert mode** from the cursor.&#xA;&#xA;`I`   -&gt; Enter into **Insert mode** from the beginning of the current line.&#xA;&#xA;`a`   -&gt; Enter into **Insert mode**  just after the cursor.&#xA;&#xA;`A`   -&gt; Enter into **Insert mode** from the end of the current line.&#xA;&#xA;`o`   -&gt; Enter into **Insert mode** below the current line.&#xA;&#xA;`O`   -&gt; Enter the **Insert mode** above the current line.&#xA;&#xA;The above commands are purely to add text in the line or above or below the line without deleting anything.&#xA;&#xA;The following commands will delete some characters/words/lines and then entering into the Insert mode.&#xA;&#xA;`s`   -&gt; delete the character under the cursor and enter into **Insert mode**.&#xA;&#xA;`cw`  -&gt; Delete the word under cursor and enter into **Insert mode**.&#xA;&#xA;`S`   -&gt; delete the entire line under the cursor and enter into **Insert mode**.&#xA;&#xA;&#xA;## Cut Copy Paste Commands&#xA;&#xA;This set of commands are quite 
helpful for a programmer and is used quite frequently. These commands not only cut down the time spent on cut-copy-paste but also provide more customization in the way you do these tasks.&#xA;&#xA;### Cut/Delete Commands&#xA;&#xA;The commands below can be used in Normal mode or in any Visual/Selection mode, depending on whether you have selected text or want to work in Normal mode itself. &#xA;&#xA;`dd`  -&gt; Delete the entire current line.&#xA;&#xA;`dw`  -&gt; Delete from the cursor to the start of the next word.&#xA;&#xA;`d0` -&gt; Delete from the current cursor position to the beginning of the line.&#xA;&#xA;`D` or `d$` -&gt; Delete from the current cursor position to the end of the line.&#xA;&#xA;`d`  -&gt; Delete the selected text (only in Visual/Visual-Line/Visual-Block/Selection mode).&#xA;&#xA;`x`  -&gt; Delete the character under the cursor.&#xA;&#xA;### Yank/Copy Commands&#xA;&#xA;The following commands are used in Normal mode, as they copy text by words and lines only.&#xA;&#xA;`yw` -&gt; Yank (copy) from the cursor to the end of the word.&#xA;&#xA;`y$`  -&gt; Yank (copy) the line from the current cursor position to the end.&#xA;&#xA;`yy`  -&gt; Yank (copy) the entire current line to the unnamed register (`&#34;&#34;`).&#xA;&#xA;You have to be in Visual/Visual-Line/Selection mode for the next set of yanking commands. 
&#xA;&#xA;`y`  -&gt; Yank (copy) the selected text to the unnamed register (`&#34;&#34;`).&#xA;&#xA;`&#34;+y` -&gt; Yank (copy) the selected text to the system clipboard (the `&#34;+` register).&#xA;&#xA;### Paste Commands&#xA;&#xA;`p`   -&gt; Paste the content of the unnamed register (`&#34;&#34;`) after the cursor.&#xA;&#xA;`P`   -&gt; Paste the content of the unnamed register (`&#34;&#34;`) before the cursor.&#xA;&#xA;`&#34;+p`  -&gt; Paste the content of the system clipboard (the `&#34;+` register) at the cursor.&#xA;&#xA;&#xA;### Replacing Text&#xA;&#xA;Replacing is a great option for instant productivity: if you want to make some minor changes, you don&#39;t have to go into Insert mode, delete, and then edit the text. Instead, the replace commands `r` and `R` let us replace characters from Normal mode and Replace mode respectively. This is very handy when you just want to swap characters instead of adding/removing text.&#xA;&#xA;`r`   -&gt; Replace the character under the cursor with the next key pressed.&#xA;&#xA;`R`   -&gt; Enter **Replace mode** (each typed character replaces the one under the cursor). &#xA;&#xA;### Undoing and Redoing&#xA;&#xA;We often make mistakes and want to revert the changes we have made and start again from the last good state. The following commands let us do exactly that.&#xA;&#xA;`u`   -&gt; Undo the last change.&#xA;&#xA;`U` -&gt; Undo all recent changes on the last changed line.&#xA;&#xA;`&lt;C-R&gt;`   -&gt; Redo the last undo (revert the undone change).&#xA;&#xA;&#xA;### Search and Replacement&#xA;&#xA;Some of the below-mentioned commands are a great set for bulk replacement and addition. 
We can either replace a particular pattern in the entire file or only in specific parts of it, as follows:&#xA;&#xA;`:%s/old/new`  -&gt; Replace the first occurrence of `old` with `new` on every line of the file (add the `g` flag, as in `:%s/old/new/g`, to replace every occurrence).&#xA;&#xA;`:4s/old/new`  -&gt; Replace the word `old` with `new` on line 4 (where 4 can be any line number in the file).&#xA;&#xA;`:2,9s/old/new`  -&gt; Replace the word `old` with `new` between lines 2 and 9 inclusive (where 2 and 9 can be any line numbers in the file).&#xA;&#xA;`:%s/^/This`  -&gt; Add `This` to the beginning of each line in the file.&#xA;&#xA;`:%s/$/That`  -&gt; Append the word `That` to the end of each line in the file.&#xA;&#xA;You can notice that the `%` symbol here stands for the entire file. We can drop the `%` prefix to make changes only on the current line, or give a line range instead. This command is highly customizable and powerful; I can&#39;t show every combination, so it should be adapted as each requirement demands.&#xA;&#xA;&#xA;### Indenting Text&#xA;&#xA;Indenting is quite important in certain languages like Python, YAML, and Haskell. It can get really frustrating if you miss even a single indentation and have to reformat everything from scratch. Thankfully, powerful text editors and IDEs have made indentation quite easy and can even auto-correct wrong indentation. Vim has commands to make those indentations much easier, and the number of spaces is customizable in its config file, vimrc.&#xA;&#xA;`&gt;&gt;`  -&gt; Indent or shift the current line to the right. (Normal mode)&#xA;&#xA;`&lt;&lt;`  -&gt; Unindent or shift the current line to the left. (Normal mode)&#xA;&#xA;`&gt;`   -&gt; Indent or shift the selected text to the right. (Visual/Visual-Line/Visual-Block/Select mode)&#xA;&#xA;`&lt;`  -&gt; Unindent or shift the selected text to the left. 
(Visual/Visual-Line/Visual-Block/Select mode)&#xA;&#xA;If your file is saved with a particular language&#39;s extension, Vim will often auto-indent lines for you, but this is not always reliable. Plugins such as [IndentLine](https://github.com/Yggdroot/indentLine) and others can help here. &#xA;&#xA;## Miscellaneous&#xA;&#xA;`~` -&gt; Toggle the case of the character under the cursor.&#xA;&#xA;`vip` -&gt; Visually select an entire paragraph (up to an empty line), ready to be yanked with `y`.&#xA;&#xA;`gu` -&gt; Convert the selected text to lowercase. (Visual/Select mode)&#xA;&#xA;`gU` -&gt; Convert the selected text to uppercase. (Visual/Select mode)&#xA;&#xA;&#xA;&#xA;All of the above commands are somewhat basic and commonly used, but combined with other movement shortcuts they can improve editing speed quite considerably. Just keep using these commands and you&#39;ll be amazed by the speed you develop. Thanks for reading. Happy Coding :)&#xA;&#xA;References:  [catswhocode](https://catswhocode.com/vim-commands/)  [The valuable dev](https://thevaluable.dev/vim-advanced/) [rtorr.com](https://vim.rtorr.com/)</content>
      <type></type>
    </item>
    <item>
      <title>Cryptocurrency Price Scrapping using BASH and Coingecko API</title>
      <link>https://www.meetgor.com/posts/bash-crypto-scrapper</link>
      <description>Introduction Cryptocurrency is in such a hype that it is attracting even nerds and terminal fanboys, it is quite comfortable for such people to view the price i</description>
      <pubDate>Thu, 15 Jul 2021 00:00:00 UTC</pubDate>
      <content>## Introduction&#xA;Cryptocurrency is in such a hype that it is attracting even nerds and terminal fanboys, it is quite comfortable for such people to view the price inside of their terminal, also for normal people to learn about how to interact with an API from the local machine. We are gonna make a script about 20 lines in BASH to extract data from the  [coingecko cryptocurrency API](https://www.coingecko.com/en/api#explore-api) and some tools such as grep and sed. That being said let&#39;s start scripting.&#xA;&#xA;&#xA;![crypsh.gif](https://cdn.hashnode.com/res/hashnode/image/upload/v1626354050031/0BRlM7tfs.gif)&#xA;&#xA;This is how the script will work like. It is not a fancy script just some basic commands to extract data from the API.&#xA;&#xA;## Getting familiar with the API &#xA;This is quite an important step as this will decide what type of data we will get from it.  So head out to  [coingecko cryptocurrency API](https://www.coingecko.com/en/api#explore-api) and start exploring some things yourselves. I am gonna cover everything you need for the script but just for your knowledge and if you are really into crypto. &#xA;&#xA;![coing.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1626336264054/dFqhnBjFl.png)&#xA;&#xA;Hopefully, you should see the following screen and after clicking on &#34;Get Started&#34;, you will see a list of the API endpoints. After playing with it for several hours, I found the perfect and precise endpoint to achieve our target. Many o them work well but the problem with many of them was the precision of the price, there it could not show a reliable price for some low-valued coin/tokens. There are tons of options to choose from, but every option has its own advantage and disadvantage. &#xA;&#xA;Based on my tiny experience, the best fit for the required task will be the &#34;Get historical market data to include price, market cap, and 24h volume (granularity auto)&#34; option in the coin section. 
It gives the price with very nice precision, and in any national currency as well. The option looks like this:&#xA;&#xA;![coing.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1626338409604/TPxpUAJ91.png)&#xA;&#xA;If you click on that option, you will get a **&#34;Try it out&#34;** button; click on it and you can now pass data to the API and get back a JSON response like the text below. &#xA;&#xA;![coing.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1626340029764/-hVg5t0iv.png)&#xA;&#xA;### Filling in the coin id&#xA;You need the **id**, which is the code of the cryptocurrency on coingecko. A few ids of famous cryptocurrencies are `bitcoin`, `ethereum`, `dogecoin`, `shiba-inu`, and `basic-attention-token`; if you want more coins, you can check the coingecko API coins list  [here](https://api.coingecko.com/api/v3/coins/list) and search it with your browser&#39;s find shortcut (F3 or Ctrl+F) for the name of the coin you require. &#xA;&#xA;### Filling in the currency &#xA;We need a currency to display the price of the selected coin, i.e. its equivalent price in that currency. Every national currency has a code attached to it: for example, USD is the code for the US Dollar, INR for the Indian Rupee, EUR for the Euro, AUD for the Australian Dollar, and so on. You can find the full list [here](https://www.iban.com/currency-codes).&#xA;&#xA;### The number of days&#xA;We also need the number of days, which asks for the price of the coin that many days ago. Since we want today&#39;s latest price, we enter 0. You can pick any number of days you wish, but for the current price we use 0. 
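
By the way, you don't have to leave the terminal to look up coin ids at all. Here is a minimal offline sketch using grep; the JSON sample below is made up, but mirrors the shape of the coins-list response:

```bash
#!/bin/bash
# made-up sample mirroring the shape of the coins-list response
sample='[{"id":"bitcoin","symbol":"btc","name":"Bitcoin"},{"id":"shiba-inu","symbol":"shib","name":"Shiba Inu"}]'

# pull out just the id fields with grep
echo "$sample" | grep -o '"id":"[a-z0-9-]*"'
```

In practice you would point grep at the downloaded coins-list file instead of the `$sample` variable, then pick out the id you need.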
&#xA;&#xA;So your request should look somewhat like this for a particular coin-currency pair:&#xA;&#xA;![coing.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1626340121389/YkRdtMm-o.png)&#xA;&#xA;After filling in your favorite coin-currency pair, you can click on **&#34;Execute&#34;**. You will get a list of prices, volume, market cap, and all of that stuff. It should look somewhat like this:&#xA;&#xA;![coing.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1626340386948/QepcJHtUV.png)&#xA;&#xA;If you look carefully and are aware of the current price, the second item in the prices object is the actual price of ethereum at some point in time. We need to extract that using tools and utilities in BASH. They have also provided a cURL command to fetch the API, how nice of Coingecko :)&#xA;&#xA;So, that was the introduction to the Coingecko API. You can now explore various endpoints, what they respond with, and what inputs they take. Many of them do not give a precise enough price, so check for that as well. &#xA;&#xA;&#xA;## Fetching data from the API &#xA;&#xA;Now we&#39;ll finally start coding. First we&#39;ll need input from the user: the coin id, the currency code, and optionally the number of days. We read the input using the read command with a prompt argument. &#xA;```bash&#xA;#!/bin/bash&#xA;&#xA;read -p &#34;Enter the coin name: &#34; coin&#xA;read -p &#34;Enter your national currency: &#34; crncy&#xA;days=0&#xA;#read -p &#34;Enter the number of days before today to get its price: &#34; days&#xA;&#xA;```&#xA;I am not asking for the number of days since that&#39;s not our mission here, but I have left that read commented out in case you want the price from some days ago. The day value is hardcoded to 0, meaning the current day. &#xA;&#xA;Now we need the golden piece of this script, the cURL command. The command is just pre-written for us XD. But hey, we need to modify it a bit. 
We need to store the output of the cURL command in a file so we can process it further. We could pipe the commands together and avoid files, but that would make the command quite big, so I just use files to handle and trim the output to what I need. &#xA;&#xA;This is the default command that coingecko gave us:&#xA;```bash&#xA;#!/bin/bash&#xA;curl -X &#39;GET&#39; \&#xA;  &#39;https://api.coingecko.com/api/v3/coins/ethereum/market_chart?vs_currency=usd&amp;days=0&#39; \&#xA;  -H &#39;accept: application/json&#39;&#xA;```&#xA;We send a GET request to the API endpoint with the provided URL and accept the response as JSON. The -X option specifies the HTTP method: GET, POST, or any other. This is a REST API, and we do not want to mess up anything by posting, deleting, or updating; we only want to GET data from it. The -H option adds a request header, in this case telling the server we accept a JSON response. &#xA;&#xA;We&#39;ll change the default command to this: &#xA;```bash&#xA;curl -o temp.json -X &#39;GET&#39; \&#xA;  &#39;https://api.coingecko.com/api/v3/coins/&#39;$coin&#39;/market_chart?vs_currency=&#39;$crncy&#39;&amp;days=&#39;$days&#39;&#39; \&#xA;  -H &#39;accept: application/json&#39; &amp;&gt; /dev/null&#xA;```&#xA;We need to make the URL dynamic: the coin name and the currency should come from the variables created before. We place `&#39;$variable&#39;` between the single-quoted pieces of the URL to embed each variable&#39;s value, replacing `ethereum` with `&#39;$coin&#39;`, the currency with `&#39;$crncy&#39;`, and the same for the days. We store the output in the temp.json file using -o, which writes the result of the cURL command to the specified file. Adding `&amp;&gt; /dev/null` is optional; it just discards cURL&#39;s progress output so things look neater. 
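
The quoting in that URL can look cryptic at first, so here is a tiny standalone sketch of how the single-quoted fragments and the bare variables glue together into one string (the values are hypothetical, and the `days` parameter is left out just to keep the sketch short):

```bash
#!/bin/bash
# hypothetical values, just to demonstrate the interpolation
coin=ethereum
crncy=usd

# single-quoted fragments with bare $variables in between concatenate into one string
url='https://api.coingecko.com/api/v3/coins/'$coin'/market_chart?vs_currency='$crncy
echo "$url"
# prints https://api.coingecko.com/api/v3/coins/ethereum/market_chart?vs_currency=usd
```

Shell concatenation is what makes this work: adjacent quoted and unquoted pieces with no spaces between them become a single word.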
&#xA;&#xA;And that is it: we have obtained the JSON response and stored it in a file, so we have access to the data and just need to process the file.&#xA;&#xA;&#xA;## Editing the response JSON file&#xA;&#xA;Now we start the actual editing and pattern finding in BASH. We have a plethora of tools to use, like grep, sed, and awk, but I&#39;ll use only grep to keep things simple. First, let&#39;s look at the output (the JSON file) again. &#xA;&#xA;&#xA;![JSON file response](https://cdn.hashnode.com/res/hashnode/image/upload/v1626345122090/ERfxLQPX-.png)&#xA;&#xA;But the file stored locally has everything on a single line, like this:&#xA;&#xA;&#xA;![coing.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1626349631809/B0y8F9ruE.png)&#xA;&#xA;So, we just need the numbers between `,` and `]],&#34;market_caps&#34;`, right? &#xA;&#xA;OR&#xA;&#xA;The text between `,` and `]],&#34;m`.&#xA;&#xA;It is quite simple to extract the required value using grep with a Perl regular expression: &#xA;```bash&#xA;grep -o -P &#39;(?&lt;=,).*(?=]],&#34;m)&#39; temp.json &gt; price.txt&#xA;&#xA;```&#xA;This finds everything between `,` and `]],&#34;m` in the temp.json file and stores the output in the price.txt file. That simple, and we have the current price of the coin, in the provided currency, in price.txt.&#xA;&#xA;Now that we have the result, we need to store the price in a variable for further use and simplicity.&#xA;We&#39;ll use a while loop that iterates over the file until the end of the file, storing the value in a variable as follows:&#xA;```bash&#xA;while read val&#xA;do&#xA;&#x9;p=$val&#xA;done &lt; price.txt&#xA;```&#xA;The value now sits in the `p` variable. But we are not done yet, because certain coins with quite low values are displayed in scientific format. 
We&#39;ll tackle this in the next section.&#xA;&#xA;## Converting the price from scientific notation to decimal&#xA;&#xA;If you print the values of coins with a pretty low value, like `shiba-inu` or `baby-doge-coin` or any other coin worth less than a penny, the value is expressed in scientific notation, e.g. `1.998e-5`, which is `0.00001998`.&#xA;&#xA;![coing.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1626351434484/3GeTI-kn0.png)&#xA;&#xA;This is not good-looking, is it? &#xA;```bash&#xA;price=`printf &#34;%.15f&#34; $p`&#xA;```&#xA;The above command works much like C&#39;s printf. We print the value of the `p` variable with a precision of 15 decimal places, which is enough for any seriously small-valued coin.&#xA;&#xA;![coing.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1626351561755/uPyHXMhd7.png)&#xA;&#xA;Now that is much better. That is it! How simple was that? BASH really has some powerful commands and tools.&#xA;&#xA;## Printing the price &#xA;&#xA;Finally, we need to print the output, and to keep it simple, we print using the echo command.&#xA;```bash&#xA;echo &#34;The value of $coin in $crncy is = $price&#34;&#xA;```&#xA;&#xA;![coing.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1626353654753/ajVMws8d6.png)&#xA;&#xA;&#xA;## Script&#xA;```bash&#xA;#!/bin/bash&#xA;&#xA;read -p &#34;Enter the coin name : &#34; coin&#xA;read -p &#34;Enter your national currency : &#34; crncy&#xA;days=0&#xA;#read -p &#34;Enter the number of days past today: &#34; days&#xA;touch temp.json price.txt&#xA;&#xA;curl -o temp.json -X &#39;GET&#39; \&#xA;  &#39;https://api.coingecko.com/api/v3/coins/&#39;$coin&#39;/market_chart?vs_currency=&#39;$crncy&#39;&amp;days=&#39;$days&#39;&#39; \&#xA;  -H &#39;accept: application/json&#39; &amp;&gt; /dev/null&#xA;&#xA;grep -o -P &#39;(?&lt;=,).*(?=]],&#34;m)&#39; temp.json &gt; price.txt&#xA;&#xA;while read val&#xA;do&#xA;&#x9;p=$val&#xA;done &lt; price.txt&#xA;&#xA;price=`printf &#34;%.15f&#34; $p`&#xA;&#xA;echo &#34;The value of $coin in $crncy is = $price&#34;&#xA;rm temp.json&#xA;&#xA;```&#xA;&#xA;That was it. I hope you liked it and learned something along with me. BASH truly has the potential to do a whole lot of stuff. This was just one of many examples of gathering or scraping data from the web using various tools and utilities. Thank you for reading. Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>BASH Scripting Guide - PART - 3</title>
      <link>https://www.meetgor.com/posts/bash-guide-p3</link>
      <description>Bash or shell won&#39;t be much popular and powerful if they didn&#39;t have some tools and utilities baked in. But even further they are supported natively in Bash, wh</description>
      <pubDate>Mon, 12 Jul 2021 00:00:00 UTC</pubDate>
      <content>Bash or shell won&#39;t be much popular and powerful if they didn&#39;t have some tools and utilities baked in. But even further they are supported natively in Bash, which just makes every task and challenge quite hassle-free to deal with. In this part of the series, I&#39;ll try to cover an overview of some quite powerful and robust tools and utilities in Bash(shell in general) and also some of the advanced topics like dictionaries and positional parameters. Enough talk let&#39;s dive in.&#xA;&#xA;The topics to be covered in this part include the following:&#xA;&#xA;- Hash tables/dictionaries in BASH&#xA;- Positional parameters&#xA;- Aliases in BASH&#xA;- Some Tools and utilities&#xA;    - grep/sed/awk&#xA;    - cat/tac/head/tail&#xA;    - cURL&#xA;    - find&#xA;    - bc&#xA;    - wc&#xA;&#xA;&#xA;## Bash dictionaries&#xA;Bash dictionaries or hash tables are just like any other hash tables or keymaps in other programming languages. Bash dictionaries are quite similar to arrays but they have a key instead of the index(0,1,2...) and a value just like arrays. This can be quite useful for storing passwords with emails or usernames or any other way in which a value can be accessed only via a unique key. &#xA;&#xA;To declare a dictionary/ hash table, we can simply write `declare -A name`, this will declare an empty hash map for us. Further, we can populate the hash map with keys and values using the same syntax as of array just instead of numbers we can also have strings. 
&#xA;&#xA;```bash&#xA;#!/bin/bash&#xA;&#xA;declare -A fruits&#xA;fruits[&#34;apple&#34;]=&#34;red&#34;&#xA;fruits[&#34;mango&#34;]=&#34;yellow&#34;&#xA;fruits[&#34;grapes&#34;]=&#34;green&#34;&#xA;&#xA;read -p &#34;Enter the name of fruit : &#34; name &#xA;echo &#34;The fruit is $name and its color is ${fruits[$name]} &#34;&#xA;&#xA;```&#xA;&#xA;![possh.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1626167875237/A2TxYPNoS.png)&#xA;&#xA;The above example depicts how to declare, define, and access key-value pairs in a dictionary. The example may look silly, but you get the idea. We can also access every key or value using the `@` variable and get the number of key-value pairs using the `#` variable, just like with an array. &#xA;```bash&#xA;#!/bin/bash&#xA;&#xA;declare -A fruits&#xA;fruits[&#34;apple&#34;]=&#34;red&#34;&#xA;fruits[&#34;mango&#34;]=&#34;yellow&#34;&#xA;fruits[&#34;grapes&#34;]=&#34;green&#34;&#xA;&#xA;for i in &#34;${!fruits[@]}&#34;;&#xA;do&#xA;&#x9;echo &#34;$i : ${fruits[$i]}&#34;&#xA;done&#xA;&#xA;echo &#34;There are ${#fruits[@]} key-value pairs.&#34;&#xA;```&#xA;&#xA;![possh.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1626171398570/14jJl2eFs.png)&#xA;&#xA;OK, this is a little tricky: these are the same variables as for arrays, just used slightly differently. First, in the range-based for loop `${!fruits[@]}`, focus on the `!` before the hash table name; it expands to the hash map&#39;s keys and is how we access every key. The `#` at the beginning of the name in `${#fruits[@]}` gives the number of key-value pairs (we can also write `${#fruits[*]}`). Note from the output that the pairs do not come back in insertion order; bash does not guarantee any particular iteration order for associative arrays. Inside the for loop, `i` is the key, and `${fruits[$i]}` is the value for that key. 
These expansions are quite a neat little feature that can come in handy a lot of times.&#xA;&#xA;If you want to delete or add key-value pairs, you can do that with the following commands:&#xA;```bash&#xA;#!/bin/bash&#xA;&#xA;declare -A fruits&#xA;fruits[&#34;apple&#34;]=&#34;red&#34;&#xA;fruits[&#34;mango&#34;]=&#34;yellow&#34;&#xA;fruits[&#34;grapes&#34;]=&#34;green&#34;&#xA;&#xA;for i in &#34;${!fruits[@]}&#34;;&#xA;do&#xA;&#x9;echo &#34;$i : ${fruits[$i]}&#34;&#xA;done&#xA;echo &#34;There are ${#fruits[@]} key-value pairs.&#34;&#xA;&#xA;unset fruits[&#34;mango&#34;] &#xA;echo &#34;There are ${#fruits[@]} key-value pairs.&#34;&#xA;fruits[&#34;strawberry&#34;]=&#34;pink&#34;&#xA;&#xA;for i in &#34;${!fruits[@]}&#34;;&#xA;do&#xA;&#x9;echo &#34;$i : ${fruits[$i]}&#34;&#xA;done&#xA;echo &#34;There are ${#fruits[@]} key-value pairs.&#34;&#xA;&#xA;```&#xA;&#xA;![possh.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1626172120966/yCeXpaM9w.png)&#xA;&#xA;The above code example is a bit longer but easy enough to understand. We delete a key-value pair using the unset command, passing the hash map name along with the key, and we create a new key-value pair with a simple assignment, as shown above. This was a basic overview of hash maps/dictionaries in BASH.&#xA;&#xA;##  Positional parameters&#xA;&#xA;We often read user input from within the script, but there is another way: passing parameters from outside the script using positional parameters. 
It basically allows us to pass arguments or parameters from the command prompt/shell, and inside the script we can access them via positional parameters ($1, $2, $3 ... $9, ${10}, and so on).&#xA;&#xA;```bash&#xA;#!/bin/bash&#xA;&#xA;echo &#34;first parameter : &#34; $1&#xA;echo &#34;second parameter : &#34; $2&#xA;echo &#34;eleventh parameter : &#34; ${11}&#xA;&#xA;```&#xA;![possh.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1626159559317/XSyVwkC9U.png)&#xA;&#xA;You can see from the above script that we passed the parameters on the command line just after typing the filename. The positional parameter $0 is the file name itself; from 1 onwards, the parameters are optional and only needed if your script requires input to work with. The numbers here are just for demonstration. For double-digit parameters, from 10 onwards, you need to wrap the number in curly braces, as in `${number}`, because `$10` would otherwise be interpreted as `$1` followed by a literal `0`. &#xA;&#xA;The parameters are passed as a space-separated list, and we can pass any relevant information such as strings, numbers, or file names. &#xA;&#xA;If we want to access all the parameters passed to the script, we can use the `@` variable. You may know this symbol from the array section of part II, where it is used to access every element of an array; here it accesses every parameter passed to the script, behaving just like a list of values.&#xA;&#xA;```bash&#xA;#!/bin/bash&#xA;&#xA;echo &#34;The parameters passed are: &#34; $@&#xA;&#xA;```&#xA;&#xA;![possh.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1626160205206/eH6BD1r_Yu.png)&#xA; &#xA;To get the number of parameters passed to the script, we can use the `#` variable. 
This is the same variable used in the array section to get the number of elements in an array; in this case it gives the number of parameters in the list.&#xA;&#xA;```bash&#xA;#!/bin/bash&#xA;&#xA;echo &#34;The number of parameters passed is: &#34; $#&#xA;&#xA;```&#xA;![possh.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1626160630991/jVvJqtCqs.png)&#xA;&#xA;Positional parameters let the script avoid taking explicit input from the user. They may not matter for simple scripts, but for administration purposes, admins and users who know what the script needs can pass in arguments without the script having to design an input system.&#xA;&#xA;&#xA;## Bash aliases&#xA;&#xA;Bash aliases are a great way of shortening commands, making it much easier to type and work with scripts or any development-related work. Aliases are typically defined in your `.bashrc` file (or in a separate `.bash_aliases` file sourced from it); each alias maps a shortcut to a longer command. &#xA;&#xA;Let&#39;s see what an alias is first, and then we&#39;ll see how to set it up.&#xA;```bash&#xA;alias cdc=&#39;cd C:/Users/acer/Desktop/New\ folder/Code/&#39;&#xA;```&#xA;This makes it possible to just type cdc and land in that directory instead of typing all of the jargon. The name on the left of the `=` (here `cdc`) is the shortcut, and the quoted command on the right is what it expands to.&#xA;&#xA;![shalias.gif](https://cdn.hashnode.com/res/hashnode/image/upload/v1626163036098/lDwlSdiry.gif)&#xA;&#xA;This particular path will definitely differ on your machine and OS, as the filesystems are different in each major operating system. We can quickly make other such aliases, or shortcuts so to speak, to make the development process faster and more efficient. &#xA;&#xA;Now let us see how to set up this environment for bash aliases; it&#39;s quite straightforward. 
You need to create a hidden file named &#34;bashrc&#34;, i.e. the file name will be `.bashrc`. This file has to be in your home directory (the folder in which your bash interpreter opens by default); I do not mean the `root` (`/`) directory of Linux. Once you have created the file, put any alias in it and source the file using the command:&#xA;```bash&#xA;source .bashrc&#xA;```&#xA;And that would do the trick; you can now test your macro or shortcut by opening a new instance of the terminal. If this doesn&#39;t work for you, then you can check  [this article](https://opensource.com/article/19/7/bash-aliases)  for a broader understanding of the setup.&#xA;&#xA;## Bash tools &amp; utilities  &#xA;&#xA;What would you call BASH without grep or sed, man! It&#39;s a sad life:( BASH comes with some absolutely powerful and handy tools and utilities such as grep, sed, awk, at, wc, find, tar, gzip, which, make, ping, cURL, wget, ssh, .... my word, there is an unstoppably long list. They really are quite important and lay the foundation for some quite complex tasks; many server workflows would break if some of these tools went missing. Let us find out why they are so powerful.&#xA;&#xA;### grep&#xA;GREP, or global regular expression print, is a tool that can find patterns using regular expressions in files, strings, or any other piece of data. It&#39;s not just for printing or searching text: by providing the right arguments, its output can be customized and redirected to a file or captured in a variable. Grep also supports Perl-compatible regular expressions (via the -P flag). There are a lot of customization options and arguments available in grep, which makes it an irreplaceable tool for some complex tasks. 
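
As a quick sketch (the file name and its contents here are made up purely for illustration), a basic grep invocation looks like this:

```bash
# create a small sample file, then search it for a pattern
printf 'some text\nsome more text here\n' > file.txt
grep "more text" file.txt
```

grep prints each line of the file that contains the pattern.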
&#xA;&#xA;![possh.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1626179054053/9ttkV-MZZ.png)&#xA;&#xA;The above code finds the pattern &#34;more text&#34; in the specified file and prints the matching line to the screen, but we can modify the output we want, extract the output into a file, and do all kinds of wizardry with this tool. This is just a basic, get-started example, but trust me, it&#39;s more than you think; this tool is widely used for web scraping and pattern matching in quite a lot of use cases.&#xA;&#xA;&#xA;### sed&#xA;SED, or stream editor, is another beast in BASH&#39;s toolkit; it is a great tool but still underrated. It can actually edit text inside the terminal: no graphical environment, no interface at all, just commands, yet it can do what a huge text editor can&#39;t! It saves time, editing text without opening anything except a terminal, and it becomes unbeatable on large files. This is surely a tiny little application that can skyrocket the efficiency of the development process. &#xA;&#xA;![possh.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1626179410079/BkvdAqkfDS.png)&#xA;&#xA;The above example replaces the word `more` with `less` using sed&#39;s substitute command. The trailing `g` flag replaces every occurrence on each line; a number `n` in its place replaces only the nth occurrence on a line, and prefixing a line address (for example `3s/more/less/`) limits the substitution to that line only.&#xA;This is again a basic example of a sed command; there is much more if you go deeper. It&#39;s more than a tool, it&#39;s kind of a text-editor for wizards ;) 
It has great support libraries and functions that can even perform complex mathematical and scientific operations.&#xA;&#xA;![possh.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1626180322400/hWkEVhPl_.png)&#xA;&#xA;These are topics for separate articles, because it is impossible to explain everything at once.&#xA;&#xA;### cat / tac / head / tail&#xA;&#xA;CAT, or concatenate, is a tool used for printing files, creating files, merging files, and a plethora of other stuff. This command is generally used for printing a file, but there is more to it, like creating a file directly in the terminal, merging two files, and a ton of other operations. &#xA;&#xA;TAC, the reverse of CAT, is a tool that does everything CAT can do but with the lines in reverse order:) This is a niche tool but still quite useful sometimes.&#xA;&#xA;Head is a tool that prints the first 10 lines of a file by default, and it can work across multiple files at once. &#xA;Tail is a tool that prints the last 10 lines of a file by default; it can be used just like head but for the last few lines.&#xA;&#xA;![possh.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1626180451092/Z5VUpIxCm.png)&#xA;&#xA;It turns out, you can print not only the first or last 10 lines but any n lines by passing -n as an argument; there is a ton of stuff to discover, and this just drives me crazy.&#xA;&#xA;### cURL&#xA;cURL, or client URL, is a tool that can be used to transfer data via various network protocols. You might not believe it, but it is used in cars, televisions, routers, and other embedded systems for exchanging relevant data via appropriate protocols. 
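
As a minimal sketch of fetching data with cURL (a local file:// URL is used here so the example runs offline; a regular https:// URL works the same way, and the file path is illustrative):

```bash
# write a tiny JSON document, then fetch it back with curl
printf '{"name": "bash"}\n' > /tmp/demo.json
curl -s file:///tmp/demo.json
```

The -s flag silences curl's progress output, leaving only the fetched data on stdout.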
&#xA;&#xA;![possh.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1626181263235/JPocJwoEd.png)&#xA;&#xA;This example depicts how we can fetch data from an API using cURL, extract the data in JSON format, and use it for relevant tasks.&#xA;This is again one of the best utilities out there. Despite being over two decades old, it shines bright in the tech world.&#xA;&#xA;### find&#xA;Find, as the name suggests, is used to find files among the folders and directories in a file system. It becomes quite helpful in complex projects where the directory structure is deep and large. &#xA;![possh.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1626181386566/zpP9Yaom4.png)&#xA;&#xA;The command `find *.txt` finds every txt file in the current directory (the shell expands the glob); the more general form `find . -name &#34;*.txt&#34;` searches the whole directory tree. As simple as it can get. This surely looks trivial, but it finds its glory in large and complicated codebases. &#xA;&#xA;### bc&#xA;bc, or basic calculator, is a utility tool for performing mathematical and arithmetic operations in the terminal. This command integrates really well with other commands such as awk, and it can be used to further extend the limits of what command-line development can do.&#xA;&#xA;&#xA;![possh.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1626182601487/z8X4KeDGG.png)&#xA;&#xA;AWW! I could hear the excitement. That just added new dimensions to BASH. Creativity is the only limit here. I am using  [REPL.IT](http://repl.it/)  here for bash, as I do not have it on my windows machine :( But that command is truly insane.&#xA;&#xA;### wc&#xA;wc, or word count, is a utility tool for counting and analyzing the number of characters, words, or lines in a given file. 
This is quite a handy tool for monitoring and keeping track of a system, and also for general development purposes.&#xA;&#xA;![possh.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1626182319602/r8UidHV2z.png)&#xA;&#xA;The above command prints out the word and line counts for the provided file. The `wc` command can even compute the size of files and other properties.&#xA;Those were some of the quite powerful commands, tools, and utilities in BASH/shell. There are plenty of other commands not covered here because this is an extremely large topic; even with separate articles or resources, some things will certainly get missed out. That&#39;s the beauty of Linux, and of Computer Science in general.&#xA;Ok, that was a lot, but I hope you got some insights for learning more BASH or Linux in general. This is a wide topic and can&#39;t be covered entirely in a single article. &#xA;&#xA;Now that is it from this part. Everything cannot be covered in any number of parts, but at least this will help someone get started with BASH scripting and using it for development. Have a Blast learning BASH. Happy Coding :)</content>
      <type></type>
    </item>
    <item>
      <title>Learning BASH by making a Number game</title>
      <link>https://www.meetgor.com/posts/bash-game-numberjack</link>
      <description>Introduction OK! Learning BASH can be quite confusing without a proper goal in hand. So this will be a pretty good idea to start learning BASH and have a ton of</description>
      <pubDate>Thu, 08 Jul 2021 00:00:00 UTC</pubDate>
<content>## Introduction&#xA;&#xA;OK! Learning BASH can be quite confusing without a proper goal in hand. So this will be a pretty good way to start learning BASH and have a ton of fun. In this little time, we&#39;ll make a Number game which I designed myself last year in C++; that version took about 3 months due to lazy research and wasted time. But I was surprised that I made this game within two hours in BASH. You can refer to the game instructions in this  [repository at Github](https://github.com/Mr-Destructive/NumberJack).&#xA;&#xA;## Concept &#xA;The game asks the user for a number between 0 and 9. Then a list of 10 numbers, shuffled in random order, appears in front of the user, along with another list used for indexing the numbers in the array. The user has to select the index beneath their chosen number to proceed. The game loops until the user fails to enter the correct index of the number, or until the 5-second input window runs out. The user gets a point for every successful hit. That is the introduction of the game, so let&#39;s dive into the specifications.&#xA;&#xA;## Specifications of the Game in BASH&#xA;The game is number-based, which means it needs arithmetic operations a lot. In fact, we&#39;ll need a few handy commands such as shuf. We will very frequently use while and for loops to perform tasks such as filling and printing arrays and running the game loop. We&#39;ll use some flag variables to indicate the current situation in the game, and finally some arithmetic on arrays and numbers. &#xA;&#xA;## Script Explanation&#xA;The game is quite simple to understand. You just have to select the index beneath your chosen number within 5 seconds in the shell script. We will create a menu-like display in the terminal with simple echo commands and formatting. 
Before the menu, we will have a while loop that will iterate until the user enters 3 which is stored in variable `ch` which is initialized to 0 in the beginning so as to enter the loop for the first time. A while loop starts with the do statement and ends at the done statement.&#xA;```bash&#xA;while [ condition ];&#xA;do &#xA;# statements&#xA;done&#xA;```&#xA;For loop can be different based on the scenario. We&#39;ll use a range-based for loop to iterate over a range of numbers using the { } operators. For loop also has do as the beginning of the loop and done as the end of the loop.&#xA;&#xA;```bash&#xA;for i in {1..5};&#xA;do &#xA;#statements&#xA;done&#xA;```&#xA;We&#39;ll also use some If-else statements just to check for the correct user input and checking the exit status. The if statements have `then` to start the block and `fi` to end the if block. &#xA;```bash&#xA;if [ condition ];&#xA;then&#xA;    #statements&#xA;elif&#xA;    #statements&#xA;else&#xA;    #statements&#xA;fi&#xA;```&#xA;&#xA;&#xA;We use a read statement with the argument -p to have a prompt to the user for some information on the input. The input of choice from the menu i.e 1 to play, 2 for Instructions, and 3 to exit are stored in the variable `ch`. If the input is 1, the game will start and it will ask the user for the number `n`, which is the number used throughout the loop until the game is over. &#xA;&#xA;Now we have the number for the rest of the game, we need to generate the list for the user to select the number from. We will have a flag sort of to check if the user has entered the correct number which is `c`, this will store 0 for correct input(number x) and 1 for incorrect input. It is initialized with 0, again to enter the while loop once before the generation of numbers. 
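
Putting those pieces together, a minimal sketch of the menu loop might look like this (the choices are piped in only so the sketch runs non-interactively, and the branch bodies are illustrative stand-ins for the real game logic):

```bash
#!/bin/bash
# ch starts at 0 so the while loop is entered at least once
menu() {
    ch=0
    while [ $ch -ne 3 ]; do
        read -p "Enter your choice : " ch
        if [ $ch -eq 1 ]; then
            echo "PLAY"
        elif [ $ch -eq 2 ]; then
            echo "HELP"
        fi
    done
}

# simulate a user pressing 2 (help) and then 3 (exit)
printf '2\n3\n' | menu
```
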
&#xA;&#xA;Next, we need to generate and **shuffle 10 numbers without any repeats**. With plain pseudo-random generation, duplicates could appear, which would be unfair, and the number chosen by the user might not be present at all due to repetition. So to avoid that mischief of pseudo-random numbers, we have to generate 10 distinct numbers, from 0 to 9 in this case. For that, we are gonna use a command in BASH called `shuf`, which can create a permutation of the elements in a list/array or a sequence of numbers in an input stream. We are gonna use `shuf` to generate a random sequence of 10 numbers from 0 to 9 using the command `shuf -i 0-9 -n 10`. &#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1625748675622/1Li6h3_vX.png)&#xA;&#xA;You can see it generated a shuffled list of the numbers between 0 and 9, so there are 10 numbers. We&#39;ll store the result in an array to access and print later. You can refer to  [this](https://www.geeksforgeeks.org/shuf-command-in-linux-with-examples/)  and  [these](https://www.howtoforge.com/linux-shuf-command/)  articles for understanding shuf.  &#xA;&#xA;The main thing is taken care of; now we need to print the list and also print another list to indicate the indices of the numbers to the user. We will print the list without a for loop using the `@` variable. If you are new to BASH and want a brief guide, please do check out my series on  [BASH scripting](https://techstructiveblog.hashnode.dev/series/bash-scripting), I have this all covered. So using `@` we can print the entire array in BASH. &#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1625749273007/hf_y4Fm53.png)&#xA;&#xA;To print the lower list of indices, we&#39;ll use a range-based for loop, i.e. it will iterate over a range (in this case 1 to 10) and assign each element the value of the counter, i.e. from 1 to 10. We are using `r` as the shuffled list and `a` as the indices list. 
And we print this array with the same method.&#xA;&#xA;After the generation and printing of the lists are complete, we&#39;ll take input from the user for the index of his/her number. We&#39;ll use a read argument known as timeout (-t), which cuts off the input stream after the number of seconds provided in the argument. In this case, we will use 5 seconds as the timeout for the input of the index. `read -t 5 -p &#34;Enter the index of your number : &#34; x `&#xA;We&#39;ll store the input in the `x` variable and access it later for verification. &#xA;&#xA;Next, we will check whether the input arrived before the timeout or not. If the user typed input before the timeout, we can proceed ahead, but if the time ran out, read returns an exit status above 128, so we use this as a checker for a timeout on the input. I came across this via this  [article](https://www.linux.org/threads/exit-script-by-timeout-if-delay-of-read-input-in-command-line.15905/), really very helpful. In that case we break the loop and set the flag `c` to 1, indicating improper input, and thus it&#39;ll show &#34;GAME OVER&#34;. But if you were fast enough, then we check whether that index of the shuffled array holds your chosen number; we use `${r[$(($x))-1]} -eq $n` to check for the correct number. Why -1? If you remember, indexing in the array by default starts with 0, and as we have started the second list from 1, every element is offset by 1; to account for that we subtract one to refer to the right index. &#xA;&#xA;If the number at that index was equal and correct, well done, we increment the points counter `p` by one; and if it was incorrect, the flag is set to one as previously said and we break the loop. After coming out of the loop, we check if the status flag `c` was 1; if yes, then we print GAME OVER and display the points earned. And that is it. 
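
A minimal sketch of that timeout check, isolated from the rest of the game: here `sleep` keeps the pipe open without ever sending input, so the read is guaranteed to time out (in the real game, the timeout comes from the player not typing):

```bash
#!/bin/bash
# read -t 1 waits 1 second for input; on timeout its exit status is greater than 128
sleep 2 | {
    read -t 1 -p "Enter the index of your number : " x
    if [ $? -gt 128 ]; then
        echo "GAME OVER (timed out)"
    fi
}
```
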
Let&#39;s take a look at some gameplay :)&#xA;&#xA;![numbjackbash.gif](https://cdn.hashnode.com/res/hashnode/image/upload/v1625753634816/CCUD8OD_K.gif)&#xA;&#xA;## BASH Script&#xA;```bash&#xA;#!/bin/bash&#xA;&#xA;echo -e &#34;&#xA; NumberJack &#xA;&#34;&#xA;ch=0&#xA;while [ $ch -ne 3 ];&#xA;do&#xA;&#x9;echo  &#34;  &#xA;&#x9;&#x9; PLAY : Hit 1 and enter.&#xA;&#x9;&#x9; HELP : Hit 2 and enter.&#xA;&#x9;&#x9; EXIT : Hit 3 and enter.&#xA;&#x9;&#x9; &#34;&#xA;&#xA;&#x9;read -p &#34;Enter your choice : &#34; ch&#xA;&#x9;if [ $ch -eq 1 ];then&#xA;&#x9;x=0&#xA;&#x9;c=0&#xA;&#x9;p=0&#xA;&#x9;read -p &#34;Enter any number between 0 and 9 : &#34; n&#xA;&#x9;while [ $c -eq 0 ];&#xA;&#x9;do&#xA;&#x9;&#x9;x=11&#xA;&#x9;&#x9;r=($(shuf -i 0-9 -n 10))&#xA;&#x9;&#x9;echo &#34;${r[@]} &#34;&#xA;&#x9;&#x9;for i in {1..10};&#xA;&#x9;&#x9;do&#xA;&#x9;&#x9;&#x9;a[$i]=$i&#x9;&#xA;&#x9;&#x9;done&#xA;&#x9;&#x9;echo &#34;${a[@]} &#34;&#xA;&#x9;&#x9;read -t 5 -p &#34;Enter the index of your number : &#34; x&#xA;&#x9;&#x9;if [[ $? -gt 128 ]]; then &#xA;&#x9;&#x9;&#x9;c=1&#xA;&#x9;&#x9;&#x9;break&#xA;&#x9;&#x9;fi&#xA;&#x9;&#x9;if [ ${r[$(($x))-1]} -eq $n ];then&#xA;&#x9;&#x9;&#x9;echo &#34;Great&#34;&#xA;&#x9;&#x9;&#x9;((p=p+1))&#xA;&#x9;&#x9;else&#xA;&#x9;&#x9;&#x9;c=1&#xA;&#x9;&#x9;&#x9;break&#xA;&#x9;&#x9;fi&#xA;&#x9;done&#xA;&#x9;elif [ $ch -eq 2 ];then&#xA;&#x9;&#x9;echo &#34;HELP: INSTRUCTIONS TO PLAY THE GAME. &#34;&#xA;&#x9;else&#xA;&#x9;&#x9;break&#xA;fi&#xA;&#xA;if [ $c -eq 1 ];then&#xA;&#x9;&#x9;&#x9;echo -e &#34;&#xA;GAME OVER&#xA;&#34;&#xA;&#x9;&#x9;&#x9;echo &#34;You scored $p points&#34;&#xA;fi&#xA;&#x9;&#x9;done&#xA;&#xA;```&#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1625753738352/qBDF1PFQG.png)&#xA;&#xA;This is the final bare-bones script without any help instructions just keeping the script simple. I hope you learned something from the game development in BASH. 
This is just a fun little project and a cool way of learning certain concepts in BASH such as loops, conditional statements, and arithmetic. Have FUN. Happy CODING :)</content>
      <type></type>
    </item>
    <item>
      <title>BASH Scripting Guide - PART - 2</title>
      <link>https://www.meetgor.com/posts/bash-guide-p2</link>
      <description>Introduction In this part, topics such as switch cases, mathematical expression, arrays, and strings. This won&#39;t be an in-depth guide to understand each and eve</description>
      <pubDate>Fri, 02 Jul 2021 00:00:00 UTC</pubDate>
<content># Introduction&#xA;&#xA;In this part, we will cover topics such as switch cases, mathematical expressions, arrays, and strings. This won&#39;t be an in-depth guide to understand each and every concept, but it will make you aware of these features in Bash. This also isn&#39;t an absolute beginner guide; I expect you to have some basic programming knowledge such as binary systems and logical and mathematical concepts. Don&#39;t worry, you won&#39;t be bombarded with commands, I&#39;ll just explain with easy examples to get started.&#xA;&#xA;Topics to be covered in this part are as follows:&#xA;- User Input&#xA;&#xA;    - User Prompt&#xA;    - Changing the Delimiter&#xA;    - Password as Input&#xA;    - Limiting the length of Input&#xA;&#xA;- Cases&#xA;&#xA;- Arrays&#xA;    - Declaring and Printing Arrays&#xA;    - Number of elements in an array&#xA;    - Splicing the array&#xA;    - Inserting and Deleting elements&#xA;&#xA;- Strings&#xA;    - Making Substrings&#xA;    - String Concatenation&#xA;    - String Comparison&#xA;- Arithmetic in Bash&#xA;    - Integer Arithmetic &#xA;    - Floating-Point Arithmetic&#xA;&#xA;# User Input&#xA;&#xA;Taking user input in Bash is quite straightforward and quite readable as well. We can make use of the `read` command to take in input from the user. We just specify the variable in which we want to store the input: `read x`. Here, the input will be stored in x. We can also pass certain arguments to the read command, such as -p (prompt with a string), -r (raw input, leaving backslashes uninterpreted), -a (read into an array), and others as well. Each of them lays the foundation for various more complicated tasks. &#xA;&#xA;### User prompt argument&#xA;The -p argument will prompt the user with a string before they input anything. It makes the input step informative and useful, telling the user directly what to do without much hassle. 
Below is the general syntax of passing the argument to the read command.&#xA;&#xA;```bash&#xA;read -p &#34;Enter the number &#34; n&#xA;```&#xA;&#xA;```bash&#xA;#!/bin/bash&#xA;&#xA;read -p &#34;Enter the number &#34; n&#xA;echo &#34;The inputted number was $n&#34;&#xA;```&#xA;![bashs2.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1625118915300/NRF7Ci2rK.png)&#xA;&#xA;In this example, we prompt the user with the string **Enter the number**, which gives the user certain information about what to input. &#xA;&#xA;&#xA;### Changing the delimiter&#xA;Next, depending on the use case, we can change the delimiter used to split the input by setting the IFS variable for the read command.&#xA;&#xA;```bash&#xA;#!/bin/bash&#xA;&#xA;IFS=&#39;/&#39; read -p &#34;Enter the file path : &#34; user project app &#xA;echo $user $project $app&#xA;&#xA;```&#xA;&#xA;![bashs2.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1625202319183/R9Eo3lg0oU.png)&#xA;&#xA;In the above script, we separated a user-entered directory path into components such as the username, project name, and the app name; this can get pretty important and is a great tool for automating the creation of project and application structures. At the beginning of the command, we use IFS, which stands for Internal Field Separator; it splits the input into the variables based on the field character provided, in this case `/`. You can use any other field characters appropriate to your needs.&#xA;&#xA;By default, read uses spaces or tabs to identify distinct input variables, but we can change the separator to other field characters such as `/`, `\`, `-`, `|`, etc. This can make the user input more customizable and dynamic. &#xA;&#xA;### Password Typing&#xA;We can hide the input from the screen so as to keep it confidential, keeping sensitive information such as passwords and keys private and protected. 
&#xA;```bash&#xA;#!/bin/bash&#xA;&#xA;read -sp &#34;Password: &#34; pswd&#xA;echo &#34;the password was $pswd&#34;&#xA;&#xA;```&#xA;&#xA;![bashs2.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1625161571993/MkCadbyKW.png)&#xA;&#xA;&#xA;We used -s to keep the input hidden, so the screen doesn&#39;t reflect what the user is typing, and -p for the prompt to offer the user some information on the text. &#xA;&#xA;### Limiting Length of Input&#xA;&#xA;We can limit the user to only a certain number of characters of input. This becomes quite useful in constrained environments where usernames and passwords are restricted to a certain length. &#xA;&#xA;```bash&#xA;#!/bin/bash&#xA;&#xA;read -n 6 -p &#34;Enter the name: &#34; n&#xA;echo $n&#xA;```&#xA;&#xA;![bashs2.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1625161752440/8xI5Lovxo.png)&#xA;&#xA;In the above script, we limit the input to 6 characters in the variable n. This restricts the user to only the first 6 characters: as soon as the limit is reached, read returns and the script moves directly to the next command.&#xA;&#xA;### Passing to the array&#xA;&#xA;Another important argument to pass to the read command is -a, which stores the input values as elements of an array.&#xA; &#xA;```bash&#xA;#!/bin/bash&#xA;&#xA;read -a nums -p &#34;Enter the elements : &#34; &#xA;for i in ${nums[*]};&#xA;do &#xA;&#x9;echo -e &#34;$i&#xA;&#34;&#xA;done&#xA;&#xA;```&#xA;In the above script, we have used an array; don&#39;t worry, I&#39;ll explain it in the coming sections of this part. The user enters space-separated values, and the -a argument stores each value as an element of the array. The for loop is range-based, which means it runs the body once for each element in nums. The expression `${nums[*]}` (or `${nums[@]}`) expands to every element in the array nums. 
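
The read flags above can also be combined. A small sketch (the password value is piped in purely so the example runs non-interactively; the variable names are illustrative):

```bash
#!/bin/bash
# -s hides the typing, -n 6 stops after six characters, -p shows a prompt
printf 'secret123\n' | {
    read -s -n 6 -p "Password: " pswd
    echo
    echo "stored: $pswd"
}
```

Because of -n 6, only the first six characters ("secret") end up in the variable.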
&#xA;  &#xA;# Cases&#xA;&#xA;Cases are quite a good way of replacing nested if-else statements to make them nice and readable in the script.  Cases in Bash are quite powerful and easy to use compared with C/C++ style switch cases. &#xA;&#xA;The general structure of using a case in Bash is as follows:&#xA;&#xA;```bash&#xA;case variable in&#xA;    pattern 1)&#xA;        statements&#xA;        ;;&#xA;    pattern 2)&#xA;        statements&#xA;        ;;&#xA;    pattern 3)&#xA;        statements&#xA;        ;;&#xA;    pattern 4)&#xA;        statements&#xA;        ;; &#xA;    *)&#xA;        statements&#xA;        ;;&#xA;esac&#xA;```&#xA;&#xA;The variable is compared against each pattern in turn; on a match, the search stops, the statements for that pattern execute, and control leaves the block. If no pattern matches, a default branch (if any) is executed instead. &#xA;&#xA;```bash&#xA;#!/bin/bash &#xA;&#xA;read -p &#34;Enter a name : &#34; n&#xA;case $n in &#xA;&#x9;admin)&#xA;&#x9;&#x9;echo &#34;You are logged as root&#34;&#xA;&#x9;&#x9;;;&#xA;&#x9;unknown)&#xA;&#x9;&#x9;echo &#34;A hacker probably&#34;&#xA;&#x9;&#x9;;;&#xA;&#x9;manager)&#xA;&#x9;&#x9;echo &#34;Welcome Manager!&#34;&#xA;&#x9;&#x9;;;&#xA;&#x9;*)&#xA;&#x9;&#x9;echo &#34;A normal person&#34;&#xA;&#x9;&#x9;;;&#xA;esac&#xA;&#xA;```&#xA;&#xA;![bashs2.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1625160454707/YDYGkU34d.png)&#xA;&#xA;Seeing the above example, it is quite clear that this looks more structured and readable than a nested ladder of if-else statements. Cases are matched based on the variable after the `case` keyword. The patterns before `)` are matched against that variable. Once the interpreter finds a match, it executes that branch and jumps to the `esac` keyword, which is `case` spelled in reverse, just like `fi` for `if` and `done` for `do` in loops :) If the variable doesn&#39;t match any pattern, we have a default case represented by `*)`, which executes for any non-matching expression. 
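
One more thing worth knowing: several patterns can share a single branch using `|`, and patterns may contain globs. A small sketch (the names are illustrative):

```bash
#!/bin/bash
# admin and root share a branch; s* matches any name starting with s
n="root"
case $n in
    admin|root)
        echo "You are logged as root"
        ;;
    s*)
        echo "Some s-name"
        ;;
    *)
        echo "A normal person"
        ;;
esac
```

Running this prints "You are logged as root", since `root` matches the first branch.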
&#xA;&#xA;&#xA;## Arrays&#xA;&#xA;Arrays, a way to store a list of values, are implemented in Bash much as in most general programming languages. &#xA;&#xA;### Declaring and Printing Arrays &#xA;&#xA;We declare an array similar to a variable, but we mention the index of the element in the array (0-based index).  We can also simply declare an empty indexed array using the declare command `declare -a nums` (note that the uppercase `-A` declares an associative array instead).&#xA;&#xA;```bash&#xA;#!/bin/bash &#xA;&#xA;nums[0]=7&#xA;nums[1]=5&#xA;nums[2]=8 &#xA;for i in ${nums[@]}&#xA;do&#xA;echo -e &#34;$i &#xA;&#34;&#xA;done&#xA;```&#xA;&#xA;![bashs2.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1625124595563/TzBEaH1E4.png)&#xA;&#xA;The above script initializes an array with some hard-coded elements; surely you can input those from the user instead. For printing and accessing the elements of the array, we can use a loop; here we have used a range-based for loop, but you can use any other loop you prefer. The iterator is `i`, and we use $ to access the values from the array; we use `{}` because we have a nested expression for indexing the element, and `@` for every element in the array (`*` will also work fine). That&#39;s why range-based for loops are quite simple to use. And we have simply printed `i`, as it holds a particular element on each iteration. &#xA;&#xA;OR&#xA;&#xA;```bash&#xA;#!/bin/bash&#xA;&#xA;declare -a nums=(&#xA;[0]=44&#xA;[1]=45&#xA;[2]=46&#xA;)&#xA;echo &#34;${nums[@]}&#34;&#xA;&#xA;```&#xA;&#xA;![bashs2.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1625128343654/NCKUzurXe.png)&#xA;&#xA;The above script declares an array with values; it can also be left empty after the name declaration. 
We used the `()` to include the values in the array, using indices in the array we assigned the values to the particular index.&#xA;&#xA;If you just want to print the elements, we can use `${nums[@]}` or `${nums[*]}`, this will print every element without using any iteration loops.&#xA;&#xA;```bash&#xA;#!/bin/bash&#xA;&#xA;nums[0]=7&#xA;nums[1]=5&#xA;nums[2]=8 &#xA;echo &#34;${nums[@]}&#34;&#xA;&#xA;```&#xA;&#xA;![bashs2.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1625125456166/85zXjapQ_.png)&#xA;&#xA;### Number of Elements in the array&#xA;&#xA;To get the length of the array, we can use # in the expression `${nums[@]}`, like `${#nums[@]}` to get the number of elements from the array.&#xA;&#xA;![bashs2.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1625125770005/VzIr8CD7-.png)&#xA;&#xA;Since we had 4 elements in the array, it accurately printed 4. &#xA;&#xA;### Inserting and Deleting elements from Array &#xA;&#xA;We can push elements to the array using the assignment operator. &#xA;&#xA;`nums=(${nums[@]} 76) `&#xA;&#xA;This will push 76 into the array, i.e. in the last index( length -1 index). &#xA;&#xA;```bash&#xA;#!/bin/bash&#xA;&#xA;nums[0]=7&#xA;nums[1]=5&#xA;nums[2]=8&#xA;nums[3]=19&#xA;nums=(${nums[@]} 76)&#xA;echo &#34;${nums[@]}&#34;&#xA;echo &#34;Length of nums = ${#nums[@]}&#34;&#xA;&#xA;```&#xA;&#xA;![bashs2.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1625126198675/A8IAE-2FR.png)&#xA;&#xA;As you can see the element was added at the end of the array, you can also specify the index you want to insert. We can use `unset nums[3] ` to delete the element at the particular location or we can pop back (delete from the end) an element from the array using the index `-1` from the array using the following command.&#xA;&#xA;```bash&#xA;unset nums[-1]&#xA;```&#xA;&#xA;Any index provided will delete the element at that location by using unset. By using -1, we intend to refer to the last element. 
This can be quite handy and important as well in certain cases.&#xA;&#xA;```bash&#xA;#!/bin/bash&#xA;&#xA;nums[0]=7&#xA;nums[1]=5&#xA;nums[2]=8&#xA;nums[3]=19&#xA;unset nums[-1]&#xA;echo &#34;${nums[@]}&#34;&#xA;echo &#34;Length of nums = ${#nums[@]}&#34;&#xA;&#xA;```&#xA;&#xA;![bashs2.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1625126770211/LYK2Q0Rp0.png)&#xA;&#xA;There you can see we removed the element using unset. &#xA;&#xA;### Splicing an Array&#xA;&#xA;We can splice the array to print or copy a portion of the array into another one. &#xA;&#xA;```bash&#xA;echo &#34;${nums[@]:1:3}&#34;&#xA;```&#xA;&#xA;Using two colons and numbers between them, we can print certain elements of the array from a particular range. The first number after the colon is the starting index to print from (inclusive), and the next number after the second colon is the count of elements to take; it is not an index but the number of elements after the start index to be spliced.&#xA;&#xA;```bash&#xA;#!/bin/bash&#xA;&#xA;nums[0]=7&#xA;nums[1]=5&#xA;nums[2]=8&#xA;nums[3]=19&#xA;nums[4]=76&#xA;newarr=(${nums[@]:1:3})&#xA;echo &#34;${newarr[@]}&#34;&#xA;echo &#34;${nums[@]}&#34;&#xA;&#xA;```&#xA;&#xA;![bashs2.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1625127387739/IH2Fc6ghk.png)&#xA;&#xA;In this case, we have copied a slice of the array into a new array (note the surrounding parentheses, which make newarr an array rather than a single string). We took the elements from index 1 up to index `1+3`, i.e. up to but not including index 4; the 3 is not an index but the number of elements you would like to copy or print. &#xA;&#xA;This was a basic introduction to arrays; there is definitely much more I didn&#39;t cover, but it gives an overview of what arrays look like in BASH scripting. Next, we move on to strings.&#xA;&#xA;# Strings&#xA;&#xA;Strings are quite important as they form the core of any script: filenames, user information, etc. all involve strings, i.e. arrays of characters. 
Let&#39;s take a closer look at how strings are declared, handled, and manipulated in Bash scripting. &#xA;&#xA;```bash&#xA;s=&#34;World&#34;&#xA;echo &#34;$s&#34;&#xA;```&#xA;&#xA;![bashs2.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1625129318948/02V3bpP1I.png)&#xA;&#xA;Strings are declared like normal variables but enclosed in double quotation marks, and we access them in exactly the same way. If you were to use single quotes instead of double quotes, Bash would not interpret the variable name as a variable; it would print the name literally rather than its value. So prefer double quotes in echo and other commands so that variables get expanded.&#xA;&#xA;### Making Substrings&#xA;&#xA;We can slice a string just as we did with arrays; for strings we call the result a substring. The syntax is almost identical.&#xA;&#xA;```bash&#xA;#!/bin/bash&#xA;&#xA;s=&#34;Hello World&#34;&#xA;p=${s:6}&#xA;echo $p&#xA;q=${s::5}&#xA;echo $q&#xA;t=${s:4:3}&#xA;echo $t&#xA;&#xA;```&#xA;&#xA;In the above script, we start with a base string &#39;s&#39;. If we omit the second number, the slice runs to the end of the string, and if we omit the first number, it starts from the first character. So &#39;p&#39; holds &#39;s&#39; sliced from index 6 to the end, and &#39;q&#39; holds the first 5 characters of &#39;s&#39;. For &#39;t&#39;, we sliced 3 characters starting from index 4, i.e. indices 4 through 6; the 3 is a length, not an end index.&#xA;&#xA;We can use `#` before the variable name to get the length of the string, as we saw in the array section. So `echo ${#s}` prints the length of the string s. 
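Putting the length and substring syntax together, here is a small sketch; the string and offsets are just example values:

```bash
#!/bin/bash

s="Hello World"
echo "${#s}"     # 11, the length of the string
echo "${s:6}"    # World
echo "${s::5}"   # Hello
echo "${s: -5}"  # World (note the space: a negative start counts from the end)
```

The negative-offset form needs the space (or parentheses) so Bash doesn't confuse it with the `:-` default-value expansion.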
&#xA;&#xA;### String Concatenation&#xA;&#xA;String concatenation in Bash is quite straightforward; it is just a matter of placing the strings next to each other. &#xA;&#xA;```bash&#xA;#!/bin/bash&#xA;&#xA;s=&#34;Hello&#34;&#xA;p=&#34;World&#34;&#xA;q=&#34;$s $p&#34;&#xA;echo $q&#xA;```  &#xA;&#xA;![bashs2.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1625135997314/_n05RIoTTM.png)&#xA;&#xA;The space between the two variables is quite literal; anything you place between them inside the double quotes will be stored in the variable and printed.&#xA;&#xA;### String Comparison&#xA;&#xA;Moving on to string comparison in Bash. String comparison is quite complex in certain programming languages, but it&#39;s straightforward here: either two strings are equal or they are not, and the comparison operators do the heavy lifting for us. &#xA;&#xA;```bash&#xA;#!/bin/bash&#xA;&#xA;s=&#34;hello&#34;&#xA;p=&#34;Hello&#34;&#xA;if [ $s = $p ];&#xA;then&#xA;    echo &#34;Equal&#34;&#xA;else &#xA;    echo &#34;Not equal&#34;&#xA;fi&#xA;&#xA;```&#xA;&#xA;![bashs2.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1625138020711/UWlRN8aPq.png)&#xA;&#xA;![bashs2.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1625137884154/7bNPkpGd3.png)&#xA;&#xA;From the above code, it is quite clear that the strings are not equal; we compared them with the equality operator (=) and acted on the result. We can also check that strings are not equal using the `!=` operator. There are ordering operators as well: `\&lt;` checks whether the first string sorts before the second (characters are compared by their ASCII values), and `\&gt;` checks whether it sorts after. 
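One pattern I find handy on top of this (assuming Bash 4 or newer, since it relies on the `${var,,}` lowercase expansion): comparing strings case-insensitively by lowercasing both sides first. A small sketch:

```bash
#!/bin/bash

s="hello"
p="Hello"
# ${s,,} expands to the string with every character lowercased
if [ "${s,,}" = "${p,,}" ]; then
    echo "Equal (ignoring case)"
else
    echo "Not equal"
fi
```

Here both sides become "hello", so the equality test succeeds even though a plain `=` comparison would not.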
&#xA;&#xA;```bash&#xA;#!/bin/bash&#xA;&#xA;s=&#34;hello&#34;&#xA;p=&#34;Hello&#34;&#xA;if [ $s \&lt; $p ];&#xA;then&#xA;&#xA;&#x9;echo &#34;$s is Less than $p&#34;&#xA;elif [ $s \&gt; $p ];&#xA;then&#xA;&#x9;echo &#34;$s is greater than $p&#34;&#xA;else&#xA;echo &#34;Equal&#34;&#xA;fi&#xA;&#xA;```&#xA;&#xA;![bashs2.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1625137393683/u3WbgDIrN.png)&#xA;&#xA;&#xA;![bashs2.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1625137467448/NP1UXZAbv.png)&#xA;&#xA;Here, the strings are compared by the ASCII values of their characters. Since &#39;h&#39; (104) has a greater ASCII value than &#39;H&#39; (72), we see the shown outcome. &#xA;&#xA;We also have operators to check whether a string is empty or not: `-z` is true if the string has zero length, and `-n` is true if it is non-empty. These are handy for input validation and basic error handling. &#xA;&#xA;```bash&#xA;#!/bin/bash&#xA;&#xA;read -p &#34;Enter a string : &#34; s&#xA;if [ -z &#34;$s&#34; ];&#xA;then &#xA;    echo &#34;Empty Input&#34;&#xA;else&#xA;   echo &#34;Valid input&#34;&#xA;fi&#xA;&#xA;```  &#xA;&#xA;![bashs2.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1625138907051/tbjRDda0U1.png)&#xA;&#xA;So the string topic is quite straightforward and self-explanatory; it doesn&#39;t have that much complexity but is still powerful and convenient to use.&#xA;&#xA;# Arithmetic in Bash&#xA;&#xA;Performing arithmetic operations is at the core of scripting. Without arithmetic, it feels incomplete to programmatically create something; it would be quite menial to write commands by hand without the ability to compute. 
&#xA;&#xA;### Integer Arithmetic&#xA;&#xA;We quite commonly perform operations on variables, so let us see how to do arithmetic on variables in Bash. We use double parentheses, `$(( ))`, to evaluate the result of an expression. &#xA;&#xA;```bash&#xA;#!/bin/bash&#xA;&#xA;x=4&#xA;y=9&#xA;z=$(($x * $y))&#xA;echo $z&#xA;&#xA;```&#xA;&#xA;![bashs2.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1625139582135/Sb4hdE990.png)&#xA;&#xA;The double parentheses evaluate the operations performed on the variables inside them, and we use the $ symbol to extract the value of each variable. &#xA;&#xA;We can use operators such as addition (`+`), subtraction (`-`), multiplication (`*`), division (`/`), and modulus (`%`, the remainder of a division; 17%3 gets you 2) in these expressions. We can also use `&lt;&lt;` and `&gt;&gt;` for left and right bitwise shifts, which shift the binary digits of a value left or right respectively. There are also bitwise AND (`&amp;`), OR (`|`), XOR (`^`), logical operators, and ternary expressions.  &#xA;&#xA;An alternative to double parentheses is `expr`, which lets you evaluate an arithmetic expression wherever you need one. It is not a shell built-in but an external binary, so its behaviour can differ depending on the implementation in various environments. &#xA;&#xA;We can also use the `let` command to assign a variable and evaluate an expression in the assignment itself. &#xA;&#xA;```bash&#xA;#!/bin/bash&#xA;&#xA;let a=4&#xA;let b=a*a&#xA;let c=&#34;b/(a*2)&#34;&#xA;echo $b&#xA;```&#xA;&#xA;We can perform quite complex operations with a simple syntax using `let`, which makes for more readable and less bug-prone scripts.  
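To summarize the three forms side by side, here is a sketch; all of them compute the same product:

```bash
#!/bin/bash

x=4
y=9

z=$(( x * y ))      # arithmetic expansion; the $ on variables is optional inside (( ))
echo "$z"           # 36

z=$(expr $x \* $y)  # expr is an external command; * must be escaped from the shell
echo "$z"           # 36

let z=x*y           # let evaluates the expression during the assignment
echo "$z"           # 36
```

Of the three, `$(( ))` is the usual choice in modern scripts, since it needs no external process and no escaping.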
If you would like to use brackets or operators that the shell would otherwise interpret, you can enclose the expression in double quotation marks. &#xA;&#xA;### Floating-Point Arithmetic&#xA;&#xA;Floating-point arithmetic in Bash is not well supported. We won&#39;t get accurate answers from these expressions because Bash is **not designed** for such things; `$(( ))` does integer arithmetic only. Doing **things related to floating-point in pure Bash is a bad idea**. I **don&#39;t recommend it**; only do this if there are no other options. &#xA;&#xA;```bash&#xA;printf %.9f &#34;$((10/3))&#34;&#xA;```&#xA;&#xA;The result of this is 3.000000000, because `$((10/3))` truncates to the integer 3 before printf ever sees it; the fractional part is simply lost, which is pretty bad. Bash at its core doesn&#39;t support floating-point calculations. But there is good news: we have  [awk](https://en.wikipedia.org/wiki/AWK)  and other tools such as  [bc](https://en.wikipedia.org/wiki/Bc_(programming_language)), which are planned for the next part in the series. I&#39;ll explain awk just for floating-point here; in the next part, I&#39;ll cover it in depth. &#xA;&#xA;```bash&#xA;#!/bin/bash&#xA;a=10&#xA;b=$(echo | awk -v a=&#34;$a&#34; &#39;{print a/3}&#39;)&#xA;echo $b &#xA;&#xA;```&#xA;&#xA;![bashs2.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1625157391350/gHudsNntM4.png)&#xA;&#xA;WOW! That is to the point, but it was a lot of hassle, using echo to print nothing! OK, you see, certain things can get really annoying when they aren&#39;t supported natively. Firstly, we use | to pipe echo into awk; the echo doesn&#39;t do anything useful here, it just gives awk a line of input to act on. The general syntax of the awk command is `awk -options -commands`. In this case, the `-v a=&#34;$a&#34;` argument passes the shell variable into awk as an awk variable, also named a here (you can pick any name you want). 
Then we simply use the variable in awk&#39;s print statement, which evaluates the expression and writes the result out. And that is how we print the expression. Phew! That took a while for such a small thing, but hey, it&#39;s possible. &#xA;&#xA;That is the basic overview of arithmetic in Bash. You can also perform logical operations, which are easy to pick up on a quick run-through of the  [documentation](https://www.gnu.org/savannah-checkouts/gnu/bash/manual/bash.html#Arithmetic-Expansion).&#xA;&#xA;I hope you understood the topics covered and their use cases. Some topics such as positional parameters, tools and utilities, dictionaries, and some other important aspects of Bash scripting will be covered in the next part. Happy Coding.</content>
      <type></type>
    </item>
    <item>
      <title>Vim: Terminal Integration</title>
      <link>https://www.meetgor.com/posts/vim-plus-teminal</link>
      <description>Vim and Terminal!? Vim was made to work with the command line. Many beginners do not understand what are the true capabilities of Vim, myself included:) Vim can</description>
      <pubDate>Tue, 29 Jun 2021 00:00:00 UTC</pubDate>
<content>## Vim and Terminal!?&#xA;Vim was made to work with the command line. Many beginners do not understand what the true capabilities of Vim are, myself included:) Vim can run terminal commands without leaving the text editor, open an instance of a terminal, work with shell environments, and other things depending on the use case.&#xA;&#xA;## Running Terminal/ shell commands from within Vim&#xA;&#xA;You can run commands from inside Vim by prefixing them with `:!`, which means you have to be in command mode. Once in command mode, the ! or bang operator will execute the command typed after it in the terminal (Linux/ macOS) or your default shell (Windows -&gt; CMD/Powershell).&#xA;```&#xA;:!pwd&#xA;```&#xA;The above command from Vim will switch to the terminal, show the output of the command, and return on pressing any key. In this case, it will execute the `pwd` command and wait for the user to press a key to return to Vim.&#xA;&#xA;The following is an example of how it could be used from Vim in Windows using Powershell as the default shell.&#xA;&#xA;![Animation.gif](https://cdn.hashnode.com/res/hashnode/image/upload/v1624885870237/Ie5C-3u1B.gif)&#xA;&#xA;In Windows, dir is the equivalent of ls on Linux. That was not the best example of a terminal at its best; you can also use logical operators from within Vim to run multiple commands at once. &#xA;&#xA;### Running programs/ code from Vim on terminal&#xA;&#xA;This is a great feature for turning Vim from a text editor into an IDE, and it can be paired with keymaps, i.e. when the user types certain keys, the command gets executed, making the code run in the terminal. I have already used this feature to set up Vim for Python, Bash, and other programming languages. 
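For instance, a minimal mapping of this kind in a `.vimrc` might look like the following; the key choice (F5) and interpreter names here are just an example, adjust them to your own setup:

```
&#34; save the current file, then run it with the matching interpreter
autocmd FileType python nnoremap &lt;buffer&gt; &lt;F5&gt; :w&lt;CR&gt;:!python3 %&lt;CR&gt;
autocmd FileType sh     nnoremap &lt;buffer&gt; &lt;F5&gt; :w&lt;CR&gt;:!bash %&lt;CR&gt;
```

The `%` expands to the current file name, exactly as described above, so the same mapping works for any file of that type.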
Also, I have written an article about  [keymapping](https://dev.to/mrdestructive/vim-keymapping-guide-3olb)  and Vim setup for  [Python](https://dev.to/mrdestructive/setting-up-vim-for-python-ej)  and  [Bash](https://techstructiveblog.hashnode.dev/vim-setup-for-bash-scripting), which will give you an idea of how to set up Vim for any programming language. &#xA;&#xA;Vim can really shine with this kind of feature, as it becomes a smooth experience even for a beginner. For compiled languages we just have to compile the code and run its executable/ output file, whereas for Python and other interpreted languages, we just pass the file name to the interpreter and that&#39;s it.  &#xA;&#xA;## Opening an instance of Terminal within Vim.&#xA;&#xA;Vim can also create an instance of the terminal within its window by making a split. This is quite similar to VS Code and other text editors that can open a terminal within themselves. This feature is useful for developing complex systems and, depending on the use case, it can be quite important and efficient as well. &#xA;&#xA;The terminal can be created in various ways; the most common is typing `:term` from Vim. &#xA;This will create a horizontal split of the current editor window, splitting it in half. You can change the size of the split using the mouse according to your preference. &#xA;&#xA;![vimtermsplit.gif](https://cdn.hashnode.com/res/hashnode/image/upload/v1624888468392/wR0JT8SBN.gif)&#xA;&#xA;Vim has certain variables and shortcuts to make things even simpler. Say you want to pass the current file to the terminal for execution: you can type the name manually, or you can be a bit smarter and use `%` instead; the `%` symbol expands to the file name along with the extension. 
Also `%:r` expands to the filename without the extension (.txt/.py/etc).&#xA;&#xA;There are many things you can do with terminals, and with Vim that goes even further. The terminal/command line is quite important in any development environment, as it is the interface for the user to interact with the Operating System. Vim is quite powerful and behaves like a chameleon for programmers, because it changes itself according to our needs flawlessly and **efficiently**.&#xA;&#xA;&#xA;![vimpython.gif](https://cdn.hashnode.com/res/hashnode/image/upload/v1624891655340/5f81Dpp_O.gif)&#xA;&#xA;Integrating a terminal into a text editor truly lights up the development environment. It becomes an easy and enjoyable experience to test out the code without wasting much time on the actual execution process. Surely it takes time to set up the environment to speed things up, and that requires an understanding of the programming and development environment. Happy Viming :)</content>
      <type></type>
    </item>
    <item>
      <title>Vim: Enhancing Movement Speed</title>
      <link>https://www.meetgor.com/posts/vim-movement-speed</link>
      <description>Introduction OK! Vim and movement are like bread and butter or failed brakes. To become a proficient Vim user, you need to move in Vim very effectively. You don</description>
      <pubDate>Sat, 26 Jun 2021 00:00:00 UTC</pubDate>
<content>## Introduction&#xA;OK! Vim and movement are like bread and butter or failed brakes. To become a proficient Vim user, you need to move in Vim very effectively. You don&#39;t have to think about doing certain things; your fingertips should automatically move without wasting time thinking about it. I am sure it takes time and effort but OH! it is so rewarding in the end.  &#xA;&#xA;### Why H J K L?&#xA;First things first, unmap the arrow keys and make a habit of using h, j, k, and l. Yes, this will not make any sense in the initial stage, but later you&#39;ll see no sense in not using them. &#xA;The thing with H J K L is that they are quite flexible to work with, and if you use them with numbers you can navigate much faster than with arrow keys. For example, `10j` will move you 10 lines down at a time in normal mode. These keys are used with many of the other key shortcuts and commands, which makes learning to move around in Vim with H J K L the most important first step. &#xA;&#xA;## Moving Horizontally.&#xA;This is quite the common movement that every programmer uses most of the time. It is also a much-neglected part of movement in Vim. To the basics, we use the following commands: &#xA;&#xA; `w`   -&gt;  **move forward by a word (considering punctuation as separate words).**&#xA;&#xA; `W`   -&gt;  **move forward by a word (punctuation ignored).**&#xA;&#xA;`b`  -&gt;  **move backward by a word (considering punctuation as separate words).**&#xA;&#xA;`B`  -&gt;  **move backward by a word (punctuation ignored).**&#xA;&#xA;`e`  -&gt;  **move to the end of a word (considering punctuation as separate words).**&#xA;&#xA;`E`  -&gt;  **move to the end of a word (punctuation ignored).**&#xA;&#xA;`0`  -&gt;  **move to the beginning of the line.**&#xA;&#xA;`$`  -&gt;  **move to the end of the line.**&#xA;&#xA;Those are the most useful and common commands for moving across a line. 
Don&#39;t forget that you can put a number before a command to repeat it that many times. For example, if you would like to go 6 words ahead, type `6w`. This can improve your thinking and typing as well, good signs of a programmer Eh!&#xA; &#xA;&#xA;## Moving Vertically.&#xA;&#xA;To move vertically, think of moving within a file or a block of code. For moving in a file, the following are some useful commands.&#xA;&#xA;`gg`  -&gt; **move to the beginning of the file.**&#xA;&#xA;`G`  -&gt;  **move to the end of the file.**&#xA;&#xA;`Ctrl + e`  -&gt;  **scroll the screen down without moving the cursor.**&#xA;&#xA;`Ctrl + y`  -&gt; **scroll the screen up without moving the cursor.**&#xA;&#xA;`Ctrl + f`  -&gt; **move forward one entire screen.**&#xA;&#xA;`Ctrl + b`  -&gt; **move backward one entire screen.**&#xA;&#xA;`Ctrl + d`  -&gt; **move forward half a screen.**&#xA;&#xA;`Ctrl + u`  -&gt; **move backward half a screen.**&#xA;&#xA;That was moving around the screen; now for some more programmatic movement. 
We will see some keystrokes to move in code blocks or code snippets very efficiently.&#xA;&#xA;`gd`  -&gt; **move to the local declaration of the identifier under the cursor.**&#xA;&#xA;`gD`  -&gt; **move to the global declaration of the identifier under the cursor.**&#xA;&#xA;`%`  -&gt; **move between pairs of ( ), { }, [ ] or any other type of such braces.**&#xA;&#xA;`{`  -&gt; **move to the previous paragraph/ code block.**&#xA;&#xA;`}`  -&gt; **move to the next paragraph/ code block.**&#xA;&#xA;`fa`  -&gt; **move to the next occurrence of the character &#39;a&#39; on the line.**&#xA;&#xA;`Fa`  -&gt; **move to the previous occurrence of the character &#39;a&#39; on the line.**&#xA;&#xA;`ta`  -&gt; **jump to just before the next occurrence of the character &#39;a&#39; on the line.**  &#xA;&#xA;`Ta`  -&gt; **jump to just after the previous occurrence of the character &#39;a&#39; on the line.**&#xA;&#xA;These can be quite handy shortcuts for moving around a large code file, even in files with complex variable names and structures. &#xA;&#xA;## Search and navigation.&#xA;Searching is quite a time-consuming task, especially when the code is complex and has a lot of variables. Vim shines in many of the aspects where people think it&#39;s dead. 
It rises from the ashes to produce a performance-driven experience like any modern IDE, though it requires a bit of research:) Here are some commands that will make searching and navigating quite a lot easier.&#xA;&#xA;`*` -&gt; **next occurrence of the word under the cursor.**&#xA;&#xA;`#` -&gt; **previous occurrence of the word under the cursor.**&#xA;&#xA;`n`  -&gt; **next occurrence of the searched pattern.**&#xA;&#xA;`N`  -&gt; **previous occurrence of the searched pattern.**&#xA;&#xA;&#xA;The above commands also work if you search for a pattern from command mode: type `/pattern` or `?pattern`, press Enter, and navigate to the next (`n`) and previous (`N`) occurrence of that pattern match.&#xA;&#xA;&#xA;## Moving across files.&#xA;&#xA;Moving across files without any plugins or a file explorer is often considered tricky or impossible by some people, but there is a way. You can switch between files using the following commands:&#xA;&#xA;`Ctrl + O`  -&gt;   **jump back to the previous location (possibly in a previously opened file).**&#xA;&#xA;and &#xA;&#xA;`Ctrl + I`  -&gt;  **jump forward again.**&#xA;&#xA;We can also use `Ctrl + ^` to switch between the two most recently opened files.&#xA;&#xA;&#xA;If you want to switch between buffers, you can use `:bn` to move to the next buffer and `:bp` to move to the previous buffer. You can also switch to a buffer by file name with `:b filename` or by index with `:bindex`.&#xA;&#xA;&#xA;## Moving between Tabs.&#xA;&#xA;People rarely use tabs as far as I have seen, but they are quite useful and provide the same polish as robust IDEs. 
&#xA;&#xA;`:tabnew filename`  -&gt;  **open a file in a new tab.**&#xA;&#xA;`gt`  -&gt; **move to the next tab.**&#xA;&#xA;`ngt`  -&gt; **move to the nth tab.**&#xA;&#xA;`gT`  -&gt; **move to the previous tab.**&#xA;&#xA;`:tabo`  -&gt; **close all the tabs except the current one.**&#xA;&#xA;`:tabc`  -&gt; **close the current tab.**&#xA;&#xA;`:tabm n`  -&gt; **move the current tab to the nth position.**&#xA;&#xA;&#xA;## Movement with Marks.&#xA;&#xA;Marks are useful in quite large files and code-bases. They let you move from one mark (a kind of bookmark) to another with a few key commands; marks are generally set when you keep going back to a particular code block or part of a file again and again. Some quick navigation commands using marks are the following.&#xA;&#xA;`mn`  -&gt;  **set the current position as mark &#39;n&#39;.**&#xA;&#xA;`&lt;backtick&gt;n`  -&gt;  **jump to the position of mark &#39;n&#39;.**&#xA;&#xA;`&lt;backtick&gt;0`  -&gt; **jump to the position where Vim was last exited.**&#xA;&#xA;`&lt;backtick&gt;&#34;`  -&gt;  **jump to the position where the cursor was when you last exited the file.**&#xA;&#xA;&#xA;## Split Windows Movement&#xA;&#xA;`Ctrl + w + r`  -&gt; **rotate the splits downwards/ rightwards.**&#xA;&#xA;`Ctrl + w + R`  -&gt; **rotate the splits upwards/ leftwards.**&#xA;&#xA;`Ctrl + w + h`  -&gt;  **jump to the left split.**&#xA;&#xA;`Ctrl + w + j`  -&gt;  **jump to the split below.**&#xA;&#xA;`Ctrl + w + k`  -&gt; **jump to the split above.**&#xA;&#xA;`Ctrl + w + l`  -&gt;  **jump to the right split.**&#xA;&#xA;You can use capital H, J, K, and L (after Ctrl + w) to move the current split to the far left, bottom, top, and right respectively. &#xA;That should be enough to make you noticeably faster than before.&#xA;&#xA;&#xA;That was probably it; these were some tricks and shortcuts to move around Vim effectively and smoothly. Moving around Vim can seem quite complicated at first, but it&#39;s just a matter of finding the key shortcuts that make you feel like you glide in Vim. 
There might be quite a lot of shortcuts missing, If you have any quicker shortcuts, Please let me know in the comments.  Happy Viming :)</content>
      <type></type>
    </item>
    <item>
      <title>BASH Scripting Guide - PART - 1</title>
      <link>https://www.meetgor.com/posts/bash-guide-p1</link>
      <description>Introduction If you are new to BASH and Linux, don&#39;t you worry the community is the driving force here. If someone&#39;s stuck somewhere, the platforms, forums, and</description>
      <pubDate>Tue, 22 Jun 2021 00:00:00 UTC</pubDate>
<content>## Introduction&#xA;If you are new to BASH and Linux, don&#39;t you worry, the community is the driving force here. If someone&#39;s stuck somewhere, the platforms, forums, and blogs are ready to help. BASH is short for Bourne-Again Shell; it is a shell interface that has many more capabilities and functions than the simple Bourne shell (sh). It has some quite remarkable features, and in some sense it is even capable of calling itself a programming language.&#xA; &#xA;Without wasting any time on the introduction, let&#39;s keep the article rolling.&#xA;In this part, I&#39;ll try to cover the basics of the following topics:&#xA;- **Structure of Bash Script.**&#xA;- **Variables.**&#xA;- **If-else Conditional Statements.**&#xA;- **Loops.**&#xA;    1.  For loop&#xA;    2. While loop&#xA;    3. Until loop&#xA;- **Functions.**&#xA;&#xA;Firstly, you should have some basic understanding of Linux commands, such as listing directories, creating and editing files, and other tiny little tasks. Bash scripting is the way to do these programmatically; that&#39;s why it is called scripting.  &#xA;&#xA;## Understanding a Simple Shell script&#xA;&#xA;```bash&#xA;#!/bin/bash&#xA;echo &#34;Hello World&#34;&#xA;```&#xA;That is such a simple, elegant script, isn&#39;t it? The first line is called the she-bang, which tells the shell to execute or run the file as a Bash script, i.e. with the Bash interpreter. The next command is a simple echo, which prints text to the screen/console. The she-bang is the path to the Bash interpreter, so it basically directs the shell to execute the file/script in a Bash environment. &#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1624287681383/MP2nc6hGR.png)&#xA;&#xA;To execute the script we have many ways: use bash, use source, or make the file executable and run it by its path. 
In this case, I used bash to run the script; we&#39;ll see the others as well.&#xA;&#xA;The core structure of a Bash script is quite simple; we can shape the format of the script according to the paradigm used and the objective of the script. For basic scripts that have utility functions, we normally declare those at the beginning, after the she-bang header, and the core main part of the script follows. For now, it is enough to understand the purpose of the she-bang header and how to execute a shell script.&#xA;&#xA;&#xA;## Variables&#xA;We definitely need variables to store values we are going to use again and again. Bash has variables but no specific data types; you can store anything in anything. That can become a mess when comparing variables and their values :( Though it might be a feature, not a bug, sometimes.&#xA;&#xA;Let&#39;s create some variables in a Bash script. &#xA;```bash&#xA;#!/bin/bash&#xA;x=2&#xA;echo $x&#xA;```&#xA;To create a variable, we simply write the variable name and assign whatever we want to it. **DO NOT leave spaces before or after the assignment operator (=), it won&#39;t work.** To access the value of the variable we use the $ symbol. That is about it for variables; if you want to perform arithmetic on variables, it is covered in the later sections. &#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1624287785396/GvIdwi0QP.png)&#xA;&#xA;In the above script, we printed the value of x to the console. We also executed the script file using the source command. 
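One more thing worth knowing about variables (not shown above): a variable can also capture the output of a command using `$( )`, and single quotes suppress expansion entirely. A small sketch, with example values:

```bash
#!/bin/bash

name="Bash scripting"
where=$(pwd)                # command substitution: store a command's output
echo "Learning $name in $where"

greeting='Hello $name'      # single quotes: no expansion, printed literally
echo "$greeting"            # Hello $name
```

This command-substitution form comes up constantly in real scripts, for example to store a date, a path, or the result of a pipeline.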
&#xA;&#xA;## If-else Conditional statements&#xA;If-else conditional statements are the fundamentals of any logical operation in the programming world, and Bash implements them much like other shells do.&#xA;&#xA;```bash&#xA;#!/bin/bash&#xA;a=9&#xA;if [ $a -lt 10 ];then &#xA;echo &#34;Less than 10&#34;&#xA;fi&#xA;```&#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1624290119634/JT6QuRTZe.png)&#xA;&#xA;We use `if`, followed by `then`, to execute the commands after it until we hit `fi` or an else branch; we&#39;ll see else and elif blocks after this. That is a basic if statement. When comparing numbers we use -lt for less than, -gt for greater than, -eq for equals, -ne for not equals, -le for less than or equal, and -ge for greater than or equal. For string comparison, we use symbols: &lt; for less than, &gt; for greater than, = for equals, != for not equals. &#xA;&#xA;```bash&#xA;#!/bin/bash&#xA;a=&#34;ERROR&#34;&#xA;if [ $a = &#34;ERROR&#34; ]; then&#xA;&#x9;echo &#34;FAILURE&#34;&#xA;else &#xA;&#x9;echo &#34;SUCCESS&#34;&#xA;fi&#x9;&#xA;&#xA;```&#xA;&#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1624289450947/mrd8lawZD.png)&#xA;&#xA;In the above example, we used an if-else block, comparing one string with another using the = operator. It&#39;s quite interesting that Bash has string comparison built in, unlike C/C++ where we have to depend on external libraries. We used chmod to make the script file executable for anyone on the system, and then we simply typed the path to the file to run it. 
&#xA;&#xA;```bash&#xA;#!/bin/bash&#xA;&#xA;c=3&#xA;if [ $c -eq 0 ];then&#xA;&#x9;echo &#34;Success&#34;&#xA;elif [ $c -eq 1 ];then &#xA;&#x9;echo &#34;Failure&#34;&#xA;else&#xA;&#x9;echo &#34;Invalid Input&#34;&#x9;&#xA;fi&#xA;```&#xA;&#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1624290844242/8FQgLjIQT.png)&#xA;&#xA;In the above script, we used if-elif-else statements to evaluate different conditions. We used -eq to compare the value of the variable with the number we want. The logic is self-explanatory.&#xA;&#xA;## Loops&#xA;&#xA;We have 3 types of loop statements in Bash:&#xA;1. For loop&#xA;2. While loop&#xA;3. Until loop&#xA;&#xA;### For loops&#xA;In a for loop, we have the freedom to use range-based or C-like forms. Let us take a look at both of them using examples.&#xA;```bash&#xA;#!/bin/bash&#xA;&#xA;for i in {1..5} &#xA;do &#x9;&#xA;&#x9;echo $i&#xA;done&#x9;&#xA;```&#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1624292601491/Rbuyx_T6p.png)&#xA;&#xA;The above is a range-based loop, which loops from 1 to 5 inclusive; we use {} to express the range. As with &#34;then&#34; and &#34;fi&#34; in if conditions, we have &#34;do&#34; and &#34;done&#34; in loops, and between them we put the statements we want to repeat. &#xA;&#xA;```bash&#xA;#!/bin/bash&#xA;&#xA;read -p &#34;Enter the number : &#34; n&#xA;for (( i=0;i&lt;$n;i++ ));&#xA;do &#x9;&#xA;&#x9;echo $i&#xA;done&#x9;&#xA;&#xA;&#xA;```  &#xA;&#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1624293294293/jPM1so_As.png)&#xA;&#xA;The above for loop is a typical C-style for loop with 3 parts: the initialization, the condition, and the increment. We surround these with double parentheses and separate them with semi-colons. 
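One caveat with the `{1..5}` form: brace expansion happens before variables are substituted, so `{1..$n}` does not work as a variable range. For a variable upper bound, use the C-style loop shown above, or the `seq` command as sketched here:

```bash
#!/bin/bash

n=5
# {1..$n} would NOT expand here, since brace expansion runs before $n is substituted.
# seq prints the numbers 1 through n, one per line, which for drives over.
for i in $(seq 1 "$n"); do
    echo "$i"
done
```

`seq` is an external command rather than a Bash built-in, so the C-style loop is the more portable choice inside Bash scripts.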
The rest of the syntax is identical to the previous for loop style.&#xA;&#xA;### While loops&#xA;While loops are used quite commonly in Bash and the syntax is quite straightforward.&#xA;```bash&#xA;#!/bin/bash&#xA;&#xA;read -p &#34;Enter the number : &#34; n&#xA;while [ $n -gt 0 ]&#xA;do &#x9;&#xA;&#x9;echo $n&#xA;&#x9;n=$((--n))&#xA;done&#x9;&#xA;&#xA;```&#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1624295626909/qLh2y8RPe.png)&#xA;&#xA;The above while loop runs while the number is greater than zero. We take the input from the user with the read command and store it in the variable n; -p is an argument that prompts the user with text before the input. We use the decrement operator to decrement the counter on each pass. As usual in loops, do and done start and end the loop body. &#xA;&#xA;### Until loops.&#xA;An until loop is a while loop with the opposite condition: the body keeps running until the condition becomes true. &#xA;```bash&#xA;#!/bin/bash&#xA;&#xA;read -p &#34;Enter the number : &#34; n&#xA;until [ $n -eq 0 ]&#xA;do &#x9;&#xA;&#x9;echo $n&#xA;&#x9;n=$((--n))&#xA;done&#x9;&#xA;&#xA;```&#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1624296796493/SF6-XOEVl.png)&#xA;&#xA;In this loop, we iterate over and over until n becomes 0. The rest of the syntax is again the same as the other loops, with do and done statements.  &#xA;&#xA;## Functions &#xA;&#xA;Functions are part and parcel of any script. We don&#39;t necessarily need them, but they come in handy quite often and serve their purpose well. We can customize what we want to get out of a function depending on the need. 
&#xA;&#xA;```bash&#xA;#!/bin/bash&#xA;&#xA;function add(){&#xA;echo $(($1+$2))&#xA;}&#xA;add 3 6&#xA;```&#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1624348202842/j0t0LT06q.png)&#xA;&#xA;&#xA;In Bash functions, we do not pass arguments inside brackets; we pass parameters as space-separated values after the function name in the function call. This also means that we can pass any number of parameters to a function, but we have to handle them properly inside the function ourselves; otherwise, there is no use in passing them. This is really powerful but it needs to be used wisely to reach its full potential. Also, it is not mandatory to use the keyword &#34;function&#34; before the name, so you could also write just the name and keep the rest as it is.&#xA;```bash&#xA;#!/bin/bash&#xA;&#xA;add(){&#xA;echo $(($1+$2))&#xA;}&#xA;add 3 6&#xA;```&#xA;The return statement in functions is also quite an important aspect of logical programs. But it is not like returning values from a function in other languages; it returns the exit status of the function, which is limited to the range 0-255, so using it to pass back arbitrary values can lead to unexpected behavior. &#xA;&#xA;```bash&#xA;#!/bin/bash&#xA;&#xA;add(){&#xA;return $(($1+$2))&#xA;}&#xA;add 6 13&#xA;echo $?&#xA;&#xA;```&#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1624349678664/yDGL3U70C.png)&#xA;&#xA;Here we return the addition of two numbers and use $? to access the exit status of the function. The ? is an internal variable in Bash which holds the exit status of the last executed command. In this case, that was the function call, and its return value was stored there.  
As said, it can become quite buggy to exit the function scope with weird return statements, so to avoid that we can make use of global variables.&#xA;&#xA;&#xA;```bash&#xA;#!/bin/bash&#xA;&#xA;add(){&#xA;n=$(($1+$2))&#xA;}&#xA;add 5 16&#xA;echo $n&#xA;&#xA;```&#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1624349769088/VxnjgqETv.png)&#xA;&#xA;In the above program, we use n as a global variable so that the result is accessible outside the function scope. &#xA;Functions in Bash can also return multiple values at once, but that can be buggy at times, so I don&#39;t recommend it.&#xA;&#xA;&#xA; So, that covers the basics of Bash functions. &#xA;&#xA;This is it from the Bash scripting guide Part-1, I&#39;ll cover more topics in the upcoming parts of the series.</content>
      <type></type>
    </item>
    <item>
      <title>What is BASH?</title>
      <link>https://www.meetgor.com/posts/bash-intro</link>
      <description>Introduction BASH or Bourne Again Shell is an interface(program) between the user and the Operating System. BASH is just an application just like others, but no</description>
      <pubDate>Fri, 18 Jun 2021 00:00:00 UTC</pubDate>
<content>## Introduction&#xA;BASH or Bourne Again Shell is an interface (program) between the user and the Operating System. BASH is an application just like others, but not an ordinary one. It is quite powerful when used to its potential. It is used in various aspects of computing. No, it&#39;s not only for server admins, though it does specialize in that field. We&#39;ll see how we can use Bash to perform several tasks quite powerfully. Not just automation; you&#39;ll be surprised what Bash is capable of.&#xA;&#xA;## BASH - The program that powers the world.&#xA;BASH is just like any other shell such as z-shell, c-shell, Korn shell, and Bourne shell. BASH is a variant or updated version of the Bourne shell. Technically it has more advanced features than the Bourne shell, such as arithmetic operators, signal handling, command-line completion, and tons of quality of life improvements. So what can Bash do? Well, pretty much anything; it is close enough to call itself a programming language, but not quite. You can call it the programming language of the command line. &#xA;&#xA;Bash also has built-in features such as variables, loops, if-else conditional statements, and functions, but there are not many concrete data structures to work with, though some quite useful ones are built in, such as arrays and dictionaries. It can perform arithmetic, logical, and programmatic operations inside of a terminal or command line. It can make use of other programs such as grep, awk, sed, cURL, find, wc; the list is quite large. Most of the commands are quite versatile and integrate with each other quite well. Concepts such as command piping and brace expansion make it feasible to build quite a lot of applications using Bash. Certain commands such as grep, cURL, sed, cat, touch are quite incredible and are used a lot in day-to-day use cases. 
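As a small taste of command piping, here is a sketch (assuming a standard Linux /etc/passwd layout) that chains three of those commands together:

```shell
# list every login shell configured on the system, with a usage count:
# cut extracts the 7th colon-separated field, sort groups identical
# shells together, and uniq -c counts each group
cut -d: -f7 /etc/passwd | sort | uniq -c
```

Each command does one small job, and the pipe (|) streams the output of one into the next; that composability is where most of Bash's power comes from.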
Let&#39;s take a quick look at how actually we can use Bash to perform a basic task.&#xA;&#xA;![Demonstrating bash commands](https://s6.gifyu.com/images/recording1b26b94770ee59bf.gif)&#xA;&#xA;In the above gif, I edited a file without leaving the terminal. That can seem lame but it is powerful in certain conditions. Anyways, that was just an example, but you can see where it can go. It can be extremely powerful for repetitive tasks in programming or any other work, and it is used widely in the fields covered below.&#xA; &#xA;## Applications&#xA;&#xA;Well, Bash has some epic applications; some of them might surprise you, but they are indeed genuine features of Bash. Let&#39;s dive in then.&#xA; &#xA;### Automation &#34;of course!&#34;&#xA;&#xA;This is the most well-known feature of Bash, as it stands on top of other tools and programs. Bash is most commonly used by system admins to automate repetitive tasks in managing a system. These tasks may include backing up files, managing file structures, monitoring the performance of a system, and debugging any errors and issues in the system on a regular basis to ensure its smooth functioning. So, it makes quite a lot of sense to automate certain tasks that would otherwise waste time and effort when done over and over again. Hence, Bash is the system admin&#39;s best friend, and the programmer&#39;s too, for automating tasks and the compilation of programs.&#xA;&#xA;### File handling&#xA;&#xA;This is another aspect where Bash just makes things flawless, just flawless. It&#39;s much easier, more straightforward, and more intuitive to manage or handle files in Linux/Unix operating systems. There are tools and command-line utilities such as grep, awk, and sed for matching patterns using regular expressions, and cat, tac, and touch to create and make changes to files. There are a plethora of ways to make files and directories using Bash utility tools. 
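For instance, combining mkdir -p with Bash's brace expansion lets you lay out a whole directory tree in one command (the project name here is just a made-up example):

```shell
# brace expansion turns demo/{src,tests,docs} into three paths,
# and mkdir -p creates each one, parents included
mkdir -p demo/{src,tests,docs}
# the same trick works for files: create two scripts in one go
touch demo/src/{main,utils}.sh
ls demo/src
```

Note that brace expansion is a Bash feature, not plain POSIX sh.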
We can also make a bunch of files at once, or lay out an entire file structure in one go, just using Bash and its commands. &#xA;&#xA;&#xA;### Programming environment&#xA;&#xA;This is a concept which many people use but are unaware of. People using Linux for programming mostly use Bash as their environment to create, compile, and run their programs. That is quite a powerful environment to work with, as it is quite customizable and efficient, at least compared to Windows. Even macOS used Bash as its default shell until a certain point but migrated to zsh over licensing and security concerns around Bash. But still, Bash is the tool that most programmers use while interacting with web servers or any other programs that provide some kind of interface to the operating system, especially Linux/Unix. So, it is worth noting that Bash is the default development environment for many programmers.&#xA;&#xA;&#xA;### Networking&#xA;&#xA;Networking is another aspect where Bash also has some quite remarkable tools to play around with. The shell gives easy access to powerful tools like Nmap, aircrack-ng, ping, and many other networking utilities, which makes it quite a good choice for ethical hacking and network security as well. This domain is quite complex in terms of concepts, as it forms the core of the IT domain. Bash also provides ways to manage the network using some well-equipped tools and programs in a user-friendly fashion.&#xA;&#xA;&#xA;### Web-Scraping &#34;wait what!?&#34;&#xA;&#xA;Yes, this is possible by making use of tools such as cURL to extract content from the web and then grep, sed, awk, or any other pattern/regex matching tools to filter it. This can be very useful and informative for beginners trying to understand how web scraping actually works; learning it this way reveals the &#34;under the hood&#34; workings of the utilities and frameworks built for web-scraping. 
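A minimal sketch of the fetch-then-filter idea, assuming curl is installed and using example.com as a stand-in page:

```shell
# download the page quietly (-s) and pipe the HTML into grep, which
# keeps only the matching piece of text (-o), case-insensitively (-i)
curl -s https://example.com | grep -o -i 'example domain' | head -n 1
```

Real scraping swaps in a real URL and a real pattern, but the shape of the pipeline stays the same.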
This is some super great stuff for a simple shell to do in a standalone way, though it requires good skill with regexes and an understanding of how the web works to fully utilize the powers of Bash.&#xA;&#xA;&#xA;### Maybe More&#xA;OK, these are the applications of Bash I know about, but if you know any others, please let me know in the comments. Bash is a great tool and I don&#39;t know why it isn&#39;t more famous despite being used so much. Maybe I am wrong here; just an opinion though. So, I hope you got some insights to dig deeper into Bash and learn to improve your efficiency and skills. &#xA;&#xA;So, that&#39;s it for Bash and its applications at a high level. There is a huge field and community around Bash; despite some threats, Bash is not going anywhere, even if zsh is replacing it in some places, and the concepts and tools remain the same. At the end of the day, it&#39;s the skill rather than the version of any tool that matters. Happy Coding and have a blast learning Bash :)</content>
      <type></type>
    </item>
    <item>
      <title>Vim: Keymapping Guide</title>
      <link>https://www.meetgor.com/posts/vim-keymaps</link>
      <description>Introduction Keymapping or Key binding is a process in which a user or a system can assign certain keys or commands to perform a particular task or commands. Th</description>
      <pubDate>Mon, 14 Jun 2021 00:00:00 UTC</pubDate>
<content>## Introduction&#xA;&#xA;Keymapping or key binding is a process in which a user or a system assigns certain keys to perform particular tasks or commands. This can be quite useful in Vim, as well as in other text editors, to avoid repetitive tasks and save time. In Vim this can be a great power for programmers, as it is really flexible to set up for any programming language and its environment.&#xA;&#xA;In Vim, you can map a particular keystroke combination to trigger a command or any operation. You can also map a single key in place of a weird key combination. You can map a key to integrate several commands and run them with a single keystroke, without much of a hassle. Key mapping is quite powerful in Vim; only your creativity and imagination are the limits here. Vim really shines in such aspects, where users have the freedom to tailor the editor and their development environment as per their needs.&#xA;&#xA;## Understanding the structure of Key binding&#xA;You can map a key depending on the current mode you are in. That means you can have the same key mapped to different commands depending on the mode, which is really flexible and powerful. Vim allows you to map in almost every mode, such as normal, insert, visual, command, and any other existing modes. Let us see what a basic key mapping looks like.&#xA;&#xA;&#xA;```&#xA;{mode}{attribute}map {key} {command}&#xA;```&#xA;The key mapping breaks down into the mode in which to map the key, an optional attribute that modifies the way the mapping behaves, and finally the key and command pair. 
If you want to test a map you can temporarily map in the command mode using the syntax for the map command or if you want a permanent mapping, you can add the mappings in your vimrc file.&#xA;&#xA;For a basic example let us map c to paste from the clipboard.&#xA;&#xA;```vim&#xA;:map c &#34;+p&lt;CR&gt;&#xA;```&#xA;&#xA;The above command maps the &#34;c&#34; key in the **normal/visual mode** by default to paste the contents from the clipboard. It is just an example of how a basic key mapping or binding can be used to save your time and efforts. This is a very small example to demonstrate the concept of mapping in Vim but it scales quite fantastically and smoothly.&#xA;&#xA;Now let&#39;s see how to view the mapping inside of the current vim buffer. We can simply type map, imap, or nmap in the command mode to view the mappings in the respective modes. This will display the maps which are currently present inside of the vim buffer, some might be temporary and most of them will be permanent. &#xA;```&#xA;:map &#xA;```&#xA;If you type in map being in the command mode, it will list out the maps. If you want to stop seeing the list just press q, and you will be back in the editor. You can also view the maps in the normal, visual, insert, or any other modes by prefixing n, v, i, or other modes with map.&#xA;&#xA;## Key Mapping in Modes&#xA;To map a key binding explicitly in the normal mode or any other modes, we have to prefix the word such as n for normal, v for visual or i for insert, and so on. The map will only work in the provided mode so the same keys may have several different maps depending on the current mode, but that can create a bit of confusion so avoid doing that. Let&#39;s map a key in the normal mode for example,&#xA;```vim&#xA;:nmap &lt;C-s&gt; :w&lt;CR&gt;    &#xA;```&#xA;&#xA;This is pretty bad for Vim beginners as it will spoil the real Vim experience of saving with :w, If you feel comfortable using Ctrl + s to save a file, the above map will be perfect. 
It will map the key Ctrl + s to typing :w and then pressing enter to save changes to the file. &#xA;&#xA;We can also map certain non-alphabetical keys such as CTRL, ALT, and others; this helps you save the time of leaving a mode and entering it again to edit the text. The following is a perfect example of such a map.&#xA;&#xA;```vim&#xA;:imap &lt;C-c&gt; &lt;ESC&gt;&#34;+pa&#xA;```&#xA;The above command is **mapped in insert mode** as the mode is set to insert at the command&#39;s beginning. The CTRL+c keys are mapped to first escaping out of insert mode and then pasting from the &#34;+ register, which is the device clipboard here. In the end, we again get back to insert mode just to the right of the current cursor position. So, that is how we can map keys to do a task and save a bit of time. &#xA;&#xA;## Adding Keymaps in vimrc file&#xA;So you can now map in any available mode in Vim and test it out in the current buffer, but if you notice, when you close Vim your custom keymaps stop working, as temporary maps only last for the current session. To make them permanent you need to add the exact map command to the vimrc file. Just don&#39;t add the &#34;:&#34;, since the vimrc is not the command line; it should contain only the pure map command.&#xA;&#xA;```vim&#xA;nmap &lt;C-s&gt; :w&lt;CR&gt;&#xA;nmap &lt;C-p&gt; &#34;+p&#xA;vmap &lt;C-y&gt; &#34;+y&#xA;``` &#xA;The above commands are pasted in the vimrc file. Once sourced, these keymaps stay permanently unless you remove them from the file.&#xA;&#xA;## Keymap Modifications&#xA; We can also use variants of the map command such as noremap. A plain map is recursive, which means the right-hand side of the mapping is itself expanded through any mappings already defined elsewhere, so the result depends on the existing maps. It is preferred to use noremap, as it will not mix with the existing maps and gives a fresh key binding. 
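As a quick illustration of the recursive versus non-recursive difference (a deliberately silly pair of maps, purely for demonstration):

```vim
" j has been remapped, so it no longer means 'move down'
map j gg
" recursive: Q is resolved through the j mapping above, so Q jumps to the top
map Q j
" non-recursive: W uses j's built-in meaning, so W just moves down one line
noremap W j
```

With these in place, pressing Q lands you on the first line while W moves down one line, which is exactly why noremap is the safer default.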
The mapping in Vim is quite versatile and there are quite a lot of modifications you could make so that your key bindings feel and work according to you. For further in-depth understanding of keymap modifications in Vim, you can check out the documentation  [here](https://vim.fandom.com/wiki/Mapping_keys_in_Vim_-_Tutorial_(Part_1)).&#xA;&#xA;&#xA;## Keymapping Usecases&#xA;Now, it&#39;s entirely up to you to make keymaps according to your preference and choice. You can use your creativity and knowledge to make powerful and efficient keymaps.&#xA;&#xA;For some insight, I&#39;d like to share some cool and productive keymaps which I personally use very frequently when coding in a particular programming language. Note that a &#34; comment cannot go on the same line as a map command (it would become part of the mapping), so the comments sit on their own lines. &#xA;&#xA;```vim&#xA;nmap &lt;C-s&gt; :w&lt;CR&gt;&#xA;&#34; Python run macro&#xA;nnoremap py :!python %&lt;CR&gt;&#xA;&#34; C++ compile and run macro&#xA;nnoremap cpp :!c++ % -o %:r &amp;&amp; ./%:r &lt;CR&gt;&#xA;&#34; Bash sourcing macro&#xA;nnoremap sh :!chmod +x % &amp;&amp; source %&lt;CR&gt;&#xA;&#34; C compile and run macro&#xA;nnoremap c :!clang % -o %:r &amp;&amp; ./%:r &lt;CR&gt;&#xA;&#34; Java compile and run macro&#xA;nnoremap jv :!javac % &amp;&amp; java %:r &lt;CR&gt;&#xA;```&#xA;&#xA;&#xA;The above mappings will run, or compile and run, the file you are currently editing in Vim. That is just an overview of how you can use key mappings in Vim for any programming language or tool. Vim lets the keys do the heavy lifting very easily, saving a lot of time and frustration. This is how you implement DRY (don&#39;t repeat yourself) perfectly. I personally think this is just perfect for certain things which are quite common and sometimes daunting to do again and again. So just have a look and play around with Vim keymaps; it is a power with which one can excel at saving time and energy. This is just flawless. OK, that was too much from my side. 
&#xA;&#xA;## Keymapping and Macros&#xA;Some of the keymappings provided above are quite interesting for a person trying to understand macros and key bindings. A macro is a bunch of commands packed together so they can be invoked very efficiently; not the exact definition, but it boils down to simplifying things and saving time. Keymapping is the same idea, but bound to keys. &#xA;&#xA;With the snippets and explanations of Vim keymaps above, it should now be easier for a beginner to understand keymappings in Vim and how to customize them.&#xA;&#xA;I hope it helped; thank you for reading my understanding of Vim keymapping. Below are some of my keymaps currently in Vim for Windows. Some of them were already shown in the code snippets.&#xA;![vimkmap.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1623654888460/pBfJO2jkZ.png)&#xA;&#xA; &#xA;So, keymaps are quite powerful and can be a bit difficult to set up in a single try. Keep experimenting with keymaps and make Vim the powerhouse of productivity and customization. Happy Viming :)</content>
      <type></type>
    </item>
    <item>
      <title>Setting up Vim for BASH Scripting</title>
      <link>https://www.meetgor.com/posts/vim-for-bash</link>
      <description>Vim and BASH? Bash Scripting is a powerful skill to have as a programmer because we find Linux almost everywhere and to get through it you must have a command o</description>
      <pubDate>Thu, 10 Jun 2021 00:00:00 UTC</pubDate>
<content>## Vim and BASH? &#xA;Bash scripting is a powerful skill to have as a programmer, because we find Linux almost everywhere, and to get through it you must have a command over its interface, which is generally the BASH shell. Vim is a great option for doing this, probably the best out there, because Vim is pre-installed in almost every Linux distribution. This is not an in-depth setup for BASH on Vim; it is a simple editorial on starting up BASH scripting in the Vim editor. So, without wasting time on &#34;Vim features&#34;, let&#39;s dive in with the setup for BASH in Vim.&#xA;&#xA;## Boilerplate macro&#xA;Setting up a bash script doesn&#39;t require much code, but in some cases it can still be a bit of a hassle, and to avoid the repetitive typing, one can easily set up a macro for the boilerplate BASH script.&#xA;&#xA;```vim&#xA;nnoremap bs i#!/bin/bash&lt;ESC&gt;o&#xA;```&#xA;OK, that was pretty basic, but it scales quickly and can be tailored as per your needs; here&#39;s a snippet with a function pre-loaded.&#xA;&#xA;```vim&#xA;nnoremap bs i#!/bin/bash&lt;ESC&gt;o&lt;ESC&gt;ofunction main(){&lt;ESC&gt;o&lt;ESC&gt;o}&lt;ESC&gt;ki&lt;S-TAB&gt;&#xA;&#xA;```&#xA;![type bs to load boiler-plate code](https://s6.gifyu.com/images/bsclip.gif)&#xA;&#xA;When bs is typed in normal mode, the macro enters insert mode, types in the required text, and escapes to move to the next line, continuing the same way. This could be extended further, such as reading some input, printing some text, or any other boilerplate that you find yourself repeating. &#xA;&#xA;## Sourcing Scripts&#xA;So, after creating the file, making the script executable and running it can feel a bit slow, as you have to go to the terminal, toggle the permission to run the script, and then run it. But pull on your seatbelt, as this is VIM! 
No dying of slowness here!&#xA;&#xA;```vim&#xA;nnoremap sh :!chmod +x % &amp;&amp; source %&#xA;```&#xA;&#xA;![type sh to run script](https://s6.gifyu.com/images/shclip.gif)&#xA;&#xA;When the sh keys are typed in normal mode, the command after the ! (bang) is executed in the terminal; the &amp;&amp; operator executes the second command only when the first command succeeds.&#xA; That is just so fast! Imagine doing this for long scripts, and especially for debugging: if leaving the editor wastes 2 minutes every time and you do it 10 times while debugging, you carelessly lose roughly 20 minutes! Improve your debugging skills, surely :)&#xA;&#xA;## Plugins&#xA;There are very few plugins out there for BASH in Vim, but it&#39;s quite feasible to write scripts even without any plugins. Two of the most supported and popular plugins for BASH in Vim are  [Bash-Support-Vim](https://www.vim.org/scripts/script.php?script_id=365) for auto-completion and [Shell-Check](https://www.shellcheck.net) for finding and correcting bugs or errors in the script. &#xA;The former plugin is quite awesome and can greatly improve the speed of scripting for BASH, with commands such as shortcuts for writing if-else, while and for loops, commenting, and other aspects of scripting. The thorough documentation for such commands is also provided on the  [plugin website](https://wolfgangmehner.github.io/vim-plugins/bashsupport.html). &#xA;It can be used for autocompleting keywords and writing nested if-else and other logical constructs in BASH scripting. Again, you can do absolutely fine without plugins in Vim, as it is heavily customizable to the user&#39;s needs, and it can be very rewarding to set up your own configuration for BASH. 
You can use standard (barebones) Vim for auto-completion as well, with CTRL+N and CTRL+P to move down and up the completion list respectively.&#xA;&#xA;&#xA;## Some More Tricks&#xA;BASH in Vim can be quite versatile to use, as Vim provides some custom add-ons to make scripting more functional and easier to follow. Some tricks, such as autocompletion, can be quite inconvenient at first but get really smooth after a few runs at writing scripts.&#xA;- In BASH scripts there are quite a lot of brackets to play with; to jump around swiftly between such parentheses or brackets you can use **% to move from an opening to a closing bracket or vice versa**.&#xA;- You can execute any terminal command from Vim: enter command mode and type ! followed by the command you would like to execute. This runs the command in the terminal without you having to leave the editor; it saves a ton of time and it&#39;s blazingly fast.&#xA;- With the above trick, you have a kind of superpower within Vim to make, build, source, and run files or scripts without leaving the editor; you are effectively running bash within bash. OK! That was pretty fast. Don&#39;t die of quickness now!&#xA;&#xA;Writing BASH scripts in Vim can also be boosted by using some built-in commands, such as commenting multiple lines at once, and other unexplored features which can be learned along the way to understanding the flow of Vim and BASH together. Happy Coding and Viming :)</content>
      <type></type>
    </item>
    <item>
      <title>Setting up Vim for Python</title>
      <link>https://www.meetgor.com/posts/vim-for-python</link>
      <description>Introduction Vim is quite a powerful text editor which can add performance to the already fast typed language Python. Vim can be highly customizable and efficie</description>
      <pubDate>Sun, 06 Jun 2021 00:00:00 UTC</pubDate>
<content>## Introduction&#xA;Vim is quite a powerful text editor which can add speed to working with Python, a language that is already fast to write. Vim is highly customizable and efficient to use, as it has the power of **custom plugins and plugin managers, key mappings**, and the most critical weapon of Vim - access to the terminal straight away.&#xA;This is not a full-featured guide to using Vim for Python; it&#39;s just a quick setup for using Python on Vim blazingly fast!!&#xA;&#xA;## Plugin Managers&#xA;So let us start making Vim, the text editor, suitable for a Python programmer. Firstly, we&#39;ll need a Vim plugin manager. There are different plugin managers out there; each of them serves the same purpose of installing, upgrading, and managing the plugins for Vim. You can install any one of them and get up and running.&#xA;&#xA;-  [Vim Plug](https://www.vim.org/scripts/script.php?script_id=4828) &#xA;-  [Vundle](https://github.com/VundleVim/Vundle.vim) &#xA;-  [Pathogen](https://github.com/tpope/vim-pathogen) &#xA;&#xA;These are some of the finest and best-supported plugin managers for Vim. You can use any of these plugin managers and get started by installing some plugins.&#xA;&#xA;## JEDI-VIM- Auto completion &#xA;First, I would like to install Jedi for code completion in Python. The plugin is simple and straightforward to install using any of the above plugin managers. Jedi-Vim provides some neat and clean **syntax analysis and autocompletion for Python in Vim**. You&#39;ll find the docs and installation process here  [JEDI-VIM ](https://github.com/davidhalter/jedi-vim) &#xA;&#xA;## NERDTree-File manager&#xA;Next, it would be great to install a file manager for managing the files and folders in your code directories. We can simply use the NERDTree plugin for this. NERDTree is quite a **fantastic plugin for file management in Vim**. It simply makes Vim feel like VS Code. 
The installation and docs can be found here  [NERDTree](https://github.com/preservim/nerdtree).&#xA;&#xA;NERDTree commands can be long to type; to shorten them, let&#39;s start mapping, and for that we can start editing our vimrc. &#xA;&#xA;```vim&#xA;set number&#xA;syntax enable&#xA;filetype indent on&#xA;set tabstop=4&#xA;set softtabstop=4&#xA;set autoindent &#xA;set encoding=utf-8&#xA;``` &#xA;This can be an addition to your existing vimrc, as you might already have a setup for plugin managers. You can pick the color scheme of your choice; just don&#39;t waste too much time selecting it. Feel free to modify the vimrc according to your knowledge and preference. &#xA;&#xA;## Keymappings&#xA;We move on to the key mappings for NERDTree and other features. You can make mappings generally in normal mode, but there might be cases where you need maps for visual mode or insert mode as well; that depends entirely on the user :)&#xA;&#xA;To map in normal mode, here is the command, to be specific:&#xA;&#xA;&#xA;```vim&#xA;nnoremap &lt;C-n&gt; :NERDTree&lt;CR&gt;&#xA;``` &#xA;&#xA;This will map CTRL+n to open the NERDTree file manager to the left, saving a bit of time and avoiding frustration. Feel free to add any keymap of your choice; this is just for demonstration. &#xA;You can further automate switching between split windows: instead of typing CTRL+w twice, you can reduce that to just typing w.&#xA;&#xA;```vim&#xA;nnoremap w &lt;C-w&gt;&lt;C-w&gt;&#xA;``` &#xA;&#xA;## Integrated Terminal Macros&#xA;We can open a terminal window as a split inside the editor. We simply use the command :terminal to split the window horizontally, where the upper split will be the terminal and the lower window will have the editor. This is quite a neat feature of Vim; it blends with the terminal so well that we can switch between the terminal and the editor very quickly. 
For that, you can create a macro if you need to fire up a terminal again and again.&#xA;```vim&#xA;nnoremap &lt;C-t&gt; :terminal&lt;CR&gt;&#xA;```&#xA;If you place the above macro in your vimrc and then type Ctrl+t, the exact thing will happen to fire up a terminal split but with fewer keystrokes and without leaving the normal mode. &#xA;Also, the NERDTree macro can be also fruitful with this as it will make a full-blown IDE-like feeling inside of Vim.&#xA;![Demonstrate macros for NERDTree and terminal split](https://s6.gifyu.com/images/screenrecording.gif)&#xA;&#xA;## Running the Code with a snap&#xA;&#xA;We can automate the process of running python scripts inside of vim. Instead of typing out the entire command for executing python script from vim. We can use keymaps for it as they can significantly boost the time required to run and debug the code. &#xA;&#xA;&#xA;```vim&#xA;nnoremap py :!python %&#xA;``` &#xA;&#xA;This is a small map but can save a lot of time and give some motivation to use vim as you run the code blazingly faster than other editors. I have used py, but it can cause some problems as p is already mapped for pasting. So it&#39;s better to use other key combinations such as ty, yh, or any other key combination of your choice. Try it out and add your own flavor that&#39;s how we all learn.&#xA;&#xA;So, that&#39;s the basic set-up for python on vim, you can make more custom mappings, find more plugins and test out which work out the best for your workflow. Happy Coding and Viming ;)</content>
      <type></type>
    </item>
    <item>
      <title>Why and How to make and use Vim as a text editor and customizable IDE</title>
      <link>https://www.meetgor.com/posts/vim-text-editor-ide</link>
      <description>We all are asked to use visual studio code and other rich looking editors as a beginners text editor or an IDE but that makes a habit of a rich and pleasing int</description>
      <pubDate>Sat, 20 Feb 2021 00:00:00 UTC</pubDate>
<content>We are all asked to use Visual Studio Code and other rich-looking editors as a beginner&#39;s text editor or IDE, but that builds the habit of relying on a rich and pleasing interface for running and debugging programs, and that is the reason we don&#39;t feel comfortable using the command line or the terminal. The terminal and the command line are quite important to learn and are powerful as well.  &#xA;&#xA;Vim is a text editor; not just any text editor, but a special one and one of the fastest out there if you use it wisely. Vim can be used in the terminal, which means it can run terminal commands inside the editor interface. That makes it quite a great option to improve your terminal command skills and also to enhance your Linux knowledge. You can run program files from Vim itself with just a few keystrokes. This text editor is preinstalled in most Linux distributions, so you can quickly edit some files without going anywhere.  &#xA;&#xA;That being said, Vim is not easy to learn; you have to put in some effort to reap the fruits of saving time in the longer run. It takes time to get used to the commands and key combinations of Vim.  &#xA;&#xA;## Download VIM&#xA;&#xA;But, let us get our feet wet by learning a few commands in Vim.  &#xA;&#xA;To download Vim you can go to the official  [Vim website](https://www.vim.org/download.php) .  &#xA;&#xA;First, let us understand how to open Vim from the command line or the terminal.  &#xA;&#xA;If you are on Windows, you have to add the vim.exe file&#39;s path to the environment variable PATH. After that open CMD or the terminal for Windows 10, type vim, and press enter. You are in Vim.  &#xA;&#xA;If you are on Linux or macOS, then open the terminal, type vim, and press enter.  &#xA;&#xA;After you are in Vim, you will surely be trying to type something, right? Else you are not a geek ; )&#xA;&#xA;## VIM Basics&#xA;&#xA;You won&#39;t see anything happening even if you are typing. This is because there are 4 basic modes in Vim; understand each one like a room.  
The modes are&#xA;&#xA;1.  **Normal Mode**&#xA;&#xA;2.  **Insert Mode**&#xA;&#xA;3.  **Command Mode**&#xA;&#xA;4.  **Visual Mode** &#xA;&#xA;Wait a second, you should learn how to move around in Vim first, but you cannot learn everything at once, so take it bit by bit.&#xA;&#xA;To navigate around Vim blazing fast you can use h, j, k, and l.  I know it&#39;s weird, but you will get used to it.&#xA;&#xA;**h**  -&gt; move left.    ( hop left)&#xA;&#xA;**l**   -&gt; move right.   (opposite of left)&#xA;&#xA;**k**  -&gt; move up.       ( keep it up)&#xA;&#xA;**j**   -&gt; move down.   ( jump down!!)&#xA;&#xA;If you want to apply a movement several times, you can type a number before h, j, k, or l to move that many times in that direction.&#xA;&#xA;Let&#39;s say you type 23j; here you will go down 23 lines. You can also use : and a number to jump to that particular line; :12 moves you to the 12th line. &#xA;&#xA;The default mode is **Normal mode**, where you navigate in your file, like a passageway between different rooms. Every other mode is accessible via Normal mode; you have to enter Normal mode before switching to the other three modes. By default, after opening Vim you are in Normal mode. You cannot start inserting text in the file while in Normal mode, but you can replace and edit the existing text in the file. **Press ESC to enter Normal mode**.&#xA;&#xA;The main mode in VIM is **Insert mode**, where the actual typing and text editing takes place. **Press i to enter Insert mode**. After pressing i you can type anything you want. After you feel satisfied with writing in Vim, you can escape out of Insert mode (the room) and enter Normal mode (the passageway). Now, the biggest question VIM beginners ask: **&#34;HOW ON EARTH SHOULD I EXIT OUT OF VIM ??&#34;**. Take a breath, dear, you need to save your file first. 
To do that, let&#39;s explore a different mode.&#xA;&#xA;The next crucial mode is **Command mode**, where you have to remember some commands to get things done. Let&#39;s continue from where we left off. To save our file,  **press :  and voila!! you are in Command mode**. From there, it depends on whether you want to save the file, quit Vim, save the file and quit Vim, or quit Vim without saving the file. OK, that is too much to take in one go. Be sure to be in Command mode before pressing the following keys (i.e. press the colon first if you were in Normal mode).&#xA;&#xA;&#xA;**w**     -&gt; save file.&#xA;&#xA;**q**      -&gt; quit vim.&#xA;&#xA;**wq**     -&gt; save file and quit vim.&#xA;&#xA;**q!**      -&gt; quit vim without saving the file.&#xA;&#xA;These are a few of the commands available in VIM :)&#xA;&#xA;So now let&#39;s talk about **Visual mode**, the little helper mode (room) for cut/copy in Vim. There are several ways to select text in Visual mode: you can choose character by character, line by line, or block by block. **To enter Visual mode press v**. This gets you into character-by-character highlighting, whether you want to cut the text or copy (yank) it. After selecting a portion you can either cut it or copy it into a Vim register. &#xA;&#xA;**y**     -&gt; Copy (yank) the selected portion.&#xA;&#xA;**d**     -&gt; Cut (delete) the selected portion.&#xA;&#xA;You can use **V (Shift+V) to enter Line Visual mode**; this selects or highlights the text line by line instead of letter by letter.&#xA;&#xA;You can also use **Ctrl+V to enter Block Visual mode**; this selects a rectangular block of text.&#xA;&#xA;Here is a good way to remember switching between basic modes in VIM -&gt;&#xA;&#xA;![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1613912016216/SXavZAkPH.png)&#xA;&#xA;&#xA;Switching between basic modes in VIM&#xA;&#xA;It seems quite a bit slower than VS Code, doesn&#39;t it? 
Ya, I know it&#39;s kinda slow, but with time it picks up pace, just like a train. Spend time in Vim and it will be worth it. Few text editors are as powerful as Vim, and that is so for a reason. If you are curious to dive deeper, just dive in fully, or stay with VS Code and your fancy interface.  There is a great community around Vim and you&#39;ll surely be happy to be a part of it. Oh! Quick fact: even VS Code has an extension for Vim ;)&#xA;&#xA;##  Some more Basics&#xA;&#xA;You can do most of your basic programming with the commands above, but to be a bit faster you can use some tweaked commands to do things like:&#xA;&#xA;**I**     -&gt; Enter insert mode at the first character of the line. ( i puts you in insert mode at the cursor)&#xA;&#xA;**a**  -&gt; Enter insert mode just right of the cursor.&#xA;&#xA;**A**   -&gt; Enter insert mode at the end of the line.&#xA;&#xA;**o**    -&gt; Enter insert mode on a new line below the cursor.&#xA;&#xA;**O**   -&gt; Enter insert mode on a new line above the cursor.&#xA;&#xA;These commands are optimized ways to enter insert mode exactly where you need it, and they can quite significantly improve editing speed.&#xA;&#xA;You can also get back to Normal mode by pressing Ctrl+C or Ctrl+[, whichever you feel good with.&#xA;&#xA;Pasting is a programmer&#39;s everyday task. That is fine as a beginner, as long as you are not running copied code as-is without understanding it. In Vim you can paste previously copied text from Visual mode, or a deleted chunk, just by pressing p. Remember that p pastes starting right after the cursor position. You can also use &#34;0p to paste from the yank register (register 0), which always holds the last copied text.&#xA;&#xA;If you are pasting from an external source, i.e. from outside Vim, you have to use &#34;+p, which pastes from the system clipboard right after the cursor.   
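&#xA;&#xA;Since registers can feel abstract at first, here is a quick reference sketch of the standard yank/paste commands built on the keys above (the register name a is just an example; any letter from a to z works):&#xA;&#xA;```&#xA;yy      -&gt; yank (copy) the current line&#xA;3yy     -&gt; yank 3 lines starting at the cursor&#xA;p       -&gt; paste the last yanked or deleted text after the cursor&#xA;&#34;ayy    -&gt; yank the current line into register a&#xA;&#34;ap     -&gt; paste the contents of register a&#xA;:reg    -&gt; list all registers and their contents&#xA;```&#xA;&#xA;:reg is handy when you forget which register holds what.&#xA;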
&#xA;&#xA;If you want to search for a word in a file, you could use navigation commands like hjkl, but there&#39;s a faster way: if you press / and type the word you are searching for, you will jump to the first instance of that word. If there are multiple instances of the same word, you can press n to go to the next instance, and Shift+N to move to the previous instance.&#xA;&#xA;Some more word searching ahead. Say you are already on the word whose instances you want to find; then you can press * to move to its next instance, and # will do the same backward. Oh! That was smooth.&#xA;&#xA;You can see that we are only using our keyboards while using Vim. This is a great feature of old-but-gold text editors like Vim: they make you faster on the keyboard and help you avoid mouse traps. &#xA;&#xA;If you want to jump across a block or a paragraph, you can use { and } to go to its beginning and its end respectively.&#xA;&#xA;**G**    -&gt; to the end of the file.&#xA;&#xA;**gg** -&gt; to the beginning of the file.&#xA;&#xA;**L**     -&gt; move the cursor to the bottom of the screen.&#xA;&#xA;&#xA;```&#xA;:%s/old/new/g&#xA;``` &#xA;   in Command mode -&gt; replace the old word with the new word throughout the file (g means globally, on every match in every line)&#xA;&#xA;Ctrl+p   in Insert mode    -&gt; auto-complete using words from the current file.&#xA;&#xA;So these are some quick and pretty good commands to get started with and get a feel for Vim. But this is using Vim at only 25% efficiency; you would still have to go to the command line or terminal to compile and run the program you have written, which can waste a lot of time. As a beginner, though, this is pretty decent and helps you learn core programming and the behind-the-scenes of compiling, along with new commands. It gets pretty exciting once you have the basics cleared. But be sure to make it through.&#xA;&#xA;It&#39;s time to use VIM to its actual powers. VIM + terminal. It can get quite fast. 
So beware!!!&#xA;&#xA;# Running Programs from VIM&#xA;&#xA;It turns out that you can go to Command mode and run terminal commands from there; just put ! before the command.&#xA;&#xA;Linux/Mac users: use the terminal from Vim&#xA;&#xA;&#xA;```&#xA;:!ls   &#xA;``` &#xA; -&gt; ls command from vim&#xA;&#xA;Windows: use CMD from Vim&#xA;&#xA;&#xA;```&#xA;:!dir&#xA;``` &#xA;   -&gt; dir command from vim&#xA;&#xA;You can also use any other command that is appropriate to execute from Vim.&#xA;&#xA;Here is where the trick comes in: you don&#39;t have to quit Vim anymore to compile or run a program.&#xA;&#xA;Let&#39;s say you are writing C++/C code and you have a compiler such as clang installed. Then you can use the c++ (or cc for C) command with the appropriate file handles to compile from Vim. Here&#39;s the command:&#xA;&#xA;&#xA;```&#xA;:!c++ % -o %:r&#xA;``` &#xA;&#xA;&#xA;Here, % stands for the current file and :r removes the extension of the file, so we are creating an executable file named after the source file but without its extension. And to execute the file, simply type&#xA;```&#xA;:!./%:r&#xA;```&#xA;and this will run the program (the ./ prefix is needed on Linux/macOS; on Windows drop it, and if the file is not identified as a batch file or executable, add .exe after :r).&#xA;&#xA;For Python or another interpreted language, it is even simpler: just the interpreter&#39;s name with the file name as-is&#xA;```&#xA;:!python filename.py&#xA;```&#xA;```&#xA;:!python %&#xA;```&#xA;You can find your preferred language&#39;s CLI tool, set it up in your environment, and plug it into Vim the same way.&#xA;&#xA;**Mapping in VIM**&#xA;&#xA;But this is not it! There is a long way to go to master Vim, and it&#39;s not possible to know everything at once. So, to get faster, I&#39;ll introduce Vim&#39;s speediest tool and the part where it really shines: mapping!!&#xA;&#xA;You can map (assign) some keys to perform a particular task in different modes. 
Just imagine typing cpp and running the program, or py and running the program, with no need to reach for the mouse to click a run button and close a window. It gets really blazing fast. Only your creativity is the limit. These mappings feel a bit like macros: a few keystrokes expand into a whole command.&#xA;```&#xA;nmap cpp :!c++ % -o %:r &amp;&amp; ./%:r&lt;CR&gt;&#xA;```&#xA;```&#xA;nmap py :!python %&lt;CR&gt;&#xA;```&#xA;In the above commands, nmap means &#34;in Normal mode, MAP cpp&#34;, followed by the command to run, which in this case executes in Command mode; the trailing &lt;CR&gt; stands for Enter, so the command runs immediately. There are various permutations you can make, like imap or vmap for Insert mode and Visual mode, and so on. A mapping is basically a left-hand side mapped to a right-hand side: the key combination on the left, and the command to be executed on the right. &#xA;&#xA;You will surely say, &#34;Do I have to do this every time I run the program or every time I use VIM?&#34;&#xA;&#xA;Well, you can :) but you should not. Because there&#39;s a way around it, and this is the fuel of Vim, the soul of its speed.&#xA;&#xA;Yes, I am talking about the VIM CONFIGURATION FILE, or vimrc&#xA;&#xA;# VIM Run Commands File (vimrc)&#xA;&#xA;This is a really handy feature of Vim: you can tailor your editor to your needs. You write a command once and it will be run from the vimrc file every time Vim starts. Making a vimrc file is a bit overwhelming in the beginning, so you should have concrete knowledge of Vim commands and remember them as well. &#xA;&#xA;&#xA;- Make a file called ~/.vimrc&#xA;&#xA;- You should be using Vim to edit or write it, of course! Type vim ~/.vimrc in the terminal or CMD&#xA;&#xA;- Enter some commands such as filetype indent on or set number; you can find a  [sample vimrc file ](https://vim.fandom.com/wiki/Example_vimrc) online and understand its commands and uses.&#xA;Paint your plain vimrc canvas with your own creativity.&#xA;&#xA;From here you can add the mappings and other key combinations that you have been using while learning Vim to your vimrc. 
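&#xA;&#xA;To tie this together, here is a minimal sketch of what such a ~/.vimrc might look like, combining a few common settings with the mappings from this article (the specific settings are just examples; adapt them to your taste):&#xA;&#xA;```&#xA;&#34; ~/.vimrc: a minimal starter sketch&#xA;syntax on                 &#34; enable syntax highlighting&#xA;set number                &#34; show line numbers&#xA;filetype indent on        &#34; filetype-aware indentation&#xA;set tabstop=4 shiftwidth=4 expandtab&#xA;&#xA;&#34; mappings from this article; &lt;CR&gt; presses Enter for you&#xA;nmap cpp :!c++ % -o %:r &amp;&amp; ./%:r&lt;CR&gt;&#xA;nmap py :!python %&lt;CR&gt;&#xA;```&#xA;&#xA;Vim reads this file on startup, so these settings and mappings are available in every session.&#xA;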
VIM also has plugins and more customizable tweaks that can enhance your productivity. &#xA;&#xA;You can always get more help in Vim by using :help commandname. You can also learn from vimtutor, run from the terminal or CMD.&#xA;&#xA;If you want to dive deeper, here are a few resources and video links to take your Vim skills to a new level.&#xA;&#xA; [VIM Official Documentation. ](https://www.vim.org/docs.php) &#xA;&#xA; [VIM fandom Wiki](https://vim.fandom.com/wiki/Vim_Tips_Wiki) &#xA;&#xA; [130+ VIM commands. ](https://catswhocode.com/vim-commands/) &#xA;&#xA; [Vim Interactive Guide.](https://scotch.io/tutorials/getting-started-with-vim-an-interactive-guide) &#xA;&#xA; [Vim as your editor.](https://www.youtube.com/watch?v=H3o4l4GVLW0&amp;list=PLm323Lc7iSW_wuxqmKx_xxNtJC_hJbQ7R) &#xA;&#xA;Vim is a tool that can get quite handy for programmers, and it stands out from the rest of the text editors. By using Vim you will be among the rare people who can make something from almost nothing, because in Vim you build everything yourself, from commands to mappings, from customization to functionality. This cannot be done overnight for sure, and you might say you are wasting time in the initial stage, but no, you are actually building a foundation for more efficiency and speed. If you still want to use VS Code, you can use it freely, because it won&#39;t matter much; at the end of the day it&#39;s programming skills that win the race.&#xA;&#xA;**This is quite a huge topic and everything could not be explained in one article; it should be self-explored and learnt from various sources to get the best out of it as per your needs. This was a quick guide to how and why you should be using VIM as a text editor or as a customizable IDE. As you might have seen, its functionality is quite customizable and it is quite a powerful tool for a programmer. Happy Coding and VIMing ;)**</content>
      <type></type>
    </item>
    <item>
      <title>Django Basics: Database Configurations</title>
      <link>https://www.meetgor.com/posts/django-database-conf</link>
      <description>Introduction In this part of the series, we will create an external database and configure the settings for that database. We also look into the process of migr</description>
      <pubDate>Mon, 01 Jan 0001 00:00:00 UTC</pubDate>
      <content>## Introduction&#xA;&#xA;In this part of the series, we will create an external database and configure the settings for that database. We also look into the process of migrations, which converts the model we created in the previous part into the actual structure/schema of the database. I&#39;ll be using `PostgreSQL` for most of the demonstrations, but it should be similar for other database management tools. This part will link your Django project to a local database.  &#xA;&#xA;## Selecting a Database&#xA;&#xA;If you have read my previous article about models, you will have got a glimpse of the tools to manage a database, but not quite how to choose a database for your project. This section covers that specifically. We will see what options you have while selecting a database.&#xA;&#xA;- SQL databases&#xA;  - SQLite&#xA;  - PostgreSQL&#xA;  - MySQL&#xA;  - MariaDB&#xA;  - Oracle&#xA;&#xA;&#xA;Selecting a database hugely depends on the type of application you are going to make, but most of the time SQL is the choice, simply because it has dominated a wide variety of application types for over 4 decades. Still, NoSQL is growing in popularity and has advantages over SQL in many modern applications. You need to analyze your project a bit deeper and understand the flow of data to make a decision between SQL and NoSQL, but most of the time it&#39;s gonna be SQL. &#xA;&#xA;**Also, Django doesn&#39;t officially support NoSQL, so you&#39;ll have to turn to third-party libraries to integrate and manage such a database.**&#xA;&#xA;After deciding the type of database, you have one more decision to make here: picking a DBMS tool. There are a lot of databases like PostgreSQL, MySQL, MariaDB, Oracle, etc. Pick whichever you feel comfortable with and whichever suits your project architecture and requirements more closely. 
Though there might be very few differences among the SQL database tools, a few things distinguish one from the other. &#xA;&#xA;## Creating a Database&#xA;&#xA;To create a database, you go to the admin app of the DBMS tool you are using; for Postgres it&#39;s pgAdmin, for MySQL it&#39;s MySQL Administrator or phpMyAdmin. You will need to do a little research on setting up a database locally for your project. But Django already comes paired with `SQLite`, which is a relational database with a few quirks. It is really great for getting started with a project without creating or managing an entire database system. The SQLite database is contained entirely in a file called `db.sqlite3`. &#xA;&#xA;If you want to work with a particular database like PostgreSQL, MySQL, etc., you need to create the database using its management tool and keep the configuration data like `name`, `host`, `password`, etc. handy after creating the database. &#xA;&#xA;I&#39;ll give a demo of creating a simple database in PostgreSQL; it may be a bit different in other DBMS tools, as each of them has its own GUI application. Install [Postgres](https://www.postgresql.org/download/) and [pgAdmin](https://www.pgadmin.org/download/).&#xA;&#xA;This is a demonstration of creating a database in pgAdmin 3: &#xA;&#xA;![postgres - db creation](https://res.cloudinary.com/dgpxbrwoz/image/upload/v1642325269/blogmedia/postgcreate_wnmyre.gif)&#xA;&#xA;This is how you create a database in pgAdmin 3; it should be quite straightforward and simple to follow in other DBMS tools as well. You will also need a password to open the admin interface for these tools, so keep that password handy, we&#39;ll need it later.&#xA;&#xA;The following is the process for creating a database/schema in MySQL:&#xA;![mysql - db creation](https://res.cloudinary.com/dgpxbrwoz/image/upload/v1642327473/blogmedia/mysqlcreate_bnkqxg.gif)&#xA;&#xA;This will create an empty database on your local machine. 
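&#xA;&#xA;If you prefer the command line over a GUI, the same empty database can also be created with a single SQL statement, shown here for Postgres (the name blogapp matches the database used in the configuration later in this article; substitute your own):&#xA;&#xA;```sql&#xA;-- run inside psql, connected as the postgres superuser&#xA;CREATE DATABASE blogapp;&#xA;```&#xA;&#xA;MySQL accepts the same statement from its mysql client.&#xA;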
We&#39;ll want our Django project to link to that particular database and use it inside its applications. In the next section, we&#39;ll configure the `settings.py` file to access the database on the local machine.&#xA;&#xA;## Configurations for Database&#xA;&#xA;We need to configure the `settings.py` file to integrate the database into our project. Django has a dedicated section for the database in that file. By default, the database configuration is created for the SQLite database which, as I said earlier, is the default database that Django uses if nothing else is specified and configured. &#xA;&#xA;So, when you first see the configuration for the database in the `settings.py` file, you will see something like the following:&#xA;&#xA;```python&#xA;DATABASES = {&#xA;    &#39;default&#39;: {&#xA;        &#39;ENGINE&#39;: &#39;django.db.backends.sqlite3&#39;,&#xA;        &#39;NAME&#39;: BASE_DIR / &#39;db.sqlite3&#39;,&#xA;    }&#xA;}&#xA;```&#xA;For PostgreSQL or any other database, we will require more things than the `sqlite` configuration. Namely, we will need the following:&#xA;&#xA;- `name` of the database&#xA;- `user` of the database&#xA;- `password` for that user&#xA;- `host` and `port` for the database. &#xA;&#xA;The port is optional, as most database tools pick their default port. For PostgreSQL the default port is `5432`, and for MySQL it is `3306`. Leave it blank like `&#39;PORT&#39;: &#39;&#39;,` if you are not sure what the port for your database is. We also need to specify the `ENGINE`, which is the database backend to be used for the project. There are options for specific databases as mentioned in the [documentation](https://docs.djangoproject.com/en/4.0/ref/settings/#engine).&#xA;&#xA;**PostgreSQL**:&#xA;&#xA;In PostgreSQL, the default user is `postgres`, though it might depend on your configuration and setup. You choose a password for this user when first installing Postgres on your machine, and pgAdmin will prompt you for it. 
The password to use is the one set for the default user of the database. The `name` is the name you gave while creating the database in the Postgres admin section. Finally, the host is generally `localhost`, as we are using a local database on our own system, and the `port`, as said earlier, is `5432` by default for PostgreSQL.&#xA;&#xA;```python&#xA;DATABASES = {&#xA;    &#39;default&#39;: {&#xA;        &#39;ENGINE&#39;: &#39;django.db.backends.postgresql&#39;,&#xA;        &#39;NAME&#39;: &#39;blogapp&#39;,&#xA;        &#39;USER&#39;: &#39;postgres&#39;,&#xA;        &#39;PASSWORD&#39;: &#39;12345678&#39;,&#xA;        &#39;HOST&#39;: &#39;localhost&#39;,&#xA;        &#39;PORT&#39;: &#39;5432&#39;,&#xA;    }&#xA;}&#xA;```&#xA;&#xA;**MySQL**:&#xA;&#xA;For MySQL, the default `user` is `root` and the `port` is `3306`. The password is the one you use to access the MySQL Administrator application. &#xA;&#xA;```python&#xA;DATABASES = {&#xA;    &#39;default&#39;: {&#xA;        &#39;ENGINE&#39;: &#39;django.db.backends.mysql&#39;,&#xA;        &#39;NAME&#39;: &#39;blogapp&#39;,&#xA;        &#39;USER&#39;: &#39;root&#39;,&#xA;        &#39;PASSWORD&#39;: &#39;&#39;,&#xA;        &#39;HOST&#39;: &#39;localhost&#39;,&#xA;        &#39;PORT&#39;: &#39;3306&#39;,&#xA;    }&#xA;}&#xA;```&#xA;&#xA;For further details on how to configure a database in a Django project, you can check out these articles:&#xA;- [PostgreSQL setup in Django](https://dev.to/mungaigikure/how-to-set-up-postgres-in-your-django-project-575i)&#xA;- [MySQL setup in Django](https://medium.com/@omaraamir19966/connect-django-with-mysql-database-f946d0f6f9e3)&#xA;- [Maria DB setup in Django](https://medium.com/code-zen/django-mariadb-85cc9daeeef8)&#xA;- [Mongo DB setup in Django](https://www.mongodb.com/compatibility/mongodb-and-django)&#xA;&#xA;MongoDB is a NoSQL database, so it will be quite different to set up and configure, at least in the longer run. 
I&#39;ve no experience with NoSQL, so please forgive me on this part. There are very rare instances where you will need a NoSQL database with Django. &#xA;&#xA;There are other configurations as well which might be specific to your purpose and can be explored in the [django documentation](https://docs.djangoproject.com/en/4.0/ref/databases/).&#xA;&#xA;### Verify the database connection&#xA;&#xA;To check if the database was actually linked to the Django project, I&#39;ll introduce you to a great tool in Django: `python manage.py shell` (make sure to be in the virtual environment).&#xA;This command opens a Python interpreter with your project loaded. It is an interactive console, so we can test some aspects of our project. For instance, to check whether the database is connected:&#xA;&#xA;Execute this code after running the command `python manage.py shell` from a virtual environment.&#xA;&#xA;```python&#xA;from django.db import connection&#xA;print(connection.ensure_connection())&#xA;```&#xA;   &#xA;   If this prints `None`, you are good to go. If instead you get a wall of error messages, something is wrong in the configuration or the database itself.&#xA;&#xA;![db connection test](https://res.cloudinary.com/dgpxbrwoz/image/upload/v1642342286/blogmedia/djb-8-db-connect_l4xqqr.png)&#xA;&#xA;### Installing database adapter&#xA;&#xA;Before we can do anything with the database, we need one more thing: a **database adapter**. Which one depends on the database you are using. The list is as follows:&#xA;- [psycopg2](https://pypi.org/project/psycopg2/) for PostgreSQL&#xA;- [mysqlclient](https://pypi.org/project/mysqlclient/) for MySQL and MariaDB&#xA;- [cx-Oracle](https://pypi.org/project/cx-Oracle/) for Oracle&#xA;&#xA;SQLite does not require an adapter, as the database is a file stored in the base directory as `db.sqlite3`.&#xA;&#xA;The above list is nothing but Python packages that allow the Django ORM (which under the hood is Python) to operate the database. 
To install them, you can simply use `pip`, as in `pip install psycopg2`, `pip install mysqlclient`, and so on. Make sure you are in a Python virtual environment. &#xA;&#xA;After installing the appropriate package, we can move on to the migration process.&#xA;&#xA;## Migrating the models &#xA;&#xA;Now we have a clean and fresh instance of a database created. What next? We&#39;ll now use the logic from the [previous part](https://mr-destructive.github.io/techstructive-blog/django/python/web-development/2022/01/04/Django-Basics-P7.html), where we created and designed the logic for the database, i.e. `Models`. We can now combine our physical database, i.e. the database we just created locally in a DBMS tool, with the logical model to give it structure and populate it. &#xA;&#xA;We&#39;ll perform a **migration** on our database. &#xA;&#xA;This is where the magic happens, and probably the step which should be most carefully executed when working with a real (production-level) database, because the commands you&#39;ll run directly affect the database. &#xA;&#xA;Making a migration is a two-step process. But what is a migration?&#xA;&#xA;The Django documentation describes migrations as version control for the database schema, and we can take their word for it. Migrations live in a folder of generated files that store the state of your database structure, just like commits in git. &#xA;&#xA;Let&#39;s see it practically.&#xA;&#xA;### Makemigrations&#xA;To record the tables, relations, and attributes that should be created in the database from a model, we use the command `makemigrations`. This command creates a file inside the `migrations` folder of the application whose model has been created or updated. 
It doesn&#39;t affect the database itself; it creates a file which, after a `migrate` command, will be translated into changes to the actual database through the ORM.&#xA;So, for any change to the attributes or structure of a model, we use the `makemigrations` command as below:&#xA;```&#xA;python manage.py makemigrations&#xA;```&#xA;   &#xA;   We do not use the above command for functional changes, i.e. operations that query the database or otherwise don&#39;t affect how the database is structured or stored. We do have to use the `makemigrations` command whenever the fields in the model change, even slightly.  &#xA;&#xA;### Migrate&#xA;&#xA;To actually create the tables, attributes, and relations among the tables in the database, we need to run the `migrate` command, which will pick up the latest files in the migrations folder and execute the queries that change the schema of the database. So, this is a very powerful command that performs SQL queries under the hood from Python. The demonstration below shows the output of these two commands against a Postgres database.&#xA;&#xA;```&#xA;python manage.py migrate&#xA;```&#xA;&#xA;### Demonstration&#xA;&#xA;![db migration](https://res.cloudinary.com/dgpxbrwoz/image/upload/v1642334950/blogmedia/migration_zi6d2c.gif)&#xA;&#xA;Let&#39;s follow the GIF/video in sequence:&#xA;&#xA;1. Create the database in the DBMS tool&#xA;2. Configure `settings.py` for the database connection&#xA;3. Install `psycopg2` or the DB-specific adapter.&#xA;4. Create the model.&#xA;5. Run `python manage.py makemigrations`&#xA;6. 
Run `python manage.py migrate` (if the above command succeeded)&#xA;&#xA;Here&#39;s the model from the demo:&#xA;&#xA;```python&#xA;class Article(models.Model):&#xA;    title = models.CharField(max_length=255)&#xA;    post = models.TextField()&#xA;    created = models.DateTimeField(auto_now_add=True)&#xA;    updated = models.DateTimeField(auto_now=True)&#xA;```&#xA;&#xA;If we have multiple apps and we only want Django to migrate the models of a specific app, we can simply add the name of the app to the command. By default, Django will migrate all the models of the apps in the `INSTALLED_APPS` list, so to restrict the migration to one app we need to name it explicitly.&#xA;&#xA;```&#xA;python manage.py makemigrations app_name&#xA;```   &#xA;This also applies to the `migrate` command.&#xA;&#xA;To understand the migration process more deeply, let us see another demonstration of changing the model and then applying migrations. We&#39;ll keep an eye on the `migrations` folder inside the app in the project. &#xA;&#xA;![migration folder demo](https://res.cloudinary.com/dgpxbrwoz/image/upload/v1642338399/blogmedia/migratefolder_e7fm7n.gif)&#xA;&#xA;We can see that initially, when we applied the migrations in the previous demo, there was a single migration file called `0001_initial.py`, but after we altered the title&#39;s `max_length` from 255 to 127 and executed the `makemigrations` command, another file called `0002_alter_article_title.py` was created that only contained the changed field. This is very similar to a git diff of the changes we are about to commit to the actual database. &#xA;&#xA;So, that was all about migrations. There is definitely a lot more to be covered about migrations and databases, but I&#39;ll leave you here for now. There is a lot to understand and learn before we can talk more about databases, and we now have a base for getting our heads around the rest. We now have a database linked to our project; still, we don&#39;t know how to fetch from or query it. 
That is a topic for another part.&#xA;&#xA;We didn&#39;t use the SQLite database, which Django provides by default, because in production it can handle only one write at a time, which is not ideal for most web applications. Still, there are use cases for this database: it is well suited as an embedded database for storage in desktop/Android applications. &#xA;&#xA;## Conclusion&#xA;&#xA;So, in this part of the series, we were able to configure a database, more specifically an external database (not the default SQLite DB), in a Django project. The concept of migrations was understood and demonstrated with live examples. Also, the models created and designed in the previous part were used to create the structure in an actual database. In the next part, we shall dive into the Admin section that Django provides to view and tinker with the local database. If you have any queries (not database queries) then please let me know. Thank you for reading, and until then, Happy Coding :)</content>
      <type></type>
    </item>
  </channel>
</rss>