<rss version="2.0">
  <channel>
    <title>Meet Gor - Type: sqlog</title>
    <link>https://meetgor.com</link>
    <description>Posts of type sqlog</description>
    <language>en-us</language>
    <pubDate>Sun, 15 Mar 2026 15:43:20 UTC</pubDate>
    <item>
      <title>Advent of SQL 2025 Day 15: Confirmation Phrase Dispatches</title>
      <link>https://meetgor.com/sqlog/advent-of-sql-2025-day-15</link>
      <description>Advent of SQL, Day 15 - Confirmation Phrase Dispatches We are on the final day of Advent of SQL! I can&#39;t believe it, I completed it! (with some help of course,</description>
      <pubDate>Sun, 28 Dec 2025 00:00:00 UTC</pubDate>
      <content>&#xA;&#xA;## Advent of SQL, Day 15 - Confirmation Phrase Dispatches&#xA;&#xA;We are on the final day of Advent of SQL!&#xA;&#xA;I can&#39;t believe it, I completed it! (With some help of course; I ~~don&#39;t~~ didn&#39;t know SQL very well, but these 15 days flipped that around.) I feel fresh and ready to start the year by going deep into SQLite and databases!&#xA;&#xA;Let&#39;s solve the last day and see if it has something remaining to teach us!&#xA;&#xA;We need to make a few changes.&#xA;&#xA;SQLite only gained the `-&gt;&gt;` operator for extracting JSON values from columns in version 3.38, so to keep things portable we use `json_extract` instead.&#xA;&#xA;So, replace the 9th line of the schema like this:&#xA;&#xA;```diff&#xA;-    marker_letter TEXT GENERATED ALWAYS AS (payload -&gt;&gt; &#39;marker&#39;) STORED,&#xA;+    marker_letter TEXT GENERATED ALWAYS AS (json_extract(payload, &#39;$.marker&#39;)) STORED,&#xA;```&#xA;&#xA;Once we have that, we also need to remove the `::jsonb` cast from each insert row. We can do that with `sed` or any other utility you like.&#xA;&#xA;```bash&#xA;sed &#34;s/&#39;\(.*\)&#39;::jsonb/&#39;\1&#39;/g&#34; day15-inserts.sql &gt; day15-inserts-sqlite.sql&#xA;```&#xA;&#xA;The rest of the Postgres schema works as-is: SQLite accepts almost any type name (like `SERIAL` and `JSONB`) on a regular table and simply maps it to one of its type affinities. The full converted script:&#xA;&#xA;```sql&#xA;DROP TABLE IF EXISTS incoming_dispatches;&#xA;DROP TABLE IF EXISTS system_dispatches;&#xA;&#xA;CREATE TABLE system_dispatches (&#xA;    id SERIAL PRIMARY KEY,&#xA;    system_id TEXT NOT NULL,&#xA;    dispatched_at TIMESTAMP NOT NULL,&#xA;    payload JSONB NOT NULL,&#xA;    marker_letter TEXT GENERATED ALWAYS AS (json_extract(payload, &#39;$.marker&#39;)) STORED,&#xA;    UNIQUE (system_id, dispatched_at, payload)&#xA;);&#xA;&#xA;CREATE TABLE incoming_dispatches (&#xA;    system_id TEXT,&#xA;    dispatched_at TIMESTAMP,&#xA;    payload JSONB&#xA;);&#xA;&#xA;INSERT INTO system_dispatches (system_id, dispatched_at, payload) VALUES&#xA;(&#39;SYS-0081&#39;, &#39;2025-12-21T06:02:26&#39;, &#39;{&#34;marker&#34;: &#34;T&#34;, &#34;source&#34;: 
&#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0137&#39;, &#39;2025-12-19T06:03:21&#39;, &#39;{&#34;marker&#34;: &#34;D&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0237&#39;, &#39;2025-12-19T06:23:37&#39;, &#39;{&#34;marker&#34;: &#34;D&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0006&#39;, &#39;2025-12-24T18:10:16&#39;, &#39;{&#34;marker&#34;: &#34;K&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0170&#39;, &#39;2025-12-19T06:17:24&#39;, &#39;{&#34;marker&#34;: &#34;I&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0224&#39;, &#39;2025-12-19T06:23:24&#39;, &#39;{&#34;marker&#34;: &#34;S&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0007&#39;, &#39;2025-12-24T18:10:06&#39;, &#39;{&#34;marker&#34;: &#34;O&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0035&#39;, &#39;2025-12-23T15:55:34&#39;, &#39;{&#34;marker&#34;: &#34;C&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0010&#39;, &#39;2025-12-23T15:55:09&#39;, &#39;{&#34;marker&#34;: &#34;L&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0037&#39;, &#39;2025-12-23T15:55:36&#39;, &#39;{&#34;marker&#34;: &#34;D&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0001&#39;, &#39;2025-12-24T08:02:00&#39;, &#39;{&#34;marker&#34;: &#34;X&#34;, &#34;source&#34;: &#34;primary&#34;}&#39;),&#xA;(&#39;SYS-0225&#39;, &#39;2025-12-19T06:23:25&#39;, &#39;{&#34;marker&#34;: &#34;Q&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0095&#39;, &#39;2025-12-21T06:02:40&#39;, &#39;{&#34;marker&#34;: &#34;D&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0021&#39;, &#39;2025-12-23T15:55:20&#39;, &#39;{&#34;marker&#34;: &#34;W&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0001&#39;, &#39;2025-12-24T08:10:00&#39;, &#39;{&#34;marker&#34;: &#34;A&#34;, &#34;source&#34;: &#34;primary&#34;}&#39;),&#xA;(&#39;SYS-0009&#39;, 
&#39;2025-12-24T08:02:08&#39;, &#39;{&#34;marker&#34;: &#34;U&#34;, &#34;source&#34;: &#34;primary&#34;}&#39;),&#xA;(&#39;SYS-0119&#39;, &#39;2025-12-19T06:03:03&#39;, &#39;{&#34;marker&#34;: &#34;E&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0142&#39;, &#39;2025-12-19T06:03:26&#39;, &#39;{&#34;marker&#34;: &#34;O&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0048&#39;, &#39;2025-12-23T15:55:30&#39;, &#39;{&#34;marker&#34;: &#34;I&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0228&#39;, &#39;2025-12-19T06:23:28&#39;, &#39;{&#34;marker&#34;: &#34;G&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0092&#39;, &#39;2025-12-21T06:02:37&#39;, &#39;{&#34;marker&#34;: &#34;K&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0008&#39;, &#39;2025-12-24T08:02:07&#39;, &#39;{&#34;marker&#34;: &#34;B&#34;, &#34;source&#34;: &#34;primary&#34;}&#39;),&#xA;(&#39;SYS-0111&#39;, &#39;2025-12-19T06:02:55&#39;, &#39;{&#34;marker&#34;: &#34;W&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0106&#39;, &#39;2025-12-19T06:02:50&#39;, &#39;{&#34;marker&#34;: &#34;L&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0082&#39;, &#39;2025-12-21T06:02:27&#39;, &#39;{&#34;marker&#34;: &#34;O&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0180&#39;, &#39;2025-12-19T06:17:31&#39;, &#39;{&#34;marker&#34;: &#34;N&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0103&#39;, &#39;2025-12-19T06:02:47&#39;, &#39;{&#34;marker&#34;: &#34;F&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0096&#39;, &#39;2025-12-21T06:02:41&#39;, &#39;{&#34;marker&#34;: &#34;A&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0003&#39;, &#39;2025-12-23T06:02:02&#39;, &#39;{&#34;marker&#34;: &#34;Z&#34;, &#34;source&#34;: &#34;primary&#34;}&#39;),&#xA;(&#39;SYS-0157&#39;, &#39;2025-12-19T06:17:11&#39;, &#39;{&#34;marker&#34;: 
&#34;D&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0010&#39;, &#39;2025-12-23T06:02:09&#39;, &#39;{&#34;marker&#34;: &#34;C&#34;, &#34;source&#34;: &#34;primary&#34;}&#39;),&#xA;(&#39;SYS-0102&#39;, &#39;2025-12-19T06:02:46&#39;, &#39;{&#34;marker&#34;: &#34;O&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0072&#39;, &#39;2025-12-23T06:02:18&#39;, &#39;{&#34;marker&#34;: &#34;K&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0175&#39;, &#39;2025-12-19T06:17:29&#39;, &#39;{&#34;marker&#34;: &#34;D&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0214&#39;, &#39;2025-12-19T06:23:14&#39;, &#39;{&#34;marker&#34;: &#34;U&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0005&#39;, &#39;2025-12-24T08:02:04&#39;, &#39;{&#34;marker&#34;: &#34;P&#34;, &#34;source&#34;: &#34;primary&#34;}&#39;),&#xA;(&#39;SYS-0216&#39;, &#39;2025-12-19T06:23:16&#39;, &#39;{&#34;marker&#34;: &#34;A&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0047&#39;, &#39;2025-12-23T15:55:46&#39;, &#39;{&#34;marker&#34;: &#34;H&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0019&#39;, &#39;2025-12-23T15:55:18&#39;, &#39;{&#34;marker&#34;: &#34;H&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0043&#39;, &#39;2025-12-23T15:55:42&#39;, &#39;{&#34;marker&#34;: &#34;Q&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0039&#39;, &#39;2025-12-23T15:55:38&#39;, &#39;{&#34;marker&#34;: &#34;L&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0212&#39;, &#39;2025-12-19T06:23:12&#39;, &#39;{&#34;marker&#34;: &#34;K&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0133&#39;, &#39;2025-12-19T06:03:17&#39;, &#39;{&#34;marker&#34;: &#34;C&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0005&#39;, &#39;2025-12-24T08:10:04&#39;, &#39;{&#34;marker&#34;: &#34;N&#34;, &#34;source&#34;: 
&#34;primary&#34;}&#39;),&#xA;(&#39;SYS-0011&#39;, &#39;2025-12-24T08:02:10&#39;, &#39;{&#34;marker&#34;: &#34;J&#34;, &#34;source&#34;: &#34;primary&#34;}&#39;),&#xA;(&#39;SYS-0080&#39;, &#39;2025-12-21T06:02:25&#39;, &#39;{&#34;marker&#34;: &#34;N&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0230&#39;, &#39;2025-12-19T06:23:30&#39;, &#39;{&#34;marker&#34;: &#34;I&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0100&#39;, &#39;2025-12-21T06:02:45&#39;, &#39;{&#34;marker&#34;: &#34;N&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0061&#39;, &#39;2025-12-23T06:02:07&#39;, &#39;{&#34;marker&#34;: &#34;T&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0022&#39;, &#39;2025-12-23T15:55:21&#39;, &#39;{&#34;marker&#34;: &#34;K&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0149&#39;, &#39;2025-12-19T06:17:03&#39;, &#39;{&#34;marker&#34;: &#34;H&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0009&#39;, &#39;2025-12-23T15:55:08&#39;, &#39;{&#34;marker&#34;: &#34;D&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0002&#39;, &#39;2025-12-24T08:02:01&#39;, &#39;{&#34;marker&#34;: &#34;Y&#34;, &#34;source&#34;: &#34;primary&#34;}&#39;),&#xA;(&#39;SYS-0132&#39;, &#39;2025-12-19T06:03:16&#39;, &#39;{&#34;marker&#34;: &#34;K&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0033&#39;, &#39;2025-12-23T15:55:32&#39;, &#39;{&#34;marker&#34;: &#34;W&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0193&#39;, &#39;2025-12-19T06:17:44&#39;, &#39;{&#34;marker&#34;: &#34;C&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0167&#39;, &#39;2025-12-19T06:17:21&#39;, &#39;{&#34;marker&#34;: &#34;A&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0008&#39;, &#39;2025-12-24T08:10:07&#39;, &#39;{&#34;marker&#34;: &#34;F&#34;, &#34;source&#34;: &#34;primary&#34;}&#39;),&#xA;(&#39;SYS-0186&#39;, 
&#39;2025-12-19T06:17:37&#39;, &#39;{&#34;marker&#34;: &#34;L&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0078&#39;, &#39;2025-12-21T06:02:23&#39;, &#39;{&#34;marker&#34;: &#34;V&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0085&#39;, &#39;2025-12-21T06:02:30&#39;, &#39;{&#34;marker&#34;: &#34;Q&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0046&#39;, &#39;2025-12-23T15:55:45&#39;, &#39;{&#34;marker&#34;: &#34;G&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0054&#39;, &#39;2025-12-23T15:55:36&#39;, &#39;{&#34;marker&#34;: &#34;W&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0235&#39;, &#39;2025-12-19T06:23:35&#39;, &#39;{&#34;marker&#34;: &#34;D&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0089&#39;, &#39;2025-12-21T06:02:34&#39;, &#39;{&#34;marker&#34;: &#34;H&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0011&#39;, &#39;2025-12-23T06:02:10&#39;, &#39;{&#34;marker&#34;: &#34;J&#34;, &#34;source&#34;: &#34;primary&#34;}&#39;),&#xA;(&#39;SYS-0034&#39;, &#39;2025-12-23T15:55:33&#39;, &#39;{&#34;marker&#34;: &#34;K&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0129&#39;, &#39;2025-12-19T06:03:13&#39;, &#39;{&#34;marker&#34;: &#34;H&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0211&#39;, &#39;2025-12-19T06:23:11&#39;, &#39;{&#34;marker&#34;: &#34;W&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0151&#39;, &#39;2025-12-19T06:17:05&#39;, &#39;{&#34;marker&#34;: &#34;W&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0204&#39;, &#39;2025-12-19T06:23:04&#39;, &#39;{&#34;marker&#34;: &#34;S&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0213&#39;, &#39;2025-12-19T06:23:13&#39;, &#39;{&#34;marker&#34;: &#34;C&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0147&#39;, &#39;2025-12-19T06:03:31&#39;, &#39;{&#34;marker&#34;: 
&#34;A&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0140&#39;, &#39;2025-12-19T06:03:24&#39;, &#39;{&#34;marker&#34;: &#34;N&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0123&#39;, &#39;2025-12-19T06:03:07&#39;, &#39;{&#34;marker&#34;: &#34;F&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0052&#39;, &#39;2025-12-23T15:55:34&#39;, &#39;{&#34;marker&#34;: &#34;H&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0148&#39;, &#39;2025-12-19T06:17:02&#39;, &#39;{&#34;marker&#34;: &#34;G&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0059&#39;, &#39;2025-12-23T06:02:05&#39;, &#39;{&#34;marker&#34;: &#34;E&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0027&#39;, &#39;2025-12-23T15:55:26&#39;, &#39;{&#34;marker&#34;: &#34;K&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0055&#39;, &#39;2025-12-23T15:55:37&#39;, &#39;{&#34;marker&#34;: &#34;K&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0155&#39;, &#39;2025-12-19T06:17:09&#39;, &#39;{&#34;marker&#34;: &#34;D&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0107&#39;, &#39;2025-12-19T06:02:51&#39;, &#39;{&#34;marker&#34;: &#34;A&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0173&#39;, &#39;2025-12-19T06:17:27&#39;, &#39;{&#34;marker&#34;: &#34;C&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0007&#39;, &#39;2025-12-23T06:02:06&#39;, &#39;{&#34;marker&#34;: &#34;K&#34;, &#34;source&#34;: &#34;primary&#34;}&#39;),&#xA;(&#39;SYS-0016&#39;, &#39;2025-12-23T15:55:15&#39;, &#39;{&#34;marker&#34;: &#34;L&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0168&#39;, &#39;2025-12-19T06:17:22&#39;, &#39;{&#34;marker&#34;: &#34;G&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0196&#39;, &#39;2025-12-19T06:17:47&#39;, &#39;{&#34;marker&#34;: &#34;A&#34;, &#34;source&#34;: 
&#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0234&#39;, &#39;2025-12-19T06:23:34&#39;, &#39;{&#34;marker&#34;: &#34;U&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0210&#39;, &#39;2025-12-19T06:23:10&#39;, &#39;{&#34;marker&#34;: &#34;I&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0135&#39;, &#39;2025-12-19T06:03:19&#39;, &#39;{&#34;marker&#34;: &#34;D&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0150&#39;, &#39;2025-12-19T06:17:04&#39;, &#39;{&#34;marker&#34;: &#34;I&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0091&#39;, &#39;2025-12-21T06:02:36&#39;, &#39;{&#34;marker&#34;: &#34;W&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0154&#39;, &#39;2025-12-19T06:17:08&#39;, &#39;{&#34;marker&#34;: &#34;U&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0223&#39;, &#39;2025-12-19T06:23:23&#39;, &#39;{&#34;marker&#34;: &#34;F&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0020&#39;, &#39;2025-12-23T15:55:19&#39;, &#39;{&#34;marker&#34;: &#34;I&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0160&#39;, &#39;2025-12-19T06:17:14&#39;, &#39;{&#34;marker&#34;: &#34;N&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0128&#39;, &#39;2025-12-19T06:03:12&#39;, &#39;{&#34;marker&#34;: &#34;G&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0194&#39;, &#39;2025-12-19T06:17:45&#39;, &#39;{&#34;marker&#34;: &#34;U&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0231&#39;, &#39;2025-12-19T06:23:31&#39;, &#39;{&#34;marker&#34;: &#34;W&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0045&#39;, &#39;2025-12-23T15:55:44&#39;, &#39;{&#34;marker&#34;: &#34;A&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0190&#39;, &#39;2025-12-19T06:17:41&#39;, &#39;{&#34;marker&#34;: &#34;I&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0197&#39;, 
&#39;2025-12-19T06:17:48&#39;, &#39;{&#34;marker&#34;: &#34;D&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0002&#39;, &#39;2025-12-23T06:02:01&#39;, &#39;{&#34;marker&#34;: &#34;Y&#34;, &#34;source&#34;: &#34;primary&#34;}&#39;),&#xA;(&#39;SYS-0008&#39;, &#39;2025-12-24T18:10:07&#39;, &#39;{&#34;marker&#34;: &#34;F&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0158&#39;, &#39;2025-12-19T06:17:12&#39;, &#39;{&#34;marker&#34;: &#34;V&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0003&#39;, &#39;2025-12-24T08:10:02&#39;, &#39;{&#34;marker&#34;: &#34;V&#34;, &#34;source&#34;: &#34;primary&#34;}&#39;),&#xA;(&#39;SYS-0171&#39;, &#39;2025-12-19T06:17:25&#39;, &#39;{&#34;marker&#34;: &#34;W&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0229&#39;, &#39;2025-12-19T06:23:29&#39;, &#39;{&#34;marker&#34;: &#34;H&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0079&#39;, &#39;2025-12-21T06:02:24&#39;, &#39;{&#34;marker&#34;: &#34;E&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0215&#39;, &#39;2025-12-19T06:23:15&#39;, &#39;{&#34;marker&#34;: &#34;D&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0233&#39;, &#39;2025-12-19T06:23:33&#39;, &#39;{&#34;marker&#34;: &#34;C&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0060&#39;, &#39;2025-12-23T06:02:06&#39;, &#39;{&#34;marker&#34;: &#34;N&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0026&#39;, &#39;2025-12-23T15:55:25&#39;, &#39;{&#34;marker&#34;: &#34;W&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0125&#39;, &#39;2025-12-19T06:03:09&#39;, &#39;{&#34;marker&#34;: &#34;Q&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0001&#39;, &#39;2025-12-24T18:10:00&#39;, &#39;{&#34;marker&#34;: &#34;A&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0136&#39;, &#39;2025-12-19T06:03:20&#39;, &#39;{&#34;marker&#34;: 
&#34;A&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0010&#39;, &#39;2025-12-24T08:02:09&#39;, &#39;{&#34;marker&#34;: &#34;C&#34;, &#34;source&#34;: &#34;primary&#34;}&#39;),&#xA;(&#39;SYS-0010&#39;, &#39;2025-12-24T08:10:09&#39;, &#39;{&#34;marker&#34;: &#34;Q&#34;, &#34;source&#34;: &#34;primary&#34;}&#39;),&#xA;(&#39;SYS-0044&#39;, &#39;2025-12-23T15:55:43&#39;, &#39;{&#34;marker&#34;: &#34;L&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0115&#39;, &#39;2025-12-19T06:02:59&#39;, &#39;{&#34;marker&#34;: &#34;D&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0005&#39;, &#39;2025-12-24T18:10:15&#39;, &#39;{&#34;marker&#34;: &#34;W&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0094&#39;, &#39;2025-12-21T06:02:39&#39;, &#39;{&#34;marker&#34;: &#34;U&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0191&#39;, &#39;2025-12-19T06:17:42&#39;, &#39;{&#34;marker&#34;: &#34;W&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0182&#39;, &#39;2025-12-19T06:17:33&#39;, &#39;{&#34;marker&#34;: &#34;O&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0195&#39;, &#39;2025-12-19T06:17:46&#39;, &#39;{&#34;marker&#34;: &#34;D&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0227&#39;, &#39;2025-12-19T06:23:27&#39;, &#39;{&#34;marker&#34;: &#34;A&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0032&#39;, &#39;2025-12-23T15:55:31&#39;, &#39;{&#34;marker&#34;: &#34;I&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0177&#39;, &#39;2025-12-19T06:17:31&#39;, &#39;{&#34;marker&#34;: &#34;D&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0068&#39;, &#39;2025-12-23T06:02:14&#39;, &#39;{&#34;marker&#34;: &#34;G&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0069&#39;, &#39;2025-12-23T06:02:15&#39;, &#39;{&#34;marker&#34;: &#34;H&#34;, &#34;source&#34;: 
&#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0009&#39;, &#39;2025-12-23T06:02:08&#39;, &#39;{&#34;marker&#34;: &#34;U&#34;, &#34;source&#34;: &#34;primary&#34;}&#39;),&#xA;(&#39;SYS-0176&#39;, &#39;2025-12-19T06:17:30&#39;, &#39;{&#34;marker&#34;: &#34;A&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0008&#39;, &#39;2025-12-23T06:02:07&#39;, &#39;{&#34;marker&#34;: &#34;B&#34;, &#34;source&#34;: &#34;primary&#34;}&#39;),&#xA;(&#39;SYS-0099&#39;, &#39;2025-12-21T06:02:44&#39;, &#39;{&#34;marker&#34;: &#34;E&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0014&#39;, &#39;2025-12-23T15:55:13&#39;, &#39;{&#34;marker&#34;: &#34;S&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0042&#39;, &#39;2025-12-23T15:55:41&#39;, &#39;{&#34;marker&#34;: &#34;S&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0006&#39;, &#39;2025-12-24T08:02:05&#39;, &#39;{&#34;marker&#34;: &#34;R&#34;, &#34;source&#34;: &#34;primary&#34;}&#39;),&#xA;(&#39;SYS-0187&#39;, &#39;2025-12-19T06:17:38&#39;, &#39;{&#34;marker&#34;: &#34;A&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0174&#39;, &#39;2025-12-19T06:17:28&#39;, &#39;{&#34;marker&#34;: &#34;U&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0056&#39;, &#39;2025-12-23T15:55:55&#39;, &#39;{&#34;marker&#34;: &#34;A&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0199&#39;, &#39;2025-12-19T06:17:50&#39;, &#39;{&#34;marker&#34;: &#34;E&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0120&#39;, &#39;2025-12-19T06:03:04&#39;, &#39;{&#34;marker&#34;: &#34;N&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0003&#39;, &#39;2025-12-24T08:02:02&#39;, &#39;{&#34;marker&#34;: &#34;Z&#34;, &#34;source&#34;: &#34;primary&#34;}&#39;),&#xA;(&#39;SYS-0116&#39;, &#39;2025-12-19T06:02:59&#39;, &#39;{&#34;marker&#34;: &#34;A&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0008&#39;, 
&#39;2025-12-24T18:10:18&#39;, &#39;{&#34;marker&#34;: &#34;U&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0205&#39;, &#39;2025-12-19T06:23:05&#39;, &#39;{&#34;marker&#34;: &#34;Q&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0006&#39;, &#39;2025-12-24T08:10:05&#39;, &#39;{&#34;marker&#34;: &#34;T&#34;, &#34;source&#34;: &#34;primary&#34;}&#39;),&#xA;(&#39;SYS-0145&#39;, &#39;2025-12-19T06:03:29&#39;, &#39;{&#34;marker&#34;: &#34;Q&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0004&#39;, &#39;2025-12-24T08:10:03&#39;, &#39;{&#34;marker&#34;: &#34;E&#34;, &#34;source&#34;: &#34;primary&#34;}&#39;),&#xA;(&#39;SYS-0065&#39;, &#39;2025-12-23T06:02:11&#39;, &#39;{&#34;marker&#34;: &#34;Q&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0086&#39;, &#39;2025-12-21T06:02:31&#39;, &#39;{&#34;marker&#34;: &#34;L&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0188&#39;, &#39;2025-12-19T06:17:39&#39;, &#39;{&#34;marker&#34;: &#34;G&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0183&#39;, &#39;2025-12-19T06:17:34&#39;, &#39;{&#34;marker&#34;: &#34;F&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0062&#39;, &#39;2025-12-23T06:02:08&#39;, &#39;{&#34;marker&#34;: &#34;O&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0058&#39;, &#39;2025-12-23T15:55:57&#39;, &#39;{&#34;marker&#34;: &#34;V&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0169&#39;, &#39;2025-12-19T06:17:23&#39;, &#39;{&#34;marker&#34;: &#34;H&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0105&#39;, &#39;2025-12-19T06:02:49&#39;, &#39;{&#34;marker&#34;: &#34;Q&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0001&#39;, &#39;2025-12-24T18:10:11&#39;, &#39;{&#34;marker&#34;: &#34;A&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0005&#39;, &#39;2025-12-23T06:02:04&#39;, &#39;{&#34;marker&#34;: 
&#34;P&#34;, &#34;source&#34;: &#34;primary&#34;}&#39;),&#xA;(&#39;SYS-0131&#39;, &#39;2025-12-19T06:03:15&#39;, &#39;{&#34;marker&#34;: &#34;W&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0232&#39;, &#39;2025-12-19T06:23:32&#39;, &#39;{&#34;marker&#34;: &#34;K&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0028&#39;, &#39;2025-12-23T15:55:27&#39;, &#39;{&#34;marker&#34;: &#34;L&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0070&#39;, &#39;2025-12-23T06:02:16&#39;, &#39;{&#34;marker&#34;: &#34;I&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0029&#39;, &#39;2025-12-23T15:55:28&#39;, &#39;{&#34;marker&#34;: &#34;A&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0127&#39;, &#39;2025-12-19T06:03:11&#39;, &#39;{&#34;marker&#34;: &#34;A&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0010&#39;, &#39;2025-12-24T18:10:09&#39;, &#39;{&#34;marker&#34;: &#34;Q&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0025&#39;, &#39;2025-12-23T15:55:24&#39;, &#39;{&#34;marker&#34;: &#34;I&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0164&#39;, &#39;2025-12-19T06:17:18&#39;, &#39;{&#34;marker&#34;: &#34;S&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0114&#39;, &#39;2025-12-19T06:02:58&#39;, &#39;{&#34;marker&#34;: &#34;U&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0009&#39;, &#39;2025-12-24T18:10:08&#39;, &#39;{&#34;marker&#34;: &#34;S&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0184&#39;, &#39;2025-12-19T06:17:35&#39;, &#39;{&#34;marker&#34;: &#34;S&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0003&#39;, &#39;2025-12-24T18:10:02&#39;, &#39;{&#34;marker&#34;: &#34;V&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0053&#39;, &#39;2025-12-23T15:55:35&#39;, &#39;{&#34;marker&#34;: &#34;I&#34;, &#34;source&#34;: 
&#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0117&#39;, &#39;2025-12-19T06:03:01&#39;, &#39;{&#34;marker&#34;: &#34;D&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0074&#39;, &#39;2025-12-23T06:02:20&#39;, &#39;{&#34;marker&#34;: &#34;U&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0198&#39;, &#39;2025-12-19T06:17:49&#39;, &#39;{&#34;marker&#34;: &#34;V&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0036&#39;, &#39;2025-12-23T15:55:35&#39;, &#39;{&#34;marker&#34;: &#34;U&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0112&#39;, &#39;2025-12-19T06:02:56&#39;, &#39;{&#34;marker&#34;: &#34;K&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0023&#39;, &#39;2025-12-23T15:55:22&#39;, &#39;{&#34;marker&#34;: &#34;C&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0121&#39;, &#39;2025-12-19T06:03:05&#39;, &#39;{&#34;marker&#34;: &#34;T&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0181&#39;, &#39;2025-12-19T06:17:32&#39;, &#39;{&#34;marker&#34;: &#34;T&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0003&#39;, &#39;2025-12-24T18:10:13&#39;, &#39;{&#34;marker&#34;: &#34;H&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0130&#39;, &#39;2025-12-19T06:03:14&#39;, &#39;{&#34;marker&#34;: &#34;I&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0066&#39;, &#39;2025-12-23T06:02:12&#39;, &#39;{&#34;marker&#34;: &#34;L&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0203&#39;, &#39;2025-12-19T06:17:54&#39;, &#39;{&#34;marker&#34;: &#34;F&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0002&#39;, &#39;2025-12-24T18:10:01&#39;, &#39;{&#34;marker&#34;: &#34;D&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0104&#39;, &#39;2025-12-19T06:02:48&#39;, &#39;{&#34;marker&#34;: &#34;S&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0017&#39;, 
&#39;2025-12-23T15:55:16&#39;, &#39;{&#34;marker&#34;: &#34;A&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0004&#39;, &#39;2025-12-24T18:10:03&#39;, &#39;{&#34;marker&#34;: &#34;E&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0238&#39;, &#39;2025-12-19T06:23:38&#39;, &#39;{&#34;marker&#34;: &#34;D&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0110&#39;, &#39;2025-12-19T06:02:54&#39;, &#39;{&#34;marker&#34;: &#34;I&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0071&#39;, &#39;2025-12-23T06:02:17&#39;, &#39;{&#34;marker&#34;: &#34;W&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0004&#39;, &#39;2025-12-24T18:10:14&#39;, &#39;{&#34;marker&#34;: &#34;I&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0031&#39;, &#39;2025-12-23T15:55:30&#39;, &#39;{&#34;marker&#34;: &#34;H&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0004&#39;, &#39;2025-12-23T06:02:03&#39;, &#39;{&#34;marker&#34;: &#34;M&#34;, &#34;source&#34;: &#34;primary&#34;}&#39;),&#xA;(&#39;SYS-0141&#39;, &#39;2025-12-19T06:03:25&#39;, &#39;{&#34;marker&#34;: &#34;T&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0207&#39;, &#39;2025-12-19T06:23:07&#39;, &#39;{&#34;marker&#34;: &#34;A&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0122&#39;, &#39;2025-12-19T06:03:06&#39;, &#39;{&#34;marker&#34;: &#34;O&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0006&#39;, &#39;2025-12-24T18:10:05&#39;, &#39;{&#34;marker&#34;: &#34;T&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0030&#39;, &#39;2025-12-23T15:55:29&#39;, &#39;{&#34;marker&#34;: &#34;G&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0192&#39;, &#39;2025-12-19T06:17:43&#39;, &#39;{&#34;marker&#34;: &#34;K&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0144&#39;, &#39;2025-12-19T06:03:28&#39;, &#39;{&#34;marker&#34;: 
&#34;S&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0098&#39;, &#39;2025-12-21T06:02:43&#39;, &#39;{&#34;marker&#34;: &#34;V&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0200&#39;, &#39;2025-12-19T06:17:51&#39;, &#39;{&#34;marker&#34;: &#34;N&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0007&#39;, &#39;2025-12-24T08:02:06&#39;, &#39;{&#34;marker&#34;: &#34;K&#34;, &#34;source&#34;: &#34;primary&#34;}&#39;),&#xA;(&#39;SYS-0220&#39;, &#39;2025-12-19T06:23:20&#39;, &#39;{&#34;marker&#34;: &#34;N&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0063&#39;, &#39;2025-12-23T06:02:09&#39;, &#39;{&#34;marker&#34;: &#34;F&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0109&#39;, &#39;2025-12-19T06:02:53&#39;, &#39;{&#34;marker&#34;: &#34;H&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0179&#39;, &#39;2025-12-19T06:17:30&#39;, &#39;{&#34;marker&#34;: &#34;E&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0163&#39;, &#39;2025-12-19T06:17:17&#39;, &#39;{&#34;marker&#34;: &#34;F&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0162&#39;, &#39;2025-12-19T06:17:16&#39;, &#39;{&#34;marker&#34;: &#34;O&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0222&#39;, &#39;2025-12-19T06:23:22&#39;, &#39;{&#34;marker&#34;: &#34;O&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0134&#39;, &#39;2025-12-19T06:03:18&#39;, &#39;{&#34;marker&#34;: &#34;U&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0004&#39;, &#39;2025-12-24T08:02:03&#39;, &#39;{&#34;marker&#34;: &#34;M&#34;, &#34;source&#34;: &#34;primary&#34;}&#39;),&#xA;(&#39;SYS-0139&#39;, &#39;2025-12-19T06:03:23&#39;, &#39;{&#34;marker&#34;: &#34;E&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0097&#39;, &#39;2025-12-21T06:02:42&#39;, &#39;{&#34;marker&#34;: &#34;D&#34;, &#34;source&#34;: 
&#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0146&#39;, &#39;2025-12-19T06:03:30&#39;, &#39;{&#34;marker&#34;: &#34;L&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0101&#39;, &#39;2025-12-19T06:02:45&#39;, &#39;{&#34;marker&#34;: &#34;T&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0201&#39;, &#39;2025-12-19T06:17:52&#39;, &#39;{&#34;marker&#34;: &#34;T&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0051&#39;, &#39;2025-12-23T15:55:33&#39;, &#39;{&#34;marker&#34;: &#34;C&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0172&#39;, &#39;2025-12-19T06:17:26&#39;, &#39;{&#34;marker&#34;: &#34;K&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0083&#39;, &#39;2025-12-21T06:02:28&#39;, &#39;{&#34;marker&#34;: &#34;F&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0009&#39;, &#39;2025-12-24T08:10:08&#39;, &#39;{&#34;marker&#34;: &#34;S&#34;, &#34;source&#34;: &#34;primary&#34;}&#39;),&#xA;(&#39;SYS-0124&#39;, &#39;2025-12-19T06:03:08&#39;, &#39;{&#34;marker&#34;: &#34;S&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0041&#39;, &#39;2025-12-23T15:55:40&#39;, &#39;{&#34;marker&#34;: &#34;F&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0049&#39;, &#39;2025-12-23T15:55:31&#39;, &#39;{&#34;marker&#34;: &#34;W&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0202&#39;, &#39;2025-12-19T06:17:53&#39;, &#39;{&#34;marker&#34;: &#34;O&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0178&#39;, &#39;2025-12-19T06:17:29&#39;, &#39;{&#34;marker&#34;: &#34;V&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0090&#39;, &#39;2025-12-21T06:02:35&#39;, &#39;{&#34;marker&#34;: &#34;I&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0006&#39;, &#39;2025-12-23T06:02:05&#39;, &#39;{&#34;marker&#34;: &#34;R&#34;, &#34;source&#34;: &#34;primary&#34;}&#39;),&#xA;(&#39;SYS-0073&#39;, 
&#39;2025-12-23T06:02:19&#39;, &#39;{&#34;marker&#34;: &#34;C&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0076&#39;, &#39;2025-12-23T06:02:22&#39;, &#39;{&#34;marker&#34;: &#34;A&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0217&#39;, &#39;2025-12-19T06:23:17&#39;, &#39;{&#34;marker&#34;: &#34;D&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0159&#39;, &#39;2025-12-19T06:17:13&#39;, &#39;{&#34;marker&#34;: &#34;E&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0161&#39;, &#39;2025-12-19T06:17:15&#39;, &#39;{&#34;marker&#34;: &#34;T&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0221&#39;, &#39;2025-12-19T06:23:21&#39;, &#39;{&#34;marker&#34;: &#34;T&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0236&#39;, &#39;2025-12-19T06:23:36&#39;, &#39;{&#34;marker&#34;: &#34;U&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0064&#39;, &#39;2025-12-23T06:02:10&#39;, &#39;{&#34;marker&#34;: &#34;S&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0126&#39;, &#39;2025-12-19T06:03:10&#39;, &#39;{&#34;marker&#34;: &#34;L&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0024&#39;, &#39;2025-12-23T15:55:23&#39;, &#39;{&#34;marker&#34;: &#34;H&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0113&#39;, &#39;2025-12-19T06:02:57&#39;, &#39;{&#34;marker&#34;: &#34;C&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0038&#39;, &#39;2025-12-23T15:55:37&#39;, &#39;{&#34;marker&#34;: &#34;L&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0075&#39;, &#39;2025-12-23T06:02:21&#39;, &#39;{&#34;marker&#34;: &#34;D&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0088&#39;, &#39;2025-12-21T06:02:33&#39;, &#39;{&#34;marker&#34;: &#34;G&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0007&#39;, &#39;2025-12-24T08:10:06&#39;, 
&#39;{&#34;marker&#34;: &#34;O&#34;, &#34;source&#34;: &#34;primary&#34;}&#39;),&#xA;(&#39;SYS-0067&#39;, &#39;2025-12-23T06:02:13&#39;, &#39;{&#34;marker&#34;: &#34;A&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0156&#39;, &#39;2025-12-19T06:17:10&#39;, &#39;{&#34;marker&#34;: &#34;A&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0040&#39;, &#39;2025-12-23T15:55:39&#39;, &#39;{&#34;marker&#34;: &#34;O&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0219&#39;, &#39;2025-12-19T06:23:19&#39;, &#39;{&#34;marker&#34;: &#34;E&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0166&#39;, &#39;2025-12-19T06:17:20&#39;, &#39;{&#34;marker&#34;: &#34;L&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0007&#39;, &#39;2025-12-24T18:10:17&#39;, &#39;{&#34;marker&#34;: &#34;C&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0153&#39;, &#39;2025-12-19T06:17:07&#39;, &#39;{&#34;marker&#34;: &#34;C&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0077&#39;, &#39;2025-12-23T06:02:23&#39;, &#39;{&#34;marker&#34;: &#34;D&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0084&#39;, &#39;2025-12-21T06:02:29&#39;, &#39;{&#34;marker&#34;: &#34;S&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;);&#xA;&#xA;INSERT INTO incoming_dispatches (system_id, dispatched_at, payload) VALUES&#xA;(&#39;SYS-0013&#39;, &#39;2025-12-23T15:55:12&#39;, &#39;{&#34;marker&#34;: &#34;F&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0002&#39;, &#39;2025-12-24T08:10:01&#39;, &#39;{&#34;marker&#34;: &#34;D&#34;, &#34;source&#34;: &#34;primary&#34;}&#39;),&#xA;(&#39;SYS-0012&#39;, &#39;2025-12-23T15:55:11&#39;, &#39;{&#34;marker&#34;: &#34;O&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0108&#39;, &#39;2025-12-19T06:02:52&#39;, &#39;{&#34;marker&#34;: &#34;G&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0165&#39;, 
&#39;2025-12-19T06:17:19&#39;, &#39;{&#34;marker&#34;: &#34;Q&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0189&#39;, &#39;2025-12-19T06:17:40&#39;, &#39;{&#34;marker&#34;: &#34;H&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0050&#39;, &#39;2025-12-23T15:55:32&#39;, &#39;{&#34;marker&#34;: &#34;K&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0138&#39;, &#39;2025-12-19T06:03:22&#39;, &#39;{&#34;marker&#34;: &#34;V&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0087&#39;, &#39;2025-12-21T06:02:32&#39;, &#39;{&#34;marker&#34;: &#34;A&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0208&#39;, &#39;2025-12-19T06:23:08&#39;, &#39;{&#34;marker&#34;: &#34;G&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0057&#39;, &#39;2025-12-23T15:55:56&#39;, &#39;{&#34;marker&#34;: &#34;D&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0002&#39;, &#39;2025-12-24T18:10:12&#39;, &#39;{&#34;marker&#34;: &#34;G&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0011&#39;, &#39;2025-12-24T08:10:10&#39;, &#39;{&#34;marker&#34;: &#34;L&#34;, &#34;source&#34;: &#34;primary&#34;}&#39;),&#xA;(&#39;SYS-0226&#39;, &#39;2025-12-19T06:23:26&#39;, &#39;{&#34;marker&#34;: &#34;L&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0206&#39;, &#39;2025-12-19T06:23:06&#39;, &#39;{&#34;marker&#34;: &#34;L&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0118&#39;, &#39;2025-12-19T06:03:02&#39;, &#39;{&#34;marker&#34;: &#34;V&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0185&#39;, &#39;2025-12-19T06:17:36&#39;, &#39;{&#34;marker&#34;: &#34;Q&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0218&#39;, &#39;2025-12-19T06:23:18&#39;, &#39;{&#34;marker&#34;: &#34;V&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0018&#39;, &#39;2025-12-23T15:55:17&#39;, &#39;{&#34;marker&#34;: 
&#34;G&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0005&#39;, &#39;2025-12-24T18:10:04&#39;, &#39;{&#34;marker&#34;: &#34;N&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0093&#39;, &#39;2025-12-21T06:02:38&#39;, &#39;{&#34;marker&#34;: &#34;C&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0011&#39;, &#39;2025-12-24T18:10:10&#39;, &#39;{&#34;marker&#34;: &#34;L&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0011&#39;, &#39;2025-12-23T15:55:10&#39;, &#39;{&#34;marker&#34;: &#34;L&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0143&#39;, &#39;2025-12-19T06:03:27&#39;, &#39;{&#34;marker&#34;: &#34;F&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0152&#39;, &#39;2025-12-19T06:17:06&#39;, &#39;{&#34;marker&#34;: &#34;K&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0015&#39;, &#39;2025-12-23T15:55:14&#39;, &#39;{&#34;marker&#34;: &#34;Q&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0001&#39;, &#39;2025-12-23T06:02:00&#39;, &#39;{&#34;marker&#34;: &#34;X&#34;, &#34;source&#34;: &#34;primary&#34;}&#39;),&#xA;(&#39;SYS-0209&#39;, &#39;2025-12-19T06:23:09&#39;, &#39;{&#34;marker&#34;: &#34;H&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0133&#39;, &#39;2025-12-19T06:03:17&#39;, &#39;{&#34;marker&#34;: &#34;C&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0158&#39;, &#39;2025-12-19T06:17:12&#39;, &#39;{&#34;marker&#34;: &#34;V&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0195&#39;, &#39;2025-12-19T06:17:46&#39;, &#39;{&#34;marker&#34;: &#34;D&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0182&#39;, &#39;2025-12-19T06:17:33&#39;, &#39;{&#34;marker&#34;: &#34;O&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0033&#39;, &#39;2025-12-23T15:55:32&#39;, &#39;{&#34;marker&#34;: &#34;W&#34;, &#34;source&#34;: 
&#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0083&#39;, &#39;2025-12-21T06:02:28&#39;, &#39;{&#34;marker&#34;: &#34;F&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0002&#39;, &#39;2025-12-23T06:02:01&#39;, &#39;{&#34;marker&#34;: &#34;Y&#34;, &#34;source&#34;: &#34;primary&#34;}&#39;),&#xA;(&#39;SYS-0076&#39;, &#39;2025-12-23T06:02:22&#39;, &#39;{&#34;marker&#34;: &#34;A&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;),&#xA;(&#39;SYS-0009&#39;, &#39;2025-12-24T08:02:08&#39;, &#39;{&#34;marker&#34;: &#34;U&#34;, &#34;source&#34;: &#34;primary&#34;}&#39;),&#xA;(&#39;SYS-0194&#39;, &#39;2025-12-19T06:17:45&#39;, &#39;{&#34;marker&#34;: &#34;U&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;);&#xA;&#xA;```&#xA;&#xA;&#xA;```&#xA;$ sqlite3&#xA;SQLite version 3.50.4 2025-07-30 19:33:53&#xA;Enter &#34;.help&#34; for usage hints.&#xA;Connected to a transient in-memory database.&#xA;Use &#34;.open FILENAME&#34; to reopen on a persistent database.&#xA;sqlite&gt; .read day15-inserts.sql&#xA;Parse error near line 19: unrecognized token: &#34;:&#34;&#xA;  6:02:26&#39;, &#39;{&#34;marker&#34;: &#34;T&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;::jsonb), (&#39;SYS-0137&#39;, &#39;2025&#xA;                                      error here ---^&#xA;Parse error near line 275: unrecognized token: &#34;:&#34;&#xA;  5:55:12&#39;, &#39;{&#34;marker&#34;: &#34;F&#34;, &#34;source&#34;: &#34;secondary&#34;}&#39;::jsonb), (&#39;SYS-0002&#39;, &#39;2025&#xA;                                      error here ---^&#xA;sqlite&gt; .read day15-inserts-sqlite.sql&#xA;sqlite&gt; .schema&#xA;CREATE TABLE system_dispatches (&#xA;    id SERIAL PRIMARY KEY,&#xA;    system_id TEXT NOT NULL,&#xA;    dispatched_at TIMESTAMP NOT NULL,&#xA;    payload JSONB NOT NULL,&#xA;    marker_letter TEXT GENERATED ALWAYS AS (json_extract(payload, &#39;$.marker&#39;)) STORED,&#xA;    UNIQUE (system_id, dispatched_at, payload)&#xA;);&#xA;CREATE TABLE incoming_dispatches (&#xA;    system_id 
TEXT,&#xA;    dispatched_at TIMESTAMP,&#xA;    payload JSONB&#xA;);&#xA;sqlite&gt; .mode table&#xA;sqlite&gt; SELECT * FROM system_dispatches LIMIT 10;&#xA;+----+-----------+---------------------+----------------------------------------+---------------+&#xA;| id | system_id |    dispatched_at    |                payload                 | marker_letter |&#xA;+----+-----------+---------------------+----------------------------------------+---------------+&#xA;|    | SYS-0081  | 2025-12-21T06:02:26 | {&#34;marker&#34;: &#34;T&#34;, &#34;source&#34;: &#34;secondary&#34;} | T             |&#xA;|    | SYS-0137  | 2025-12-19T06:03:21 | {&#34;marker&#34;: &#34;D&#34;, &#34;source&#34;: &#34;secondary&#34;} | D             |&#xA;|    | SYS-0237  | 2025-12-19T06:23:37 | {&#34;marker&#34;: &#34;D&#34;, &#34;source&#34;: &#34;secondary&#34;} | D             |&#xA;|    | SYS-0006  | 2025-12-24T18:10:16 | {&#34;marker&#34;: &#34;K&#34;, &#34;source&#34;: &#34;secondary&#34;} | K             |&#xA;|    | SYS-0170  | 2025-12-19T06:17:24 | {&#34;marker&#34;: &#34;I&#34;, &#34;source&#34;: &#34;secondary&#34;} | I             |&#xA;|    | SYS-0224  | 2025-12-19T06:23:24 | {&#34;marker&#34;: &#34;S&#34;, &#34;source&#34;: &#34;secondary&#34;} | S             |&#xA;|    | SYS-0007  | 2025-12-24T18:10:06 | {&#34;marker&#34;: &#34;O&#34;, &#34;source&#34;: &#34;secondary&#34;} | O             |&#xA;|    | SYS-0035  | 2025-12-23T15:55:34 | {&#34;marker&#34;: &#34;C&#34;, &#34;source&#34;: &#34;secondary&#34;} | C             |&#xA;|    | SYS-0010  | 2025-12-23T15:55:09 | {&#34;marker&#34;: &#34;L&#34;, &#34;source&#34;: &#34;secondary&#34;} | L             |&#xA;|    | SYS-0037  | 2025-12-23T15:55:36 | {&#34;marker&#34;: &#34;D&#34;, &#34;source&#34;: &#34;secondary&#34;} | D             |&#xA;+----+-----------+---------------------+----------------------------------------+---------------+&#xA;sqlite&gt; SELECT * FROM incoming_dispatches LIMIT 
10;&#xA;+-----------+---------------------+----------------------------------------+&#xA;| system_id |    dispatched_at    |                payload                 |&#xA;+-----------+---------------------+----------------------------------------+&#xA;| SYS-0013  | 2025-12-23T15:55:12 | {&#34;marker&#34;: &#34;F&#34;, &#34;source&#34;: &#34;secondary&#34;} |&#xA;| SYS-0002  | 2025-12-24T08:10:01 | {&#34;marker&#34;: &#34;D&#34;, &#34;source&#34;: &#34;primary&#34;}   |&#xA;| SYS-0012  | 2025-12-23T15:55:11 | {&#34;marker&#34;: &#34;O&#34;, &#34;source&#34;: &#34;secondary&#34;} |&#xA;| SYS-0108  | 2025-12-19T06:02:52 | {&#34;marker&#34;: &#34;G&#34;, &#34;source&#34;: &#34;secondary&#34;} |&#xA;| SYS-0165  | 2025-12-19T06:17:19 | {&#34;marker&#34;: &#34;Q&#34;, &#34;source&#34;: &#34;secondary&#34;} |&#xA;| SYS-0189  | 2025-12-19T06:17:40 | {&#34;marker&#34;: &#34;H&#34;, &#34;source&#34;: &#34;secondary&#34;} |&#xA;| SYS-0050  | 2025-12-23T15:55:32 | {&#34;marker&#34;: &#34;K&#34;, &#34;source&#34;: &#34;secondary&#34;} |&#xA;| SYS-0138  | 2025-12-19T06:03:22 | {&#34;marker&#34;: &#34;V&#34;, &#34;source&#34;: &#34;secondary&#34;} |&#xA;| SYS-0087  | 2025-12-21T06:02:32 | {&#34;marker&#34;: &#34;A&#34;, &#34;source&#34;: &#34;secondary&#34;} |&#xA;| SYS-0208  | 2025-12-19T06:23:08 | {&#34;marker&#34;: &#34;G&#34;, &#34;source&#34;: &#34;secondary&#34;} |&#xA;+-----------+---------------------+----------------------------------------+&#xA;sqlite&gt;&#xA;```&#xA;&#xA;Good to go!&#xA;&#xA;## Problem&#xA;&#xA;&gt; Reconstruct the final confirmation phrase to text Santa based on the elves’ hazy recollection of how they solved this problem before.&#xA;&gt; &#xA;&gt; Your final result should include the marker_letter for each system, using only the most recent dispatch from a primary source. 
Once the correct dispatch has been identified for every system, combine the results and order them by dispatched_at in ascending order to reveal the confirmation phrase.&#xA;&gt; &#xA;&gt; The sleigh won’t launch without it.&#xA;&#xA;&#xA;So, we first need to make sure we have the right `dispatches`.&#xA;&#xA;We have two tables.&#xA;&#xA;1. `system_dispatches`&#xA;2. `incoming_dispatches`&#xA;&#xA;The problem statement also states:&#xA;&#xA;&gt; “The system kept throwing errors until we figured out how to handle duplicates. Whatever you do, the records already in system_dispatches must take precedence.”&#xA;&#xA;&#xA;So, we need to take `system_dispatches` as the ground truth. However, we also need to bring in the new entries that show up in `incoming_dispatches`.&#xA;&#xA;So, let&#39;s first try to insert everything from `incoming_dispatches` into `system_dispatches`:&#xA;&#xA;```sql&#xA;INSERT INTO system_dispatches (system_id, dispatched_at, payload)&#xA;SELECT system_id, dispatched_at, payload&#xA;FROM incoming_dispatches;&#xA;```&#xA;&#xA;```&#xA;sqlite&gt; INSERT INTO system_dispatches (system_id, dispatched_at, payload)&#xA;SELECT system_id, dispatched_at, payload&#xA;FROM incoming_dispatches;&#xA;Runtime error: UNIQUE constraint failed: system_dispatches.system_id, system_dispatches.dispatched_at, system_dispatches.payload (19)&#xA;sqlite&gt;&#xA;```&#xA;&#xA;Oops! Let&#39;s check how much the two tables overlap.&#xA;&#xA;```sql&#xA;SELECT COUNT(*) as system_dispatches_count FROM system_dispatches;&#xA;SELECT COUNT(*) as incoming_dispatches_count FROM incoming_dispatches;&#xA;SELECT COUNT(*) as common_dispatches_count FROM (SELECT system_id, dispatched_at, payload FROM system_dispatches INTERSECT SELECT * FROM incoming_dispatches);&#xA;```&#xA;&#xA;```&#xA;sqlite&gt; SELECT COUNT(*) FROM system_dispatches ;&#xA;+----------+&#xA;| COUNT(*) |&#xA;+----------+&#xA;| 254      |&#xA;+----------+&#xA;sqlite&gt; SELECT COUNT(*) FROM 
incoming_dispatches;&#xA;+----------+&#xA;| COUNT(*) |&#xA;+----------+&#xA;| 38       |&#xA;+----------+&#xA;&#xA;sqlite&gt; SELECT COUNT(*) FROM (SELECT system_id, dispatched_at, payload FROM system_dispatches UNION SELECT * FROM incoming_dispatches);&#xA;+----------+&#xA;| COUNT(*) |&#xA;+----------+&#xA;| 282      |&#xA;+----------+&#xA;&#xA;sqlite&gt; SELECT DISTINCT COUNT(*) FROM (SELECT system_id, dispatched_at, payload FROM system_dispatches UNION SELECT * FROM incoming_dispatches);&#xA;+----------+&#xA;| COUNT(*) |&#xA;+----------+&#xA;| 282      |&#xA;+----------+&#xA;&#xA;sqlite&gt; SELECT COUNT(*) FROM (SELECT system_id, dispatched_at, payload FROM system_dispatches INTERSECT SELECT * FROM incoming_dispatches);&#xA;+----------+&#xA;| COUNT(*) |&#xA;+----------+&#xA;| 10       |&#xA;+----------+&#xA;sqlite&gt;&#xA;```&#xA;&#xA;&#xA;We need to be careful about what we insert into the `system_dispatches`.&#xA;&#xA;The duplicate rows are what trip the UNIQUE constraint. We can tell SQLite to ignore the constraint failures and skip those rows with `INSERT OR IGNORE`:&#xA;&#xA;```sql&#xA;INSERT OR IGNORE INTO system_dispatches (system_id, dispatched_at, payload)&#xA;SELECT system_id, dispatched_at, payload&#xA;FROM incoming_dispatches;&#xA;```&#xA;&#xA;OK, now if we check the count, we get `282`, which makes sense.&#xA;&#xA;- The original `system_dispatches` had a count of `254` before the merge.&#xA;- The original `incoming_dispatches` had a count of `38`.&#xA;- After merging `incoming_dispatches` into the `system_dispatches` table, we have `282` records in `system_dispatches`, but `254+38=292`. What about the missing `10`? Those were already there in `system_dispatches`. 
As we saw in the earlier query, there were `10` rows in common with the `incoming_dispatches` table before the merge.&#xA;&#xA;```&#xA;sqlite&gt; SELECT COUNT(*) FROM system_dispatches ;&#xA;+----------+&#xA;| COUNT(*) |&#xA;+----------+&#xA;| 254      |&#xA;+----------+&#xA;sqlite&gt; SELECT COUNT(*) FROM incoming_dispatches;&#xA;+----------+&#xA;| COUNT(*) |&#xA;+----------+&#xA;| 38       |&#xA;+----------+&#xA;sqlite&gt; INSERT INTO system_dispatches (system_id, dispatched_at, payload)&#xA;SELECT system_id, dispatched_at, payload&#xA;FROM incoming_dispatches;&#xA;Runtime error: UNIQUE constraint failed: system_dispatches.system_id, system_dispatches.dispatched_at, system_dispatches.payload (19)&#xA;sqlite&gt; INSERT OR IGNORE INTO system_dispatches (system_id, dispatched_at, payload)&#xA;SELECT system_id, dispatched_at, payload&#xA;FROM incoming_dispatches;&#xA;sqlite&gt; SELECT COUNT(*) FROM system_dispatches ;&#xA;+----------+&#xA;| COUNT(*) |&#xA;+----------+&#xA;| 282      |&#xA;+----------+&#xA;sqlite&gt;&#xA;```&#xA;&#xA;Now, on to the actual problem.&#xA;&#xA;What do we need to do now?&#xA;&#xA;- Use only the most recent dispatch from a primary source. 
&#xA;- Once the correct dispatch has been identified for every system, combine the results and order them by dispatched_at in ascending order &#xA;- Reveal the confirmation phrase&#xA;&#xA;We have the single source now!&#xA;&#xA;```sql&#xA;SELECT * FROM system_dispatches LIMIT 10;&#xA;```&#xA;&#xA;```&#xA;SELECT * FROM system_dispatches LIMIT 10;&#xA;+----+-----------+---------------------+----------------------------------------+---------------+&#xA;| id | system_id |    dispatched_at    |                payload                 | marker_letter |&#xA;+----+-----------+---------------------+----------------------------------------+---------------+&#xA;|    | SYS-0081  | 2025-12-21T06:02:26 | {&#34;marker&#34;: &#34;T&#34;, &#34;source&#34;: &#34;secondary&#34;} | T             |&#xA;|    | SYS-0137  | 2025-12-19T06:03:21 | {&#34;marker&#34;: &#34;D&#34;, &#34;source&#34;: &#34;secondary&#34;} | D             |&#xA;|    | SYS-0237  | 2025-12-19T06:23:37 | {&#34;marker&#34;: &#34;D&#34;, &#34;source&#34;: &#34;secondary&#34;} | D             |&#xA;|    | SYS-0006  | 2025-12-24T18:10:16 | {&#34;marker&#34;: &#34;K&#34;, &#34;source&#34;: &#34;secondary&#34;} | K             |&#xA;|    | SYS-0170  | 2025-12-19T06:17:24 | {&#34;marker&#34;: &#34;I&#34;, &#34;source&#34;: &#34;secondary&#34;} | I             |&#xA;|    | SYS-0224  | 2025-12-19T06:23:24 | {&#34;marker&#34;: &#34;S&#34;, &#34;source&#34;: &#34;secondary&#34;} | S             |&#xA;|    | SYS-0007  | 2025-12-24T18:10:06 | {&#34;marker&#34;: &#34;O&#34;, &#34;source&#34;: &#34;secondary&#34;} | O             |&#xA;|    | SYS-0035  | 2025-12-23T15:55:34 | {&#34;marker&#34;: &#34;C&#34;, &#34;source&#34;: &#34;secondary&#34;} | C             |&#xA;|    | SYS-0010  | 2025-12-23T15:55:09 | {&#34;marker&#34;: &#34;L&#34;, &#34;source&#34;: &#34;secondary&#34;} | L             |&#xA;|    | SYS-0037  | 2025-12-23T15:55:36 | {&#34;marker&#34;: &#34;D&#34;, &#34;source&#34;: &#34;secondary&#34;} | D             
|&#xA;+----+-----------+---------------------+----------------------------------------+---------------+&#xA;sqlite&gt;&#xA;```&#xA;&#xA;We have four columns to work with.&#xA;&#xA;- system_id&#xA;- dispatched_at&#xA;- payload (JSON string)&#xA;- marker_letter (it&#39;s a column generated from the payload[&#34;marker&#34;] string)&#xA;&#xA;So, we need to group by system, that&#39;s what the `system_id` is for.&#xA;&#xA;Then we need to order by `dispatched_at` to find the latest ones.&#xA;&#xA;Finally, we also need to filter to the records that have `payload[&#34;source&#34;]` set to `primary`.&#xA;&#xA;But wait, if we group by `system_id`, then how can we pick the latest dispatch within each group? We can&#39;t use it in the `ORDER BY` as the group would have already been created, right?&#xA;&#xA;Oh! We might need window functions, but just a moment, can we do it without them?&#xA;&#xA;Let&#39;s try.&#xA;&#xA;```sql&#xA;SELECT &#xA;    system_id,&#xA;    MAX(dispatched_at),&#xA;    payload, &#xA;    marker_letter&#xA;FROM system_dispatches&#xA;WHERE json_extract(payload, &#39;$.source&#39;) = &#39;primary&#39;&#xA;GROUP BY system_id&#xA;ORDER BY dispatched_at ASC;&#xA;```&#xA;&#xA;Oh boy, that was simple!&#xA;&#xA;We just grouped by `system_id`, filtered the source to `primary`, and selected the row with `MAX(dispatched_at)`, which gives us the latest dispatch record for a system. 
Boom!&#xA;&#xA;The confirmation phrase is `ADVENTOFSQL`.&#xA;&#xA;We can just select the `marker_letter`:&#xA;&#xA;```sql&#xA;SELECT&#xA;    marker_letter&#xA;FROM system_dispatches&#xA;WHERE json_extract(payload, &#39;$.source&#39;) = &#39;primary&#39;&#xA;GROUP BY system_id&#xA;ORDER BY dispatched_at ASC;&#xA;```&#xA;&#xA;&#xA;```&#xA;sqlite&gt; SELECT&#xA;    marker_letter&#xA;FROM system_dispatches&#xA;WHERE json_extract(payload, &#39;$.source&#39;) = &#39;primary&#39;&#xA;GROUP BY system_id&#xA;ORDER BY dispatched_at ASC;&#xA;+---------------+&#xA;| marker_letter |&#xA;+---------------+&#xA;| X             |&#xA;| Y             |&#xA;| Z             |&#xA;| M             |&#xA;| P             |&#xA;| R             |&#xA;| K             |&#xA;| B             |&#xA;| U             |&#xA;| C             |&#xA;| J             |&#xA;+---------------+&#xA;sqlite&gt;&#xA;```&#xA;&#xA;Whoops! What happened? The marker letters changed!&#xA;&#xA;Yep, because we grouped by `system_id` but never told SQLite which row of each group to return. With `MAX(dispatched_at)` in the select list, we were getting the latest record of each group. 
Without it, SQLite returns a row picked arbitrarily from each group, which may well be the oldest `dispatched_at` instead of the latest.&#xA;&#xA;So, we need `MAX(dispatched_at)` included in the selected result set.&#xA;&#xA;&#xA;```sql&#xA;SELECT&#xA;    marker_letter, MAX(dispatched_at)&#xA;FROM system_dispatches&#xA;WHERE json_extract(payload, &#39;$.source&#39;) = &#39;primary&#39;&#xA;GROUP BY system_id&#xA;ORDER BY dispatched_at ASC;&#xA;```&#xA;&#xA;```&#xA; SELECT&#xA;    marker_letter, MAX(dispatched_at)&#xA;FROM system_dispatches&#xA;WHERE json_extract(payload, &#39;$.source&#39;) = &#39;primary&#39;&#xA;GROUP BY system_id&#xA;ORDER BY dispatched_at ASC;&#xA;+---------------+---------------------+&#xA;| marker_letter | MAX(dispatched_at)  |&#xA;+---------------+---------------------+&#xA;| A             | 2025-12-24T08:10:00 |&#xA;| D             | 2025-12-24T08:10:01 |&#xA;| V             | 2025-12-24T08:10:02 |&#xA;| E             | 2025-12-24T08:10:03 |&#xA;| N             | 2025-12-24T08:10:04 |&#xA;| T             | 2025-12-24T08:10:05 |&#xA;| O             | 2025-12-24T08:10:06 |&#xA;| F             | 2025-12-24T08:10:07 |&#xA;| S             | 2025-12-24T08:10:08 |&#xA;| Q             | 2025-12-24T08:10:09 |&#xA;| L             | 2025-12-24T08:10:10 |&#xA;+---------------+---------------------+&#xA;sqlite&gt;&#xA;```&#xA;&#xA;There we go! It scared me for a moment.&#xA;&#xA;We keep only the rows whose `source` is `primary`, pick the latest `dispatched_at` for each `system_id`, and order the result ascending.&#xA;&#xA;If you don&#39;t want to do it this way, 
I found a few more hacks&#xA;&#xA;### With Subquery&#xA;&#xA;We can SELECT the `MAX(dispatched_at)` in a subquery for that system_id and filter based on the `primary` source as usual.&#xA;&#xA;```sql&#xA;SELECT &#xA;    *&#xA;FROM system_dispatches&#xA;WHERE &#xA;    json_extract(system_dispatches.payload, &#39;$.source&#39;) = &#39;primary&#39;&#xA;    AND system_dispatches.dispatched_at = (&#xA;        SELECT &#xA;            MAX(latest_dispatches.dispatched_at)&#xA;        FROM system_dispatches latest_dispatches&#xA;        WHERE &#xA;            latest_dispatches.system_id = system_dispatches.system_id&#xA;            AND json_extract(latest_dispatches.payload, &#39;$.source&#39;) = &#39;primary&#39;&#xA;   )&#xA;ORDER BY system_dispatches.dispatched_at ASC;&#xA;```&#xA;&#xA;```&#xA;sqlite&gt; SELECT &#xA;    *&#xA;FROM system_dispatches&#xA;WHERE &#xA;    json_extract(system_dispatches.payload, &#39;$.source&#39;) = &#39;primary&#39;&#xA;    AND system_dispatches.dispatched_at = (&#xA;        SELECT &#xA;            MAX(latest_dispatches.dispatched_at)&#xA;        FROM system_dispatches latest_dispatches&#xA;        WHERE &#xA;            latest_dispatches.system_id = system_dispatches.system_id&#xA;            AND json_extract(latest_dispatches.payload, &#39;$.source&#39;) = &#39;primary&#39;&#xA;   )&#xA;ORDER BY system_dispatches.dispatched_at ASC;&#xA;&#xA;+----+-----------+---------------------+--------------------------------------+---------------+&#xA;| id | system_id |    dispatched_at    |               payload                | marker_letter |&#xA;+----+-----------+---------------------+--------------------------------------+---------------+&#xA;|    | SYS-0001  | 2025-12-24T08:10:00 | {&#34;marker&#34;: &#34;A&#34;, &#34;source&#34;: &#34;primary&#34;} | A             |&#xA;|    | SYS-0002  | 2025-12-24T08:10:01 | {&#34;marker&#34;: &#34;D&#34;, &#34;source&#34;: &#34;primary&#34;} | D             |&#xA;|    | SYS-0003  | 2025-12-24T08:10:02 | 
{&#34;marker&#34;: &#34;V&#34;, &#34;source&#34;: &#34;primary&#34;} | V             |&#xA;|    | SYS-0004  | 2025-12-24T08:10:03 | {&#34;marker&#34;: &#34;E&#34;, &#34;source&#34;: &#34;primary&#34;} | E             |&#xA;|    | SYS-0005  | 2025-12-24T08:10:04 | {&#34;marker&#34;: &#34;N&#34;, &#34;source&#34;: &#34;primary&#34;} | N             |&#xA;|    | SYS-0006  | 2025-12-24T08:10:05 | {&#34;marker&#34;: &#34;T&#34;, &#34;source&#34;: &#34;primary&#34;} | T             |&#xA;|    | SYS-0007  | 2025-12-24T08:10:06 | {&#34;marker&#34;: &#34;O&#34;, &#34;source&#34;: &#34;primary&#34;} | O             |&#xA;|    | SYS-0008  | 2025-12-24T08:10:07 | {&#34;marker&#34;: &#34;F&#34;, &#34;source&#34;: &#34;primary&#34;} | F             |&#xA;|    | SYS-0009  | 2025-12-24T08:10:08 | {&#34;marker&#34;: &#34;S&#34;, &#34;source&#34;: &#34;primary&#34;} | S             |&#xA;|    | SYS-0010  | 2025-12-24T08:10:09 | {&#34;marker&#34;: &#34;Q&#34;, &#34;source&#34;: &#34;primary&#34;} | Q             |&#xA;|    | SYS-0011  | 2025-12-24T08:10:10 | {&#34;marker&#34;: &#34;L&#34;, &#34;source&#34;: &#34;primary&#34;} | L             |&#xA;+----+-----------+---------------------+--------------------------------------+---------------+&#xA;sqlite&gt;&#xA;```&#xA;&#xA;### Window Function&#xA;&#xA;We can also use a window function to partition by `system_id` and, within each partition, order by `dispatched_at` with the latest first. 
We can then wrap it in a CTE and select from it.&#xA;&#xA;We can use the `ROW_NUMBER` window function to assign a rank to each row per `system_id`, ordered by `dispatched_at` descending:&#xA;&#xA;```sql&#xA;WITH latest_dispatches AS (&#xA;    SELECT&#xA;        *,&#xA;        ROW_NUMBER() OVER (&#xA;            PARTITION BY system_id&#xA;            ORDER BY dispatched_at DESC&#xA;        ) as rank&#xA;    FROM system_dispatches&#xA;    WHERE json_extract(payload, &#39;$.source&#39;) = &#39;primary&#39;&#xA;) SELECT * FROM latest_dispatches WHERE rank = 1 ORDER BY dispatched_at ASC;&#xA;```&#xA;&#xA;The rest is the same idea as the queries above: we rank the latest dispatch first within each system, keep `rank = 1`, and then order the final result set by `dispatched_at` ascending (oldest first).&#xA;&#xA;```&#xA;sqlite&gt; WITH latest_dispatches AS (&#xA;    SELECT&#xA;        *,&#xA;        ROW_NUMBER() OVER (&#xA;            PARTITION BY system_id&#xA;            ORDER BY dispatched_at DESC&#xA;        ) as rank&#xA;    FROM system_dispatches&#xA;    WHERE json_extract(payload, &#39;$.source&#39;) = &#39;primary&#39;&#xA;) SELECT * FROM latest_dispatches WHERE rank = 1 ORDER BY dispatched_at ASC;&#xA;+----+-----------+---------------------+--------------------------------------+---------------+------+&#xA;| id | system_id |    dispatched_at    |               payload                | marker_letter | rank |&#xA;+----+-----------+---------------------+--------------------------------------+---------------+------+&#xA;|    | SYS-0001  | 2025-12-24T08:10:00 | {&#34;marker&#34;: &#34;A&#34;, &#34;source&#34;: &#34;primary&#34;} | A             | 1    |&#xA;|    | SYS-0002  | 2025-12-24T08:10:01 | {&#34;marker&#34;: &#34;D&#34;, &#34;source&#34;: &#34;primary&#34;} | D             | 1    |&#xA;|    | SYS-0003  | 2025-12-24T08:10:02 | {&#34;marker&#34;: &#34;V&#34;, &#34;source&#34;: &#34;primary&#34;} | V             | 1    |&#xA;|    | SYS-0004  | 2025-12-24T08:10:03 | {&#34;marker&#34;: &#34;E&#34;, &#34;source&#34;: &#34;primary&#34;} | E           
  | 1    |&#xA;|    | SYS-0005  | 2025-12-24T08:10:04 | {&#34;marker&#34;: &#34;N&#34;, &#34;source&#34;: &#34;primary&#34;} | N             | 1    |&#xA;|    | SYS-0006  | 2025-12-24T08:10:05 | {&#34;marker&#34;: &#34;T&#34;, &#34;source&#34;: &#34;primary&#34;} | T             | 1    |&#xA;|    | SYS-0007  | 2025-12-24T08:10:06 | {&#34;marker&#34;: &#34;O&#34;, &#34;source&#34;: &#34;primary&#34;} | O             | 1    |&#xA;|    | SYS-0008  | 2025-12-24T08:10:07 | {&#34;marker&#34;: &#34;F&#34;, &#34;source&#34;: &#34;primary&#34;} | F             | 1    |&#xA;|    | SYS-0009  | 2025-12-24T08:10:08 | {&#34;marker&#34;: &#34;S&#34;, &#34;source&#34;: &#34;primary&#34;} | S             | 1    |&#xA;|    | SYS-0010  | 2025-12-24T08:10:09 | {&#34;marker&#34;: &#34;Q&#34;, &#34;source&#34;: &#34;primary&#34;} | Q             | 1    |&#xA;|    | SYS-0011  | 2025-12-24T08:10:10 | {&#34;marker&#34;: &#34;L&#34;, &#34;source&#34;: &#34;primary&#34;} | L             | 1    |&#xA;+----+-----------+---------------------+--------------------------------------+---------------+------+&#xA;sqlite&gt;&#xA;```&#xA;&#xA;Phew!&#xA;&#xA;That was a good one to end the Advent of SQL!&#xA;&#xA;I enjoyed it!&#xA;&#xA;I learnt a ton:&#xA;- CTEs&#xA;- JOINs (some weird stuff can be done with them)&#xA;- Window Functions (LAG, LEAD, ROW_NUMBER)&#xA;- FTS (in SQLite)&#xA;- JSON parsing&#xA;- date shenanigans&#xA;- CTEs don&#39;t support INSERT and DELETE in SQLite (it ruined my day 10 solution)&#xA;- Recursive CTEs&#xA;- String manipulation (thanks to XML parsing)&#xA;- I need to write a post explaining what I learned.&#xA;&#xA;Thanks to [Aaron Francis](https://aaronfrancis.com/) from [databaseschool](https://databaseschool.com/) for this challenge and for explaining each day in depth (I didn&#39;t watch them all), and to [Kelsey Petrich](https://x.com/krpetrich) for the lore of each problem; those were really lovely to read!&#xA;&#xA;Happy Coding :)&#xA;Merry Christmas&#xA;Happy New Year&#xA;Whatever you 
celebrate!&#xA;</content>
      <type>sqlog</type>
    </item>
    <item>
      <title>Advent of SQL 2025 Day 14: Ski Resort Paths</title>
      <link>https://meetgor.com/sqlog/advent-of-sql-2025-day-14</link>
      <description>Advent of SQL, Day 14 - Ski Resort Paths Ok, we have reached the penultimate day of the series. It is day 14 of Advent of SQL. Let&#39;s grab the SQL for the day. That&#39;s</description>
      <pubDate>Sun, 28 Dec 2025 00:00:00 UTC</pubDate>
      <content>&#xA;&#xA;## Advent of SQL, Day 14 - Ski Resort Paths&#xA;&#xA;Ok, we have reached the penultimate day of the series. It is day 14 of Advent of SQL.&#xA;&#xA;Let&#39;s grab the SQL for the day.&#xA;&#xA;```sql&#xA;DROP TABLE IF EXISTS mountain_network;&#xA;&#xA;&#xA;CREATE TABLE mountain_network (&#xA;    id INTEGER PRIMARY KEY,&#xA;    from_node TEXT,&#xA;    to_node TEXT,&#xA;    node_type TEXT,    -- &#39;Lift&#39; or &#39;Trail&#39;&#xA;    difficulty TEXT    -- Only applicable for trails: &#39;green&#39;, &#39;blue&#39;, &#39;black&#39;, &#39;double_black&#39;&#xA;);&#xA;&#xA;INSERT INTO mountain_network (id, from_node, to_node, node_type, difficulty) VALUES&#xA;(1, &#39;Outlaw Express&#39;, &#39;Stairway Lift&#39;, &#39;Lift&#39;, NULL),&#xA;(2, &#39;Outlaw Express&#39;, &#39;Top Gun Bowl&#39;, &#39;Trail&#39;, &#39;black&#39;),&#xA;(3, &#39;Top Gun Bowl&#39;, &#39;Top Gun&#39;, &#39;Trail&#39;, &#39;black&#39;),&#xA;(4, &#39;Top Gun&#39;, &#39;Montoya&#39;, &#39;Trail&#39;, &#39;blue&#39;),&#xA;(5, &#39;Montoya&#39;, &#39;Center Aisle&#39;, &#39;Trail&#39;, &#39;green&#39;),&#xA;(6, &#39;Center Aisle&#39;, &#39;Lower Stampede&#39;, &#39;Trail&#39;, &#39;green&#39;),&#xA;(7, &#39;Stairway Lift&#39;, &#39;Red&#39;&#39;s Lift&#39;, &#39;Lift&#39;, NULL),&#xA;(8, &#39;Stairway Lift&#39;, &#39;Broadway&#39;, &#39;Trail&#39;, &#39;green&#39;),&#xA;(9, &#39;Red&#39;&#39;s Lift&#39;, &#39;Bearclaw&#39;, &#39;Trail&#39;, &#39;blue&#39;),&#xA;(10, &#39;Bearclaw&#39;, &#39;Last Chance&#39;, &#39;Trail&#39;, &#39;blue&#39;),&#xA;(11, &#39;Last Chance&#39;, &#39;Diamondback&#39;, &#39;Trail&#39;, &#39;blue&#39;),&#xA;(12, &#39;Diamondback&#39;, &#39;Broadway&#39;, &#39;Trail&#39;, &#39;green&#39;),&#xA;(13, &#39;Red&#39;&#39;s Lift&#39;, &#39;Bishop&#39;&#39;s Bowl&#39;, &#39;Trail&#39;, &#39;black&#39;),&#xA;(14, &#39;Red&#39;&#39;s Lift&#39;, &#39;Amy&#39;&#39;s Ridge&#39;, &#39;Trail&#39;, &#39;blue&#39;),&#xA;(15, &#39;Amy&#39;&#39;s Ridge&#39;, &#39;Grizzly 
Bowl&#39;, &#39;Trail&#39;, &#39;black&#39;),&#xA;(16, &#39;Flathead Lift&#39;, &#39;Amy&#39;&#39;s Ridge&#39;, &#39;Trail&#39;, &#39;blue&#39;),&#xA;(17, &#39;Jake&#39;&#39;s Lift&#39;, &#39;Wildwood Lift&#39;, &#39;Lift&#39;, NULL),&#xA;(18, &#39;Wildwood Lift&#39;, &#39;Sidewinder&#39;, &#39;Trail&#39;, &#39;green&#39;),&#xA;(19, &#39;Wildwood Lift&#39;, &#39;Brightside&#39;, &#39;Trail&#39;, &#39;blue&#39;),&#xA;(20, &#39;Brightside&#39;, &#39;Moonrise&#39;, &#39;Trail&#39;, &#39;green&#39;),&#xA;(21, &#39;Moonrise&#39;, &#39;Draw&#39;, &#39;Trail&#39;, &#39;green&#39;),&#xA;(22, &#39;Moonrise&#39;, &#39;Lone Pine&#39;, &#39;Trail&#39;, &#39;blue&#39;),&#xA;(23, &#39;Draw&#39;, &#39;Maverick&#39;, &#39;Trail&#39;, &#39;blue&#39;),&#xA;(24, &#39;Draw&#39;, &#39;Broadway&#39;, &#39;Trail&#39;, &#39;green&#39;),&#xA;(25, &#39;Broadway&#39;, &#39;Outlaw Trail&#39;, &#39;Trail&#39;, &#39;green&#39;),&#xA;(26, &#39;Outlaw Trail&#39;, &#39;Center Aisle&#39;, &#39;Trail&#39;, &#39;green&#39;),&#xA;(27, &#39;Center Aisle&#39;, &#39;Bandit&#39;, &#39;Trail&#39;, &#39;green&#39;),&#xA;(28, &#39;Jake&#39;&#39;s Lift&#39;, &#39;Maverick&#39;, &#39;Trail&#39;, &#39;blue&#39;);&#xA;&#xA;```&#xA;&#xA;That&#39;s it: we just have one table, `mountain_network`, with `28` records.&#xA;&#xA;Let&#39;s look at the problem statement to see what we need to do to make sense of those 28 rows.&#xA;&#xA;&#xA;## Problem&#xA;&#xA;&gt; Find all the possible routes from Jake&#39;s Lift to Maverick. None of the possible routes will take more than 12 connections.&#xA;&#xA;&#xA;Oh! This is a graph-like, network-like problem.&#xA;&#xA;Ouch!&#xA;&#xA;Relational databases look like they would be a good fit for this kind of thing, but when the data sits in a single table of edges, there isn&#39;t much built-in help.&#xA;&#xA;Let&#39;s see how we can think about it.&#xA;&#xA;### JOINs and UNIONs&#xA;&#xA;We need to find all the routes from a start node (record), i.e. 
`Jake&#39;s Lift`, and find all the ways that lead to the end node (record), i.e. `Maverick`. The table gives us a list of edges, i.e. which node connects to which node via a trail or a lift.&#xA;&#xA;So, we can do a simple select to check if the `from_node` is `Jake&#39;s Lift` and the `to_node` is `Maverick`, right?&#xA;&#xA;But that only covers direct routes.&#xA;&#xA;For longer routes we would have to chain up to 12 consecutive edges, branching off in every direction.&#xA;&#xA;For the first level it looks simple, like this:&#xA;&#xA;```sql&#xA;SELECT &#xA;    mountain_network.from_node || &#39; -&gt; &#39; || mountain_network.to_node AS path, &#xA;    1 AS connections&#xA;FROM mountain_network&#xA;WHERE &#xA;    mountain_network.from_node = &#39;Jake&#39;&#39;s Lift&#39;&#xA;    AND mountain_network.to_node = &#39;Maverick&#39;;&#xA;```&#xA;&#xA;```&#xA;sqlite&gt; SELECT mountain_network.from_node || &#39; -&gt; &#39; || mountain_network.to_node AS path, 1 AS connections&#xA;FROM mountain_network&#xA;WHERE mountain_network.from_node = &#39;Jake&#39;&#39;s Lift&#39; AND mountain_network.to_node = &#39;Maverick&#39;&#xA;   ...&gt; ;&#xA;+-------------------------+-------------+&#xA;|          path           | connections |&#xA;+-------------------------+-------------+&#xA;| Jake&#39;s Lift -&gt; Maverick | 1           |&#xA;+-------------------------+-------------+&#xA;sqlite&gt;&#xA;```&#xA;&#xA;Luckily, we have one route directly from `Jake&#39;s Lift` to `Maverick`. But there might be more routes at deeper levels. 
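
As a quick sanity check (just an exploratory query, not part of the final solution), we can list the immediate edges leaving the start node:

```sql
-- exploratory: which edges leave Jake&#39;s Lift?
SELECT from_node, to_node, node_type
FROM mountain_network
WHERE from_node = &#39;Jake&#39;&#39;s Lift&#39;;
```

Only two rows come back: the lift to `Wildwood Lift` (row 17) and the direct blue trail to `Maverick` (row 28), so every route has to begin with one of these two branches.
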
We then need to check every node reachable from `Jake&#39;s Lift`, then every node reachable from those, branching off further until a path ends at `Maverick`.&#xA;&#xA;Phew, that&#39;s going to be a long one.&#xA;&#xA;```sql&#xA;SELECT&#xA;    T1.from_node || &#39; -&gt; &#39; || T1.to_node || &#39; -&gt; &#39; || T2.to_node AS path,&#xA;    2 AS connections&#xA;FROM mountain_network T1&#xA;JOIN mountain_network T2 ON T1.to_node = T2.from_node&#xA;WHERE &#xA;    T1.from_node = &#39;Jake&#39;&#39;s Lift&#39;&#xA;    AND T2.to_node = &#39;Maverick&#39;;&#xA;```&#xA;&#xA;That is the first branch off from `Jake&#39;s Lift`. If we remove the `T2.to_node = &#39;Maverick&#39;` condition, we see all the two-connection paths from `Jake&#39;s Lift`, not necessarily ending at `Maverick`:&#xA;&#xA;```sql&#xA;SELECT T1.from_node || &#39; -&gt; &#39; || T1.to_node || &#39; -&gt; &#39; || T2.to_node AS path,&#xA;    2 AS connections&#xA;FROM mountain_network T1&#xA;JOIN mountain_network T2 ON T1.to_node = T2.from_node&#xA;WHERE T1.from_node = &#39;Jake&#39;&#39;s Lift&#39;;&#xA;```&#xA;&#xA;```&#xA;sqlite&gt; SELECT T1.from_node || &#39; -&gt; &#39; || T1.to_node || &#39; -&gt; &#39; || T2.to_node as path,&#xA; 2 as connections&#xA;FROM mountain_network T1&#xA;JOIN mountain_network T2 ON T1.to_node = T2.from_node&#xA;WHERE T1.from_node = &#39;Jake&#39;&#39;s Lift&#39;;&#xA;+--------------------------------------------+-------------+&#xA;|                    path                    | connections |&#xA;+--------------------------------------------+-------------+&#xA;| Jake&#39;s Lift -&gt; Wildwood Lift -&gt; Brightside | 2           |&#xA;| Jake&#39;s Lift -&gt; Wildwood Lift -&gt; Sidewinder | 2           |&#xA;+--------------------------------------------+-------------+&#xA;sqlite&gt;&#xA;```&#xA;&#xA;Similarly we can do it for 3, 4, and so on, up to 12 connections.&#xA;&#xA;```sql&#xA;SELECT &#xA;    T1.from_node || &#39; -&gt; &#39; || T1.to_node || &#39; -&gt; &#39; || T2.to_node 
|| &#39; -&gt; &#39; || T3.to_node as path,&#xA;    3 as connections&#xA;FROM mountain_network T1&#xA;JOIN mountain_network T2 ON T1.to_node = T2.from_node&#xA;JOIN mountain_network T3 ON T2.to_node = T3.from_node&#xA;WHERE T1.from_node = &#39;Jake&#39;&#39;s Lift&#39;;&#xA;```&#xA;&#xA;```&#xA;sqlite&gt; SELECT T1.from_node || &#39; -&gt; &#39; || T1.to_node || &#39; -&gt; &#39; || T2.to_node || &#39; -&gt; &#39; || T3.to_node as path, 3 as connections&#xA;FROM mountain_network T1&#xA;JOIN mountain_network T2 ON T1.to_node = T2.from_node&#xA;JOIN mountain_network T3 ON T2.to_node = T3.from_node&#xA;WHERE T1.from_node = &#39;Jake&#39;&#39;s Lift&#39;&#xA;   ...&gt; ;&#xA;+--------------------------------------------------------+-------------+&#xA;|                          path                          | connections |&#xA;+--------------------------------------------------------+-------------+&#xA;| Jake&#39;s Lift -&gt; Wildwood Lift -&gt; Brightside -&gt; Moonrise | 3           |&#xA;+--------------------------------------------------------+-------------+&#xA;sqlite&gt;&#xA;```&#xA;&#xA;&#xA;Yikes.&#xA;&#xA;That gives a long long query. 
I cannot write that by hand. Here is the full thing anyway, all twelve levels glued together with `UNION ALL`:&#xA;&#xA;```sql&#xA;SELECT T1.from_node || &#39; -&gt; &#39; || T1.to_node AS path, 1 AS connections&#xA;FROM mountain_network T1&#xA;WHERE T1.from_node = &#39;Jake&#39;&#39;s Lift&#39; AND T1.to_node = &#39;Maverick&#39;&#xA;&#xA;UNION ALL&#xA;&#xA;SELECT T1.from_node || &#39; -&gt; &#39; || T1.to_node || &#39; -&gt; &#39; || T2.to_node, 2&#xA;FROM mountain_network T1&#xA;JOIN mountain_network T2 ON T1.to_node = T2.from_node&#xA;WHERE T1.from_node = &#39;Jake&#39;&#39;s Lift&#39; AND T2.to_node = &#39;Maverick&#39;&#xA;&#xA;UNION ALL&#xA;&#xA;SELECT T1.from_node || &#39; -&gt; &#39; || T1.to_node || &#39; -&gt; &#39; || T2.to_node || &#39; -&gt; &#39; || T3.to_node, 3&#xA;FROM mountain_network T1&#xA;JOIN mountain_network T2 ON T1.to_node = T2.from_node&#xA;JOIN mountain_network T3 ON T2.to_node = T3.from_node&#xA;WHERE T1.from_node = &#39;Jake&#39;&#39;s Lift&#39; AND T3.to_node = &#39;Maverick&#39;&#xA;&#xA;UNION ALL&#xA;&#xA;SELECT T1.from_node || &#39; -&gt; &#39; || T1.to_node || &#39; -&gt; &#39; || T2.to_node || &#39; -&gt; &#39; || T3.to_node || &#39; -&gt; &#39; || T4.to_node, 4&#xA;FROM mountain_network T1&#xA;JOIN mountain_network T2 ON T1.to_node = T2.from_node&#xA;JOIN mountain_network T3 ON T2.to_node = T3.from_node&#xA;JOIN mountain_network T4 ON T3.to_node = T4.from_node&#xA;WHERE T1.from_node = &#39;Jake&#39;&#39;s Lift&#39; AND T4.to_node = &#39;Maverick&#39;&#xA;&#xA;UNION ALL&#xA;&#xA;SELECT T1.from_node || &#39; -&gt; &#39; || T1.to_node || &#39; -&gt; &#39; || T2.to_node || &#39; -&gt; &#39; || T3.to_node || &#39; -&gt; &#39; || T4.to_node || &#39; -&gt; &#39; || T5.to_node, 5&#xA;FROM mountain_network T1&#xA;JOIN mountain_network T2 ON T1.to_node = T2.from_node&#xA;JOIN mountain_network T3 ON T2.to_node = T3.from_node&#xA;JOIN mountain_network T4 ON T3.to_node = T4.from_node&#xA;JOIN mountain_network T5 ON T4.to_node = T5.from_node&#xA;WHERE T1.from_node = &#39;Jake&#39;&#39;s Lift&#39; AND T5.to_node = 
&#39;Maverick&#39;&#xA;&#xA;UNION ALL&#xA;&#xA;SELECT T1.from_node || &#39; -&gt; &#39; || T1.to_node || &#39; -&gt; &#39; || T2.to_node || &#39; -&gt; &#39; || T3.to_node || &#39; -&gt; &#39; || T4.to_node || &#39; -&gt; &#39; || T5.to_node || &#39; -&gt; &#39; || T6.to_node, 6&#xA;FROM mountain_network T1&#xA;JOIN mountain_network T2 ON T1.to_node = T2.from_node&#xA;JOIN mountain_network T3 ON T2.to_node = T3.from_node&#xA;JOIN mountain_network T4 ON T3.to_node = T4.from_node&#xA;JOIN mountain_network T5 ON T4.to_node = T5.from_node&#xA;JOIN mountain_network T6 ON T5.to_node = T6.from_node&#xA;WHERE T1.from_node = &#39;Jake&#39;&#39;s Lift&#39; AND T6.to_node = &#39;Maverick&#39;&#xA;&#xA;&#xA;UNION ALL&#xA;&#xA;SELECT T1.from_node || &#39; -&gt; &#39; || T1.to_node || &#39; -&gt; &#39; || T2.to_node || &#39; -&gt; &#39; || T3.to_node || &#39; -&gt; &#39; || T4.to_node || &#39; -&gt; &#39; || T5.to_node || &#39; -&gt; &#39; || T6.to_node || &#39; -&gt; &#39; || T7.to_node, 7&#xA;FROM mountain_network T1&#xA;JOIN mountain_network T2 ON T1.to_node = T2.from_node&#xA;JOIN mountain_network T3 ON T2.to_node = T3.from_node&#xA;JOIN mountain_network T4 ON T3.to_node = T4.from_node&#xA;JOIN mountain_network T5 ON T4.to_node = T5.from_node&#xA;JOIN mountain_network T6 ON T5.to_node = T6.from_node&#xA;JOIN mountain_network T7 ON T6.to_node = T7.from_node&#xA;WHERE T1.from_node = &#39;Jake&#39;&#39;s Lift&#39; AND T7.to_node = &#39;Maverick&#39;&#xA;&#xA;UNION ALL&#xA;&#xA;SELECT T1.from_node || &#39; -&gt; &#39; || T1.to_node || &#39; -&gt; &#39; || T2.to_node || &#39; -&gt; &#39; || T3.to_node || &#39; -&gt; &#39; || T4.to_node || &#39; -&gt; &#39; || T5.to_node || &#39; -&gt; &#39; || T6.to_node || &#39; -&gt; &#39; || T7.to_node || &#39; -&gt; &#39; || T8.to_node, 8&#xA;FROM mountain_network T1&#xA;JOIN mountain_network T2 ON T1.to_node = T2.from_node&#xA;JOIN mountain_network T3 ON T2.to_node = T3.from_node&#xA;JOIN mountain_network T4 ON T3.to_node = 
T4.from_node&#xA;JOIN mountain_network T5 ON T4.to_node = T5.from_node&#xA;JOIN mountain_network T6 ON T5.to_node = T6.from_node&#xA;JOIN mountain_network T7 ON T6.to_node = T7.from_node&#xA;JOIN mountain_network T8 ON T7.to_node = T8.from_node&#xA;WHERE T1.from_node = &#39;Jake&#39;&#39;s Lift&#39; AND T8.to_node = &#39;Maverick&#39;&#xA;&#xA;UNION ALL&#xA;&#xA;SELECT T1.from_node || &#39; -&gt; &#39; || T1.to_node || &#39; -&gt; &#39; || T2.to_node || &#39; -&gt; &#39; || T3.to_node || &#39; -&gt; &#39; || T4.to_node || &#39; -&gt; &#39; || T5.to_node || &#39; -&gt; &#39; || T6.to_node || &#39; -&gt; &#39; || T7.to_node || &#39; -&gt; &#39; || T8.to_node || &#39; -&gt; &#39; || T9.to_node, 9&#xA;FROM mountain_network T1&#xA;JOIN mountain_network T2 ON T1.to_node = T2.from_node&#xA;JOIN mountain_network T3 ON T2.to_node = T3.from_node&#xA;JOIN mountain_network T4 ON T3.to_node = T4.from_node&#xA;JOIN mountain_network T5 ON T4.to_node = T5.from_node&#xA;JOIN mountain_network T6 ON T5.to_node = T6.from_node&#xA;JOIN mountain_network T7 ON T6.to_node = T7.from_node&#xA;JOIN mountain_network T8 ON T7.to_node = T8.from_node&#xA;JOIN mountain_network T9 ON T8.to_node = T9.from_node&#xA;WHERE T1.from_node = &#39;Jake&#39;&#39;s Lift&#39; AND T9.to_node = &#39;Maverick&#39;&#xA;&#xA;UNION ALL&#xA;&#xA;SELECT T1.from_node || &#39; -&gt; &#39; || T1.to_node || &#39; -&gt; &#39; || T2.to_node || &#39; -&gt; &#39; || T3.to_node || &#39; -&gt; &#39; || T4.to_node || &#39; -&gt; &#39; || T5.to_node || &#39; -&gt; &#39; || T6.to_node || &#39; -&gt; &#39; || T7.to_node || &#39; -&gt; &#39; || T8.to_node || &#39; -&gt; &#39; || T9.to_node || &#39; -&gt; &#39; || T10.to_node, 10&#xA;FROM mountain_network T1&#xA;JOIN mountain_network T2 ON T1.to_node = T2.from_node&#xA;JOIN mountain_network T3 ON T2.to_node = T3.from_node&#xA;JOIN mountain_network T4 ON T3.to_node = T4.from_node&#xA;JOIN mountain_network T5 ON T4.to_node = T5.from_node&#xA;JOIN mountain_network T6 ON T5.to_node = 
T6.from_node&#xA;JOIN mountain_network T7 ON T6.to_node = T7.from_node&#xA;JOIN mountain_network T8 ON T7.to_node = T8.from_node&#xA;JOIN mountain_network T9 ON T8.to_node = T9.from_node&#xA;JOIN mountain_network T10 ON T9.to_node = T10.from_node&#xA;WHERE T1.from_node = &#39;Jake&#39;&#39;s Lift&#39; AND T10.to_node = &#39;Maverick&#39;&#xA;&#xA;UNION ALL&#xA;&#xA;SELECT T1.from_node || &#39; -&gt; &#39; || T1.to_node || &#39; -&gt; &#39; || T2.to_node || &#39; -&gt; &#39; || T3.to_node || &#39; -&gt; &#39; || T4.to_node || &#39; -&gt; &#39; || T5.to_node || &#39; -&gt; &#39; || T6.to_node || &#39; -&gt; &#39; || T7.to_node || &#39; -&gt; &#39; || T8.to_node || &#39; -&gt; &#39; || T9.to_node || &#39; -&gt; &#39; || T10.to_node || &#39; -&gt; &#39; || T11.to_node, 11&#xA;FROM mountain_network T1&#xA;JOIN mountain_network T2 ON T1.to_node = T2.from_node&#xA;JOIN mountain_network T3 ON T2.to_node = T3.from_node&#xA;JOIN mountain_network T4 ON T3.to_node = T4.from_node&#xA;JOIN mountain_network T5 ON T4.to_node = T5.from_node&#xA;JOIN mountain_network T6 ON T5.to_node = T6.from_node&#xA;JOIN mountain_network T7 ON T6.to_node = T7.from_node&#xA;JOIN mountain_network T8 ON T7.to_node = T8.from_node&#xA;JOIN mountain_network T9 ON T8.to_node = T9.from_node&#xA;JOIN mountain_network T10 ON T9.to_node = T10.from_node&#xA;JOIN mountain_network T11 ON T10.to_node = T11.from_node&#xA;WHERE T1.from_node = &#39;Jake&#39;&#39;s Lift&#39; AND T11.to_node = &#39;Maverick&#39;&#xA;&#xA;UNION ALL&#xA;&#xA;SELECT T1.from_node || &#39; -&gt; &#39; || T1.to_node || &#39; -&gt; &#39; || T2.to_node || &#39; -&gt; &#39; || T3.to_node || &#39; -&gt; &#39; || T4.to_node || &#39; -&gt; &#39; || T5.to_node || &#39; -&gt; &#39; || T6.to_node || &#39; -&gt; &#39; || T7.to_node || &#39; -&gt; &#39; || T8.to_node || &#39; -&gt; &#39; || T9.to_node || &#39; -&gt; &#39; || T10.to_node || &#39; -&gt; &#39; || T11.to_node || &#39; -&gt; &#39; || T12.to_node, 12&#xA;FROM mountain_network T1&#xA;JOIN 
mountain_network T2 ON T1.to_node = T2.from_node&#xA;JOIN mountain_network T3 ON T2.to_node = T3.from_node&#xA;JOIN mountain_network T4 ON T3.to_node = T4.from_node&#xA;JOIN mountain_network T5 ON T4.to_node = T5.from_node&#xA;JOIN mountain_network T6 ON T5.to_node = T6.from_node&#xA;JOIN mountain_network T7 ON T6.to_node = T7.from_node&#xA;JOIN mountain_network T8 ON T7.to_node = T8.from_node&#xA;JOIN mountain_network T9 ON T8.to_node = T9.from_node&#xA;JOIN mountain_network T10 ON T9.to_node = T10.from_node&#xA;JOIN mountain_network T11 ON T10.to_node = T11.from_node&#xA;JOIN mountain_network T12 ON T11.to_node = T12.from_node&#xA;WHERE T1.from_node = &#39;Jake&#39;&#39;s Lift&#39; AND T12.to_node = &#39;Maverick&#39;;&#xA;```&#xA;&#xA;```&#xA;+--------------------------------------------------------------+-------------+&#xA;|                             path                             | connections |&#xA;+--------------------------------------------------------------+-------------+&#xA;| Jake&#39;s Lift -&gt; Maverick                                      | 1           |&#xA;+--------------------------------------------------------------+-------------+&#xA;| Jake&#39;s Lift -&gt; Wildwood Lift -&gt; Brightside -&gt; Moonrise -&gt; Dr | 5           |&#xA;| aw -&gt; Maverick                                               |             |&#xA;+--------------------------------------------------------------+-------------+&#xA;sqlite&gt;&#xA;```&#xA;&#xA;&#xA;Ok, we get the result, but that looks like a terrifying query.&#xA;&#xA;Can we do better?&#xA;&#xA;I looked at the tutorial, and SQL has something built for exactly this: the `RECURSIVE CTE`.&#xA;&#xA;Wow!&#xA;&#xA;This is quite challenging to explain.&#xA;&#xA;&#xA;### Recursive CTE&#xA;&#xA;We can create a recursive CTE that keeps extending each path until it reaches the stop node, `Maverick`, or exceeds `12` connections.&#xA;&#xA;So here&#39;s how it goes.&#xA;&#xA;&#xA;```sql&#xA;WITH RECURSIVE ski_paths AS 
(&#xA;    SELECT&#xA;        from_node,&#xA;        to_node,&#xA;        CAST(from_node || &#39; -&gt; &#39; || to_node AS TEXT) AS full_path,&#xA;        1 AS connections&#xA;    FROM mountain_network&#xA;    WHERE from_node = &#39;Jake&#39;&#39;s Lift&#39;&#xA;&#xA;    UNION ALL&#xA;&#xA;    SELECT&#xA;        mn.from_node,&#xA;        mn.to_node,&#xA;        sp.full_path || &#39; -&gt; &#39; || mn.to_node,&#xA;        sp.connections + 1&#xA;    FROM ski_paths sp&#xA;    JOIN mountain_network mn ON sp.to_node = mn.from_node&#xA;    WHERE sp.connections &lt; 12&#xA;      AND sp.to_node != &#39;Maverick&#39;&#xA;)&#xA;SELECT full_path, connections&#xA;FROM ski_paths&#xA;WHERE to_node = &#39;Maverick&#39;&#xA;ORDER BY connections ASC;&#xA;```&#xA;&#xA;We first define the base case, i.e. what to select first: the edges that start at `Jake&#39;s Lift`. From there we write a recursive SELECT that references the CTE within itself.&#xA;&#xA;The recursive bit is this one:&#xA;&#xA;```sql&#xA;SELECT&#xA;    mn.from_node,&#xA;    mn.to_node,&#xA;    sp.full_path || &#39; -&gt; &#39; || mn.to_node,&#xA;    sp.connections + 1&#xA;FROM ski_paths sp&#xA;JOIN mountain_network mn ON sp.to_node = mn.from_node&#xA;WHERE sp.connections &lt; 12&#xA;  AND sp.to_node != &#39;Maverick&#39;&#xA;```&#xA;&#xA;We reference the CTE within itself: as long as a path&#39;s end node (its `to_node`) is not `Maverick` and its connection count hasn&#39;t gone beyond `12`, we keep extending it by one more edge.&#xA;&#xA;This gives us every path that starts at `Jake&#39;s Lift`.&#xA;&#xA;Let&#39;s first check what the CTE produces, as is.&#xA;&#xA;```sql&#xA;WITH RECURSIVE ski_paths AS (&#xA;    SELECT&#xA;        from_node,&#xA;        to_node,&#xA;        CAST(from_node || &#39; -&gt; &#39; || to_node AS TEXT) AS full_path,&#xA;        1 AS connections&#xA;    FROM mountain_network&#xA;    WHERE from_node = &#39;Jake&#39;&#39;s Lift&#39;&#xA;&#xA;  
  UNION ALL&#xA;&#xA;    SELECT&#xA;        mn.from_node,&#xA;        mn.to_node,&#xA;        sp.full_path || &#39; -&gt; &#39; || mn.to_node,&#xA;        sp.connections + 1&#xA;    FROM ski_paths sp&#xA;    JOIN mountain_network mn ON sp.to_node = mn.from_node&#xA;    WHERE sp.connections &lt; 12&#xA;      AND sp.to_node != &#39;Maverick&#39;&#xA;)&#xA;SELECT * from ski_paths;&#xA;&#xA;```&#xA;&#xA;```&#xA;sqlite&gt; WITH RECURSIVE ski_paths AS (&#xA;    SELECT&#xA;        from_node,&#xA;        to_node,&#xA;        CAST(from_node || &#39; -&gt; &#39; || to_node AS TEXT) AS full_path,&#xA;        1 AS connections&#xA;    FROM mountain_network&#xA;    WHERE from_node = &#39;Jake&#39;&#39;s Lift&#39;&#xA;&#xA;    UNION ALL&#xA;&#xA;    SELECT&#xA;        mn.from_node,&#xA;        mn.to_node,&#xA;        sp.full_path || &#39; -&gt; &#39; || mn.to_node,&#xA;        sp.connections + 1&#xA;    FROM ski_paths sp&#xA;    JOIN mountain_network mn ON sp.to_node = mn.from_node&#xA;    WHERE sp.connections &lt; 12&#xA;      AND sp.to_node != &#39;Maverick&#39;&#xA;)&#xA;SELECT * from ski_paths;&#xA;+---------------+----------------+--------------------------------------------------------------+-------------+&#xA;|   from_node   |    to_node     |                          full_path                           | connections |&#xA;+---------------+----------------+--------------------------------------------------------------+-------------+&#xA;| Jake&#39;s Lift   | Wildwood Lift  | Jake&#39;s Lift -&gt; Wildwood Lift                                 | 1           |&#xA;+---------------+----------------+--------------------------------------------------------------+-------------+&#xA;| Jake&#39;s Lift   | Maverick       | Jake&#39;s Lift -&gt; Maverick                                      | 1           |&#xA;+---------------+----------------+--------------------------------------------------------------+-------------+&#xA;| Wildwood Lift | Brightside     | Jake&#39;s Lift -&gt; 
Wildwood Lift -&gt; Brightside                   | 2           |&#xA;+---------------+----------------+--------------------------------------------------------------+-------------+&#xA;| Wildwood Lift | Sidewinder     | Jake&#39;s Lift -&gt; Wildwood Lift -&gt; Sidewinder                   | 2           |&#xA;+---------------+----------------+--------------------------------------------------------------+-------------+&#xA;| Brightside    | Moonrise       | Jake&#39;s Lift -&gt; Wildwood Lift -&gt; Brightside -&gt; Moonrise       | 3           |&#xA;+---------------+----------------+--------------------------------------------------------------+-------------+&#xA;| Moonrise      | Draw           | Jake&#39;s Lift -&gt; Wildwood Lift -&gt; Brightside -&gt; Moonrise -&gt; Dr | 4           |&#xA;|               |                | aw                                                           |             |&#xA;+---------------+----------------+--------------------------------------------------------------+-------------+&#xA;| Moonrise      | Lone Pine      | Jake&#39;s Lift -&gt; Wildwood Lift -&gt; Brightside -&gt; Moonrise -&gt; Lo | 4           |&#xA;|               |                | ne Pine                                                      |             |&#xA;+---------------+----------------+--------------------------------------------------------------+-------------+&#xA;| Draw          | Broadway       | Jake&#39;s Lift -&gt; Wildwood Lift -&gt; Brightside -&gt; Moonrise -&gt; Dr | 5           |&#xA;|               |                | aw -&gt; Broadway                                               |             |&#xA;+---------------+----------------+--------------------------------------------------------------+-------------+&#xA;| Draw          | Maverick       | Jake&#39;s Lift -&gt; Wildwood Lift -&gt; Brightside -&gt; Moonrise -&gt; Dr | 5           |&#xA;|               |                | aw -&gt; Maverick                                               | 
            |&#xA;+---------------+----------------+--------------------------------------------------------------+-------------+&#xA;| Broadway      | Outlaw Trail   | Jake&#39;s Lift -&gt; Wildwood Lift -&gt; Brightside -&gt; Moonrise -&gt; Dr | 6           |&#xA;|               |                | aw -&gt; Broadway -&gt; Outlaw Trail                               |             |&#xA;+---------------+----------------+--------------------------------------------------------------+-------------+&#xA;| Outlaw Trail  | Center Aisle   | Jake&#39;s Lift -&gt; Wildwood Lift -&gt; Brightside -&gt; Moonrise -&gt; Dr | 7           |&#xA;|               |                | aw -&gt; Broadway -&gt; Outlaw Trail -&gt; Center Aisle               |             |&#xA;+---------------+----------------+--------------------------------------------------------------+-------------+&#xA;| Center Aisle  | Bandit         | Jake&#39;s Lift -&gt; Wildwood Lift -&gt; Brightside -&gt; Moonrise -&gt; Dr | 8           |&#xA;|               |                | aw -&gt; Broadway -&gt; Outlaw Trail -&gt; Center Aisle -&gt; Bandit     |             |&#xA;+---------------+----------------+--------------------------------------------------------------+-------------+&#xA;| Center Aisle  | Lower Stampede | Jake&#39;s Lift -&gt; Wildwood Lift -&gt; Brightside -&gt; Moonrise -&gt; Dr | 8           |&#xA;|               |                | aw -&gt; Broadway -&gt; Outlaw Trail -&gt; Center Aisle -&gt; Lower Stam |             |&#xA;|               |                | pede                                                         |             |&#xA;+---------------+----------------+--------------------------------------------------------------+-------------+&#xA;sqlite&gt;&#xA;```&#xA;&#xA;You can see we have got quite a lot of paths; these are all the paths that start from `Jake&#39;s Lift` and take at most `12` connections.&#xA;&#xA;So now we can simply filter with the `WHERE` clause on 
`to_node = &#39;Maverick&#39;` and we would get the result.&#xA;&#xA;&#xA;&#xA;```&#xA;sqlite&gt; WITH RECURSIVE ski_paths AS (&#xA;    SELECT&#xA;        from_node,&#xA;        to_node,&#xA;        CAST(from_node || &#39; -&gt; &#39; || to_node AS TEXT) AS full_path,&#xA;        1 AS connections&#xA;    FROM mountain_network&#xA;    WHERE from_node = &#39;Jake&#39;&#39;s Lift&#39;&#xA;&#xA;    UNION ALL&#xA;&#xA;    SELECT&#xA;        mn.from_node,&#xA;        mn.to_node,&#xA;        sp.full_path || &#39; -&gt; &#39; || mn.to_node,&#xA;        sp.connections + 1&#xA;    FROM ski_paths sp&#xA;    JOIN mountain_network mn ON sp.to_node = mn.from_node&#xA;    WHERE sp.connections &lt; 12&#xA;      AND sp.to_node != &#39;Maverick&#39;&#xA;)&#xA;SELECT full_path, connections&#xA;FROM ski_paths&#xA;WHERE to_node = &#39;Maverick&#39;&#xA;ORDER BY connections ASC;&#xA;+--------------------------------------------------------------+-------------+&#xA;|                          full_path                           | connections |&#xA;+--------------------------------------------------------------+-------------+&#xA;| Jake&#39;s Lift -&gt; Maverick                                      | 1           |&#xA;+--------------------------------------------------------------+-------------+&#xA;| Jake&#39;s Lift -&gt; Wildwood Lift -&gt; Brightside -&gt; Moonrise -&gt; Dr | 5           |&#xA;| aw -&gt; Maverick                                               |             |&#xA;+--------------------------------------------------------------+-------------+&#xA;sqlite&gt;&#xA;```&#xA;&#xA;Ok, that gave the result correctly.&#xA;&#xA;And this is it!&#xA;&#xA;Sweet problem to learn about Recursive CTE!&#xA;&#xA;Off to the final day of advent of SQL 2025!&#xA;</content>
      <type>sqlog</type>
    </item>
    <item>
      <title>Advent of SQL 2025 Day 13: XML Travel Manifests</title>
      <link>https://meetgor.com/sqlog/advent-of-sql-2025-day-13</link>
      <description>Advent of SQL - Day 13, XML Travel Manifests It&#39;s day 13 of Advent of SQL; we have some XML to parse, which I don&#39;t think SQL can handle, but string manipulation</description>
      <pubDate>Sat, 27 Dec 2025 00:00:00 UTC</pubDate>
      <content>&#xA;&#xA;## Advent of SQL - Day 13, XML Travel Manifests&#xA;&#xA;Its day 13 of Advent of SQL, we have some xml to parse, which I don&#39;t think SQL can handle, but string manipulation to the rescue.&#xA;&#xA;Let&#39;s get the SQL for the day:&#xA;&#xA;```sql&#xA;DROP TABLE IF EXISTS travel_manifests;&#xA;&#xA;CREATE TABLE travel_manifests (&#xA;    manifest_id INT PRIMARY KEY,&#xA;    vehicle_id TEXT,&#xA;    departure_time TIMESTAMP,&#xA;    manifest_xml XML&#xA;);&#xA;&#xA;INSERT INTO travel_manifests (manifest_id, vehicle_id, departure_time, manifest_xml) VALUES&#xA;  (1, &#39;SLEIGH-01&#39;, &#39;2025-12-22 06:00:00&#39;, &#39;&lt;manifest&gt;&lt;passengers&gt;&lt;passenger&gt;&lt;name&gt;Nia Grant&lt;/name&gt;&lt;ticket_class&gt;overnight&lt;/ticket_class&gt;&lt;/passenger&gt;&lt;passenger&gt;&lt;name&gt;Sofia Kim&lt;/name&gt;&lt;ticket_class&gt;overnight&lt;/ticket_class&gt;&lt;engine_check&gt;ignored&lt;/engine_check&gt;&lt;/passenger&gt;&lt;passenger&gt;&lt;name&gt;Jonah Wolfe&lt;/name&gt;&lt;ticket_class&gt;standard&lt;/ticket_class&gt;&lt;snack_inventory&gt;ignored&lt;/snack_inventory&gt;&lt;/passenger&gt;&lt;/passengers&gt;&lt;/manifest&gt;&#39;),&#xA;  (2, &#39;SLEIGH-07&#39;, &#39;2025-12-23 13:00:00&#39;, &#39;&lt;manifest&gt;&lt;passengers&gt;&lt;passenger&gt;&lt;name&gt;Ravi Patel&lt;/name&gt;&lt;ticket_class&gt;priority&lt;/ticket_class&gt;&lt;/passenger&gt;&lt;passenger&gt;&lt;name&gt;Keiko Ito&lt;/name&gt;&lt;ticket_class&gt;standard&lt;/ticket_class&gt;&lt;/passenger&gt;&lt;passenger&gt;&lt;name&gt;Anya Pavlov&lt;/name&gt;&lt;ticket_class&gt;standard&lt;/ticket_class&gt;&lt;/passenger&gt;&lt;passenger&gt;&lt;name&gt;Nia Grant&lt;/name&gt;&lt;ticket_class&gt;priority&lt;/ticket_class&gt;&lt;snack_inventory&gt;ignored&lt;/snack_inventory&gt;&lt;/passenger&gt;&lt;passenger&gt;&lt;name&gt;Carter 
Lewis&lt;/name&gt;&lt;ticket_class&gt;priority&lt;/ticket_class&gt;&lt;engine_check&gt;ignored&lt;/engine_check&gt;&lt;/passenger&gt;&lt;passenger&gt;&lt;name&gt;Layla Brooks&lt;/name&gt;&lt;ticket_class&gt;standard&lt;/ticket_class&gt;&lt;reindeer_mood&gt;ignored&lt;/reindeer_mood&gt;&lt;/passenger&gt;&lt;/passengers&gt;&lt;/manifest&gt;&#39;),&#xA;  (3, &#39;FLIGHT-NP-9&#39;, &#39;2025-12-22 18:00:00&#39;, &#39;&lt;manifest&gt;&lt;passengers&gt;&lt;passenger&gt;&lt;name&gt;Keiko Ito&lt;/name&gt;&lt;ticket_class&gt;overnight&lt;/ticket_class&gt;&lt;/passenger&gt;&lt;passenger&gt;&lt;name&gt;Jonah Wolfe&lt;/name&gt;&lt;ticket_class&gt;priority&lt;/ticket_class&gt;&lt;/passenger&gt;&lt;passenger&gt;&lt;name&gt;Diego Ramos&lt;/name&gt;&lt;ticket_class&gt;overnight&lt;/ticket_class&gt;&lt;snack_inventory&gt;ignored&lt;/snack_inventory&gt;&lt;/passenger&gt;&lt;passenger&gt;&lt;name&gt;Priya Das&lt;/name&gt;&lt;ticket_class&gt;priority&lt;/ticket_class&gt;&lt;/passenger&gt;&lt;passenger&gt;&lt;name&gt;Layla Brooks&lt;/name&gt;&lt;ticket_class&gt;standard&lt;/ticket_class&gt;&lt;/passenger&gt;&lt;/passengers&gt;&lt;/manifest&gt;&#39;),&#xA;  (4, &#39;TRAIN-ICE-3&#39;, &#39;2025-12-22 18:00:00&#39;, &#39;&lt;manifest&gt;&lt;reindeer_mood&gt;low&lt;/reindeer_mood&gt;&lt;passengers&gt;&lt;passenger&gt;&lt;name&gt;Isla Torres&lt;/name&gt;&lt;ticket_class&gt;overnight&lt;/ticket_class&gt;&lt;/passenger&gt;&lt;passenger&gt;&lt;name&gt;Ravi Patel&lt;/name&gt;&lt;ticket_class&gt;overnight&lt;/ticket_class&gt;&lt;/passenger&gt;&lt;passenger&gt;&lt;name&gt;Hiro Tanaka&lt;/name&gt;&lt;ticket_class&gt;standard&lt;/ticket_class&gt;&lt;/passenger&gt;&lt;passenger&gt;&lt;name&gt;Priya Das&lt;/name&gt;&lt;ticket_class&gt;priority&lt;/ticket_class&gt;&lt;/passenger&gt;&lt;/passengers&gt;&lt;/manifest&gt;&#39;),&#xA;  (5, &#39;FLIGHT-NP-9&#39;, &#39;2025-12-22 17:00:00&#39;, &#39;&lt;manifest&gt;&lt;passengers&gt;&lt;passenger&gt;&lt;name&gt;Nia 
Grant&lt;/name&gt;&lt;ticket_class&gt;priority&lt;/ticket_class&gt;&lt;/passenger&gt;&lt;passenger&gt;&lt;name&gt;Mateo Cruz&lt;/name&gt;&lt;ticket_class&gt;standard&lt;/ticket_class&gt;&lt;snowfall_inches&gt;ignored&lt;/snowfall_inches&gt;&lt;/passenger&gt;&lt;/passengers&gt;&lt;/manifest&gt;&#39;),&#xA;  (6, &#39;CARGO-12&#39;, &#39;2025-12-22 15:00:00&#39;, &#39;&lt;manifest&gt;&lt;passengers&gt;&lt;passenger&gt;&lt;name&gt;Carter Lewis&lt;/name&gt;&lt;ticket_class&gt;overnight&lt;/ticket_class&gt;&lt;/passenger&gt;&lt;passenger&gt;&lt;name&gt;Sofia Kim&lt;/name&gt;&lt;ticket_class&gt;priority&lt;/ticket_class&gt;&lt;/passenger&gt;&lt;passenger&gt;&lt;name&gt;Hiro Tanaka&lt;/name&gt;&lt;ticket_class&gt;priority&lt;/ticket_class&gt;&lt;/passenger&gt;&lt;passenger&gt;&lt;name&gt;Lucas Ford&lt;/name&gt;&lt;ticket_class&gt;priority&lt;/ticket_class&gt;&lt;snack_inventory&gt;ignored&lt;/snack_inventory&gt;&lt;/passenger&gt;&lt;passenger&gt;&lt;name&gt;Nia Grant&lt;/name&gt;&lt;ticket_class&gt;overnight&lt;/ticket_class&gt;&lt;/passenger&gt;&lt;/passengers&gt;&lt;/manifest&gt;&#39;),&#xA;  (7, &#39;SLEIGH-01&#39;, &#39;2025-12-22 11:00:00&#39;, &#39;&lt;manifest&gt;&lt;snack_inventory&gt;unknown&lt;/snack_inventory&gt;&lt;passengers&gt;&lt;passenger&gt;&lt;name&gt;Priya Das&lt;/name&gt;&lt;ticket_class&gt;standard&lt;/ticket_class&gt;&lt;/passenger&gt;&lt;passenger&gt;&lt;name&gt;Diego Ramos&lt;/name&gt;&lt;ticket_class&gt;standard&lt;/ticket_class&gt;&lt;/passenger&gt;&lt;passenger&gt;&lt;name&gt;Lucas Ford&lt;/name&gt;&lt;ticket_class&gt;overnight&lt;/ticket_class&gt;&lt;/passenger&gt;&lt;passenger&gt;&lt;name&gt;Carter Lewis&lt;/name&gt;&lt;ticket_class&gt;overnight&lt;/ticket_class&gt;&lt;reindeer_mood&gt;ignored&lt;/reindeer_mood&gt;&lt;/passenger&gt;&lt;passenger&gt;&lt;name&gt;Hiro Tanaka&lt;/name&gt;&lt;ticket_class&gt;priority&lt;/ticket_class&gt;&lt;weather_note&gt;ignored&lt;/weather_note&gt;&lt;/passenger&gt;&lt;passenger&gt;&lt;name&gt;Zara 
Sheikh&lt;/name&gt;&lt;ticket_class&gt;overnight&lt;/ticket_class&gt;&lt;/passenger&gt;&lt;/passengers&gt;&lt;/manifest&gt;&#39;),&#xA;  (8, &#39;CARGO-12&#39;, &#39;2025-12-23 13:00:00&#39;, &#39;&lt;manifest&gt;&lt;passengers&gt;&lt;passenger&gt;&lt;name&gt;Jonah Wolfe&lt;/name&gt;&lt;ticket_class&gt;standard&lt;/ticket_class&gt;&lt;/passenger&gt;&lt;passenger&gt;&lt;name&gt;Layla Brooks&lt;/name&gt;&lt;ticket_class&gt;overnight&lt;/ticket_class&gt;&lt;/passenger&gt;&lt;passenger&gt;&lt;name&gt;Leo Becker&lt;/name&gt;&lt;ticket_class&gt;overnight&lt;/ticket_class&gt;&lt;weather_note&gt;ignored&lt;/weather_note&gt;&lt;/passenger&gt;&lt;passenger&gt;&lt;name&gt;Ravi Patel&lt;/name&gt;&lt;ticket_class&gt;standard&lt;/ticket_class&gt;&lt;/passenger&gt;&lt;passenger&gt;&lt;name&gt;Sofia Kim&lt;/name&gt;&lt;ticket_class&gt;priority&lt;/ticket_class&gt;&lt;snack_inventory&gt;ignored&lt;/snack_inventory&gt;&lt;/passenger&gt;&lt;passenger&gt;&lt;name&gt;Elena Morales&lt;/name&gt;&lt;ticket_class&gt;overnight&lt;/ticket_class&gt;&lt;/passenger&gt;&lt;/passengers&gt;&lt;/manifest&gt;&#39;),&#xA;  (9, &#39;SLEIGH-01&#39;, &#39;2025-12-23 10:00:00&#39;, &#39;&lt;manifest&gt;&lt;passengers&gt;&lt;passenger&gt;&lt;name&gt;Sofia Kim&lt;/name&gt;&lt;ticket_class&gt;overnight&lt;/ticket_class&gt;&lt;/passenger&gt;&lt;passenger&gt;&lt;name&gt;Bianca Pereira&lt;/name&gt;&lt;ticket_class&gt;standard&lt;/ticket_class&gt;&lt;/passenger&gt;&lt;passenger&gt;&lt;name&gt;Zara Sheikh&lt;/name&gt;&lt;ticket_class&gt;overnight&lt;/ticket_class&gt;&lt;/passenger&gt;&lt;passenger&gt;&lt;name&gt;Elena Morales&lt;/name&gt;&lt;ticket_class&gt;overnight&lt;/ticket_class&gt;&lt;reindeer_mood&gt;ignored&lt;/reindeer_mood&gt;&lt;/passenger&gt;&lt;passenger&gt;&lt;name&gt;Hiro Tanaka&lt;/name&gt;&lt;ticket_class&gt;priority&lt;/ticket_class&gt;&lt;/passenger&gt;&lt;passenger&gt;&lt;name&gt;Keiko 
Ito&lt;/name&gt;&lt;ticket_class&gt;priority&lt;/ticket_class&gt;&lt;snack_inventory&gt;ignored&lt;/snack_inventory&gt;&lt;/passenger&gt;&lt;/passengers&gt;&lt;/manifest&gt;&#39;),&#xA;  (10, &#39;SLEIGH-01&#39;, &#39;2025-12-22 21:00:00&#39;, &#39;&lt;manifest&gt;&lt;snowfall_inches&gt;low&lt;/snowfall_inches&gt;&lt;passengers&gt;&lt;passenger&gt;&lt;name&gt;Nia Grant&lt;/name&gt;&lt;ticket_class&gt;standard&lt;/ticket_class&gt;&lt;/passenger&gt;&lt;passenger&gt;&lt;name&gt;Ava Johnson&lt;/name&gt;&lt;ticket_class&gt;standard&lt;/ticket_class&gt;&lt;/passenger&gt;&lt;passenger&gt;&lt;name&gt;Priya Das&lt;/name&gt;&lt;ticket_class&gt;priority&lt;/ticket_class&gt;&lt;/passenger&gt;&lt;passenger&gt;&lt;name&gt;Mateo Cruz&lt;/name&gt;&lt;ticket_class&gt;standard&lt;/ticket_class&gt;&lt;/passenger&gt;&lt;passenger&gt;&lt;name&gt;Bianca Pereira&lt;/name&gt;&lt;ticket_class&gt;overnight&lt;/ticket_class&gt;&lt;/passenger&gt;&lt;passenger&gt;&lt;name&gt;Leo Becker&lt;/name&gt;&lt;ticket_class&gt;overnight&lt;/ticket_class&gt;&lt;/passenger&gt;&lt;/passengers&gt;&lt;/manifest&gt;&#39;),&#xA;  (11, &#39;SLEIGH-07&#39;, &#39;2025-12-23 10:00:00&#39;, &#39;&lt;manifest&gt;&lt;passengers&gt;&lt;passenger&gt;&lt;name&gt;Jonah Wolfe&lt;/name&gt;&lt;ticket_class&gt;priority&lt;/ticket_class&gt;&lt;/passenger&gt;&lt;passenger&gt;&lt;name&gt;Bianca Pereira&lt;/name&gt;&lt;ticket_class&gt;priority&lt;/ticket_class&gt;&lt;/passenger&gt;&lt;passenger&gt;&lt;name&gt;Anya Pavlov&lt;/name&gt;&lt;ticket_class&gt;standard&lt;/ticket_class&gt;&lt;/passenger&gt;&lt;/passengers&gt;&lt;/manifest&gt;&#39;),&#xA;  (12, &#39;SLEIGH-01&#39;, &#39;2025-12-22 08:00:00&#39;, &#39;&lt;manifest&gt;&lt;reindeer_mood&gt;ok&lt;/reindeer_mood&gt;&lt;passengers&gt;&lt;passenger&gt;&lt;name&gt;Ravi Patel&lt;/name&gt;&lt;ticket_class&gt;priority&lt;/ticket_class&gt;&lt;snack_inventory&gt;ignored&lt;/snack_inventory&gt;&lt;/passenger&gt;&lt;passenger&gt;&lt;name&gt;Bianca 
Pereira&lt;/name&gt;&lt;ticket_class&gt;priority&lt;/ticket_class&gt;&lt;/passenger&gt;&lt;passenger&gt;&lt;name&gt;Keiko Ito&lt;/name&gt;&lt;ticket_class&gt;standard&lt;/ticket_class&gt;&lt;/passenger&gt;&lt;/passengers&gt;&lt;/manifest&gt;&#39;),&#xA;  (13, &#39;FLIGHT-NP-9&#39;, &#39;2025-12-22 11:00:00&#39;, &#39;&lt;manifest&gt;&lt;snowfall_inches&gt;ok&lt;/snowfall_inches&gt;&lt;passengers&gt;&lt;passenger&gt;&lt;name&gt;Nia Grant&lt;/name&gt;&lt;ticket_class&gt;priority&lt;/ticket_class&gt;&lt;/passenger&gt;&lt;passenger&gt;&lt;name&gt;Elena Morales&lt;/name&gt;&lt;ticket_class&gt;standard&lt;/ticket_class&gt;&lt;/passenger&gt;&lt;/passengers&gt;&lt;/manifest&gt;&#39;),&#xA;  (14, &#39;SLEIGH-01&#39;, &#39;2025-12-22 14:00:00&#39;, &#39;&lt;manifest&gt;&lt;engine_check&gt;high&lt;/engine_check&gt;&lt;passengers&gt;&lt;passenger&gt;&lt;name&gt;Layla Brooks&lt;/name&gt;&lt;ticket_class&gt;standard&lt;/ticket_class&gt;&lt;weather_note&gt;ignored&lt;/weather_note&gt;&lt;/passenger&gt;&lt;passenger&gt;&lt;name&gt;Anya Pavlov&lt;/name&gt;&lt;ticket_class&gt;overnight&lt;/ticket_class&gt;&lt;/passenger&gt;&lt;passenger&gt;&lt;name&gt;Keiko Ito&lt;/name&gt;&lt;ticket_class&gt;standard&lt;/ticket_class&gt;&lt;/passenger&gt;&lt;passenger&gt;&lt;name&gt;Zara Sheikh&lt;/name&gt;&lt;ticket_class&gt;standard&lt;/ticket_class&gt;&lt;engine_check&gt;ignored&lt;/engine_check&gt;&lt;/passenger&gt;&lt;passenger&gt;&lt;name&gt;Sofia Kim&lt;/name&gt;&lt;ticket_class&gt;overnight&lt;/ticket_class&gt;&lt;/passenger&gt;&lt;/passengers&gt;&lt;/manifest&gt;&#39;),&#xA;  (15, &#39;FLIGHT-NP-9&#39;, &#39;2025-12-22 14:00:00&#39;, &#39;&lt;manifest&gt;&lt;snowfall_inches&gt;ok&lt;/snowfall_inches&gt;&lt;passengers&gt;&lt;passenger&gt;&lt;name&gt;Priya Das&lt;/name&gt;&lt;ticket_class&gt;priority&lt;/ticket_class&gt;&lt;snowfall_inches&gt;ignored&lt;/snowfall_inches&gt;&lt;/passenger&gt;&lt;passenger&gt;&lt;name&gt;Bianca 
Pereira&lt;/name&gt;&lt;ticket_class&gt;standard&lt;/ticket_class&gt;&lt;/passenger&gt;&lt;passenger&gt;&lt;name&gt;Nia Grant&lt;/name&gt;&lt;ticket_class&gt;standard&lt;/ticket_class&gt;&lt;/passenger&gt;&lt;/passengers&gt;&lt;/manifest&gt;&#39;);&#xA;```&#xA;&#xA;Just one table, with some wild XML.&#xA;&#xA;Let&#39;s see what we need to do in the problem statement.&#xA;&#xA;## Problem&#xA;&#xA;&gt; Using the `travel_manifests` table, extract the passenger information from the XML data and produce a report that shows all of the departure times for &#34;CARGO&#34; vehicles that have more than 20 passengers booked. Include in the results:&#xA;&gt; &#xA;&gt; - The vehicle_id&#xA;&gt; - The departure_time&#xA;&gt; - The total number of passengers on that departure&#xA;&gt; - Order the results by departure_time.&#xA;&#xA;&#xA;Ok, so we need the number of passengers in the records which are of type `CARGO` and have more than 20 passengers booked.&#xA;&#xA;Interesting!&#xA;&#xA;Let&#39;s look at one record.&#xA;&#xA;```&#xA;sqlite&gt; .schema&#xA;CREATE TABLE travel_manifests (&#xA;    manifest_id INT PRIMARY KEY,&#xA;    vehicle_id TEXT,&#xA;    departure_time TIMESTAMP,&#xA;    manifest_xml XML&#xA;);&#xA;sqlite&gt; SELECT * FROM travel_manifests WHERE id = 1;&#xA;Parse error: no such column: id&#xA;  SELECT * FROM travel_manifests WHERE id = 1;&#xA;                         error here ---^&#xA;sqlite&gt; SELECT * FROM travel_manifests LIMIT 1;&#xA;+-------------+------------+---------------------+--------------------------------------------------------------+&#xA;| manifest_id | vehicle_id |   departure_time    |                         manifest_xml                         |&#xA;+-------------+------------+---------------------+--------------------------------------------------------------+&#xA;| 1           | SLEIGH-01  | 2025-12-22 06:00:00 | &lt;manifest&gt;&lt;passengers&gt;&lt;passenger&gt;&lt;name&gt;Nia Grant&lt;/name&gt;&lt;tick |&#xA;|             |            |                     | et_class&gt;overnight&lt;/ticket_class&gt;&lt;/passenger&gt;&lt;passenger&gt;&lt;nam |&#xA;|             |            |                     | e&gt;Sofia Kim&lt;/name&gt;&lt;ticket_class&gt;overnight&lt;/ticket_class&gt;&lt;eng |&#xA;|             |            |                     | ine_check&gt;ignored&lt;/engine_check&gt;&lt;/passenger&gt;&lt;passenger&gt;&lt;name |&#xA;|             |            |                     | &gt;Jonah Wolfe&lt;/name&gt;&lt;ticket_class&gt;standard&lt;/ticket_class&gt;&lt;sna |&#xA;|             |            |                     | ck_inventory&gt;ignored&lt;/snack_inventory&gt;&lt;/passenger&gt;&lt;/passenge |&#xA;|             |            |                     | rs&gt;&lt;/manifest&gt;                                               |&#xA;+-------------+------------+---------------------+--------------------------------------------------------------+&#xA;sqlite&gt; &#xA;&#xA;```&#xA;&#xA;So, we have the following columns:&#xA;&#xA;- `vehicle_id`, which I think is for filtering the `CARGO` vehicles only&#xA;- `departure_time`, which we just return as is&#xA;- `manifest_xml`, oh! This is XML and it holds the passenger details. 
&#xA;&#xA;If we look carefully, we can see the XML looks like this:&#xA;&#xA;```xml&#xA;&lt;manifest&gt;&#xA;    &lt;passengers&gt;&#xA;        &lt;passenger&gt;&#xA;            &lt;name&gt;Nia Grant&lt;/name&gt;&#xA;            &lt;ticket_class&gt;overnight&lt;/ticket_class&gt;&#xA;        &lt;/passenger&gt;&#xA;        &lt;passenger&gt;&#xA;            &lt;name&gt;Sofia Kim&lt;/name&gt;&#xA;            &lt;ticket_class&gt;overnight&lt;/ticket_class&gt;&#xA;            &lt;engine_check&gt;ignored&lt;/engine_check&gt;&#xA;        &lt;/passenger&gt;&#xA;        &lt;passenger&gt;&#xA;            &lt;name&gt;Jonah Wolfe&lt;/name&gt;&#xA;            &lt;ticket_class&gt;standard&lt;/ticket_class&gt;&#xA;            &lt;snack_inventory&gt;ignored&lt;/snack_inventory&gt;&#xA;        &lt;/passenger&gt;&#xA;    &lt;/passengers&gt;&#xA;&lt;/manifest&gt;&#xA;```&#xA;&#xA;We have a `manifest` element containing `passengers`, which is a list of `passenger` tags; each `passenger` element has details like `name`, `ticket_class`, etc.&#xA;&#xA;We only want the count of the `passenger` elements, so how can we get that? The dirtiest way is to count the occurrences of `&lt;passenger&gt;` or `&lt;/passenger&gt;` in the XML string. &#xA;&#xA;The idea: replace every occurrence of the string `&lt;passenger&gt;` with the empty string `&#39;&#39;`, then take the difference between the total number of characters and the number of characters left. That difference is the number of characters occupied by the `&lt;passenger&gt;` occurrences. (This is the dirty part: a literal `&lt;passenger&gt;` hiding inside some text value would break the logic, but for valid XML like ours it works.) 
Dividing this difference by the length of `&lt;passenger&gt;` gives us the number of times the `&lt;passenger&gt;` string appears in the XML.&#xA;&#xA;Let&#39;s take an example from the above. The length of the XML string is 374.&#xA;&#xA;```sql&#xA;SELECT LENGTH(manifest_xml) FROM travel_manifests LIMIT 1;&#xA;```&#xA;&#xA;```&#xA;sqlite&gt; SELECT LENGTH(manifest_xml) FROM travel_manifests LIMIT 1;&#xA;+----------------------+&#xA;| LENGTH(manifest_xml) |&#xA;+----------------------+&#xA;| 374                  |&#xA;+----------------------+&#xA;sqlite&gt; &#xA;```&#xA;&#xA;Let&#39;s replace the occurrences of `&lt;passenger&gt;` with the empty string in the `manifest_xml` string, like so:&#xA;&#xA;```sql&#xA;SELECT REPLACE(manifest_xml, &#39;&lt;passenger&gt;&#39;, &#39;&#39;) FROM travel_manifests LIMIT 1;&#xA;```&#xA;&#xA;Now we can see the string `&lt;passenger&gt;` is gone from the returned result set. We can try getting its length now.&#xA;&#xA;```sql&#xA;SELECT &#xA;    LENGTH(&#xA;        REPLACE(manifest_xml, &#39;&lt;passenger&gt;&#39;, &#39;&#39;)&#xA;    )&#xA;FROM travel_manifests LIMIT 1;&#xA;```&#xA;&#xA;Now it says `341`. Why? Because we removed (replaced with the empty string) the occurrences of `&lt;passenger&gt;`.&#xA;&#xA;Let&#39;s get the length of the `&#39;&lt;passenger&gt;&#39;` string, which should be `11`, right?&#xA;&#xA;Spell it `p-a-s-s-e-n-g-e-r` as `pass` + `enger` (4+5=9) and 2 for `&lt;&gt;`, so 11. Sometimes I don&#39;t know how to do math, I use SQL.&#xA;&#xA;```sql&#xA;SELECT LENGTH(&#39;&lt;passenger&gt;&#39;);&#xA;```&#xA;&#xA;There it is: `11`.&#xA;&#xA;Now, if we compute the difference between the actual length of the XML and the length with the `&lt;passenger&gt;` parts removed, what do we get?&#xA;&#xA;```sql&#xA;SELECT LENGTH(manifest_xml) - LENGTH(REPLACE(manifest_xml, &#39;&lt;passenger&gt;&#39;, &#39;&#39;)) FROM travel_manifests LIMIT 1;&#xA;```&#xA;&#xA;We got `33`. Why, you ask? Because `11` times 3 is `33`. 
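
The same arithmetic is easy to sanity-check outside the database. Here is a tiny, generic Python sketch of the length-difference trick (a helper made up for illustration, not part of the puzzle):

```python
def count_occurrences(text, needle):
    # characters removed by deleting every copy, divided by the needle length
    return (len(text) - len(text.replace(needle, ''))) // len(needle)

print(count_occurrences('ho ho ho', 'ho'))  # 3
```

Swap in the XML string and the `&lt;passenger&gt;` needle and you get the same count the SQL produces.
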
We found three instances of `&lt;passenger&gt;`, so we just need to divide by the length of `&lt;passenger&gt;` (or hard-code it as `11`, doesn&#39;t matter).&#xA;&#xA;That gives us the number of occurrences of `&lt;passenger&gt;`, which is the number of passengers in the XML string.&#xA;&#xA;```sql&#xA;SELECT&#xA;    (&#xA;        LENGTH(manifest_xml) - LENGTH(REPLACE(manifest_xml, &#39;&lt;passenger&gt;&#39;, &#39;&#39;))&#xA;    ) / LENGTH(&#39;&lt;passenger&gt;&#39;)&#xA;FROM travel_manifests LIMIT 1;&#xA;```&#xA;&#xA;Phew, it&#39;s `3`!&#xA;&#xA;That was a lot of work for something simple. But hey, it&#39;s fun!&#xA;&#xA;```&#xA;sqlite&gt; SELECT (manifest_xml) FROM travel_manifests LIMIT 1;&#xA;+--------------------------------------------------------------+&#xA;|                         manifest_xml                         |&#xA;+--------------------------------------------------------------+&#xA;| &lt;manifest&gt;&lt;passengers&gt;&lt;passenger&gt;&lt;name&gt;Nia Grant&lt;/name&gt;&lt;tick |&#xA;| et_class&gt;overnight&lt;/ticket_class&gt;&lt;/passenger&gt;&lt;passenger&gt;&lt;nam |&#xA;| e&gt;Sofia Kim&lt;/name&gt;&lt;ticket_class&gt;overnight&lt;/ticket_class&gt;&lt;eng |&#xA;| ine_check&gt;ignored&lt;/engine_check&gt;&lt;/passenger&gt;&lt;passenger&gt;&lt;name |&#xA;| &gt;Jonah Wolfe&lt;/name&gt;&lt;ticket_class&gt;standard&lt;/ticket_class&gt;&lt;sna |&#xA;| ck_inventory&gt;ignored&lt;/snack_inventory&gt;&lt;/passenger&gt;&lt;/passenge |&#xA;| rs&gt;&lt;/manifest&gt;                                               |&#xA;+--------------------------------------------------------------+&#xA;&#xA;sqlite&gt; SELECT REPLACE(manifest_xml, &#39;&lt;passenger&gt;&#39;, &#39;&#39;) FROM travel_manifests LIMIT 1;&#xA;+--------------------------------------------------------------+&#xA;|           REPLACE(manifest_xml, &#39;&lt;passenger&gt;&#39;, &#39;&#39;)           |&#xA;+--------------------------------------------------------------+&#xA;| 
&lt;manifest&gt;&lt;passengers&gt;&lt;name&gt;Nia Grant&lt;/name&gt;&lt;ticket_class&gt;ov |&#xA;| ernight&lt;/ticket_class&gt;&lt;/passenger&gt;&lt;name&gt;Sofia Kim&lt;/name&gt;&lt;tic |&#xA;| ket_class&gt;overnight&lt;/ticket_class&gt;&lt;engine_check&gt;ignored&lt;/eng |&#xA;| ine_check&gt;&lt;/passenger&gt;&lt;name&gt;Jonah Wolfe&lt;/name&gt;&lt;ticket_class&gt; |&#xA;| standard&lt;/ticket_class&gt;&lt;snack_inventory&gt;ignored&lt;/snack_inven |&#xA;| tory&gt;&lt;/passenger&gt;&lt;/passengers&gt;&lt;/manifest&gt;                    |&#xA;+--------------------------------------------------------------+&#xA;sqlite&gt; &#xA;sqlite&gt; SELECT LENGTH(REPLACE(manifest_xml, &#39;&lt;passenger&gt;&#39;, &#39;&#39;)) FROM travel_manifests LIMIT 1;&#xA;+--------------------------------------------------+&#xA;| LENGTH(REPLACE(manifest_xml, &#39;&lt;passenger&gt;&#39;, &#39;&#39;)) |&#xA;+--------------------------------------------------+&#xA;| 341                                              |&#xA;+--------------------------------------------------+&#xA;sqlite&gt; SELECT LENGTH(manifest_xml) - LENGTH(REPLACE(manifest_xml, &#39;&lt;passenger&gt;&#39;, &#39;&#39;)) FROM travel_manifests LIMIT 1;&#xA;+--------------------------------------------------------------+&#xA;| LENGTH(manifest_xml) - LENGTH(REPLACE(manifest_xml, &#39;&lt;passen |&#xA;+--------------------------------------------------------------+&#xA;| 33                                                           |&#xA;+--------------------------------------------------------------+&#xA;sqlite&gt; SELECT LENGTH(&#39;&lt;passenger&gt;&#39;) FROM travel_manifests LIMIT 1;&#xA;+-----------------------+&#xA;| LENGTH(&#39;&lt;passenger&gt;&#39;) |&#xA;+-----------------------+&#xA;| 11                    |&#xA;+-----------------------+&#xA;&#xA;sqlite&gt; SELECT (LENGTH(manifest_xml) - LENGTH(REPLACE(manifest_xml, &#39;&lt;passenger&gt;&#39;, &#39;&#39;)))/LENGTH(&#39;&lt;passenger&gt;&#39;) FROM travel_manifests 
LIMIT 1;&#xA;+--------------------------------------------------------------+&#xA;| (LENGTH(manifest_xml) - LENGTH(REPLACE(manifest_xml, &#39;&lt;passe |&#xA;+--------------------------------------------------------------+&#xA;| 3                                                            |&#xA;+--------------------------------------------------------------+&#xA;sqlite&gt; &#xA;&#xA;```&#xA;&#xA;Now, let&#39;s construct the query to get the number of passengers.&#xA;&#xA;```sql&#xA;SELECT&#xA;    vehicle_id,&#xA;    departure_time,&#xA;    (&#xA;        LENGTH(manifest_xml)&#xA;        - LENGTH(REPLACE(manifest_xml, &#39;&lt;passenger&gt;&#39;, &#39;&#39;))&#xA;    ) / LENGTH(&#39;&lt;passenger&gt;&#39;) AS passengers_in_manifest&#xA;FROM travel_manifests&#xA;WHERE vehicle_id LIKE &#39;CARGO-%&#39;;&#xA;```&#xA;&#xA;We need to wrap it in a CTE so that we can group by `vehicle_id`, since there are multiple entries per vehicle.&#xA;&#xA;We also need to group records with the same departure time, so that we can combine the passenger counts for that vehicle and departure.&#xA;&#xA;&#xA;```sql&#xA;WITH passenger_counts AS (&#xA;    SELECT&#xA;        vehicle_id,&#xA;        departure_time,&#xA;        (&#xA;            LENGTH(manifest_xml)&#xA;            - LENGTH(REPLACE(manifest_xml, &#39;&lt;passenger&gt;&#39;, &#39;&#39;))&#xA;        ) / LENGTH(&#39;&lt;passenger&gt;&#39;) AS passengers_in_manifest&#xA;    FROM travel_manifests&#xA;    WHERE vehicle_id LIKE &#39;CARGO-%&#39;&#xA;)&#xA;SELECT&#xA;    vehicle_id,&#xA;    departure_time,&#xA;    SUM(passengers_in_manifest) AS total_passengers&#xA;FROM passenger_counts&#xA;GROUP BY vehicle_id, departure_time&#xA;HAVING SUM(passengers_in_manifest) &gt; 20&#xA;ORDER BY departure_time;&#xA;```&#xA;&#xA;We count the passengers per manifest in the CTE and filter in the outer query with `SUM(passengers_in_manifest) &gt; 20`, which gives us exactly the condition we need. 
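
As a sanity check of the final query shape, here is a quick sketch with Python's built-in `sqlite3` module. The rows are synthetic stand-ins (using `(p)` as a made-up passenger marker instead of the real `&lt;passenger&gt;` tags), just to watch the CTE, `GROUP BY`, and `HAVING` behave as described:

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute("CREATE TABLE travel_manifests (vehicle_id TEXT, departure_time TEXT, manifest_xml TEXT)")
rows = [
    ('CARGO-12', '2025-12-22 15:00:00', '(p)' * 12),
    ('CARGO-12', '2025-12-22 15:00:00', '(p)' * 9),   # same departure: sums to 21
    ('CARGO-99', '2025-12-23 09:00:00', '(p)' * 5),   # under the threshold
    ('SLEIGH-01', '2025-12-22 06:00:00', '(p)' * 30), # filtered out by LIKE
]
con.executemany("INSERT INTO travel_manifests VALUES (?, ?, ?)", rows)

query = """
WITH passenger_counts AS (
    SELECT vehicle_id, departure_time,
           (LENGTH(manifest_xml) - LENGTH(REPLACE(manifest_xml, '(p)', '')))
               / LENGTH('(p)') AS passengers_in_manifest
    FROM travel_manifests
    WHERE vehicle_id LIKE 'CARGO-%'
)
SELECT vehicle_id, departure_time, SUM(passengers_in_manifest) AS total_passengers
FROM passenger_counts
GROUP BY vehicle_id, departure_time
HAVING SUM(passengers_in_manifest) > 20
ORDER BY departure_time
"""
print(con.execute(query).fetchall())  # [('CARGO-12', '2025-12-22 15:00:00', 21)]
```

Only the departure whose summed count clears `20` survives, exactly as with the real manifests.
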
We have to use `HAVING` because the filter applies after grouping by `vehicle_id` and `departure_time`.&#xA;&#xA;We also put `vehicle_id LIKE &#39;CARGO-%&#39;` inside the CTE so the filtering happens in the inner query, avoiding the passenger-count computation for rows we don&#39;t need. &#xA;&#xA;&#xA;That solves this problem.&#xA;&#xA;&#xA;That&#39;s it from day 13 of Advent of SQL.&#xA;&#xA;There are other ways, but it&#39;s the same parsing. We could use JOINs and such, but hey, that was not the point of this.&#xA;&#xA;Anyways! See you tomorrow for day 14!&#xA;&#xA;</content>
      <type>sqlog</type>
    </item>
    <item>
      <title>Advent of SQL 2025 Day 12: Archive Flight Records</title>
      <link>https://meetgor.com/sqlog/advent-of-sql-2025-day-12</link>
      <description>Advent of SQL - Day 12, Archive Flight Records We are on Day 12! Phew its almost done! Just 3 days more! Let&#39;s get the SQL! We have just one table and a couple</description>
      <pubDate>Sat, 27 Dec 2025 00:00:00 UTC</pubDate>
      <content>&#xA;&#xA;## Advent of SQL - Day 12, Archive Flight Records&#xA;&#xA;We are on Day 12! Phew its almost done! Just 3 days more!&#xA;&#xA;Let&#39;s get the SQL!&#xA;&#xA;```sql&#xA;DROP TABLE IF EXISTS archive_records;&#xA;&#xA;CREATE TABLE archive_records (&#xA;    id INT PRIMARY KEY,&#xA;    title TEXT,&#xA;    description TEXT&#xA;);&#xA;&#xA;INSERT INTO archive_records (id, title, description) VALUES&#xA;(1, &#39;Flight Stabilization Prototype Analysis&#39;, &#39;This report details the latest advancements in stabilizing aerial maneuvers for enchanted sleighs. Initial tests yielded promising results, showcasing a marked decrease in turbulence during airborne navigation.&#39;),&#xA;(2, &#39;Lift Calibration Incident Log&#39;, &#39;During a routine lift calibration test, the sleigh experienced an unexpected upward surge, causing it to hover dangerously close to the workshop ceiling. Subsequent analysis revealed a miscalculation in the weight distribution formula, prompting a thorough review of all aerodynamic coefficients.&#39;),&#xA;(3, &#39;Aerial Aspirations: The Great Flop&#39;, &#39;Despite the initial excitement surrounding the design of our feather-laden airborne contraption, the prototype proved less than buoyant. The unexpected descent resulted in a rather spectacular cloud of glitter and twigs, serving as a vivid reminder that not all dreams of flight take wing as intended.&#39;),&#xA;(4, &#39;Optimized Sled Dynamics&#39;, &#39;This document explores advanced mechanics behind sled propulsion and movement efficiency. Through extensive calculations and enchanted material tests, the elves aim to refine turning capabilities and enhance downhill speed.&#39;),&#xA;(5, &#39;Reindeer Harness Design Flaws&#39;, &#39;The latest prototype of our reindeer harness was found to inhibit mobility, causing undue strain on the animals during testing. 
Observations indicated that the weight distribution was poorly calibrated, necessitating a complete redesign for optimal comfort and performance.&#39;),&#xA;(6, &#39;Streamlined Cargo Routing System&#39;, &#39;In our latest endeavor, we have implemented a magical algorithm to optimize the route taken by our toy-laden sleighs. This groundbreaking update minimizes transport time between the workshop and delivery points, ensuring that holiday cheer reaches every home even faster!&#39;),&#xA;(7, &#39;Toy Durability Testing Protocols&#39;, &#39;The elves meticulously conducted stress evaluations on the latest toy prototypes to determine their resilience under various conditions. Initial findings indicate that while some designs withstood rigorous play, others required reinforcements to avoid premature wear and tear.&#39;),&#xA;(8, &#39;Weather Resistance Breakthrough: Frost Shielding&#39;, &#39;In our recent experiments, we discovered an innovative composite material that effectively repels moisture while withstanding extreme cold. This newfound frost shielding could revolutionize our outdoor toys, ensuring they remain both functional and enchanting, even in the harshest winter conditions.&#39;),&#xA;(9, &#39;Safety Compliance Check Overview&#39;, &#39;In the pursuit of enchantment and joy, this document outlines the mandatory safety compliance measures for all workshop operations. Each elf must adhere strictly to these guidelines to ensure the safe transport and handling of our delicate prototypes, thus preventing any unforeseen magical mishaps.&#39;),&#xA;(10, &#39;Sleigh Skim Mechanism Upgrade&#39;, &#39;The experiment involved enchanting the underside of the sleigh with a whispering wind charm to achieve unprecedented speeds. 
Results were alarming, leading to uncontrollable flight trajectories and sudden descents—definitely do not attempt again.&#39;),&#xA;(11, &#39;Caution: Enchanted Toy Prototype&#39;, &#39;This design incorporates a reactive magic component that may unpredictably animate in the presence of mischief. Ensure all test environments are secured against spontaneous giggles and potential chaos.&#39;),&#xA;(12, &#39;Workshop Experiment Safety Checklist&#39;, &#39;Before embarking on any workshop experiments, ensure all safety goggles are securely fastened to prevent debris from interfered visions. Always double-check that the workspace is free of clutter, as unexpected accidents can arise from even the smallest flurry of trinkets and tools.&#39;),&#xA;(13, &#39;Cocoa Bean Roasting Innovations&#39;, &#39;This review explores the latest techniques in roasting cocoa beans to achieve unparalleled flavor profiles. Adjustments to temperature and timing have led to a delightful spectrum of aromas, promising to elevate our confectionery creations to new heights.&#39;),&#xA;(14, &#39;Intricate Snowflake Ornament Design&#39;, &#39;This design blueprint outlines the geometric intricacies of a multi-faceted snowflake ornament, emphasizing a balance between elegance and structural integrity. Each arm is meticulously patterned to reflect light, creating a shimmering effect that dances with the seasons, while ensuring optimal symmetry for enchanting visual appeal.&#39;),&#xA;(15, &#39;Gift Box Assembly Prototype&#39;, &#39;This prototype outlines the intricate process of assembling the enchanted gift boxes designed to withstand the whims of time and space. 
Each step must be meticulously executed to ensure that every box not only sparkles with joy but also maintains its magical properties through every unwrapping.&#39;),&#xA;(16, &#39;Wrap-It-Up: Innovative Designs&#39;, &#39;This experimental report explores various materials and techniques for creating enchanted wrapping paper that enhances the gift-giving experience. Initial findings suggest that incorporation of shimmering elven dust can amplify the aesthetic appeal while maintaining structural integrity during airborne delivery.&#39;),&#xA;(17, &#39;Magical Confection Fusion Results&#39;, &#39;The experimental concoction blended sugar crystals with essence of starlight, resulting in a luminescent treat that sparkles enchantingly. However, a curious side effect was noted: excessive giggling among taste testers, raising questions about potential airborne laughter.&#39;),&#xA;(18, &#39;Luminous Ornament Crafting Techniques&#39;, &#39;In our continuous pursuit of radiance, this document outlines innovative methods for creating ornaments that glow with enchantment. Engaging both traditional techniques and modern enchantments, each design is meant to instill joy and sparkle during the festive season.&#39;),&#xA;(19, &#39;Magical Energy Conduction Analysis&#39;, &#39;Recent experiments have shown that the flow of magical energy through crystalline conduits behaves unpredictably under varying lunar phases. Further investigation into the correlation between ambient mana levels and energy stability is necessary to optimize enchantment potency.&#39;),&#xA;(20, &#39;Elven Workshop Organization Protocols&#39;, &#39;The implementation of open shelving systems has significantly increased accessibility to essential materials, thus enhancing workflow efficiency. 
Furthermore, the organization of tools into color-coded bins ensures that each elf can swiftly locate their required implements without disrupting the harmony of the workshop.&#39;);&#xA;&#xA;```&#xA; &#xA;We have just one table and a couple of text-like columns. That&#39;s it; looks like a string-searching problem.&#xA;&#xA;Let&#39;s head to the problem statement!&#xA;&#xA;&#xA;## Problem&#xA;&#xA;&gt; Using the `archive_records` table, search both the `title` and `description` fields for the term &#34;fly&#34;. Make sure that you also match for words like &#34;flying&#34;, &#34;flight&#34;, etc. Boost the results where the term appears in the title and lastly, rank the results by relevance (most relevant first). Provide the elves the top 5 most relevant archived records back.&#xA;&#xA;It is indeed a text-search problem!&#xA;&#xA;We need to find and rank records matching words like `fly`, `flying`, and `flight`. Kind of tricky, since any variant we don&#39;t hard-code will be missed.&#xA;&#xA;We need to boost the search term in `title`, so a match carries more weight if the term appears in the title than in the description. Makes sense!&#xA;&#xA;Let&#39;s start simple and move into full-text search in SQLite!&#xA;&#xA;&#xA;### Simple String Matching&#xA;&#xA;We start with a simple nested `CASE WHEN THEN` condition. 
We check if the `title` contains `fly`, `flying`, `flight`, etc.; if so, we set the score to `2`, and we add `1` more if the same terms appear in the `description`.&#xA;&#xA;So,&#xA;- If the search term (fly, flight, etc) appears **only** in title, score is `2`&#xA;- If the search term appears **only** in description, score is `1`&#xA;- If the search term appears in **both** title and description, the score is `3` since we are adding the scores.&#xA;- If the search term doesn&#39;t appear at all, then the score remains `0`.&#xA;&#xA;We simply assign the score based on the appearance of the search term, order the results by the computed `rank`, and list the top 5.&#xA;&#xA;```sql&#xA; SELECT &#xA;    id,&#xA;    title,&#xA;    description,&#xA;    (&#xA;        CASE WHEN LOWER(title) LIKE &#39;%fly%&#39; OR LOWER(title) LIKE &#39;%flight%&#39; OR LOWER(title) LIKE &#39;%flying%&#39; THEN 2 ELSE 0 END +&#xA;        CASE WHEN LOWER(description) LIKE &#39;%fly%&#39; OR LOWER(description) LIKE &#39;%flight%&#39; OR LOWER(description) LIKE &#39;%flying%&#39; THEN 1 ELSE 0 END&#xA;    ) AS rank&#xA;FROM archive_records&#xA;ORDER BY rank DESC, id ASC&#xA;LIMIT 5;&#xA;```&#xA;&#xA;However, we can limit the search space with a `WHERE` clause, so we only compute the rank for relevant records instead of ranking every record before producing the result set.&#xA;&#xA;Hence, we keep only the cases where the `title` or `description` contains a relevant word, order by the `rank` score that we computed, and list the top `5`.&#xA;&#xA;```sql&#xA;SELECT &#xA;    id,&#xA;    title,&#xA;    description,&#xA;    (&#xA;        -- Title matches worth 2 points (boosted relevance)&#xA;        CASE WHEN LOWER(title) LIKE &#39;%fly%&#39; OR LOWER(title) LIKE &#39;%flight%&#39; OR LOWER(title) LIKE &#39;%flying%&#39; THEN 2 ELSE 0 END +&#xA;        -- Description matches worth 1 point&#xA;        CASE WHEN 
LOWER(description) LIKE &#39;%fly%&#39; OR LOWER(description) LIKE &#39;%flight%&#39; OR LOWER(description) LIKE &#39;%flying%&#39; THEN 1 ELSE 0 END&#xA;    ) AS rank&#xA;FROM archive_records&#xA;WHERE &#xA;    LOWER(title) LIKE &#39;%fly%&#39; OR &#xA;    LOWER(title) LIKE &#39;%flight%&#39; OR &#xA;    LOWER(title) LIKE &#39;%flying%&#39; OR&#xA;    LOWER(description) LIKE &#39;%fly%&#39; OR &#xA;    LOWER(description) LIKE &#39;%flight%&#39; OR &#xA;    LOWER(description) LIKE &#39;%flying%&#39;&#xA;ORDER BY rank DESC, id ASC&#xA;LIMIT 5;&#xA;```&#xA;&#xA;We can wrap it in a CTE, but that doesn&#39;t do anything differently.&#xA;&#xA;With the CTE below, we assign a flag of `1` or `0` based on the `title` or `description` match. Then, in the outer query, we weight a `title` match as `2` and a `description` match as `1` and add them up to get the same `rank` for that record.&#xA;&#xA;```sql&#xA;WITH search_results AS (&#xA;    SELECT &#xA;        id,&#xA;        title,&#xA;        description,&#xA;        CASE WHEN LOWER(title) LIKE &#39;%fly%&#39; OR LOWER(title) LIKE &#39;%flight%&#39; OR LOWER(title) LIKE &#39;%flying%&#39; &#xA;            THEN 1 ELSE 0 END AS title_match,&#xA;        CASE WHEN LOWER(description) LIKE &#39;%fly%&#39; OR LOWER(description) LIKE &#39;%flight%&#39; OR LOWER(description) LIKE &#39;%flying%&#39; &#xA;            THEN 1 ELSE 0 END AS desc_match&#xA;    FROM archive_records&#xA;    WHERE &#xA;        LOWER(title) LIKE &#39;%fly%&#39; OR LOWER(title) LIKE &#39;%flight%&#39; OR LOWER(title) LIKE &#39;%flying%&#39; OR&#xA;        LOWER(description) LIKE &#39;%fly%&#39; OR LOWER(description) LIKE &#39;%flight%&#39; OR LOWER(description) LIKE &#39;%flying%&#39;&#xA;)&#xA;SELECT &#xA;    id,&#xA;    title,&#xA;    description,&#xA;    (title_match * 2 + desc_match * 1) AS rank&#xA;FROM search_results&#xA;ORDER BY rank DESC, id ASC&#xA;LIMIT 5;&#xA;```&#xA;&#xA;### Assign score based on matching 
frequency&#xA;&#xA;We can also make it better by counting how many times the term appears in the relevant column and computing a cumulative rank. This way the ranking favors the most frequently matching records.&#xA;&#xA;So, we write something like this.&#xA;&#xA;- Compute the length of the full column, for example `title`, with `LENGTH(LOWER(title))`, then subtract the length of the string left after removing `fly` (or `flight`, or the others). This is the difference in length between the original title and the one with &#39;fly&#39; removed. Each &#39;fly&#39; is 3 characters, so the difference divided by 3 gives the number of occurrences of &#39;fly&#39;. So `(LENGTH(LOWER(title)) - LENGTH(REPLACE(LOWER(title), &#39;fly&#39;, &#39;&#39;))) / 3` counts how many times &#39;fly&#39; appears. Similarly, for `flight` and `flying` the divisor is `6`, and so on.&#xA;- If that count comes from the title, it is multiplied by `2`&#xA;- If it comes from the description, it is kept as is (multiplied by `1`), but you can weight it accordingly, as there could be multiple columns to weigh.&#xA;&#xA;```sql&#xA;SELECT &#xA;    id,&#xA;    title,&#xA;    description,&#xA;    (&#xA;&#xA;        ((LENGTH(LOWER(title)) - LENGTH(REPLACE(LOWER(title), &#39;fly&#39;, &#39;&#39;))) / 3 +&#xA;         (LENGTH(LOWER(title)) - LENGTH(REPLACE(LOWER(title), &#39;flight&#39;, &#39;&#39;))) / 6 +&#xA;         (LENGTH(LOWER(title)) - LENGTH(REPLACE(LOWER(title), &#39;flying&#39;, &#39;&#39;))) / 6&#xA;        ) * 2&#xA;        +&#xA;&#xA;        ((LENGTH(LOWER(description)) - LENGTH(REPLACE(LOWER(description), &#39;fly&#39;, &#39;&#39;))) / 3 +&#xA;         (LENGTH(LOWER(description)) - LENGTH(REPLACE(LOWER(description), &#39;flight&#39;, &#39;&#39;))) / 6 +&#xA;         (LENGTH(LOWER(description)) - LENGTH(REPLACE(LOWER(description), &#39;flying&#39;, &#39;&#39;))) / 6&#xA;        )&#xA;    ) AS rank&#xA;FROM archive_records&#xA;ORDER BY rank DESC, id ASC&#xA;LIMIT 5;&#xA;```&#xA;&#xA;You can always add a `WHERE` 
clause to reduce the search space upfront:&#xA;&#xA;```sql&#xA;-- Simplified frequency-based ranking&#xA;SELECT &#xA;    id,&#xA;    title,&#xA;    description,&#xA;    (&#xA;        ((LENGTH(LOWER(title)) - LENGTH(REPLACE(LOWER(title), &#39;fly&#39;, &#39;&#39;))) / 3 +&#xA;         (LENGTH(LOWER(title)) - LENGTH(REPLACE(LOWER(title), &#39;flight&#39;, &#39;&#39;))) / 6 +&#xA;         (LENGTH(LOWER(title)) - LENGTH(REPLACE(LOWER(title), &#39;flying&#39;, &#39;&#39;))) / 6&#xA;        ) * 2&#xA;        +&#xA;        ((LENGTH(LOWER(description)) - LENGTH(REPLACE(LOWER(description), &#39;fly&#39;, &#39;&#39;))) / 3 +&#xA;         (LENGTH(LOWER(description)) - LENGTH(REPLACE(LOWER(description), &#39;flight&#39;, &#39;&#39;))) / 6 +&#xA;         (LENGTH(LOWER(description)) - LENGTH(REPLACE(LOWER(description), &#39;flying&#39;, &#39;&#39;))) / 6&#xA;        )&#xA;    ) AS rank&#xA;FROM archive_records&#xA;WHERE &#xA;    LOWER(title) LIKE &#39;%fly%&#39; OR LOWER(title) LIKE &#39;%flight%&#39; OR LOWER(title) LIKE &#39;%flying%&#39; OR&#xA;    LOWER(description) LIKE &#39;%fly%&#39; OR LOWER(description) LIKE &#39;%flight%&#39; OR LOWER(description) LIKE &#39;%flying%&#39;&#xA;ORDER BY rank DESC, id ASC&#xA;LIMIT 5;&#xA;```&#xA;&#xA;This gives similar but slightly different results, since we now rank by how many times each term appears, not just whether it appears. (Note that `fly` is also a substring of `flying`, so each `flying` increments both counters; keep that overlap in mind when tuning weights.)&#xA;&#xA;We can even put the list of words and their weights in a CTE and use it dynamically in the actual query:&#xA;&#xA;```sql&#xA;WITH keywords AS (&#xA;    SELECT &#39;fly&#39; AS term, 2.0 AS title_weight, 1.0 AS desc_weight&#xA;    UNION ALL&#xA;    SELECT &#39;flight&#39;, 2.0, 1.0&#xA;    UNION ALL&#xA;    SELECT &#39;flying&#39;, 2.0, 1.0&#xA;),&#xA;ranked AS (&#xA;    SELECT &#xA;        a.id,&#xA;        a.title,&#xA;        a.description,&#xA;        SUM(&#xA;            ((LENGTH(LOWER(a.title)) - LENGTH(REPLACE(LOWER(a.title), k.term, &#39;&#39;))) / LENGTH(k.term)) * 
k.title_weight +&#xA;            ((LENGTH(LOWER(a.description)) - LENGTH(REPLACE(LOWER(a.description), k.term, &#39;&#39;))) / LENGTH(k.term)) * k.desc_weight&#xA;        ) AS rank&#xA;    FROM archive_records a&#xA;    CROSS JOIN keywords k&#xA;    GROUP BY a.id&#xA;)&#xA;SELECT id, title, description, rank&#xA;FROM ranked&#xA;ORDER BY rank DESC, id ASC&#xA;LIMIT 5;&#xA;```&#xA;&#xA;Here we have 2 CTEs now:&#xA;- `keywords` defining the words to search for and their weights&#xA;- `ranked` computing the frequency-weighted sum over the relevant columns.&#xA;&#xA;As you can see, by just adding a keyword to the CTE along with its weight, the rest of the query works without changing anything.&#xA;&#xA;&#xA;### Full Text Search&#xA;&#xA;We can now also look at [FTS](https://sqlite.org/fts5.html), or Full Text Search, in SQLite.&#xA;&#xA;In SQLite, we can create a `VIRTUAL TABLE`, which is&#xA;- A table computed on the fly&#xA;- Not stored as a physical entity in the database&#xA;&#xA;We have [fts5](https://sqlite.org/fts5.html), which gives us `MATCH` queries, boolean operators, and other expressions for searching text in columns.&#xA;&#xA;```sql&#xA;CREATE VIRTUAL TABLE IF NOT EXISTS archive_fts USING fts5(&#xA;    title, &#xA;    description,&#xA;    content=archive_records&#xA;);&#xA;```&#xA;&#xA;We need to provide the columns that we want to search against. In this case we want to search the `title` and `description` columns. 
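
Note that `fts5` is a compile-time option and not every SQLite build ships with it. A quick probe from Python&#39;s bundled `sqlite3` (a throwaway table name, assumed here for illustration) tells you whether the module is available:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
try:
    # Creating a throwaway FTS5 table fails fast if the module is missing.
    conn.execute("CREATE VIRTUAL TABLE probe USING fts5(body)")
    has_fts5 = True
except sqlite3.OperationalError:
    has_fts5 = False
print(has_fts5)
```
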
The `content=archive_records` option links the FTS table to the real table `archive_records`: instead of storing its own copy of all the text, the FTS table indexes the data from `archive_records`.&#xA;&#xA;Then we insert all the records from `archive_records` to make the index aware of the rows existing in the actual table.&#xA;&#xA;```sql&#xA;INSERT OR IGNORE INTO archive_fts(rowid, title, description)&#xA;SELECT id, title, description FROM archive_records;&#xA;```&#xA;&#xA;Then we can query it like so:&#xA;&#xA;```sql&#xA;SELECT * FROM archive_fts WHERE archive_fts MATCH &#39;fly&#39;;&#xA;```&#xA;&#xA;We have a couple of options to search with, like&#xA;1. MATCH&#xA;2. Boolean operators&#xA;3. Expressions and wildcards&#xA;&#xA;We can simply filter the records with this:&#xA;&#xA;```sql&#xA;SELECT *&#xA;FROM archive_fts&#xA;JOIN archive_records ON archive_fts.rowid = archive_records.id&#xA;WHERE archive_fts MATCH &#39;fly OR flight OR flying&#39;;&#xA;```&#xA;&#xA;We have added `MATCH &#39;fly OR flight OR flying&#39;` to limit the search space to those keywords.&#xA;&#xA;Now we can bring back the title boost on top of the FTS filter:&#xA;&#xA;```sql&#xA;SELECT &#xA;    archive_records.id,&#xA;    archive_records.title,&#xA;    archive_records.description,&#xA;    (&#xA;        CASE WHEN archive_records.title LIKE &#39;%fly%&#39; OR archive_records.title LIKE &#39;%flight%&#39; OR archive_records.title LIKE &#39;%flying%&#39; THEN 2 ELSE 0 END +&#xA;        CASE WHEN archive_records.description LIKE &#39;%fly%&#39; OR archive_records.description LIKE &#39;%flight%&#39; OR archive_records.description LIKE &#39;%flying%&#39; THEN 1 ELSE 0 END&#xA;    ) AS rank&#xA;FROM archive_fts&#xA;JOIN archive_records ON archive_fts.rowid = archive_records.id&#xA;WHERE archive_fts MATCH &#39;fly OR flight OR flying&#39;&#xA;ORDER BY rank DESC, archive_records.id ASC&#xA;LIMIT 5;&#xA;```&#xA;&#xA;This will give us the relevant results with token-based matching, which is way better than `LIKE` with `%` wildcard operators on the 
columns.&#xA;&#xA;So, those are the approaches I liked taking for day 12.&#xA;&#xA;It was fun working with full-text search for the first time!&#xA;&#xA;On to day 13!&#xA;&#xA;&#xA;</content>
      <type>sqlog</type>
    </item>
    <item>
      <title>Advent of SQL 2025 Day 11: Behavior Score</title>
      <link>https://meetgor.com/sqlog/advent-of-sql-2025-day-11</link>
      <description>Advent of SQL - Day 11, Behavior Score All right, this is day 11 from Advent of SQL. Let&#39;s pull in the data. No hiccups! Good to go. We just have one table toda</description>
      <pubDate>Fri, 26 Dec 2025 00:00:00 UTC</pubDate>
      <content>&#xA;&#xA;## Advent of SQL - Day 11, Behavior Score&#xA;&#xA;All right, this is day 11 from Advent of SQL.&#xA;&#xA;Let&#39;s pull in the data.&#xA;&#xA;```sql&#xA;DROP TABLE IF EXISTS behavior_logs;&#xA;&#xA;CREATE TABLE behavior_logs (&#xA;    id INT PRIMARY KEY,&#xA;    child_id INT,&#xA;    child_name TEXT,&#xA;    behavior_date DATE,&#xA;    score INT&#xA;);&#xA;&#xA;INSERT INTO behavior_logs (id, child_id, child_name, behavior_date, score) VALUES&#xA;    (1, 1, &#39;Emma D.&#39;, &#39;2025-12-01&#39;, 5),&#xA;    (2, 1, &#39;Emma D.&#39;, &#39;2025-12-02&#39;, 1),&#xA;    (3, 1, &#39;Emma D.&#39;, &#39;2025-12-03&#39;, 3),&#xA;    (4, 1, &#39;Emma D.&#39;, &#39;2025-12-04&#39;, 5),&#xA;    (5, 1, &#39;Emma D.&#39;, &#39;2025-12-05&#39;, 2),&#xA;    (6, 1, &#39;Emma D.&#39;, &#39;2025-12-06&#39;, 2),&#xA;    (7, 1, &#39;Emma D.&#39;, &#39;2025-12-07&#39;, 3),&#xA;    (8, 1, &#39;Emma D.&#39;, &#39;2025-12-08&#39;, 5),&#xA;    (9, 1, &#39;Emma D.&#39;, &#39;2025-12-09&#39;, 4),&#xA;    (10, 1, &#39;Emma D.&#39;, &#39;2025-12-10&#39;, 5),&#xA;    (11, 1, &#39;Emma D.&#39;, &#39;2025-12-11&#39;, 5),&#xA;    (12, 1, &#39;Emma D.&#39;, &#39;2025-12-12&#39;, -1),&#xA;    (13, 1, &#39;Emma D.&#39;, &#39;2025-12-13&#39;, 1),&#xA;    (14, 1, &#39;Emma D.&#39;, &#39;2025-12-14&#39;, 1),&#xA;    (15, 1, &#39;Emma D.&#39;, &#39;2025-12-15&#39;, -1),&#xA;    (16, 1, &#39;Emma D.&#39;, &#39;2025-12-16&#39;, 3),&#xA;    (17, 1, &#39;Emma D.&#39;, &#39;2025-12-17&#39;, -2),&#xA;    (18, 1, &#39;Emma D.&#39;, &#39;2025-12-18&#39;, 1),&#xA;    (19, 1, &#39;Emma D.&#39;, &#39;2025-12-19&#39;, 1),&#xA;    (20, 1, &#39;Emma D.&#39;, &#39;2025-12-20&#39;, -2),&#xA;    (21, 2, &#39;Ava X.&#39;, &#39;2025-12-01&#39;, 0),&#xA;    (22, 2, &#39;Ava X.&#39;, &#39;2025-12-02&#39;, -1),&#xA;    (23, 2, &#39;Ava X.&#39;, &#39;2025-12-03&#39;, 4),&#xA;    (24, 2, &#39;Ava X.&#39;, &#39;2025-12-04&#39;, 0),&#xA;    (25, 2, &#39;Ava X.&#39;, &#39;2025-12-05&#39;, 2),&#xA;    
(26, 2, &#39;Ava X.&#39;, &#39;2025-12-06&#39;, 3),&#xA;    (27, 2, &#39;Ava X.&#39;, &#39;2025-12-07&#39;, 5),&#xA;    (28, 2, &#39;Ava X.&#39;, &#39;2025-12-08&#39;, 2),&#xA;    (29, 2, &#39;Ava X.&#39;, &#39;2025-12-09&#39;, 1),&#xA;    (30, 2, &#39;Ava X.&#39;, &#39;2025-12-10&#39;, 5),&#xA;    (31, 2, &#39;Ava X.&#39;, &#39;2025-12-11&#39;, 2),&#xA;    (32, 2, &#39;Ava X.&#39;, &#39;2025-12-12&#39;, 5),&#xA;    (33, 2, &#39;Ava X.&#39;, &#39;2025-12-13&#39;, 5),&#xA;    (34, 2, &#39;Ava X.&#39;, &#39;2025-12-14&#39;, 2),&#xA;    (35, 2, &#39;Ava X.&#39;, &#39;2025-12-15&#39;, 0),&#xA;    (36, 2, &#39;Ava X.&#39;, &#39;2025-12-16&#39;, 0),&#xA;    (37, 2, &#39;Ava X.&#39;, &#39;2025-12-17&#39;, 5),&#xA;    (38, 2, &#39;Ava X.&#39;, &#39;2025-12-18&#39;, 4),&#xA;    (39, 2, &#39;Ava X.&#39;, &#39;2025-12-19&#39;, 5),&#xA;    (40, 2, &#39;Ava X.&#39;, &#39;2025-12-20&#39;, 5),&#xA;    (181, 10, &#39;Ava C.&#39;, &#39;2025-12-01&#39;, 3),&#xA;    (182, 10, &#39;Ava C.&#39;, &#39;2025-12-02&#39;, 0),&#xA;    (183, 10, &#39;Ava C.&#39;, &#39;2025-12-03&#39;, 3),&#xA;    (184, 10, &#39;Ava C.&#39;, &#39;2025-12-04&#39;, 5),&#xA;    (185, 10, &#39;Ava C.&#39;, &#39;2025-12-05&#39;, 5),&#xA;    (186, 10, &#39;Ava C.&#39;, &#39;2025-12-06&#39;, 4),&#xA;    (187, 10, &#39;Ava C.&#39;, &#39;2025-12-07&#39;, 1),&#xA;    (188, 10, &#39;Ava C.&#39;, &#39;2025-12-08&#39;, 4),&#xA;    (189, 10, &#39;Ava C.&#39;, &#39;2025-12-09&#39;, 5),&#xA;    (190, 10, &#39;Ava C.&#39;, &#39;2025-12-10&#39;, 5),&#xA;    (191, 10, &#39;Ava C.&#39;, &#39;2025-12-11&#39;, 5),&#xA;    (192, 10, &#39;Ava C.&#39;, &#39;2025-12-12&#39;, 0),&#xA;    (193, 10, &#39;Ava C.&#39;, &#39;2025-12-13&#39;, 0),&#xA;    (194, 10, &#39;Ava C.&#39;, &#39;2025-12-14&#39;, 3),&#xA;    (195, 10, &#39;Ava C.&#39;, &#39;2025-12-15&#39;, 1),&#xA;    (196, 10, &#39;Ava C.&#39;, &#39;2025-12-16&#39;, 3),&#xA;    (197, 10, &#39;Ava C.&#39;, &#39;2025-12-17&#39;, -1),&#xA;    (198, 10, &#39;Ava C.&#39;, 
&#39;2025-12-18&#39;, 0),&#xA;    (199, 10, &#39;Ava C.&#39;, &#39;2025-12-19&#39;, 5),&#xA;    (200, 10, &#39;Ava C.&#39;, &#39;2025-12-20&#39;, 4),&#xA;    (241, 13, &#39;Ava R.&#39;, &#39;2025-12-01&#39;, 3),&#xA;    (242, 13, &#39;Ava R.&#39;, &#39;2025-12-02&#39;, 2),&#xA;    (243, 13, &#39;Ava R.&#39;, &#39;2025-12-03&#39;, 2),&#xA;    (244, 13, &#39;Ava R.&#39;, &#39;2025-12-04&#39;, 1),&#xA;    (245, 13, &#39;Ava R.&#39;, &#39;2025-12-05&#39;, -1),&#xA;    (246, 13, &#39;Ava R.&#39;, &#39;2025-12-06&#39;, -1),&#xA;    (247, 13, &#39;Ava R.&#39;, &#39;2025-12-07&#39;, 2),&#xA;    (248, 13, &#39;Ava R.&#39;, &#39;2025-12-08&#39;, 5),&#xA;    (249, 13, &#39;Ava R.&#39;, &#39;2025-12-09&#39;, 0),&#xA;    (250, 13, &#39;Ava R.&#39;, &#39;2025-12-10&#39;, 5),&#xA;    (251, 13, &#39;Ava R.&#39;, &#39;2025-12-11&#39;, 2),&#xA;    (252, 13, &#39;Ava R.&#39;, &#39;2025-12-12&#39;, -1),&#xA;    (253, 13, &#39;Ava R.&#39;, &#39;2025-12-13&#39;, 2),&#xA;    (254, 13, &#39;Ava R.&#39;, &#39;2025-12-14&#39;, 3),&#xA;    (255, 13, &#39;Ava R.&#39;, &#39;2025-12-15&#39;, 2),&#xA;    (256, 13, &#39;Ava R.&#39;, &#39;2025-12-16&#39;, -1),&#xA;    (257, 13, &#39;Ava R.&#39;, &#39;2025-12-17&#39;, -2),&#xA;    (258, 13, &#39;Ava R.&#39;, &#39;2025-12-18&#39;, -4),&#xA;    (259, 13, &#39;Ava R.&#39;, &#39;2025-12-19&#39;, -3),&#xA;    (260, 13, &#39;Ava R.&#39;, &#39;2025-12-20&#39;, 2),&#xA;    (1961, 99, &#39;Ava X.&#39;, &#39;2025-12-01&#39;, 2),&#xA;    (1962, 99, &#39;Ava X.&#39;, &#39;2025-12-02&#39;, -2),&#xA;    (1963, 99, &#39;Ava X.&#39;, &#39;2025-12-03&#39;, -1),&#xA;    (1964, 99, &#39;Ava X.&#39;, &#39;2025-12-04&#39;, -2),&#xA;    (1965, 99, &#39;Ava X.&#39;, &#39;2025-12-05&#39;, 3),&#xA;    (1966, 99, &#39;Ava X.&#39;, &#39;2025-12-06&#39;, -1),&#xA;    (1967, 99, &#39;Ava X.&#39;, &#39;2025-12-07&#39;, 0),&#xA;    (1968, 99, &#39;Ava X.&#39;, &#39;2025-12-08&#39;, 1),&#xA;    (1969, 99, &#39;Ava X.&#39;, &#39;2025-12-09&#39;, 0),&#xA;    (1970, 99, &#39;Ava 
X.&#39;, &#39;2025-12-10&#39;, 0),&#xA;    (1971, 99, &#39;Ava X.&#39;, &#39;2025-12-11&#39;, 3),&#xA;    (1972, 99, &#39;Ava X.&#39;, &#39;2025-12-12&#39;, 4),&#xA;    (1973, 99, &#39;Ava X.&#39;, &#39;2025-12-13&#39;, 4),&#xA;    (1974, 99, &#39;Ava X.&#39;, &#39;2025-12-14&#39;, 0),&#xA;    (1975, 99, &#39;Ava X.&#39;, &#39;2025-12-15&#39;, 3),&#xA;    (1976, 99, &#39;Ava X.&#39;, &#39;2025-12-16&#39;, -1),&#xA;    (1977, 99, &#39;Ava X.&#39;, &#39;2025-12-17&#39;, -1),&#xA;    (1978, 99, &#39;Ava X.&#39;, &#39;2025-12-18&#39;, 3),&#xA;    (1979, 99, &#39;Ava X.&#39;, &#39;2025-12-19&#39;, 3),&#xA;    (1980, 99, &#39;Ava X.&#39;, &#39;2025-12-20&#39;, -3);&#xA;```&#xA;&#xA;No hiccups! Good to go.&#xA;&#xA;We just have one table today.&#xA;&#xA;```sql&#xA; SELECT * FROM behavior_logs;&#xA;```&#xA;&#xA;```&#xA;sqlite&gt; .read day11-inserts.sql&#xA;sqlite&gt; .mode table&#xA;sqlite&gt; .schema&#xA;CREATE TABLE behavior_logs (&#xA;    id INT PRIMARY KEY,&#xA;    child_id INT,&#xA;    child_name TEXT,&#xA;    behavior_date DATE,&#xA;    score INT&#xA;);&#xA;sqlite&gt; SELECT * FROM behavior_logs LIMIT 20;&#xA;+----+----------+------------+---------------+-------+&#xA;| id | child_id | child_name | behavior_date | score |&#xA;+----+----------+------------+---------------+-------+&#xA;| 1  | 1        | Emma D.    | 2025-12-01    | 5     |&#xA;| 2  | 1        | Emma D.    | 2025-12-02    | 1     |&#xA;| 3  | 1        | Emma D.    | 2025-12-03    | 3     |&#xA;| 4  | 1        | Emma D.    | 2025-12-04    | 5     |&#xA;| 5  | 1        | Emma D.    | 2025-12-05    | 2     |&#xA;| 6  | 1        | Emma D.    | 2025-12-06    | 2     |&#xA;| 7  | 1        | Emma D.    | 2025-12-07    | 3     |&#xA;| 8  | 1        | Emma D.    | 2025-12-08    | 5     |&#xA;| 9  | 1        | Emma D.    | 2025-12-09    | 4     |&#xA;| 10 | 1        | Emma D.    | 2025-12-10    | 5     |&#xA;| 11 | 1        | Emma D.    | 2025-12-11    | 5     |&#xA;| 12 | 1        | Emma D.    
| 2025-12-12    | -1    |&#xA;| 13 | 1        | Emma D.    | 2025-12-13    | 1     |&#xA;| 14 | 1        | Emma D.    | 2025-12-14    | 1     |&#xA;| 15 | 1        | Emma D.    | 2025-12-15    | -1    |&#xA;| 16 | 1        | Emma D.    | 2025-12-16    | 3     |&#xA;| 17 | 1        | Emma D.    | 2025-12-17    | -2    |&#xA;| 18 | 1        | Emma D.    | 2025-12-18    | 1     |&#xA;| 19 | 1        | Emma D.    | 2025-12-19    | 1     |&#xA;| 20 | 1        | Emma D.    | 2025-12-20    | -2    |&#xA;+----+----------+------------+---------------+-------+&#xA;sqlite&gt; SELECT * FROM behavior_logs WHERE child_name LIKE &#39;Ava&#39;;&#xA;sqlite&gt; SELECT * FROM behavior_logs WHERE child_name LIKE &#39;Ava%&#39;;&#xA;+------+----------+------------+---------------+-------+&#xA;|  id  | child_id | child_name | behavior_date | score |&#xA;+------+----------+------------+---------------+-------+&#xA;| 21   | 2        | Ava X.     | 2025-12-01    | 0     |&#xA;| 22   | 2        | Ava X.     | 2025-12-02    | -1    |&#xA;| 23   | 2        | Ava X.     | 2025-12-03    | 4     |&#xA;| 24   | 2        | Ava X.     | 2025-12-04    | 0     |&#xA;| 25   | 2        | Ava X.     | 2025-12-05    | 2     |&#xA;| 26   | 2        | Ava X.     | 2025-12-06    | 3     |&#xA;| 27   | 2        | Ava X.     | 2025-12-07    | 5     |&#xA;| 28   | 2        | Ava X.     | 2025-12-08    | 2     |&#xA;| 29   | 2        | Ava X.     | 2025-12-09    | 1     |&#xA;| 30   | 2        | Ava X.     | 2025-12-10    | 5     |&#xA;| 31   | 2        | Ava X.     | 2025-12-11    | 2     |&#xA;| 32   | 2        | Ava X.     | 2025-12-12    | 5     |&#xA;| 33   | 2        | Ava X.     | 2025-12-13    | 5     |&#xA;| 34   | 2        | Ava X.     | 2025-12-14    | 2     |&#xA;| 35   | 2        | Ava X.     | 2025-12-15    | 0     |&#xA;| 36   | 2        | Ava X.     | 2025-12-16    | 0     |&#xA;| 37   | 2        | Ava X.     | 2025-12-17    | 5     |&#xA;| 38   | 2        | Ava X.     
| 2025-12-18    | 4     |&#xA;| 39   | 2        | Ava X.     | 2025-12-19    | 5     |&#xA;| 40   | 2        | Ava X.     | 2025-12-20    | 5     |&#xA;| 181  | 10       | Ava C.     | 2025-12-01    | 3     |&#xA;| 182  | 10       | Ava C.     | 2025-12-02    | 0     |&#xA;| 183  | 10       | Ava C.     | 2025-12-03    | 3     |&#xA;| 184  | 10       | Ava C.     | 2025-12-04    | 5     |&#xA;| 185  | 10       | Ava C.     | 2025-12-05    | 5     |&#xA;| 186  | 10       | Ava C.     | 2025-12-06    | 4     |&#xA;| 187  | 10       | Ava C.     | 2025-12-07    | 1     |&#xA;| 188  | 10       | Ava C.     | 2025-12-08    | 4     |&#xA;| 189  | 10       | Ava C.     | 2025-12-09    | 5     |&#xA;| 190  | 10       | Ava C.     | 2025-12-10    | 5     |&#xA;| 191  | 10       | Ava C.     | 2025-12-11    | 5     |&#xA;| 192  | 10       | Ava C.     | 2025-12-12    | 0     |&#xA;| 193  | 10       | Ava C.     | 2025-12-13    | 0     |&#xA;| 194  | 10       | Ava C.     | 2025-12-14    | 3     |&#xA;| 195  | 10       | Ava C.     | 2025-12-15    | 1     |&#xA;| 196  | 10       | Ava C.     | 2025-12-16    | 3     |&#xA;| 197  | 10       | Ava C.     | 2025-12-17    | -1    |&#xA;| 198  | 10       | Ava C.     | 2025-12-18    | 0     |&#xA;| 199  | 10       | Ava C.     | 2025-12-19    | 5     |&#xA;| 200  | 10       | Ava C.     | 2025-12-20    | 4     |&#xA;| 241  | 13       | Ava R.     | 2025-12-01    | 3     |&#xA;| 242  | 13       | Ava R.     | 2025-12-02    | 2     |&#xA;| 243  | 13       | Ava R.     | 2025-12-03    | 2     |&#xA;| 244  | 13       | Ava R.     | 2025-12-04    | 1     |&#xA;| 245  | 13       | Ava R.     | 2025-12-05    | -1    |&#xA;| 246  | 13       | Ava R.     | 2025-12-06    | -1    |&#xA;| 247  | 13       | Ava R.     | 2025-12-07    | 2     |&#xA;| 248  | 13       | Ava R.     | 2025-12-08    | 5     |&#xA;| 249  | 13       | Ava R.     | 2025-12-09    | 0     |&#xA;| 250  | 13       | Ava R.     
| 2025-12-10    | 5     |&#xA;| 251  | 13       | Ava R.     | 2025-12-11    | 2     |&#xA;| 252  | 13       | Ava R.     | 2025-12-12    | -1    |&#xA;| 253  | 13       | Ava R.     | 2025-12-13    | 2     |&#xA;| 254  | 13       | Ava R.     | 2025-12-14    | 3     |&#xA;| 255  | 13       | Ava R.     | 2025-12-15    | 2     |&#xA;| 256  | 13       | Ava R.     | 2025-12-16    | -1    |&#xA;| 257  | 13       | Ava R.     | 2025-12-17    | -2    |&#xA;| 258  | 13       | Ava R.     | 2025-12-18    | -4    |&#xA;| 259  | 13       | Ava R.     | 2025-12-19    | -3    |&#xA;| 260  | 13       | Ava R.     | 2025-12-20    | 2     |&#xA;| 1961 | 99       | Ava X.     | 2025-12-01    | 2     |&#xA;| 1962 | 99       | Ava X.     | 2025-12-02    | -2    |&#xA;| 1963 | 99       | Ava X.     | 2025-12-03    | -1    |&#xA;| 1964 | 99       | Ava X.     | 2025-12-04    | -2    |&#xA;| 1965 | 99       | Ava X.     | 2025-12-05    | 3     |&#xA;| 1966 | 99       | Ava X.     | 2025-12-06    | -1    |&#xA;| 1967 | 99       | Ava X.     | 2025-12-07    | 0     |&#xA;| 1968 | 99       | Ava X.     | 2025-12-08    | 1     |&#xA;| 1969 | 99       | Ava X.     | 2025-12-09    | 0     |&#xA;| 1970 | 99       | Ava X.     | 2025-12-10    | 0     |&#xA;| 1971 | 99       | Ava X.     | 2025-12-11    | 3     |&#xA;| 1972 | 99       | Ava X.     | 2025-12-12    | 4     |&#xA;| 1973 | 99       | Ava X.     | 2025-12-13    | 4     |&#xA;| 1974 | 99       | Ava X.     | 2025-12-14    | 0     |&#xA;| 1975 | 99       | Ava X.     | 2025-12-15    | 3     |&#xA;| 1976 | 99       | Ava X.     | 2025-12-16    | -1    |&#xA;| 1977 | 99       | Ava X.     | 2025-12-17    | -1    |&#xA;| 1978 | 99       | Ava X.     | 2025-12-18    | 3     |&#xA;| 1979 | 99       | Ava X.     | 2025-12-19    | 3     |&#xA;| 1980 | 99       | Ava X.     
| 2025-12-20    | -3    |&#xA;+------+----------+------------+---------------+-------+&#xA;sqlite&gt; &#xA;```&#xA;&#xA;Let&#39;s get to the problem of day 11.&#xA;&#xA;## Problem&#xA;&#xA;&gt; Calculate the 7-day rolling average behavior score for each child. Identify any child whose rolling average drops below 0. For those children with a rolling average below 0, return the `child_id`, `child_name`, `behavior_date` (this will be the latest date in the 7-day rolling average), and the calculated 7-day rolling average. Only include results with a `behavior_date` of `December 7, 2025` or later, ensuring that each rolling average is based on a full 7 days of data.&#xA;&gt; &#xA;&gt; Order the results by `behavior_date` and then `child_name`.&#xA;&#xA;So, we need to do what?&#xA;&#xA;- Group by child_id&#xA;- Compute the rolling average over the past 7 days for each date (for 20th December, for example, the window covers 14th to 20th December)&#xA;- Order by behavior_date and child_name.&#xA;&#xA;### Using Simple Join&#xA;&#xA;We first have to grab the rolling average, which covers only the past 7 days per child.&#xA;&#xA;To do that, we can self join the `behavior_logs` table on the condition that the behavior date (from the right table) falls between 6 days before the current behavior date (from the left table) and the current behavior date, for each child.&#xA;&#xA;```sql&#xA;SELECT &#xA;    behavior_logs.child_id,&#xA;    behavior_logs.child_name,&#xA;    behavior_logs.behavior_date,&#xA;    AVG(week_logs.score) AS rolling_avg&#xA;FROM &#xA;    behavior_logs&#xA;JOIN &#xA;    behavior_logs week_logs&#xA;    ON behavior_logs.child_id = week_logs.child_id&#xA;    AND week_logs.behavior_date BETWEEN DATE(behavior_logs.behavior_date, &#39;-6 days&#39;) AND behavior_logs.behavior_date&#xA;GROUP BY &#xA;    behavior_logs.child_id;&#xA;```&#xA;&#xA;We basically joined the table `behavior_logs` with itself, i.e. the right and left tables are the same. 
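
The `DATE(..., &#39;-6 days&#39;)` modifier used in the join condition is easy to sanity-check from Python&#39;s `sqlite3` (a throwaway in-memory connection, not the puzzle data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The window that ends on 2025-12-07 starts 6 days earlier;
# both endpoints of BETWEEN are inclusive, so that is 7 days in total.
start, = conn.execute("SELECT DATE('2025-12-07', '-6 days')").fetchone()
print(start)  # 2025-12-01
```
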
So, we join on the condition of the same child_id, and then look for dates between 6 days back and the current behavior date (the current date is the 7th day of the window, hence we subtract only 6 days).&#xA;&#xA;We group by `child_id`, so that we get a single row for each `child_id`. We compute `AVG(score)` to get the average score over the past 7 days.&#xA;&#xA;You see the issue: we need the average for each date&#39;s window, not just one, but this gives a single row per child. We also need to group by the behavior date, so that each window stays distinct and a child can have multiple entries.&#xA;&#xA;```sql&#xA;SELECT &#xA;    behavior_logs.child_id,&#xA;    behavior_logs.child_name,&#xA;    behavior_logs.behavior_date,&#xA;    AVG(week_logs.score) AS rolling_avg&#xA;FROM &#xA;    behavior_logs&#xA;JOIN &#xA;    behavior_logs week_logs&#xA;    ON behavior_logs.child_id = week_logs.child_id&#xA;    AND week_logs.behavior_date BETWEEN DATE(behavior_logs.behavior_date, &#39;-6 days&#39;) AND behavior_logs.behavior_date&#xA;GROUP BY &#xA;    behavior_logs.child_id, behavior_logs.behavior_date;&#xA;```&#xA;&#xA;Now, we have grouped by `behavior_date` so that each child can have multiple entries for the windows that we have logs for. Now we can filter it further.&#xA;&#xA;We also need to filter where `rolling_avg` is less than 0, that is, the child had a bad week overall. That goes in a `HAVING` condition and not a `WHERE` condition: since `AVG` is an aggregate function, we can&#39;t reference `rolling_avg` in the `WHERE` clause, it won&#39;t be defined there yet. 
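
The `WHERE`-versus-`HAVING` distinction is easy to verify in a throwaway `sqlite3` session (a hypothetical table `t` with made-up values, not the puzzle data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t(g INT, x INT)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [(1, -2), (1, 4), (2, -3), (2, -1)])

# An aggregate in WHERE is rejected: WHERE runs before grouping.
try:
    conn.execute("SELECT g, AVG(x) FROM t WHERE AVG(x) > 0 GROUP BY g")
    where_rejected = False
except sqlite3.OperationalError:
    where_rejected = True

# HAVING runs after aggregation, so the aggregate (or its alias) is fine there.
rows = conn.execute("SELECT g, AVG(x) AS a FROM t GROUP BY g HAVING a > 0").fetchall()
print(where_rejected, rows)
```
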
So, we use `HAVING` to filter `rolling_avg` to less than `0`.&#xA;&#xA;```sql&#xA;SELECT &#xA;    behavior_logs.child_id,&#xA;    behavior_logs.child_name,&#xA;    behavior_logs.behavior_date,&#xA;    AVG(week_logs.score) AS rolling_avg&#xA;FROM &#xA;    behavior_logs&#xA;JOIN &#xA;    behavior_logs week_logs&#xA;    ON behavior_logs.child_id = week_logs.child_id&#xA;    AND week_logs.behavior_date BETWEEN DATE(behavior_logs.behavior_date, &#39;-6 days&#39;) AND behavior_logs.behavior_date&#xA;GROUP BY &#xA;    behavior_logs.child_id, behavior_logs.behavior_date&#xA;HAVING &#xA;    rolling_avg &lt; 0;&#xA;```&#xA;&#xA;We are now down to the rows that represent `bad` weeks for each child. &#xA;&#xA;There is one catch, however: we can&#39;t use windows ending before `7th December`, because we don&#39;t have a full 7 days of data before that to compute the rolling average. Hence we only include records from `7th December` onwards.&#xA;&#xA;We keep this next to the average check in the `HAVING` clause (though since `behavior_date` is a grouping column, this particular filter would also work in `WHERE`).&#xA;&#xA;```sql&#xA;SELECT &#xA;    behavior_logs.child_id,&#xA;    behavior_logs.child_name,&#xA;    behavior_logs.behavior_date,&#xA;    AVG(week_logs.score) AS rolling_avg&#xA;FROM &#xA;    behavior_logs&#xA;JOIN &#xA;    behavior_logs week_logs&#xA;    ON behavior_logs.child_id = week_logs.child_id&#xA;    AND week_logs.behavior_date BETWEEN DATE(behavior_logs.behavior_date, &#39;-6 days&#39;) AND behavior_logs.behavior_date&#xA;GROUP BY &#xA;    behavior_logs.child_id, behavior_logs.behavior_date&#xA;HAVING &#xA;    rolling_avg &lt; 0&#xA;    AND behavior_logs.behavior_date &gt;= &#39;2025-12-07&#39;;&#xA;```&#xA;&#xA;Now, the final piece is the order.&#xA;&#xA;We need to order it by the `behavior_date` and the `child_name` as requested.&#xA;&#xA;```sql&#xA;SELECT &#xA;    behavior_logs.child_id,&#xA;    behavior_logs.child_name,&#xA;    
behavior_logs.behavior_date,&#xA;    AVG(week_logs.score) AS rolling_avg&#xA;FROM &#xA;    behavior_logs&#xA;JOIN &#xA;    behavior_logs week_logs&#xA;    ON behavior_logs.child_id = week_logs.child_id&#xA;    AND week_logs.behavior_date BETWEEN DATE(behavior_logs.behavior_date, &#39;-6 days&#39;) AND behavior_logs.behavior_date&#xA;GROUP BY &#xA;    behavior_logs.child_id, behavior_logs.behavior_date&#xA;HAVING &#xA;    rolling_avg &lt; 0&#xA;    AND behavior_logs.behavior_date &gt;= &#39;2025-12-07&#39;&#xA;ORDER BY &#xA;    behavior_logs.behavior_date, behavior_logs.child_name;&#xA;```&#xA;&#xA;This is a simple solution: easy to understand and explain, though not exactly short and crisp.&#xA;&#xA;&#xA;We can even do this with a sub-query instead of a JOIN.&#xA;&#xA;### Using Sub-query&#xA;&#xA;We take the conditions from the `JOIN` and move them into a correlated sub-query, like so.&#xA;&#xA;```sql&#xA;SELECT&#xA;    current_logs.child_id,&#xA;    current_logs.child_name,&#xA;    current_logs.behavior_date,&#xA;    (&#xA;        SELECT AVG(past_logs.score)&#xA;        FROM behavior_logs past_logs&#xA;        WHERE past_logs.child_id = current_logs.child_id&#xA;          AND past_logs.behavior_date&#xA;              BETWEEN date(current_logs.behavior_date, &#39;-6 days&#39;)&#xA;                  AND current_logs.behavior_date&#xA;    ) AS rolling_avg&#xA;FROM behavior_logs current_logs&#xA;WHERE current_logs.behavior_date &gt;= &#39;2025-12-07&#39;&#xA;  AND rolling_avg &lt; 0&#xA;ORDER BY current_logs.behavior_date, current_logs.child_name;&#xA;&#xA;```&#xA;&#xA;Since it&#39;s a subquery, we don&#39;t need a GROUP BY or a HAVING clause to filter the rolling average and the behavior date. (Referencing the `rolling_avg` alias in `WHERE` works because SQLite resolves output column aliases there; many other databases would make you repeat the expression.)&#xA;&#xA;### Using Sub-query and Window Function&#xA;&#xA;We can take the above query and, instead of a join, write a sub-query that computes the rolling average with a window function.&#xA;&#xA;```sql&#xA;SELECT&#xA;    child_id,&#xA;    child_name,&#xA;    behavior_date,&#xA;    
AVG(score) OVER (&#xA;        PARTITION BY child_id&#xA;        ORDER BY behavior_date&#xA;        ROWS BETWEEN 6 PRECEDING AND CURRENT ROW&#xA;    ) AS rolling_avg&#xA;FROM behavior_logs&#xA;&#xA;```&#xA;&#xA;We use `AVG(score) OVER (...)`, a window function. We partition (create a window) per child, order by the behavior date, and slide the window over the past 7 rows. &#xA;&#xA;The `ROWS BETWEEN 6 PRECEDING AND CURRENT ROW` defines a 7-row sliding window for each child. Every row gets its own window, and each window is separate per child because of PARTITION BY `child_id`. (Note that `ROWS` counts rows, not calendar days; that works here because every child has exactly one log per day.)&#xA;&#xA;So we get a full per-day rolling average for each child with this query.&#xA;&#xA;This had around 2400 rows.&#xA;&#xA;Now, we need to filter it down to only the rows where the rolling average is less than `0`, and we don&#39;t include averages for days before `7th December`, as we don&#39;t have enough days before that to compute the 7-day rolling average.&#xA;&#xA;But we can&#39;t reference `rolling_avg` in this query&#39;s `WHERE` clause: window functions are evaluated after `WHERE`, so the alias is not available there. 
We haven&#39;t grouped by anything explicitly, so we can&#39;t use `HAVING` either; we need to wrap it in a sub-query so we can filter on the rolling average as well as on the behavior date.&#xA;&#xA;```sql&#xA;SELECT&#xA;    child_id,&#xA;    child_name,&#xA;    behavior_date,&#xA;    rolling_avg&#xA;FROM (&#xA;    SELECT&#xA;        child_id,&#xA;        child_name,&#xA;        behavior_date,&#xA;        AVG(score) OVER (&#xA;            PARTITION BY child_id&#xA;            ORDER BY behavior_date&#xA;            ROWS BETWEEN 6 PRECEDING AND CURRENT ROW&#xA;        ) AS rolling_avg&#xA;    FROM behavior_logs&#xA;)&#xA;WHERE behavior_date &gt;= &#39;2025-12-07&#39;&#xA;  AND rolling_avg &lt; 0;&#xA;```&#xA;&#xA;This will filter down the rows.&#xA;&#xA;Now, we also need to order by the `behavior_date` and then `child_name`.&#xA;&#xA;```sql&#xA;SELECT&#xA;    child_id,&#xA;    child_name,&#xA;    behavior_date,&#xA;    rolling_avg&#xA;FROM (&#xA;    SELECT&#xA;        child_id,&#xA;        child_name,&#xA;        behavior_date,&#xA;        AVG(score) OVER (&#xA;            PARTITION BY child_id&#xA;            ORDER BY behavior_date&#xA;            ROWS BETWEEN 6 PRECEDING AND CURRENT ROW&#xA;        ) AS rolling_avg&#xA;    FROM behavior_logs&#xA;)&#xA;WHERE behavior_date &gt;= &#39;2025-12-07&#39;&#xA;  AND rolling_avg &lt; 0&#xA;ORDER BY behavior_date, child_name;&#xA;```&#xA;&#xA;So, that is again the same result, just with a window function.&#xA;&#xA;We can even do this with that sub-query wrapped in a CTE (it just looks and reads better; nothing really different, other than that you can reference the CTE multiple times in the same query, which we don&#39;t need here).&#xA;&#xA;### Using CTE and Window Function&#xA;&#xA;We just take the above sub-query and wrap it in a CTE.&#xA;&#xA;```sql&#xA;WITH rolling AS (&#xA;    SELECT&#xA;        child_id,&#xA;        child_name,&#xA;        behavior_date,&#xA;        AVG(score) OVER (&#xA;            PARTITION BY child_id&#xA;            ORDER BY 
behavior_date&#xA;            ROWS BETWEEN 6 PRECEDING AND CURRENT ROW&#xA;        ) AS rolling_avg&#xA;    FROM behavior_logs&#xA;)&#xA;SELECT&#xA;    child_id,&#xA;    child_name,&#xA;    behavior_date,&#xA;    rolling_avg&#xA;FROM rolling&#xA;WHERE behavior_date &gt;= &#39;2025-12-07&#39;&#xA;  AND rolling_avg &lt; 0&#xA;ORDER BY behavior_date, child_name;&#xA;&#xA;```&#xA;&#xA;The rest remains the same; we just reference the `rolling` CTE and grab the necessary details from it in the outer query.&#xA;&#xA;That should be it for Day 11!&#xA;&#xA;Some cool CTE and window function usage, and a practical use case for computing rolling averages. Loved it!&#xA;&#xA;On to day 12!&#xA;&#xA;</content>
      <type>sqlog</type>
    </item>
    <item>
      <title>Advent of SQL 2025 Day 10: Misdelivered Presents</title>
      <link>https://meetgor.com/sqlog/advent-of-sql-2025-day-10</link>
      <description>Advent of SQL, Day 10 - Misdelivered Presents It&#39;s already day 10? We just need 5 more days now! Whoa! that flew by swiftly. Let&#39;s pull in the data. This is the</description>
      <pubDate>Thu, 25 Dec 2025 00:00:00 UTC</pubDate>
      <content>&#xA;&#xA;## Advent of SQL, Day 10 - Misdelivered Presents&#xA;&#xA;It&#39;s already day 10? We just need 5 more days now! Whoa! that flew by swiftly.&#xA;&#xA;Let&#39;s pull in the data.&#xA;&#xA;This is the SQL for day 10 in SQLite.&#xA;&#xA;```sql&#xA;DROP TABLE IF EXISTS misdelivered_presents;&#xA;DROP TABLE IF EXISTS deliveries;&#xA;&#xA;CREATE TABLE deliveries (&#xA;    id INT PRIMARY KEY,&#xA;    child_name TEXT,&#xA;    delivery_location TEXT,&#xA;    gift_name TEXT,&#xA;    scheduled_at TIMESTAMP&#xA;);&#xA;&#xA;CREATE TABLE misdelivered_presents (&#xA;    id INT PRIMARY KEY,&#xA;    child_name TEXT,&#xA;    delivery_location TEXT,&#xA;    gift_name TEXT,&#xA;    scheduled_at TIMESTAMP,&#xA;    flagged_at TIMESTAMP,&#xA;    reason TEXT&#xA;);&#xA;&#xA;INSERT INTO deliveries (id, child_name, delivery_location, gift_name, scheduled_at) VALUES&#xA;    (1, &#39;Omar Q.&#39;, &#39;45 Maple Street&#39;, &#39;storybook collection&#39;, &#39;2025-12-24 21:09:00&#39;),&#xA;    (2, &#39;Sofia K.&#39;, &#39;77 Snowflake Road&#39;, &#39;plush reindeer&#39;, &#39;2025-12-24 18:35:00&#39;),&#xA;    (3, &#39;Mila N.&#39;, &#39;The Vibes&#39;, &#39;storybook collection&#39;, &#39;2025-12-24 21:09:00&#39;),&#xA;    (4, &#39;Elias M.&#39;, &#39;Frost Hollow Cabin&#39;, &#39;board game&#39;, &#39;2025-12-24 20:31:00&#39;),&#xA;    (5, &#39;Ravi P.&#39;, &#39;45 Maple Street&#39;, &#39;wooden train set&#39;, &#39;2025-12-24 18:23:00&#39;),&#xA;    (6, &#39;Jonah W.&#39;, &#39;77 Snowflake Road&#39;, &#39;plush reindeer&#39;, &#39;2025-12-24 20:34:00&#39;),&#xA;    (7, &#39;Ava J.&#39;, &#39;123 Evergreen Lane&#39;, &#39;board game&#39;, &#39;2025-12-24 21:03:00&#39;),&#xA;    (8, &#39;Omar Q.&#39;, &#39;77 Snowflake Road&#39;, &#39;board game&#39;, &#39;2025-12-24 18:56:00&#39;),&#xA;    (9, &#39;Nia G.&#39;, &#39;Frost Hollow Cabin&#39;, &#39;teddy bear&#39;, &#39;2025-12-24 21:27:00&#39;),&#xA;    (10, &#39;Zara S.&#39;, &#39;North Pole Annex&#39;, 
&#39;wooden train set&#39;, &#39;2025-12-24 20:58:00&#39;),&#xA;    (11, &#39;Ravi P.&#39;, &#39;Frost Hollow Cabin&#39;, &#39;puzzle box&#39;, &#39;2025-12-24 18:39:00&#39;),&#xA;    (12, &#39;Jonah W.&#39;, &#39;123 Evergreen Lane&#39;, &#39;puzzle box&#39;, &#39;2025-12-24 18:23:00&#39;),&#xA;    (13, &#39;Ravi P.&#39;, &#39;North Pole Annex&#39;, &#39;storybook collection&#39;, &#39;2025-12-24 21:36:00&#39;),&#xA;    (14, &#39;Lena F.&#39;, &#39;North Pole Annex&#39;, &#39;teddy bear&#39;, &#39;2025-12-24 21:26:00&#39;),&#xA;    (15, &#39;Ava J.&#39;, &#39;North Pole Annex&#39;, &#39;snow globe&#39;, &#39;2025-12-24 18:31:00&#39;),&#xA;    (16, &#39;Elias M.&#39;, &#39;Frost Hollow Cabin&#39;, &#39;robot toy&#39;, &#39;2025-12-24 20:21:00&#39;),&#xA;    (17, &#39;Sofia K.&#39;, &#39;Frost Hollow Cabin&#39;, &#39;teddy bear&#39;, &#39;2025-12-24 20:27:00&#39;),&#xA;    (18, &#39;Jonah W.&#39;, &#39;77 Snowflake Road&#39;, &#39;storybook collection&#39;, &#39;2025-12-24 20:49:00&#39;),&#xA;    (19, &#39;Jonah W.&#39;, &#39;Frost Hollow Cabin&#39;, &#39;art supplies&#39;, &#39;2025-12-24 21:38:00&#39;),&#xA;    (20, &#39;Jonah W.&#39;, &#39;123 Evergreen Lane&#39;, &#39;storybook collection&#39;, &#39;2025-12-24 19:11:00&#39;);&#xA;&#xA;INSERT INTO misdelivered_presents&#xA;(id, child_name, delivery_location, gift_name, scheduled_at, flagged_at, reason)&#xA;VALUES&#xA;    (601, &#39;Priya D.&#39;, &#39;The Vibes&#39;, &#39;plush reindeer&#39;, &#39;2025-12-24 14:00:00&#39;, &#39;2025-12-24 14:05:00&#39;, &#39;Invalid delivery location&#39;),&#xA;    (602, &#39;Lena F.&#39;, &#39;Abandoned Lighthouse&#39;, &#39;board game&#39;, &#39;2025-12-22 06:00:00&#39;, &#39;2025-12-22 06:05:00&#39;, &#39;Invalid delivery location&#39;),&#xA;    (603, &#39;Caleb O.&#39;, &#39;Drifting Igloo&#39;, &#39;board game&#39;, &#39;2025-12-24 06:00:00&#39;, &#39;2025-12-24 06:05:00&#39;, &#39;Invalid delivery location&#39;),&#xA;    (604, &#39;Mateo C.&#39;, &#39;The Vibes&#39;, 
&#39;art supplies&#39;, &#39;2025-12-22 04:00:00&#39;, &#39;2025-12-22 04:05:00&#39;, &#39;Invalid delivery location&#39;),&#xA;    (605, &#39;Hiro T.&#39;, &#39;The Vibes&#39;, &#39;robot toy&#39;, &#39;2025-12-24 08:00:00&#39;, &#39;2025-12-24 08:05:00&#39;, &#39;Invalid delivery location&#39;),&#xA;    (606, &#39;Priya D.&#39;, &#39;Volcano Rim&#39;, &#39;puzzle box&#39;, &#39;2025-12-22 08:00:00&#39;, &#39;2025-12-22 08:05:00&#39;, &#39;Invalid delivery location&#39;),&#xA;    (607, &#39;Nia G.&#39;, &#39;Abandoned Lighthouse&#39;, &#39;board game&#39;, &#39;2025-12-24 01:00:00&#39;, &#39;2025-12-24 01:05:00&#39;, &#39;Invalid delivery location&#39;),&#xA;    (608, &#39;Elias M.&#39;, &#39;Drifting Igloo&#39;, &#39;board game&#39;, &#39;2025-12-24 01:00:00&#39;, &#39;2025-12-24 01:05:00&#39;, &#39;Invalid delivery location&#39;),&#xA;    (609, &#39;Ravi P.&#39;, &#39;Volcano Rim&#39;, &#39;board game&#39;, &#39;2025-12-24 02:00:00&#39;, &#39;2025-12-24 02:05:00&#39;, &#39;Invalid delivery location&#39;),&#xA;    (610, &#39;Hiro T.&#39;, &#39;Abandoned Lighthouse&#39;, &#39;science kit&#39;, &#39;2025-12-23 20:00:00&#39;, &#39;2025-12-23 20:05:00&#39;, &#39;Invalid delivery location&#39;),&#xA;    (611, &#39;Priya D.&#39;, &#39;Drifting Igloo&#39;, &#39;puzzle box&#39;, &#39;2025-12-22 21:00:00&#39;, &#39;2025-12-22 21:05:00&#39;, &#39;Invalid delivery location&#39;),&#xA;    (612, &#39;Hiro T.&#39;, &#39;Volcano Rim&#39;, &#39;art supplies&#39;, &#39;2025-12-23 09:00:00&#39;, &#39;2025-12-23 09:05:00&#39;, &#39;Invalid delivery location&#39;),&#xA;    (613, &#39;Jonah W.&#39;, &#39;Abandoned Lighthouse&#39;, &#39;board game&#39;, &#39;2025-12-24 01:00:00&#39;, &#39;2025-12-24 01:05:00&#39;, &#39;Invalid delivery location&#39;),&#xA;    (614, &#39;Omar Q.&#39;, &#39;Volcano Rim&#39;, &#39;art supplies&#39;, &#39;2025-12-22 01:00:00&#39;, &#39;2025-12-22 01:05:00&#39;, &#39;Invalid delivery location&#39;),&#xA;    (615, &#39;Omar Q.&#39;, &#39;Drifting 
Igloo&#39;, &#39;science kit&#39;, &#39;2025-12-23 20:00:00&#39;, &#39;2025-12-23 20:05:00&#39;, &#39;Invalid delivery location&#39;),&#xA;    (616, &#39;Omar Q.&#39;, &#39;Abandoned Lighthouse&#39;, &#39;teddy bear&#39;, &#39;2025-12-24 12:00:00&#39;, &#39;2025-12-24 12:05:00&#39;, &#39;Invalid delivery location&#39;),&#xA;    (617, &#39;Zara S.&#39;, &#39;Volcano Rim&#39;, &#39;wooden train set&#39;, &#39;2025-12-24 12:00:00&#39;, &#39;2025-12-24 12:05:00&#39;, &#39;Invalid delivery location&#39;),&#xA;    (618, &#39;Omar Q.&#39;, &#39;Abandoned Lighthouse&#39;, &#39;teddy bear&#39;, &#39;2025-12-23 15:00:00&#39;, &#39;2025-12-23 15:05:00&#39;, &#39;Invalid delivery location&#39;),&#xA;    (619, &#39;Caleb O.&#39;, &#39;The Vibes&#39;, &#39;teddy bear&#39;, &#39;2025-12-24 14:00:00&#39;, &#39;2025-12-24 14:05:00&#39;, &#39;Invalid delivery location&#39;),&#xA;    (620, &#39;Nia G.&#39;, &#39;Abandoned Lighthouse&#39;, &#39;board game&#39;, &#39;2025-12-23 03:00:00&#39;, &#39;2025-12-23 03:05:00&#39;, &#39;Invalid delivery location&#39;);&#xA;```&#xA;&#xA;```sql&#xA;SELECT * FROM deliveries;&#xA;SELECT * FROM misdelivered_presents;&#xA;```&#xA;&#xA;We have two tables, almost the same, with a critical logical distinction between them: `misdelivered_presents` has two extra columns, `flagged_at` and `reason`.&#xA;&#xA;Let&#39;s see the problem to check it.&#xA;&#xA;## Problem&#xA;&#xA;&gt; Clean-up the deliveries table to remove any records where the delivery_location is &#39;Volcano Rim&#39;, &#39;Drifting Igloo&#39;, &#39;Abandoned Lighthouse&#39;, &#39;The Vibes&#39;.&#xA;&gt; &#xA;&gt; Move those records to the misdelivered_presents with all the same columns as deliveries plus a flagged_at column with the current time and a reason column with &#34;Invalid delivery location&#34; listed as the reason for each moved record.&#xA;&gt; &#xA;&gt; Make sure your final step shows the misdelivered_presents records that you just moved (i.e. 
don&#39;t include any existing records from the misdelivered_presents table).&#xA;&#xA;Ok, this looks like an easy problem.&#xA;&#xA;- SELECT some data&#xA;- INSERT that data into the other table&#xA;- DELETE that data from the original table&#xA;- SELECT the newly inserted data in the other table&#xA;&#xA;Right?&#xA;&#xA;Unless!&#xA;&#xA;&gt; Santa turned to you.&#xA;&gt;&#xA;&gt; “I don’t want this done in five steps,” he said. “And I don’t want any re-selecting. Move the problem presents out of the delivery system, log them in the vault, and show me exactly what you moved.”&#xA;&#xA;Ouch Santa! Don&#39;t be lazy! Be smart, he says! Huhh!&#xA;&#xA;Ok, at least let&#39;s check both the tables, how many rows they have, and the rows that we want to move around.&#xA;&#xA;```sql&#xA;SELECT COUNT(*) AS delivery_count FROM deliveries;&#xA;SELECT COUNT(*) AS misdelivered_present_count FROM misdelivered_presents;&#xA;&#xA;SELECT &#xA;    COUNT(*) AS misdelivered_deliveries_count&#xA;FROM deliveries &#xA;WHERE &#xA;    delivery_location IN (&#xA;        &#39;Volcano Rim&#39;,&#xA;        &#39;Drifting Igloo&#39;,&#xA;        &#39;Abandoned Lighthouse&#39;,&#xA;        &#39;The Vibes&#39;&#xA;    );&#xA;&#xA;SELECT &#xA;    COUNT(*) AS misdelivered_present_count&#xA;FROM misdelivered_presents&#xA;WHERE &#xA;    delivery_location IN (&#xA;        &#39;Volcano Rim&#39;,&#xA;        &#39;Drifting Igloo&#39;,&#xA;        &#39;Abandoned Lighthouse&#39;,&#xA;        &#39;The Vibes&#39;&#xA;    );&#xA;```&#xA;&#xA;```&#xA;sqlite&gt; SELECT COUNT(*) FROM deliveries;&#xA;+----------+&#xA;| COUNT(*) |&#xA;+----------+&#xA;| 600      |&#xA;+----------+&#xA;sqlite&gt; SELECT COUNT(*) FROM misdelivered_presents;&#xA;+----------+&#xA;| COUNT(*) |&#xA;+----------+&#xA;| 50       |&#xA;+----------+&#xA;sqlite&gt; SELECT COUNT(*) FROM deliveries WHERE delivery_location IN (&#39;Volcano Rim&#39;, &#39;Drifting Igloo&#39;, &#39;Abandoned Lighthouse&#39;, &#39;The 
Vibes&#39;);&#xA;+----------+&#xA;| COUNT(*) |&#xA;+----------+&#xA;| 103      |&#xA;+----------+&#xA;sqlite&gt; SELECT COUNT(*) FROM misdelivered_presents WHERE delivery_location  IN (&#39;Volcano Rim&#39;, &#39;Drifting Igloo&#39;, &#39;Abandoned Lighthouse&#39;, &#39;The Vibes&#39;);&#xA;+----------+&#xA;| COUNT(*) |&#xA;+----------+&#xA;| 50       |&#xA;+----------+&#xA;sqlite&gt; &#xA;```&#xA;&#xA;OK, so we need to delete the `103` records from the `deliveries` table and move them into the `misdelivered_presents` table at the same time.&#xA;&#xA;How are we going to do that in SQLite?&#xA;&#xA;We can try&#xA;- DELETE FROM deliveries with a SELECT from deliveries &#xA;- Then INSERT the deleted data into misdelivered_presents&#xA;&#xA;Could that work?&#xA;&#xA;Let&#39;s see.&#xA;&#xA;```sql&#xA;WITH moved AS (&#xA;    DELETE &#xA;        FROM deliveries&#xA;        WHERE delivery_location IN (&#xA;            &#39;Volcano Rim&#39;,&#xA;            &#39;Drifting Igloo&#39;,&#xA;            &#39;Abandoned Lighthouse&#39;,&#xA;            &#39;The Vibes&#39;&#xA;        )&#xA;    RETURNING &#xA;        id, &#xA;        child_name, &#xA;        delivery_location, &#xA;        gift_name, &#xA;        scheduled_at, &#xA;        datetime(&#39;now&#39;) AS flagged_at, &#xA;        &#39;Invalid delivery location&#39; AS reason&#xA;)&#xA;SELECT * FROM moved;&#xA;```&#xA;&#xA;```&#xA;&#xA;```&#xA;&#xA;Oops! We can&#39;t put a DELETE inside a CTE.&#xA;&#xA;That&#39;s nasty. 
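&#xA;&#xA;For what it&#39;s worth, this is exactly what PostgreSQL&#39;s data-modifying CTEs allow, so Santa&#39;s one-statement move is possible over there. A sketch of how it might look (PostgreSQL only; this will not run in SQLite):&#xA;&#xA;```sql&#xA;-- PostgreSQL only: a writable CTE can DELETE ... RETURNING and feed an INSERT&#xA;WITH moved AS (&#xA;    DELETE FROM deliveries&#xA;    WHERE delivery_location IN (&#xA;        &#39;Volcano Rim&#39;,&#xA;        &#39;Drifting Igloo&#39;,&#xA;        &#39;Abandoned Lighthouse&#39;,&#xA;        &#39;The Vibes&#39;&#xA;    )&#xA;    RETURNING id, child_name, delivery_location, gift_name, scheduled_at&#xA;)&#xA;INSERT INTO misdelivered_presents&#xA;    (id, child_name, delivery_location, gift_name, scheduled_at, flagged_at, reason)&#xA;SELECT id, child_name, delivery_location, gift_name, scheduled_at,&#xA;       now(), &#39;Invalid delivery location&#39;&#xA;FROM moved&#xA;RETURNING *;&#xA;```&#xA;&#xA;One statement that deletes, logs, and shows exactly what moved. But we are in SQLite land.&#xA;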
I thought I could shove that into an insert into the misdelivered table.&#xA;&#xA;But should it be the other way around then?&#xA;&#xA;Insert first and then use the data to delete?&#xA;&#xA;```sql&#xA;WITH inserted_data AS (&#xA;  INSERT INTO misdelivered_presents (id, child_name, delivery_location, gift_name, scheduled_at)&#xA;  SELECT id, child_name, delivery_location, gift_name, scheduled_at&#xA;  FROM deliveries&#xA;  WHERE delivery_location IN (&#39;Volcano Rim&#39;, &#39;Drifting Igloo&#39;, &#39;Abandoned Lighthouse&#39;, &#39;The Vibes&#39;)&#xA;)&#xA;DELETE FROM deliveries&#xA;WHERE delivery_location IN (&#39;Volcano Rim&#39;, &#39;Drifting Igloo&#39;, &#39;Abandoned Lighthouse&#39;, &#39;The Vibes&#39;);&#xA;&#xA;```&#xA;&#xA;```&#xA;sqlite&gt; WITH inserted_data AS (&#xA;  INSERT INTO misdelivered_presents (id, child_name, delivery_location, gift_name, scheduled_at)&#xA;  SELECT id, child_name, delivery_location, gift_name, scheduled_at&#xA;  FROM deliveries&#xA;  WHERE delivery_location IN (&#39;Volcano Rim&#39;, &#39;Drifting Igloo&#39;, &#39;Abandoned Lighthouse&#39;, &#39;The Vibes&#39;)&#xA;)&#xA;DELETE FROM deliveries&#xA;WHERE delivery_location IN (&#39;Volcano Rim&#39;, &#39;Drifting Igloo&#39;, &#39;Abandoned Lighthouse&#39;, &#39;The Vibes&#39;);&#xA;Parse error: near &#34;INSERT&#34;: syntax error&#xA;  rt the selected rows into misdelivered_presents   INSERT INTO misdelivered_pre&#xA;                                      error here ---^&#xA;sqlite&gt; &#xA;&#xA;```&#xA;&#xA;Tried it; didn&#39;t work.&#xA;&#xA;Quoting the documentation here.&#xA;&#xA;&gt; Common Table Expressions or CTEs act like temporary views that exist only for the duration of a single SQL statement. There are two kinds of common table expressions: &#34;ordinary&#34; and &#34;recursive&#34;. Ordinary common table expressions are helpful for making queries easier to understand by factoring subqueries out of the main SQL statement. 
Recursive common table expressions provide the ability to do hierarchical or recursive queries of trees and graphs, a capability that is not otherwise available in the SQL language.&#xA;&#xA;So SQLite doesn&#39;t support CTEs containing a DELETE or INSERT; a CTE can only hold a SELECT! Sigh!&#xA;&#xA;Now?&#xA;&#xA;BEGIN COMMIT? Atomic Transactions?&#xA;&#xA;Yeah!&#xA;&#xA;Santa wanted it in one go, right? That&#39;s not possible in SQLite, but at least either everything will happen or nothing will.&#xA;&#xA;```sql&#xA;BEGIN;&#xA;&#xA;WITH misdelivered_deliveries AS (&#xA;    SELECT * FROM deliveries &#xA;    WHERE delivery_location IN (&#xA;        &#39;Volcano Rim&#39;, &#xA;        &#39;Drifting Igloo&#39;, &#xA;        &#39;Abandoned Lighthouse&#39;, &#xA;        &#39;The Vibes&#39;)&#xA;)&#xA;INSERT INTO misdelivered_presents (&#xA;    id, &#xA;    child_name,&#xA;    delivery_location, &#xA;    gift_name, &#xA;    scheduled_at, &#xA;    flagged_at, &#xA;    reason&#xA;)&#xA;SELECT &#xA;    id,&#xA;    child_name,&#xA;    delivery_location,&#xA;    gift_name,&#xA;    scheduled_at,&#xA;    DATETIME(&#39;now&#39;),&#xA;    &#39;Invalid delivery location&#39;&#xA;FROM misdelivered_deliveries&#xA;RETURNING *;&#xA;&#xA;DELETE FROM deliveries &#xA;WHERE id IN (&#xA;    SELECT id FROM deliveries &#xA;    WHERE delivery_location IN (&#39;Volcano Rim&#39;, &#39;Drifting Igloo&#39;, &#39;Abandoned Lighthouse&#39;, &#39;The Vibes&#39;)&#xA;);&#xA;&#xA;COMMIT;&#xA;```&#xA;&#xA;So here, we defined the `misdelivered_deliveries` CTE that selects data from `deliveries` and used it to insert into the `misdelivered_presents` table. Then a separate query deletes that data from the `deliveries` table.&#xA;&#xA;Yeah! I mean, I don&#39;t think there could be any other way to do it!&#xA;&#xA;We could use triggers to insert into one table when rows are deleted from another. But I think that is too farfetched a solution. 
We might create a trigger and instantly delete it, as it could populate unwanted data if kept in the database.&#xA;&#xA;### Trigger to insert when deleted&#xA;&#xA;We can create a `TRIGGER` to insert into `misdelivered_presents` when something is deleted from the `deliveries` table. We will separately have to delete the records from the `deliveries` table, but the insert will happen automatically as part of the deletion.&#xA;&#xA;Opening a fresh instance of the database!&#xA;&#xA;```sql&#xA;CREATE TRIGGER move_misdelivered_presents&#xA;BEFORE DELETE ON deliveries&#xA;FOR EACH ROW&#xA;WHEN OLD.delivery_location IN (&#39;Volcano Rim&#39;, &#39;Drifting Igloo&#39;, &#39;Abandoned Lighthouse&#39;, &#39;The Vibes&#39;)&#xA;BEGIN&#xA;    INSERT INTO misdelivered_presents (&#xA;        id, child_name, delivery_location, gift_name, &#xA;        scheduled_at, flagged_at, reason&#xA;    )&#xA;    VALUES (&#xA;        OLD.id, OLD.child_name, OLD.delivery_location, OLD.gift_name, &#xA;        OLD.scheduled_at, DATETIME(&#39;now&#39;), &#39;Invalid delivery location&#39;&#xA;    );&#xA;END;&#xA;```&#xA;&#xA;This will create the trigger to insert the row into the `misdelivered_presents` table when it is deleted from the `deliveries` table.&#xA;&#xA;```sql&#xA;DELETE FROM deliveries &#xA;WHERE delivery_location IN (&#39;Volcano Rim&#39;, &#39;Drifting Igloo&#39;, &#39;Abandoned Lighthouse&#39;, &#39;The Vibes&#39;)&#xA;RETURNING *;&#xA;```&#xA;&#xA;Oops! We are returning from the DELETE statement, which is wrong, as the problem stated to select from the `misdelivered_presents` table.&#xA;&#xA;&gt; Make sure your final step shows the misdelivered_presents records that you just moved (i.e. don&#39;t include any existing records from the misdelivered_presents table).&#xA;&#xA;This is invalid then!&#xA;&#xA;Though it&#39;s not technically atomic. 
It happens before the delete, so it can mess up things.&#xA;&#xA;&#xA;&#xA;```&#xA;sqlite&gt; .read day10-inserts.sql&#xA;sqlite&gt; .mode table &#xA;sqlite&gt; CREATE TRIGGER move_misdelivered_presents&#xA;BEFORE DELETE ON deliveries&#xA;FOR EACH ROW&#xA;WHEN OLD.delivery_location IN (&#39;Volcano Rim&#39;, &#39;Drifting Igloo&#39;, &#39;Abandoned Lighthouse&#39;, &#39;The Vibes&#39;)&#xA;BEGIN&#xA;    INSERT INTO misdelivered_presents (&#xA;        id, child_name, delivery_location, gift_name, &#xA;        scheduled_at, flagged_at, reason&#xA;    )&#xA;    VALUES (&#xA;        OLD.id, OLD.child_name, OLD.delivery_location, OLD.gift_name, &#xA;        OLD.scheduled_at, DATETIME(&#39;now&#39;), &#39;Invalid delivery location&#39;&#xA;    );&#xA;END;&#xA;sqlite&gt; DELETE FROM deliveries &#xA;WHERE delivery_location IN (&#39;Volcano Rim&#39;, &#39;Drifting Igloo&#39;, &#39;Abandoned Lighthouse&#39;, &#39;The Vibes&#39;)&#xA;RETURNING *;&#xA;+-----+------------+----------------------+----------------------+---------------------+&#xA;| id  | child_name |  delivery_location   |      gift_name       |    scheduled_at     |&#xA;+-----+------------+----------------------+----------------------+---------------------+&#xA;| 3   | Mila N.    | The Vibes            | storybook collection | 2025-12-24 21:09:00 |&#xA;| 22  | Lena F.    | Abandoned Lighthouse | plush reindeer       | 2025-12-24 19:08:00 |&#xA;| 23  | Mila N.    | Abandoned Lighthouse | storybook collection | 2025-12-24 20:42:00 |&#xA;| 29  | Mateo C.   | Volcano Rim          | plush reindeer       | 2025-12-24 21:44:00 |&#xA;| 31  | Nia G.     | Drifting Igloo       | robot toy            | 2025-12-24 19:57:00 |&#xA;...&#xA;...&#xA;| 582 | Zara S.    | The Vibes            | teddy bear           | 2025-12-24 21:20:00 |&#xA;| 585 | Layla B.   | Abandoned Lighthouse | wooden train set     | 2025-12-24 18:39:00 |&#xA;| 587 | Nia G.     
| Volcano Rim          | storybook collection | 2025-12-24 18:35:00 |&#xA;| 596 | Omar Q.    | Abandoned Lighthouse | puzzle box           | 2025-12-24 19:28:00 |&#xA;+-----+------------+----------------------+----------------------+---------------------+&#xA;sqlite&gt; DROP TRIGGER move_misdelivered_presents;&#xA;sqlite&gt; SELECT COUNT(*) FROM deliveries;&#xA;+----------+&#xA;| COUNT(*) |&#xA;+----------+&#xA;| 497      |&#xA;+----------+&#xA;sqlite&gt; SELECT COUNT(*) FROM misdelivered_presents;&#xA;+----------+&#xA;| COUNT(*) |&#xA;+----------+&#xA;| 153      |&#xA;+----------+&#xA;sqlite&gt; &#xA;&#xA;```&#xA;&#xA;So, it gets the result, but I don&#39;t think the `TRIGGER` is the right solution to this.&#xA;&#xA;In SQLite, an atomic transaction using BEGIN and COMMIT is the only way to go, right?&#xA;&#xA;Someone prove to Santa that it can&#39;t be done in SQLite in one query? Please!&#xA;&#xA;That&#39;s it from day 10; I have spent enough time on this, banging my head on the sqlite shell!&#xA;&#xA;Off to day 11 tomorrow!&#xA;</content>
      <type>sqlog</type>
    </item>
    <item>
      <title>Advent of SQL 2025 Day 9: Evergreen Market Orders</title>
      <link>https://meetgor.com/sqlog/advent-of-sql-2025-day-9</link>
      <description>Advent of SQL, Day 9 - Evergreen Market Orders We are on day 9 of advent of SQL, and I feel good so far. Let&#39;s see what we learn today? Let&#39;s get the inserts fo</description>
      <pubDate>Wed, 24 Dec 2025 00:00:00 UTC</pubDate>
      <content>&#xA;&#xA;## Advent of SQL, Day 9 - Evergreen Market Orders&#xA;&#xA;We are on day 9 of advent of SQL, and I feel good so far.&#xA;&#xA;Let&#39;s see what we learn today?&#xA;&#xA;Let&#39;s get the inserts for the day.&#xA;&#xA;```&#xA;sqlite&gt; .read day9-inserts.sql&#xA;sqlite&gt; .schema&#xA;CREATE TABLE orders (&#xA;    id           INT PRIMARY KEY,&#xA;    customer_id  INT,&#xA;    created_at   TIMESTAMP,&#xA;    order_data   JSONB&#xA;);&#xA;sqlite&gt; .mode table&#xA;sqlite&gt; SELECT * FROM orders limit 10;&#xA;+----+-------------+---------------------+--------------------------------------------------------------+&#xA;| id | customer_id |     created_at      |                          order_data                          |&#xA;+----+-------------+---------------------+--------------------------------------------------------------+&#xA;| 1  | 1           | 2025-11-21 13:08:22 | {&#34;shipping&#34;: {&#34;method&#34;: &#34;standard&#34;}, &#34;gift&#34;: {&#34;wrapped&#34;: tru |&#xA;|    |             |                     | e}}                                                          |&#xA;+----+-------------+---------------------+--------------------------------------------------------------+&#xA;| 2  | 1           | 2025-11-21 18:42:58 | {&#34;shipping&#34;: {&#34;method&#34;: &#34;overnight&#34;}, &#34;risk&#34;: {&#34;flag&#34;: &#34;high |&#xA;|    |             |                     | &#34;}, &#34;gift&#34;: {&#34;wrapped&#34;: false}}                              |&#xA;+----+-------------+---------------------+--------------------------------------------------------------+&#xA;| 3  | 1           | 2025-11-21 21:01:46 | {&#34;shipping&#34;: {&#34;method&#34;: &#34;standard&#34;}, &#34;gift&#34;: {&#34;wrapped&#34;: fal |&#xA;|    |             |                     | se}}                                                         
|&#xA;+----+-------------+---------------------+--------------------------------------------------------------+&#xA;| 4  | 1           | 2025-11-24 13:17:27 | {&#34;shipping&#34;: {&#34;method&#34;: &#34;standard&#34;}, &#34;gift&#34;: {&#34;wrapped&#34;: tru |&#xA;|    |             |                     | e}}                                                          |&#xA;+----+-------------+---------------------+--------------------------------------------------------------+&#xA;| 5  | 1           | 2025-11-24 21:09:46 | {&#34;shipping&#34;: {&#34;method&#34;: &#34;overnight&#34;}, &#34;gift&#34;: {&#34;wrapped&#34;: fa |&#xA;|    |             |                     | lse}}                                                        |&#xA;+----+-------------+---------------------+--------------------------------------------------------------+&#xA;| 6  | 1           | 2025-11-25 07:24:55 | {&#34;shipping&#34;: {&#34;method&#34;: &#34;standard&#34;}, &#34;risk&#34;: {&#34;flag&#34;: &#34;mediu |&#xA;|    |             |                     | m&#34;}, &#34;gift&#34;: {&#34;wrapped&#34;: true}}                              |&#xA;+----+-------------+---------------------+--------------------------------------------------------------+&#xA;| 7  | 1           | 2025-11-25 17:42:36 | {&#34;shipping&#34;: {&#34;method&#34;: &#34;standard&#34;}, &#34;gift&#34;: {&#34;wrapped&#34;: fal |&#xA;|    |             |                     | se}}                                                         |&#xA;+----+-------------+---------------------+--------------------------------------------------------------+&#xA;| 8  | 1           | 2025-11-27 02:34:24 | {&#34;shipping&#34;: {&#34;method&#34;: &#34;express&#34;}, &#34;gift&#34;: {&#34;wrapped&#34;: true |&#xA;|    |             |                     | }}                                                           |&#xA;+----+-------------+---------------------+--------------------------------------------------------------+&#xA;| 9  | 1  
         | 2025-11-30 22:43:54 | {&#34;shipping&#34;: {&#34;method&#34;: &#34;overnight&#34;}, &#34;gift&#34;: {&#34;wrapped&#34;: tr |&#xA;|    |             |                     | ue}}                                                         |&#xA;+----+-------------+---------------------+--------------------------------------------------------------+&#xA;| 10 | 1           | 2025-12-01 04:03:33 | {&#34;shipping&#34;: {&#34;method&#34;: &#34;express&#34;}, &#34;risk&#34;: {&#34;flag&#34;: &#34;medium |&#xA;|    |             |                     | &#34;}, &#34;gift&#34;: {&#34;wrapped&#34;: false}}                              |&#xA;+----+-------------+---------------------+--------------------------------------------------------------+&#xA;sqlite&gt; &#xA;```&#xA;&#xA;Looks like we will deal with JSON today; seems exciting. I haven&#39;t dealt with JSON in SQLite yet, but today will change that.&#xA;&#xA;Let&#39;s get some sample inserts for you to play with in the browser.&#xA;&#xA;LIMITING TO 20, as there are more than 400 rows!&#xA;&#xA;```sql&#xA;DROP TABLE IF EXISTS orders;&#xA;&#xA;CREATE TABLE orders (&#xA;    id           INT PRIMARY KEY,&#xA;    customer_id  INT,&#xA;    created_at   TIMESTAMP,&#xA;    order_data   JSONB&#xA;);&#xA;&#xA;INSERT INTO orders (id, customer_id, created_at, order_data) VALUES&#xA;    (1, 1, &#39;2025-11-21 13:08:22&#39;, &#39;{&#34;shipping&#34;: {&#34;method&#34;: &#34;standard&#34;}, &#34;gift&#34;: {&#34;wrapped&#34;: true}}&#39;),&#xA;    (2, 1, &#39;2025-11-21 18:42:58&#39;, &#39;{&#34;shipping&#34;: {&#34;method&#34;: &#34;overnight&#34;}, &#34;risk&#34;: {&#34;flag&#34;: &#34;high&#34;}, &#34;gift&#34;: {&#34;wrapped&#34;: false}}&#39;),&#xA;    (3, 1, &#39;2025-11-21 21:01:46&#39;, &#39;{&#34;shipping&#34;: {&#34;method&#34;: &#34;standard&#34;}, &#34;gift&#34;: {&#34;wrapped&#34;: false}}&#39;),&#xA;    (4, 1, &#39;2025-11-24 13:17:27&#39;, &#39;{&#34;shipping&#34;: {&#34;method&#34;: &#34;standard&#34;}, &#34;gift&#34;: 
{&#34;wrapped&#34;: true}}&#39;),&#xA;    (5, 1, &#39;2025-11-24 21:09:46&#39;, &#39;{&#34;shipping&#34;: {&#34;method&#34;: &#34;overnight&#34;}, &#34;gift&#34;: {&#34;wrapped&#34;: false}}&#39;),&#xA;    (6, 1, &#39;2025-11-25 07:24:55&#39;, &#39;{&#34;shipping&#34;: {&#34;method&#34;: &#34;standard&#34;}, &#34;risk&#34;: {&#34;flag&#34;: &#34;medium&#34;}, &#34;gift&#34;: {&#34;wrapped&#34;: true}}&#39;),&#xA;    (7, 1, &#39;2025-11-25 17:42:36&#39;, &#39;{&#34;shipping&#34;: {&#34;method&#34;: &#34;standard&#34;}, &#34;gift&#34;: {&#34;wrapped&#34;: false}}&#39;),&#xA;    (8, 1, &#39;2025-11-27 02:34:24&#39;, &#39;{&#34;shipping&#34;: {&#34;method&#34;: &#34;express&#34;}, &#34;gift&#34;: {&#34;wrapped&#34;: true}}&#39;),&#xA;    (9, 1, &#39;2025-11-30 22:43:54&#39;, &#39;{&#34;shipping&#34;: {&#34;method&#34;: &#34;overnight&#34;}, &#34;gift&#34;: {&#34;wrapped&#34;: true}}&#39;),&#xA;    (10, 1, &#39;2025-12-01 04:03:33&#39;, &#39;{&#34;shipping&#34;: {&#34;method&#34;: &#34;express&#34;}, &#34;risk&#34;: {&#34;flag&#34;: &#34;medium&#34;}, &#34;gift&#34;: {&#34;wrapped&#34;: false}}&#39;),&#xA;    (11, 1, &#39;2025-12-02 05:19:10&#39;, &#39;{&#34;shipping&#34;: {&#34;method&#34;: &#34;overnight&#34;}, &#34;risk&#34;: {&#34;flag&#34;: &#34;low&#34;}, &#34;gift&#34;: {&#34;wrapped&#34;: true}}&#39;),&#xA;    (12, 1, &#39;2025-12-03 16:25:56&#39;, &#39;{&#34;shipping&#34;: {&#34;method&#34;: &#34;overnight&#34;}, &#34;risk&#34;: {&#34;flag&#34;: &#34;medium&#34;}, &#34;gift&#34;: {&#34;wrapped&#34;: true}}&#39;),&#xA;    (13, 1, &#39;2025-12-10 19:34:28&#39;, &#39;{&#34;shipping&#34;: {&#34;method&#34;: &#34;standard&#34;}, &#34;gift&#34;: {&#34;wrapped&#34;: false}}&#39;),&#xA;    (14, 1, &#39;2025-12-16 19:23:53&#39;, &#39;{&#34;shipping&#34;: {&#34;method&#34;: &#34;express&#34;}, &#34;gift&#34;: {&#34;wrapped&#34;: false}}&#39;),&#xA;    (15, 2, &#39;2025-11-23 19:11:23&#39;, &#39;{&#34;shipping&#34;: {&#34;method&#34;: &#34;overnight&#34;}, &#34;gift&#34;: 
{&#34;wrapped&#34;: true}}&#39;),&#xA;    (16, 2, &#39;2025-11-28 15:23:27&#39;, &#39;{&#34;shipping&#34;: {&#34;method&#34;: &#34;express&#34;}, &#34;gift&#34;: {&#34;wrapped&#34;: true}}&#39;),&#xA;    (17, 2, &#39;2025-11-30 12:05:36&#39;, &#39;{&#34;shipping&#34;: {&#34;method&#34;: &#34;overnight&#34;}, &#34;risk&#34;: {&#34;flag&#34;: &#34;low&#34;}, &#34;gift&#34;: {&#34;wrapped&#34;: true}}&#39;),&#xA;    (18, 2, &#39;2025-12-03 07:03:06&#39;, &#39;{&#34;shipping&#34;: {&#34;method&#34;: &#34;standard&#34;}, &#34;gift&#34;: {&#34;wrapped&#34;: false}}&#39;),&#xA;    (19, 2, &#39;2025-12-07 13:55:13&#39;, &#39;{&#34;shipping&#34;: {&#34;method&#34;: &#34;express&#34;}, &#34;risk&#34;: {&#34;flag&#34;: &#34;high&#34;}, &#34;gift&#34;: {&#34;wrapped&#34;: false}}&#39;),&#xA;    (20, 2, &#39;2025-12-08 07:17:31&#39;, &#39;{&#34;shipping&#34;: {&#34;method&#34;: &#34;standard&#34;}, &#34;gift&#34;: {&#34;wrapped&#34;: false}}&#39;);&#xA;```&#xA;&#xA;```sql&#xA;SELECT * FROM orders;&#xA;```&#xA;&#xA;Let&#39;s get to the problem now.&#xA;&#xA;## Problem&#xA;&#xA;&gt; Build a report using the orders table that shows the latest order for each customer, along with their requested shipping method, gift wrap choice (as true or false), and the risk flag in separate columns.&#xA;&gt; &#xA;&gt; Order the report by the most recent order first so Evergreen Market can reach out to them ASAP.&#xA;&#xA;Ok, so we need for each customer the latest order with the following details:&#xA;- shipping method&#xA;- gift wrap choice&#xA;- risk flag&#xA;&#xA;These all I think are in the same column as a JSON string or blob. 
We need to extract those out from the column.&#xA;&#xA;Let&#39;s first check the `orders` table.&#xA;&#xA;It has a few columns:&#xA;- id&#xA;- customer_id&#xA;- created_at&#xA;- order_data&#xA;&#xA;We definitely need `order_data`, as that is the column holding the JSON.&#xA;&#xA;The problem also says to give the most recent order, so we need to order by `created_at` in reverse, latest first. And since we need it per customer, we need to group by `customer_id`.&#xA;&#xA;Let&#39;s see how to get at the data inside JSON in SQLite.&#xA;&#xA;### JSON Extract&#xA;&#xA;Well, we have [json_extract](https://sqlite.org/json1.html#jex), which can give us the value of a key from the given json data string.&#xA;&#xA;The function takes the column containing the json data as its first parameter, and the path to the key as the second. In this case, if we want to get `method` from the `shipping` key, we can use `$.shipping.method`, which means: from the root `$`, get the `shipping` key, and inside that (`shipping`) give the value of the `method` key.&#xA;&#xA;If the path is not present (in our case the `risk` key appears only rarely in the original json data), the function skips the further key lookup and returns `NULL`.&#xA;&#xA;```sql&#xA;SELECT json_extract(orders.order_data, &#39;$.shipping.method&#39;) FROM orders LIMIT 5;&#xA;```&#xA;&#xA;```&#xA;sqlite&gt; select json_extract(orders.order_data, &#39;$.shipping.method&#39;) FROM orders LIMIT 5;&#xA;+------------------------------------------------------+&#xA;| json_extract(orders.order_data, &#39;$.shipping.method&#39;) |&#xA;+------------------------------------------------------+&#xA;| standard                                             |&#xA;| overnight                                            |&#xA;| standard                                             |&#xA;| standard                                             |&#xA;| overnight                                  
          |&#xA;+------------------------------------------------------+&#xA;sqlite&gt; select *, json_extract(orders.order_data, &#39;$.shipping.method&#39;) FROM orders LIMIT 5;&#xA;+----+-------------+---------------------+--------------------------------------------------------------+------------------------------------------------------+&#xA;| id | customer_id |     created_at      |                          order_data                          | json_extract(orders.order_data, &#39;$.shipping.method&#39;) |&#xA;+----+-------------+---------------------+--------------------------------------------------------------+------------------------------------------------------+&#xA;| 1  | 1           | 2025-11-21 13:08:22 | {&#34;shipping&#34;: {&#34;method&#34;: &#34;standard&#34;}, &#34;gift&#34;: {&#34;wrapped&#34;: tru | standard                                             |&#xA;|    |             |                     | e}}                                                          |                                                      |&#xA;+----+-------------+---------------------+--------------------------------------------------------------+------------------------------------------------------+&#xA;| 2  | 1           | 2025-11-21 18:42:58 | {&#34;shipping&#34;: {&#34;method&#34;: &#34;overnight&#34;}, &#34;risk&#34;: {&#34;flag&#34;: &#34;high | overnight                                            |&#xA;|    |             |                     | &#34;}, &#34;gift&#34;: {&#34;wrapped&#34;: false}}                              |                                                      |&#xA;+----+-------------+---------------------+--------------------------------------------------------------+------------------------------------------------------+&#xA;| 3  | 1           | 2025-11-21 21:01:46 | {&#34;shipping&#34;: {&#34;method&#34;: &#34;standard&#34;}, &#34;gift&#34;: {&#34;wrapped&#34;: fal | standard                                             |&#xA;|    |        
     |                     | se}}                                                         |                                                      |&#xA;+----+-------------+---------------------+--------------------------------------------------------------+------------------------------------------------------+&#xA;| 4  | 1           | 2025-11-24 13:17:27 | {&#34;shipping&#34;: {&#34;method&#34;: &#34;standard&#34;}, &#34;gift&#34;: {&#34;wrapped&#34;: tru | standard                                             |&#xA;|    |             |                     | e}}                                                          |                                                      |&#xA;+----+-------------+---------------------+--------------------------------------------------------------+------------------------------------------------------+&#xA;| 5  | 1           | 2025-11-24 21:09:46 | {&#34;shipping&#34;: {&#34;method&#34;: &#34;overnight&#34;}, &#34;gift&#34;: {&#34;wrapped&#34;: fa | overnight                                            |&#xA;|    |             |                     | lse}}                                                        |                                                      |&#xA;+----+-------------+---------------------+--------------------------------------------------------------+------------------------------------------------------+&#xA;sqlite&gt; &#xA;&#xA;```&#xA;&#xA;Let&#39;s grab the other two.&#xA;&#xA;```sql&#xA;SELECT&#xA;    json_extract(order_data, &#39;$.shipping.method&#39;),&#xA;    json_extract(order_data, &#39;$.gift.wrapped&#39;),&#xA;    json_extract(order_data, &#39;$.risk.flag&#39;) &#xA;FROM orders;&#xA;```&#xA;&#xA;```&#xA;sqlite&gt; select json_extract(order_data, &#39;$.shipping.method&#39;), json_extract(order_data, &#39;$.gift.wrapped&#39;), json_extract(order_data, &#39;$.risk.flag&#39;) FROM orders LIMIT 
5;&#xA;+-----------------------------------------------+--------------------------------------------+-----------------------------------------+&#xA;| json_extract(order_data, &#39;$.shipping.method&#39;) | json_extract(order_data, &#39;$.gift.wrapped&#39;) | json_extract(order_data, &#39;$.risk.flag&#39;) |&#xA;+-----------------------------------------------+--------------------------------------------+-----------------------------------------+&#xA;| standard                                      | 1                                          |                                         |&#xA;| overnight                                     | 0                                          | high                                    |&#xA;| standard                                      | 0                                          |                                         |&#xA;| standard                                      | 1                                          |                                         |&#xA;| overnight                                     | 0                                          |                                         |&#xA;+-----------------------------------------------+--------------------------------------------+-----------------------------------------+&#xA;sqlite&gt; &#xA;```&#xA;&#xA;Hmm, interesting: it automatically converted `false` to `0` and `true` to `1`, even though, as you can see, the actual json data had `true` and `false`. Nice, but it could trip you up the other way if you were checking for true or false instead of 0 and 1. 
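
If you want to poke at this conversion outside the puzzle tables, here is a tiny sketch of mine (not from the original post) using Python&#39;s built-in `sqlite3` module; it assumes your SQLite build ships the JSON functions, which any recent one does:

```python
import sqlite3

# Throwaway example: feed json_extract a literal JSON document
# instead of a table column.
con = sqlite3.connect(":memory:")

doc = '{"shipping": {"method": "express"}, "gift": {"wrapped": true}}'

wrapped = con.execute(
    "SELECT json_extract(?, '$.gift.wrapped')", (doc,)
).fetchone()[0]
risk = con.execute(
    "SELECT json_extract(?, '$.risk.flag')", (doc,)
).fetchone()[0]

print(wrapped)  # JSON true comes back as the integer 1
print(risk)     # missing path comes back as NULL, i.e. None in Python
```
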
Both are the same in SQLite: they are treated as truthy and falsy values.&#xA;&#xA;Do we have everything we need now?&#xA;&#xA;Well, let&#39;s group by the `customer_id` and order by the `created_at` to get the info about each customer with its latest order.&#xA;&#xA;```sql&#xA;SELECT *, json_extract(order_data, &#39;$.shipping.method&#39;), json_extract(order_data, &#39;$.gift.wrapped&#39;), json_extract(order_data, &#39;$.risk.flag&#39;) FROM orders GROUP BY customer_id ORDER by created_at LIMIT 5;&#xA;&#xA;```&#xA;&#xA;But here I see the problem: how can we group by customer_id and still get the latest order for each customer? The ORDER BY only runs after the grouping is done, so the row kept for each group is not necessarily that customer&#39;s latest order.&#xA;&#xA;Hmm!&#xA;&#xA;It seems we need a subquery to get the latest order for each customer.&#xA;&#xA;```sql&#xA;SELECT &#xA;    orders.customer_id,&#xA;    orders.created_at,&#xA;    json_extract(orders.order_data, &#39;$.shipping.method&#39;) AS shipping_method,&#xA;    json_extract(orders.order_data, &#39;$.gift.wrapped&#39;) AS gift_wrap,&#xA;    json_extract(orders.order_data, &#39;$.risk.flag&#39;) AS risk_flag&#xA;FROM orders&#xA;WHERE orders.created_at = (&#xA;    SELECT MAX(created_at)&#xA;    FROM orders AS latest_order&#xA;    WHERE orders.customer_id = latest_order.customer_id&#xA;)&#xA;ORDER BY orders.created_at DESC;&#xA;&#xA;```&#xA;We just added this:&#xA;&#xA;```sql&#xA;WHERE orders.created_at = (&#xA;    SELECT MAX(created_at)&#xA;    FROM orders AS latest_order&#xA;    WHERE orders.customer_id = latest_order.customer_id&#xA;)&#xA;```&#xA;This correlated subquery keeps, for each customer, only the row whose `created_at` equals the maximum for that customer. Once it isolates the row with the latest created-at time, we can grab the details from the json in that one row per customer.&#xA;&#xA;```&#xA;sqlite&gt; SELECT &#xA;    orders.customer_id,&#xA;    orders.created_at,&#xA;    json_extract(orders.order_data, &#39;$.shipping.method&#39;) AS 
shipping_method,&#xA;    json_extract(orders.order_data, &#39;$.gift.wrapped&#39;) AS gift_wrap,&#xA;    json_extract(orders.order_data, &#39;$.risk.flag&#39;) AS risk_flag&#xA;FROM orders&#xA;WHERE orders.created_at = (&#xA;    SELECT MAX(created_at)&#xA;    FROM orders AS latest_order&#xA;    WHERE orders.customer_id = latest_order.customer_id&#xA;)&#xA;ORDER BY orders.created_at DESC;&#xA;&#xA;+-------------+---------------------+-----------------+-----------+-----------+&#xA;| customer_id |     created_at      | shipping_method | gift_wrap | risk_flag |&#xA;+-------------+---------------------+-----------------+-----------+-----------+&#xA;| 32          | 2025-12-17 21:17:39 | overnight       | 0         |           |&#xA;| 15          | 2025-12-17 19:21:33 | express         | 0         | medium    |&#xA;| 50          | 2025-12-17 14:47:54 | express         | 1         | low       |&#xA;| 43          | 2025-12-17 14:23:46 | express         | 1         |           |&#xA;| 27          | 2025-12-17 14:05:13 | standard        | 1         |           |&#xA;| 3           | 2025-12-17 14:02:28 | standard        | 1         | high      |&#xA;| 49          | 2025-12-17 13:28:49 | express         | 1         | high      |&#xA;| 36          | 2025-12-17 11:11:29 | overnight       | 1         |           |&#xA;| 31          | 2025-12-17 08:05:46 | express         | 0         |           |&#xA;| 16          | 2025-12-17 07:32:36 | express         | 0         |           |&#xA;| 38          | 2025-12-17 06:05:12 | standard        | 1         |           |&#xA;| 44          | 2025-12-17 05:28:54 | standard        | 1         |           |&#xA;| 9           | 2025-12-17 04:33:08 | express         | 1         |           |&#xA;| 23          | 2025-12-17 03:01:49 | express         | 0         |           |&#xA;| 21          | 2025-12-16 23:53:14 | overnight       | 1         |           |&#xA;| 25          | 2025-12-16 20:49:58 | overnight       | 1         | high      |&#xA;| 
46          | 2025-12-16 19:38:37 | standard        | 0         |           |&#xA;| 1           | 2025-12-16 19:23:53 | express         | 0         |           |&#xA;| 28          | 2025-12-16 18:20:55 | standard        | 0         | low       |&#xA;| 40          | 2025-12-16 17:54:05 | express         | 0         |           |&#xA;| 13          | 2025-12-16 16:11:16 | standard        | 1         |           |&#xA;| 24          | 2025-12-16 14:19:45 | overnight       | 0         |           |&#xA;| 11          | 2025-12-16 11:20:31 | standard        | 1         | medium    |&#xA;| 17          | 2025-12-16 08:19:36 | standard        | 0         |           |&#xA;| 4           | 2025-12-16 04:38:51 | express         | 0         |           |&#xA;| 34          | 2025-12-16 02:11:57 | express         | 0         |           |&#xA;| 30          | 2025-12-15 15:32:04 | overnight       | 0         | medium    |&#xA;| 48          | 2025-12-15 13:03:59 | standard        | 1         |           |&#xA;| 41          | 2025-12-15 13:00:00 | standard        | 0         | high      |&#xA;| 45          | 2025-12-15 11:37:57 | standard        | 0         |           |&#xA;| 7           | 2025-12-14 23:39:47 | express         | 0         |           |&#xA;| 35          | 2025-12-14 22:46:36 | express         | 1         | high      |&#xA;| 47          | 2025-12-14 20:53:07 | standard        | 0         |           |&#xA;| 22          | 2025-12-14 12:38:58 | standard        | 0         | medium    |&#xA;| 12          | 2025-12-14 07:59:28 | standard        | 1         | medium    |&#xA;| 18          | 2025-12-14 04:55:34 | overnight       | 0         | low       |&#xA;| 20          | 2025-12-14 04:54:07 | overnight       | 0         |           |&#xA;| 14          | 2025-12-13 07:44:19 | standard        | 1         |           |&#xA;| 6           | 2025-12-13 07:03:12 | overnight       | 1         |           |&#xA;| 10          | 2025-12-13 04:23:37 | standard        | 0         | 
medium    |&#xA;| 19          | 2025-12-13 03:29:15 | standard        | 0         |           |&#xA;| 8           | 2025-12-12 12:42:18 | overnight       | 0         |           |&#xA;| 26          | 2025-12-11 17:35:46 | standard        | 0         | low       |&#xA;| 37          | 2025-12-11 13:55:35 | overnight       | 1         |           |&#xA;| 33          | 2025-12-09 12:30:54 | express         | 1         |           |&#xA;| 2           | 2025-12-08 07:17:31 | standard        | 0         |           |&#xA;| 42          | 2025-12-08 02:48:12 | overnight       | 0         | medium    |&#xA;| 5           | 2025-12-06 17:53:53 | overnight       | 1         |           |&#xA;| 39          | 2025-12-06 14:38:29 | overnight       | 1         |           |&#xA;| 29          | 2025-12-03 05:10:32 | overnight       | 1         | high      |&#xA;+-------------+---------------------+-----------------+-----------+-----------+&#xA;sqlite&gt; &#xA;```&#xA;&#xA;Now, that is it! A quick solution for getting this data sorted.&#xA;&#xA;However, there is one more way, without needing the subquery.&#xA;&#xA;### ROW NUMBER - Window Function&#xA;&#xA;We can create a partition for each customer with the `customer_id`, sort it by the `created_at` date latest first (descending), and simply take the first row. This is essentially the same thing, but it&#39;s the kind of solution that &#39;gives a good feel to me&#39;. 
Elite Mindset!&#xA;&#xA;```sql&#xA;WITH ranked_orders AS (&#xA;    SELECT &#xA;        orders.customer_id,&#xA;        orders.created_at,&#xA;        json_extract(orders.order_data, &#39;$.shipping.method&#39;) AS shipping_method,&#xA;        json_extract(orders.order_data, &#39;$.gift.wrapped&#39;) AS gift_wrap,&#xA;        json_extract(orders.order_data, &#39;$.risk.flag&#39;) AS risk_flag,&#xA;        ROW_NUMBER() OVER (PARTITION BY orders.customer_id ORDER BY orders.created_at DESC) AS row_num&#xA;    FROM orders&#xA;)&#xA;SELECT &#xA;    customer_id,&#xA;    created_at,&#xA;    shipping_method,&#xA;    gift_wrap,&#xA;    risk_flag&#xA;FROM ranked_orders&#xA;WHERE row_num = 1&#xA;ORDER BY created_at DESC;&#xA;```&#xA;&#xA;We are just doing the same thing as explained.&#xA;&#xA;This part&#xA;&#xA;`ROW_NUMBER() OVER (PARTITION BY orders.customer_id ORDER BY orders.created_at DESC) AS row_num`&#xA;&#xA;partitions the `orders` table by `customer_id` and orders each partition by `created_at`, latest first. Each row gets a `row_num` within its partition, so the 1st row per customer is the latest order (we filter it with `WHERE row_num = 1` when querying the CTE).&#xA;&#xA;We wrapped it in the CTE `ranked_orders` (you could also call it `latest_orders`) so that we can filter `row_num` down to just `1`, and inside it we select the already extracted json data as well as the other columns.&#xA;&#xA;Simple!&#xA;&#xA;That is it!&#xA;&#xA;Day 15 was easy peasy!&#xA;&#xA;That wraps up Advent of SQL! Catch you in the next one!&#xA;</content>
      <type>sqlog</type>
    </item>
    <item>
      <title>Advent of SQL 2025 Day 8: Product Catalog</title>
      <link>https://meetgor.com/sqlog/advent-of-sql-2025-day-8</link>
      <description>Advent of SQL - Day 8, Product Catalog Whopsies! This is day 8. Let&#39;s get straigh... HOOH! We need to clean up some SQL for running in SQLite. Just cleaning up</description>
      <pubDate>Tue, 23 Dec 2025 00:00:00 UTC</pubDate>
      <content>&#xA;&#xA;## Advent of SQL - Day 8, Product Catalog&#xA;&#xA;Whopsies! This is day 8.&#xA;&#xA;Let&#39;s get straigh...&#xA;&#xA;HOOH! We need to clean up some SQL for running in SQLite.&#xA;&#xA;```bash&#xA;sed -i &#39;s/TIMESTAMP[[:space:]]*//g&#39; day8-inserts-sqlite.sql&#xA;```&#xA;&#xA;Just cleaning up `TIMESTAMP` in `INSERT` before the date value.&#xA;&#xA;Here we go:&#xA;The SQL to run in SQLite.&#xA;&#xA;```sql&#xA;DROP TABLE IF EXISTS price_changes;&#xA;DROP TABLE IF EXISTS products;&#xA;&#xA;CREATE TABLE products (&#xA;    product_id INT PRIMARY KEY,&#xA;    product_name TEXT&#xA;);&#xA;&#xA;CREATE TABLE price_changes (&#xA;    id INT PRIMARY KEY,&#xA;    product_id INT,&#xA;    price NUMERIC(10,2),&#xA;    effective_timestamp &#xA;);&#xA;&#xA;INSERT INTO products (product_id, product_name) VALUES&#xA;    (1, &#39;Deluxe Sled&#39;),&#xA;    (2, &#39;Holiday Trail Mix Trio&#39;),&#xA;    (3, &#39;Premium Cinnamon Roasted Almonds&#39;),&#xA;    (4, &#39;Deluxe Wrapping Paper&#39;),&#xA;    (5, &#39;Deluxe Roasted Cashews&#39;),&#xA;    (6, &#39;Festive Cookware Set&#39;),&#xA;    (7, &#39;Deluxe Mug&#39;),&#xA;    (8, &#39;Premium Sled&#39;),&#xA;    (9, &#39;Essential Sled&#39;),&#xA;    (10, &#39;Family Snow Boots&#39;),&#xA;    (11, &#39;Family Dark Chocolate Almonds&#39;),&#xA;    (12, &#39;Premium Festive Scarf&#39;),&#xA;    (13, &#39;Essential Cookie Decorating Kit&#39;),&#xA;    (14, &#39;Festive White Chocolate Popcorn&#39;),&#xA;    (15, &#39;Cozy Puzzle&#39;),&#xA;    (16, &#39;Holiday Cheddar Popcorn&#39;),&#xA;    (17, &#39;Premium Board Game&#39;),&#xA;    (18, &#39;Deluxe Pecan Praline Bites&#39;),&#xA;    (19, &#39;Cozy Almond Brittle&#39;),&#xA;    (20, &#39;Winter Sled&#39;);&#xA;&#xA;INSERT INTO price_changes (id, product_id, price, effective_timestamp) VALUES&#xA;    (1, 1, 148.28, &#39;2025-12-01 05:25:35&#39;),&#xA;    (2, 1, 148.63, &#39;2025-12-02 18:15:33&#39;),&#xA;    (3, 1, 126.78, &#39;2025-12-02 
18:40:38&#39;),&#xA;    (4, 1, 119.12, &#39;2025-12-03 10:14:35&#39;),&#xA;    (5, 1, 98.57, &#39;2025-12-04 04:14:31&#39;),&#xA;    (6, 1, 88.49, &#39;2025-12-06 19:02:40&#39;),&#xA;    (7, 1, 80.88, &#39;2025-12-07 10:43:54&#39;),&#xA;    (8, 1, 78.88, &#39;2025-12-08 06:45:39&#39;),&#xA;    (9, 1, 80.24, &#39;2025-12-08 16:11:11&#39;),&#xA;    (10, 1, 73.9, &#39;2025-12-10 14:33:43&#39;),&#xA;    (11, 1, 88.2, &#39;2025-12-12 02:21:09&#39;),&#xA;    (12, 1, 99.03, &#39;2025-12-12 02:58:14&#39;),&#xA;    (13, 1, 100.18, &#39;2025-12-14 15:58:03&#39;),&#xA;    (14, 1, 106.91, &#39;2025-12-16 01:51:05&#39;),&#xA;    (15, 1, 109.25, &#39;2025-12-16 16:01:53&#39;),&#xA;    (16, 2, 29.54, &#39;2025-12-03 14:21:10&#39;),&#xA;    (17, 2, 34.33, &#39;2025-12-03 19:14:31&#39;),&#xA;    (18, 2, 39.08, &#39;2025-12-04 06:13:48&#39;),&#xA;    (19, 2, 32.71, &#39;2025-12-04 18:33:17&#39;),&#xA;    (20, 2, 31.71, &#39;2025-12-05 22:36:14&#39;),&#xA;    (21, 2, 28.88, &#39;2025-12-06 02:42:02&#39;),&#xA;    (22, 2, 23.14, &#39;2025-12-07 09:46:54&#39;),&#xA;    (23, 2, 25.65, &#39;2025-12-07 10:03:45&#39;),&#xA;    (24, 2, 27.06, &#39;2025-12-07 14:39:15&#39;),&#xA;    (25, 2, 24.48, &#39;2025-12-07 20:08:05&#39;),&#xA;    (26, 2, 26.84, &#39;2025-12-09 07:44:32&#39;),&#xA;    (27, 2, 27.39, &#39;2025-12-13 06:25:19&#39;),&#xA;    (28, 2, 26.6, &#39;2025-12-14 10:16:34&#39;),&#xA;    (29, 2, 21.38, &#39;2025-12-15 16:20:20&#39;),&#xA;    (30, 2, 17.75, &#39;2025-12-16 09:28:13&#39;);&#xA;```&#xA;&#xA;We can get started.&#xA;&#xA;## Problem&#xA;&#xA;&gt; Generate a report, using the products and `price_changes` tables for leadership that returns the `product_name`, `current_price`, `previous_price`, and the difference between the current and previous prices&#xA;&#xA;&#xA;So what we need is &#xA;&#xA;- product_name&#xA;- current_price (latest)&#xA;- previous_price (just before the latest)&#xA;- price_difference = current - previous&#xA;&#xA;Again we have to meddle with dates, 
maybe, maybe not!&#xA;&#xA;### With CTEs and JOINs&#xA;&#xA;Let&#39;s start with the simplest approach. We need 2 prices: the latest (highest timestamp) and the 2nd latest (the 2nd highest timestamp). We can get the first pretty easily, but what about the second?&#xA;&#xA;Well, if we get the first, then the second should be easy to get, right? Right? Because it will be just before it. Well, not directly.&#xA;&#xA;Let&#39;s first grab the max timestamp.&#xA;&#xA;```sql&#xA;SELECT &#xA;    product_id,&#xA;    MAX(effective_timestamp) AS latest_timestamp&#xA;FROM price_changes&#xA;GROUP BY product_id;&#xA;```&#xA;&#xA;```&#xA;sqlite&gt; SELECT product_id, MAX(effective_timestamp) AS max_ts&#xA;        FROM price_changes&#xA;        GROUP BY product_id;&#xA;+------------+---------------------+&#xA;| product_id |       max_ts        |&#xA;+------------+---------------------+&#xA;| 1          | 2025-12-16 16:01:53 |&#xA;| 2          | 2025-12-16 09:28:13 |&#xA;| 3          | 2025-12-15 03:20:11 |&#xA;| 4          | 2025-12-16 01:33:41 |&#xA;| 5          | 2025-12-12 10:11:48 |&#xA;| 6          | 2025-12-15 11:31:40 |&#xA;| 7          | 2025-12-16 03:00:51 |&#xA;| 8          | 2025-12-15 22:33:48 |&#xA;| 9          | 2025-12-15 20:05:34 |&#xA;| 10         | 2025-12-15 20:53:45 |&#xA;...&#xA;...&#xA;| 139        | 2025-12-16 04:46:33 |&#xA;| 140        | 2025-12-16 21:19:30 |&#xA;| 141        | 2025-12-15 09:50:36 |&#xA;| 142        | 2025-12-16 18:39:51 |&#xA;| 143        | 2025-12-15 07:27:06 |&#xA;| 144        | 2025-12-16 16:25:16 |&#xA;| 146        | 2025-12-13 07:07:19 |&#xA;| 148        | 2025-12-16 09:30:11 |&#xA;| 149        | 2025-12-13 16:40:21 |&#xA;| 150        | 2025-12-13 08:24:43 |&#xA;+------------+---------------------+&#xA;```&#xA;&#xA;We got all the timestamps for each product. But we wanted the prices. 
Well, in standard SQL we can&#39;t grab the price here, since we are grouping! (SQLite actually has a documented special case where a bare column in a `MIN()`/`MAX()` aggregate query comes from the matching row, but let&#39;s do it the portable way.)&#xA;&#xA;We can select the timestamp because we wrap it in the `MAX` aggregate function, but among the many rows for a single product, how do we aggregate the price? `MIN`, `MAX`, `AVG`? That&#39;s not what we want; we just want the price at that timestamp.&#xA;&#xA;Well, we need to join back to the same table on that timestamp and grab the price.&#xA;&#xA;```sql&#xA;SELECT &#xA;    price_changes.product_id,&#xA;    price_changes.price AS current_price,&#xA;    price_changes.effective_timestamp AS latest_ts&#xA;FROM price_changes&#xA;JOIN (&#xA;    SELECT &#xA;        product_id, &#xA;        MAX(effective_timestamp) AS latest_timestamp&#xA;    FROM price_changes&#xA;    GROUP BY product_id&#xA;) AS latest_price_change&#xA;  ON price_changes.product_id = latest_price_change.product_id&#xA; AND price_changes.effective_timestamp = latest_price_change.latest_timestamp;&#xA;```&#xA;&#xA;Here, we first specify what we want:&#xA;- `product_id`&#xA;- `current_price`, which is the price at the latest timestamp&#xA;- `latest_timestamp`, which is the latest time recorded for the product price.&#xA;&#xA;We are grouping by `product_id` since there are multiple prices recorded at various timestamps, and we need the latest timestamp.&#xA;So, we do a nested query to get the latest timestamp per product and join on it to pick up the price with that same timestamp.&#xA; &#xA;The condition `price_changes.effective_timestamp = latest_price_change.latest_timestamp` gets us the `price` for the `latest_timestamp`. We first get each timestamp for the product and then find its max; that&#39;s the inner query we join the table back to. 
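
To convince myself this join-back-on-the-MAX-timestamp pattern really picks the right row, here is a small self-contained sketch with a cut-down version of the data (my own toy subset, not the full puzzle dataset), run through Python&#39;s built-in `sqlite3` module:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE price_changes "
    "(product_id INT, price NUMERIC, effective_timestamp TEXT)"
)
con.executemany(
    "INSERT INTO price_changes VALUES (?, ?, ?)",
    [
        (1, 148.28, "2025-12-01 05:25:35"),  # older price for product 1
        (1, 109.25, "2025-12-16 16:01:53"),  # latest price for product 1
        (2, 29.54, "2025-12-03 14:21:10"),   # older price for product 2
        (2, 17.75, "2025-12-16 09:28:13"),   # latest price for product 2
    ],
)

# Same shape as the query above: group to find the max timestamp per
# product, then join back to the same table to pull the price on that row.
rows = con.execute("""
    SELECT pc.product_id, pc.price
    FROM price_changes AS pc
    JOIN (
        SELECT product_id, MAX(effective_timestamp) AS latest_timestamp
        FROM price_changes
        GROUP BY product_id
    ) AS latest ON pc.product_id = latest.product_id
               AND pc.effective_timestamp = latest.latest_timestamp
    ORDER BY pc.product_id
""").fetchall()

print(rows)  # [(1, 109.25), (2, 17.75)] -- the latest price per product
```
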
It&#39;s a self join, with each side looking for a different thing.&#xA;&#xA;This gives us 2 things:&#xA;- Product id&#xA;- Price for the latest timestamp &#xA; &#xA;We don&#39;t really want the timestamp in the final result; it&#39;s just a criterion, an intermediate value used to get the current and the previous prices for each product.&#xA;&#xA;```&#xA;sqlite&gt; &#xA;sqlite&gt; SELECT &#xA;    price_changes.product_id,&#xA;    price_changes.price AS current_price,&#xA;    price_changes.effective_timestamp AS latest_ts&#xA;FROM price_changes&#xA;JOIN (&#xA;    SELECT &#xA;        product_id, &#xA;        MAX(effective_timestamp) AS latest_timestamp&#xA;    FROM price_changes&#xA;    GROUP BY product_id&#xA;) AS latest_price_change&#xA;  ON price_changes.product_id = latest_price_change.product_id&#xA; AND price_changes.effective_timestamp = latest_price_change.latest_timestamp;&#xA;+------------+---------------+---------------------+&#xA;| product_id | current_price |      latest_ts      |&#xA;+------------+---------------+---------------------+&#xA;| 1          | 109.25        | 2025-12-16 16:01:53 |&#xA;| 2          | 17.75         | 2025-12-16 09:28:13 |&#xA;| 3          | 143.65        | 2025-12-15 03:20:11 |&#xA;| 4          | 98.51         | 2025-12-16 01:33:41 |&#xA;| 5          | 124.04        | 2025-12-12 10:11:48 |&#xA;| 6          | 84.14         | 2025-12-15 11:31:40 |&#xA;| 7          | 123.09        | 2025-12-16 03:00:51 |&#xA;| 8          | 221.06        | 2025-12-15 22:33:48 |&#xA;| 9          | 255.88        | 2025-12-15 20:05:34 |&#xA;| 10         | 57.99         | 2025-12-15 20:53:45 |&#xA;...&#xA;...&#xA;| 139        | 16.41         | 2025-12-16 04:46:33 |&#xA;| 140        | 173.05        | 2025-12-16 21:19:30 |&#xA;| 141        | 69.97         | 2025-12-15 09:50:36 |&#xA;| 142        | 35.05         | 2025-12-16 18:39:51 |&#xA;| 143        | 153.94        | 2025-12-15 07:27:06 |&#xA;| 144        | 118.21        | 2025-12-16 16:25:16 |&#xA;| 146        | 54.73         | 
2025-12-13 07:07:19 |&#xA;| 148        | 107.81        | 2025-12-16 09:30:11 |&#xA;| 149        | 72.6          | 2025-12-13 16:40:21 |&#xA;| 150        | 138.66        | 2025-12-13 08:24:43 |&#xA;+------------+---------------+---------------------+&#xA;sqlite&gt; &#xA;&#xA;```&#xA;&#xA;Now we need the price before this max timestamp. How do we get it?&#xA;&#xA;Do we need to join again? Yes...&#xA;&#xA;We need a subquery for the timestamp just before it. But how will we get the max timestamp for each product? Well, that&#39;s what we wrote above.&#xA;&#xA;We can convert that to a CTE.&#xA;&#xA;```sql&#xA;&#xA;WITH latest AS (&#xA;    SELECT price_changes.product_id,&#xA;           price_changes.price AS current_price,&#xA;           price_changes.effective_timestamp AS latest_timestamp&#xA;    FROM price_changes&#xA;    JOIN (                                                   &#xA;        SELECT product_id, MAX(effective_timestamp) AS max_timestamp&#xA;        FROM price_changes&#xA;        GROUP BY product_id&#xA;    ) latest_price_changes                             &#xA;      ON price_changes.product_id = latest_price_changes.product_id&#xA;     AND price_changes.effective_timestamp = latest_price_changes.max_timestamp&#xA;)&#xA;SELECT * FROM latest;&#xA;```&#xA;&#xA;We got the same thing, but now we can use `latest` as a temporary table in the query.&#xA;&#xA;```sql&#xA;WITH latest AS (&#xA;    SELECT price_changes.product_id,&#xA;           price_changes.price AS current_price,&#xA;           price_changes.effective_timestamp AS latest_timestamp&#xA;    FROM price_changes&#xA;    JOIN (                                                   &#xA;        SELECT product_id, MAX(effective_timestamp) AS max_timestamp&#xA;        FROM price_changes&#xA;        GROUP BY product_id&#xA;    ) latest_price_changes&#xA;      ON price_changes.product_id = latest_price_changes.product_id&#xA;     AND price_changes.effective_timestamp = 
latest_price_changes.max_timestamp&#xA;),  previous AS (&#xA;    SELECT price_changes.product_id,&#xA;           price_changes.price AS previous_price,&#xA;           MAX(price_changes.effective_timestamp) AS previous_timestamp&#xA;    FROM price_changes&#xA;    JOIN latest&#xA;      ON price_changes.product_id = latest.product_id&#xA;    WHERE price_changes.effective_timestamp &lt; latest.latest_timestamp&#xA;    GROUP BY price_changes.product_id&#xA;)&#xA;SELECT * FROM previous JOIN latest ON previous.product_id = latest.product_id;&#xA;```&#xA;&#xA;So, ok, this is getting long.&#xA;&#xA;We just added this:&#xA;&#xA;```sql&#xA;SELECT &#xA;    price_changes.product_id,&#xA;    price_changes.price AS previous_price,&#xA;    MAX(price_changes.effective_timestamp) AS previous_timestamp&#xA;FROM price_changes&#xA;JOIN latest&#xA;    ON price_changes.product_id = latest.product_id&#xA;WHERE &#xA;    price_changes.effective_timestamp &lt; latest.latest_timestamp&#xA;GROUP BY price_changes.product_id&#xA;```&#xA;&#xA;On its own this won&#39;t work, since it needs the `latest` table; that&#39;s why we converted it to a CTE.&#xA;&#xA;So, to get the 2nd latest timestamp for each product, we do this.&#xA;&#xA;```sql&#xA;WITH latest AS (&#xA;    SELECT price_changes.product_id,&#xA;           price_changes.price AS current_price,&#xA;           price_changes.effective_timestamp AS latest_timestamp&#xA;    FROM price_changes&#xA;    JOIN (                                                   &#xA;        SELECT product_id, MAX(effective_timestamp) AS max_timestamp&#xA;        FROM price_changes&#xA;        GROUP BY product_id&#xA;    ) latest_price_changes                             &#xA;      ON price_changes.product_id = latest_price_changes.product_id&#xA;     AND price_changes.effective_timestamp = latest_price_changes.max_timestamp&#xA;) &#xA;    SELECT price_changes.product_id,&#xA;           price_changes.price AS previous_price,&#xA;           MAX(price_changes.effective_timestamp) AS 
previous_timestamp&#xA;    FROM price_changes&#xA;    JOIN latest&#xA;      ON price_changes.product_id = latest.product_id&#xA;    WHERE price_changes.effective_timestamp &lt; latest.latest_timestamp&#xA;    GROUP BY price_changes.product_id;&#xA;```&#xA;&#xA;We use:&#xA; &#xA;```sql&#xA; JOIN latest&#xA;      ON price_changes.product_id = latest.product_id&#xA;    WHERE price_changes.effective_timestamp &lt; latest.latest_timestamp&#xA;```&#xA;to match only the timestamps strictly before the latest one, which excludes the latest timestamp; that way we can again take `MAX(price_changes.effective_timestamp) AS previous_timestamp` over the remaining subset of timestamps.&#xA;&#xA;This gives us all the previous timestamps, i.e. the price just before the max timestamp, for each product.&#xA;&#xA;```&#xA;sqlite&gt; WITH latest AS (&#xA;    SELECT price_changes.product_id,&#xA;           price_changes.price AS current_price,&#xA;           price_changes.effective_timestamp AS latest_timestamp&#xA;    FROM price_changes&#xA;    JOIN (                                                   &#xA;        SELECT product_id, MAX(effective_timestamp) AS max_timestamp&#xA;        FROM price_changes&#xA;        GROUP BY product_id&#xA;    ) latest_price_changes                             &#xA;      ON price_changes.product_id = latest_price_changes.product_id&#xA;     AND price_changes.effective_timestamp = latest_price_changes.max_timestamp&#xA;) &#xA;    SELECT price_changes.product_id,&#xA;           price_changes.price AS previous_price,&#xA;           MAX(price_changes.effective_timestamp) AS previous_timestamp&#xA;    FROM price_changes&#xA;    JOIN latest&#xA;      ON price_changes.product_id = latest.product_id&#xA;    WHERE price_changes.effective_timestamp &lt; latest.latest_timestamp&#xA;    GROUP BY price_changes.product_id;&#xA;+------------+----------------+---------------------+&#xA;| product_id | previous_price | previous_timestamp  
|&#xA;+------------+----------------+---------------------+&#xA;| 1          | 106.91         | 2025-12-16 01:51:05 |&#xA;| 2          | 21.38          | 2025-12-15 16:20:20 |&#xA;| 3          | 159.65         | 2025-12-14 09:52:09 |&#xA;| 4          | 105.6          | 2025-12-14 14:09:20 |&#xA;| 5          | 129.23         | 2025-12-12 04:08:45 |&#xA;| 6          | 88.97          | 2025-12-15 02:13:04 |&#xA;| 7          | 127.14         | 2025-12-13 07:25:12 |&#xA;| 8          | 241.99         | 2025-12-14 19:31:40 |&#xA;| 9          | 259.56         | 2025-12-12 12:47:13 |&#xA;| 10         | 64.88          | 2025-12-15 10:52:34 |&#xA;...&#xA;...&#xA;| 140        | 157.02         | 2025-12-09 17:58:07 |&#xA;| 141        | 73.88          | 2025-12-13 14:35:53 |&#xA;| 142        | 30.25          | 2025-12-16 13:44:42 |&#xA;| 143        | 143.04         | 2025-12-11 10:08:13 |&#xA;| 144        | 114.22         | 2025-12-15 17:33:37 |&#xA;| 146        | 65.71          | 2025-12-10 15:50:14 |&#xA;| 148        | 101.09         | 2025-12-15 05:37:27 |&#xA;| 149        | 86.31          | 2025-12-13 13:18:14 |&#xA;| 150        | 123.61         | 2025-12-12 10:49:22 |&#xA;+------------+----------------+---------------------+&#xA;sqlite&gt; &#xA;```&#xA;&#xA;Now, we need to join both of these on the same product id, to fetch both previous and current timestamp as well as the price.&#xA;&#xA;```sql&#xA;SELECT * FROM previous JOIN latest ON previous.product_id = latest.product_id;&#xA;```&#xA;&#xA;Simple&#xA;&#xA;```sql&#xA;WITH latest AS (&#xA;    SELECT price_changes.product_id,&#xA;           price_changes.price AS current_price,&#xA;           price_changes.effective_timestamp AS latest_timestamp&#xA;    FROM price_changes&#xA;    JOIN (                                                   &#xA;        SELECT product_id, MAX(effective_timestamp) AS max_timestamp&#xA;        FROM price_changes&#xA;        GROUP BY product_id&#xA;    ) latest_price_changes                       
      &#xA;      ON price_changes.product_id = latest_price_changes.product_id&#xA;     AND price_changes.effective_timestamp = latest_price_changes.max_timestamp&#xA;),  previous AS (&#xA;    SELECT price_changes.product_id,&#xA;           price_changes.price AS previous_price,&#xA;           MAX(price_changes.effective_timestamp) AS previous_timestamp&#xA;    FROM price_changes&#xA;    JOIN latest&#xA;      ON price_changes.product_id = latest.product_id&#xA;    WHERE price_changes.effective_timestamp &lt; latest.latest_timestamp&#xA;    GROUP BY price_changes.product_id &#xA;)&#xA;SELECT * FROM previous JOIN latest ON previous.product_id = latest.product_id;&#xA;```&#xA;&#xA;```&#xA;sqlite&gt; WITH latest AS (&#xA;    SELECT price_changes.product_id,&#xA;           price_changes.price AS current_price,&#xA;           price_changes.effective_timestamp AS latest_timestamp&#xA;    FROM price_changes&#xA;    JOIN (                                                   &#xA;        SELECT product_id, MAX(effective_timestamp) AS max_timestamp&#xA;        FROM price_changes&#xA;        GROUP BY product_id&#xA;    ) latest_price_changes                             &#xA;      ON price_changes.product_id = latest_price_changes.product_id&#xA;     AND price_changes.effective_timestamp = latest_price_changes.max_timestamp&#xA;),  previous AS (&#xA;    SELECT price_changes.product_id,&#xA;           price_changes.price AS previous_price,&#xA;           MAX(price_changes.effective_timestamp) AS previous_timestamp&#xA;    FROM price_changes&#xA;    JOIN latest&#xA;      ON price_changes.product_id = latest.product_id&#xA;    WHERE price_changes.effective_timestamp &lt; latest.latest_timestamp&#xA;    GROUP BY price_changes.product_id&#xA;)&#xA;SELECT * FROM previous JOIN latest ON previous.product_id = latest.product_id;&#xA;+------------+----------------+---------------------+------------+---------------+---------------------+&#xA;| product_id | previous_price | 
previous_timestamp  | product_id | current_price |  latest_timestamp   |&#xA;+------------+----------------+---------------------+------------+---------------+---------------------+&#xA;| 1          | 106.91         | 2025-12-16 01:51:05 | 1          | 109.25        | 2025-12-16 16:01:53 |&#xA;| 2          | 21.38          | 2025-12-15 16:20:20 | 2          | 17.75         | 2025-12-16 09:28:13 |&#xA;| 3          | 159.65         | 2025-12-14 09:52:09 | 3          | 143.65        | 2025-12-15 03:20:11 |&#xA;| 4          | 105.6          | 2025-12-14 14:09:20 | 4          | 98.51         | 2025-12-16 01:33:41 |&#xA;| 5          | 129.23         | 2025-12-12 04:08:45 | 5          | 124.04        | 2025-12-12 10:11:48 |&#xA;| 6          | 88.97          | 2025-12-15 02:13:04 | 6          | 84.14         | 2025-12-15 11:31:40 |&#xA;| 7          | 127.14         | 2025-12-13 07:25:12 | 7          | 123.09        | 2025-12-16 03:00:51 |&#xA;| 8          | 241.99         | 2025-12-14 19:31:40 | 8          | 221.06        | 2025-12-15 22:33:48 |&#xA;| 9          | 259.56         | 2025-12-12 12:47:13 | 9          | 255.88        | 2025-12-15 20:05:34 |&#xA;| 10         | 64.88          | 2025-12-15 10:52:34 | 10         | 57.99         | 2025-12-15 20:53:45 |&#xA;...&#xA;...&#xA;| 139        | 15.26          | 2025-12-12 03:43:32 | 139        | 16.41         | 2025-12-16 04:46:33 |&#xA;| 140        | 157.02         | 2025-12-09 17:58:07 | 140        | 173.05        | 2025-12-16 21:19:30 |&#xA;| 141        | 73.88          | 2025-12-13 14:35:53 | 141        | 69.97         | 2025-12-15 09:50:36 |&#xA;| 142        | 30.25          | 2025-12-16 13:44:42 | 142        | 35.05         | 2025-12-16 18:39:51 |&#xA;| 143        | 143.04         | 2025-12-11 10:08:13 | 143        | 153.94        | 2025-12-15 07:27:06 |&#xA;| 144        | 114.22         | 2025-12-15 17:33:37 | 144        | 118.21        | 2025-12-16 16:25:16 |&#xA;| 146        | 65.71          | 2025-12-10 15:50:14 | 
146        | 54.73         | 2025-12-13 07:07:19 |&#xA;| 148        | 101.09         | 2025-12-15 05:37:27 | 148        | 107.81        | 2025-12-16 09:30:11 |&#xA;| 149        | 86.31          | 2025-12-13 13:18:14 | 149        | 72.6          | 2025-12-13 16:40:21 |&#xA;| 150        | 123.61         | 2025-12-12 10:49:22 | 150        | 138.66        | 2025-12-13 08:24:43 |&#xA;+------------+----------------+---------------------+------------+---------------+---------------------+&#xA;sqlite&gt; &#xA;&#xA;&#xA;```&#xA;&#xA;Now we are getting somewhere; we just need to find the difference, right?&#xA;&#xA;Yes, but first, more JOINs.&#xA;&#xA;We need the `product_name` from the `products` table. Almost forgot that table exists, right?&#xA;&#xA;```sql&#xA;SELECT &#xA;    *&#xA;FROM &#xA;    products &#xA;JOIN latest &#xA;    ON products.product_id = latest.product_id &#xA;LEFT JOIN previous &#xA;    ON products.product_id = previous.product_id;&#xA;```&#xA;&#xA;We need both CTEs, right? So we start from `products` and join it with both the `latest` and the `previous` CTEs on `product_id`.&#xA;&#xA;&#xA;```sql&#xA;WITH latest AS (&#xA;    SELECT price_changes.product_id,&#xA;           price_changes.price AS current_price,&#xA;           price_changes.effective_timestamp AS latest_timestamp&#xA;    FROM price_changes&#xA;    JOIN (&#xA;        SELECT product_id, MAX(effective_timestamp) AS max_timestamp&#xA;        FROM price_changes&#xA;        GROUP BY product_id&#xA;    ) latest_price_changes&#xA;      ON price_changes.product_id = latest_price_changes.product_id&#xA;     AND price_changes.effective_timestamp = latest_price_changes.max_timestamp&#xA;),  previous AS (&#xA;    SELECT price_changes.product_id,&#xA;           price_changes.price AS previous_price,&#xA;           MAX(price_changes.effective_timestamp) AS previous_timestamp&#xA;    FROM 
price_changes&#xA;    JOIN latest&#xA;      ON price_changes.product_id = latest.product_id&#xA;    WHERE price_changes.effective_timestamp &lt; latest.latest_timestamp&#xA;    GROUP BY price_changes.product_id&#xA;)&#xA;SELECT &#xA;    *&#xA;FROM &#xA;    products &#xA;JOIN latest &#xA;    ON products.product_id = latest.product_id &#xA;LEFT JOIN previous &#xA;    ON products.product_id = previous.product_id;&#xA;```&#xA;&#xA;&#xA;```&#xA;sqlite&gt; WITH latest AS (                                        &#xA;    SELECT price_changes.product_id,&#xA;           price_changes.price AS current_price,&#xA;           price_changes.effective_timestamp AS latest_timestamp&#xA;    FROM price_changes&#xA;    JOIN (                                                   &#xA;        SELECT product_id, MAX(effective_timestamp) AS max_timestamp&#xA;        FROM price_changes&#xA;        GROUP BY product_id&#xA;    ) latest_price_changes                             &#xA;      ON price_changes.product_id = latest_price_changes.product_id&#xA;     AND price_changes.effective_timestamp = latest_price_changes.max_timestamp&#xA;),  previous AS (&#xA;    SELECT price_changes.product_id,&#xA;           price_changes.price AS previous_price,&#xA;           MAX(price_changes.effective_timestamp) AS previous_timestamp&#xA;    FROM price_changes&#xA;    JOIN latest&#xA;      ON price_changes.product_id = latest.product_id&#xA;    WHERE price_changes.effective_timestamp &lt; latest.latest_timestamp&#xA;    GROUP BY price_changes.product_id&#xA;)&#xA;SELECT * FROM products JOIN latest ON products.product_id = latest.product_id &#xA;   ...&gt; LEFT JOIN previous ON products.product_id = previous.product_id;&#xA;+------------+-----------------------------------+------------+---------------+---------------------+------------+----------------+---------------------+&#xA;| product_id |           product_name            | product_id | current_price |  latest_timestamp   | product_id | previous_price | 
previous_timestamp  |&#xA;+------------+-----------------------------------+------------+---------------+---------------------+------------+----------------+---------------------+&#xA;| 1          | Deluxe Sled                       | 1          | 109.25        | 2025-12-16 16:01:53 | 1          | 106.91         | 2025-12-16 01:51:05 |&#xA;| 2          | Holiday Trail Mix Trio            | 2          | 17.75         | 2025-12-16 09:28:13 | 2          | 21.38          | 2025-12-15 16:20:20 |&#xA;| 3          | Premium Cinnamon Roasted Almonds  | 3          | 143.65        | 2025-12-15 03:20:11 | 3          | 159.65         | 2025-12-14 09:52:09 |&#xA;| 4          | Deluxe Wrapping Paper             | 4          | 98.51         | 2025-12-16 01:33:41 | 4          | 105.6          | 2025-12-14 14:09:20 |&#xA;| 5          | Deluxe Roasted Cashews            | 5          | 124.04        | 2025-12-12 10:11:48 | 5          | 129.23         | 2025-12-12 04:08:45 |&#xA;| 6          | Festive Cookware Set              | 6          | 84.14         | 2025-12-15 11:31:40 | 6          | 88.97          | 2025-12-15 02:13:04 |&#xA;| 7          | Deluxe Mug                        | 7          | 123.09        | 2025-12-16 03:00:51 | 7          | 127.14         | 2025-12-13 07:25:12 |&#xA;| 8          | Premium Sled                      | 8          | 221.06        | 2025-12-15 22:33:48 | 8          | 241.99         | 2025-12-14 19:31:40 |&#xA;| 9          | Essential Sled                    | 9          | 255.88        | 2025-12-15 20:05:34 | 9          | 259.56         | 2025-12-12 12:47:13 |&#xA;| 10         | Family Snow Boots                 | 10         | 57.99         | 2025-12-15 20:53:45 | 10         | 64.88          | 2025-12-15 10:52:34 |&#xA;...&#xA;...&#xA;| 140        | Classic Mug                       | 140        | 173.05        | 2025-12-16 21:19:30 | 140        | 157.02         | 2025-12-09 17:58:07 |&#xA;| 141        | Family Fruit Assortment           | 141        
| 69.97         | 2025-12-15 09:50:36 | 141        | 73.88          | 2025-12-13 14:35:53 |&#xA;| 142        | Classic Ornament                  | 142        | 35.05         | 2025-12-16 18:39:51 | 142        | 30.25          | 2025-12-16 13:44:42 |&#xA;| 143        | Essential Ornament                | 143        | 153.94        | 2025-12-15 07:27:06 | 143        | 143.04         | 2025-12-11 10:08:13 |&#xA;| 144        | Premium Trail Mix Trio            | 144        | 118.21        | 2025-12-16 16:25:16 | 144        | 114.22         | 2025-12-15 17:33:37 |&#xA;| 146        | Premium Book Collection           | 146        | 54.73         | 2025-12-13 07:07:19 | 146        | 65.71          | 2025-12-10 15:50:14 |&#xA;| 148        | Cozy Trail Mix Trio               | 148        | 107.81        | 2025-12-16 09:30:11 | 148        | 101.09         | 2025-12-15 05:37:27 |&#xA;| 149        | Family Cheddar Popcorn            | 149        | 72.6          | 2025-12-13 16:40:21 | 149        | 86.31          | 2025-12-13 13:18:14 |&#xA;| 150        | Holiday Headphones                | 150        | 138.66        | 2025-12-13 08:24:43 | 150        | 123.61         | 2025-12-12 10:49:22 |&#xA;+------------+-----------------------------------+------------+---------------+---------------------+------------+----------------+---------------------+&#xA;sqlite&gt; &#xA;```&#xA;&#xA;Now, we just want&#xA;- product_name&#xA;- previous_price&#xA;- current_price&#xA;- difference of current_price and previous price&#xA;&#xA;So,&#xA;&#xA;```sql&#xA;WITH latest AS (                                        &#xA;    SELECT price_changes.product_id,&#xA;           price_changes.price AS current_price,&#xA;           price_changes.effective_timestamp AS latest_timestamp&#xA;    FROM price_changes&#xA;    JOIN (                                                   &#xA;        SELECT product_id, MAX(effective_timestamp) AS max_timestamp&#xA;        FROM price_changes&#xA;        GROUP BY 
product_id&#xA;    ) latest_price_changes                             &#xA;      ON price_changes.product_id = latest_price_changes.product_id&#xA;     AND price_changes.effective_timestamp = latest_price_changes.max_timestamp&#xA;),  previous AS (&#xA;    SELECT price_changes.product_id,&#xA;           price_changes.price AS previous_price,&#xA;           MAX(price_changes.effective_timestamp) AS previous_timestamp&#xA;    FROM price_changes&#xA;    JOIN latest&#xA;      ON price_changes.product_id = latest.product_id&#xA;    WHERE price_changes.effective_timestamp &lt; latest.latest_timestamp&#xA;    GROUP BY price_changes.product_id&#xA;)&#xA;SELECT &#xA;    products.product_name, &#xA;    latest.current_price, &#xA;    previous.previous_price, &#xA;    (latest.current_price - previous.previous_price) as price_difference&#xA;FROM &#xA;    products &#xA;JOIN latest &#xA;    ON products.product_id = latest.product_id &#xA;LEFT JOIN previous &#xA;    ON products.product_id = previous.product_id;&#xA;```&#xA;&#xA;&#xA;```&#xA;sqlite&gt; WITH latest AS (                                        &#xA;    SELECT price_changes.product_id,&#xA;           price_changes.price AS current_price,&#xA;           price_changes.effective_timestamp AS latest_timestamp&#xA;    FROM price_changes&#xA;    JOIN (                                                   &#xA;        SELECT product_id, MAX(effective_timestamp) AS max_timestamp&#xA;        FROM price_changes&#xA;        GROUP BY product_id&#xA;    ) latest_price_changes                             &#xA;      ON price_changes.product_id = latest_price_changes.product_id&#xA;     AND price_changes.effective_timestamp = latest_price_changes.max_timestamp&#xA;),  previous AS (&#xA;    SELECT price_changes.product_id,&#xA;           price_changes.price AS previous_price,&#xA;           MAX(price_changes.effective_timestamp) AS previous_timestamp&#xA;    FROM price_changes&#xA;    JOIN latest&#xA;      ON price_changes.product_id = 
latest.product_id&#xA;    WHERE price_changes.effective_timestamp &lt; latest.latest_timestamp&#xA;    GROUP BY price_changes.product_id&#xA;)&#xA;SELECT &#xA;    products.product_name, latest.current_price, previous.previous_price, (latest.current_price - previous.previous_price) as price_difference&#xA;FROM &#xA;    products &#xA;JOIN latest &#xA;    ON products.product_id = latest.product_id &#xA;LEFT JOIN previous &#xA;    ON products.product_id = previous.product_id;&#xA;&#xA;+-----------------------------------+---------------+----------------+-------------------+&#xA;|           product_name            | current_price | previous_price | price_difference  |&#xA;+-----------------------------------+---------------+----------------+-------------------+&#xA;| Deluxe Sled                       | 109.25        | 106.91         | 2.34              |&#xA;| Holiday Trail Mix Trio            | 17.75         | 21.38          | -3.63             |&#xA;| Premium Cinnamon Roasted Almonds  | 143.65        | 159.65         | -16.0             |&#xA;| Deluxe Wrapping Paper             | 98.51         | 105.6          | -7.08999999999999 |&#xA;| Deluxe Roasted Cashews            | 124.04        | 129.23         | -5.18999999999998 |&#xA;| Festive Cookware Set              | 84.14         | 88.97          | -4.83             |&#xA;| Deluxe Mug                        | 123.09        | 127.14         | -4.05             |&#xA;| Premium Sled                      | 221.06        | 241.99         | -20.93            |&#xA;| Essential Sled                    | 255.88        | 259.56         | -3.68000000000001 |&#xA;...&#xA;...&#xA;| Classic Ornament                  | 35.05         | 30.25          | 4.8               |&#xA;| Essential Ornament                | 153.94        | 143.04         | 10.9              |&#xA;| Premium Trail Mix Trio            | 118.21        | 114.22         | 3.98999999999999  |&#xA;| Premium Book Collection           | 54.73         | 65.71          | 
-10.98            |&#xA;| Cozy Trail Mix Trio               | 107.81        | 101.09         | 6.72              |&#xA;| Family Cheddar Popcorn            | 72.6          | 86.31          | -13.71            |&#xA;| Holiday Headphones                | 138.66        | 123.61         | 15.05             |&#xA;+-----------------------------------+---------------+----------------+-------------------+&#xA;sqlite&gt; &#xA;```&#xA;&#xA;&#xA;Oh, OK! That is it!&#xA;&#xA;CTEs and some nasty number of JOINs.&#xA;&#xA;Can we do it in some other ways?&#xA;&#xA;Of course we can, and we will.&#xA;&#xA;&#xA;### ROW NUMBER - Window Function&#xA;&#xA;Since we are doing everything per product, we can also use window functions; [ROW_NUMBER](https://sqlite.org/windowfunctions.html#:~:text=in%20window%20functions%3A-,row_number(),-The%20number%20of) fits our case nicely.&#xA;&#xA;&gt; ROW_NUMBER: The number of the row within the current partition. Rows are numbered starting from 1 in the order defined by the ORDER BY clause in the window definition, or in arbitrary order otherwise.&#xA;&#xA;So, we can partition by `product_id` in the `price_changes` table and order by the timestamp (latest first i.e. 
descending) and grab the first two prices.&#xA;&#xA;```sql&#xA;WITH ranked_prices AS (&#xA;    SELECT price_changes.*,&#xA;           ROW_NUMBER() OVER (&#xA;               PARTITION BY price_changes.product_id&#xA;               ORDER BY price_changes.effective_timestamp DESC&#xA;           ) AS row_number&#xA;    FROM price_changes&#xA;)&#xA;SELECT&#xA;    *&#xA;FROM ranked_prices;&#xA;```&#xA;&#xA;Here, we just partitioned the price table based on the latest to oldest timestamp (descending) and added the other column `row_number` that will be further used to get the `previous` and the `current` price and time.&#xA;&#xA;```&#xA;sqlite&gt; WITH ranked_prices AS (&#xA;    SELECT price_changes.*,&#xA;           ROW_NUMBER() OVER (&#xA;               PARTITION BY price_changes.product_id&#xA;               ORDER BY price_changes.effective_timestamp DESC&#xA;           ) AS row_number&#xA;    FROM price_changes&#xA;)&#xA;SELECT&#xA;    *&#xA;FROM ranked_prices;&#xA;+------+------------+--------+---------------------+------------+&#xA;|  id  | product_id | price  | effective_timestamp | row_number |&#xA;+------+------------+--------+---------------------+------------+&#xA;| 15   | 1          | 109.25 | 2025-12-16 16:01:53 | 1          |&#xA;| 14   | 1          | 106.91 | 2025-12-16 01:51:05 | 2          |&#xA;| 13   | 1          | 100.18 | 2025-12-14 15:58:03 | 3          |&#xA;| 12   | 1          | 99.03  | 2025-12-12 02:58:14 | 4          |&#xA;| 11   | 1          | 88.2   | 2025-12-12 02:21:09 | 5          |&#xA;| 10   | 1          | 73.9   | 2025-12-10 14:33:43 | 6          |&#xA;| 9    | 1          | 80.24  | 2025-12-08 16:11:11 | 7          |&#xA;| 8    | 1          | 78.88  | 2025-12-08 06:45:39 | 8          |&#xA;| 7    | 1          | 80.88  | 2025-12-07 10:43:54 | 9          |&#xA;| 6    | 1          | 88.49  | 2025-12-06 19:02:40 | 10         |&#xA;| 5    | 1          | 98.57  | 2025-12-04 04:14:31 | 11         |&#xA;| 4    | 1          | 119.12 | 2025-12-03 
10:14:35 | 12         |&#xA;| 3    | 1          | 126.78 | 2025-12-02 18:40:38 | 13         |&#xA;| 2    | 1          | 148.63 | 2025-12-02 18:15:33 | 14         |&#xA;| 1    | 1          | 148.28 | 2025-12-01 05:25:35 | 15         |&#xA;...&#xA;...&#xA;| 1260 | 149        | 167.1  | 2025-12-01 06:36:43 | 15         |&#xA;| 1288 | 150        | 138.66 | 2025-12-13 08:24:43 | 1          |&#xA;| 1287 | 150        | 123.61 | 2025-12-12 10:49:22 | 2          |&#xA;| 1286 | 150        | 141.8  | 2025-12-10 07:06:06 | 3          |&#xA;| 1285 | 150        | 122.16 | 2025-12-09 20:01:54 | 4          |&#xA;| 1284 | 150        | 122.06 | 2025-12-06 22:27:41 | 5          |&#xA;| 1283 | 150        | 128.6  | 2025-12-06 13:03:18 | 6          |&#xA;| 1282 | 150        | 154.72 | 2025-12-05 08:15:08 | 7          |&#xA;| 1281 | 150        | 170.3  | 2025-12-04 15:25:50 | 8          |&#xA;| 1280 | 150        | 156.51 | 2025-12-03 19:11:12 | 9          |&#xA;| 1279 | 150        | 161.93 | 2025-12-02 02:47:10 | 10         |&#xA;| 1278 | 150        | 174.36 | 2025-12-02 01:39:14 | 11         |&#xA;| 1277 | 150        | 180.17 | 2025-12-01 20:36:02 | 12         |&#xA;| 1276 | 150        | 164.35 | 2025-12-01 07:09:29 | 13         |&#xA;| 1275 | 150        | 141    | 2025-12-01 01:29:46 | 14         |&#xA;+------+------------+--------+---------------------+------------+&#xA;&#xA;```&#xA;&#xA;&#xA;&#xA;```sql&#xA;WITH ranked_prices AS (&#xA;    SELECT price_changes.*,&#xA;           ROW_NUMBER() OVER (&#xA;               PARTITION BY price_changes.product_id&#xA;               ORDER BY price_changes.effective_timestamp DESC&#xA;           ) AS row_number&#xA;    FROM price_changes&#xA;)&#xA;SELECT&#xA;    products.product_name,&#xA;    current_price.price AS current_price,&#xA;    previous_price.price AS previous_price,&#xA;    current_price.price - previous_price.price AS price_difference&#xA;FROM products&#xA;LEFT JOIN ranked_prices AS current_price&#xA;       ON products.product_id = 
current_price.product_id&#xA;      AND current_price.row_number = 1&#xA;LEFT JOIN ranked_prices AS previous_price&#xA;       ON products.product_id = previous_price.product_id&#xA;      AND previous_price.row_number = 2;&#xA;```&#xA;This is pretty simple; we just get:&#xA;- the `current_price` as `row_number = 1` from the `ranked_prices` CTE&#xA;- the `previous_price` as `row_number = 2` from the `ranked_prices` CTE&#xA;&#xA;That gives us everything we needed:&#xA;- `products.product_name`&#xA;- `current_price.price AS current_price`&#xA;- `previous_price.price AS previous_price`&#xA;- `current_price.price - previous_price.price AS price_difference`&#xA;&#xA;So, this is how `ROW_NUMBER` can be used here.&#xA;&#xA;&#xA;### With LEAD LAG - Window Functions&#xA;&#xA;We can also use the [LEAD](https://sqlite.org/windowfunctions.html#:~:text=does%20not%20exist.-,lead(expr),-lead(expr%2C%20offset) and [LAG](https://sqlite.org/windowfunctions.html#:~:text=a%20part%20of.-,lag(expr),-lag(expr%2C%20offset) window functions here. `LAG` is the specific fit, since the challenge is to fetch the price at the second-latest timestamp.&#xA;&#xA;We will partition the `price_changes` table on `product_id` and order it by `effective_timestamp`. `LAG` then gives us the value from the row just before the current one, so the first row for each product is empty: sorted in ascending timestamp order, there is no row before the first one.&#xA;&#xA;```sql&#xA; WITH lagged_prices AS (&#xA;    SELECT&#xA;        product_id,&#xA;        price AS current_price,&#xA;        LAG(price) OVER (&#xA;            PARTITION BY product_id&#xA;            ORDER BY effective_timestamp&#xA;        ) AS previous_price,&#xA;        effective_timestamp&#xA;    FROM price_changes&#xA;)&#xA;SELECT * FROM lagged_prices;&#xA;```&#xA;&#xA;Here, we simply selected the data we need: `product_id`, `price` as `current_price`, and the lagged `price` as `previous_price`, ordered by timestamp within each product. 
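&#xA;As an aside, it&#39;s easy to sanity-check how `LAG` treats the first row of each partition with a tiny throwaway table. Here&#39;s a minimal sketch using Python&#39;s built-in `sqlite3` module (the table contents are made up for illustration, not the challenge data):

```python
import sqlite3

# In-memory database with a toy version of the price_changes table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE price_changes (product_id INT, price REAL, effective_timestamp TEXT);
INSERT INTO price_changes VALUES
  (1, 100.0, '2025-12-01 00:00:00'),
  (1, 110.0, '2025-12-02 00:00:00'),
  (1, 120.0, '2025-12-03 00:00:00');
""")

rows = conn.execute("""
SELECT product_id,
       price AS current_price,
       LAG(price) OVER (
           PARTITION BY product_id
           ORDER BY effective_timestamp
       ) AS previous_price
FROM price_changes
ORDER BY effective_timestamp
""").fetchall()

# The first row has no predecessor, so LAG yields NULL (None in Python).
print(rows)  # [(1, 100.0, None), (1, 110.0, 100.0), (1, 120.0, 110.0)]
```

That NULL on the first row is exactly the empty `previous_price` at the top of each product&#39;s block in the output below.&#xA;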
&#xA;&#xA;```&#xA;sqlite&gt; WITH lagged_prices AS (&#xA;    SELECT&#xA;        product_id,&#xA;        price AS current_price,&#xA;        LAG(price) OVER (&#xA;            PARTITION BY product_id&#xA;            ORDER BY effective_timestamp&#xA;        ) AS previous_price,&#xA;        effective_timestamp&#xA;    FROM price_changes&#xA;)&#xA;SELECT * FROM lagged_prices;&#xA;+------------+---------------+----------------+---------------------+&#xA;| product_id | current_price | previous_price | effective_timestamp |&#xA;+------------+---------------+----------------+---------------------+&#xA;| 1          | 148.28        |                | 2025-12-01 05:25:35 |&#xA;| 1          | 148.63        | 148.28         | 2025-12-02 18:15:33 |&#xA;| 1          | 126.78        | 148.63         | 2025-12-02 18:40:38 |&#xA;| 1          | 119.12        | 126.78         | 2025-12-03 10:14:35 |&#xA;| 1          | 98.57         | 119.12         | 2025-12-04 04:14:31 |&#xA;| 1          | 88.49         | 98.57          | 2025-12-06 19:02:40 |&#xA;| 1          | 80.88         | 88.49          | 2025-12-07 10:43:54 |&#xA;| 1          | 78.88         | 80.88          | 2025-12-08 06:45:39 |&#xA;| 1          | 80.24         | 78.88          | 2025-12-08 16:11:11 |&#xA;| 1          | 73.9          | 80.24          | 2025-12-10 14:33:43 |&#xA;| 1          | 88.2          | 73.9           | 2025-12-12 02:21:09 |&#xA;| 1          | 99.03         | 88.2           | 2025-12-12 02:58:14 |&#xA;| 1          | 100.18        | 99.03          | 2025-12-14 15:58:03 |&#xA;| 1          | 106.91        | 100.18         | 2025-12-16 01:51:05 |&#xA;| 1          | 109.25        | 106.91         | 2025-12-16 16:01:53 |&#xA;| 2          | 29.54         |                | 2025-12-03 14:21:10 |&#xA;| 2          | 34.33         | 29.54          | 2025-12-03 19:14:31 |&#xA;| 2          | 39.08         | 34.33          | 2025-12-04 06:13:48 |&#xA;| 2          | 32.71         | 39.08          | 2025-12-04 18:33:17 
|&#xA;| 2          | 31.71         | 32.71          | 2025-12-05 22:36:14 |&#xA;| 2          | 28.88         | 31.71          | 2025-12-06 02:42:02 |&#xA;...&#xA;...&#xA;| 149        | 101.03        | 98.4           | 2025-12-10 00:37:46 |&#xA;| 149        | 95.68         | 101.03         | 2025-12-13 01:10:53 |&#xA;| 149        | 86.31         | 95.68          | 2025-12-13 13:18:14 |&#xA;| 149        | 72.6          | 86.31          | 2025-12-13 16:40:21 |&#xA;| 150        | 141           |                | 2025-12-01 01:29:46 |&#xA;| 150        | 164.35        | 141            | 2025-12-01 07:09:29 |&#xA;| 150        | 180.17        | 164.35         | 2025-12-01 20:36:02 |&#xA;| 150        | 174.36        | 180.17         | 2025-12-02 01:39:14 |&#xA;| 150        | 161.93        | 174.36         | 2025-12-02 02:47:10 |&#xA;| 150        | 156.51        | 161.93         | 2025-12-03 19:11:12 |&#xA;| 150        | 170.3         | 156.51         | 2025-12-04 15:25:50 |&#xA;| 150        | 154.72        | 170.3          | 2025-12-05 08:15:08 |&#xA;| 150        | 128.6         | 154.72         | 2025-12-06 13:03:18 |&#xA;| 150        | 122.06        | 128.6          | 2025-12-06 22:27:41 |&#xA;| 150        | 122.16        | 122.06         | 2025-12-09 20:01:54 |&#xA;| 150        | 141.8         | 122.16         | 2025-12-10 07:06:06 |&#xA;| 150        | 123.61        | 141.8          | 2025-12-12 10:49:22 |&#xA;| 150        | 138.66        | 123.61         | 2025-12-13 08:24:43 |&#xA;+------------+---------------+----------------+---------------------+&#xA;&#xA;```&#xA;&#xA;&#xA;Then we can join. 
The filter condition does the important work:&#xA;&#xA;- we keep only the row at the maximum `effective_timestamp` for each product, because that latest row already carries the second-latest price as its lagged `previous_price`, alongside the `current_price`, in the `lagged_prices` CTE.&#xA;&#xA;&#xA;```sql&#xA;SELECT&#xA;    products.product_name,&#xA;    lagged_prices.current_price,&#xA;    lagged_prices.previous_price,&#xA;    lagged_prices.current_price - lagged_prices.previous_price AS price_difference&#xA;FROM products&#xA;JOIN lagged_prices&#xA;  ON products.product_id = lagged_prices.product_id&#xA;WHERE lagged_prices.effective_timestamp = (&#xA;    SELECT MAX(effective_timestamp)&#xA;    FROM price_changes&#xA;    WHERE product_id = products.product_id&#xA;);&#xA;&#xA;```&#xA;&#xA;&#xA;```sql&#xA;WITH lagged_prices AS (&#xA;    SELECT&#xA;        product_id,&#xA;        price AS current_price,&#xA;        LAG(price) OVER (&#xA;            PARTITION BY product_id&#xA;            ORDER BY effective_timestamp&#xA;        ) AS previous_price,&#xA;        effective_timestamp&#xA;    FROM price_changes&#xA;)&#xA;SELECT&#xA;    products.product_name,&#xA;    lagged_prices.current_price,&#xA;    lagged_prices.previous_price,&#xA;    lagged_prices.current_price - lagged_prices.previous_price AS price_difference&#xA;FROM products&#xA;JOIN lagged_prices&#xA;  ON products.product_id = lagged_prices.product_id&#xA;WHERE lagged_prices.effective_timestamp = (&#xA;    SELECT MAX(effective_timestamp)&#xA;    FROM price_changes&#xA;    WHERE product_id = products.product_id&#xA;);&#xA;&#xA;```&#xA;This yields the same result.&#xA;&#xA;These are all big queries; are there any smaller ones?&#xA;&#xA;Not quite small, but shorter.&#xA;&#xA;### LIMIT OFFSET and Subqueries&#xA;&#xA;Well, here&#39;s one that&#39;s a little shorter, but quite dirty.&#xA;```sql&#xA;SELECT *,&#xA;       current_price - previous_price AS price_difference&#xA;FROM (&#xA;    SELECT&#xA;        products.product_name,&#xA;        (SELECT price&#xA;         FROM price_changes&#xA;         WHERE 
price_changes.product_id = products.product_id&#xA;         ORDER BY effective_timestamp DESC&#xA;         LIMIT 1) AS current_price,&#xA;        (SELECT price&#xA;         FROM price_changes&#xA;         WHERE price_changes.product_id = products.product_id&#xA;         ORDER BY effective_timestamp DESC&#xA;         LIMIT 1 OFFSET 1) AS previous_price&#xA;    FROM products&#xA;);&#xA;```&#xA;&#xA;This basically grabs the price at the latest timestamp with `LIMIT 1`, grabs the price at the second-latest timestamp with `LIMIT 1 OFFSET 1`, and wraps both in a derived-table subquery to compute the difference.&#xA;&#xA;Pretty slick if you ask me.&#xA;&#xA;But hey! I am done with this one! &#xA;&#xA;It was a great problem.&#xA;&#xA;Fetching the record just behind another record is quite a relatable problem, and challenging enough to make you explore window functions and whatnot.&#xA;&#xA;That&#39;s it from day 8.&#xA;&#xA;See you tomorrow for day 9!&#xA;&#xA;&#xA;&#xA;&#xA;</content>
      <type>sqlog</type>
    </item>
    <item>
      <title>Advent of SQL 2025 Day 7: Polar Express Mixin</title>
      <link>https://meetgor.com/sqlog/advent-of-sql-2025-day-7</link>
      <description>Advent of SQL, Day 7 - Polar Express There were a few things, I had to dig up for converting the JSON in the statements into strings for SQLite, we can&#39;t really</description>
      <pubDate>Mon, 22 Dec 2025 00:00:00 UTC</pubDate>
      <content>&#xA;&#xA;## Advent of SQL, Day 7 - Polar Express&#xA;&#xA;There were a few things I had to dig up for converting the `ARRAY[]` literals in the statements into JSON strings for SQLite; we can&#39;t really use a list of strings in SQLite.&#xA;&#xA;Here&#39;s the `sed` command to convert each array of strings into a single JSON string.&#xA;&#xA;```&#xA;sed &#34;s/ARRAY\[&#39;/&#39;\[\&#34;/g; s/&#39;,&#39;/\&#34;,\&#34;/g; s/&#39;]/\&#34;]&#39;/g&#34; day7-inserts.sql &gt; day7-inserts-sqlite.sql&#xA;```&#xA;&#xA;OK, once that&#39;s done, the file can be safely loaded into a SQLite database.&#xA;&#xA;```&#xA;$ sqlite3&#xA;SQLite version 3.45.1 2024-01-30 16:01:20&#xA;Enter &#34;.help&#34; for usage hints.&#xA;Connected to a transient in-memory database.&#xA;Use &#34;.open FILENAME&#34; to reopen on a persistent database.&#xA;sqlite&gt; .read day7-inserts-sqlite.sql&#xA;sqlite&gt; .schema&#xA;CREATE TABLE passengers (&#xA;    passenger_id INT PRIMARY KEY,&#xA;    passenger_name TEXT,&#xA;    favorite_mixins TEXT[],&#xA;    car_id INT&#xA;);&#xA;CREATE TABLE cocoa_cars (&#xA;    car_id INT PRIMARY KEY,&#xA;    available_mixins TEXT[],&#xA;    total_stock INT&#xA;);&#xA;sqlite&gt; &#xA;sqlite&gt; .mode table&#xA;sqlite&gt; select * from passengers limit 20;&#xA;+--------------+----------------+--------------------------------------------------------------+--------+&#xA;| passenger_id | passenger_name |                       favorite_mixins                        | car_id |&#xA;+--------------+----------------+--------------------------------------------------------------+--------+&#xA;| 1            | Ava Johnson    | [&#34;vanilla foam&#34;]                                             | 2      |&#xA;+--------------+----------------+--------------------------------------------------------------+--------+&#xA;| 2            | Mateo Cruz     | [&#34;caramel drizzle&#34;,&#34;shaved chocolate&#34;,&#34;white chocolate&#34;]     | 2      
|&#xA;+--------------+----------------+--------------------------------------------------------------+--------+&#xA;| 3            | Nia Grant      | [&#34;shaved chocolate&#34;]                                         | 5      |&#xA;+--------------+----------------+--------------------------------------------------------------+--------+&#xA;| 4            | Hiro Tanaka    | [&#34;shaved chocolate&#34;]                                         | 2      |&#xA;+--------------+----------------+--------------------------------------------------------------+--------+&#xA;| 5            | Layla Brooks   | [&#34;crispy rice&#34;,&#34;dark chocolate&#34;,&#34;caramel drizzle&#34;,&#34;cinnamon&#34; | 3      |&#xA;|              |                | ]                                                            |        |&#xA;+--------------+----------------+--------------------------------------------------------------+--------+&#xA;| 6            | Ravi Patel     | [&#34;caramel drizzle&#34;,&#34;shaved chocolate&#34;,&#34;white chocolate&#34;]     | 5      |&#xA;+--------------+----------------+--------------------------------------------------------------+--------+&#xA;| 7            | Sofia Kim      | [&#34;cinnamon&#34;]                                                 | 9      |&#xA;+--------------+----------------+--------------------------------------------------------------+--------+&#xA;| 8            | Jonah Wolfe    | [&#34;cinnamon&#34;,&#34;dark chocolate&#34;]                                | 7      |&#xA;+--------------+----------------+--------------------------------------------------------------+--------+&#xA;| 9            | Elena Morales  | [&#34;white chocolate&#34;,&#34;shaved chocolate&#34;,&#34;caramel drizzle&#34;]     | 6      |&#xA;+--------------+----------------+--------------------------------------------------------------+--------+&#xA;| 10           | Diego Ramos    | [&#34;shaved chocolate&#34;]                                         | 1      
|&#xA;+--------------+----------------+--------------------------------------------------------------+--------+&#xA;| 11           | Zara Sheikh    | [&#34;vanilla foam&#34;,&#34;crispy rice&#34;,&#34;peppermint&#34;]                  | 4      |&#xA;+--------------+----------------+--------------------------------------------------------------+--------+&#xA;| 12           | Caleb Osei     | [&#34;shaved chocolate&#34;,&#34;dark chocolate&#34;,&#34;white chocolate&#34;]      | 8      |&#xA;+--------------+----------------+--------------------------------------------------------------+--------+&#xA;| 13           | Mila Novak     | [&#34;crispy rice&#34;,&#34;cinnamon&#34;]                                   | 4      |&#xA;+--------------+----------------+--------------------------------------------------------------+--------+&#xA;| 14           | Lucas Ford     | [&#34;vanilla foam&#34;,&#34;white chocolate&#34;,&#34;cinnamon&#34;]                | 4      |&#xA;+--------------+----------------+--------------------------------------------------------------+--------+&#xA;| 15           | Yara Haddad    | [&#34;white chocolate&#34;,&#34;dark chocolate&#34;]                         | 2      |&#xA;+--------------+----------------+--------------------------------------------------------------+--------+&#xA;| 16           | Omar Qureshi   | [&#34;marshmallow&#34;]                                              | 3      |&#xA;+--------------+----------------+--------------------------------------------------------------+--------+&#xA;| 17           | Keiko Ito      | [&#34;vanilla foam&#34;,&#34;marshmallow&#34;]                               | 7      |&#xA;+--------------+----------------+--------------------------------------------------------------+--------+&#xA;| 18           | Tariq Hassan   | [&#34;dark chocolate&#34;,&#34;crispy rice&#34;,&#34;white chocolate&#34;,&#34;peppermin | 2      |&#xA;|              |                | t&#34;]                                     
                     |        |&#xA;+--------------+----------------+--------------------------------------------------------------+--------+&#xA;| 19           | Mira Zhao      | [&#34;caramel drizzle&#34;,&#34;marshmallow&#34;,&#34;cinnamon&#34;]                 | 7      |&#xA;+--------------+----------------+--------------------------------------------------------------+--------+&#xA;| 20           | Bianca Pereira | [&#34;dark chocolate&#34;,&#34;peppermint&#34;]                              | 5      |&#xA;+--------------+----------------+--------------------------------------------------------------+--------+&#xA;sqlite&gt; &#xA;sqlite&gt; select * from cocoa_cars;&#xA;+--------+--------------------------------------------------------------+-------------+&#xA;| car_id |                       available_mixins                       | total_stock |&#xA;+--------+--------------------------------------------------------------+-------------+&#xA;| 5      | [&#34;white chocolate&#34;,&#34;shaved chocolate&#34;]                       | 412         |&#xA;+--------+--------------------------------------------------------------+-------------+&#xA;| 2      | [&#34;cinnamon&#34;,&#34;marshmallow&#34;,&#34;caramel drizzle&#34;]                 | 359         |&#xA;+--------+--------------------------------------------------------------+-------------+&#xA;| 9      | [&#34;crispy rice&#34;,&#34;peppermint&#34;,&#34;caramel drizzle&#34;,&#34;shaved chocol | 354         |&#xA;|        | ate&#34;]                                                        |             |&#xA;+--------+--------------------------------------------------------------+-------------+&#xA;| 4      | [&#34;shaved chocolate&#34;,&#34;white chocolate&#34;]                       | 338         |&#xA;+--------+--------------------------------------------------------------+-------------+&#xA;| 8      | [&#34;vanilla foam&#34;,&#34;marshmallow&#34;]                               | 263         
|&#xA;+--------+--------------------------------------------------------------+-------------+&#xA;| 1      | [&#34;peppermint&#34;,&#34;crispy rice&#34;]                                 | 205         |&#xA;+--------+--------------------------------------------------------------+-------------+&#xA;| 6      | [&#34;shaved chocolate&#34;,&#34;dark chocolate&#34;,&#34;crispy rice&#34;,&#34;cinnamon | 161         |&#xA;|        | &#34;,&#34;peppermint&#34;]                                              |             |&#xA;+--------+--------------------------------------------------------------+-------------+&#xA;| 7      | [&#34;caramel drizzle&#34;,&#34;crispy rice&#34;,&#34;marshmallow&#34;,&#34;vanilla foam | 132         |&#xA;|        | &#34;,&#34;cinnamon&#34;]                                                |             |&#xA;+--------+--------------------------------------------------------------+-------------+&#xA;| 3      | [&#34;vanilla foam&#34;,&#34;peppermint&#34;]                                | 95          |&#xA;+--------+--------------------------------------------------------------+-------------+&#xA;sqlite&gt; &#xA;```&#xA;&#xA;Here&#39;s your full SQL file:&#xA;&#xA;```sql&#xA;DROP TABLE IF EXISTS passengers;&#xA;DROP TABLE IF EXISTS cocoa_cars;&#xA;&#xA;CREATE TABLE passengers (&#xA;    passenger_id INT PRIMARY KEY,&#xA;    passenger_name TEXT,&#xA;    favorite_mixins TEXT[],&#xA;    car_id INT&#xA;);&#xA;&#xA;CREATE TABLE cocoa_cars (&#xA;    car_id INT PRIMARY KEY,&#xA;    available_mixins TEXT[],&#xA;    total_stock INT&#xA;);&#xA;&#xA;INSERT INTO passengers (passenger_id, passenger_name, favorite_mixins, car_id) VALUES&#xA;    (1, &#39;Ava Johnson&#39;, &#39;[&#34;vanilla foam&#34;]&#39;, 2),&#xA;    (2, &#39;Mateo Cruz&#39;, &#39;[&#34;caramel drizzle&#34;,&#34;shaved chocolate&#34;,&#34;white chocolate&#34;]&#39;, 2),&#xA;    (3, &#39;Nia Grant&#39;, &#39;[&#34;shaved chocolate&#34;]&#39;, 5),&#xA;    (4, &#39;Hiro Tanaka&#39;, 
&#39;[&#34;shaved chocolate&#34;]&#39;, 2),&#xA;    (5, &#39;Layla Brooks&#39;, &#39;[&#34;crispy rice&#34;,&#34;dark chocolate&#34;,&#34;caramel drizzle&#34;,&#34;cinnamon&#34;]&#39;, 3),&#xA;    (6, &#39;Ravi Patel&#39;, &#39;[&#34;caramel drizzle&#34;,&#34;shaved chocolate&#34;,&#34;white chocolate&#34;]&#39;, 5),&#xA;    (7, &#39;Sofia Kim&#39;, &#39;[&#34;cinnamon&#34;]&#39;, 9),&#xA;    (8, &#39;Jonah Wolfe&#39;, &#39;[&#34;cinnamon&#34;,&#34;dark chocolate&#34;]&#39;, 7),&#xA;    (9, &#39;Elena Morales&#39;, &#39;[&#34;white chocolate&#34;,&#34;shaved chocolate&#34;,&#34;caramel drizzle&#34;]&#39;, 6),&#xA;    (10, &#39;Diego Ramos&#39;, &#39;[&#34;shaved chocolate&#34;]&#39;, 1),&#xA;    (11, &#39;Zara Sheikh&#39;, &#39;[&#34;vanilla foam&#34;,&#34;crispy rice&#34;,&#34;peppermint&#34;]&#39;, 4),&#xA;    (12, &#39;Caleb Osei&#39;, &#39;[&#34;shaved chocolate&#34;,&#34;dark chocolate&#34;,&#34;white chocolate&#34;]&#39;, 8),&#xA;    (13, &#39;Mila Novak&#39;, &#39;[&#34;crispy rice&#34;,&#34;cinnamon&#34;]&#39;, 4),&#xA;    (14, &#39;Lucas Ford&#39;, &#39;[&#34;vanilla foam&#34;,&#34;white chocolate&#34;,&#34;cinnamon&#34;]&#39;, 4),&#xA;    (15, &#39;Yara Haddad&#39;, &#39;[&#34;white chocolate&#34;,&#34;dark chocolate&#34;]&#39;, 2),&#xA;    (16, &#39;Omar Qureshi&#39;, &#39;[&#34;marshmallow&#34;]&#39;, 3),&#xA;    (17, &#39;Keiko Ito&#39;, &#39;[&#34;vanilla foam&#34;,&#34;marshmallow&#34;]&#39;, 7),&#xA;    (18, &#39;Tariq Hassan&#39;, &#39;[&#34;dark chocolate&#34;,&#34;crispy rice&#34;,&#34;white chocolate&#34;,&#34;peppermint&#34;]&#39;, 2),&#xA;    (19, &#39;Mira Zhao&#39;, &#39;[&#34;caramel drizzle&#34;,&#34;marshmallow&#34;,&#34;cinnamon&#34;]&#39;, 7),&#xA;    (20, &#39;Bianca Pereira&#39;, &#39;[&#34;dark chocolate&#34;,&#34;peppermint&#34;]&#39;, 5),&#xA;    (21, &#39;Eva Schmidt&#39;, &#39;[&#34;white chocolate&#34;,&#34;marshmallow&#34;]&#39;, 5),&#xA;    (22, &#39;Rafael Silva&#39;, &#39;[&#34;cinnamon&#34;,&#34;caramel drizzle&#34;]&#39;, 
3),&#xA;    (23, &#39;Nolan Murphy&#39;, &#39;[&#34;caramel drizzle&#34;]&#39;, 8),&#xA;    (24, &#39;Sara Johansson&#39;, &#39;[&#34;marshmallow&#34;]&#39;, 6),&#xA;    (25, &#39;Ingrid Nilsen&#39;, &#39;[&#34;shaved chocolate&#34;,&#34;peppermint&#34;,&#34;marshmallow&#34;]&#39;, 2),&#xA;    (26, &#39;Arjun Kapoor&#39;, &#39;[&#34;dark chocolate&#34;]&#39;, 2),&#xA;    (27, &#39;Nova Adams&#39;, &#39;[&#34;cinnamon&#34;,&#34;dark chocolate&#34;]&#39;, 9),&#xA;    (28, &#39;Felix Schneider&#39;, &#39;[&#34;crispy rice&#34;,&#34;vanilla foam&#34;]&#39;, 4),&#xA;    (29, &#39;Tim Cook&#39;, &#39;[&#34;crispy rice&#34;,&#34;vanilla foam&#34;]&#39;, 6),&#xA;    (30, &#39;Sophia Rossi&#39;, &#39;[&#34;white chocolate&#34;,&#34;dark chocolate&#34;,&#34;peppermint&#34;,&#34;marshmallow&#34;]&#39;, 4),&#xA;    (31, &#39;Liam OConnor&#39;, &#39;[&#34;caramel drizzle&#34;]&#39;, 1),&#xA;    (32, &#39;Olivia Dubois&#39;, &#39;[&#34;shaved chocolate&#34;,&#34;peppermint&#34;]&#39;, 2),&#xA;    (33, &#39;Emma Svensson&#39;, &#39;[&#34;shaved chocolate&#34;]&#39;, 2),&#xA;    (34, &#39;Noah Fischer&#39;, &#39;[&#34;caramel drizzle&#34;]&#39;, 2),&#xA;    (35, &#39;William Becker&#39;, &#39;[&#34;crispy rice&#34;,&#34;dark chocolate&#34;]&#39;, 4),&#xA;    (36, &#39;Isabella Laurent&#39;, &#39;[&#34;dark chocolate&#34;,&#34;shaved chocolate&#34;]&#39;, 8),&#xA;    (37, &#39;James Kim&#39;, &#39;[&#34;shaved chocolate&#34;,&#34;marshmallow&#34;,&#34;peppermint&#34;,&#34;caramel drizzle&#34;]&#39;, 7),&#xA;    (38, &#39;Mia Chen&#39;, &#39;[&#34;vanilla foam&#34;,&#34;white chocolate&#34;,&#34;shaved chocolate&#34;]&#39;, 1),&#xA;    (39, &#39;Benjamin Patel&#39;, &#39;[&#34;peppermint&#34;]&#39;, 7),&#xA;    (40, &#39;Charlotte Singh&#39;, &#39;[&#34;marshmallow&#34;,&#34;shaved chocolate&#34;,&#34;white chocolate&#34;]&#39;, 4),&#xA;    (41, &#39;Daniel Murphy&#39;, &#39;[&#34;cinnamon&#34;,&#34;vanilla foam&#34;,&#34;marshmallow&#34;,&#34;white chocolate&#34;]&#39;, 8),&#xA;    
(42, &#39;Zoe Wilson&#39;, &#39;[&#34;marshmallow&#34;,&#34;dark chocolate&#34;]&#39;, 9),&#xA;    (43, &#39;Robert Smith&#39;, &#39;[&#34;peppermint&#34;]&#39;, 9),&#xA;    (44, &#39;Emily Johnson&#39;, &#39;[&#34;marshmallow&#34;]&#39;, 4),&#xA;    (45, &#39;David Brown&#39;, &#39;[&#34;vanilla foam&#34;,&#34;dark chocolate&#34;]&#39;, 8),&#xA;    (46, &#39;Sarah Davis&#39;, &#39;[&#34;vanilla foam&#34;,&#34;peppermint&#34;]&#39;, 3),&#xA;    (47, &#39;James Simon&#39;, &#39;[&#34;peppermint&#34;,&#34;vanilla foam&#34;,&#34;cinnamon&#34;,&#34;shaved chocolate&#34;]&#39;, 5),&#xA;    (48, &#39;Linda Lee&#39;, &#39;[&#34;white chocolate&#34;,&#34;dark chocolate&#34;,&#34;marshmallow&#34;,&#34;vanilla foam&#34;]&#39;, 5),&#xA;    (49, &#39;Carlos Mendez&#39;, &#39;[&#34;peppermint&#34;,&#34;white chocolate&#34;]&#39;, 6),&#xA;    (50, &#39;Fatima Noor&#39;, &#39;[&#34;peppermint&#34;]&#39;, 8),&#xA;    (51, &#39;Youssef El-Sayed&#39;, &#39;[&#34;peppermint&#34;,&#34;marshmallow&#34;]&#39;, 3),&#xA;    (52, &#39;Ian Landsman&#39;, &#39;[&#34;marshmallow&#34;]&#39;, 4),&#xA;    (53, &#39;Nolan Young&#39;, &#39;[&#34;marshmallow&#34;,&#34;shaved chocolate&#34;,&#34;crispy rice&#34;,&#34;vanilla foam&#34;]&#39;, 1),&#xA;    (54, &#39;Ava Martinez&#39;, &#39;[&#34;vanilla foam&#34;]&#39;, 9),&#xA;    (55, &#39;William Chen&#39;, &#39;[&#34;crispy rice&#34;,&#34;shaved chocolate&#34;,&#34;caramel drizzle&#34;]&#39;, 6),&#xA;    (56, &#39;Isabella Rodriguez&#39;, &#39;[&#34;crispy rice&#34;,&#34;vanilla foam&#34;]&#39;, 3),&#xA;    (57, &#39;Zachary Collins&#39;, &#39;[&#34;dark chocolate&#34;,&#34;caramel drizzle&#34;,&#34;vanilla foam&#34;]&#39;, 2),&#xA;    (58, &#39;Audrey Edwards&#39;, &#39;[&#34;dark chocolate&#34;]&#39;, 2),&#xA;    (59, &#39;Jason Stewart&#39;, &#39;[&#34;white chocolate&#34;]&#39;, 4),&#xA;    (60, &#39;Lucy Morris&#39;, &#39;[&#34;cinnamon&#34;,&#34;caramel drizzle&#34;,&#34;peppermint&#34;]&#39;, 4),&#xA;    (61, &#39;Cameron Rogers&#39;, 
&#39;[&#34;crispy rice&#34;,&#34;cinnamon&#34;,&#34;shaved chocolate&#34;]&#39;, 9),&#xA;    (62, &#39;Aria Blackwood&#39;, &#39;[&#34;white chocolate&#34;,&#34;peppermint&#34;,&#34;caramel drizzle&#34;]&#39;, 9),&#xA;    (63, &#39;Felix Whitmore&#39;, &#39;[&#34;marshmallow&#34;,&#34;cinnamon&#34;,&#34;dark chocolate&#34;]&#39;, 2),&#xA;    (64, &#39;Luna Hartley&#39;, &#39;[&#34;white chocolate&#34;]&#39;, 3),&#xA;    (65, &#39;Jasper Thorne&#39;, &#39;[&#34;crispy rice&#34;,&#34;shaved chocolate&#34;,&#34;caramel drizzle&#34;]&#39;, 6),&#xA;    (66, &#39;Nora Calloway&#39;, &#39;[&#34;crispy rice&#34;,&#34;dark chocolate&#34;]&#39;, 5),&#xA;    (67, &#39;Silas Merrick&#39;, &#39;[&#34;marshmallow&#34;]&#39;, 7),&#xA;    (68, &#39;Iris Pembroke&#39;, &#39;[&#34;peppermint&#34;,&#34;white chocolate&#34;,&#34;cinnamon&#34;]&#39;, 3),&#xA;    (69, &#39;Milo Ashford&#39;, &#39;[&#34;cinnamon&#34;,&#34;dark chocolate&#34;,&#34;crispy rice&#34;]&#39;, 7),&#xA;    (70, &#39;Clara Westbrook&#39;, &#39;[&#34;marshmallow&#34;]&#39;, 2),&#xA;    (71, &#39;Owen Fairchild&#39;, &#39;[&#34;white chocolate&#34;,&#34;peppermint&#34;]&#39;, 6),&#xA;    (72, &#39;Ruby Hawthorne&#39;, &#39;[&#34;vanilla foam&#34;,&#34;cinnamon&#34;]&#39;, 1),&#xA;    (73, &#39;Finn Lockhart&#39;, &#39;[&#34;caramel drizzle&#34;,&#34;peppermint&#34;,&#34;cinnamon&#34;]&#39;, 4),&#xA;    (74, &#39;Violet Sterling&#39;, &#39;[&#34;marshmallow&#34;,&#34;caramel drizzle&#34;]&#39;, 9),&#xA;    (75, &#39;August Blackwell&#39;, &#39;[&#34;cinnamon&#34;,&#34;shaved chocolate&#34;,&#34;vanilla foam&#34;,&#34;marshmallow&#34;]&#39;, 3),&#xA;    (76, &#39;Hazel Kincaid&#39;, &#39;[&#34;peppermint&#34;,&#34;cinnamon&#34;,&#34;caramel drizzle&#34;,&#34;dark chocolate&#34;]&#39;, 7),&#xA;    (77, &#39;Leo Greyson&#39;, &#39;[&#34;crispy rice&#34;,&#34;cinnamon&#34;]&#39;, 2),&#xA;    (78, &#39;Stella Beaumont&#39;, &#39;[&#34;peppermint&#34;,&#34;dark chocolate&#34;,&#34;marshmallow&#34;,&#34;vanilla 
foam&#34;]&#39;, 8),&#xA;    (79, &#39;Miles Brennan&#39;, &#39;[&#34;crispy rice&#34;,&#34;shaved chocolate&#34;,&#34;marshmallow&#34;]&#39;, 1),&#xA;    (80, &#39;Ivy Winslow&#39;, &#39;[&#34;vanilla foam&#34;,&#34;caramel drizzle&#34;]&#39;, 5),&#xA;    (81, &#39;Jack Carmichael&#39;, &#39;[&#34;crispy rice&#34;]&#39;, 6),&#xA;    (82, &#39;Scarlett Dalton&#39;, &#39;[&#34;white chocolate&#34;,&#34;caramel drizzle&#34;,&#34;peppermint&#34;,&#34;vanilla foam&#34;]&#39;, 5),&#xA;    (83, &#39;Oliver Ashby&#39;, &#39;[&#34;crispy rice&#34;,&#34;peppermint&#34;]&#39;, 2),&#xA;    (84, &#39;Aurora Whitfield&#39;, &#39;[&#34;caramel drizzle&#34;,&#34;white chocolate&#34;,&#34;shaved chocolate&#34;,&#34;crispy rice&#34;]&#39;, 9),&#xA;    (85, &#39;Noah Hastings&#39;, &#39;[&#34;vanilla foam&#34;]&#39;, 4),&#xA;    (86, &#39;Eliza Radcliffe&#39;, &#39;[&#34;peppermint&#34;,&#34;vanilla foam&#34;,&#34;white chocolate&#34;]&#39;, 9),&#xA;    (87, &#39;Liam Donovan&#39;, &#39;[&#34;caramel drizzle&#34;,&#34;vanilla foam&#34;]&#39;, 2),&#xA;    (88, &#39;Penelope Sinclair&#39;, &#39;[&#34;caramel drizzle&#34;,&#34;marshmallow&#34;,&#34;white chocolate&#34;]&#39;, 5),&#xA;    (89, &#39;Ethan Marlowe&#39;, &#39;[&#34;vanilla foam&#34;,&#34;caramel drizzle&#34;,&#34;shaved chocolate&#34;]&#39;, 5),&#xA;    (90, &#39;Charlotte Waverly&#39;, &#39;[&#34;shaved chocolate&#34;,&#34;vanilla foam&#34;]&#39;, 7),&#xA;    (91, &#39;Lucas Prescott&#39;, &#39;[&#34;crispy rice&#34;,&#34;vanilla foam&#34;]&#39;, 9),&#xA;    (92, &#39;Amelia Rosewood&#39;, &#39;[&#34;crispy rice&#34;]&#39;, 5),&#xA;    (93, &#39;Henry Treadwell&#39;, &#39;[&#34;vanilla foam&#34;,&#34;caramel drizzle&#34;]&#39;, 8),&#xA;    (94, &#39;Sophie Langford&#39;, &#39;[&#34;dark chocolate&#34;,&#34;shaved chocolate&#34;,&#34;crispy rice&#34;,&#34;white chocolate&#34;]&#39;, 3),&#xA;    (95, &#39;Benjamin Fairweather&#39;, &#39;[&#34;crispy rice&#34;]&#39;, 9),&#xA;    (96, &#39;Grace Aldridge&#39;, 
&#39;[&#34;marshmallow&#34;,&#34;shaved chocolate&#34;,&#34;caramel drizzle&#34;]&#39;, 5),&#xA;    (97, &#39;Samuel Kingsley&#39;, &#39;[&#34;shaved chocolate&#34;,&#34;cinnamon&#34;]&#39;, 1),&#xA;    (98, &#39;Eleanor Morrison&#39;, &#39;[&#34;shaved chocolate&#34;]&#39;, 8),&#xA;    (99, &#39;Daniel Lockwood&#39;, &#39;[&#34;dark chocolate&#34;]&#39;, 7),&#xA;    (100, &#39;Lucy Harrington&#39;, &#39;[&#34;vanilla foam&#34;,&#34;dark chocolate&#34;]&#39;, 7),&#xA;    (101, &#39;Matthew Sutherland&#39;, &#39;[&#34;cinnamon&#34;,&#34;peppermint&#34;]&#39;, 2),&#xA;    (102, &#39;Emma Gilmore&#39;, &#39;[&#34;shaved chocolate&#34;,&#34;cinnamon&#34;,&#34;vanilla foam&#34;,&#34;caramel drizzle&#34;]&#39;, 9),&#xA;    (103, &#39;Alexander Stratton&#39;, &#39;[&#34;peppermint&#34;,&#34;shaved chocolate&#34;,&#34;vanilla foam&#34;,&#34;white chocolate&#34;]&#39;, 8),&#xA;    (104, &#39;Abigail Worthington&#39;, &#39;[&#34;dark chocolate&#34;,&#34;caramel drizzle&#34;]&#39;, 8),&#xA;    (105, &#39;William Beauchamp&#39;, &#39;[&#34;white chocolate&#34;,&#34;dark chocolate&#34;,&#34;marshmallow&#34;,&#34;caramel drizzle&#34;]&#39;, 8),&#xA;    (106, &#39;Hannah Livingston&#39;, &#39;[&#34;crispy rice&#34;,&#34;shaved chocolate&#34;,&#34;vanilla foam&#34;,&#34;caramel drizzle&#34;]&#39;, 5),&#xA;    (107, &#39;James Garrison&#39;, &#39;[&#34;shaved chocolate&#34;,&#34;crispy rice&#34;,&#34;white chocolate&#34;,&#34;peppermint&#34;]&#39;, 5),&#xA;    (108, &#39;Sophia Brookshire&#39;, &#39;[&#34;crispy rice&#34;,&#34;caramel drizzle&#34;]&#39;, 6),&#xA;    (109, &#39;Theodore Hadley&#39;, &#39;[&#34;cinnamon&#34;]&#39;, 3),&#xA;    (110, &#39;Olivia Carrington&#39;, &#39;[&#34;vanilla foam&#34;,&#34;cinnamon&#34;]&#39;, 4),&#xA;    (111, &#39;Sebastian Ashworth&#39;, &#39;[&#34;vanilla foam&#34;]&#39;, 7),&#xA;    (112, &#39;Chloe Blackstone&#39;, &#39;[&#34;white chocolate&#34;,&#34;dark chocolate&#34;,&#34;shaved chocolate&#34;]&#39;, 1),&#xA;    (113, &#39;Nicholas 
Montague&#39;, &#39;[&#34;vanilla foam&#34;,&#34;white chocolate&#34;]&#39;, 1),&#xA;    (114, &#39;Madeline Ramsey&#39;, &#39;[&#34;dark chocolate&#34;,&#34;peppermint&#34;,&#34;cinnamon&#34;,&#34;vanilla foam&#34;]&#39;, 7),&#xA;    (115, &#39;Gabriel Winthrop&#39;, &#39;[&#34;white chocolate&#34;,&#34;shaved chocolate&#34;,&#34;dark chocolate&#34;,&#34;marshmallow&#34;]&#39;, 5),&#xA;    (116, &#39;Alice Merriweather&#39;, &#39;[&#34;dark chocolate&#34;,&#34;peppermint&#34;,&#34;shaved chocolate&#34;,&#34;cinnamon&#34;]&#39;, 7),&#xA;    (117, &#39;Isaac Kendrick&#39;, &#39;[&#34;dark chocolate&#34;,&#34;cinnamon&#34;]&#39;, 9),&#xA;    (118, &#39;Lillian Holbrook&#39;, &#39;[&#34;vanilla foam&#34;]&#39;, 1),&#xA;    (119, &#39;Caleb Bellamy&#39;, &#39;[&#34;vanilla foam&#34;]&#39;, 3),&#xA;    (120, &#39;Rose Drummond&#39;, &#39;[&#34;cinnamon&#34;,&#34;peppermint&#34;,&#34;white chocolate&#34;,&#34;shaved chocolate&#34;]&#39;, 6),&#xA;    (121, &#39;Elijah Wakefield&#39;, &#39;[&#34;dark chocolate&#34;,&#34;caramel drizzle&#34;]&#39;, 6),&#xA;    (122, &#39;Margaret Fairbanks&#39;, &#39;[&#34;crispy rice&#34;,&#34;vanilla foam&#34;,&#34;cinnamon&#34;,&#34;peppermint&#34;]&#39;, 8),&#xA;    (123, &#39;Julian Blackburn&#39;, &#39;[&#34;white chocolate&#34;]&#39;, 1),&#xA;    (124, &#39;Eva Templeton&#39;, &#39;[&#34;shaved chocolate&#34;,&#34;marshmallow&#34;,&#34;vanilla foam&#34;]&#39;, 1),&#xA;    (125, &#39;Nathan Whitley&#39;, &#39;[&#34;shaved chocolate&#34;]&#39;, 4),&#xA;    (126, &#39;Anna Westfield&#39;, &#39;[&#34;cinnamon&#34;]&#39;, 4),&#xA;    (127, &#39;Aaron Ashcroft&#39;, &#39;[&#34;dark chocolate&#34;,&#34;marshmallow&#34;]&#39;, 4),&#xA;    (128, &#39;Julia Pendleton&#39;, &#39;[&#34;crispy rice&#34;,&#34;caramel drizzle&#34;,&#34;marshmallow&#34;,&#34;white chocolate&#34;]&#39;, 2),&#xA;    (129, &#39;Connor Redmond&#39;, &#39;[&#34;crispy rice&#34;,&#34;marshmallow&#34;]&#39;, 1),&#xA;    (130, &#39;Grace Thornhill&#39;, &#39;[&#34;vanilla 
foam&#34;,&#34;white chocolate&#34;,&#34;caramel drizzle&#34;]&#39;, 4),&#xA;    (131, &#39;Zachary Stafford&#39;, &#39;[&#34;shaved chocolate&#34;]&#39;, 2),&#xA;    (132, &#39;Caroline Bannister&#39;, &#39;[&#34;marshmallow&#34;,&#34;peppermint&#34;,&#34;cinnamon&#34;]&#39;, 9),&#xA;    (133, &#39;Dylan Blakely&#39;, &#39;[&#34;caramel drizzle&#34;,&#34;marshmallow&#34;,&#34;crispy rice&#34;,&#34;white chocolate&#34;]&#39;, 6),&#xA;    (134, &#39;Katherine Underwood&#39;, &#39;[&#34;vanilla foam&#34;]&#39;, 8),&#xA;    (135, &#39;Tyler Braddock&#39;, &#39;[&#34;vanilla foam&#34;]&#39;, 6),&#xA;    (136, &#39;Victoria Harwood&#39;, &#39;[&#34;cinnamon&#34;,&#34;vanilla foam&#34;,&#34;marshmallow&#34;,&#34;caramel drizzle&#34;]&#39;, 9),&#xA;    (137, &#39;Ryan Beckett&#39;, &#39;[&#34;white chocolate&#34;,&#34;dark chocolate&#34;,&#34;shaved chocolate&#34;]&#39;, 7),&#xA;    (138, &#39;Elizabeth Chesterfield&#39;, &#39;[&#34;caramel drizzle&#34;,&#34;shaved chocolate&#34;,&#34;vanilla foam&#34;]&#39;, 2),&#xA;    (139, &#39;Jordan Waverly&#39;, &#39;[&#34;dark chocolate&#34;,&#34;shaved chocolate&#34;,&#34;vanilla foam&#34;]&#39;, 8),&#xA;    (140, &#39;Sarah Remington&#39;, &#39;[&#34;caramel drizzle&#34;,&#34;peppermint&#34;,&#34;shaved chocolate&#34;,&#34;cinnamon&#34;]&#39;, 3),&#xA;    (141, &#39;Brandon Locklear&#39;, &#39;[&#34;shaved chocolate&#34;,&#34;caramel drizzle&#34;,&#34;vanilla foam&#34;,&#34;cinnamon&#34;]&#39;, 6),&#xA;    (142, &#39;Rachel Wyndham&#39;, &#39;[&#34;crispy rice&#34;,&#34;peppermint&#34;,&#34;white chocolate&#34;]&#39;, 4),&#xA;    (143, &#39;Logan Sherwood&#39;, &#39;[&#34;shaved chocolate&#34;]&#39;, 7),&#xA;    (144, &#39;Amanda Fitzroy&#39;, &#39;[&#34;white chocolate&#34;,&#34;shaved chocolate&#34;,&#34;caramel drizzle&#34;,&#34;dark chocolate&#34;]&#39;, 8),&#xA;    (145, &#39;Jackson Thorpe&#39;, &#39;[&#34;peppermint&#34;,&#34;marshmallow&#34;,&#34;cinnamon&#34;,&#34;dark chocolate&#34;]&#39;, 7),&#xA;    (146, 
&#39;Rebecca Ashcombe&#39;, &#39;[&#34;crispy rice&#34;,&#34;caramel drizzle&#34;]&#39;, 8),&#xA;    (147, &#39;Cameron Gladstone&#39;, &#39;[&#34;vanilla foam&#34;,&#34;caramel drizzle&#34;,&#34;cinnamon&#34;]&#39;, 8),&#xA;    (148, &#39;Jessica Langston&#39;, &#39;[&#34;crispy rice&#34;,&#34;white chocolate&#34;,&#34;marshmallow&#34;]&#39;, 2),&#xA;    (149, &#39;Mason Fairmont&#39;, &#39;[&#34;caramel drizzle&#34;,&#34;marshmallow&#34;]&#39;, 9),&#xA;    (150, &#39;Emily Claridge&#39;, &#39;[&#34;shaved chocolate&#34;,&#34;white chocolate&#34;]&#39;, 8),&#xA;    (151, &#39;Hunter Bellingham&#39;, &#39;[&#34;white chocolate&#34;,&#34;crispy rice&#34;,&#34;peppermint&#34;]&#39;, 4),&#xA;    (152, &#39;Laura Thornbury&#39;, &#39;[&#34;shaved chocolate&#34;,&#34;caramel drizzle&#34;,&#34;marshmallow&#34;]&#39;, 5),&#xA;    (153, &#39;Wyatt Alderton&#39;, &#39;[&#34;white chocolate&#34;]&#39;, 3),&#xA;    (154, &#39;Claire Berkshire&#39;, &#39;[&#34;peppermint&#34;,&#34;white chocolate&#34;,&#34;crispy rice&#34;]&#39;, 5),&#xA;    (155, &#39;Cole Ashland&#39;, &#39;[&#34;dark chocolate&#34;,&#34;marshmallow&#34;]&#39;, 1),&#xA;    (156, &#39;Diana Brightwell&#39;, &#39;[&#34;dark chocolate&#34;,&#34;white chocolate&#34;,&#34;shaved chocolate&#34;]&#39;, 6),&#xA;    (157, &#39;Aiden Stanfield&#39;, &#39;[&#34;peppermint&#34;,&#34;crispy rice&#34;]&#39;, 8),&#xA;    (158, &#39;Natalie Warwick&#39;, &#39;[&#34;marshmallow&#34;]&#39;, 7),&#xA;    (159, &#39;Parker Blackmore&#39;, &#39;[&#34;marshmallow&#34;,&#34;peppermint&#34;,&#34;white chocolate&#34;,&#34;vanilla foam&#34;]&#39;, 5),&#xA;    (160, &#39;Morgan Steadman&#39;, &#39;[&#34;shaved chocolate&#34;]&#39;, 2),&#xA;    (161, &#39;Blake Dunwood&#39;, &#39;[&#34;shaved chocolate&#34;,&#34;vanilla foam&#34;,&#34;white chocolate&#34;,&#34;dark chocolate&#34;]&#39;, 5),&#xA;    (162, &#39;Taylor Woodridge&#39;, &#39;[&#34;crispy rice&#34;,&#34;peppermint&#34;,&#34;white chocolate&#34;,&#34;caramel 
drizzle&#34;]&#39;, 2),&#xA;    (163, &#39;Chase Ashbury&#39;, &#39;[&#34;shaved chocolate&#34;,&#34;crispy rice&#34;]&#39;, 2),&#xA;    (164, &#39;Madison Clearwater&#39;, &#39;[&#34;shaved chocolate&#34;,&#34;cinnamon&#34;]&#39;, 9),&#xA;    (165, &#39;Carter Brookfield&#39;, &#39;[&#34;cinnamon&#34;]&#39;, 1),&#xA;    (166, &#39;Ashley Fairhaven&#39;, &#39;[&#34;dark chocolate&#34;,&#34;white chocolate&#34;,&#34;cinnamon&#34;,&#34;peppermint&#34;]&#39;, 4),&#xA;    (167, &#39;Griffin Hartwell&#39;, &#39;[&#34;crispy rice&#34;,&#34;dark chocolate&#34;,&#34;peppermint&#34;]&#39;, 4),&#xA;    (168, &#39;Megan Redfield&#39;, &#39;[&#34;shaved chocolate&#34;,&#34;vanilla foam&#34;,&#34;peppermint&#34;]&#39;, 9),&#xA;    (169, &#39;Grayson Westmore&#39;, &#39;[&#34;cinnamon&#34;,&#34;crispy rice&#34;]&#39;, 3),&#xA;    (170, &#39;Nicole Ashridge&#39;, &#39;[&#34;peppermint&#34;]&#39;, 3),&#xA;    (171, &#39;Sawyer Hollingsworth&#39;, &#39;[&#34;crispy rice&#34;,&#34;dark chocolate&#34;,&#34;peppermint&#34;]&#39;, 8),&#xA;    (172, &#39;Alexis Thorndale&#39;, &#39;[&#34;vanilla foam&#34;,&#34;crispy rice&#34;,&#34;dark chocolate&#34;]&#39;, 9),&#xA;    (173, &#39;Declan Summerfield&#39;, &#39;[&#34;dark chocolate&#34;,&#34;marshmallow&#34;,&#34;crispy rice&#34;,&#34;peppermint&#34;]&#39;, 7),&#xA;    (174, &#39;Samantha Brightwood&#39;, &#39;[&#34;crispy rice&#34;,&#34;peppermint&#34;,&#34;dark chocolate&#34;]&#39;, 4),&#xA;    (175, &#39;Tristan Ashbrook&#39;, &#39;[&#34;crispy rice&#34;]&#39;, 1),&#xA;    (176, &#39;Melissa Ravenscroft&#39;, &#39;[&#34;dark chocolate&#34;,&#34;white chocolate&#34;]&#39;, 5),&#xA;    (177, &#39;Colton Hawthorne&#39;, &#39;[&#34;vanilla foam&#34;,&#34;dark chocolate&#34;]&#39;, 2),&#xA;    (178, &#39;Lauren Silverton&#39;, &#39;[&#34;caramel drizzle&#34;,&#34;vanilla foam&#34;,&#34;cinnamon&#34;,&#34;dark chocolate&#34;]&#39;, 2),&#xA;    (179, &#39;Landon Whitworth&#39;, &#39;[&#34;caramel drizzle&#34;,&#34;vanilla foam&#34;]&#39;, 
8),&#xA;    (180, &#39;Kayla Mansfield&#39;, &#39;[&#34;vanilla foam&#34;,&#34;peppermint&#34;,&#34;shaved chocolate&#34;]&#39;, 2);&#xA;&#xA;INSERT INTO cocoa_cars (car_id, available_mixins, total_stock) VALUES&#xA;    (5, &#39;[&#34;white chocolate&#34;,&#34;shaved chocolate&#34;]&#39;, 412),&#xA;    (2, &#39;[&#34;cinnamon&#34;,&#34;marshmallow&#34;,&#34;caramel drizzle&#34;]&#39;, 359),&#xA;    (9, &#39;[&#34;crispy rice&#34;,&#34;peppermint&#34;,&#34;caramel drizzle&#34;,&#34;shaved chocolate&#34;]&#39;, 354),&#xA;    (4, &#39;[&#34;shaved chocolate&#34;,&#34;white chocolate&#34;]&#39;, 338),&#xA;    (8, &#39;[&#34;vanilla foam&#34;,&#34;marshmallow&#34;]&#39;, 263),&#xA;    (1, &#39;[&#34;peppermint&#34;,&#34;crispy rice&#34;]&#39;, 205),&#xA;    (6, &#39;[&#34;shaved chocolate&#34;,&#34;dark chocolate&#34;,&#34;crispy rice&#34;,&#34;cinnamon&#34;,&#34;peppermint&#34;]&#39;, 161),&#xA;    (7, &#39;[&#34;caramel drizzle&#34;,&#34;crispy rice&#34;,&#34;marshmallow&#34;,&#34;vanilla foam&#34;,&#34;cinnamon&#34;]&#39;, 132),&#xA;    (3, &#39;[&#34;vanilla foam&#34;,&#34;peppermint&#34;]&#39;, 95);&#xA;&#xA;```&#xA;&#xA;Changing `TEXT[]` to `TEXT` is not required, as SQLite really doesn&#39;t enforce column types unless the table is declared `STRICT`. We&#39;ll take that flexibility here.&#xA;&#xA;## Problem&#xA;&#xA;&gt; Get the stewards a list of all the passengers and the cocoa car(s) they can be served from that has at least one of their favorite mixins.&#xA;&gt; &#xA;&gt; Remember only the top three most-stocked cocoa cars remained operational, so the passengers must be served from one of those cars.&#xA;&#xA;Ok so, we have two tables.&#xA;&#xA;1. `cocoa_cars`&#xA;2. `passengers`&#xA;&#xA;What we need to do is list each passenger along with the car_ids of the `cocoa_cars` that stock at least one of their favorite mixins. 
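&#xA;&#xA;Before building the full answer, it helps to see how to compare the two JSON arrays at all. Since `favorite_mixins` and `available_mixins` are stored as JSON text, SQLite&#39;s `json_each()` table-valued function can expand each array into rows, and an `EXISTS` check tells us whether they share at least one value. Here is a minimal sketch of that overlap test (ignoring the top-three restriction for now):&#xA;&#xA;```sql&#xA;-- Sketch: pair passengers with every car that stocks at least one&#xA;-- of their favorite mixins. json_each() expands a JSON array into rows.&#xA;SELECT p.passenger_name, c.car_id&#xA;FROM passengers p&#xA;JOIN cocoa_cars c&#xA;  ON EXISTS (&#xA;      SELECT 1&#xA;      FROM json_each(p.favorite_mixins) fav&#xA;      JOIN json_each(c.available_mixins) avail&#xA;        ON fav.value = avail.value&#xA;  )&#xA;ORDER BY p.passenger_id, c.car_id;&#xA;```&#xA;&#xA;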
So, there are limited cars, just 9 cars in total right?&#xA;&#xA;```sql&#xA;SELECT * FROM cocoa_cars ORDER BY car_id;&#xA;```&#xA;So, there are only 9 cars.&#xA;```&#xA;sqlite&gt; SELECT * FROM cocoa_cars ORDER BY car_id;&#xA;+--------+--------------------------------------------------------------+-------------+&#xA;| car_id |                       available_mixins                       | total_stock |&#xA;+--------+--------------------------------------------------------------+-------------+&#xA;| 1      | [&#34;peppermint&#34;,&#34;crispy rice&#34;]                                 | 205         |&#xA;+--------+--------------------------------------------------------------+-------------+&#xA;| 2      | [&#34;cinnamon&#34;,&#34;marshmallow&#34;,&#34;caramel drizzle&#34;]                 | 359         |&#xA;+--------+--------------------------------------------------------------+-------------+&#xA;| 3      | [&#34;vanilla foam&#34;,&#34;peppermint&#34;]                                | 95          |&#xA;+--------+--------------------------------------------------------------+-------------+&#xA;| 4      | [&#34;shaved chocolate&#34;,&#34;white chocolate&#34;]                       | 338         |&#xA;+--------+--------------------------------------------------------------+-------------+&#xA;| 5      | [&#34;white chocolate&#34;,&#34;shaved chocolate&#34;]                       | 412         |&#xA;+--------+--------------------------------------------------------------+-------------+&#xA;| 6      | [&#34;shaved chocolate&#34;,&#34;dark chocolate&#34;,&#34;crispy rice&#34;,&#34;cinnamon | 161         |&#xA;|        | &#34;,&#34;peppermint&#34;]                                              |             |&#xA;+--------+--------------------------------------------------------------+-------------+&#xA;| 7      | [&#34;caramel drizzle&#34;,&#34;crispy rice&#34;,&#34;marshmallow&#34;,&#34;vanilla foam | 132         |&#xA;|        | &#34;,&#34;cinnamon&#34;]                  
                              |             |&#xA;+--------+--------------------------------------------------------------+-------------+&#xA;| 8      | [&#34;vanilla foam&#34;,&#34;marshmallow&#34;]                               | 263         |&#xA;+--------+--------------------------------------------------------------+-------------+&#xA;| 9      | [&#34;crispy rice&#34;,&#34;peppermint&#34;,&#34;caramel drizzle&#34;,&#34;shaved chocol | 354         |&#xA;|        | ate&#34;]                                                        |             |&#xA;+--------+--------------------------------------------------------------+-------------+&#xA;sqlite&gt; &#xA;```&#xA;&#xA;However, the question states that we only need to consider the top three most stocked cars. Now we can order by `total_stock` here and find the top 3 stocked cars with the `LIMIT 3` clause.&#xA;&#xA;```sql&#xA;SELECT * FROM cocoa_cars ORDER BY total_stock DESC LIMIT 3;&#xA;```&#xA;&#xA;```&#xA;sqlite&gt; &#xA;sqlite&gt; select * from cocoa_cars ORDER BY total_stock DESC;&#xA;+--------+--------------------------------------------------------------+-------------+&#xA;| car_id |                       available_mixins                       | total_stock |&#xA;+--------+--------------------------------------------------------------+-------------+&#xA;| 5      | [&#34;white chocolate&#34;,&#34;shaved chocolate&#34;]                       | 412         |&#xA;+--------+--------------------------------------------------------------+-------------+&#xA;| 2      | [&#34;cinnamon&#34;,&#34;marshmallow&#34;,&#34;caramel drizzle&#34;]                 | 359         |&#xA;+--------+--------------------------------------------------------------+-------------+&#xA;| 9      | [&#34;crispy rice&#34;,&#34;peppermint&#34;,&#34;caramel drizzle&#34;,&#34;shaved chocol | 354         |&#xA;|        | ate&#34;]                                                        |             
|&#xA;+--------+--------------------------------------------------------------+-------------+&#xA;| 4      | [&#34;shaved chocolate&#34;,&#34;white chocolate&#34;]                       | 338         |&#xA;+--------+--------------------------------------------------------------+-------------+&#xA;| 8      | [&#34;vanilla foam&#34;,&#34;marshmallow&#34;]                               | 263         |&#xA;+--------+--------------------------------------------------------------+-------------+&#xA;| 1      | [&#34;peppermint&#34;,&#34;crispy rice&#34;]                                 | 205         |&#xA;+--------+--------------------------------------------------------------+-------------+&#xA;| 6      | [&#34;shaved chocolate&#34;,&#34;dark chocolate&#34;,&#34;crispy rice&#34;,&#34;cinnamon | 161         |&#xA;|        | &#34;,&#34;peppermint&#34;]                                              |             |&#xA;+--------+--------------------------------------------------------------+-------------+&#xA;| 7      | [&#34;caramel drizzle&#34;,&#34;crispy rice&#34;,&#34;marshmallow&#34;,&#34;vanilla foam | 132         |&#xA;|        | &#34;,&#34;cinnamon&#34;]                                                |             |&#xA;+--------+--------------------------------------------------------------+-------------+&#xA;| 3      | [&#34;vanilla foam&#34;,&#34;peppermint&#34;]                                | 95          |&#xA;+--------+--------------------------------------------------------------+-------------+&#xA;sqlite&gt; select * from cocoa_cars ORDER BY total_stock DESC LIMIT 3;&#xA;+--------+--------------------------------------------------------------+-------------+&#xA;| car_id |                       available_mixins                       | total_stock |&#xA;+--------+--------------------------------------------------------------+-------------+&#xA;| 5      | [&#34;white chocolate&#34;,&#34;shaved chocolate&#34;]                       | 412         
|&#xA;+--------+--------------------------------------------------------------+-------------+&#xA;| 2      | [&#34;cinnamon&#34;,&#34;marshmallow&#34;,&#34;caramel drizzle&#34;]                 | 359         |&#xA;+--------+--------------------------------------------------------------+-------------+&#xA;| 9      | [&#34;crispy rice&#34;,&#34;peppermint&#34;,&#34;caramel drizzle&#34;,&#34;shaved chocol | 354         |&#xA;|        | ate&#34;]                                                        |             |&#xA;+--------+--------------------------------------------------------------+-------------+&#xA;sqlite&gt; &#xA;&#xA;```&#xA;&#xA;Now that we have all the cars, we can start assigning the passengers.&#xA;How?&#xA;&#xA;We need to select and assign each passenger the `car_id` which contains one or more of their `favorite_mixins`.&#xA;Now, this is the tricky part.&#xA;&#xA;We are in SQLite!&#xA;&#xA;We have the `favorite_mixins` for `Mateo Cruz` as a string like `[&#34;caramel drizzle&#34;,&#34;shaved chocolate&#34;,&#34;white chocolate&#34;]`, which we need to match against these 3 car `available_mixins`:&#xA;- `[&#34;white chocolate&#34;,&#34;shaved chocolate&#34;]` on car_id `5`&#xA;- `[&#34;cinnamon&#34;,&#34;marshmallow&#34;,&#34;caramel drizzle&#34;]` on car_id `2`&#xA;- `[&#34;crispy rice&#34;,&#34;peppermint&#34;,&#34;caramel drizzle&#34;,&#34;shaved chocolate&#34;]` on car_id `9`&#xA;&#xA;&#xA;So here all three cars have at least one match, right?&#xA;- car_id `5` has `shaved chocolate`&#xA;- car_id `2` has `caramel drizzle`&#xA;- car_id `9` has 2 of his 3 `favorite_mixins`: `caramel drizzle` and `shaved chocolate`.&#xA;&#xA;So for `Mateo Cruz` we should ideally return the car_ids `[5, 2, 9]`, or as separate rows; it doesn&#39;t matter as much, I think. 
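&#xA;If we do want the single-row form later, SQLite&#39;s `group_concat()` aggregate can collapse the matched car_ids. A minimal sketch using Python&#39;s built-in `sqlite3` on an in-memory database; the `matches` table here is a hypothetical stand-in for the (passenger, car_id) pairs the real query will produce:&#xA;

```python
import sqlite3

# Sketch only: `matches` is a hypothetical stand-in for the
# (passenger_name, car_id) pairs produced by the matching query.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE matches (passenger_name TEXT, car_id INTEGER)")
conn.executemany(
    "INSERT INTO matches VALUES (?, ?)",
    [("Mateo Cruz", 5), ("Mateo Cruz", 2), ("Mateo Cruz", 9)],
)
# group_concat() folds every matched car_id into one comma-separated cell,
# one output row per passenger.
grouped = conn.execute(
    "SELECT passenger_name, group_concat(car_id) FROM matches GROUP BY passenger_name"
).fetchall()
print(grouped)  # e.g. [('Mateo Cruz', '5,2,9')]
```

&#xA;Note that `group_concat()` does not guarantee element order, so treat the concatenated list as a set unless you sort explicitly.&#xA;&#xA;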
But the first one looks cool!&#xA;&#xA;So, how do we do it?&#xA;&#xA;First let&#39;s start with what we had!&#xA;&#xA;The top 3 cars:&#xA;&#xA;```sql&#xA;SELECT * FROM cocoa_cars ORDER BY total_stock DESC LIMIT 3;&#xA;```&#xA;&#xA;This gives us the table we can use to check, for each car_id, whether it has any one of the `favorite_mixins` for one passenger at a time.&#xA;&#xA;But how do we split the strings of `available_mixins` and `favorite_mixins`?&#xA;&#xA;We can use the [json_each](https://sqlite.org/json1.html#jeach) table-valued function, which takes in any valid JSON string (could be raw, could be a column value) and expands it into one row per element. And it returns a lot of columns.&#xA;&#xA;Let&#39;s just try to select everything from `json_each` with the favorite_mixins column.&#xA;&#xA;```sql&#xA;SELECT * FROM json_each(favorite_mixins) FROM passengers;&#xA;```&#xA;&#xA;```&#xA;sqlite&gt; SELECT * FROM json_each(favorite_mixins) FROM passengers;&#xA;Parse error: near &#34;FROM&#34;: syntax error&#xA;  SELECT * FROM json_each(favorite_mixins) FROM passengers;&#xA;                             error here ---^&#xA;sqlite&gt; &#xA;```&#xA;Oops!&#xA;Why?&#xA;Because a query can&#39;t have two `FROM` clauses. And `json_each` can only take one value at a time, so we need to hand it that cell for each row.&#xA;&#xA;&#xA;```sql&#xA;SELECT * FROM json_each((SELECT favorite_mixins FROM passengers));&#xA;```&#xA;&#xA;Here we try to give it only the `favorite_mixins` value from the passengers table.&#xA;&#xA;```&#xA;sqlite&gt; SELECT * FROM json_each((SELECT favorite_mixins FROM passengers));&#xA;+-----+--------------+------+--------------+----+--------+---------+------+&#xA;| key |    value     | type |     atom     | id | parent | fullkey | path |&#xA;+-----+--------------+------+--------------+----+--------+---------+------+&#xA;| 0   | vanilla foam | text | vanilla foam | 2  |        | $[0]    | $    |&#xA;+-----+--------------+------+--------------+----+--------+---------+------+&#xA;sqlite&gt; select * from passengers 
limit 5;&#xA;+--------------+----------------+--------------------------------------------------------------+--------+&#xA;| passenger_id | passenger_name |                       favorite_mixins                        | car_id |&#xA;+--------------+----------------+--------------------------------------------------------------+--------+&#xA;| 1            | Ava Johnson    | [&#34;vanilla foam&#34;]                                             | 2      |&#xA;+--------------+----------------+--------------------------------------------------------------+--------+&#xA;| 2            | Mateo Cruz     | [&#34;caramel drizzle&#34;,&#34;shaved chocolate&#34;,&#34;white chocolate&#34;]     | 2      |&#xA;+--------------+----------------+--------------------------------------------------------------+--------+&#xA;| 3            | Nia Grant      | [&#34;shaved chocolate&#34;]                                         | 5      |&#xA;+--------------+----------------+--------------------------------------------------------------+--------+&#xA;| 4            | Hiro Tanaka    | [&#34;shaved chocolate&#34;]                                         | 2      |&#xA;+--------------+----------------+--------------------------------------------------------------+--------+&#xA;| 5            | Layla Brooks   | [&#34;crispy rice&#34;,&#34;dark chocolate&#34;,&#34;caramel drizzle&#34;,&#34;cinnamon&#34; | 3      |&#xA;|              |                | ]                                                            |        |&#xA;+--------------+----------------+--------------------------------------------------------------+--------+&#xA;sqlite&gt; &#xA;```&#xA;But it only gave us the result for the first row, which unluckily had just one mixin.&#xA;&#xA;What is happening?&#xA;&#xA;Well!&#xA;&#xA;This actually selects only the first row&#39;s `favorite_mixins` because `json_each()` processes one value at a time. Since favorite_mixins is a JSON array, SQLite expects a single array value per row. 
When we try to pass the entire column, it only processes the first row of `favorite_mixins`.&#xA;&#xA;Let&#39;s try to use json_each for a passenger with more than one favorite mixin.&#xA;&#xA;```sql&#xA;SELECT * &#xA;FROM json_each(&#xA;     (SELECT favorite_mixins FROM passengers WHERE passenger_name = &#39;Mateo Cruz&#39;)&#xA;);&#xA;```&#xA;&#xA;&#xA;```&#xA;sqlite&gt; SELECT * &#xA;FROM json_each((SELECT favorite_mixins FROM passengers WHERE passenger_name = &#39;Ava Johnson&#39;));&#xA;+-----+--------------+------+--------------+----+--------+---------+------+&#xA;| key |    value     | type |     atom     | id | parent | fullkey | path |&#xA;+-----+--------------+------+--------------+----+--------+---------+------+&#xA;| 0   | vanilla foam | text | vanilla foam | 2  |        | $[0]    | $    |&#xA;+-----+--------------+------+--------------+----+--------+---------+------+&#xA;sqlite&gt; SELECT * &#xA;FROM json_each((SELECT favorite_mixins FROM passengers WHERE passenger_name = &#39;Mateo Cruz&#39;));&#xA;+-----+------------------+------+------------------+----+--------+---------+------+&#xA;| key |      value       | type |       atom       | id | parent | fullkey | path |&#xA;+-----+------------------+------+------------------+----+--------+---------+------+&#xA;| 0   | caramel drizzle  | text | caramel drizzle  | 2  |        | $[0]    | $    |&#xA;| 1   | shaved chocolate | text | shaved chocolate | 19 |        | $[1]    | $    |&#xA;| 2   | white chocolate  | text | white chocolate  | 37 |        | $[2]    | $    |&#xA;+-----+------------------+------+------------------+----+--------+---------+------+&#xA;sqlite&gt; &#xA;```&#xA;&#xA;OK, now that is neat: it returned 3 rows for the 3 favorite items of the passenger `Mateo Cruz`.&#xA;&#xA;Now what?&#xA;&#xA;How do we get it for all the passengers? 
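&#xA;&#xA;Before scaling up, here is a quick standalone sanity check of that expansion, using Python&#39;s built-in `sqlite3` on an in-memory database (the array literal is Mateo Cruz&#39;s `favorite_mixins` copied from the output above):&#xA;

```python
import sqlite3

# json_each() takes one JSON value and expands an array into one row per
# element; `key` is the array index, `value` the element itself.
conn = sqlite3.connect(":memory:")
rows = conn.execute(
    "SELECT key, value FROM json_each(?)",
    ('["caramel drizzle","shaved chocolate","white chocolate"]',),
).fetchall()
print(rows)
# [(0, 'caramel drizzle'), (1, 'shaved chocolate'), (2, 'white chocolate')]
```

&#xA;This also shows that `json_each` happily accepts a bound parameter or any expression, not just a column reference.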
&#xA;&#xA;How about JOINs, since the passenger&#39;s data will remain the same, we just change the mixins for each of their favorite list.&#xA;&#xA;&#xA;```sql&#xA;SELECT *&#xA;FROM passengers&#xA;JOIN json_each(passengers.favorite_mixins) AS mixin &#xA;ORDER BY passengers.passenger_name;&#xA;```&#xA;&#xA;```&#xA;sqlite&gt; sqlite&gt; SELECT *&#xA;FROM passengers&#xA;JOIN json_each(passengers.favorite_mixins) AS mixin &#xA;ORDER BY passengers.passenger_name;&#xA;+--------------+------------------------+--------------------------------------------------------------+--------+-----+------------------+------+------------------+----+--------+---------+------+&#xA;| passenger_id |     passenger_name     |                       favorite_mixins                        | car_id | key |      value       | type |       atom       | id | parent | fullkey | path |&#xA;+--------------+------------------------+--------------------------------------------------------------+--------+-----+------------------+------+------------------+----+--------+---------+------+&#xA;| 127          | Aaron Ashcroft         | [&#34;dark chocolate&#34;,&#34;marshmallow&#34;]                             | 4      | 0   | dark chocolate   | text | dark chocolate   | 2  |        | $[0]    | $    |&#xA;+--------------+------------------------+--------------------------------------------------------------+--------+-----+------------------+------+------------------+----+--------+---------+------+&#xA;| 127          | Aaron Ashcroft         | [&#34;dark chocolate&#34;,&#34;marshmallow&#34;]                             | 4      | 1   | marshmallow      | text | marshmallow      | 18 |        | $[1]    | $    |&#xA;+--------------+------------------------+--------------------------------------------------------------+--------+-----+------------------+------+------------------+----+--------+---------+------+&#xA;| 104          | Abigail Worthington    | [&#34;dark chocolate&#34;,&#34;caramel drizzle&#34;]       
                  | 8      | 0   | dark chocolate   | text | dark chocolate   | 2  |        | $[0]    | $    |&#xA;+--------------+------------------------+--------------------------------------------------------------+--------+-----+------------------+------+------------------+----+--------+---------+------+&#xA;| 104          | Abigail Worthington    | [&#34;dark chocolate&#34;,&#34;caramel drizzle&#34;]                         | 8      | 1   | caramel drizzle  | text | caramel drizzle  | 18 |        | $[1]    | $    |&#xA;+--------------+------------------------+--------------------------------------------------------------+--------+-----+------------------+------+------------------+----+--------+---------+------+&#xA;| 157          | Aiden Stanfield        | [&#34;peppermint&#34;,&#34;crispy rice&#34;]                                 | 8      | 0   | peppermint       | text | peppermint       | 2  |        | $[0]    | $    |&#xA;+--------------+------------------------+--------------------------------------------------------------+--------+-----+------------------+------+------------------+----+--------+---------+------+&#xA;| 157          | Aiden Stanfield        | [&#34;peppermint&#34;,&#34;crispy rice&#34;]                                 | 8      | 1   | crispy rice      | text | crispy rice      | 13 |        | $[1]    | $    |&#xA;+--------------+------------------------+--------------------------------------------------------------+--------+-----+------------------+------+------------------+----+--------+---------+------+&#xA;| 103          | Alexander Stratton     | [&#34;peppermint&#34;,&#34;shaved chocolate&#34;,&#34;vanilla foam&#34;,&#34;white choco | 8      | 0   | peppermint       | text | peppermint       | 2  |        | $[0]    | $    |&#xA;|              |                        | late&#34;]                                                       |        |     |                  |      |                  |    |        |         |      
|&#xA;+--------------+------------------------+--------------------------------------------------------------+--------+-----+------------------+------+------------------+----+--------+---------+------+&#xA;| 103          | Alexander Stratton     | [&#34;peppermint&#34;,&#34;shaved chocolate&#34;,&#34;vanilla foam&#34;,&#34;white choco | 8      | 1   | shaved chocolate | text | shaved chocolate | 13 |        | $[1]    | $    |&#xA;|              |                        | late&#34;]                                                       |        |     |                  |      |                  |    |        |         |      |&#xA;+--------------+------------------------+--------------------------------------------------------------+--------+-----+------------------+------+------------------+----+--------+---------+------+&#xA;| 103          | Alexander Stratton     | [&#34;peppermint&#34;,&#34;shaved chocolate&#34;,&#34;vanilla foam&#34;,&#34;white choco | 8      | 2   | vanilla foam     | text | vanilla foam     | 31 |        | $[2]    | $    |&#xA;+--------------+------------------------+--------------------------------------------------------------+--------+-----+------------------+------+------------------+----+--------+---------+------+&#xA;...&#xA;...&#xA;| 153          | Wyatt Alderton         | [&#34;white chocolate&#34;]                                          | 3      | 0   | white chocolate  | text | white chocolate  | 2  |        | $[0]    | $    |&#xA;+--------------+------------------------+--------------------------------------------------------------+--------+-----+------------------+------+------------------+----+--------+---------+------+&#xA;| 15           | Yara Haddad            | [&#34;white chocolate&#34;,&#34;dark chocolate&#34;]                         | 2      | 0   | white chocolate  | text | white chocolate  | 2  |        | $[0]    | $    
|&#xA;+--------------+------------------------+--------------------------------------------------------------+--------+-----+------------------+------+------------------+----+--------+---------+------+&#xA;| 15           | Yara Haddad            | [&#34;white chocolate&#34;,&#34;dark chocolate&#34;]                         | 2      | 1   | dark chocolate   | text | dark chocolate   | 19 |        | $[1]    | $    |&#xA;+--------------+------------------------+--------------------------------------------------------------+--------+-----+------------------+------+------------------+----+--------+---------+------+&#xA;| 51           | Youssef El-Sayed       | [&#34;peppermint&#34;,&#34;marshmallow&#34;]                                 | 3      | 0   | peppermint       | text | peppermint       | 2  |        | $[0]    | $    |&#xA;+--------------+------------------------+--------------------------------------------------------------+--------+-----+------------------+------+------------------+----+--------+---------+------+&#xA;| 51           | Youssef El-Sayed       | [&#34;peppermint&#34;,&#34;marshmallow&#34;]                                 | 3      | 1   | marshmallow      | text | marshmallow      | 13 |        | $[1]    | $    |&#xA;+--------------+------------------------+--------------------------------------------------------------+--------+-----+------------------+------+------------------+----+--------+---------+------+&#xA;| 57           | Zachary Collins        | [&#34;dark chocolate&#34;,&#34;caramel drizzle&#34;,&#34;vanilla foam&#34;]          | 2      | 0   | dark chocolate   | text | dark chocolate   | 2  |        | $[0]    | $    |&#xA;+--------------+------------------------+--------------------------------------------------------------+--------+-----+------------------+------+------------------+----+--------+---------+------+&#xA;| 57           | Zachary Collins        | [&#34;dark chocolate&#34;,&#34;caramel drizzle&#34;,&#34;vanilla foam&#34;]     
     | 2      | 1   | caramel drizzle  | text | caramel drizzle  | 18 |        | $[1]    | $    |&#xA;+--------------+------------------------+--------------------------------------------------------------+--------+-----+------------------+------+------------------+----+--------+---------+------+&#xA;| 57           | Zachary Collins        | [&#34;dark chocolate&#34;,&#34;caramel drizzle&#34;,&#34;vanilla foam&#34;]          | 2      | 2   | vanilla foam     | text | vanilla foam     | 35 |        | $[2]    | $    |&#xA;+--------------+------------------------+--------------------------------------------------------------+--------+-----+------------------+------+------------------+----+--------+---------+------+&#xA;| 131          | Zachary Stafford       | [&#34;shaved chocolate&#34;]                                         | 2      | 0   | shaved chocolate | text | shaved chocolate | 2  |        | $[0]    | $    |&#xA;+--------------+------------------------+--------------------------------------------------------------+--------+-----+------------------+------+------------------+----+--------+---------+------+&#xA;| 11           | Zara Sheikh            | [&#34;vanilla foam&#34;,&#34;crispy rice&#34;,&#34;peppermint&#34;]                  | 4      | 0   | vanilla foam     | text | vanilla foam     | 2  |        | $[0]    | $    |&#xA;+--------------+------------------------+--------------------------------------------------------------+--------+-----+------------------+------+------------------+----+--------+---------+------+&#xA;| 11           | Zara Sheikh            | [&#34;vanilla foam&#34;,&#34;crispy rice&#34;,&#34;peppermint&#34;]                  | 4      | 1   | crispy rice      | text | crispy rice      | 16 |        | $[1]    | $    |&#xA;+--------------+------------------------+--------------------------------------------------------------+--------+-----+------------------+------+------------------+----+--------+---------+------+&#xA;| 11           | 
Zara Sheikh            | [&#34;vanilla foam&#34;,&#34;crispy rice&#34;,&#34;peppermint&#34;]                  | 4      | 2   | peppermint       | text | peppermint       | 28 |        | $[2]    | $    |&#xA;+--------------+------------------------+--------------------------------------------------------------+--------+-----+------------------+------+------------------+----+--------+---------+------+&#xA;| 42           | Zoe Wilson             | [&#34;marshmallow&#34;,&#34;dark chocolate&#34;]                             | 9      | 0   | marshmallow      | text | marshmallow      | 2  |        | $[0]    | $    |&#xA;+--------------+------------------------+--------------------------------------------------------------+--------+-----+------------------+------+------------------+----+--------+---------+------+&#xA;| 42           | Zoe Wilson             | [&#34;marshmallow&#34;,&#34;dark chocolate&#34;]                             | 9      | 1   | dark chocolate   | text | dark chocolate   | 14 |        | $[1]    | $    |&#xA;+--------------+------------------------+--------------------------------------------------------------+--------+-----+------------------+------+------------------+----+--------+---------+------+&#xA;&#xA;```&#xA;&#xA;It just automagically joins each original row to the rows that `json_each` expands out of that row&#39;s JSON-list column.&#xA;&#xA;But we don&#39;t want everything, we just want the passenger and the mixin names. 
And let&#39;s also include the index just to see the data.&#xA;&#xA;```sql&#xA;SELECT passengers.passenger_name, mixin.key AS mixin_index, mixin.value AS mixin&#xA;FROM passengers&#xA;JOIN json_each(passengers.favorite_mixins) AS mixin&#xA;ORDER BY passengers.passenger_name;&#xA;```&#xA;&#xA;```&#xA;sqlite&gt; SELECT passengers.passenger_name, mixin.key AS mixin_index, mixin.value AS mixin&#xA;FROM passengers&#xA;JOIN json_each(passengers.favorite_mixins) AS mixin&#xA;ORDER BY passengers.passenger_name;&#xA;+------------------------+-------------+------------------+&#xA;|     passenger_name     | mixin_index |      mixin       |&#xA;+------------------------+-------------+------------------+&#xA;| Aaron Ashcroft         | 0           | dark chocolate   |&#xA;| Aaron Ashcroft         | 1           | marshmallow      |&#xA;| Abigail Worthington    | 0           | dark chocolate   |&#xA;| Abigail Worthington    | 1           | caramel drizzle  |&#xA;| Aiden Stanfield        | 0           | peppermint       |&#xA;| Aiden Stanfield        | 1           | crispy rice      |&#xA;| Alexander Stratton     | 0           | peppermint       |&#xA;| Alexander Stratton     | 1           | shaved chocolate |&#xA;| Alexander Stratton     | 2           | vanilla foam     |&#xA;| Alexander Stratton     | 3           | white chocolate  |&#xA;...&#xA;...&#xA;| Yara Haddad            | 1           | dark chocolate   |&#xA;| Youssef El-Sayed       | 0           | peppermint       |&#xA;| Youssef El-Sayed       | 1           | marshmallow      |&#xA;| Zachary Collins        | 0           | dark chocolate   |&#xA;| Zachary Collins        | 1           | caramel drizzle  |&#xA;| Zachary Collins        | 2           | vanilla foam     |&#xA;| Zachary Stafford       | 0           | shaved chocolate |&#xA;| Zara Sheikh            | 0           | vanilla foam     |&#xA;| Zara Sheikh            | 1           | crispy rice      |&#xA;| Zara Sheikh            | 2           | peppermint       
|&#xA;| Zoe Wilson             | 0           | marshmallow      |&#xA;| Zoe Wilson             | 1           | dark chocolate   |&#xA;+------------------------+-------------+------------------+&#xA;&#xA;```&#xA;&#xA;That looks good.&#xA;&#xA;Now what?&#xA;&#xA;We also need to do it for the cocoa_cars with `available_mixins`&#xA;&#xA;```sql&#xA;SELECT &#xA;    *&#xA;FROM cocoa_cars &#xA;JOIN &#xA;    json_each(available_mixins) AS mixin&#xA;ORDER BY total_stock DESC;&#xA;```&#xA;OR&#xA;&#xA;```sql&#xA;SELECT &#xA;    car_id,&#xA;    mixin.key AS mixin_index,&#xA;    mixin.value AS mixin &#xA;FROM cocoa_cars &#xA;JOIN&#xA;    json_each(available_mixins) as mixin &#xA;ORDER BY total_stock DESC;&#xA;```&#xA;&#xA;```&#xA;sqlite&gt; SELECT car_id, mixin.key as mixin_index, mixin.value as mixin from cocoa_cars join json_each(available_mixins) as mixin ORDER BY total_stock DESC;&#xA;+--------+-------------+------------------+&#xA;| car_id | mixin_index |      mixin       |&#xA;+--------+-------------+------------------+&#xA;| 5      | 0           | white chocolate  |&#xA;| 5      | 1           | shaved chocolate |&#xA;| 2      | 0           | cinnamon         |&#xA;| 2      | 1           | marshmallow      |&#xA;| 2      | 2           | caramel drizzle  |&#xA;| 9      | 0           | crispy rice      |&#xA;| 9      | 1           | peppermint       |&#xA;| 9      | 2           | caramel drizzle  |&#xA;| 9      | 3           | shaved chocolate |&#xA;| 4      | 0           | shaved chocolate |&#xA;| 4      | 1           | white chocolate  |&#xA;| 8      | 0           | vanilla foam     |&#xA;| 8      | 1           | marshmallow      |&#xA;| 1      | 0           | peppermint       |&#xA;| 1      | 1           | crispy rice      |&#xA;| 6      | 0           | shaved chocolate |&#xA;| 6      | 1           | dark chocolate   |&#xA;| 6      | 2           | crispy rice      |&#xA;| 6      | 3           | cinnamon         |&#xA;| 6      | 4           | peppermint       |&#xA;| 7     
 | 0           | caramel drizzle  |&#xA;| 7      | 1           | crispy rice      |&#xA;| 7      | 2           | marshmallow      |&#xA;| 7      | 3           | vanilla foam     |&#xA;| 7      | 4           | cinnamon         |&#xA;| 3      | 0           | vanilla foam     |&#xA;| 3      | 1           | peppermint       |&#xA;+--------+-------------+------------------+&#xA;sqlite&gt; &#xA;```&#xA;&#xA;But hold on! We only needed it for the top 3 stocked cars.&#xA;&#xA;So, how do we do it? We could (as a quick and dirty hack) hardcode a filter on the car_ids 5, 2 and 9, but that is bad practice.&#xA;&#xA;We need to build this table dynamically.&#xA;&#xA;CTEs to the rescue!&#xA;&#xA;```sql&#xA;WITH stocked_cars as (&#xA;    SELECT * FROM cocoa_cars ORDER BY total_stock DESC LIMIT 3&#xA;)&#xA;SELECT &#xA;    *&#xA;FROM stocked_cars&#xA;JOIN json_each(stocked_cars.available_mixins);&#xA;```&#xA;&#xA;OR&#xA;&#xA;```sql&#xA;WITH stocked_cars as (&#xA;    SELECT * FROM cocoa_cars ORDER BY total_stock DESC LIMIT 3&#xA;)&#xA;SELECT &#xA;    car_id,&#xA;    car_mixins.key as car_mixin_index,&#xA;    car_mixins.value as car_mixin&#xA;FROM stocked_cars&#xA;JOIN json_each(stocked_cars.available_mixins) as car_mixins;&#xA;```&#xA;&#xA;&#xA;```&#xA;sqlite&gt; WITH stocked_cars as (&#xA;(x1...&gt; select * from cocoa_cars ORDER BY total_stock DESC LIMIT 3)&#xA;   ...&gt; select * from stocked_cars JOIN json_each(stocked_cars.available_mixins);&#xA;+--------+--------------------------------------------------------------+-------------+-----+------------------+------+------------------+----+--------+---------+------+&#xA;| car_id |                       available_mixins                       | total_stock | key |      value       | type |       atom       | id | parent | fullkey | path |&#xA;+--------+--------------------------------------------------------------+-------------+-----+------------------+------+------------------+----+--------+---------+------+&#xA;| 5      | [&#34;white 
chocolate&#34;,&#34;shaved chocolate&#34;]                       | 412         | 0   | white chocolate  | text | white chocolate  | 2  |        | $[0]    | $    |&#xA;+--------+--------------------------------------------------------------+-------------+-----+------------------+------+------------------+----+--------+---------+------+&#xA;| 5      | [&#34;white chocolate&#34;,&#34;shaved chocolate&#34;]                       | 412         | 1   | shaved chocolate | text | shaved chocolate | 19 |        | $[1]    | $    |&#xA;+--------+--------------------------------------------------------------+-------------+-----+------------------+------+------------------+----+--------+---------+------+&#xA;| 2      | [&#34;cinnamon&#34;,&#34;marshmallow&#34;,&#34;caramel drizzle&#34;]                 | 359         | 0   | cinnamon         | text | cinnamon         | 2  |        | $[0]    | $    |&#xA;+--------+--------------------------------------------------------------+-------------+-----+------------------+------+------------------+----+--------+---------+------+&#xA;| 2      | [&#34;cinnamon&#34;,&#34;marshmallow&#34;,&#34;caramel drizzle&#34;]                 | 359         | 1   | marshmallow      | text | marshmallow      | 11 |        | $[1]    | $    |&#xA;+--------+--------------------------------------------------------------+-------------+-----+------------------+------+------------------+----+--------+---------+------+&#xA;| 2      | [&#34;cinnamon&#34;,&#34;marshmallow&#34;,&#34;caramel drizzle&#34;]                 | 359         | 2   | caramel drizzle  | text | caramel drizzle  | 23 |        | $[2]    | $    |&#xA;+--------+--------------------------------------------------------------+-------------+-----+------------------+------+------------------+----+--------+---------+------+&#xA;| 9      | [&#34;crispy rice&#34;,&#34;peppermint&#34;,&#34;caramel drizzle&#34;,&#34;shaved chocol | 354         | 0   | crispy rice      | text | crispy rice      | 2  |        
| $[0]    | $    |&#xA;|        | ate&#34;]                                                        |             |     |                  |      |                  |    |        |         |      |&#xA;+--------+--------------------------------------------------------------+-------------+-----+------------------+------+------------------+----+--------+---------+------+&#xA;| 9      | [&#34;crispy rice&#34;,&#34;peppermint&#34;,&#34;caramel drizzle&#34;,&#34;shaved chocol | 354         | 1   | peppermint       | text | peppermint       | 14 |        | $[1]    | $    |&#xA;|        | ate&#34;]                                                        |             |     |                  |      |                  |    |        |         |      |&#xA;+--------+--------------------------------------------------------------+-------------+-----+------------------+------+------------------+----+--------+---------+------+&#xA;| 9      | [&#34;crispy rice&#34;,&#34;peppermint&#34;,&#34;caramel drizzle&#34;,&#34;shaved chocol | 354         | 2   | caramel drizzle  | text | caramel drizzle  | 25 |        | $[2]    | $    |&#xA;|        | ate&#34;]                                                        |             |     |                  |      |                  |    |        |         |      |&#xA;+--------+--------------------------------------------------------------+-------------+-----+------------------+------+------------------+----+--------+---------+------+&#xA;| 9      | [&#34;crispy rice&#34;,&#34;peppermint&#34;,&#34;caramel drizzle&#34;,&#34;shaved chocol | 354         | 3   | shaved chocolate | text | shaved chocolate | 42 |        | $[3]    | $    |&#xA;|        | ate&#34;]                                                        |             |     |                  |      |                  |    |        |         |      
|&#xA;+--------+--------------------------------------------------------------+-------------+-----+------------------+------+------------------+----+--------+---------+------+&#xA;sqlite&gt; &#xA;sqlite&gt; WITH stocked_cars as (&#xA;    SELECT * FROM cocoa_cars ORDER BY total_stock DESC LIMIT 3&#xA;)&#xA;SELECT &#xA;    car_id,&#xA;    car_mixins.key as car_mixin_index,&#xA;    car_mixins.value as car_mixin&#xA;FROM stocked_cars&#xA;JOIN json_each(stocked_cars.available_mixins) as car_mixins;&#xA;+--------+-----------------+------------------+&#xA;| car_id | car_mixin_index |    car_mixin     |&#xA;+--------+-----------------+------------------+&#xA;| 5      | 0               | white chocolate  |&#xA;| 5      | 1               | shaved chocolate |&#xA;| 2      | 0               | cinnamon         |&#xA;| 2      | 1               | marshmallow      |&#xA;| 2      | 2               | caramel drizzle  |&#xA;| 9      | 0               | crispy rice      |&#xA;| 9      | 1               | peppermint       |&#xA;| 9      | 2               | caramel drizzle  |&#xA;| 9      | 3               | shaved chocolate |&#xA;+--------+-----------------+------------------+&#xA;sqlite&gt; &#xA;&#xA;```&#xA;&#xA;Now, what&#39;s next?&#xA;&#xA;We simply need to combine all of it:&#xA;&#xA;- Grab the top 3 cars&#xA;- Grab each passenger&#39;s favorite mixins (expanded with json_each)&#xA;- Grab each car&#39;s available mixins (expanded with json_each, using the top 3 cars as a CTE)&#xA;- JOIN them wherever a car&#39;s available mixin matches one of the passenger&#39;s favorite mixins.&#xA;&#xA;Note that in SQLite a bare JOIN with no ON clause behaves as a cross join, so the final ON condition is what filters the combinations down to the matching mixins.&#xA;&#xA;```sql&#xA;WITH stocked_cars as (&#xA;    SELECT * FROM cocoa_cars ORDER BY total_stock DESC LIMIT 3&#xA;)&#xA;SELECT passengers.passenger_name, stocked_cars.car_id&#xA;FROM passengers&#xA;JOIN json_each(passengers.favorite_mixins) as passenger_mixin&#xA;JOIN stocked_cars&#xA;JOIN json_each(stocked_cars.available_mixins) AS available_mixin&#xA;    ON passenger_mixin.value = available_mixin.value;&#xA;```&#xA;&#xA;OR&#xA;&#xA;Select some 
more columns for visual confirmation:&#xA;&#xA;```sql&#xA;WITH stocked_cars as (&#xA;    SELECT * FROM cocoa_cars ORDER BY total_stock DESC LIMIT 3&#xA;)&#xA;SELECT passengers.passenger_name, passengers.favorite_mixins, stocked_cars.available_mixins, passenger_mixin.value as passenger_mixin, available_mixin.value as available_mixin, stocked_cars.car_id&#xA;FROM passengers&#xA;JOIN json_each(passengers.favorite_mixins) as passenger_mixin&#xA;JOIN stocked_cars&#xA;JOIN json_each(stocked_cars.available_mixins) AS available_mixin&#xA;    ON passenger_mixin.value = available_mixin.value;&#xA;```&#xA;&#xA;OR&#xA;&#xA;Select everything:&#xA;&#xA;```sql&#xA;WITH stocked_cars as (&#xA;    SELECT * FROM cocoa_cars ORDER BY total_stock DESC LIMIT 3&#xA;)&#xA;SELECT *&#xA;FROM passengers&#xA;JOIN json_each(passengers.favorite_mixins) as passenger_mixin&#xA;JOIN stocked_cars&#xA;JOIN json_each(stocked_cars.available_mixins) AS available_mixin&#xA;    ON passenger_mixin.value = available_mixin.value;&#xA;```&#xA;&#xA;&#xA;```&#xA;sqlite&gt; WITH stocked_cars as (&#xA;    SELECT * FROM cocoa_cars ORDER BY total_stock DESC LIMIT 3&#xA;)&#xA;SELECT passengers.passenger_name, stocked_cars.car_id&#xA;FROM passengers  &#xA;JOIN json_each(passengers.favorite_mixins) as passenger_mixin&#xA;JOIN stocked_cars  &#xA;JOIN json_each(stocked_cars.available_mixins) AS available_mixin&#xA;    ON passenger_mixin.value = available_mixin.value;&#xA;&#xA;+------------------------+--------+&#xA;|     passenger_name     | car_id |&#xA;+------------------------+--------+&#xA;| Mateo Cruz             | 5      |&#xA;| Mateo Cruz             | 5      |&#xA;| Nia Grant              | 5      |&#xA;| Hiro Tanaka            | 5      |&#xA;| Ravi Patel             | 5      |&#xA;| Ravi Patel             | 5      |&#xA;| Elena Morales          | 5      |&#xA;| Elena Morales          | 5      |&#xA;| Diego Ramos            | 5      |&#xA;| Caleb Osei             | 5      |&#xA;| Caleb Osei             | 
5      |&#xA;| Lucas Ford             | 5      |&#xA;| Yara Haddad            | 5      |&#xA;| Tariq Hassan           | 5      |&#xA;| Eva Schmidt            | 5      |&#xA;| Ingrid Nilsen          | 5      |&#xA;| Sophia Rossi           | 5      |&#xA;| Olivia Dubois          | 5      |&#xA;| Emma Svensson          | 5      |&#xA;| Isabella Laurent       | 5      |&#xA;| James Kim              | 5      |&#xA;...&#xA;...&#xA;| Griffin Hartwell       | 9      |&#xA;| Griffin Hartwell       | 9      |&#xA;| Megan Redfield         | 9      |&#xA;| Megan Redfield         | 9      |&#xA;| Grayson Westmore       | 9      |&#xA;| Nicole Ashridge        | 9      |&#xA;| Sawyer Hollingsworth   | 9      |&#xA;| Sawyer Hollingsworth   | 9      |&#xA;| Alexis Thorndale       | 9      |&#xA;| Declan Summerfield     | 9      |&#xA;| Declan Summerfield     | 9      |&#xA;| Samantha Brightwood    | 9      |&#xA;| Samantha Brightwood    | 9      |&#xA;| Tristan Ashbrook       | 9      |&#xA;| Lauren Silverton       | 9      |&#xA;| Landon Whitworth       | 9      |&#xA;| Kayla Mansfield        | 9      |&#xA;| Kayla Mansfield        | 9      |&#xA;+------------------------+--------+&#xA;sqlite&gt;&#xA;&#xA;&#xA;sqlite&gt; WITH stocked_cars as (&#xA;    SELECT * FROM cocoa_cars ORDER BY total_stock DESC LIMIT 3&#xA;)&#xA;SELECT passengers.passenger_name, passengers.favorite_mixins, stocked_cars.available_mixins, passenger_mixin.value as passenger_mixin, available_mixin.value as available_mixin, stocked_cars.car_id&#xA;FROM passengers&#xA;JOIN json_each(passengers.favorite_mixins) as passenger_mixin&#xA;JOIN stocked_cars&#xA;JOIN json_each(stocked_cars.available_mixins) AS available_mixin&#xA;    ON passenger_mixin.value = available_mixin.value;&#xA;+------------------------+--------------------------------------------------------------+--------------------------------------------------------------+------------------+------------------+--------+&#xA;|     passenger_name     |         
              favorite_mixins                        |                       available_mixins                       | passenger_mixin  | available_mixin  | car_id |&#xA;+------------------------+--------------------------------------------------------------+--------------------------------------------------------------+------------------+------------------+--------+&#xA;| Mateo Cruz             | [&#34;caramel drizzle&#34;,&#34;shaved chocolate&#34;,&#34;white chocolate&#34;]     | [&#34;white chocolate&#34;,&#34;shaved chocolate&#34;]                       | shaved chocolate | shaved chocolate | 5      |&#xA;+------------------------+--------------------------------------------------------------+--------------------------------------------------------------+------------------+------------------+--------+&#xA;| Mateo Cruz             | [&#34;caramel drizzle&#34;,&#34;shaved chocolate&#34;,&#34;white chocolate&#34;]     | [&#34;white chocolate&#34;,&#34;shaved chocolate&#34;]                       | white chocolate  | white chocolate  | 5      |&#xA;+------------------------+--------------------------------------------------------------+--------------------------------------------------------------+------------------+------------------+--------+&#xA;| Nia Grant              | [&#34;shaved chocolate&#34;]                                         | [&#34;white chocolate&#34;,&#34;shaved chocolate&#34;]                       | shaved chocolate | shaved chocolate | 5      |&#xA;+------------------------+--------------------------------------------------------------+--------------------------------------------------------------+------------------+------------------+--------+&#xA;| Hiro Tanaka            | [&#34;shaved chocolate&#34;]                                         | [&#34;white chocolate&#34;,&#34;shaved chocolate&#34;]                       | shaved chocolate | shaved chocolate | 5      
|&#xA;+------------------------+--------------------------------------------------------------+--------------------------------------------------------------+------------------+------------------+--------+&#xA;| Ravi Patel             | [&#34;caramel drizzle&#34;,&#34;shaved chocolate&#34;,&#34;white chocolate&#34;]     | [&#34;white chocolate&#34;,&#34;shaved chocolate&#34;]                       | shaved chocolate | shaved chocolate | 5      |&#xA;+------------------------+--------------------------------------------------------------+--------------------------------------------------------------+------------------+------------------+--------+&#xA;| Ravi Patel             | [&#34;caramel drizzle&#34;,&#34;shaved chocolate&#34;,&#34;white chocolate&#34;]     | [&#34;white chocolate&#34;,&#34;shaved chocolate&#34;]                       | white chocolate  | white chocolate  | 5      |&#xA;+------------------------+--------------------------------------------------------------+--------------------------------------------------------------+------------------+------------------+--------+&#xA;| Elena Morales          | [&#34;white chocolate&#34;,&#34;shaved chocolate&#34;,&#34;caramel drizzle&#34;]     | [&#34;white chocolate&#34;,&#34;shaved chocolate&#34;]                       | white chocolate  | white chocolate  | 5      |&#xA;+------------------------+--------------------------------------------------------------+--------------------------------------------------------------+------------------+------------------+--------+&#xA;| Elena Morales          | [&#34;white chocolate&#34;,&#34;shaved chocolate&#34;,&#34;caramel drizzle&#34;]     | [&#34;white chocolate&#34;,&#34;shaved chocolate&#34;]                       | shaved chocolate | shaved chocolate | 5      
|&#xA;+------------------------+--------------------------------------------------------------+--------------------------------------------------------------+------------------+------------------+--------+&#xA;| Diego Ramos            | [&#34;shaved chocolate&#34;]                                         | [&#34;white chocolate&#34;,&#34;shaved chocolate&#34;]                       | shaved chocolate | shaved chocolate | 5      |&#xA;+------------------------+--------------------------------------------------------------+--------------------------------------------------------------+------------------+------------------+--------+&#xA;| Caleb Osei             | [&#34;shaved chocolate&#34;,&#34;dark chocolate&#34;,&#34;white chocolate&#34;]      | [&#34;white chocolate&#34;,&#34;shaved chocolate&#34;]                       | shaved chocolate | shaved chocolate | 5      |&#xA;+------------------------+--------------------------------------------------------------+--------------------------------------------------------------+------------------+------------------+--------+&#xA;| Caleb Osei             | [&#34;shaved chocolate&#34;,&#34;dark chocolate&#34;,&#34;white chocolate&#34;]      | [&#34;white chocolate&#34;,&#34;shaved chocolate&#34;]                       | white chocolate  | white chocolate  | 5      |&#xA;+------------------------+--------------------------------------------------------------+--------------------------------------------------------------+------------------+------------------+--------+&#xA;| Lucas Ford             | [&#34;vanilla foam&#34;,&#34;white chocolate&#34;,&#34;cinnamon&#34;]                | [&#34;white chocolate&#34;,&#34;shaved chocolate&#34;]                       | white chocolate  | white chocolate  | 5      |&#xA;+------------------------+--------------------------------------------------------------+--------------------------------------------------------------+------------------+------------------+--------+&#xA;| Yara Haddad 
           | [&#34;white chocolate&#34;,&#34;dark chocolate&#34;]                         | [&#34;white chocolate&#34;,&#34;shaved chocolate&#34;]                       | white chocolate  | white chocolate  | 5      |&#xA;+------------------------+--------------------------------------------------------------+--------------------------------------------------------------+------------------+------------------+--------+&#xA;| Tariq Hassan           | [&#34;dark chocolate&#34;,&#34;crispy rice&#34;,&#34;white chocolate&#34;,&#34;peppermin | [&#34;white chocolate&#34;,&#34;shaved chocolate&#34;]                       | white chocolate  | white chocolate  | 5      |&#xA;|                        | t&#34;]                                                          |                                                              |                  |                  |        |&#xA;+------------------------+--------------------------------------------------------------+--------------------------------------------------------------+------------------+------------------+--------+&#xA;| Eva Schmidt            | [&#34;white chocolate&#34;,&#34;marshmallow&#34;]                            | [&#34;white chocolate&#34;,&#34;shaved chocolate&#34;]                       | white chocolate  | white chocolate  | 5      |&#xA;+------------------------+--------------------------------------------------------------+--------------------------------------------------------------+------------------+------------------+--------+&#xA;...&#xA;...&#xA;+------------------------+--------------------------------------------------------------+--------------------------------------------------------------+------------------+------------------+--------+&#xA;| Sawyer Hollingsworth   | [&#34;crispy rice&#34;,&#34;dark chocolate&#34;,&#34;peppermint&#34;]                | [&#34;crispy rice&#34;,&#34;peppermint&#34;,&#34;caramel drizzle&#34;,&#34;shaved chocol | crispy rice      | crispy rice      | 9      
|&#xA;|                        |                                                              | ate&#34;]                                                        |                  |                  |        |&#xA;+------------------------+--------------------------------------------------------------+--------------------------------------------------------------+------------------+------------------+--------+&#xA;| Sawyer Hollingsworth   | [&#34;crispy rice&#34;,&#34;dark chocolate&#34;,&#34;peppermint&#34;]                | [&#34;crispy rice&#34;,&#34;peppermint&#34;,&#34;caramel drizzle&#34;,&#34;shaved chocol | peppermint       | peppermint       | 9      |&#xA;|                        |                                                              | ate&#34;]                                                        |                  |                  |        |&#xA;+------------------------+--------------------------------------------------------------+--------------------------------------------------------------+------------------+------------------+--------+&#xA;| Alexis Thorndale       | [&#34;vanilla foam&#34;,&#34;crispy rice&#34;,&#34;dark chocolate&#34;]              | [&#34;crispy rice&#34;,&#34;peppermint&#34;,&#34;caramel drizzle&#34;,&#34;shaved chocol | crispy rice      | crispy rice      | 9      |&#xA;|                        |                                                              | ate&#34;]                                                        |                  |                  |        |&#xA;+------------------------+--------------------------------------------------------------+--------------------------------------------------------------+------------------+------------------+--------+&#xA;| Declan Summerfield     | [&#34;dark chocolate&#34;,&#34;marshmallow&#34;,&#34;crispy rice&#34;,&#34;peppermint&#34;]  | [&#34;crispy rice&#34;,&#34;peppermint&#34;,&#34;caramel drizzle&#34;,&#34;shaved chocol | crispy rice      | crispy rice   
   | 9      |&#xA;|                        |                                                              | ate&#34;]                                                        |                  |                  |        |&#xA;+------------------------+--------------------------------------------------------------+--------------------------------------------------------------+------------------+------------------+--------+&#xA;| Declan Summerfield     | [&#34;dark chocolate&#34;,&#34;marshmallow&#34;,&#34;crispy rice&#34;,&#34;peppermint&#34;]  | [&#34;crispy rice&#34;,&#34;peppermint&#34;,&#34;caramel drizzle&#34;,&#34;shaved chocol | peppermint       | peppermint       | 9      |&#xA;|                        |                                                              | ate&#34;]                                                        |                  |                  |        |&#xA;+------------------------+--------------------------------------------------------------+--------------------------------------------------------------+------------------+------------------+--------+&#xA;| Samantha Brightwood    | [&#34;crispy rice&#34;,&#34;peppermint&#34;,&#34;dark chocolate&#34;]                | [&#34;crispy rice&#34;,&#34;peppermint&#34;,&#34;caramel drizzle&#34;,&#34;shaved chocol | crispy rice      | crispy rice      | 9      |&#xA;|                        |                                                              | ate&#34;]                                                        |                  |                  |        |&#xA;+------------------------+--------------------------------------------------------------+--------------------------------------------------------------+------------------+------------------+--------+&#xA;| Samantha Brightwood    | [&#34;crispy rice&#34;,&#34;peppermint&#34;,&#34;dark chocolate&#34;]                | [&#34;crispy rice&#34;,&#34;peppermint&#34;,&#34;caramel drizzle&#34;,&#34;shaved chocol | peppermint       | 
peppermint       | 9      |&#xA;|                        |                                                              | ate&#34;]                                                        |                  |                  |        |&#xA;+------------------------+--------------------------------------------------------------+--------------------------------------------------------------+------------------+------------------+--------+&#xA;| Tristan Ashbrook       | [&#34;crispy rice&#34;]                                              | [&#34;crispy rice&#34;,&#34;peppermint&#34;,&#34;caramel drizzle&#34;,&#34;shaved chocol | crispy rice      | crispy rice      | 9      |&#xA;|                        |                                                              | ate&#34;]                                                        |                  |                  |        |&#xA;+------------------------+--------------------------------------------------------------+--------------------------------------------------------------+------------------+------------------+--------+&#xA;| Lauren Silverton       | [&#34;caramel drizzle&#34;,&#34;vanilla foam&#34;,&#34;cinnamon&#34;,&#34;dark chocolate | [&#34;crispy rice&#34;,&#34;peppermint&#34;,&#34;caramel drizzle&#34;,&#34;shaved chocol | caramel drizzle  | caramel drizzle  | 9      |&#xA;|                        | &#34;]                                                           | ate&#34;]                                                        |                  |                  |        |&#xA;+------------------------+--------------------------------------------------------------+--------------------------------------------------------------+------------------+------------------+--------+&#xA;| Landon Whitworth       | [&#34;caramel drizzle&#34;,&#34;vanilla foam&#34;]                           | [&#34;crispy rice&#34;,&#34;peppermint&#34;,&#34;caramel drizzle&#34;,&#34;shaved chocol | caramel drizzle  | caramel 
drizzle  | 9      |&#xA;|                        |                                                              | ate&#34;]                                                        |                  |                  |        |&#xA;+------------------------+--------------------------------------------------------------+--------------------------------------------------------------+------------------+------------------+--------+&#xA;| Kayla Mansfield        | [&#34;vanilla foam&#34;,&#34;peppermint&#34;,&#34;shaved chocolate&#34;]             | [&#34;crispy rice&#34;,&#34;peppermint&#34;,&#34;caramel drizzle&#34;,&#34;shaved chocol | peppermint       | peppermint       | 9      |&#xA;|                        |                                                              | ate&#34;]                                                        |                  |                  |        |&#xA;+------------------------+--------------------------------------------------------------+--------------------------------------------------------------+------------------+------------------+--------+&#xA;| Kayla Mansfield        | [&#34;vanilla foam&#34;,&#34;peppermint&#34;,&#34;shaved chocolate&#34;]             | [&#34;crispy rice&#34;,&#34;peppermint&#34;,&#34;caramel drizzle&#34;,&#34;shaved chocol | shaved chocolate | shaved chocolate | 9      |&#xA;|                        |                                                              | ate&#34;]                                                        |                  |                  |        |&#xA;+------------------------+--------------------------------------------------------------+--------------------------------------------------------------+------------------+------------------+--------+&#xA;sqlite&gt; &#xA;&#xA;&#xA;&#xA;&#xA;&#xA;sqlite&gt; WITH stocked_cars as (&#xA;    SELECT * FROM cocoa_cars ORDER BY total_stock DESC LIMIT 3&#xA;)&#xA;SELECT *                                             &#xA;FROM 
passengers&#xA;JOIN json_each(passengers.favorite_mixins) as passenger_mixin&#xA;JOIN stocked_cars&#xA;JOIN json_each(stocked_cars.available_mixins) AS available_mixin&#xA;    ON passenger_mixin.value = available_mixin.value;&#xA;+--------------+------------------------+--------------------------------------------------------------+--------+-----+------------------+------+------------------+----+--------+---------+------+--------+--------------------------------------------------------------+-------------+-----+------------------+------+------------------+----+--------+---------+------+&#xA;| passenger_id |     passenger_name     |                       favorite_mixins                        | car_id | key |      value       | type |       atom       | id | parent | fullkey | path | car_id |                       available_mixins                       | total_stock | key |      value       | type |       atom       | id | parent | fullkey | path |&#xA;+--------------+------------------------+--------------------------------------------------------------+--------+-----+------------------+------+------------------+----+--------+---------+------+--------+--------------------------------------------------------------+-------------+-----+------------------+------+------------------+----+--------+---------+------+&#xA;| 2            | Mateo Cruz             | [&#34;caramel drizzle&#34;,&#34;shaved chocolate&#34;,&#34;white chocolate&#34;]     | 2      | 1   | shaved chocolate | text | shaved chocolate | 19 |        | $[1]    | $    | 5      | [&#34;white chocolate&#34;,&#34;shaved chocolate&#34;]                       | 412         | 1   | shaved chocolate | text | shaved chocolate | 19 |        | $[1]    | $    
|&#xA;+--------------+------------------------+--------------------------------------------------------------+--------+-----+------------------+------+------------------+----+--------+---------+------+--------+--------------------------------------------------------------+-------------+-----+------------------+------+------------------+----+--------+---------+------+&#xA;| 2            | Mateo Cruz             | [&#34;caramel drizzle&#34;,&#34;shaved chocolate&#34;,&#34;white chocolate&#34;]     | 2      | 2   | white chocolate  | text | white chocolate  | 37 |        | $[2]    | $    | 5      | [&#34;white chocolate&#34;,&#34;shaved chocolate&#34;]                       | 412         | 0   | white chocolate  | text | white chocolate  | 2  |        | $[0]    | $    |&#xA;+--------------+------------------------+--------------------------------------------------------------+--------+-----+------------------+------+------------------+----+--------+---------+------+--------+--------------------------------------------------------------+-------------+-----+------------------+------+------------------+----+--------+---------+------+&#xA;| 3            | Nia Grant              | [&#34;shaved chocolate&#34;]                                         | 5      | 0   | shaved chocolate | text | shaved chocolate | 2  |        | $[0]    | $    | 5      | [&#34;white chocolate&#34;,&#34;shaved chocolate&#34;]                       | 412         | 1   | shaved chocolate | text | shaved chocolate | 19 |        | $[1]    | $    |&#xA;+--------------+------------------------+--------------------------------------------------------------+--------+-----+------------------+------+------------------+----+--------+---------+------+--------+--------------------------------------------------------------+-------------+-----+------------------+------+------------------+----+--------+---------+------+&#xA;| 4            | Hiro Tanaka            | [&#34;shaved chocolate&#34;]                    
                     | 2      | 0   | shaved chocolate | text | shaved chocolate | 2  |        | $[0]    | $    | 5      | [&#34;white chocolate&#34;,&#34;shaved chocolate&#34;]                       | 412         | 1   | shaved chocolate | text | shaved chocolate | 19 |        | $[1]    | $    |&#xA;+--------------+------------------------+--------------------------------------------------------------+--------+-----+------------------+------+------------------+----+--------+---------+------+--------+--------------------------------------------------------------+-------------+-----+------------------+------+------------------+----+--------+---------+------+&#xA;| 6            | Ravi Patel             | [&#34;caramel drizzle&#34;,&#34;shaved chocolate&#34;,&#34;white chocolate&#34;]     | 5      | 1   | shaved chocolate | text | shaved chocolate | 19 |        | $[1]    | $    | 5      | [&#34;white chocolate&#34;,&#34;shaved chocolate&#34;]                       | 412         | 1   | shaved chocolate | text | shaved chocolate | 19 |        | $[1]    | $    |&#xA;+--------------+------------------------+--------------------------------------------------------------+--------+-----+------------------+------+------------------+----+--------+---------+------+--------+--------------------------------------------------------------+-------------+-----+------------------+------+------------------+----+--------+---------+------+&#xA;| 6            | Ravi Patel             | [&#34;caramel drizzle&#34;,&#34;shaved chocolate&#34;,&#34;white chocolate&#34;]     | 5      | 2   | white chocolate  | text | white chocolate  | 37 |        | $[2]    | $    | 5      | [&#34;white chocolate&#34;,&#34;shaved chocolate&#34;]                       | 412         | 0   | white chocolate  | text | white chocolate  | 2  |        | $[0]    | $    
|&#xA;+--------------+------------------------+--------------------------------------------------------------+--------+-----+------------------+------+------------------+----+--------+---------+------+--------+--------------------------------------------------------------+-------------+-----+------------------+------+------------------+----+--------+---------+------+&#xA;| 9            | Elena Morales          | [&#34;white chocolate&#34;,&#34;shaved chocolate&#34;,&#34;caramel drizzle&#34;]     | 6      | 0   | white chocolate  | text | white chocolate  | 2  |        | $[0]    | $    | 5      | [&#34;white chocolate&#34;,&#34;shaved chocolate&#34;]                       | 412         | 0   | white chocolate  | text | white chocolate  | 2  |        | $[0]    | $    |&#xA;+--------------+------------------------+--------------------------------------------------------------+--------+-----+------------------+------+------------------+----+--------+---------+------+--------+--------------------------------------------------------------+-------------+-----+------------------+------+------------------+----+--------+---------+------+&#xA;| 9            | Elena Morales          | [&#34;white chocolate&#34;,&#34;shaved chocolate&#34;,&#34;caramel drizzle&#34;]     | 6      | 1   | shaved chocolate | text | shaved chocolate | 19 |        | $[1]    | $    | 5      | [&#34;white chocolate&#34;,&#34;shaved chocolate&#34;]                       | 412         | 1   | shaved chocolate | text | shaved chocolate | 19 |        | $[1]    | $    |&#xA;+--------------+------------------------+--------------------------------------------------------------+--------+-----+------------------+------+------------------+----+--------+---------+------+--------+--------------------------------------------------------------+-------------+-----+------------------+------+------------------+----+--------+---------+------+&#xA;| 10           | Diego Ramos            | [&#34;shaved chocolate&#34;]    
                                     | 1      | 0   | shaved chocolate | text | shaved chocolate | 2  |        | $[0]    | $    | 5      | [&#34;white chocolate&#34;,&#34;shaved chocolate&#34;]                       | 412         | 1   | shaved chocolate | text | shaved chocolate | 19 |        | $[1]    | $    |&#xA;...&#xA;...&#xA;+--------------+------------------------+--------------------------------------------------------------+--------+-----+------------------+------+------------------+----+--------+---------+------+--------+--------------------------------------------------------------+-------------+-----+------------------+------+------------------+----+--------+---------+------+&#xA;| 171          | Sawyer Hollingsworth   | [&#34;crispy rice&#34;,&#34;dark chocolate&#34;,&#34;peppermint&#34;]                | 8      | 0   | crispy rice      | text | crispy rice      | 2  |        | $[0]    | $    | 9      | [&#34;crispy rice&#34;,&#34;peppermint&#34;,&#34;caramel drizzle&#34;,&#34;shaved chocol | 354         | 0   | crispy rice      | text | crispy rice      | 2  |        | $[0]    | $    |&#xA;|              |                        |                                                              |        |     |                  |      |                  |    |        |         |      |        | ate&#34;]                                                        |             |     |                  |      |                  |    |        |         |      |&#xA;+--------------+------------------------+--------------------------------------------------------------+--------+-----+------------------+------+------------------+----+--------+---------+------+--------+--------------------------------------------------------------+-------------+-----+------------------+------+------------------+----+--------+---------+------+&#xA;| 171          | Sawyer Hollingsworth   | [&#34;crispy rice&#34;,&#34;dark chocolate&#34;,&#34;peppermint&#34;]                | 8      | 
2   | peppermint       | text | peppermint       | 30 |        | $[2]    | $    | 9      | [&#34;crispy rice&#34;,&#34;peppermint&#34;,&#34;caramel drizzle&#34;,&#34;shaved chocol | 354         | 1   | peppermint       | text | peppermint       | 14 |        | $[1]    | $    |&#xA;|              |                        |                                                              |        |     |                  |      |                  |    |        |         |      |        | ate&#34;]                                                        |             |     |                  |      |                  |    |        |         |      |&#xA;+--------------+------------------------+--------------------------------------------------------------+--------+-----+------------------+------+------------------+----+--------+---------+------+--------+--------------------------------------------------------------+-------------+-----+------------------+------+------------------+----+--------+---------+------+&#xA;| 172          | Alexis Thorndale       | [&#34;vanilla foam&#34;,&#34;crispy rice&#34;,&#34;dark chocolate&#34;]              | 9      | 1   | crispy rice      | text | crispy rice      | 16 |        | $[1]    | $    | 9      | [&#34;crispy rice&#34;,&#34;peppermint&#34;,&#34;caramel drizzle&#34;,&#34;shaved chocol | 354         | 0   | crispy rice      | text | crispy rice      | 2  |        | $[0]    | $    |&#xA;|              |                        |                                                              |        |     |                  |      |                  |    |        |         |      |        | ate&#34;]                                                        |             |     |                  |      |                  |    |        |         |      
|&#xA;+--------------+------------------------+--------------------------------------------------------------+--------+-----+------------------+------+------------------+----+--------+---------+------+--------+--------------------------------------------------------------+-------------+-----+------------------+------+------------------+----+--------+---------+------+&#xA;| 173          | Declan Summerfield     | [&#34;dark chocolate&#34;,&#34;marshmallow&#34;,&#34;crispy rice&#34;,&#34;peppermint&#34;]  | 7      | 2   | crispy rice      | text | crispy rice      | 30 |        | $[2]    | $    | 9      | [&#34;crispy rice&#34;,&#34;peppermint&#34;,&#34;caramel drizzle&#34;,&#34;shaved chocol | 354         | 0   | crispy rice      | text | crispy rice      | 2  |        | $[0]    | $    |&#xA;|              |                        |                                                              |        |     |                  |      |                  |    |        |         |      |        | ate&#34;]                                                        |             |     |                  |      |                  |    |        |         |      |&#xA;+--------------+------------------------+--------------------------------------------------------------+--------+-----+------------------+------+------------------+----+--------+---------+------+--------+--------------------------------------------------------------+-------------+-----+------------------+------+------------------+----+--------+---------+------+&#xA;| 173          | Declan Summerfield     | [&#34;dark chocolate&#34;,&#34;marshmallow&#34;,&#34;crispy rice&#34;,&#34;peppermint&#34;]  | 7      | 3   | peppermint       | text | peppermint       | 42 |        | $[3]    | $    | 9      | [&#34;crispy rice&#34;,&#34;peppermint&#34;,&#34;caramel drizzle&#34;,&#34;shaved chocol | 354         | 1   | peppermint       | text | peppermint       | 14 |        | $[1]    | $    |&#xA;|              |              
          |                                                              |        |     |                  |      |                  |    |        |         |      |        | ate&#34;]                                                        |             |     |                  |      |                  |    |        |         |      |&#xA;+--------------+------------------------+--------------------------------------------------------------+--------+-----+------------------+------+------------------+----+--------+---------+------+--------+--------------------------------------------------------------+-------------+-----+------------------+------+------------------+----+--------+---------+------+&#xA;| 174          | Samantha Brightwood    | [&#34;crispy rice&#34;,&#34;peppermint&#34;,&#34;dark chocolate&#34;]                | 4      | 0   | crispy rice      | text | crispy rice      | 2  |        | $[0]    | $    | 9      | [&#34;crispy rice&#34;,&#34;peppermint&#34;,&#34;caramel drizzle&#34;,&#34;shaved chocol | 354         | 0   | crispy rice      | text | crispy rice      | 2  |        | $[0]    | $    |&#xA;|              |                        |                                                              |        |     |                  |      |                  |    |        |         |      |        | ate&#34;]                                                        |             |     |                  |      |                  |    |        |         |      |&#xA;+--------------+------------------------+--------------------------------------------------------------+--------+-----+------------------+------+------------------+----+--------+---------+------+--------+--------------------------------------------------------------+-------------+-----+------------------+------+------------------+----+--------+---------+------+&#xA;| 174          | Samantha Brightwood    | [&#34;crispy rice&#34;,&#34;peppermint&#34;,&#34;dark chocolate&#34;]                | 
4      | 1   | peppermint       | text | peppermint       | 14 |        | $[1]    | $    | 9      | [&#34;crispy rice&#34;,&#34;peppermint&#34;,&#34;caramel drizzle&#34;,&#34;shaved chocol | 354         | 1   | peppermint       | text | peppermint       | 14 |        | $[1]    | $    |&#xA;|              |                        |                                                              |        |     |                  |      |                  |    |        |         |      |        | ate&#34;]                                                        |             |     |                  |      |                  |    |        |         |      |&#xA;+--------------+------------------------+--------------------------------------------------------------+--------+-----+------------------+------+------------------+----+--------+---------+------+--------+--------------------------------------------------------------+-------------+-----+------------------+------+------------------+----+--------+---------+------+&#xA;| 175          | Tristan Ashbrook       | [&#34;crispy rice&#34;]                                              | 1      | 0   | crispy rice      | text | crispy rice      | 2  |        | $[0]    | $    | 9      | [&#34;crispy rice&#34;,&#34;peppermint&#34;,&#34;caramel drizzle&#34;,&#34;shaved chocol | 354         | 0   | crispy rice      | text | crispy rice      | 2  |        | $[0]    | $    |&#xA;|              |                        |                                                              |        |     |                  |      |                  |    |        |         |      |        | ate&#34;]                                                        |             |     |                  |      |                  |    |        |         |      
|&#xA;+--------------+------------------------+--------------------------------------------------------------+--------+-----+------------------+------+------------------+----+--------+---------+------+--------+--------------------------------------------------------------+-------------+-----+------------------+------+------------------+----+--------+---------+------+&#xA;| 178          | Lauren Silverton       | [&#34;caramel drizzle&#34;,&#34;vanilla foam&#34;,&#34;cinnamon&#34;,&#34;dark chocolate | 2      | 0   | caramel drizzle  | text | caramel drizzle  | 2  |        | $[0]    | $    | 9      | [&#34;crispy rice&#34;,&#34;peppermint&#34;,&#34;caramel drizzle&#34;,&#34;shaved chocol | 354         | 2   | caramel drizzle  | text | caramel drizzle  | 25 |        | $[2]    | $    |&#xA;|              |                        | &#34;]                                                           |        |     |                  |      |                  |    |        |         |      |        | ate&#34;]                                                        |             |     |                  |      |                  |    |        |         |      |&#xA;+--------------+------------------------+--------------------------------------------------------------+--------+-----+------------------+------+------------------+----+--------+---------+------+--------+--------------------------------------------------------------+-------------+-----+------------------+------+------------------+----+--------+---------+------+&#xA;| 179          | Landon Whitworth       | [&#34;caramel drizzle&#34;,&#34;vanilla foam&#34;]                           | 8      | 0   | caramel drizzle  | text | caramel drizzle  | 2  |        | $[0]    | $    | 9      | [&#34;crispy rice&#34;,&#34;peppermint&#34;,&#34;caramel drizzle&#34;,&#34;shaved chocol | 354         | 2   | caramel drizzle  | text | caramel drizzle  | 25 |        | $[2]    | $    |&#xA;|              |                        |     
                                                         |        |     |                  |      |                  |    |        |         |      |        | ate&#34;]                                                        |             |     |                  |      |                  |    |        |         |      |&#xA;+--------------+------------------------+--------------------------------------------------------------+--------+-----+------------------+------+------------------+----+--------+---------+------+--------+--------------------------------------------------------------+-------------+-----+------------------+------+------------------+----+--------+---------+------+&#xA;| 180          | Kayla Mansfield        | [&#34;vanilla foam&#34;,&#34;peppermint&#34;,&#34;shaved chocolate&#34;]             | 2      | 1   | peppermint       | text | peppermint       | 16 |        | $[1]    | $    | 9      | [&#34;crispy rice&#34;,&#34;peppermint&#34;,&#34;caramel drizzle&#34;,&#34;shaved chocol | 354         | 1   | peppermint       | text | peppermint       | 14 |        | $[1]    | $    |&#xA;|              |                        |                                                              |        |     |                  |      |                  |    |        |         |      |        | ate&#34;]                                                        |             |     |                  |      |                  |    |        |         |      |&#xA;+--------------+------------------------+--------------------------------------------------------------+--------+-----+------------------+------+------------------+----+--------+---------+------+--------+--------------------------------------------------------------+-------------+-----+------------------+------+------------------+----+--------+---------+------+&#xA;| 180          | Kayla Mansfield        | [&#34;vanilla foam&#34;,&#34;peppermint&#34;,&#34;shaved chocolate&#34;]             | 2      | 2   | 
shaved chocolate | text | shaved chocolate | 27 |        | $[2]    | $    | 9      | [&#34;crispy rice&#34;,&#34;peppermint&#34;,&#34;caramel drizzle&#34;,&#34;shaved chocol | 354         | 3   | shaved chocolate | text | shaved chocolate | 42 |        | $[3]    | $    |&#xA;|              |                        |                                                              |        |     |                  |      |                  |    |        |         |      |        | ate&#34;]                                                        |             |     |                  |      |                  |    |        |         |      |&#xA;+--------------+------------------------+--------------------------------------------------------------+--------+-----+------------------+------+------------------+----+--------+---------+------+--------+--------------------------------------------------------------+-------------+-----+------------------+------+------------------+----+--------+---------+------+&#xA;sqlite&gt; &#xA;```&#xA;&#xA;Now, what did we do here?&#xA;- Created the top 3 stocked cars as a CTE.&#xA;- The main query fetches each passenger with their favorite mixins expanded via `json_each`.&#xA;- It joins those against the expanded `available_mixins` of each stocked car, whenever the passenger&#39;s mixin is equal to a mixin available in the car.&#xA;- Hence we get the `car_id` of the car that serves a mixin for that passenger.&#xA;&#xA;You can see we have multiple rows for each passenger, which we might not want. 
Nothing is wrong with it as such, but the report looks quite long, especially if a passenger had more than a couple of mixins or if there were more cars.&#xA;&#xA;We can group by `passenger_name` and concatenate the matching `car_id`s into a single value.&#xA;&#xA;```sql&#xA;WITH stocked_cars as (&#xA;    SELECT * FROM cocoa_cars ORDER BY total_stock DESC LIMIT 3&#xA;)&#xA;SELECT &#xA;     passengers.passenger_name,&#xA;     &#39;[&#39; || GROUP_CONCAT(DISTINCT stocked_cars.car_id) || &#39;]&#39; AS cocoa_cars&#xA;FROM passengers&#xA;JOIN json_each(passengers.favorite_mixins) as passenger_mixin&#xA;JOIN stocked_cars&#xA;JOIN json_each(stocked_cars.available_mixins) AS available_mixin&#xA;    ON passenger_mixin.value = available_mixin.value&#xA;GROUP BY passengers.passenger_name;&#xA;```&#xA;&#xA;Here, we have added&#xA;&#xA;```&#xA; &#39;[&#39; || GROUP_CONCAT(DISTINCT stocked_cars.car_id) || &#39;]&#39; &#xA;```&#xA;&#xA;and the grouping&#xA;&#xA;```&#xA;GROUP BY passengers.passenger_name;&#xA;```&#xA;&#xA;This squishes all the separate rows for a passenger&#39;s favorite mixins and the cars that have them down into a single row, and we defined how to combine the different `car_id`s with [GROUP_CONCAT](https://www.sqlite.org/lang_aggfunc.html#group_concat). This function concatenates (joins together) multiple strings with a specific separator (by default the separator is `,`).&#xA;We also use the `||` concatenation operator to add `[` at the start and `]` at the end of the list of `car_id`s.&#xA;&#xA;&gt; The group_concat() function returns a string which is the concatenation of all non-NULL values of X. If parameter Y is present then it is used as the separator between instances of X. 
A comma (&#34;,&#34;) is used as the separator if Y is omitted.&#xA;&#xA;&#xA;&#xA;```&#xA;sqlite&gt; WITH stocked_cars as (&#xA;    SELECT * FROM cocoa_cars ORDER BY total_stock DESC LIMIT 3&#xA;)&#xA;SELECT passengers.passenger_name, &#39;[&#39; || GROUP_CONCAT(DISTINCT stocked_cars.car_id) || &#39;]&#39; AS cocoa_cars&#xA;FROM passengers&#xA;JOIN json_each(passengers.favorite_mixins) as passenger_mixin&#xA;JOIN stocked_cars&#xA;JOIN json_each(stocked_cars.available_mixins) AS available_mixin&#xA;    ON passenger_mixin.value = available_mixin.value&#xA;   ...&gt; GROUP BY passengers.passenger_name;&#xA;+------------------------+------------+&#xA;|     passenger_name     | cocoa_cars |&#xA;+------------------------+------------+&#xA;| Aaron Ashcroft         | [2]        |&#xA;| Abigail Worthington    | [2,9]      |&#xA;| Aiden Stanfield        | [9]        |&#xA;| Alexander Stratton     | [5,9]      |&#xA;| Alexis Thorndale       | [9]        |&#xA;| Alice Merriweather     | [5,2,9]    |&#xA;| Amanda Fitzroy         | [5,2,9]    |&#xA;| Amelia Rosewood        | [9]        |&#xA;| Anna Westfield         | [2]        |&#xA;| Aria Blackwood         | [5,2,9]    |&#xA;| Ashley Fairhaven       | [5,2,9]    |&#xA;| August Blackwell       | [5,2,9]    |&#xA;| Aurora Whitfield       | [5,2,9]    |&#xA;| Benjamin Fairweather   | [9]        |&#xA;| Benjamin Patel         | [9]        |&#xA;| Bianca Pereira         | [9]        |&#xA;| Blake Dunwood          | [5,9]      |&#xA;| Brandon Locklear       | [5,2,9]    |&#xA;| Caleb Osei             | [5,9]      |&#xA;| Cameron Gladstone      | [2,9]      |&#xA;| Cameron Rogers         | [5,2,9]    |&#xA;| Carlos Mendez          | [5,9]      |&#xA;| Caroline Bannister     | [2,9]      |&#xA;| Carter Brookfield      | [2]        |&#xA;| Charlotte Singh        | [5,2,9]    |&#xA;| Charlotte Waverly      | [5,9]      |&#xA;| Chase Ashbury          | [5,9]      |&#xA;| Chloe Blackstone       | [5,9]      |&#xA;| Claire Berkshire      
 | [5,9]      |&#xA;| Clara Westbrook        | [2]        |&#xA;| Cole Ashland           | [2]        |&#xA;| Connor Redmond         | [2,9]      |&#xA;| Daniel Murphy          | [5,2]      |&#xA;| Declan Summerfield     | [2,9]      |&#xA;| Diana Brightwell       | [5,9]      |&#xA;| Diego Ramos            | [5,9]      |&#xA;| Dylan Blakely          | [5,2,9]    |&#xA;| Eleanor Morrison       | [5,9]      |&#xA;| Elena Morales          | [5,2,9]    |&#xA;| Elijah Wakefield       | [2,9]      |&#xA;| Eliza Radcliffe        | [5,9]      |&#xA;| Elizabeth Chesterfield | [5,2,9]    |&#xA;| Emily Claridge         | [5,9]      |&#xA;| Emily Johnson          | [2]        |&#xA;| Emma Gilmore           | [5,2,9]    |&#xA;| Emma Svensson          | [5,9]      |&#xA;| Ethan Marlowe          | [5,2,9]    |&#xA;| Eva Schmidt            | [5,2]      |&#xA;| Eva Templeton          | [5,2,9]    |&#xA;| Fatima Noor            | [9]        |&#xA;| Felix Schneider        | [9]        |&#xA;| Felix Whitmore         | [2]        |&#xA;| Finn Lockhart          | [2,9]      |&#xA;| Gabriel Winthrop       | [5,2,9]    |&#xA;| Grace Aldridge         | [5,2,9]    |&#xA;| Grace Thornhill        | [5,2,9]    |&#xA;| Grayson Westmore       | [2,9]      |&#xA;| Griffin Hartwell       | [9]        |&#xA;| Hannah Livingston      | [5,2,9]    |&#xA;| Hazel Kincaid          | [2,9]      |&#xA;| Henry Treadwell        | [2,9]      |&#xA;| Hiro Tanaka            | [5,9]      |&#xA;| Hunter Bellingham      | [5,9]      |&#xA;| Ian Landsman           | [2]        |&#xA;| Ingrid Nilsen          | [5,2,9]    |&#xA;| Iris Pembroke          | [5,2,9]    |&#xA;| Isaac Kendrick         | [2]        |&#xA;| Isabella Laurent       | [5,9]      |&#xA;| Isabella Rodriguez     | [9]        |&#xA;| Ivy Winslow            | [2,9]      |&#xA;| Jack Carmichael        | [9]        |&#xA;| Jackson Thorpe         | [2,9]      |&#xA;| James Garrison         | [5,9]      |&#xA;| James Kim              | [5,2,9]    
|&#xA;| James Simon            | [5,2,9]    |&#xA;| Jason Stewart          | [5]        |&#xA;| Jasper Thorne          | [5,2,9]    |&#xA;| Jessica Langston       | [5,2,9]    |&#xA;| Jonah Wolfe            | [2]        |&#xA;| Jordan Waverly         | [5,9]      |&#xA;| Julia Pendleton        | [5,2,9]    |&#xA;| Julian Blackburn       | [5]        |&#xA;| Kayla Mansfield        | [5,9]      |&#xA;| Keiko Ito              | [2]        |&#xA;| Landon Whitworth       | [2,9]      |&#xA;| Laura Thornbury        | [5,2,9]    |&#xA;| Lauren Silverton       | [2,9]      |&#xA;| Layla Brooks           | [2,9]      |&#xA;| Leo Greyson            | [2,9]      |&#xA;| Liam Donovan           | [2,9]      |&#xA;| Liam OConnor           | [2,9]      |&#xA;| Linda Lee              | [5,2]      |&#xA;| Logan Sherwood         | [5,9]      |&#xA;| Lucas Ford             | [5,2]      |&#xA;| Lucas Prescott         | [9]        |&#xA;| Lucy Morris            | [2,9]      |&#xA;| Luna Hartley           | [5]        |&#xA;| Madeline Ramsey        | [2,9]      |&#xA;| Madison Clearwater     | [5,2,9]    |&#xA;| Margaret Fairbanks     | [2,9]      |&#xA;| Mason Fairmont         | [2,9]      |&#xA;| Mateo Cruz             | [5,2,9]    |&#xA;| Matthew Sutherland     | [2,9]      |&#xA;| Megan Redfield         | [5,9]      |&#xA;| Melissa Ravenscroft    | [5]        |&#xA;| Mia Chen               | [5,9]      |&#xA;| Mila Novak             | [2,9]      |&#xA;| Miles Brennan          | [5,2,9]    |&#xA;| Milo Ashford           | [2,9]      |&#xA;| Mira Zhao              | [2,9]      |&#xA;| Morgan Steadman        | [5,9]      |&#xA;| Natalie Warwick        | [2]        |&#xA;| Nathan Whitley         | [5,9]      |&#xA;| Nia Grant              | [5,9]      |&#xA;| Nicholas Montague      | [5]        |&#xA;| Nicole Ashridge        | [9]        |&#xA;| Noah Fischer           | [2,9]      |&#xA;| Nolan Murphy           | [2,9]      |&#xA;| Nolan Young            | [5,2,9]    |&#xA;| Nora 
Calloway          | [9]        |&#xA;| Nova Adams             | [2]        |&#xA;| Oliver Ashby           | [9]        |&#xA;| Olivia Carrington      | [2]        |&#xA;| Olivia Dubois          | [5,9]      |&#xA;| Omar Qureshi           | [2]        |&#xA;| Owen Fairchild         | [5,9]      |&#xA;| Parker Blackmore       | [5,2,9]    |&#xA;| Penelope Sinclair      | [5,2,9]    |&#xA;| Rachel Wyndham         | [5,9]      |&#xA;| Rafael Silva           | [2,9]      |&#xA;| Ravi Patel             | [5,2,9]    |&#xA;| Rebecca Ashcombe       | [2,9]      |&#xA;| Robert Smith           | [9]        |&#xA;| Rose Drummond          | [5,2,9]    |&#xA;| Ruby Hawthorne         | [2]        |&#xA;| Ryan Beckett           | [5,9]      |&#xA;| Samantha Brightwood    | [9]        |&#xA;| Samuel Kingsley        | [5,2,9]    |&#xA;| Sara Johansson         | [2]        |&#xA;| Sarah Davis            | [9]        |&#xA;| Sarah Remington        | [5,2,9]    |&#xA;| Sawyer Hollingsworth   | [9]        |&#xA;| Scarlett Dalton        | [5,2,9]    |&#xA;| Silas Merrick          | [2]        |&#xA;| Sofia Kim              | [2]        |&#xA;| Sophia Brookshire      | [2,9]      |&#xA;| Sophia Rossi           | [5,2,9]    |&#xA;| Sophie Langford        | [5,9]      |&#xA;| Stella Beaumont        | [2,9]      |&#xA;| Tariq Hassan           | [5,9]      |&#xA;| Taylor Woodridge       | [5,2,9]    |&#xA;| Theodore Hadley        | [2]        |&#xA;| Tim Cook               | [9]        |&#xA;| Tristan Ashbrook       | [9]        |&#xA;| Victoria Harwood       | [2,9]      |&#xA;| Violet Sterling        | [2,9]      |&#xA;| William Beauchamp      | [5,2,9]    |&#xA;| William Becker         | [9]        |&#xA;| William Chen           | [5,2,9]    |&#xA;| Wyatt Alderton         | [5]        |&#xA;| Yara Haddad            | [5]        |&#xA;| Youssef El-Sayed       | [2,9]      |&#xA;| Zachary Collins        | [2,9]      |&#xA;| Zachary Stafford       | [5,9]      |&#xA;| Zara Sheikh            | 
[9]        |&#xA;| Zoe Wilson             | [2]        |&#xA;+------------------------+------------+&#xA;sqlite&gt; &#xA;```&#xA;&#xA;I think we are done.&#xA;&#xA;We did it, and this one was a bit different.&#xA;&#xA;Some weird hacks here and there, but we made it!&#xA;&#xA;Day 7 done and dusted.&#xA;&#xA;On to day 8.&#xA;</content>
      <type>sqlog</type>
    </item>
    <item>
      <title>Advent of SQL 2025 Day 6: Days of Delight</title>
      <link>https://meetgor.com/sqlog/advent-of-sql-2025-day-6</link>
      <description>Advent of SQL Day 6: Days of Delight It is day 6 of advent of SQL. Let&#39;s jump straight into the sql for the day. So, we have two tables: table table The first t</description>
      <pubDate>Sun, 21 Dec 2025 00:00:00 UTC</pubDate>
      <content>&#xA;&#xA;## Advent of SQL Day 6: Days of Delight&#xA;&#xA;It is day 6 of advent of SQL.&#xA;&#xA;Let&#39;s jump straight into the sql for the day.&#xA;&#xA;```sql&#xA;DROP TABLE IF EXISTS families;&#xA;DROP TABLE IF EXISTS deliveries_assigned;&#xA;&#xA;CREATE TABLE families (&#xA;    id INT PRIMARY KEY,&#xA;    family_name TEXT&#xA;);&#xA;&#xA;CREATE TABLE deliveries_assigned (&#xA;    id INT PRIMARY KEY,&#xA;    family_id INT,&#xA;    gift_date DATE,&#xA;    gift_name TEXT&#xA;);&#xA;&#xA;INSERT INTO families (id, family_name) VALUES&#xA;    (1, &#39;Isla Martinez&#39;),&#xA;    (2, &#39;Nolan Garcia&#39;),&#xA;    (3, &#39;Yara Chen&#39;),&#xA;    (4, &#39;Tariq Nguyen&#39;),&#xA;    (5, &#39;Mila Hernandez&#39;),&#xA;    (6, &#39;Casey Kim&#39;),&#xA;    (7, &#39;Mateo Hernandez&#39;),&#xA;    (8, &#39;Keiko Petrov&#39;),&#xA;    (9, &#39;Ethan Flores&#39;),&#xA;    (10, &#39;Mateo Nakamura&#39;),&#xA;    (11, &#39;Maya Fernandez&#39;),&#xA;    (12, &#39;Mila Davis&#39;),&#xA;    (13, &#39;Yara Rossi&#39;),&#xA;    (14, &#39;Nolan Phillips&#39;),&#xA;    (15, &#39;Amina Perez&#39;);&#xA;&#xA;INSERT INTO deliveries_assigned (id, family_id, gift_date, gift_name) VALUES&#xA;    (1, 1, &#39;2025-12-01&#39;, &#39;roasted cashews&#39;),&#xA;    (2, 1, &#39;2025-12-02&#39;, &#39;cookie decorating kit&#39;),&#xA;    (3, 1, &#39;2025-12-03&#39;, &#39;dark chocolate assortment&#39;),&#xA;    (4, 1, &#39;2025-12-04&#39;, &#39;white chocolate candies&#39;),&#xA;    (5, 1, &#39;2025-12-05&#39;, &#39;reindeer headband&#39;),&#xA;    (6, 1, &#39;2025-12-06&#39;, &#39;holiday brownie bites&#39;),&#xA;    (7, 1, &#39;2025-12-07&#39;, &#39;shortbread cookie tin&#39;),&#xA;    (8, 1, &#39;2025-12-08&#39;, &#39;chocolate chip cookies&#39;),&#xA;    (9, 1, &#39;2025-12-11&#39;, &#39;holiday jam trio&#39;),&#xA;    (10, 1, &#39;2025-12-12&#39;, &#39;white chocolate popcorn&#39;),&#xA;    (11, 1, &#39;2025-12-14&#39;, &#39;holiday jam trio&#39;),&#xA;    (12, 1, 
&#39;2025-12-15&#39;, &#39;fudge bites&#39;),&#xA;    (13, 1, &#39;2025-12-16&#39;, &#39;holiday sticker sheet&#39;),&#xA;    (14, 1, &#39;2025-12-18&#39;, &#39;hot cocoa bombs&#39;),&#xA;    (15, 1, &#39;2025-12-19&#39;, &#39;honey roasted nuts&#39;),&#xA;    (16, 1, &#39;2025-12-20&#39;, &#39;holiday mug&#39;),&#xA;    (17, 1, &#39;2025-12-21&#39;, &#39;white chocolate candies&#39;),&#xA;    (18, 1, &#39;2025-12-22&#39;, &#39;puzzle book&#39;),&#xA;    (19, 1, &#39;2025-12-23&#39;, &#39;snowman plush&#39;),&#xA;    (20, 1, &#39;2025-12-24&#39;, &#39;scented hand cream&#39;),&#xA;    (21, 1, &#39;2025-12-25&#39;, &#39;vanilla bean wafers&#39;),&#xA;    (22, 2, &#39;2025-12-01&#39;, &#39;roasted cashews&#39;),&#xA;    (23, 2, &#39;2025-12-02&#39;, &#39;holiday brownie bites&#39;),&#xA;    (24, 2, &#39;2025-12-03&#39;, &#39;peppermint bark bites&#39;),&#xA;    (25, 2, &#39;2025-12-04&#39;, &#39;holiday jam trio&#39;),&#xA;    (26, 2, &#39;2025-12-05&#39;, &#39;festive notepad&#39;),&#xA;    (27, 2, &#39;2025-12-06&#39;, &#39;scented pine sachet&#39;),&#xA;    (28, 2, &#39;2025-12-07&#39;, &#39;holiday mug&#39;),&#xA;    (29, 2, &#39;2025-12-08&#39;, &#39;shortbread cookie tin&#39;),&#xA;    (30, 2, &#39;2025-12-09&#39;, &#39;dark chocolate assortment&#39;);&#xA;&#xA;```&#xA;&#xA;So, we have two tables:&#xA;1. `families` table&#xA;2. 
`deliveries_assigned` table&#xA;&#xA;The first table, `families`, just has the id and the name of the family.&#xA;&#xA;The second table, `deliveries_assigned`, has the id, family id, gift date, and gift name.&#xA;&#xA;Let&#39;s look at the problem statement.&#xA;&#xA;## Problem&#xA;&#xA;&gt; Generate a report that returns the dates and families that have no delivery assigned after December 14th, using the `families` and `deliveries_assigned`.&#xA;&gt; &#xA;&gt; Each row in the report should be a date and family name that represents the dates in which families don&#39;t have a delivery assigned yet.&#xA;&gt; &#xA;&gt; Label the columns as `unassigned_date` and `name`. Order the results by `unassigned_date` and `name`, respectively, both in ascending order.&#xA;&#xA;&#xA;Ok, so we need to list the deliveries_assigned for each family first, to check what kind of pattern we are looking for.&#xA;&#xA;```sql&#xA;SELECT * FROM deliveries_assigned WHERE family_id = 1;&#xA;```&#xA;&#xA;```&#xA;sqlite&gt; select * from deliveries_assigned where family_id=1;&#xA;+----+-----------+------------+---------------------------+&#xA;| id | family_id | gift_date  |         gift_name         |&#xA;+----+-----------+------------+---------------------------+&#xA;| 1  | 1         | 2025-12-01 | roasted cashews           |&#xA;| 2  | 1         | 2025-12-02 | cookie decorating kit     |&#xA;| 3  | 1         | 2025-12-03 | dark chocolate assortment |&#xA;| 4  | 1         | 2025-12-04 | white chocolate candies   |&#xA;| 5  | 1         | 2025-12-05 | reindeer headband         |&#xA;| 6  | 1         | 2025-12-06 | holiday brownie bites     |&#xA;| 7  | 1         | 2025-12-07 | shortbread cookie tin     |&#xA;| 8  | 1         | 2025-12-08 | chocolate chip cookies    |&#xA;| 9  | 1         | 2025-12-11 | holiday jam trio          |&#xA;| 10 | 1         | 2025-12-12 | white chocolate popcorn   |&#xA;| 11 | 1         | 2025-12-14 | holiday jam trio          |&#xA;| 12 | 1         | 2025-12-15 | fudge 
bites               |&#xA;| 13 | 1         | 2025-12-16 | holiday sticker sheet     |&#xA;| 14 | 1         | 2025-12-18 | hot cocoa bombs           |&#xA;| 15 | 1         | 2025-12-19 | honey roasted nuts        |&#xA;| 16 | 1         | 2025-12-20 | holiday mug               |&#xA;| 17 | 1         | 2025-12-21 | white chocolate candies   |&#xA;| 18 | 1         | 2025-12-22 | puzzle book               |&#xA;| 19 | 1         | 2025-12-23 | snowman plush             |&#xA;| 20 | 1         | 2025-12-24 | scented hand cream        |&#xA;| 21 | 1         | 2025-12-25 | vanilla bean wafers       |&#xA;+----+-----------+------------+---------------------------+&#xA;sqlite&gt;&#xA;```&#xA;&#xA;So, we are missing gifts for family id `1` on `09`, `10`, `13`, and `17`. But we are only asked about gifts after `14` (December 14th).&#xA;&#xA;&gt; Generate a report that returns the dates and families that have **no delivery assigned after December 14th**, using the families and deliveries_assigned.&#xA;&#xA;So, we can discard `09`, `10`, and `13` as they are before December 14th.&#xA;&#xA;```sql&#xA;SELECT * FROM deliveries_assigned WHERE family_id = 1 AND gift_date &gt; &#39;2025-12-14&#39;;&#xA;```&#xA;&#xA;This gives the dates where gifts are assigned after December 14th. But the problem is that we need the dates which are missing from the `deliveries_assigned` records for each family.&#xA;&#xA;Finding something missing is kind of weird, because you don&#39;t have what is missing. 
Especially with dates; dates are particularly painful.&#xA;&#xA;We need to find which dates are missing from a sequential range. That is simple here, but you can see it could get quite cumbersome if we had to manually add each date to the list we compare against.&#xA;&#xA;### JOINs with NOT EXISTS&#xA;&#xA;So, the basic dirty solution is to check the missing dates for each family, one by one.&#xA;&#xA;```sql&#xA;SELECT &#xA;    families.family_name AS name,&#xA;    dates.column1 AS unassigned_date&#xA;FROM families&#xA;JOIN (&#xA;    VALUES &#xA;        (&#39;2025-12-15&#39;), (&#39;2025-12-16&#39;), (&#39;2025-12-17&#39;),&#xA;        (&#39;2025-12-18&#39;), (&#39;2025-12-19&#39;), (&#39;2025-12-20&#39;),&#xA;        (&#39;2025-12-21&#39;), (&#39;2025-12-22&#39;), (&#39;2025-12-23&#39;),&#xA;        (&#39;2025-12-24&#39;), (&#39;2025-12-25&#39;)&#xA;) AS dates ON 1=1&#xA;WHERE NOT EXISTS (&#xA;    SELECT 1 &#xA;    FROM deliveries_assigned&#xA;    WHERE deliveries_assigned.family_id = families.id &#xA;    AND deliveries_assigned.gift_date = dates.column1&#xA;)&#xA;ORDER BY unassigned_date, name;&#xA;```&#xA;&#xA;Let&#39;s break it down:&#xA;1. We have a `families` table with id and family name.&#xA;2. We have a `deliveries_assigned` table with id, family id, gift date, and gift name.&#xA;3. We create a list of dates from `2025-12-15` to `2025-12-25` using the `VALUES` keyword.&#xA;   - This just appends one date after the other and names the column `column1`, with the table aliased as `dates`.&#xA;   - The `ON` condition is `1=1`, which is always true, so every family is paired with every date (effectively a cross join).&#xA;4. We use the `NOT EXISTS` keyword to check whether the `deliveries_assigned` table has a record for each date in the list.&#xA;   - We use `NOT EXISTS` because we want only the family-and-date pairs for which no matching record exists.&#xA;5. 
We order the results by `unassigned_date` and `name`, respectively, both in ascending order.&#xA;&#xA;So, that is not the best way to solve this, I think.&#xA;&#xA;### Recursive CTEs&#xA;&#xA;We can generate a table full of dates, and then cross join it with the `families` table. This gives us all possible combinations of dates and families, as if a gift should ideally be there for each family on every date (here we are only interested in December 15th to 25th). Once we have that full table of combinations, we can left join it against the `deliveries_assigned` table and keep the rows where the match is `NULL`, because those are the dates that are missing for that family.&#xA;&#xA;First we&#39;ll create a recursive table of dates from `2025-12-15` to `2025-12-25`.&#xA;&#xA;```sql&#xA;WITH RECURSIVE dates(gift_date) AS (&#xA;    SELECT &#39;2025-12-15&#39;&#xA;    UNION ALL&#xA;    SELECT date(gift_date, &#39;+1 day&#39;)&#xA;    FROM dates&#xA;    WHERE gift_date &lt; &#39;2025-12-25&#39;&#xA;)&#xA;SELECT * FROM dates;&#xA;```&#xA;&#xA;This will give us the dates from `2025-12-15` to `2025-12-25` in a recursive table.&#xA;&#xA;What is a recursive table?&#xA;&#xA;&gt; A recursive table is a table that is defined in terms of itself.&#xA;&#xA;So, we can create a recursive table of dates from `2025-12-15` to `2025-12-25`. The base case is `2025-12-15`, and the recursive case is `SELECT date(gift_date, &#39;+1 day&#39;) FROM dates WHERE gift_date &lt; &#39;2025-12-25&#39;`, which means the first recursive step produces `2025-12-16` because of the `+1 day` interval in the `date` function.&#xA;&#xA;The date function is a [function](https://sqlite.org/lang_datefunc.html#modifiers) which takes in a date, and we can add modifiers to it to manipulate or extract parts of the date. Here we have added the modifier `+1 day`, which will increment the day by one. 
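&#xA;As a side note, SQLite&#39;s `date()` accepts multiple modifiers, applied left to right. Here is a small sketch of a few common ones (the column aliases are just illustrative labels, not part of the puzzle):&#xA;&#xA;```sql&#xA;-- A few date() modifiers, applied left to right&#xA;SELECT date(&#39;2025-12-15&#39;, &#39;+1 day&#39;) AS next_day,                               -- 2025-12-16&#xA;       date(&#39;2025-12-15&#39;, &#39;+10 days&#39;) AS ten_days_later,                       -- 2025-12-25&#xA;       date(&#39;2025-12-15&#39;, &#39;start of month&#39;) AS month_start,                    -- 2025-12-01&#xA;       date(&#39;2025-12-15&#39;, &#39;start of month&#39;, &#39;+1 month&#39;, &#39;-1 day&#39;) AS month_end; -- 2025-12-31&#xA;```&#xA;&#xA;Chaining `start of month`, `+1 month`, `-1 day` is a common trick to land on the last day of a month.&#xA;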
We then call that in the `dates` CTE again until the `date` is no longer less than `2025-12-25`. By then we will have created all the `dates` from the `15th` to the `25th`, including the `25th`. We could just type the values in manually, as we did in the first dirty solution, but I wanted to see how we can generate dates dynamically in SQLite.&#xA;&#xA;```&#xA;sqlite&gt; WITH RECURSIVE dates(gift_date) AS (&#xA;    SELECT &#39;2025-12-15&#39;&#xA;    UNION ALL&#xA;    SELECT date(gift_date, &#39;+1 day&#39;)&#xA;    FROM dates&#xA;    WHERE gift_date &lt; &#39;2025-12-25&#39;&#xA;)&#xA;SELECT * FROM dates;&#xA;+------------+&#xA;| gift_date  |&#xA;+------------+&#xA;| 2025-12-15 |&#xA;| 2025-12-16 |&#xA;| 2025-12-17 |&#xA;| 2025-12-18 |&#xA;| 2025-12-19 |&#xA;| 2025-12-20 |&#xA;| 2025-12-21 |&#xA;| 2025-12-22 |&#xA;| 2025-12-23 |&#xA;| 2025-12-24 |&#xA;| 2025-12-25 |&#xA;+------------+&#xA;Run Time: real 0.000 user 0.000225 sys 0.000009&#xA;sqlite&gt;&#xA;```&#xA;&#xA;Now we&#39;ll cross join the `families` table with the `dates` table.&#xA;&#xA;```sql&#xA;SELECT families.id AS family_id, families.family_name as family_name, dates.gift_date as gift_date&#xA;FROM families&#xA;CROSS JOIN dates;&#xA;```&#xA;&#xA;OK! 
Wait, include the `dates` CTE from above too; I was just simplifying the query.&#xA;&#xA;```sql&#xA;WITH RECURSIVE dates(gift_date) AS (&#xA;    SELECT &#39;2025-12-15&#39;&#xA;    UNION ALL&#xA;    SELECT date(gift_date, &#39;+1 day&#39;)&#xA;    FROM dates&#xA;    WHERE gift_date &lt; &#39;2025-12-25&#39;&#xA;)&#xA;SELECT families.id AS family_id, families.family_name as family_name, dates.gift_date as gift_date&#xA;FROM families&#xA;CROSS JOIN dates;&#xA;```&#xA;This will give us the full table of combinations of dates and families.&#xA;&#xA;```sql&#xA;SELECT COUNT(*) FROM families;&#xA;```&#xA;&#xA;```&#xA;sqlite&gt; SELECT COUNT(*) FROM families;&#xA;+----------+&#xA;| COUNT(*) |&#xA;+----------+&#xA;| 250      |&#xA;+----------+&#xA;Run Time: real 0.000 user 0.000113 sys 0.000006&#xA;sqlite&gt;&#xA;```&#xA;&#xA;That is:&#xA;&#xA;- There are 11 dates right? `2025-12-15` to `2025-12-25`, -&gt; `15` (1), `16` (2), `17` (3), `18` (4), `19` (5), `20` (6), `21` (7), `22` (8), `23` (9), `24` (10), `25` (11)&#xA;- There are `250` families, as we just counted.&#xA;- So, a `CROSS JOIN` will give us `11 * 250 = 2750` rows.&#xA;&#xA;&#xA;```sql&#xA;SELECT COUNT(*) FROM (WITH RECURSIVE dates(gift_date) AS (&#xA;    SELECT &#39;2025-12-15&#39;&#xA;    UNION ALL&#xA;    SELECT date(gift_date, &#39;+1 day&#39;)&#xA;    FROM dates&#xA;    WHERE gift_date &lt; &#39;2025-12-25&#39;&#xA;)&#xA;SELECT families.id AS family_id, families.family_name as family_name, dates.gift_date as gift_date&#xA;FROM families&#xA;CROSS JOIN dates) as count;&#xA;```&#xA;&#xA;11 dates for each family.&#xA;&#xA;```&#xA;sqlite&gt; SELECT COUNT(*) FROM (WITH RECURSIVE dates(gift_date) AS (&#xA;    SELECT &#39;2025-12-15&#39;&#xA;    UNION ALL&#xA;    SELECT date(gift_date, &#39;+1 day&#39;)&#xA;    FROM dates&#xA;    WHERE gift_date &lt; &#39;2025-12-25&#39;&#xA;)&#xA;SELECT families.id AS family_id, families.family_name as family_name, dates.gift_date as gift_date&#xA;FROM families&#xA;CROSS JOIN dates) as 
count;&#xA;+----------+&#xA;| COUNT(*) |&#xA;+----------+&#xA;| 2750     |&#xA;+----------+&#xA;Run Time: real 0.001 user 0.000407 sys 0.000000&#xA;sqlite&gt;&#xA;```&#xA;&#xA;Now, we simply have to join the `deliveries_assigned` table with the above table.&#xA;&#xA;Why?&#xA;Because we need to map which dates are assigned and which dates are missing.&#xA;&#xA;Which join do we need?&#xA;&#xA;LEFT, RIGHT, or INNER?&#xA;&#xA;LEFT.&#xA;&#xA;Why?&#xA;&#xA;Because the left table will have all the dates, and the right table will have only the assigned dates. **We need all the records from the `left` table (the combination, cross join table)** in order to map which dates are assigned and which dates are missing.&#xA;&#xA;If the `right` side has a matching record, the gift is assigned for that date. If the `right` side comes back as `NULL`, no gift was assigned for that date.&#xA;&#xA;Hence, we can then simply keep only the rows where the `family_id` from the `deliveries_assigned` table is `NULL`.&#xA;&#xA;```sql&#xA;WITH RECURSIVE dates(gift_date) AS (&#xA;    SELECT &#39;2025-12-15&#39;&#xA;    UNION ALL&#xA;    SELECT date(gift_date, &#39;+1 day&#39;)&#xA;    FROM dates&#xA;    WHERE gift_date &lt; &#39;2025-12-25&#39;&#xA;),&#xA;combination AS (&#xA;    SELECT families.id AS family_id, families.family_name as family_name, dates.gift_date as gift_date&#xA;    FROM families&#xA;    CROSS JOIN dates&#xA;)&#xA;SELECT&#xA;    *&#xA;FROM combination&#xA;LEFT JOIN deliveries_assigned&#xA;    ON deliveries_assigned.family_id = combination.family_id&#xA;    AND deliveries_assigned.gift_date = combination.gift_date&#xA;```&#xA;&#xA;```&#xA;sqlite&gt; WITH RECURSIVE dates(gift_date) AS (&#xA;    SELECT &#39;2025-12-15&#39;&#xA;    UNION ALL&#xA;    SELECT date(gift_date, &#39;+1 day&#39;)&#xA;    FROM dates&#xA;    WHERE gift_date &lt; 
&#39;2025-12-25&#39;&#xA;),&#xA;combination AS (&#xA;    SELECT families.id AS family_id, families.family_name as family_name, dates.gift_date as gift_date&#xA;    FROM families&#xA;    CROSS JOIN dates&#xA;)&#xA;SELECT&#xA;    *&#xA;FROM combination&#xA;LEFT JOIN deliveries_assigned&#xA;    ON deliveries_assigned.family_id = combination.family_id&#xA;    AND deliveries_assigned.gift_date = combination.gift_date LIMIT 30;&#xA;+-----------+---------------+------------+----+-----------+------------+-------------------------+&#xA;| family_id |  family_name  | gift_date  | id | family_id | gift_date  |        gift_name        |&#xA;+-----------+---------------+------------+----+-----------+------------+-------------------------+&#xA;| 1         | Isla Martinez | 2025-12-15 | 12 | 1         | 2025-12-15 | fudge bites             |&#xA;| 1         | Isla Martinez | 2025-12-16 | 13 | 1         | 2025-12-16 | holiday sticker sheet   |&#xA;| 1         | Isla Martinez | 2025-12-17 |    |           |            |                         |&#xA;| 1         | Isla Martinez | 2025-12-18 | 14 | 1         | 2025-12-18 | hot cocoa bombs         |&#xA;| 1         | Isla Martinez | 2025-12-19 | 15 | 1         | 2025-12-19 | honey roasted nuts      |&#xA;| 1         | Isla Martinez | 2025-12-20 | 16 | 1         | 2025-12-20 | holiday mug             |&#xA;| 1         | Isla Martinez | 2025-12-21 | 17 | 1         | 2025-12-21 | white chocolate candies |&#xA;| 1         | Isla Martinez | 2025-12-22 | 18 | 1         | 2025-12-22 | puzzle book             |&#xA;| 1         | Isla Martinez | 2025-12-23 | 19 | 1         | 2025-12-23 | snowman plush           |&#xA;| 1         | Isla Martinez | 2025-12-24 | 20 | 1         | 2025-12-24 | scented hand cream      |&#xA;| 1         | Isla Martinez | 2025-12-25 | 21 | 1         | 2025-12-25 | vanilla bean wafers     |&#xA;| 2         | Nolan Garcia  | 2025-12-15 | 36 | 2         | 2025-12-15 | mini marshmallow tubes  |&#xA;| 2         | Nolan 
Garcia  | 2025-12-16 | 37 | 2         | 2025-12-16 | white chocolate candies |&#xA;| 2         | Nolan Garcia  | 2025-12-17 | 38 | 2         | 2025-12-17 | gingerbread cookie kit  |&#xA;| 2         | Nolan Garcia  | 2025-12-18 | 39 | 2         | 2025-12-18 | family card game        |&#xA;| 2         | Nolan Garcia  | 2025-12-19 |    |           |            |                         |&#xA;| 2         | Nolan Garcia  | 2025-12-20 | 40 | 2         | 2025-12-20 | santa hat               |&#xA;| 2         | Nolan Garcia  | 2025-12-21 | 41 | 2         | 2025-12-21 | holiday sticker sheet   |&#xA;| 2         | Nolan Garcia  | 2025-12-22 |    |           |            |                         |&#xA;| 2         | Nolan Garcia  | 2025-12-23 | 42 | 2         | 2025-12-23 | pecan praline bites     |&#xA;| 2         | Nolan Garcia  | 2025-12-24 |    |           |            |                         |&#xA;| 2         | Nolan Garcia  | 2025-12-25 | 43 | 2         | 2025-12-25 | santa hat               |&#xA;| 3         | Yara Chen     | 2025-12-15 | 57 | 3         | 2025-12-15 | peppermint bark bites   |&#xA;| 3         | Yara Chen     | 2025-12-16 |    |           |            |                         |&#xA;| 3         | Yara Chen     | 2025-12-17 |    |           |            |                         |&#xA;| 3         | Yara Chen     | 2025-12-18 | 58 | 3         | 2025-12-18 | cheddar popcorn         |&#xA;| 3         | Yara Chen     | 2025-12-19 |    |           |            |                         |&#xA;| 3         | Yara Chen     | 2025-12-20 | 59 | 3         | 2025-12-20 | festive notepad         |&#xA;| 3         | Yara Chen     | 2025-12-21 | 60 | 3         | 2025-12-21 | fruit assortment        |&#xA;| 3         | Yara Chen     | 2025-12-22 |    |           |            |                         |&#xA;+-----------+---------------+------------+----+-----------+------------+-------------------------+&#xA;...&#xA;...&#xA;| 249       | Jude Bautista     | 2025-12-24 | 
     |           |            |                            |&#xA;| 249       | Jude Bautista     | 2025-12-25 | 5073 | 249       | 2025-12-25 | almond brittle             |&#xA;| 250       | Bianca Muller     | 2025-12-15 | 5086 | 250       | 2025-12-15 | cocoa mix bundle           |&#xA;| 250       | Bianca Muller     | 2025-12-16 | 5087 | 250       | 2025-12-16 | cookie decorating kit      |&#xA;| 250       | Bianca Muller     | 2025-12-17 | 5088 | 250       | 2025-12-17 | shortbread cookie tin      |&#xA;| 250       | Bianca Muller     | 2025-12-18 |      |           |            |                            |&#xA;| 250       | Bianca Muller     | 2025-12-19 | 5089 | 250       | 2025-12-19 | snowflake candle           |&#xA;| 250       | Bianca Muller     | 2025-12-20 |      |           |            |                            |&#xA;| 250       | Bianca Muller     | 2025-12-21 | 5090 | 250       | 2025-12-21 | trail mix trio             |&#xA;| 250       | Bianca Muller     | 2025-12-22 | 5091 | 250       | 2025-12-22 | shortbread cookie tin      |&#xA;| 250       | Bianca Muller     | 2025-12-23 |      |           |            |                            |&#xA;| 250       | Bianca Muller     | 2025-12-24 |      |           |            |                            |&#xA;| 250       | Bianca Muller     | 2025-12-25 | 5092 | 250       | 2025-12-25 | gingerbread cookie kit     |&#xA;+-----------+-------------------+------------+------+-----------+------------+----------------------------+&#xA;Run Time: real 0.028 user 0.013392 sys 0.013096&#xA;sqlite&gt;&#xA;```&#xA;&#xA;This will give all the gifts that have been assigned as well as not assigned for each family on each date from `2025-12-15` to `2025-12-25`.&#xA;&#xA;Now, you can see, what we need and what we don&#39;t we simply can get the gifts or records with the `family_id` in the `deliveries_assigned` table as `NULL`, since there was no record for the family_id in the `deliveries_assigned` table for that 
date.&#xA;&#xA;&#xA;```sql&#xA;WITH RECURSIVE dates(gift_date) AS (&#xA;    SELECT &#39;2025-12-15&#39;&#xA;    UNION ALL&#xA;    SELECT date(gift_date, &#39;+1 day&#39;)&#xA;    FROM dates&#xA;    WHERE gift_date &lt; &#39;2025-12-25&#39;&#xA;),&#xA;combination AS (&#xA;    SELECT families.id AS family_id, families.family_name as family_name, dates.gift_date as gift_date&#xA;    FROM families&#xA;    CROSS JOIN dates&#xA;)&#xA;SELECT&#xA;    *&#xA;FROM combination&#xA;LEFT JOIN deliveries_assigned&#xA;    ON deliveries_assigned.family_id = combination.family_id&#xA;    AND deliveries_assigned.gift_date = combination.gift_date&#xA;WHERE deliveries_assigned.family_id IS NULL&#xA;```&#xA;So, a simple `WHERE` clause with deliveries_assigned.family_id `IS NULL` will give us the missing dates for each family.&#xA;&#xA;```&#xA;sqlite&gt; WITH RECURSIVE dates(gift_date) AS (&#xA;    SELECT &#39;2025-12-15&#39;&#xA;    UNION ALL&#xA;    SELECT date(gift_date, &#39;+1 day&#39;)&#xA;    FROM dates&#xA;    WHERE gift_date &lt; &#39;2025-12-25&#39;&#xA;),&#xA;combination AS (&#xA;    SELECT families.id AS family_id, families.family_name as family_name, dates.gift_date as gift_date&#xA;    FROM families&#xA;    CROSS JOIN dates&#xA;)&#xA;SELECT&#xA;    *&#xA;FROM combination&#xA;LEFT JOIN deliveries_assigned&#xA;    ON deliveries_assigned.family_id = combination.family_id&#xA;    AND deliveries_assigned.gift_date = combination.gift_date&#xA;WHERE deliveries_assigned.family_id IS NULL&#xA;   ...&gt; ;&#xA;+-----------+-------------------+------------+----+-----------+-----------+-----------+&#xA;| family_id |    family_name    | gift_date  | id | family_id | gift_date | gift_name |&#xA;+-----------+-------------------+------------+----+-----------+-----------+-----------+&#xA;| 1         | Isla Martinez     | 2025-12-17 |    |           |           |           |&#xA;| 2         | Nolan Garcia      | 2025-12-19 |    |           |           |           |&#xA;| 2         | Nolan 
Garcia      | 2025-12-22 |    |           |           |           |&#xA;| 2         | Nolan Garcia      | 2025-12-24 |    |           |           |           |&#xA;| 3         | Yara Chen         | 2025-12-16 |    |           |           |           |&#xA;| 3         | Yara Chen         | 2025-12-17 |    |           |           |           |&#xA;| 3         | Yara Chen         | 2025-12-19 |    |           |           |           |&#xA;| 3         | Yara Chen         | 2025-12-22 |    |           |           |           |&#xA;| 4         | Tariq Nguyen      | 2025-12-16 |    |           |           |           |&#xA;...&#xA;...&#xA;| 247       | Malik Kim         | 2025-12-18 |    |           |           |           |&#xA;| 247       | Malik Kim         | 2025-12-20 |    |           |           |           |&#xA;| 247       | Malik Kim         | 2025-12-21 |    |           |           |           |&#xA;| 248       | Tariq Flores      | 2025-12-16 |    |           |           |           |&#xA;| 248       | Tariq Flores      | 2025-12-17 |    |           |           |           |&#xA;| 248       | Tariq Flores      | 2025-12-18 |    |           |           |           |&#xA;| 249       | Jude Bautista     | 2025-12-16 |    |           |           |           |&#xA;| 249       | Jude Bautista     | 2025-12-20 |    |           |           |           |&#xA;| 249       | Jude Bautista     | 2025-12-21 |    |           |           |           |&#xA;| 249       | Jude Bautista     | 2025-12-23 |    |           |           |           |&#xA;| 249       | Jude Bautista     | 2025-12-24 |    |           |           |           |&#xA;| 250       | Bianca Muller     | 2025-12-18 |    |           |           |           |&#xA;| 250       | Bianca Muller     | 2025-12-20 |    |           |           |           |&#xA;| 250       | Bianca Muller     | 2025-12-23 |    |           |           |           |&#xA;| 250       | Bianca Muller     | 2025-12-24 |    |           |         
  |           |&#xA;+-----------+-------------------+------------+----+-----------+-----------+-----------+&#xA;Run Time: real 0.014 user 0.011508 sys 0.002046&#xA;sqlite&gt;&#xA;```&#xA;&#xA;All right, we now need to order by `unassigned_date` and `name` which are the `dates` from the `combination` table and the `family_name` from the `families` table.&#xA;&#xA;```sql&#xA;WITH RECURSIVE dates(gift_date) AS (&#xA;    SELECT &#39;2025-12-15&#39;&#xA;    UNION ALL&#xA;    SELECT date(gift_date, &#39;+1 day&#39;)&#xA;    FROM dates&#xA;    WHERE gift_date &lt; &#39;2025-12-25&#39;&#xA;),&#xA;combination AS (&#xA;    SELECT families.id AS family_id, families.family_name as family_name, dates.gift_date as gift_date&#xA;    FROM families&#xA;    CROSS JOIN dates&#xA;)&#xA;SELECT&#xA;    combination.gift_date as unassigned_date,&#xA;    family_name&#xA;FROM combination&#xA;LEFT JOIN deliveries_assigned&#xA;    ON deliveries_assigned.family_id = combination.family_id&#xA;    AND deliveries_assigned.gift_date = combination.gift_date&#xA;WHERE deliveries_assigned.family_id IS NULL&#xA;ORDER BY unassigned_date, family_name;&#xA;```&#xA;&#xA;```&#xA;+-----------------+-------------------+&#xA;| unassigned_date |    family_name    |&#xA;+-----------------+-------------------+&#xA;| 2025-12-15      | Adil Rossi        |&#xA;| 2025-12-15      | Aisha Connor      |&#xA;| 2025-12-15      | Amina Perez       |&#xA;| 2025-12-15      | Amina Wong        |&#xA;| 2025-12-15      | Andre Flores      |&#xA;| 2025-12-15      | Anya Singh        |&#xA;| 2025-12-15      | Arjun Wong        |&#xA;| 2025-12-15      | Bianca Connor     |&#xA;| 2025-12-15      | Caleb Petrov      |&#xA;| 2025-12-15      | Caleb Roberts     |&#xA;| 2025-12-15      | Carmen Carter     |&#xA;| 2025-12-15      | Carmen Garcia     |&#xA;| 2025-12-15      | Casey Flores      |&#xA;| 2025-12-15      | Chi Hughes        |&#xA;| 2025-12-15      | Clara Johnson     |&#xA;| 2025-12-15      | Dara Bautista     |&#xA;| 
2025-12-15      | David Ramirez     |&#xA;| 2025-12-15      | Elias Petrov      |&#xA;| 2025-12-15      | Elias Petrov      |&#xA;| 2025-12-15      | Ethan Flores      |&#xA;| 2025-12-15      | Eva Gonzalez      |&#xA;...&#xA;...&#xA;| 2025-12-25      | Owen Park         |&#xA;| 2025-12-25      | Priya Khan        |&#xA;| 2025-12-25      | Rafael Singh      |&#xA;| 2025-12-25      | Ravi Abdallah     |&#xA;| 2025-12-25      | Ravi Mitchell     |&#xA;| 2025-12-25      | Rosa Turner       |&#xA;| 2025-12-25      | Sara Jensen       |&#xA;| 2025-12-25      | Sara Lopez        |&#xA;| 2025-12-25      | Sara Rossi        |&#xA;| 2025-12-25      | Sarah Phillips    |&#xA;| 2025-12-25      | Seth Garcia       |&#xA;| 2025-12-25      | Sienna Lopez      |&#xA;| 2025-12-25      | Sofia Nakamura    |&#xA;| 2025-12-25      | Tariq Nguyen      |&#xA;| 2025-12-25      | Uma Ali           |&#xA;| 2025-12-25      | Uma Phillips      |&#xA;| 2025-12-25      | Yara Chen         |&#xA;| 2025-12-25      | Yara Rossi        |&#xA;| 2025-12-25      | Yusuf Ali         |&#xA;| 2025-12-25      | Yusuf Hansen      |&#xA;| 2025-12-25      | Yusuf Perez       |&#xA;| 2025-12-25      | Yusuf Rossi       |&#xA;| 2025-12-25      | Zara Khan         |&#xA;+-----------------+-------------------+&#xA;Run Time: real 0.008 user 0.004613 sys 0.002845&#xA;sqlite&gt;&#xA;```&#xA;&#xA;Phew!&#xA;Ok, that looks a mamoth query.&#xA;&#xA;```sql&#xA;WITH RECURSIVE dates(gift_date) AS (&#xA;    SELECT &#39;2025-12-15&#39;&#xA;    UNION ALL&#xA;    SELECT date(gift_date, &#39;+1 day&#39;)&#xA;    FROM dates&#xA;    WHERE gift_date &lt; &#39;2025-12-25&#39;&#xA;),&#xA;combination AS (&#xA;    SELECT families.id AS family_id, families.family_name as family_name, dates.gift_date as gift_date&#xA;    FROM families&#xA;    CROSS JOIN dates&#xA;)&#xA;SELECT&#xA;    combination.gift_date as unassigned_date,&#xA;    family_name&#xA;FROM combination&#xA;LEFT JOIN deliveries_assigned&#xA;    ON 
deliveries_assigned.family_id = combination.family_id&#xA;    AND deliveries_assigned.gift_date = combination.gift_date&#xA;WHERE deliveries_assigned.family_id IS NULL&#xA;ORDER BY unassigned_date, family_name;&#xA;```&#xA;&#xA;So, to recap:&#xA;&#xA;- Generate a table of dates from `2025-12-15` to `2025-12-25` using a recursive CTE.&#xA;- Create a table of combinations of `families` and `dates` using a cross join.&#xA;- Left join the `deliveries_assigned` table with the combination table.&#xA;- Filter out the rows where the `family_id` in the `deliveries_assigned` table is `NULL`.&#xA;- Order the results by `unassigned_date` and `name`, respectively, both in ascending order.&#xA;&#xA;Simple, right?&#xA;&#xA;That&#39;s it for day 6.&#xA;&#xA;It&#39;s getting serious out there!&#xA;&#xA;&#xA;&#xA;</content>
      <type>sqlog</type>
    </item>
    <item>
      <title>Advent of SQL 2025 Day 5: EchoTrack Wrapped</title>
      <link>https://meetgor.com/sqlog/advent-of-sql-2025-day-5</link>
      <description>Advent of SQL Day 5 - EchoTrack Wrapped It is day 5 of advent of SQL. Let&#39;s get rollin. It looks like a good problem. I am excited! Here&#39;s the SQL to get starte</description>
      <pubDate>Sat, 20 Dec 2025 00:00:00 UTC</pubDate>
      <content>&#xA;&#xA;## Advent of SQL Day 5 - EchoTrack Wrapped&#xA;&#xA;It is day 5 of advent of SQL.&#xA;&#xA;Let&#39;s get rollin. It looks like a good problem. I am excited!&#xA;&#xA;Here&#39;s the SQL to get started.&#xA;&#xA;```sql&#xA;DROP TABLE IF EXISTS listening_logs;&#xA;&#xA;CREATE TABLE listening_logs (&#xA;    id INTEGER PRIMARY KEY,&#xA;    user_name TEXT,&#xA;    artist TEXT,&#xA;    played_at TIMESTAMP,&#xA;    content_type TEXT&#xA;);&#xA;&#xA;INSERT INTO listening_logs (id, user_name, artist, played_at, content_type) VALUES&#xA;    (1, &#39;Zoe Garcia&#39;, &#39;Arijit Singh&#39;, &#39;2025-04-08 00:21:53&#39;, &#39;song&#39;),&#xA;    (2, &#39;Zoe Garcia&#39;, &#39;Huberman Lab&#39;, &#39;2025-11-10 19:18:47&#39;, &#39;podcast&#39;),&#xA;    (3, &#39;Zoe Garcia&#39;, &#39;Huberman Lab&#39;, &#39;2025-01-20 15:31:02&#39;, &#39;podcast&#39;),&#xA;    (4, &#39;Zoe Garcia&#39;, &#39;Arijit Singh&#39;, &#39;2025-01-06 17:33:11&#39;, &#39;song&#39;),&#xA;    (5, &#39;Zoe Garcia&#39;, &#39;Candace&#39;, &#39;2025-03-06 14:07:54&#39;, &#39;podcast&#39;),&#xA;    (6, &#39;Zoe Garcia&#39;, &#39;Arijit Singh&#39;, &#39;2025-06-05 17:57:59&#39;, &#39;song&#39;),&#xA;    (7, &#39;Zoe Garcia&#39;, &#39;Huberman Lab&#39;, &#39;2025-01-01 20:05:22&#39;, &#39;podcast&#39;),&#xA;    (8, &#39;Zoe Garcia&#39;, &#39;Huberman Lab&#39;, &#39;2025-11-01 12:04:03&#39;, &#39;podcast&#39;),&#xA;    (9, &#39;Zoe Garcia&#39;, &#39;Arijit Singh&#39;, &#39;2025-09-28 12:42:12&#39;, &#39;song&#39;),&#xA;    (10, &#39;Zoe Garcia&#39;, &#39;The Ben Shapiro Show&#39;, &#39;2025-09-15 01:05:15&#39;, &#39;podcast&#39;),&#xA;    (11, &#39;Zoe Garcia&#39;, &#39;Arijit Singh&#39;, &#39;2025-04-26 05:31:02&#39;, &#39;song&#39;),&#xA;    (12, &#39;Zoe Garcia&#39;, &#39;Arijit Singh&#39;, &#39;2025-10-13 17:34:03&#39;, &#39;song&#39;),&#xA;    (13, &#39;Zoe Garcia&#39;, &#39;Mariah Carey&#39;, &#39;2025-01-20 11:21:37&#39;, &#39;song&#39;),&#xA;    (14, &#39;Zoe Garcia&#39;, 
&#39;Arijit Singh&#39;, &#39;2025-11-28 03:55:31&#39;, &#39;song&#39;),&#xA;    (15, &#39;Zoe Garcia&#39;, &#39;Arijit Singh&#39;, &#39;2025-07-17 05:18:16&#39;, &#39;song&#39;),&#xA;    (16, &#39;Zoe Garcia&#39;, &#39;Arijit Singh&#39;, &#39;2025-08-20 02:07:45&#39;, &#39;song&#39;),&#xA;    (17, &#39;Zoe Garcia&#39;, &#39;Kendrick Lamar&#39;, &#39;2025-02-16 13:25:27&#39;, &#39;song&#39;),&#xA;    (18, &#39;Zoe Garcia&#39;, &#39;Huberman Lab&#39;, &#39;2025-08-13 19:55:00&#39;, &#39;podcast&#39;),&#xA;    (19, &#39;Zoe Garcia&#39;, &#39;Bruno Mars&#39;, &#39;2025-09-13 07:09:43&#39;, &#39;song&#39;),&#xA;    (20, &#39;Zoe Garcia&#39;, &#39;Arijit Singh&#39;, &#39;2025-04-12 06:30:44&#39;, &#39;song&#39;);&#xA;```&#xA;&#xA;Let&#39;s open a SQLite shell and get started.&#xA;&#xA;```&#xA;$ sqlite3&#xA;SQLite version 3.50.4 2025-07-30 19:33:53&#xA;Enter &#34;.help&#34; for usage hints.&#xA;Connected to a transient in-memory database.&#xA;Use &#34;.open FILENAME&#34; to reopen on a persistent database.&#xA;sqlite&gt; .read day5-inserts.sql&#xA;sqlite&gt; .schema&#xA;CREATE TABLE listening_logs (&#xA;    id INTEGER PRIMARY KEY,&#xA;    user_name TEXT,&#xA;    artist TEXT,&#xA;    played_at TIMESTAMP,&#xA;    content_type TEXT&#xA;);&#xA;sqlite&gt; .mode table&#xA;sqlite&gt; SELECT * FROM listening_logs LIMIT 20;&#xA;+----+------------+----------------------+---------------------+--------------+&#xA;| id | user_name  |        artist        |      played_at      | content_type |&#xA;+----+------------+----------------------+---------------------+--------------+&#xA;| 1  | Zoe Garcia | Arijit Singh         | 2025-04-08 00:21:53 | song         |&#xA;| 2  | Zoe Garcia | Huberman Lab         | 2025-11-10 19:18:47 | podcast      |&#xA;| 3  | Zoe Garcia | Huberman Lab         | 2025-01-20 15:31:02 | podcast      |&#xA;| 4  | Zoe Garcia | Arijit Singh         | 2025-01-06 17:33:11 | song         |&#xA;| 5  | Zoe Garcia | Candace              | 2025-03-06 14:07:54 | podcast      
|&#xA;| 6  | Zoe Garcia | Arijit Singh         | 2025-06-05 17:57:59 | song         |&#xA;| 7  | Zoe Garcia | Huberman Lab         | 2025-01-01 20:05:22 | podcast      |&#xA;| 8  | Zoe Garcia | Huberman Lab         | 2025-11-01 12:04:03 | podcast      |&#xA;| 9  | Zoe Garcia | Arijit Singh         | 2025-09-28 12:42:12 | song         |&#xA;| 10 | Zoe Garcia | The Ben Shapiro Show | 2025-09-15 01:05:15 | podcast      |&#xA;| 11 | Zoe Garcia | Arijit Singh         | 2025-04-26 05:31:02 | song         |&#xA;| 12 | Zoe Garcia | Arijit Singh         | 2025-10-13 17:34:03 | song         |&#xA;| 13 | Zoe Garcia | Mariah Carey         | 2025-01-20 11:21:37 | song         |&#xA;| 14 | Zoe Garcia | Arijit Singh         | 2025-11-28 03:55:31 | song         |&#xA;| 15 | Zoe Garcia | Arijit Singh         | 2025-07-17 05:18:16 | song         |&#xA;| 16 | Zoe Garcia | Arijit Singh         | 2025-08-20 02:07:45 | song         |&#xA;| 17 | Zoe Garcia | Kendrick Lamar       | 2025-02-16 13:25:27 | song         |&#xA;| 18 | Zoe Garcia | Huberman Lab         | 2025-08-13 19:55:00 | podcast      |&#xA;| 19 | Zoe Garcia | Bruno Mars           | 2025-09-13 07:09:43 | song         |&#xA;| 20 | Zoe Garcia | Arijit Singh         | 2025-04-12 06:30:44 | song         |&#xA;+----+------------+----------------------+---------------------+--------------+&#xA;sqlite&gt;&#xA;sqlite&gt; SELECT COUNT(*) FROM listening_logs;&#xA;+----------+&#xA;| COUNT(*) |&#xA;+----------+&#xA;| 18174    |&#xA;+----------+&#xA;sqlite&gt;&#xA;```&#xA;&#xA;OK! We have around 18k records in a single table! That&#39;s a lot, but not too much!&#xA;&#xA;Let&#39;s see what we have to do.&#xA;&#xA;## Problem&#xA;&#xA;&gt; Write a query that returns the top 3 artists per user. Order the results by the most played&#xA;&#xA;Alright, this is quite a problem to solve. If you are thinking it&#39;s easy peasy, then hold on!&#xA;&#xA;We can clearly see that we have 2 columns of interest.&#xA;1. `user_name`&#xA;2. 
`artist`&#xA;&#xA;We need to group, for each user, their most played artists, and then rank the top 3 artists per user.&#xA;&#xA;Each entry is a song or podcast that the user has listened to.&#xA;&#xA;We need to aggregate, group, and then rank, and then what?&#xA;&#xA;How would you chunk out the top three?&#xA;It&#39;s time to put your SQL glasses and gloves on, it&#39;s getting colder!&#xA;&#xA;### Counting Artists Per User&#xA;&#xA;Let&#39;s take one step at a time: we first need to count how many times each artist has been played by each user.&#xA;&#xA;```sql&#xA;SELECT&#xA;    user_name,&#xA;    artist,&#xA;    COUNT(*) AS play_count&#xA;FROM listening_logs&#xA;GROUP BY user_name, artist&#xA;ORDER BY user_name, play_count DESC, artist;&#xA;```&#xA;OK, simple, right?&#xA;&#xA;Select the usernames and artists, group by username and artist, and count the number of times the user played that artist. Then simply order by username, play count, and artist (if there is a tie in count, the artist name acts as the tie-breaker).&#xA;&#xA;But this gives all the artists, not just the top 3. It ranks them in decreasing order of the number of plays, but we only want to list the top 3 per user.&#xA;&#xA;That&#39;s tricky!&#xA;&#xA;### With SELF JOIN&#xA;&#xA;What if we join the table with itself?&#xA;&#xA;Then we can compare the number of times the user has played one artist with the number of times the user has played another artist. 
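
Before wiring that comparison up in SQL, here is a tiny cross-check of what &#34;top 3 per user&#34; should produce, sketched with Python&#39;s built-in sqlite3 module. The miniature dataset below is made up purely for illustration, not taken from the puzzle data.

```python
import sqlite3
from itertools import groupby

# Made-up toy data: one user, four artists with different play counts.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE listening_logs (user_name TEXT, artist TEXT)")
plays = [("zoe", "A")] * 4 + [("zoe", "B")] * 3 + [("zoe", "C")] * 2 + [("zoe", "D")]
conn.executemany("INSERT INTO listening_logs VALUES (?, ?)", plays)

# Same grouped count as before, ordered so the most played come first per user.
counts = conn.execute("""
    SELECT user_name, artist, COUNT(*) AS play_count
    FROM listening_logs
    GROUP BY user_name, artist
    ORDER BY user_name, play_count DESC, artist
""").fetchall()

# "Top 3" is then just the first three rows of each user's group.
top3 = {user: [artist for _, artist, _ in grp][:3]
        for user, grp in groupby(counts, key=lambda r: r[0])}
print(top3)  # {'zoe': ['A', 'B', 'C']}
```

Back to the self join.
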
Then we can remove the artist if it has played less than 3 times, this then will filter out the top 3 for us.&#xA;&#xA;```sql&#xA;SELECT &#xA;    a.user_name,&#xA;    a.artist AS current_artist,&#xA;    a.play_count AS current_plays,&#xA;    b.artist AS other_artist,&#xA;    b.play_count AS other_plays&#xA;FROM (&#xA;    SELECT user_name, artist, COUNT(*) AS play_count&#xA;    FROM listening_logs&#xA;    GROUP BY user_name, artist&#xA;) a&#xA;LEFT JOIN (&#xA;    SELECT user_name, artist, COUNT(*) AS play_count&#xA;    FROM listening_logs&#xA;    GROUP BY user_name, artist&#xA;) b ON a.user_name = b.user_name&#xA;ORDER BY a.user_name, a.play_count DESC, b.play_count DESC;&#xA;```&#xA;&#xA;This would create a cross join of sorts between the same table.&#xA;&#xA;- Select the required columns (user_name, artist and count)&#xA;Look at this part&#xA;This is `a`&#xA;```sql&#xA;-- a&#xA;SELECT user_name, artist, COUNT(*) AS play_count&#xA;FROM listening_logs&#xA;GROUP BY user_name, artist&#xA;ORDER BY user_name, play_count DESC, artist&#xA;```&#xA;&#xA;Then we need to join this with itself&#xA;&#xA;- Select the required columns from the same table (user_name, artist and count)&#xA;This is `b`&#xA;```sql&#xA;-- b&#xA;SELECT user_name, artist, COUNT(*) AS play_count&#xA;FROM listening_logs&#xA;GROUP BY user_name, artist&#xA;ORDER BY user_name, play_count DESC, artist&#xA;```&#xA;Both `a` and `b` are the same, just that we want a cross join of sorts.&#xA;&#xA;And then we need to join `a` and `b`&#xA;- Join with itself on the user_name&#xA;- Order by user_name, play_count desc, artist&#xA;&#xA;```sql&#xA;SELECT &#xA;    a.user_name,&#xA;    a.artist AS current_artist,&#xA;    a.play_count AS current_plays,&#xA;    b.artist AS other_artist,&#xA;    b.play_count AS other_plays&#xA;FROM (&#xA;    SELECT user_name, artist, COUNT(*) AS play_count&#xA;    FROM listening_logs&#xA;    GROUP BY user_name, artist&#xA;) a&#xA;LEFT JOIN (&#xA;    SELECT user_name, artist, COUNT(*) AS 
play_count&#xA;    FROM listening_logs&#xA;    GROUP BY user_name, artist&#xA;) b ON a.user_name = b.user_name&#xA;ORDER BY a.user_name, a.play_count DESC, b.play_count DESC;&#xA;```&#xA;&#xA;Now, we want to keep the rows where the `b` table has play count greater than `a` table, or if they are equal, then if the `b` table has artist less than `a` table.&#xA;&#xA;To do that we can continue the `JOIN` condition with `AND` and add `b.play_count &gt; a.play_count` and `b.artist &lt; a.artist` in case of a tie.&#xA;The idea here is subtle:&#xA;- For a given artist `a`, we count how many artists `b` (for the same user) have more plays, or the same plays but come earlier alphabetically.&#xA;- If fewer than 3 artists beat `a`, then `a` must be in the top 3.&#xA;&#xA;So the query becomes this:&#xA;&#xA;```sql&#xA;SELECT &#xA;    a.user_name,&#xA;    a.artist,&#xA;    a.play_count&#xA;FROM (&#xA;    SELECT user_name, artist, COUNT(*) AS play_count&#xA;    FROM listening_logs&#xA;    GROUP BY user_name, artist&#xA;) a&#xA;LEFT JOIN (&#xA;    SELECT user_name, artist, COUNT(*) AS play_count&#xA;    FROM listening_logs&#xA;    GROUP BY user_name, artist&#xA;) b ON a.user_name = b.user_name&#xA;   AND (&#xA;       b.play_count &gt; a.play_count&#xA;       OR (b.play_count = a.play_count AND b.artist &lt; a.artist)&#xA;   )&#xA;GROUP BY a.user_name, a.artist, a.play_count&#xA;HAVING COUNT(b.artist) &lt; 3&#xA;ORDER BY a.user_name, a.play_count DESC, a.artist;&#xA;```&#xA;```&#xA;sqlite&gt; SELECT&#xA;    a.user_name,&#xA;    a.artist,&#xA;    a.play_count&#xA;FROM (&#xA;    SELECT user_name, artist, COUNT(*) AS play_count&#xA;    FROM listening_logs&#xA;    GROUP BY user_name, artist&#xA;) a&#xA;LEFT JOIN (&#xA;    SELECT user_name, artist, COUNT(*) AS play_count&#xA;    FROM listening_logs&#xA;    GROUP BY user_name, artist&#xA;) b ON a.user_name = b.user_name&#xA;   AND (&#xA;       b.play_count &gt; a.play_count&#xA;       OR (b.play_count = a.play_count AND b.artist &lt; 
a.artist)&#xA;   )&#xA;GROUP BY a.user_name, a.artist, a.play_count&#xA;HAVING COUNT(b.artist) &lt; 3&#xA;ORDER BY a.user_name, a.play_count DESC, a.artist;&#xA;+-------------------+--------------------------------------------+------------+&#xA;|     user_name     |                   artist                   | play_count |&#xA;+-------------------+--------------------------------------------+------------+&#xA;| Abigail Hernandez | Ed Sheeran                                 | 78         |&#xA;| Abigail Hernandez | Rotten Mango                               | 15         |&#xA;| Abigail Hernandez | Billie Eilish                              | 4          |&#xA;| Adrian Cox        | Kendrick Lamar                             | 128        |&#xA;| Adrian Cox        | Stuff You Should Know                      | 30         |&#xA;| Adrian Cox        | Fuerza Regida                              | 6          |&#xA;| Alex Rivera       | Ed Sheeran                                 | 274        |&#xA;| Alex Rivera       | Call Her Daddy (Alex Cooper)               | 42         |&#xA;| Alex Rivera       | Green Day                                  | 11         |&#xA;| Anders Nilsson    | Snow Patrol                                | 101        |&#xA;| Anders Nilsson    | SmartLess                                  | 29         |&#xA;| Anders Nilsson    | Blink-182                                  | 5          |&#xA;| Anthony King      | Pentatonix                                 | 114        |&#xA;| Anthony King      | The Tucker Carlson Show                    | 14         |&#xA;| Anthony King      | Angels &amp; Airwaves                          | 5          |&#xA;...&#xA;...&#xA;| Zara Sheikh       | Green Day                                  | 138        |&#xA;| Zara Sheikh       | This Past Weekend w Theo Von               | 20         |&#xA;| Zara Sheikh       | The Beatles                                | 7          |&#xA;| Zoe Garcia        | Arijit Singh                     
          | 50         |&#xA;| Zoe Garcia        | Huberman Lab                               | 14         |&#xA;| Zoe Garcia        | Kendrick Lamar                             | 5          |&#xA;| Zoe Wilson        | Pentatonix                                 | 65         |&#xA;| Zoe Wilson        | The Mel Robbins Podcast                    | 14         |&#xA;| Zoe Wilson        | Angels &amp; Airwaves                          | 4          |&#xA;| Zuri Okafor       | Kendrick Lamar                             | 96         |&#xA;| Zuri Okafor       | The Tim Dillon Show                        | 14         |&#xA;| Zuri Okafor       | Ed Sheeran                                 | 5          |&#xA;+-------------------+--------------------------------------------+------------+&#xA;Run Time: real 0.088 user 0.082510 sys 0.003999&#xA;```&#xA;This is the final query. It looks long, and it might not be the best way to do it, but it&#39;s definitely not the worst way to do it.&#xA;&#xA;### With Window Functions&#xA;&#xA;Ok! I don&#39;t know window functions, but I searched and found that we could partition things before we group them or order them in the final result set. That&#39;s what we want, right?&#xA;&#xA;We had grouped the logs for each username and artist and counted the number of plays. Now we want to rank the artists for each user in decreasing order of play count.&#xA;&#xA;So, we start with the same thing:&#xA;&#xA;```sql&#xA;SELECT&#xA;    user_name,&#xA;    artist,&#xA;    COUNT(*) AS play_count&#xA;FROM listening_logs&#xA;GROUP BY user_name, artist&#xA;ORDER BY user_name, play_count DESC, artist;&#xA;```&#xA;&#xA;You can see that we have the `user_name` column. What if we could separate out the users, and then rank the artists for each user separately?&#xA;&#xA;For that we can use the `ROW_NUMBER` window function. 
This function takes a `PARTITION BY` clause, which lets us create separate partitions based on certain columns, and then we can order the rows within each partition using the `ORDER BY` clause, just as we do in an ordinary statement. The resulting row number acts as a rank, which we can use to rank each artist for a user based on the number of times the user has played that artist.&#xA;&#xA;```sql&#xA;SELECT &#xA;    user_name,&#xA;    artist,&#xA;    COUNT(*) AS play_count,&#xA;    ROW_NUMBER() OVER (&#xA;        PARTITION BY user_name &#xA;        ORDER BY COUNT(*) DESC, artist&#xA;    ) AS ranks&#xA;FROM listening_logs&#xA;GROUP BY user_name, artist&#xA;```&#xA;&#xA;Here, `ROW_NUMBER` is a window function that assigns a sequential number to each row within its partition. It partitions the result set by `user_name` and orders the rows by `COUNT(*)` in decreasing order, with ties broken by `artist` name.&#xA;&#xA;So, imagine this table:&#xA;&#xA;```&#xA;+-------------------+-------------------------------------+------------+-------+&#xA;|     user_name     |               artist                | play_count | ranks |&#xA;+-------------------+-------------------------------------+------------+-------+&#xA;| Abigail Hernandez | Ed Sheeran                          | 78         | 1     |&#xA;| Abigail Hernandez | Rotten Mango                        | 15         | 2     |&#xA;| Abigail Hernandez | Billie Eilish                       | 4          | 3     |&#xA;| Abigail Hernandez | Hans Zimmer                         | 4          | 4     |&#xA;| Abigail Hernandez | John Legend                         | 3          | 5     |&#xA;| Abigail Hernandez | John Williams                       | 3          | 6     |&#xA;| Abigail Hernandez | The Beatles                         | 3          | 7     |&#xA;| Abigail Hernandez | The Rolling Stones                  | 3          | 8     |&#xA;| Abigail Hernandez | Angels &amp; Airwaves                   | 2          | 9     |&#xA;| Abigail Hernandez | Bad Bunny                           | 2          | 10   
 |&#xA;| Abigail Hernandez | Beyonce                             | 2          | 11    |&#xA;| Abigail Hernandez | Coldplay                            | 2          | 12    |&#xA;| Abigail Hernandez | Foo Fighters                        | 2          | 13    |&#xA;| Abigail Hernandez | Fuerza Regida                       | 2          | 14    |&#xA;| Abigail Hernandez | Kendrick Lamar                      | 2          | 15    |&#xA;| Abigail Hernandez | Ludovico Einaudi                    | 2          | 16    |&#xA;| Abigail Hernandez | Mariah Carey                        | 2          | 17    |&#xA;| Abigail Hernandez | Pentatonix                          | 2          | 18    |&#xA;| Abigail Hernandez | SmartLess                           | 2          | 19    |&#xA;| Abigail Hernandez | The Weeknd                          | 2          | 20    |&#xA;| Abigail Hernandez | Adele                               | 1          | 21    |&#xA;| Abigail Hernandez | Ariana Grande                       | 1          | 22    |&#xA;| Abigail Hernandez | Armchair Expert With Dax Shepard    | 1          | 23    |&#xA;| Abigail Hernandez | Bruno Mars                          | 1          | 24    |&#xA;| Abigail Hernandez | Candace                             | 1          | 25    |&#xA;| Abigail Hernandez | Crime, Conspiracy, Cults and Murder | 1          | 26    |&#xA;| Abigail Hernandez | Green Day                           | 1          | 27    |&#xA;| Abigail Hernandez | Matt and Shanes Secret Podcast      | 1          | 28    |&#xA;| Abigail Hernandez | On Purpose With Jay Shetty          | 1          | 29    |&#xA;| Abigail Hernandez | Snow Patrol                         | 1          | 30    |&#xA;| Abigail Hernandez | Sufjan Stevens                      | 1          | 31    |&#xA;| Abigail Hernandez | Taylor Swift                        | 1          | 32    |&#xA;| Abigail Hernandez | The Mel Robbins Podcast             | 1          | 33    |&#xA;| Abigail Hernandez | Unseen          
                    | 1          | 34    |&#xA;| Adrian Cox        | Kendrick Lamar                      | 128        | 1     |&#xA;| Adrian Cox        | Stuff You Should Know               | 30         | 2     |&#xA;| Adrian Cox        | Fuerza Regida                       | 6          | 3     |&#xA;| Adrian Cox        | Pentatonix                          | 6          | 4     |&#xA;| Adrian Cox        | Taylor Swift                        | 6          | 5     |&#xA;| Adrian Cox        | Snow Patrol                         | 5          | 6     |&#xA;+-------------------+-------------------------------------+------------+-------+&#xA;```&#xA;&#xA;Now, we have ranked the artists for each user; there are `34` artists played by `Abigail Hernandez`, so there are `34` ranks. Next, we need to filter out the top 3 artists for each user. That would be simple, right?&#xA;&#xA;Just add the `WHERE` clause:&#xA;&#xA;```sql&#xA;SELECT&#xA;    user_name,&#xA;    artist,&#xA;    COUNT(*) AS play_count,&#xA;    ROW_NUMBER() OVER (&#xA;        PARTITION BY user_name &#xA;        ORDER BY COUNT(*) DESC, artist&#xA;    ) AS ranks&#xA;FROM listening_logs&#xA;WHERE ranks &lt;= 3&#xA;GROUP BY user_name, artist&#xA;ORDER BY user_name, play_count DESC, artist;&#xA;```&#xA;&#xA;Well, not really!&#xA;&#xA;```&#xA;sqlite&gt; SELECT&#xA;    user_name,&#xA;    artist,&#xA;    COUNT(*) AS play_count,&#xA;    ROW_NUMBER() OVER (&#xA;        PARTITION BY user_name &#xA;        ORDER BY COUNT(*) DESC, artist&#xA;    ) AS ranks&#xA;FROM listening_logs&#xA;WHERE ranks &lt;= 3&#xA;GROUP BY user_name, artist&#xA;ORDER BY user_name, play_count DESC, artist;&#xA;Run Time: real 0.000 user 0.000110 sys 0.000000&#xA;Parse error: misuse of aliased window function ranks&#xA;```&#xA;&#xA;We can&#39;t filter the window function column with a `WHERE` clause. At the time the `WHERE` clause is evaluated, the `SELECT` list (including `ranks`) has not been computed yet. 
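The same failure is easy to reproduce outside the shell. Below is a minimal sketch using Python&#39;s bundled `sqlite3` module; the tiny `listening_logs` data is made up just for this demo, and `ranks IN (1, 2, 3)` stands in for the same top-3 filter:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE listening_logs (user_name TEXT, artist TEXT);
INSERT INTO listening_logs VALUES
    ('Ann', 'Adele'), ('Ann', 'Adele'), ('Ann', 'Coldplay');
""")

# WHERE runs before the SELECT list is computed, so referencing
# the window-function alias here fails at prepare time.
bad_query = """
SELECT user_name, artist, COUNT(*) AS play_count,
       ROW_NUMBER() OVER (
           PARTITION BY user_name
           ORDER BY COUNT(*) DESC, artist
       ) AS ranks
FROM listening_logs
WHERE ranks IN (1, 2, 3)
GROUP BY user_name, artist;
"""
try:
    con.execute(bad_query)
    error_message = None
except sqlite3.OperationalError as err:
    error_message = str(err)

print(error_message)
```

SQLite rejects the statement before it ever touches the data, which is the same behavior the shell reports above.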
So `ranks` doesn&#39;t exist yet!&#xA;&#xA;So, now what? So close, yet so far!&#xA;&#xA;Let&#39;s wrap the `SELECT` in a subquery:&#xA;&#xA;```sql&#xA;SELECT user_name, artist, play_count&#xA;FROM (&#xA;    SELECT &#xA;        user_name,&#xA;        artist,&#xA;        COUNT(*) AS play_count,&#xA;        ROW_NUMBER() OVER (&#xA;            PARTITION BY user_name&#xA;            ORDER BY COUNT(*) DESC, artist&#xA;        ) AS ranks&#xA;    FROM listening_logs&#xA;    GROUP BY user_name, artist&#xA;) ranked&#xA;WHERE ranks &lt;= 3&#xA;ORDER BY user_name, play_count DESC, artist;&#xA;```&#xA;&#xA;And there we have it!&#xA;&#xA;We wrapped the `SELECT` in a subquery, and now we can filter the window function column with a `WHERE` clause.&#xA;&#xA;We can even do it with a CTE, i.e. a common table expression. That&#39;s just a subquery with a name. It comes in handy when the same subquery is referenced more than once, or just for readability. Here it is.&#xA;&#xA;```sql&#xA;WITH ranked AS (&#xA;    SELECT&#xA;        user_name,&#xA;        artist,&#xA;        COUNT(*) AS play_count,&#xA;        ROW_NUMBER() OVER (&#xA;            PARTITION BY user_name&#xA;            ORDER BY COUNT(*) DESC, artist&#xA;        ) AS ranks&#xA;    FROM listening_logs&#xA;    GROUP BY user_name, artist&#xA;)&#xA;SELECT user_name, artist, play_count&#xA;FROM ranked&#xA;WHERE ranks &lt;= 3&#xA;ORDER BY user_name, play_count DESC, artist;&#xA;```&#xA;&#xA;So, that is day 5. OK, it is getting a little tricky now!&#xA;&#xA;Off to day 6.&#xA;&#xA;</content>
      <type>sqlog</type>
    </item>
    <item>
      <title>Advent of SQL 2025 Day 4: WinterFest Volunteers</title>
      <link>https://meetgor.com/sqlog/advent-of-sql-2025-day-4</link>
      <description>Advent of SQL Day 4 WinterFest Volunteers It is day 4 of advent of SQL. No fuss, straight to the problem, the elves and humans are getting dumber as the days pr</description>
      <pubDate>Fri, 19 Dec 2025 00:00:00 UTC</pubDate>
      <content>&#xA;&#xA;## Advent of SQL Day 4 WinterFest Volunteers&#xA;&#xA;It is day 4 of advent of SQL.&#xA;&#xA;No fuss, straight to the problem, the elves and humans are getting dumber as the days progress.&#xA;&#xA;Let&#39;s download the SQL inserts for the day.&#xA;&#xA;And load it into a SQLite shell.&#xA;&#xA;```sql&#xA;DROP TABLE IF EXISTS official_shifts;&#xA;DROP TABLE IF EXISTS last_minute_signups;&#xA;&#xA;CREATE TABLE official_shifts (&#xA;    id INT PRIMARY KEY,&#xA;    volunteer_name TEXT,&#xA;    role TEXT,&#xA;    shift_time TEXT,&#xA;    age_group TEXT,&#xA;    code TEXT&#xA;);&#xA;&#xA;CREATE TABLE last_minute_signups (&#xA;    id INT PRIMARY KEY,&#xA;    volunteer_name TEXT,&#xA;    assigned_task TEXT,&#xA;    time_slot TEXT&#xA;);&#xA;&#xA;INSERT INTO official_shifts (id, volunteer_name, role, shift_time, age_group, code) VALUES&#xA;    (1, &#39;Jude Thompson&#39;, &#39;choir_assistant&#39;, &#39;12:00 PM&#39;, &#39;senior&#39;, NULL),&#xA;    (2, &#39;Mateo Cruz&#39;, &#39;choir_assistant&#39;, &#39;12:00 PM&#39;, &#39;senior&#39;, NULL),&#xA;    (3, &#39;Olivia Dubois&#39;, &#39;choir_assistant&#39;, &#39;2:00 PM&#39;, &#39;teen&#39;, &#39;A1&#39;),&#xA;    (4, &#39;Jeff Bezos&#39;, &#39;choir_assistant&#39;, &#39;10:00 AM&#39;, &#39;adult&#39;, &#39;X7&#39;),&#xA;    (5, &#39;Kian Rahimi&#39;, &#39;stage_setup&#39;, &#39;12:00 PM&#39;, &#39;adult&#39;, &#39;X7&#39;),&#xA;    (6, &#39;Haruto Sato&#39;, &#39;cocoa_station&#39;, &#39;10:00 AM&#39;, &#39;adult&#39;, &#39;X7&#39;),&#xA;    (7, &#39;Uma Singh&#39;, &#39;parking_support&#39;, &#39;10:00 AM&#39;, &#39;adult&#39;, NULL),&#xA;    (8, &#39;Owen Scott&#39;, &#39;parking_support&#39;, &#39;10:00 AM&#39;, &#39;adult&#39;, &#39;X7&#39;),&#xA;    (9, &#39;Adil Rahman&#39;, &#39;stage_setup&#39;, &#39;2:00 PM&#39;, &#39;adult&#39;, &#39;A1&#39;),&#xA;    (10, &#39;Aaron Diaz&#39;, &#39;choir_assistant&#39;, &#39;2:00 PM&#39;, &#39;senior&#39;, &#39;X7&#39;),&#xA;    (11, &#39;Carter 
Lewis&#39;, &#39;cocoa_station&#39;, &#39;10:00 AM&#39;, &#39;senior&#39;, &#39;B2&#39;),&#xA;    (12, &#39;Anya Pavlov&#39;, &#39;stage_setup&#39;, &#39;10:00 AM&#39;, &#39;senior&#39;, &#39;OLD&#39;),&#xA;    (13, &#39;Ethan Brown&#39;, &#39;stage_setup&#39;, &#39;2:00 PM&#39;, &#39;adult&#39;, &#39;A1&#39;),&#xA;    (14, &#39;Lucia Fernandez&#39;, &#39;choir_assistant&#39;, &#39;12:00 PM&#39;, &#39;senior&#39;, &#39;X7&#39;),&#xA;    (15, &#39;Casey Morgan&#39;, &#39;choir_assistant&#39;, &#39;12:00 PM&#39;, &#39;teen&#39;, &#39;OLD&#39;);&#xA;&#xA;INSERT INTO last_minute_signups (id, volunteer_name, assigned_task, time_slot) VALUES&#xA;    (1, &#39;Jude Thompson&#39;, &#39;Choir&#39;, &#39;noon&#39;),&#xA;    (2, &#39;Mateo Cruz&#39;, &#39;choir&#39;, &#39;noon&#39;),&#xA;    (3, &#39;Olivia Dubois&#39;, &#39;choir&#39;, &#39;2 PM&#39;),&#xA;    (4, &#39;Jeff Bezos&#39;, &#39;choir assistant&#39;, &#39;10AM&#39;),&#xA;    (5, &#39;Kian Rahimi&#39;, &#39;stage setup&#39;, &#39;noon&#39;),&#xA;    (6, &#39;Haruto Sato&#39;, &#39;cocoa station&#39;, &#39;10AM&#39;),&#xA;    (7, &#39;Uma Singh&#39;, &#39;parking_support&#39;, &#39;10AM&#39;),&#xA;    (8, &#39;Owen Scott&#39;, &#39;parking&#39;, &#39;10AM&#39;),&#xA;    (9, &#39;Adil Rahman&#39;, &#39;Stage-Setup&#39;, &#39;2 PM&#39;),&#xA;    (10, &#39;Aaron Diaz&#39;, &#39;Choir&#39;, &#39;2 PM&#39;),&#xA;    (11, &#39;Carter Lewis&#39;, &#39;Cocoa Station&#39;, &#39;10AM&#39;),&#xA;    (12, &#39;Anya Pavlov&#39;, &#39;stage_setup&#39;, &#39;10AM&#39;),&#xA;    (13, &#39;Olivia Brown&#39;, &#39;stage setup&#39;, &#39;2 PM&#39;),&#xA;    (14, &#39;Lena Fischer&#39;, &#39;cocoa station&#39;, &#39;2 pm&#39;),&#xA;    (15, &#39;Nolan Murphy&#39;, &#39;parking-support&#39;, &#39;10AM&#39;);&#xA;```&#xA;&#xA;Once the data is loaded, let&#39;s take a sneak peek.&#xA;&#xA;```sql&#xA;SELECT * FROM official_shifts LIMIT 15;&#xA;```&#xA;&#xA;```sql&#xA;SELECT * FROM last_minute_signups LIMIT 15;&#xA;```&#xA;&#xA;Let&#39;s count how 
many rows in each table we have (a `COUNT(*)` query returns a single row, so no `LIMIT` is needed):&#xA;&#xA;```sql&#xA;SELECT COUNT(*) FROM official_shifts;&#xA;SELECT COUNT(*) FROM last_minute_signups;&#xA;```&#xA;&#xA;Alright, the data is visible and we can head on to the problem statement.&#xA;&#xA;```&#xA;$ sqlite3&#xA;SQLite version 3.45.1 2024-01-30 16:01:20&#xA;Enter &#34;.help&#34; for usage hints.&#xA;Connected to a transient in-memory database.&#xA;Use &#34;.open FILENAME&#34; to reopen on a persistent database.&#xA;sqlite&gt; .read day4_inserts.sql&#xA;sqlite&gt; .schema&#xA;CREATE TABLE official_shifts (&#xA;    id INT PRIMARY KEY,&#xA;    volunteer_name TEXT,&#xA;    role TEXT,&#xA;    shift_time TEXT,&#xA;    age_group TEXT,&#xA;    code TEXT&#xA;);&#xA;CREATE TABLE last_minute_signups (&#xA;    id INT PRIMARY KEY,&#xA;    volunteer_name TEXT,&#xA;    assigned_task TEXT,&#xA;    time_slot TEXT&#xA;);&#xA;&#xA;sqlite&gt; SELECT * FROM official_shifts LIMIT 15;&#xA;1|Jude Thompson|choir_assistant|12:00 PM|senior|&#xA;2|Mateo Cruz|choir_assistant|12:00 PM|senior|&#xA;3|Olivia Dubois|choir_assistant|2:00 PM|teen|A1&#xA;4|Jeff Bezos|choir_assistant|10:00 AM|adult|X7&#xA;5|Kian Rahimi|stage_setup|12:00 PM|adult|X7&#xA;6|Haruto Sato|cocoa_station|10:00 AM|adult|X7&#xA;7|Uma Singh|parking_support|10:00 AM|adult|&#xA;8|Owen Scott|parking_support|10:00 AM|adult|X7&#xA;9|Adil Rahman|stage_setup|2:00 PM|adult|A1&#xA;10|Aaron Diaz|choir_assistant|2:00 PM|senior|X7&#xA;11|Carter Lewis|cocoa_station|10:00 AM|senior|B2&#xA;12|Anya Pavlov|stage_setup|10:00 AM|senior|OLD&#xA;13|Ethan Brown|stage_setup|2:00 PM|adult|A1&#xA;14|Lucia Fernandez|choir_assistant|12:00 PM|senior|X7&#xA;15|Casey Morgan|choir_assistant|12:00 PM|teen|OLD&#xA;&#xA;sqlite&gt; .mode table &#xA;sqlite&gt; SELECT * FROM official_shifts LIMIT 15;&#xA;+----+-----------------+-----------------+------------+-----------+------+&#xA;| id | volunteer_name  |      role       | shift_time | age_group | code 
|&#xA;+----+-----------------+-----------------+------------+-----------+------+&#xA;| 1  | Jude Thompson   | choir_assistant | 12:00 PM   | senior    |      |&#xA;| 2  | Mateo Cruz      | choir_assistant | 12:00 PM   | senior    |      |&#xA;| 3  | Olivia Dubois   | choir_assistant | 2:00 PM    | teen      | A1   |&#xA;| 4  | Jeff Bezos      | choir_assistant | 10:00 AM   | adult     | X7   |&#xA;| 5  | Kian Rahimi     | stage_setup     | 12:00 PM   | adult     | X7   |&#xA;| 6  | Haruto Sato     | cocoa_station   | 10:00 AM   | adult     | X7   |&#xA;| 7  | Uma Singh       | parking_support | 10:00 AM   | adult     |      |&#xA;| 8  | Owen Scott      | parking_support | 10:00 AM   | adult     | X7   |&#xA;| 9  | Adil Rahman     | stage_setup     | 2:00 PM    | adult     | A1   |&#xA;| 10 | Aaron Diaz      | choir_assistant | 2:00 PM    | senior    | X7   |&#xA;| 11 | Carter Lewis    | cocoa_station   | 10:00 AM   | senior    | B2   |&#xA;| 12 | Anya Pavlov     | stage_setup     | 10:00 AM   | senior    | OLD  |&#xA;| 13 | Ethan Brown     | stage_setup     | 2:00 PM    | adult     | A1   |&#xA;| 14 | Lucia Fernandez | choir_assistant | 12:00 PM   | senior    | X7   |&#xA;| 15 | Casey Morgan    | choir_assistant | 12:00 PM   | teen      | OLD  |&#xA;+----+-----------------+-----------------+------------+-----------+------+&#xA;&#xA;sqlite&gt; SELECT * FROM last_minute_signups LIMIT 15;&#xA;+----+----------------+-----------------+-----------+&#xA;| id | volunteer_name |  assigned_task  | time_slot |&#xA;+----+----------------+-----------------+-----------+&#xA;| 1  | Jude Thompson  | Choir           | noon      |&#xA;| 2  | Mateo Cruz     | choir           | noon      |&#xA;| 3  | Olivia Dubois  | choir           | 2 PM      |&#xA;| 4  | Jeff Bezos     | choir assistant | 10AM      |&#xA;| 5  | Kian Rahimi    | stage setup     | noon      |&#xA;| 6  | Haruto Sato    | cocoa station   | 10AM      |&#xA;| 7  | Uma Singh      | parking_support | 10AM      |&#xA;| 8  | 
Owen Scott     | parking         | 10AM      |&#xA;| 9  | Adil Rahman    | Stage-Setup     | 2 PM      |&#xA;| 10 | Aaron Diaz     | Choir           | 2 PM      |&#xA;| 11 | Carter Lewis   | Cocoa Station   | 10AM      |&#xA;| 12 | Anya Pavlov    | stage_setup     | 10AM      |&#xA;| 13 | Olivia Brown   | stage setup     | 2 PM      |&#xA;| 14 | Lena Fischer   | cocoa station   | 2 pm      |&#xA;| 15 | Nolan Murphy   | parking-support | 10AM      |&#xA;+----+----------------+-----------------+-----------+&#xA;&#xA;sqlite&gt; SELECT count(*) FROM official_shifts;&#xA;+----------+&#xA;| count(*) |&#xA;+----------+&#xA;| 250      |&#xA;+----------+&#xA;&#xA;sqlite&gt; SELECT count(*) FROM last_minute_signups;&#xA;+----------+&#xA;| count(*) |&#xA;+----------+&#xA;| 126      |&#xA;+----------+&#xA;sqlite&gt; &#xA;```&#xA;&#xA;## Problem&#xA;&#xA;Here&#39;s the challenge for day 4:&#xA;&#xA;&gt; Using the official_shifts and last_minute_signups tables, create a combined de-duplicated volunteer list.&#xA;&gt; &#xA;&gt; Ensure the list has standardized role labels of Stage Setup, Cocoa Station, Parking Support, Choir Assistant, Snow Shoveling, Handwarmer Handout.&#xA;&gt; &#xA;&gt; Make sure that the timeslot formats follow John&#39;s official shifts format.&#xA;&#xA;What we have here is an official shifts table, which could have been entered through the system. However, the `last_minute_signups` table is messy and has been added from a sheet, so we need to clean it up and combine the two tables&#39; data into a single de-duplicated list of volunteers.&#xA;&#xA;Let&#39;s see what we&#39;ve got.&#xA;&#xA;```sql&#xA;SELECT * FROM last_minute_signups;&#xA;```&#xA;&#xA;Ok, we have 126 records and the columns are:&#xA;1. `volunteer_name`, which doesn&#39;t look bad&#xA;2. `assigned_task`, which looks wonky&#xA;3. 
`time_slot` is just wild; we have weird definitions of times there.&#xA;&#xA;And let&#39;s look at the `official_shifts`:&#xA;&#xA;```sql&#xA;SELECT * FROM official_shifts;&#xA;```&#xA;&#xA;This looks neat and tidy; nothing looks off.&#xA;&#xA;So we need to make sure we are cleaning up the `last_minute_signups` before we merge them.&#xA;&#xA;```sql&#xA;SELECT DISTINCT assigned_task FROM last_minute_signups;&#xA;```&#xA;Ok, so casing is one thing we can see, `-` and ` ` space are the things to normalize, and then there are some inconsistent naming conventions like `choir` and `choir assistant`, then `parking_support` and `parking`. We need to clean &#39;em up.&#xA;&#xA;We can pattern-match them with `LIKE` in a `CASE` expression:&#xA;```sql&#xA;SELECT &#xA;    id,&#xA;    volunteer_name,&#xA;    time_slot,&#xA;    CASE &#xA;        WHEN assigned_task LIKE &#39;choir%&#39; THEN &#39;choir_assistant&#39;&#xA;        WHEN assigned_task LIKE &#39;stage%&#39; THEN &#39;stage_setup&#39;&#xA;        WHEN assigned_task LIKE &#39;%cocoa%&#39; THEN &#39;cocoa_station&#39;&#xA;        WHEN assigned_task LIKE &#39;parking%&#39; THEN &#39;parking_support&#39;&#xA;        WHEN assigned_task LIKE &#39;hand%&#39; THEN &#39;hand_warmer&#39;&#xA;        WHEN assigned_task LIKE &#39;%shovel%&#39; THEN &#39;snow_showel&#39;&#xA;    END AS assigned_task&#xA;FROM last_minute_signups;&#xA;```&#xA;&#xA;We just do a case match for:&#xA;- `LIKE &#39;choir%&#39;` which will match any case (`Choir`, `choir`) and also anything after `choir...` like `choir assistant`.&#xA;- `LIKE &#39;stage%&#39;` which will match any case (`Stage`, `stage`) and also anything after `stage...` like `Stage-Setup`, `stage    setup` or `stage_setup`.&#xA;- `LIKE &#39;%cocoa%&#39;` which will match any case (`Cocoa`, `cocoa`) and also anything before or after `...cocoa...` like `Cocoa Station`, `cocoa station`, etc.&#xA;- `LIKE &#39;parking%&#39;` which will match any case (`Parking`, `parking`) and also anything after `parking...` like `parking-support` 
or `parking_support`, etc.&#xA;- `LIKE &#39;hand%&#39;` which will match any case (`Hand`, `hand`) and also anything after `hand...` like `handwarmer handout`, `handwarmers`, `Handwarmer-Handout`, etc.&#xA;- `LIKE &#39;%shovel%&#39;` which will match any case (`Shovel`, `shovel`) and also anything before and after `...shovel...`  like `Snow-Shoveling`, `shovel`, `snow shoveling`, etc.&#xA;&#xA;Ok, now this looks unified for the `assigned_task`.&#xA;&#xA;```&#xA;sqlite&gt; SELECT &#xA;    id,&#xA;    volunteer_name,&#xA;    time_slot,&#xA;    CASE &#xA;        WHEN assigned_task LIKE &#39;choir%&#39; THEN &#39;choir_assistant&#39;&#xA;        WHEN assigned_task LIKE &#39;stage%&#39; THEN &#39;stage_setup&#39;&#xA;        WHEN assigned_task LIKE &#39;%cocoa%&#39; THEN &#39;cocoa_station&#39;&#xA;        WHEN assigned_task LIKE &#39;parking%&#39; THEN &#39;parking_support&#39;&#xA;        WHEN assigned_task LIKE &#39;hand%&#39; THEN &#39;hand_warmer&#39;&#xA;        WHEN assigned_task LIKE &#39;%shovel%&#39; THEN &#39;snow_showel&#39; &#xA;    END AS assigned_task &#xA;FROM last_minute_signups;&#xA;&#xA;+-----+-------------------+-----------+-----------------+&#xA;| id  |  volunteer_name   | time_slot |  assigned_task  |&#xA;+-----+-------------------+-----------+-----------------+&#xA;| 1   | Jude Thompson     | noon      | choir_assistant |&#xA;| 2   | Mateo Cruz        | noon      | choir_assistant |&#xA;| 3   | Olivia Dubois     | 2 PM      | choir_assistant |&#xA;| 4   | Jeff Bezos        | 10AM      | choir_assistant |&#xA;| 5   | Kian Rahimi       | noon      | stage_setup     |&#xA;| 6   | Haruto Sato       | 10AM      | cocoa_station   |&#xA;| 7   | Uma Singh         | 10AM      | parking_support |&#xA;| 8   | Owen Scott        | 10AM      | parking_support |&#xA;| 9   | Adil Rahman       | 2 PM      | stage_setup     |&#xA;| 10  | Aaron Diaz        | 2 PM      | choir_assistant |&#xA;| 11  | Carter Lewis      | 10AM      | cocoa_station   |&#xA;| 12  | Anya 
Pavlov       | 10AM      | stage_setup     |&#xA;| 13  | Olivia Brown      | 2 PM      | stage_setup     |&#xA;| 14  | Lena Fischer      | 2 pm      | cocoa_station   |&#xA;| 15  | Nolan Murphy      | 10AM      | parking_support |&#xA;+-----+-------------------+-----------+-----------------+&#xA;&#xA;```&#xA;&#xA;We need to do the same for the time slot too.&#xA;&#xA;```sql&#xA;SELECT DISTINCT time_slot FROM last_minute_signups;&#xA;```&#xA;&#xA;```&#xA;sqlite&gt; SELECT DISTINCT time_slot FROM last_minute_signups;&#xA;+-----------+&#xA;| time_slot |&#xA;+-----------+&#xA;| noon      |&#xA;| 2 PM      |&#xA;| 10AM      |&#xA;| 2 pm      |&#xA;| 10 am     |&#xA;+-----------+&#xA;sqlite&gt; &#xA;```&#xA;&#xA;Since we have to follow the `official_shifts` format, let&#39;s check over there.&#xA;```sql&#xA;SELECT DISTINCT shift_time FROM official_shifts;&#xA;```&#xA;&#xA;```&#xA;sqlite&gt; SELECT distinct shift_time FROM official_shifts;&#xA;+------------+&#xA;| shift_time |&#xA;+------------+&#xA;| 12:00 PM   |&#xA;| 2:00 PM    |&#xA;| 10:00 AM   |&#xA;+------------+&#xA;```&#xA;&#xA;Ok, we have only 3 target times to map to.&#xA;&#xA;Alright, we have some small things to standardize.&#xA;&#xA;```sql&#xA;SELECT &#xA;    DISTINCT CASE &#xA;        WHEN time_slot LIKE &#39;2%&#39; THEN &#39;2:00 PM&#39;&#xA;        WHEN time_slot LIKE &#39;noon&#39; THEN &#39;12:00 PM&#39;&#xA;        WHEN time_slot LIKE &#39;10%&#39; THEN &#39;10:00 AM&#39;&#xA;    END AS time_slot &#xA;FROM last_minute_signups;&#xA;```&#xA;&#xA;```&#xA;sqlite&gt; SELECT &#xA;    DISTINCT CASE &#xA;        WHEN time_slot LIKE &#39;2%&#39; THEN &#39;2:00 PM&#39;&#xA;        WHEN time_slot LIKE &#39;10%&#39; THEN &#39;10:00 AM&#39;&#xA;        WHEN time_slot LIKE &#39;noon&#39; THEN &#39;12:00 PM&#39;&#xA;    END AS time_slot &#xA;FROM last_minute_signups;&#xA;&#xA;+-----------+&#xA;| time_slot |&#xA;+-----------+&#xA;| 12:00 PM  |&#xA;| 2:00 PM   |&#xA;| 10:00 AM  |&#xA;+-----------+&#xA;sqlite&gt; 
&#xA;```&#xA;&#xA;So, we have simply standardized the time_slots.&#xA;&#xA;- `LIKE &#39;2%&#39;` will match any case, and we need `LIKE` so the `%` can match the rest of the string after `2`.&#xA;- `LIKE &#39;10%&#39;` will match any case, and we need `LIKE` so the `%` can match the rest of the string after `10`. We can&#39;t shorten it to `LIKE &#39;1%&#39;` as that would match times like `1:00` as well.&#xA;- `LIKE &#39;noon&#39;` will match any case of `noon` like `NOON` or `Noon`, etc. And we need to map it to the `HH:MM AM/PM` format.&#xA;&#xA;So, now we can combine them.&#xA;&#xA;```sql&#xA;SELECT id, volunteer_name,&#xA;    CASE&#xA;        WHEN assigned_task LIKE &#39;choir%&#39; THEN &#39;choir_assistant&#39;&#xA;        WHEN assigned_task LIKE &#39;stage%&#39; THEN &#39;stage_setup&#39;&#xA;        WHEN assigned_task LIKE &#39;%cocoa%&#39; THEN &#39;cocoa_station&#39;&#xA;        WHEN assigned_task LIKE &#39;parking%&#39; THEN &#39;parking_support&#39;&#xA;        WHEN assigned_task LIKE &#39;hand%&#39; THEN &#39;hand_warmer&#39;&#xA;        WHEN assigned_task LIKE &#39;%shovel%&#39; THEN &#39;snow_showel&#39;&#xA;    END AS role, &#xA;    CASE&#xA;        WHEN time_slot LIKE &#39;2%&#39; THEN &#39;2:00 PM&#39;&#xA;        WHEN time_slot LIKE &#39;noon&#39; THEN &#39;12:00 PM&#39;&#xA;        WHEN time_slot LIKE &#39;10%&#39; THEN &#39;10:00 AM&#39;&#xA;    END AS shift_time &#xA;FROM last_minute_signups;&#xA;```&#xA;Just changed the column names from `assigned_task` to `role` and `time_slot` to `shift_time` as per the naming convention in the `official_shifts` table.&#xA;Phew! 
it&#39;s a long statement.&#xA;&#xA;```&#xA;sqlite&gt; SELECT id, volunteer_name, CASE&#xA;        WHEN assigned_task LIKE &#39;choir%&#39; THEN &#39;choir_assistant&#39;&#xA;        WHEN assigned_task LIKE &#39;stage%&#39; THEN &#39;stage_setup&#39;&#xA;        WHEN assigned_task LIKE &#39;%cocoa%&#39; THEN &#39;cocoa_station&#39;&#xA;        WHEN assigned_task LIKE &#39;parking%&#39; THEN &#39;parking_support&#39;&#xA;        WHEN assigned_task LIKE &#39;hand%&#39; THEN &#39;hand_warmer&#39;&#xA;        WHEN assigned_task LIKE &#39;%shovel%&#39; THEN &#39;snow_showel&#39; &#xA;    END AS assigned_task, CASE WHEN time_slot LIKE &#39;2%&#39; THEN &#39;2:00 PM&#39; WHEN time_slot LIKE &#39;noon&#39; THEN &#39;12:00 PM&#39; WHEN time_slot LIKE &#39;10%&#39; THEN &#39;10:00 AM&#39; END AS time_slot FROM last_minute_signups;&#xA;&#xA;+-----+-------------------+-----------------+-----------+&#xA;| id  |  volunteer_name   |  assigned_task  | time_slot |&#xA;+-----+-------------------+-----------------+-----------+&#xA;| 1   | Jude Thompson     | choir_assistant | 12:00 PM  |&#xA;| 2   | Mateo Cruz        | choir_assistant | 12:00 PM  |&#xA;| 3   | Olivia Dubois     | choir_assistant | 2:00 PM   |&#xA;| 4   | Jeff Bezos        | choir_assistant | 10:00 AM  |&#xA;| 5   | Kian Rahimi       | stage_setup     | 12:00 PM  |&#xA;| 6   | Haruto Sato       | cocoa_station   | 10:00 AM  |&#xA;| 7   | Uma Singh         | parking_support | 10:00 AM  |&#xA;| 8   | Owen Scott        | parking_support | 10:00 AM  |&#xA;| 9   | Adil Rahman       | stage_setup     | 2:00 PM   |&#xA;| 10  | Aaron Diaz        | choir_assistant | 2:00 PM   |&#xA;| 11  | Carter Lewis      | cocoa_station   | 10:00 AM  |&#xA;| 12  | Anya Pavlov       | stage_setup     | 10:00 AM  |&#xA;| 13  | Olivia Brown      | stage_setup     | 2:00 PM   |&#xA;| 14  | Lena Fischer     
 | cocoa_station   | 2:00 PM   |&#xA;| 15  | Nolan Murphy      | parking_support | 10:00 AM  |&#xA;+-----+-------------------+-----------------+-----------+&#xA;```&#xA;&#xA;So, now we have the `last_minute_signups` table cleaned up, just with a `SELECT`; we could also `UPDATE` the rows if needed.&#xA;&#xA;We now need to combine both tables, the cleaned-up `last_minute_signups` and the `official_shifts`; we can use `UNION` to take out the duplicates from the two selections.&#xA;REMEMBER to list the columns in the same order in both `SELECT`s:&#xA;- volunteer_name&#xA;- role&#xA;- shift_time&#xA;&#xA;I don&#39;t think the column names need to be the same, but I am keeping them the same for clarity.&#xA;&#xA;Why `UNION`?&#xA;- Because we have data in both the tables.&#xA;- There is no relation between the two tables; they hold the same kind of data, just with columns that are not cleaned or in a proper format.&#xA;- We want to grab all the rows from one table, all the rows from the other, and remove the duplicates; that&#39;s the definition of `UNION`.&#xA;&#xA;We can&#39;t use `UNION ALL` as it would include all the rows from both the tables without removing duplicates.&#xA;&#xA;```sql&#xA;SELECT volunteer_name, &#xA;    CASE &#xA;        WHEN assigned_task LIKE &#39;choir%&#39; THEN &#39;choir_assistant&#39;&#xA;        WHEN assigned_task LIKE &#39;stage%&#39; THEN &#39;stage_setup&#39;&#xA;        WHEN assigned_task LIKE &#39;%cocoa%&#39; THEN &#39;cocoa_station&#39;&#xA;        WHEN assigned_task LIKE &#39;parking%&#39; THEN &#39;parking_support&#39;&#xA;        WHEN assigned_task LIKE &#39;hand%&#39; THEN &#39;hand_warmer&#39;&#xA;        WHEN assigned_task LIKE &#39;%shovel%&#39; THEN &#39;snow_showel&#39; &#xA;    END AS assigned_task,&#xA;    CASE &#xA;        WHEN time_slot LIKE &#39;2%&#39; THEN &#39;2:00 PM&#39;&#xA;        WHEN time_slot LIKE &#39;noon&#39; THEN &#39;12:00 PM&#39;&#xA;        WHEN time_slot LIKE &#39;10%&#39; THEN &#39;10:00 AM&#39;&#xA;    END AS time_slot&#xA;FROM last_minute_signups &#xA;UNION  
&#xA;SELECT &#xA;    volunteer_name,&#xA;    role,&#xA;    shift_time&#xA;FROM official_shifts &#xA;ORDER BY volunteer_name;&#xA;&#xA;```&#xA;&#xA;Ok, that is a mess, isn&#39;t it?&#xA;&#xA;```&#xA;sqlite&gt; SELECT volunteer_name, &#xA;    CASE &#xA;        WHEN assigned_task LIKE &#39;choir%&#39; THEN &#39;choir_assistant&#39;&#xA;        WHEN assigned_task LIKE &#39;stage%&#39; THEN &#39;stage_setup&#39;&#xA;        WHEN assigned_task LIKE &#39;%cocoa%&#39; THEN &#39;cocoa_station&#39;&#xA;        WHEN assigned_task LIKE &#39;parking%&#39; THEN &#39;parking_support&#39;&#xA;        WHEN assigned_task LIKE &#39;hand%&#39; THEN &#39;hand_warmer&#39;&#xA;        WHEN assigned_task LIKE &#39;%shovel%&#39; THEN &#39;snow_showel&#39; &#xA;    END AS assigned_task,&#xA;    CASE &#xA;        WHEN time_slot LIKE &#39;2%&#39; THEN &#39;2:00 PM&#39;&#xA;        WHEN time_slot LIKE &#39;noon&#39; THEN &#39;12:00 PM&#39;&#xA;        WHEN time_slot LIKE &#39;10%&#39; THEN &#39;10:00 AM&#39;&#xA;    END AS time_slot&#xA;FROM last_minute_signups &#xA;UNION &#xA;SELECT &#xA;    volunteer_name,&#xA;    role,&#xA;    shift_time&#xA;FROM official_shifts &#xA;ORDER BY volunteer_name;&#xA;&#xA;+-------------------+-----------------+-----------+&#xA;|  volunteer_name   |  assigned_task  | time_slot |&#xA;+-------------------+-----------------+-----------+&#xA;| Aaron Carter      | parking_support | 2:00 PM   |&#xA;| Aaron Diaz        | choir_assistant | 2:00 PM   |&#xA;| Aaron Diaz        | choir_assistant | 2:00 PM   |&#xA;| Aaron Evans       | cocoa_station   | 2:00 PM   |&#xA;| Aaron Francis     | hand_warmer     | 2:00 PM   |&#xA;| Abigail Hernandez | stage_setup     | 10:00 AM  |&#xA;| Adam King         | stage_setup     | 10:00 AM  |&#xA;| Adil Foster       | stage_setup     | 2:00 PM   |&#xA;| Adil Rahman       | stage_setup     | 2:00 PM   |&#xA;| Adil Rahman       | stage_setup     | 2:00 PM   |&#xA;| Adrian Cox        | cocoa_station   | 10:00 AM  |&#xA;| Aisha Bennett     | 
cocoa_station   | 12:00 PM  |&#xA;| Aisha Khan        | choir_assistant | 12:00 PM  |&#xA;| Aisha Khan        | choir_assistant | 12:00 PM  |&#xA;| Aisha Mohammed    | cocoa_station   | 2:00 PM   |&#xA;+-------------------+-----------------+-----------+&#xA;```&#xA;&#xA;There we have it.&#xA;&#xA;Let&#39;s count the number of distinct volunteers in the shifts.&#xA;&#xA;```sql&#xA;SELECT COUNT(*) FROM (SELECT volunteer_name, role, shift_time FROM official_shifts UNION  SELECT volunteer_name, CASE &#xA;        WHEN assigned_task LIKE &#39;%choir%&#39; THEN &#39;choir_assistant&#39;&#xA;        WHEN assigned_task LIKE &#39;%stage%&#39; THEN &#39;stage_setup&#39;&#xA;        WHEN assigned_task LIKE &#39;%cocoa%&#39; THEN &#39;cocoa_station&#39;&#xA;        WHEN assigned_task LIKE &#39;%parking%&#39; THEN &#39;parking_support&#39;&#xA;        WHEN assigned_task LIKE &#39;%hand%&#39; THEN &#39;hand_warmer&#39;&#xA;        WHEN assigned_task LIKE &#39;%shovel%&#39; THEN &#39;snow_showel&#39; &#xA;    END AS role, CASE WHEN time_slot LIKE &#39;2%&#39; THEN &#39;2:00 PM&#39; WHEN time_slot LIKE &#39;noon&#39; THEN &#39;12:00 PM&#39; WHEN time_slot LIKE &#39;10%&#39; THEN &#39;10:00 AM&#39; END AS shift_time FROM last_minute_signups) AS volunteers;&#xA;```&#xA;&#xA;Just wrapped the full union in a count, i.e. `SELECT COUNT(*) FROM (&lt;THE FULL QUERY&gt;) AS volunteers`.&#xA;&#xA;```&#xA;sqlite&gt; SELECT COUNT(*) FROM (SELECT volunteer_name, role, shift_time FROM official_shifts UNION  SELECT volunteer_name, CASE &#xA;        WHEN assigned_task LIKE &#39;%choir%&#39; THEN &#39;choir_assistant&#39;&#xA;        WHEN assigned_task LIKE &#39;%stage%&#39; THEN &#39;stage_setup&#39;&#xA;        WHEN assigned_task LIKE &#39;%cocoa%&#39; THEN &#39;cocoa_station&#39;&#xA;        WHEN assigned_task LIKE &#39;%parking%&#39; THEN &#39;parking_support&#39;&#xA;        WHEN assigned_task LIKE &#39;%hand%&#39; THEN &#39;hand_warmer&#39;&#xA;        WHEN assigned_task LIKE 
&#39;%shovel%&#39; THEN &#39;snow_showel&#39; &#xA;    END AS role, CASE WHEN time_slot LIKE &#39;2%&#39; THEN &#39;2:00 PM&#39; WHEN time_slot LIKE &#39;noon&#39; THEN &#39;12:00 PM&#39; WHEN time_slot LIKE &#39;10%&#39; THEN &#39;10:00 AM&#39; END AS shift_time FROM last_minute_signups) AS volunteers;&#xA;&#xA;+------------+&#xA;| volunteers |&#xA;+------------+&#xA;| 284        |&#xA;+------------+&#xA;sqlite&gt; &#xA;&#xA;```&#xA;So, we have exactly `284` distinct rows. Looks good.&#xA;&#xA;Pinebrook can see the volunteer list now. The cleaned one.&#xA;&#xA;Off to day 5!&#xA;</content>
      <type>sqlog</type>
    </item>
    <item>
      <title>Advent of SQL 2025 Day 3: Hotline Messages</title>
      <link>https://meetgor.com/sqlog/advent-of-sql-2025-day-3</link>
      <description>Advent of SQL Day 3 - Hotline Messages This is day 3 from the Advent of SQL Grab the SQL Statements Let&#39;s take the insert statements i.e. to create and populate</description>
      <pubDate>Thu, 18 Dec 2025 00:00:00 UTC</pubDate>
      <content>&#xA;&#xA;## Advent of SQL Day 3 - Hotline Messages&#xA;&#xA;This is day 3 of the Advent of SQL.&#xA;&#xA;## Grab the SQL Statements&#xA;&#xA;Let&#39;s grab the insert statements, i.e. the SQL that creates the tables and populates the rows. I am using SQLite.&#xA;&#xA;It works without any special shenanigans; it was intended to be used with Postgres, but the table and use case look very simple, so nothing Postgres-specific is used yet! We are good!&#xA;&#xA;Here is the SQL setup, if you want to play with it in the playground:&#xA;&#xA;```sql&#xA;DROP TABLE IF EXISTS hotline_messages;&#xA;&#xA;CREATE TABLE hotline_messages (&#xA;    id INT PRIMARY KEY,&#xA;    caller_name TEXT,&#xA;    transcript TEXT,&#xA;    tag TEXT,&#xA;    status TEXT&#xA;);&#xA;&#xA;INSERT INTO hotline_messages (id, caller_name, transcript, tag, status) VALUES&#xA;    (1, &#39;Saanvi A.&#39;, &#39;I just found a refrigerator portal that leads to a disco party hosted by dancing llamas—please send help!&#39;, &#39;possible dragon&#39;, &#39;spam&#39;),&#xA;    (2, &#39;Fatima Q.&#39;, &#39;Hi Santa, I would love a magical unicorn that lights up at night!&#39;, &#39;wish list&#39;, NULL),&#xA;    (3, &#39;Lillian Z.&#39;, &#39;Hi Santa, I would love the magical fairy garden set, please!&#39;, &#39;wish list&#39;, &#39;approved&#39;),&#xA;    (4, &#39;Carter Y.&#39;, &#39;Thank you, Santa, for making Christmas so special with your wonderful spirit!&#39;, &#39;thank you&#39;, &#39;approved&#39;),&#xA;    (5, &#39;Omar R.&#39;, &#39;Hi Santa, I would love a rainbow unicorn plushie that has a glittery horn!&#39;, &#39;wish list&#39;, &#39;approved&#39;),&#xA;    (6, &#39;Diego Y.&#39;, &#39;Hi Santa, I would love a magical unicorn plushie that glows in the dark!&#39;, &#39;wish list&#39;, NULL),&#xA;    (7, &#39;Layla X.&#39;, &#39;Thank you, Santa, for spreading joy and magic every Christmas!&#39;, &#39;thank you&#39;, NULL),&#xA;    (8, &#39;Sophia K.&#39;, &#39;Santa, my cat said 
she wants to visit the candy cane forest next week.&#39;, NULL, NULL),&#xA;    (9, &#39;Eli H.&#39;, &#39;Hi Santa, I would love the magical fairy castle with twinkling lights!&#39;, &#39;wish list&#39;, &#39;approved&#39;),&#xA;    (10, &#39;Logan F.&#39;, &#39;Santa, I think the reindeer are starting a band with the garden gnomes.&#39;, &#39;needs clarification&#39;, NULL),&#xA;    (11, &#39;Carlos P.&#39;, &#39;Thank you, Santa, for making Christmas so special every year!&#39;, &#39;thank you&#39;, NULL),&#xA;    (12, &#39;Zain G.&#39;, &#39;Thank you, Santa, for bringing joy to all the children around the world!&#39;, &#39;thank you&#39;, NULL),&#xA;    (13, &#39;Haruto R.&#39;, &#39;Thank you, Santa, for spreading so much joy and magic every Christmas!&#39;, &#39;thank you&#39;, &#39;approved&#39;),&#xA;    (14, &#39;Oliver L.&#39;, &#39;Thank you, Santa, for spreading joy and making Christmas extra special!&#39;, &#39;thank you&#39;, NULL),&#xA;    (15, &#39;Luca M.&#39;, &#39;Hi Santa, could I please have the super cool glow-in-the-dark rocket ship?&#39;, &#39;wish list&#39;, NULL),&#xA;    (16, &#39;Samuel C.&#39;, &#39;sorry, Santa, my teddy bear said he wants to be a reindeer this year.&#39;, &#39;needs clarification&#39;, NULL);&#xA;```&#xA;&#xA;Here&#39;s the setup I did to check the data.&#xA;&#xA;```sql&#xA;SELECT * FROM hotline_messages;&#xA;```&#xA;&#xA;```plaintext&#xA;$ sqlite3&#xA;SQLite version 3.45.1 2024-01-30 16:01:20&#xA;Enter &#34;.help&#34; for usage hints.&#xA;Connected to a transient in-memory database.&#xA;Use &#34;.open FILENAME&#34; to reopen on a persistent database.&#xA;&#xA;sqlite&gt; .read day3-inserts.sql&#xA;&#xA;sqlite&gt; .schema&#xA;CREATE TABLE hotline_messages (&#xA;    id INT PRIMARY KEY,&#xA;    caller_name TEXT,&#xA;    transcript TEXT,&#xA;    tag TEXT,&#xA;    status TEXT&#xA;);&#xA;&#xA;sqlite&gt; .mode table&#xA;sqlite&gt; SELECT * FROM hotline_messages LIMIT 
10;&#xA;+----+-------------+--------------------------------------------------------------+---------------------+----------+&#xA;| id | caller_name |                          transcript                          |         tag         |  status  |&#xA;+----+-------------+--------------------------------------------------------------+---------------------+----------+&#xA;| 1  | Saanvi A.   | I just found a refrigerator portal that leads to a disco par | possible dragon     | spam     |&#xA;|    |             | ty hosted by dancing llamas—please send help!                |                     |          |&#xA;+----+-------------+--------------------------------------------------------------+---------------------+----------+&#xA;| 2  | Fatima Q.   | Hi Santa, I would love a magical unicorn that lights up at n | wish list           |          |&#xA;|    |             | ight!                                                        |                     |          |&#xA;+----+-------------+--------------------------------------------------------------+---------------------+----------+&#xA;| 3  | Lillian Z.  | Hi Santa, I would love the magical fairy garden set, please! | wish list           | approved |&#xA;+----+-------------+--------------------------------------------------------------+---------------------+----------+&#xA;| 4  | Carter Y.   | Thank you, Santa, for making Christmas so special with your  | thank you           | approved |&#xA;|    |             | wonderful spirit!                                            |                     |          |&#xA;+----+-------------+--------------------------------------------------------------+---------------------+----------+&#xA;| 5  | Omar R.     | Hi Santa, I would love a rainbow unicorn plushie that has a  | wish list           | approved |&#xA;|    |             | glittery horn!                                               
|                     |          |&#xA;+----+-------------+--------------------------------------------------------------+---------------------+----------+&#xA;| 6  | Diego Y.    | Hi Santa, I would love a magical unicorn plushie that glows  | wish list           |          |&#xA;|    |             | in the dark!                                                 |                     |          |&#xA;+----+-------------+--------------------------------------------------------------+---------------------+----------+&#xA;| 7  | Layla X.    | Thank you, Santa, for spreading joy and magic every Christma | thank you           |          |&#xA;|    |             | s!                                                           |                     |          |&#xA;+----+-------------+--------------------------------------------------------------+---------------------+----------+&#xA;| 8  | Sophia K.   | Santa, my cat said she wants to visit the candy cane forest  |                     |          |&#xA;|    |             | next week.                                                   |                     |          |&#xA;+----+-------------+--------------------------------------------------------------+---------------------+----------+&#xA;| 9  | Eli H.      | Hi Santa, I would love the magical fairy castle with twinkli | wish list           | approved |&#xA;|    |             | ng lights!                                                   |                     |          |&#xA;+----+-------------+--------------------------------------------------------------+---------------------+----------+&#xA;| 10 | Logan F.    | Santa, I think the reindeer are starting a band with the gar | needs clarification |          |&#xA;|    |             | den gnomes.                                                  
|                     |          |&#xA;+----+-------------+--------------------------------------------------------------+---------------------+----------+&#xA;sqlite&gt; &#xA;```&#xA;&#xA;So, we just have one table, called `hotline_messages`, and it has a few columns:&#xA;&#xA;1. `caller_name`&#xA;    &#xA;2. `transcript`&#xA;    &#xA;3. `tag`&#xA;    &#xA;4. `status`&#xA;    &#xA;&#xA;What do we want to do with those?&#xA;&#xA;Well! Let&#39;s get into the problem statement.&#xA;&#xA;## Problem&#xA;&#xA;Here goes the challenge for day 3:&#xA;&#xA;&gt; Using the `hotline_messages` table, update any record that has &#34;sorry&#34; (case insensitive) in the transcript and doesn&#39;t currently have a status assigned to have a status of &#34;approved&#34;.&#xA;&gt; &#xA;&gt; Then delete any records where the tag is &#34;penguin prank&#34;, &#34;time-loop advisory&#34;, &#34;possible dragon&#34;, or &#34;nonsense alert&#34; or if the caller&#39;s name is &#34;Test Caller&#34;.&#xA;&gt; &#xA;&gt; After updating and deleting the records as described, write a final query that returns how many messages currently have a status of &#34;approved&#34; and how many still need to be reviewed (i.e., status is NULL).&#xA;&#xA;It&#39;s divided into 3 parts, so we need three queries? I don&#39;t want a single long query doing all of this, and after re-reading it, it really shouldn&#39;t be a single query anyway: an update, a delete, and then a final selection after those changes.&#xA;&#xA;So, we have to do three things.&#xA;&#xA;1. Find the records which have `sorry` in the transcript text and mark their `status` as `approved` (What a lovely gesture)&#xA;    &#xA;2. Find all records whose tag is either `penguin prank`, `time-loop advisory`, `possible dragon`, or `nonsense alert`, or whose `caller_name` is `Test Caller`, and delete those records, yes, take &#39;em out of my way.&#xA;    &#xA;3. 
After doing those 2 things, we have to count the number of records with `status` as `approved` and the number of records that are still not `approved` (they are in review or the status is `NULL` )&#xA;    &#xA;&#xA;So, let&#39;s do them step by step.&#xA;&#xA;### Be generous&#xA;&#xA;Let&#39;s be generous like Santa says and mark the records with `status` as `approved` whose transcript has the word `sorry` in it. Let those children be gifted their reward of being generous and humble.&#xA;&#xA;How do we do that in SQL? Well, let&#39;s first look at what we are updating.&#xA;&#xA;```sql&#xA;SELECT * FROM hotline_messages WHERE transcript LIKE &#39;%sorry%&#39;;&#xA;```&#xA;&#xA;So, will it be sufficient? I think so.&#xA;&#xA;Because&#xA;&#xA;* `LIKE` is **case insensitive** in SQLite by default (for ASCII characters), so it can catch `sorry`, `Sorry`, `SoRRY`, `sorrY`&#xA;    &#xA;* `%` before and after will catch the word `sorry` anywhere in the sentence, not just at the start.&#xA;    &#xA;&#xA;I can see 104 rows selected with this condition. I always check how many rows will be affected before an update or a delete. Because sometimes we start `UPDATE hotline_messages SET status = &#39;approved&#39;` and forget the `WHERE`! And it gets even worse with deletes, believe me!&#xA;&#xA;```sql&#xA;SELECT count(*) FROM hotline_messages WHERE transcript LIKE &#39;%sorry%&#39;;&#xA;SELECT count(*) FROM hotline_messages;&#xA;```&#xA;&#xA;```plaintext&#xA;sqlite&gt; SELECT * FROM hotline_messages WHERE transcript LIKE &#39;%sorry%&#39;;&#xA;+------+--------------+--------------------------------------------------------------+---------------------+----------+&#xA;|  id  | caller_name  |                          transcript                          |         tag         |  status  |&#xA;+------+--------------+--------------------------------------------------------------+---------------------+----------+&#xA;| 16   | Samuel C.    
| sorry, Santa, my teddy bear said he wants to be a reindeer t | needs clarification |          |&#xA;|      |              | his year.                                                    |                     |          |&#xA;+------+--------------+--------------------------------------------------------------+---------------------+----------+&#xA;| 33   | Jacob F.     | sorry, Hi Santa, I would love the magical unicorn plushie th | wish list           |          |&#xA;|      |              | at glows in the dark!                                        |                     |          |&#xA;+------+--------------+--------------------------------------------------------------+---------------------+----------+&#xA;| 46   | Jun Y.       | sorry, Hi Santa, I would love a magical unicorn stuffed anim | wish list           |          |&#xA;|      |              | al that glows in the dark!                                   |                     |          |&#xA;+------+--------------+--------------------------------------------------------------+---------------------+----------+&#xA;&#xA;sqlite&gt; SELECT count(*) FROM hotline_messages WHERE transcript LIKE &#39;%sorry%&#39;;&#xA;+----------+&#xA;| count(*) |&#xA;+----------+&#xA;| 104      |&#xA;+----------+&#xA;sqlite&gt; SELECT count(*) FROM hotline_messages ;--WHERE transcript LIKE &#39;%sorry%&#39;;&#xA;+----------+&#xA;| count(*) |&#xA;+----------+&#xA;| 1067     |&#xA;+----------+&#xA;sqlite&gt; &#xA;```&#xA;&#xA;So, once I know `104` rows will be affected out of `1067` I can create the update statement.&#xA;&#xA;We want to update the status and set it to `approved` for the rows which we selected just now (have `sorry` in the transcript text)&#xA;&#xA;```sql&#xA;UPDATE hotline_messages&#xA;SET status = &#39;approved&#39;&#xA;WHERE transcript LIKE &#39;%sorry%&#39;;&#xA;```&#xA;&#xA;Now, when we select again&#xA;&#xA;```sql&#xA;SELECT * FROM hotline_messages WHERE transcript LIKE &#39;%sorry%&#39;; 
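-- Note (my addition, not from the original run): the challenge asks to update&#xA;-- only records that don&#39;t currently have a status assigned, so a stricter&#xA;-- version of the UPDATE above would also add:&#xA;--   AND status IS NULL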
&#xA;```&#xA;&#xA;All approved!&#xA;&#xA;```plaintext&#xA;sqlite&gt; SELECT * FROM hotline_messages WHERE transcript LIKE &#39;%sorry%&#39;;&#xA;+------+--------------+--------------------------------------------------------------+---------------------+----------+&#xA;|  id  | caller_name  |                          transcript                          |         tag         |  status  |&#xA;+------+--------------+--------------------------------------------------------------+---------------------+----------+&#xA;| 16   | Samuel C.    | sorry, Santa, my teddy bear said he wants to be a reindeer t | needs clarification | approved |&#xA;|      |              | his year.                                                    |                     |          |&#xA;+------+--------------+--------------------------------------------------------------+---------------------+----------+&#xA;| 33   | Jacob F.     | sorry, Hi Santa, I would love the magical unicorn plushie th | wish list           | approved |&#xA;|      |              | at glows in the dark!                                        |                     |          |&#xA;+------+--------------+--------------------------------------------------------------+---------------------+----------+&#xA;| 46   | Jun Y.       | sorry, Hi Santa, I would love a magical unicorn stuffed anim | wish list           | approved |&#xA;|      |              | al that glows in the dark!                                   
|                     |          |&#xA;+------+--------------+--------------------------------------------------------------+---------------------+----------+&#xA;```&#xA;&#xA;To the next step then.&#xA;&#xA;### Remove Spam&#xA;&#xA;To reiterate the second part of the challenge:&#xA;&#xA;&gt; Then delete any records where the tag is &#34;penguin prank&#34;, &#34;time-loop advisory&#34;, &#34;possible dragon&#34;, or &#34;nonsense alert&#34; or if the caller&#39;s name is &#34;Test Caller&#34;.&#xA;&#xA;We basically need to find all records whose tag is either `penguin prank`, `time-loop advisory`, `possible dragon`, or `nonsense alert`, or whose `caller_name` is `Test Caller`, and delete those records, yes, take &#39;em out of my way.&#xA;&#xA;So again: select first, update or delete later.&#xA;&#xA;```sql&#xA;SELECT * FROM hotline_messages WHERE tag IN (&#39;penguin prank&#39;, &#39;time-loop advisory&#39;, &#39;possible dragon&#39;, &#39;nonsense alert&#39;);&#xA;```&#xA;&#xA;Here `IN` is a great helper, as it is the compact equivalent of this:&#xA;&#xA;```sql&#xA;SELECT * FROM hotline_messages &#xA;WHERE &#xA;    tag = &#39;penguin prank&#39;&#xA;    OR tag = &#39;time-loop advisory&#39;&#xA;    OR tag = &#39;possible dragon&#39;&#xA;    OR tag = &#39;nonsense alert&#39;;&#xA;```&#xA;&#xA;That is a lot of `OR tag =`, all saved by `IN` with a list of values. 
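&#xA;One side note about `IN` worth knowing (not needed for this challenge, just a sketch): it uses normal SQL comparison, so a `NULL` value never matches the list, the comparison yields `NULL` rather than true. A quick check in the shell:&#xA;&#xA;```sql&#xA;-- 8 is not in the list, so this is plain false&#xA;SELECT 8 IN (1, 2, 3);&#xA;&#xA;-- NULL IN (...) is NULL, not false; the sqlite shell shows it as a blank&#xA;SELECT NULL IN (1, 2, 3);&#xA;```&#xA;&#xA;That behavior is exactly what we want here: messages with no tag at all will not match `tag IN (...)` and so will not be deleted as spam.&#xA;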
Handy little operator.&#xA;&#xA;The count here is `68`:&#xA;&#xA;```sql&#xA;SELECT count(*) FROM hotline_messages WHERE tag IN (&#39;penguin prank&#39;, &#39;time-loop advisory&#39;, &#39;possible dragon&#39;, &#39;nonsense alert&#39;);&#xA;```&#xA;&#xA;```plaintext&#xA;sqlite&gt; SELECT count(*) FROM hotline_messages WHERE tag IN (&#39;penguin prank&#39;, &#39;time-loop advisory&#39;, &#39;possible dragon&#39;, &#39;nonsense alert&#39;);&#xA;+----------+&#xA;| count(*) |&#xA;+----------+&#xA;| 68       |&#xA;+----------+&#xA;sqlite&gt; &#xA;```&#xA;&#xA;We also need to check if the `caller_name` is `Test Caller`.&#xA;&#xA;That is an `OR` here:&#xA;&#xA;```sql&#xA;SELECT * FROM hotline_messages &#xA;WHERE &#xA;     tag IN (&#39;penguin prank&#39;, &#39;time-loop advisory&#39;, &#39;possible dragon&#39;, &#39;nonsense alert&#39;)&#xA;     OR caller_name = &#39;Test Caller&#39;;&#xA;```&#xA;&#xA;That&#39;s it. Let&#39;s count the number of rows we will be deleting soon.&#xA;&#xA;```sql&#xA;SELECT count(*) FROM hotline_messages&#xA;WHERE&#xA;     tag IN (&#39;penguin prank&#39;, &#39;time-loop advisory&#39;, &#39;possible dragon&#39;, &#39;nonsense alert&#39;) &#xA;     OR caller_name = &#39;Test Caller&#39;;&#xA;```&#xA;&#xA;So, we have `89` rows to delete after taking the spammy tags and the test callers into account.&#xA;&#xA;```plaintext&#xA;sqlite&gt; SELECT count(*) FROM hotline_messages WHERE tag IN (&#39;penguin prank&#39;, &#39;time-loop advisory&#39;, &#39;possible dragon&#39;, &#39;nonsense alert&#39;) OR caller_name = &#39;Test Caller&#39;;&#xA;+----------+&#xA;| count(*) |&#xA;+----------+&#xA;| 89       |&#xA;+----------+&#xA;sqlite&gt; &#xA;```&#xA;&#xA;Let&#39;s get the spam outta here!&#xA;&#xA;```sql&#xA;DELETE FROM hotline_messages&#xA;WHERE tag IN (&#39;penguin prank&#39;, &#39;time-loop advisory&#39;, &#39;possible dragon&#39;, &#39;nonsense alert&#39;) OR caller_name = &#39;Test Caller&#39;;&#xA;```&#xA;&#xA;Phew! Done, 89 spammy records removed! 
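&#xA;Since a `DELETE` cannot be undone, one more habit worth mentioning (my own suggestion, not part of the challenge): run destructive statements inside a transaction, so a surprising `changes()` count can still be rolled back.&#xA;&#xA;```sql&#xA;BEGIN;&#xA;DELETE FROM hotline_messages&#xA;WHERE tag IN (&#39;penguin prank&#39;, &#39;time-loop advisory&#39;, &#39;possible dragon&#39;, &#39;nonsense alert&#39;)&#xA;   OR caller_name = &#39;Test Caller&#39;;&#xA;-- inspect SELECT changes(); here, then either:&#xA;COMMIT;       -- keep the delete&#xA;-- ROLLBACK;  -- or undo it if the count looks wrong&#xA;```&#xA;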
Santa might be relieved.&#xA;&#xA;```plaintext&#xA;sqlite&gt; DELETE FROM hotline_messages&#xA;WHERE tag IN (&#39;penguin prank&#39;, &#39;time-loop advisory&#39;, &#39;possible dragon&#39;, &#39;nonsense alert&#39;) OR caller_name = &#39;Test Caller&#39;;&#xA;&#xA;sqlite&gt; SELECT changes();&#xA;+-----------+&#xA;| changes() |&#xA;+-----------+&#xA;| 89        |&#xA;+-----------+&#xA;```&#xA;&#xA;The changes are done; now we simply have to select and count the records which are approved and the ones still in review.&#xA;&#xA;### Count&#39;em down&#xA;&#xA;* After doing those 2 things, we have to count the number of records with `status` as `approved` and the number of records that are still not `approved` (they are in review or the status is `NULL` )&#xA;    &#xA;&#xA;So, we need to get the count of:&#xA;&#xA;1. Records with `status` as `approved`&#xA;    &#xA;2. Records with `status` as `NULL`&#xA;    &#xA;&#xA;#### Separate Queries&#xA;&#xA;This looks straightforward: you can write two separate queries.&#xA;&#xA;```sql&#xA;SELECT COUNT(*) as approved_count FROM hotline_messages WHERE status = &#39;approved&#39;;&#xA;SELECT COUNT(*) as in_review_count FROM hotline_messages WHERE status IS NULL;&#xA;```&#xA;&#xA;```plaintext&#xA;sqlite&gt; SELECT COUNT(*) as approved_count FROM hotline_messages WHERE status = &#39;approved&#39;;&#xA;SELECT COUNT(*) as in_review_count FROM hotline_messages WHERE status IS NULL;&#xA;+----------------+&#xA;| approved_count |&#xA;+----------------+&#xA;| 477            |&#xA;+----------------+&#xA;+-----------------+&#xA;| in_review_count |&#xA;+-----------------+&#xA;| 501             |&#xA;+-----------------+&#xA;sqlite&gt; &#xA;```&#xA;&#xA;#### Group by Status&#xA;&#xA;But can we do it in one query?&#xA;&#xA;Think!&#xA;&#xA;There are just 2 values of status, right?&#xA;&#xA;Let&#39;s check:&#xA;&#xA;```sql&#xA;SELECT DISTINCT status FROM hotline_messages;&#xA;```&#xA;&#xA;Hmm! 2? 
`NULL` and `approved` !&#xA;&#xA;```plaintext&#xA;sqlite&gt; SELECT DISTINCT status FROM hotline_messages;&#xA;+----------+&#xA;|  status  |&#xA;+----------+&#xA;|          |&#xA;| approved |&#xA;+----------+&#xA;```&#xA;&#xA;So we can simply do the same thing, but just group by `status`, right? Like so:&#xA;&#xA;```sql&#xA;SELECT status, COUNT(*) as count&#xA;FROM hotline_messages&#xA;GROUP BY status;&#xA;```&#xA;&#xA;And this should give us back the two rows with the count of `NULL` and `approved` .&#xA;&#xA;```plaintext&#xA;sqlite&gt; SELECT status, COUNT(*) as count&#xA;FROM hotline_messages&#xA;GROUP BY status; &#xA;+----------+-------+&#xA;|  status  | count |&#xA;+----------+-------+&#xA;|          | 501   |&#xA;| approved | 477   |&#xA;+----------+-------+&#xA;```&#xA;&#xA;Is there a better way?&#xA;&#xA;This looks a little weird! The status shows up empty (`NULL`), which makes an odd view for people to look at. Can we do something different?&#xA;&#xA;#### Cases when then else&#xA;&#xA;This is a simple use case for a `CASE WHEN ... THEN ... ELSE ... END`.&#xA;&#xA;Each `WHEN` checks a condition and produces one value or another.&#xA;&#xA;In this case, if the status is `approved`, the `CASE` yields `1`, which gets counted; otherwise it yields `NULL`, which `COUNT` simply ignores. The second `CASE` does the same for the rows whose status is `NULL`.&#xA;&#xA;```sql&#xA;SELECT &#xA;    COUNT(CASE WHEN status = &#39;approved&#39; THEN 1 END) AS approved_count,&#xA;    COUNT(CASE WHEN status IS NULL THEN 1 END) AS in_review_count&#xA;FROM &#xA;    hotline_messages;&#xA;```&#xA;&#xA;What this will do is, for each row, count it toward either `approved_count` or `in_review_count` depending on the value of the `status` cell. 
If it is `approved`, it adds to `approved_count`; if it is `NULL`, it adds to `in_review_count`.&#xA;&#xA;Slick!&#xA;&#xA;```plaintext&#xA;sqlite&gt; SELECT &#xA;    COUNT(CASE WHEN status = &#39;approved&#39; THEN 1 END) AS approved_count,&#xA;    COUNT(CASE WHEN status IS NULL THEN 1 END) AS in_review_count&#xA;FROM &#xA;    hotline_messages;&#xA;+----------------+-----------------+&#xA;| approved_count | in_review_count |&#xA;+----------------+-----------------+&#xA;| 477            | 501             |&#xA;+----------------+-----------------+&#xA;sqlite&gt; &#xA;```&#xA;&#xA;That&#39;s it from day 3. Hopefully Santa is happy, even in sight of getting madder as the elves get dumber.&#xA;</content>
      <type>sqlog</type>
    </item>
    <item>
      <title>Advent of SQL 2025 Day 2: Snowballs</title>
      <link>https://meetgor.com/sqlog/advent-of-sql-2025-day-2</link>
      <description>SQLog: Advent of SQL Day 2 Here we are on the day 2 of Advent of SQL As I said in the previous day this is in SQLite so I won&#39;t be doing it in the playground. S</description>
      <pubDate>Wed, 17 Dec 2025 00:00:00 UTC</pubDate>
      <content>&#xA;&#xA;## SQLog: Advent of SQL Day 2&#xA;&#xA;Here we are on day 2 of Advent of SQL.&#xA;&#xA;As I said on the previous day, I am doing this in SQLite, so I won&#39;t be doing it in the playground. So here is your SQLite playground :)&#xA;&#xA;```sql&#xA;SELECT 1;&#xA;```&#xA;&#xA;From now on, no setup, straight to the problem!&#xA;&#xA;Let&#39;s download the .sql file for today&#39;s problem to see what data we are playing with.&#xA;&#xA;```sql&#xA;DROP TABLE IF EXISTS snowball_categories;&#xA;DROP TABLE IF EXISTS snowball_inventory;&#xA;&#xA;CREATE TABLE snowball_categories (&#xA;    id INT PRIMARY KEY,&#xA;    official_category TEXT&#xA;);&#xA;&#xA;CREATE TABLE snowball_inventory (&#xA;    id INT PRIMARY KEY,&#xA;    batch_id TEXT,&#xA;    category_name TEXT,&#xA;    quantity INT,&#xA;    status TEXT&#xA;);&#xA;```&#xA;&#xA;Well, neat and clean!&#xA;&#xA;Pulling it straight into the sqlite shell.&#xA;&#xA;```plaintext&#xA;$ sqlite3 &#xA;SQLite version 3.45.1 2024-01-30 16:01:20&#xA;Enter &#34;.help&#34; for usage hints.&#xA;Connected to a transient in-memory database.&#xA;Use &#34;.open FILENAME&#34; to reopen on a persistent database.&#xA;sqlite&gt; .read day2-inserts.sql&#xA;sqlite&gt; .schema&#xA;CREATE TABLE snowball_categories (&#xA;    id INT PRIMARY KEY,&#xA;    official_category TEXT&#xA;);&#xA;CREATE TABLE snowball_inventory (&#xA;    id INT PRIMARY KEY,&#xA;    batch_id TEXT,&#xA;    category_name TEXT,&#xA;    quantity INT,&#xA;    status TEXT&#xA;);&#xA;sqlite&gt; &#xA;```&#xA;&#xA;Worked right off the bat.&#xA;&#xA;Straight to the problem now.&#xA;&#xA;If you are working in the playground, do add the full code with the inserts to get a sense of what the data looks like. Note that the statements below are not all of the data, just a taste of the problem.&#xA;&#xA;```sql&#xA;DROP TABLE IF EXISTS snowball_categories;&#xA;DROP TABLE IF EXISTS snowball_inventory;&#xA;&#xA;CREATE TABLE snowball_categories (&#xA;    id INT PRIMARY 
KEY,&#xA;    official_category TEXT&#xA;);&#xA;&#xA;CREATE TABLE snowball_inventory (&#xA;    id INT PRIMARY KEY,&#xA;    batch_id TEXT,&#xA;    category_name TEXT,&#xA;    quantity INT,&#xA;    status TEXT&#xA;);&#xA;&#xA;INSERT INTO snowball_categories (id, official_category) VALUES&#xA;    (1, &#39;frost-flight deluxe&#39;),&#xA;    (2, &#39;north ridge compact&#39;),&#xA;    (3, &#39;glacier sphere (xl)&#39;),&#xA;    (4, &#39;polar precision microball&#39;),&#xA;    (5, &#39;everfrost training round&#39;),&#xA;    (6, &#39;arctic blast premium&#39;);&#xA;&#xA;INSERT INTO snowball_inventory (id, batch_id, category_name, quantity, status) VALUES&#xA;    (1, &#39;BATCH-35443-J&#39;, &#39;frost-flight deluxe&#39;, 19, NULL),&#xA;    (2, &#39;BATCH-59767-M&#39;, &#39;frost-flight deluxe&#39;, 41, &#39;incomplete&#39;),&#xA;    (3, &#39;BATCH-44795-B&#39;, &#39;frost-flight deluxe&#39;, 21, &#39;ready&#39;),&#xA;    (4, &#39;BATCH-23396-C&#39;, &#39;north ridge compact&#39;, 0, &#39;incomplete&#39;),&#xA;    (5, &#39;BATCH-88907-A&#39;, &#39;frost-flight deluxe&#39;, -2, &#39;incomplete&#39;),&#xA;    (6, &#39;BATCH-42662-D&#39;, &#39;frost-flight deluxe&#39;, 47, &#39;needs review&#39;),&#xA;    (7, &#39;BATCH-37460-V&#39;, &#39;north ridge compact&#39;, 43, &#39;ready&#39;),&#xA;    (8, &#39;BATCH-21395-S&#39;, &#39;frost-flight deluxe&#39;, -2, &#39;ready&#39;),&#xA;    (9, &#39;BATCH-36100-E&#39;, &#39;frost-flight deluxe&#39;, 46, &#39;ready&#39;),&#xA;    (10, &#39;BATCH-64987-H&#39;, &#39;frost-flight deluxe&#39;, 43, NULL),&#xA;    (11, &#39;BATCH-57576-Z&#39;, &#39;melty deluxe&#39;, -5, &#39;ready&#39;),&#xA;    (12, &#39;BATCH-56025-U&#39;, &#39;snowball v2&#39;, 11, &#39;ready&#39;),&#xA;    (13, &#39;BATCH-86556-W&#39;, &#39;snowball v2&#39;, 12, &#39;ready&#39;),&#xA;    (14, &#39;BATCH-83385-N&#39;, &#39;frost-flight deluxe&#39;, 38, &#39;incomplete&#39;),&#xA;    (15, &#39;BATCH-85156-M&#39;, &#39;prototype x-12&#39;, 28, &#39;incomplete&#39;),&#xA;  
  (16, &#39;BATCH-82135-F&#39;, &#39;north ridge compact&#39;, 32, &#39;incomplete&#39;),&#xA;    (17, &#39;BATCH-10074-T&#39;, &#39;frost-flight deluxe&#39;, 49, &#39;needs review&#39;),&#xA;    (18, &#39;BATCH-22676-L&#39;, &#39;frost-flight deluxe&#39;, 16, &#39;incomplete&#39;),&#xA;    (19, &#39;BATCH-31174-R&#39;, &#39;north ridge compact&#39;, 33, &#39;incomplete&#39;),&#xA;    (20, &#39;BATCH-41385-B&#39;, &#39;frost-flight deluxe&#39;, 4, &#39;ready&#39;),&#xA;    (21, &#39;BATCH-50404-L&#39;, &#39;frost-flight deluxe&#39;, -4, &#39;needs review&#39;),&#xA;    (22, &#39;BATCH-92240-F&#39;, &#39;north ridge compact&#39;, 20, &#39;ready&#39;),&#xA;    (23, &#39;BATCH-29198-J&#39;, &#39;beta test sphere&#39;, 0, &#39;incomplete&#39;),&#xA;    (24, &#39;BATCH-64987-H&#39;, &#39;glacier sphere (xl)&#39;, 18, &#39;needs review&#39;),&#xA;    (25, &#39;BATCH-80008-A&#39;, &#39;frost-flight deluxe&#39;, 3, &#39;incomplete&#39;),&#xA;    (26, &#39;BATCH-88907-A&#39;, &#39;polar precision microball&#39;, 48, &#39;incomplete&#39;),&#xA;    (27, &#39;BATCH-55830-J&#39;, &#39;north ridge compact&#39;, 0, &#39;needs review&#39;),&#xA;    (28, &#39;BATCH-69470-A&#39;, &#39;frost-flight deluxe&#39;, -3, &#39;incomplete&#39;),&#xA;    (29, &#39;BATCH-46211-R&#39;, &#39;frost-flight deluxe&#39;, -3, &#39;ready&#39;),&#xA;    (30, &#39;BATCH-18675-G&#39;, &#39;glacier sphere (xl)&#39;, -1, &#39;ready&#39;);&#xA;```&#xA;&#xA;## Problem&#xA;&#xA;Let&#39;s check the challenge of day 2&#xA;&#xA;&gt; Using the `snowball_inventory` and `snowball_categories` tables, write a query that returns valid snowball categories with the count of valid snowballs per category. Your final table should have the columns `official_category` and `total_usable_snowballs`. Sort the output from fewest to most `total_usable_snowballs`.&#xA;&#xA;So, we have two tables:&#xA;&#xA;1. snowball categories&#xA;    &#xA;2. 
snowball inventory&#xA;    &#xA;&#xA;The snowball categories table looks quite small: just an id and the category name, and the name is the real data here.&#xA;&#xA;```sql&#xA;SELECT * FROM snowball_categories;&#xA;```&#xA;&#xA;Just 6 entries with the names of the categories; the column is `official_category`.&#xA;&#xA;```sql&#xA;sqlite&gt; SELECT * FROM snowball_categories;&#xA;+----+---------------------------+&#xA;| id |     official_category     |&#xA;+----+---------------------------+&#xA;| 1  | frost-flight deluxe       |&#xA;| 2  | north ridge compact       |&#xA;| 3  | glacier sphere (xl)       |&#xA;| 4  | polar precision microball |&#xA;| 5  | everfrost training round  |&#xA;| 6  | arctic blast premium      |&#xA;+----+---------------------------+&#xA;sqlite&gt; &#xA;```&#xA;&#xA;And the other table has oodles of data:&#xA;&#xA;```sql&#xA;SELECT * FROM snowball_inventory LIMIT 10;&#xA;```&#xA;&#xA;So, we have quite a few columns:&#xA;&#xA;1. batch\_id&#xA;    &#xA;2. category\_name&#xA;    &#xA;3. quantity&#xA;    &#xA;4. 
status&#xA;    &#xA;&#xA;It has 200,000 rows; that&#39;s quite a lot.&#xA;&#xA;```plaintext&#xA;sqlite&gt; SELECT count(*) FROM snowball_inventory;&#xA;+----------+&#xA;| count(*) |&#xA;+----------+&#xA;| 200000   |&#xA;+----------+&#xA;&#xA;sqlite&gt; SELECT * FROM snowball_inventory limit 10;&#xA;+----+---------------+---------------------+----------+--------------+&#xA;| id |   batch_id    |    category_name    | quantity |    status    |&#xA;+----+---------------+---------------------+----------+--------------+&#xA;| 1  | BATCH-35443-J | frost-flight deluxe | 19       |              |&#xA;| 2  | BATCH-59767-M | frost-flight deluxe | 41       | incomplete   |&#xA;| 3  | BATCH-44795-B | frost-flight deluxe | 21       | ready        |&#xA;| 4  | BATCH-23396-C | north ridge compact | 0        | incomplete   |&#xA;| 5  | BATCH-88907-A | frost-flight deluxe | -2       | incomplete   |&#xA;| 6  | BATCH-42662-D | frost-flight deluxe | 47       | needs review |&#xA;| 7  | BATCH-37460-V | north ridge compact | 43       | ready        |&#xA;| 8  | BATCH-21395-S | frost-flight deluxe | -2       | ready        |&#xA;| 9  | BATCH-36100-E | frost-flight deluxe | 46       | ready        |&#xA;| 10 | BATCH-64987-H | frost-flight deluxe | 43       |              |&#xA;+----+---------------+---------------------+----------+--------------+&#xA;sqlite&gt; &#xA;```&#xA;&#xA;What do we have to do?&#xA;&#xA;&gt; return valid snowball categories with the count of valid snowballs per category&#xA;&#xA;That&#39;s a mouthful.&#xA;&#xA;The problem is asking us to count the usable snowballs per category, I think.&#xA;&#xA;After all, there are only a handful of categories and tons of inventory records.&#xA;&#xA;Also, the `category_name` in the `snowball_inventory` table is not trustworthy.&#xA;&#xA;&gt; Santa hurried to the snowball storage board, but the situation only got stranger. Whole batches appeared twice. 
Some batches claimed they had negative snowballs (“a bookkeeping accident,” the elves muttered). Others had a quantity of zero but were still marked “Ready.” And many batches referenced categories that didn’t appear anywhere in the official Snowball Category Guide&#xA;&#xA;## Naive Approach&#xA;&#xA;The first approach I see is to scan through all the inventory records and check that the category is in the `snowball_categories` table and its quantity is more than 0; I also wonder if the status needs to be checked as `ready`.&#xA;&#xA;&gt; “We need to know what we actually have left,” Santa said. “Not puddles. Not phantom batches. Real, usable, throw-ready snowballs.”&#xA;&#xA;**Real, throw-ready snowballs**&#xA;&#xA;Let&#39;s check how to do that with a simple subquery.&#xA;&#xA;```sql&#xA;SELECT * FROM snowball_inventory&#xA;WHERE&#xA;   category_name IN (SELECT official_category FROM snowball_categories)&#xA;   AND quantity &gt; 0&#xA;   AND status == &#39;ready&#39;;&#xA;```&#xA;&#xA;OK!&#xA;&#xA;What did we do? We simply selected all the data from the `snowball_inventory` table,&#xA;&#xA;in which the category name matches one of the 6 categories in the `snowball_categories` table.&#xA;&#xA;Also the quantity is positive (not zero), and the status is set to `ready`.&#xA;&#xA;This looks naive to me because of the subquery: conceptually, for each of the 200,000 records we check against `snowball_categories`. Ew!&#xA;&#xA;Wait, it&#39;s not done yet! We need the count for each category! 
We need to group by the `category_name`.&#xA;&#xA;```sql&#xA;SELECT * FROM snowball_inventory&#xA;WHERE&#xA;   category_name IN (SELECT official_category FROM snowball_categories)&#xA;   AND quantity &gt; 0&#xA;   AND status == &#39;ready&#39;&#xA;GROUP BY category_name;&#xA;```&#xA;&#xA;```sql&#xA;sqlite&gt; SELECT * FROM snowball_inventory&#xA;WHERE&#xA;   category_name IN (SELECT official_category FROM snowball_categories)&#xA;   AND quantity &gt; 0&#xA;   AND status == &#39;ready&#39;&#xA;GROUP BY category_name;&#xA;+-----+---------------+---------------------------+----------+--------+&#xA;| id  |   batch_id    |       category_name       | quantity | status |&#xA;+-----+---------------+---------------------------+----------+--------+&#xA;| 240 | BATCH-55793-L | arctic blast premium      | 45       | ready  |&#xA;| 163 | BATCH-75333-O | everfrost training round  | 23       | ready  |&#xA;| 3   | BATCH-44795-B | frost-flight deluxe       | 21       | ready  |&#xA;| 39  | BATCH-22704-V | glacier sphere (xl)       | 37       | ready  |&#xA;| 7   | BATCH-37460-V | north ridge compact       | 43       | ready  |&#xA;| 125 | BATCH-81987-E | polar precision microball | 47       | ready  |&#xA;+-----+---------------+---------------------------+----------+--------+&#xA;```&#xA;&#xA;That sort of looks weird, right?&#xA;&#xA;Why?&#xA;&#xA;Because what happened to the quantity? Was it summed? Averaged? And what just happened to the batch\_id?&#xA;&#xA;Neither: for each category it just picks the values from one arbitrary row out of the `200,000`. That&#39;s not what we want, right?&#xA;&#xA;We want this:&#xA;&#xA;&gt; return valid snowball categories with the count of valid snowballs per category&#xA;&#xA;So, we just want the category name and the total for each category. 
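&#xA;&#xA;As an aside: in SQLite, a bare (non-aggregated) column in a GROUP BY query returns the value from one arbitrary row of each group, which is why the `id` and `batch_id` above look random. A tiny sketch of that behavior (the `toy` table here is hypothetical, just for illustration):&#xA;&#xA;```sql&#xA;-- hypothetical toy table to show bare columns under GROUP BY&#xA;CREATE TABLE toy (name TEXT, qty INT);&#xA;INSERT INTO toy VALUES (&#39;ball&#39;, 1), (&#39;ball&#39;, 2), (&#39;cube&#39;, 5);&#xA;&#xA;-- qty is a bare column here: SQLite returns it from one&#xA;-- arbitrary row per group, not a sum or an average&#xA;SELECT name, qty FROM toy GROUP BY name;&#xA;```&#xA;&#xA;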
Basically a total per category.&#xA;&#xA;```sql&#xA;SELECT category_name AS official_category, sum(quantity) AS total_usable_snowballs&#xA;FROM snowball_inventory&#xA;WHERE&#xA;   category_name IN (SELECT official_category FROM snowball_categories)&#xA;   AND quantity &gt; 0&#xA;   AND status == &#39;ready&#39;&#xA;GROUP BY category_name;&#xA;```&#xA;&#xA;Here we fetch only the `category_name` and `sum(quantity)`, which adds up the quantities within each group of category\_name.&#xA;&#xA;```plaintext&#xA;sqlite&gt; SELECT category_name as official_category, sum(quantity) as total_usable_snowballs &#xA;FROM snowball_inventory &#xA;WHERE&#xA;   category_name IN (SELECT official_category FROM snowball_categories)&#xA;   AND quantity &gt; 0&#xA;   AND status == &#39;ready&#39;&#xA;GROUP BY category_name;&#xA;+---------------------------+------------------------+&#xA;|     official_category     | total_usable_snowballs |&#xA;+---------------------------+------------------------+&#xA;| arctic blast premium      | 11470                  |&#xA;| everfrost training round  | 24248                  |&#xA;| frost-flight deluxe       | 952019                 |&#xA;| glacier sphere (xl)       | 165158                 |&#xA;| north ridge compact       | 594119                 |&#xA;| polar precision microball | 70773                  |&#xA;+---------------------------+------------------------+&#xA;sqlite&gt; &#xA;&#xA;```&#xA;&#xA;Not done yet!&#xA;&#xA;We need to order by the total.&#xA;&#xA;&gt; Sort the output from fewest to most `total_usable_snowballs`.&#xA;&#xA;```sql&#xA;SELECT category_name AS official_category, sum(quantity) AS total_usable_snowballs&#xA;FROM snowball_inventory&#xA;WHERE&#xA;   category_name IN (SELECT official_category FROM snowball_categories)&#xA;   AND quantity &gt; 0&#xA;   AND status == &#39;ready&#39;&#xA;GROUP BY category_name&#xA;ORDER BY total_usable_snowballs;&#xA;```&#xA;&#xA;We can provide `ORDER BY total_usable_snowballs ASC` but ascending is 
default. I prefer keeping the default, but you can be explicit and mention `ASC` to make it clear and readable.&#xA;&#xA;```plaintext&#xA;sqlite&gt; SELECT category_name as official_category, sum(quantity) as total_usable_snowballs&#xA;FROM snowball_inventory &#xA;WHERE&#xA;   category_name IN (SELECT official_category FROM snowball_categories)&#xA;   AND quantity &gt; 0&#xA;   AND status == &#39;ready&#39;&#xA;GROUP BY category_name&#xA;ORDER BY total_usable_snowballs;&#xA;+---------------------------+------------------------+&#xA;|     official_category     | total_usable_snowballs |&#xA;+---------------------------+------------------------+&#xA;| arctic blast premium      | 11470                  |&#xA;| everfrost training round  | 24248                  |&#xA;| polar precision microball | 70773                  |&#xA;| glacier sphere (xl)       | 165158                 |&#xA;| north ridge compact       | 594119                 |&#xA;| frost-flight deluxe       | 952019                 |&#xA;+---------------------------+------------------------+&#xA;&#xA;sqlite&gt; &#xA;```&#xA;&#xA;There it is!&#xA;&#xA;The solution to day 2.&#xA;&#xA;But Santa, I am using subqueries. Is that fine?&#xA;&#xA;## Joins?&#xA;&#xA;We can use joins here, since we just require the total for each category.&#xA;&#xA;Which JOIN though?&#xA;&#xA;LEFT, RIGHT, INNER?&#xA;&#xA;Any, really?&#xA;&#xA;You don&#39;t choose the join based on the problem; you define your output columns and then choose the type of join that gives you that result.&#xA;&#xA;If everything from the left table must be kept in the result, it&#39;s a left join, i.e. join everything from the left, 
and so on.&#xA;&#xA;Here,&#xA;&#xA;I need all the `official_category` values, which live in the `snowball_categories` table.&#xA;&#xA;If I put `snowball_categories` on the left, each row on the left gets joined with every matching `category_name` row of the `snowball_inventory` table on the right. Like so.&#xA;&#xA;```sql&#xA;SELECT &#xA;    snowball_categories.official_category as official_category,&#xA;    SUM(snowball_inventory.quantity) as total_usable_snowballs&#xA;FROM snowball_categories&#xA;LEFT JOIN snowball_inventory&#xA;     ON snowball_categories.official_category = snowball_inventory.category_name&#xA;     AND snowball_inventory.quantity &gt; 0&#xA;     AND snowball_inventory.status == &#39;ready&#39;&#xA;GROUP BY official_category&#xA;ORDER BY total_usable_snowballs;&#xA;```&#xA;&#xA;We JOIN on the condition `snowball_categories.official_category = snowball_inventory.category_name`, which is to say the `category_name` in the `snowball_inventory` table should match the `official_category` from the `snowball_categories` table. The other conditions also apply: the `quantity` should be more than `0` and the `status` should be `ready`.&#xA;&#xA;We still need the GROUP BY and ORDER BY, as the join conditions only act as the filtering criteria that select or reject the bad rows. 
To obtain the total for each category, we group by the name of the category.&#xA;&#xA;```plaintext&#xA;sqlite&gt; SELECT snowball_categories.official_category AS official_category, SUM(snowball_inventory.quantity) AS total_usable_snowballs&#xA;FROM snowball_categories&#xA;LEFT JOIN snowball_inventory&#xA;     ON snowball_categories.official_category = snowball_inventory.category_name&#xA;     AND snowball_inventory.quantity &gt; 0&#xA;     AND snowball_inventory.status == &#39;ready&#39;&#xA;GROUP BY official_category&#xA;ORDER BY total_usable_snowballs;&#xA;+---------------------------+------------------------+&#xA;|     official_category     | total_usable_snowballs |&#xA;+---------------------------+------------------------+&#xA;| arctic blast premium      | 11470                  |&#xA;| everfrost training round  | 24248                  |&#xA;| polar precision microball | 70773                  |&#xA;| glacier sphere (xl)       | 165158                 |&#xA;| north ridge compact       | 594119                 |&#xA;| frost-flight deluxe       | 952019                 |&#xA;+---------------------------+------------------------+&#xA;```&#xA;&#xA;This is it! The same thing that we did with the subquery; just the filtering part is different.&#xA;&#xA;How about INNER JOIN?&#xA;&#xA;`INNER JOIN` is different from `LEFT JOIN`: it keeps only the rows that match the condition, whereas `LEFT JOIN` includes every row from the `LEFT` table even when there is no match, filling the right side with `NULL`. 
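&#xA;&#xA;A minimal sketch of that difference, using two hypothetical tables (`cat` and `inv` are made-up names, not part of the puzzle schema):&#xA;&#xA;```sql&#xA;CREATE TABLE cat (name TEXT);&#xA;INSERT INTO cat VALUES (&#39;frost-flight deluxe&#39;), (&#39;arctic blast premium&#39;);&#xA;&#xA;CREATE TABLE inv (category TEXT, qty INT);&#xA;INSERT INTO inv VALUES (&#39;frost-flight deluxe&#39;, 10);&#xA;&#xA;-- LEFT JOIN: &#39;arctic blast premium&#39; still appears, with qty as NULL&#xA;SELECT cat.name, inv.qty FROM cat LEFT JOIN inv ON cat.name = inv.category;&#xA;&#xA;-- INNER JOIN: the unmatched category is dropped entirely&#xA;SELECT cat.name, inv.qty FROM cat JOIN inv ON cat.name = inv.category;&#xA;```&#xA;&#xA;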
`RIGHT JOIN` is similar: it includes everything from the `RIGHT` table even when there is no match, so that all the records from the `RIGHT` table end up in the result set.&#xA;&#xA;For an INNER JOIN, flipping the tables won&#39;t matter, as it relies only on the condition and not on where the tables are placed (left or right).&#xA;&#xA;```sql&#xA;SELECT&#xA;  snowball_categories.official_category AS official_category,&#xA;  SUM(snowball_inventory.quantity) AS total_usable_snowballs&#xA;FROM snowball_inventory&#xA;JOIN snowball_categories&#xA;  ON snowball_categories.official_category = snowball_inventory.category_name&#xA;  AND snowball_inventory.quantity &gt; 0&#xA;  AND snowball_inventory.status = &#39;ready&#39;&#xA;GROUP BY official_category&#xA;ORDER BY total_usable_snowballs;&#xA;```&#xA;&#xA;This is with INNER JOIN. Nothing changed here, did it?&#xA;&#xA;Except the word `LEFT` is no longer there; the default join is `INNER JOIN`.&#xA;&#xA;In our case I don&#39;t think we have `NULL` values in the category name columns of the `snowball_categories` or `snowball_inventory` tables, and every official category has at least one matching ready row. 
So LEFT and INNER JOIN won&#39;t differ in their results here.&#xA;&#xA;```plaintext&#xA;sqlite&gt; SELECT&#xA;  snowball_categories.official_category AS official_category,&#xA;  SUM(snowball_inventory.quantity) AS total_usable_snowballs&#xA;FROM snowball_inventory&#xA;JOIN snowball_categories&#xA;  ON snowball_categories.official_category = snowball_inventory.category_name&#xA;  AND snowball_inventory.quantity &gt; 0&#xA;  AND snowball_inventory.status = &#39;ready&#39;&#xA;GROUP BY official_category&#xA;ORDER BY total_usable_snowballs;&#xA;+---------------------------+------------------------+&#xA;|     official_category     | total_usable_snowballs |&#xA;+---------------------------+------------------------+&#xA;| arctic blast premium      | 11470                  |&#xA;| everfrost training round  | 24248                  |&#xA;| polar precision microball | 70773                  |&#xA;| glacier sphere (xl)       | 165158                 |&#xA;| north ridge compact       | 594119                 |&#xA;| frost-flight deluxe       | 952019                 |&#xA;+---------------------------+------------------------+&#xA;sqlite&gt; &#xA;```&#xA;&#xA;However, if we make the `snowball_inventory` table the `LEFT` table, does it make a difference?&#xA;&#xA;Well, we don&#39;t want that as the left table in a `LEFT` JOIN: it would also list the bogus categories that aren&#39;t in `official_category`, lumped together under 0 or NULL, and that would make a mess.&#xA;&#xA;We can instead do a RIGHT JOIN.&#xA;&#xA;With the tables ordered as&#xA;&#xA;1. snowball\_inventory&#xA;    &#xA;2. snowball\_categories&#xA;    &#xA;&#xA;we RIGHT JOIN, that is, keep every row from the right table in the result set even when the condition never matches; the missing values would just be NULL. 
But that case doesn&#39;t arise here, as each of the 6 category names in the snowball\_categories table has matching inventory rows. (Note: RIGHT JOIN support was added in SQLite 3.39.0, so this needs a reasonably recent version.)&#xA;&#xA;```sql&#xA;SELECT&#xA;  snowball_categories.official_category AS official_category,&#xA;  SUM(snowball_inventory.quantity) AS total_usable_snowballs&#xA;FROM snowball_inventory&#xA;RIGHT JOIN snowball_categories&#xA;  ON snowball_categories.official_category = snowball_inventory.category_name&#xA;  AND snowball_inventory.quantity &gt; 0&#xA;  AND snowball_inventory.status = &#39;ready&#39;&#xA;GROUP BY official_category&#xA;ORDER BY total_usable_snowballs;&#xA;```&#xA;&#xA;```plaintext&#xA;sqlite&gt; SELECT&#xA;  snowball_categories.official_category AS official_category,&#xA;  SUM(snowball_inventory.quantity) AS total_usable_snowballs&#xA;FROM snowball_inventory&#xA;RIGHT JOIN snowball_categories&#xA;  ON snowball_categories.official_category = snowball_inventory.category_name&#xA;  AND snowball_inventory.quantity &gt; 0&#xA;  AND snowball_inventory.status = &#39;ready&#39;&#xA;GROUP BY official_category&#xA;ORDER BY total_usable_snowballs;&#xA;+---------------------------+------------------------+&#xA;|     official_category     | total_usable_snowballs |&#xA;+---------------------------+------------------------+&#xA;| arctic blast premium      | 11470                  |&#xA;| everfrost training round  | 24248                  |&#xA;| polar precision microball | 70773                  |&#xA;| glacier sphere (xl)       | 165158                 |&#xA;| north ridge compact       | 594119                 |&#xA;| frost-flight deluxe       | 952019                 |&#xA;+---------------------------+------------------------+&#xA;```&#xA;&#xA;However, if we try to use `LEFT JOIN` with `snowball_inventory` as the `LEFT` table, we might get something weird.&#xA;&#xA;```sql&#xA;SELECT&#xA;  
snowball_categories.official_category AS official_category,&#xA;  SUM(snowball_inventory.quantity) AS total_usable_snowballs&#xA;FROM snowball_inventory&#xA;LEFT JOIN snowball_categories&#xA;  ON snowball_categories.official_category = snowball_inventory.category_name&#xA;  AND snowball_inventory.quantity &gt; 0&#xA;  AND snowball_inventory.status = &#39;ready&#39;&#xA;GROUP BY official_category&#xA;ORDER BY total_usable_snowballs;&#xA;```&#xA;&#xA;We get the 6 categories plus 1 more, with an empty name?&#xA;&#xA;That extra category collects all the rows that didn&#39;t meet the JOIN criteria; since we asked for a `LEFT JOIN`, it still shows a result for every row in the left table, which is the `snowball_inventory` table.&#xA;&#xA;```plaintext&#xA;&#xA;sqlite&gt; SELECT                                                &#xA;  snowball_categories.official_category AS official_category,&#xA;  SUM(snowball_inventory.quantity) AS total_usable_snowballs&#xA;FROM snowball_inventory&#xA;LEFT JOIN snowball_categories&#xA;  ON snowball_categories.official_category = snowball_inventory.category_name&#xA;  AND snowball_inventory.quantity &gt; 0&#xA;  AND snowball_inventory.status = &#39;ready&#39;&#xA;GROUP BY official_category&#xA;ORDER BY total_usable_snowballs;&#xA;+---------------------------+------------------------+&#xA;|     official_category     | total_usable_snowballs |&#xA;+---------------------------+------------------------+&#xA;| arctic blast premium      | 11470                  |&#xA;| everfrost training round  | 24248                  |&#xA;| polar precision microball | 70773                  |&#xA;| glacier sphere (xl)       | 165158                 |&#xA;| north ridge compact       | 594119                 |&#xA;| frost-flight deluxe       | 952019                 |&#xA;|                           | 1692814                |&#xA;+---------------------------+------------------------+&#xA;```&#xA;&#xA;## Timer&#xA;&#xA;Phew! 
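&#xA;&#xA;Before timing anything, SQLite can also show how it intends to execute a query with `EXPLAIN QUERY PLAN` (the exact output depends on your SQLite version, so I won&#39;t pin it down here):&#xA;&#xA;```sql&#xA;EXPLAIN QUERY PLAN&#xA;SELECT category_name, SUM(quantity) AS total_usable_snowballs&#xA;FROM snowball_inventory&#xA;WHERE&#xA;   category_name IN (SELECT official_category FROM snowball_categories)&#xA;   AND quantity &gt; 0&#xA;   AND status = &#39;ready&#39;&#xA;GROUP BY category_name&#xA;ORDER BY total_usable_snowballs;&#xA;```&#xA;&#xA;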
Let&#39;s see which one of them performs best.&#xA;&#xA;Let&#39;s see the timings:&#xA;&#xA;```plaintext&#xA;.timer on&#xA;```&#xA;&#xA;### SUBQUERY&#xA;&#xA;```plaintext&#xA;sqlite&gt; SELECT category_name, SUM(quantity) AS total_usable_snowballs&#xA;FROM snowball_inventory&#xA;WHERE&#xA;   category_name IN (SELECT official_category FROM snowball_categories)&#xA;   AND quantity &gt; 0&#xA;   AND status == &#39;ready&#39;&#xA;GROUP BY category_name&#xA;ORDER BY total_usable_snowballs;&#xA;+---------------------------+------------------------+&#xA;|       category_name       | total_usable_snowballs |&#xA;+---------------------------+------------------------+&#xA;| arctic blast premium      | 11470                  |&#xA;| everfrost training round  | 24248                  |&#xA;| polar precision microball | 70773                  |&#xA;| glacier sphere (xl)       | 165158                 |&#xA;| north ridge compact       | 594119                 |&#xA;| frost-flight deluxe       | 952019                 |&#xA;+---------------------------+------------------------+&#xA;Run Time: real 0.077 user 0.073886 sys 0.002942&#xA;&#xA;sqlite&gt; &#xA;```&#xA;&#xA;### LEFT JOIN&#xA;&#xA;```plaintext&#xA;sqlite&gt;  SELECT snowball_categories.official_category AS official_category, SUM(snowball_inventory.quantity) AS total_usable_snowballs&#xA;FROM snowball_categories&#xA;LEFT JOIN snowball_inventory&#xA;     ON snowball_categories.official_category = snowball_inventory.category_name &#xA;     AND snowball_inventory.quantity &gt; 0&#xA;     AND snowball_inventory.status == &#39;ready&#39;&#xA;GROUP BY official_category&#xA;ORDER BY total_usable_snowballs;&#xA;+---------------------------+------------------------+&#xA;|     official_category     | total_usable_snowballs |&#xA;+---------------------------+------------------------+&#xA;| arctic blast premium      | 11470                  |&#xA;| everfrost training round  | 24248                  |&#xA;| polar precision microball | 
70773                  |&#xA;| glacier sphere (xl)       | 165158                 |&#xA;| north ridge compact       | 594119                 |&#xA;| frost-flight deluxe       | 952019                 |&#xA;+---------------------------+------------------------+&#xA;Run Time: real 0.182 user 0.171433 sys 0.009907&#xA;sqlite&gt; &#xA;```&#xA;&#xA;### INNER JOIN&#xA;&#xA;```plaintext&#xA;sqlite&gt;  SELECT&#xA;  snowball_categories.official_category AS official_category,&#xA;  SUM(snowball_inventory.quantity) AS total_usable_snowballs&#xA;FROM snowball_inventory&#xA;JOIN snowball_categories&#xA;  ON snowball_categories.official_category = snowball_inventory.category_name&#xA;  AND snowball_inventory.quantity &gt; 0&#xA;  AND snowball_inventory.status = &#39;ready&#39;&#xA;GROUP BY official_category&#xA;ORDER BY total_usable_snowballs;&#xA;+---------------------------+------------------------+&#xA;|     official_category     | total_usable_snowballs |&#xA;+---------------------------+------------------------+&#xA;| arctic blast premium      | 11470                  |&#xA;| everfrost training round  | 24248                  |&#xA;| polar precision microball | 70773                  |&#xA;| glacier sphere (xl)       | 165158                 |&#xA;| north ridge compact       | 594119                 |&#xA;| frost-flight deluxe       | 952019                 |&#xA;+---------------------------+------------------------+&#xA;Run Time: real 0.072 user 0.069623 sys 0.001929&#xA;sqlite&gt; &#xA;```&#xA;&#xA;### RIGHT JOIN&#xA;&#xA;```plaintext&#xA;sqlite&gt; SELECT                                                       &#xA;  snowball_categories.official_category AS official_category,&#xA;  SUM(snowball_inventory.quantity) AS total_usable_snowballs&#xA;FROM snowball_inventory&#xA;RIGHT JOIN snowball_categories&#xA;  ON snowball_categories.official_category = snowball_inventory.category_name&#xA;  AND snowball_inventory.quantity &gt; 0&#xA;  AND snowball_inventory.status = 
&#39;ready&#39;&#xA;GROUP BY official_category&#xA;ORDER BY total_usable_snowballs;&#xA;+---------------------------+------------------------+&#xA;|     official_category     | total_usable_snowballs |&#xA;+---------------------------+------------------------+&#xA;| arctic blast premium      | 11470                  |&#xA;| everfrost training round  | 24248                  |&#xA;| polar precision microball | 70773                  |&#xA;| glacier sphere (xl)       | 165158                 |&#xA;| north ridge compact       | 594119                 |&#xA;| frost-flight deluxe       | 952019                 |&#xA;+---------------------------+------------------------+&#xA;Run Time: real 0.151 user 0.149690 sys 0.001957&#xA;&#xA;sqlite&gt; &#xA;```&#xA;&#xA;Time-wise ranking:&#xA;&#xA;1. INNER JOIN&#xA;    &#xA;2. SUBQUERY&#xA;    &#xA;3. RIGHT JOIN&#xA;    &#xA;4. LEFT JOIN&#xA;    &#xA;&#xA;That&#39;s all quite quick; at this size there isn&#39;t much to measure. I am sure there are other ways to do this, but I&#39;m stopping here for today. Explored JOINs a little.&#xA;&#xA;See you on day 3!&#xA;</content>
      <type>sqlog</type>
    </item>
    <item>
      <title>Advent of SQL 2025: Wish List</title>
      <link>https://meetgor.com/sqlog/advent-of-sql-2025-day-1</link>
      <description>Learning SQLite: Advent of SQL Day 1 I am trying to learn SQLite, I want to understand that database. It&#39;s quite simple yet the whole world uses it for various</description>
      <pubDate>Tue, 16 Dec 2025 00:00:00 UTC</pubDate>
<content>&#xA;&#xA;## Learning SQLite: Advent of SQL Day 1&#xA;&#xA;I am trying to learn SQLite; I want to understand that database. It&#39;s quite simple, yet the whole world uses it for various kinds of things, ranging from developers&#39; toy database to spaceships. What a tiny engineering marvel!&#xA;&#xA;I am happy to see this happening: [Advent of SQL](https://databaseschool.com/series/advent-of-sql-videos-308)&#xA;&#xA;What better time to learn more? I guess I want to start by exploring all the specificities of the INSERT statement in SQLite, after having explored most of the CREATE TABLE statement.&#xA;&#xA;But here I am jumping to this. Why? Because I want to solve something first before exploring other branches.&#xA;&#xA;Today I am going to try to solve the day 1 problem.&#xA;&#xA;&gt; **NOTE:** I will be using my local SQLite db or a playground for testing. I am not going to use the browser-based playground attached to the databaseschool.com app, for a reason: I want to use SQLite. That database is some form of Postgres; I don&#39;t mind using it, but I want to do it in SQLite.&#xA;&#xA;I have a playground on my blog for SQLite; you can try it out here:&#xA;&#xA;```sql&#xA;SELECT 1;&#xA;```&#xA;&#xA;It uses an embedded SQLite version (3.49.1) with sql.js as a wasm extension: [sql.js v1.13.0](https://github.com/sql-js/sql.js/releases/tag/v1.13.0)&#xA;&#xA;Back to the problem, elves!&#xA;&#xA;## Setup&#xA;&#xA;This is the first day, and an advent calendar usually requires some setup or preparation for the rest of the days. Luckily, this is optional if you are doing it in the database school playground or in a PostgreSQL database.&#xA;&#xA;We have some .sql files as input for the creation and insertion of tables and rows in the database. 
It constructs the schema (tables) and populates the rows that the problem requires.&#xA;&#xA;The SQL looked something like this:&#xA;&#xA;```sql&#xA;DROP TABLE IF EXISTS wish_list CASCADE;&#xA;&#xA;CREATE TABLE wish_list (&#xA;   id          BIGSERIAL PRIMARY KEY,&#xA;   child_name  TEXT,&#xA;   raw_wish    TEXT&#xA;);&#xA;&#xA;INSERT INTO wish_list (id, child_name, raw_wish) VALUES (1, &#39;James A.&#39;, &#39; BLUEY SUPERMARKET PLAY SET&#39;);&#xA;INSERT INTO wish_list (id, child_name, raw_wish) VALUES (2, &#39;Sade C.&#39;, &#39;lego star wars set &#39;);&#xA;```&#xA;&#xA;There are around 499,000 rows!&#xA;&#xA;However, when I tried to read it directly into a SQLite shell:&#xA;&#xA;```&#xA;.read day1-wish-list.sql&#xA;```&#xA;&#xA;I got a parse error on the DROP TABLE statement:&#xA;&#xA;```&#xA;sqlite&gt; .read day1-wish-list.sql&#xA;Parse error near line 11: near &#34;CASCADE&#34;: syntax error&#xA;  DROP TABLE IF EXISTS wish_list CASCADE;&#xA;                       error here ---^&#xA;```&#xA;&#xA;Obviously it was designed for Postgres. It won&#39;t work in SQLite.&#xA;&#xA;SQLite is minimal. It might not have everything that PostgreSQL has, but PostgreSQL pretty much has everything that SQLite has (maybe, though not as is).&#xA;&#xA;So, we need to remove the CASCADE, which is an option deciding what to do with related rows when a relation is removed; cascading means deleting all the related records in the related tables as well. SQLite doesn&#39;t have such options on the DROP TABLE statement; it has them for CREATE TABLE via the foreign key constraint (e.g. ON DELETE CASCADE).&#xA;&#xA;Now we need to remove it. 
It can&#39;t appear in a DROP TABLE statement in SQLite:&#xA;&#xA;```sql&#xA;DROP TABLE IF EXISTS wish_list;&#xA;```&#xA;&#xA;Now, let&#39;s check by running the queries again:&#xA;&#xA;```&#xA;.read day1-wish-list.sql&#xA;```&#xA;&#xA;That works!&#xA;&#xA;```&#xA;$ sqlite3&#xA;SQLite version 3.45.1 2024-01-30 16:01:20&#xA;Enter &#34;.help&#34; for usage hints.&#xA;Connected to a transient in-memory database.&#xA;Use &#34;.open FILENAME&#34; to reopen on a persistent database.&#xA;sqlite&gt; .read day1-wish-list_sqlite1.sql&#xA;sqlite&gt; .schema&#xA;CREATE TABLE wish_list (&#xA;   id          BIGSERIAL PRIMARY KEY,&#xA;   child_name  TEXT,&#xA;   raw_wish    TEXT&#xA;);&#xA;```&#xA;&#xA;But this looks weird:&#xA;&#xA;```sql&#xA;CREATE TABLE wish_list (&#xA;   id          BIGSERIAL PRIMARY KEY,&#xA;   child_name  TEXT,&#xA;   raw_wish    TEXT&#xA;);&#xA;```&#xA;&#xA;BIGSERIAL is not a datatype in SQLite; it is an auto-incrementing integer type in PostgreSQL. But does it matter? In SQLite, if the table is not STRICT, it doesn&#39;t matter what type a column is declared as, or even whether it has a declared type at all. 
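&#xA;&#xA;A quick sketch of that flexible typing (the `demo` table is made up just for illustration; this assumes a non-STRICT table):&#xA;&#xA;```sql&#xA;-- the declared type is not enforced in a non-STRICT table&#xA;CREATE TABLE demo (id BIGSERIAL PRIMARY KEY, note TEXT);&#xA;INSERT INTO demo VALUES (1, &#39;fine&#39;);&#xA;INSERT INTO demo VALUES (&#39;not-a-number&#39;, &#39;also fine&#39;);&#xA;&#xA;-- typeof shows what was actually stored for each row&#xA;SELECT id, typeof(id) FROM demo;&#xA;```&#xA;&#xA;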
That&#39;s fine.&#xA;&#xA;Let&#39;s INSERT some rows, shall we?&#xA;&#xA;```sql&#xA;INSERT INTO wish_list (id, child_name, raw_wish) VALUES (1, &#39;James A.&#39;, &#39; BLUEY SUPERMARKET PLAY SET&#39;);&#xA;INSERT INTO wish_list (id, child_name, raw_wish) VALUES (2, &#39;Sade C.&#39;, &#39;lego star wars set &#39;);&#xA;INSERT INTO wish_list (id, child_name, raw_wish) VALUES (3, &#39;Juan Q.&#39;, &#39;   SCOOTER &#39;);&#xA;INSERT INTO wish_list (id, child_name, raw_wish) VALUES (4, &#39;Samir S.&#39;, &#39;   LEGO STAR WARS SET  &#39;);&#xA;INSERT INTO wish_list (id, child_name, raw_wish) VALUES (5, &#39;Priya E.&#39;, &#39;shaved ice machine   &#39;);&#xA;INSERT INTO wish_list (id, child_name, raw_wish) VALUES (6, &#39;Henry L.&#39;, &#39;   mini brands fill the fridge&#39;);&#xA;INSERT INTO wish_list (id, child_name, raw_wish) VALUES (7, &#39;Ayumi C.&#39;, &#39;VR HEADSET&#39;);&#xA;INSERT INTO wish_list (id, child_name, raw_wish) VALUES (8, &#39;Juan Y.&#39;, &#39;BARBIE DREAMHOUSE   &#39;);&#xA;INSERT INTO wish_list (id, child_name, raw_wish) VALUES (9, &#39;Priya O.&#39;, &#39;  VR HEADSET  &#39;);&#xA;```&#xA;&#xA;```sql&#xA;SELECT * FROM wish_list;&#xA;```&#xA;&#xA;But I want to know: what is the type of `wish_list.id`?&#xA;&#xA;```sql&#xA;SELECT distinct(typeof(id)) FROM wish_list;&#xA;```&#xA;&#xA;Here&#39;s the output:&#xA;&#xA;```&#xA;sqlite&gt; SELECT distinct(typeof(id)) FROM wish_list;&#xA;integer&#xA;```&#xA;&#xA;It is integer, because of type affinity, I guess. Good work, SQLite.&#xA;&#xA;Looks like the data is fine.&#xA;&#xA;Now to the problem.&#xA;&#xA;## Problem&#xA;&#xA;[Link to the challenge](https://databaseschool.com/series/advent-of-sql-videos-309-text-challenge-using-the-and-get-started)&#xA;&#xA;**Challenge:** Using the wish_list table, count how many times each cleaned toy name appears, from most requested to least requested. Return the results in two columns: wish and count. 
Make sure the wish results have no extra leading or trailing spaces and are all lowercase.&#xA;&#xA;So, we simply need two columns:&#xA;- **wish** (the text)&#xA;- **count** (the number of times that wish is wished)&#xA;&#xA;### Subtleties&#xA;&#xA;&gt; Some children had typed extra spaces. Some wrote in ALL CAPS. Some had letters that danced between cases like playful snowflakes.&#xA;&gt; &#xA;&gt; I know there are some issues with spelling, the extra spaces, or the funny capitalization, but I just need to know what the children truly meant.&#xA;&#xA;So, we need to either lowercase or uppercase the wish text and trim off the spaces.&#xA;&#xA;SCALAR FUNCTIONS!!&#xA;&#xA;### Scalar Functions&#xA;&#xA;[SQLite Core Functions](https://sqlite.org/lang_corefunc.html)&#xA;&#xA;I read through the list of ~70 of them; most of them are kind of the same with different parameters.&#xA;&#xA;The ones that I found relevant are:&#xA;- [LOWER](https://sqlite.org/lang_corefunc.html#lower)&#xA;- [TRIM](https://sqlite.org/lang_corefunc.html#trim)&#xA;&#xA;That&#39;s it, right? Convert into LOWER (or UPPER) and TRIM off the spaces.&#xA;&#xA;```sql&#xA;SELECT LOWER(TRIM(raw_wish)) FROM wish_list;&#xA;```&#xA;&#xA;Don&#39;t run just yet! Add a LIMIT first, so we don&#39;t print all 499,000 rows:&#xA;&#xA;```sql&#xA;SELECT LOWER(TRIM(raw_wish)) FROM wish_list LIMIT 100;&#xA;```&#xA;&#xA;Looks good. Now to the next step.&#xA;&#xA;### Grouping and Counting&#xA;&#xA;We need to count them, i.e. 
to group by the wish.&#xA;&#xA;&gt; GROUP BY: It condenses the rows that share the same value in a column into a single group. For instance, if there are 10 entries for &#34;lego star wars set&#34;, adding a GROUP BY wish will create a single entry for that wish, and we can then perform operations like SUM, COUNT, and AVG over the other columns of each group.&#xA;&#xA;```sql&#xA;SELECT LOWER(TRIM(raw_wish)) AS wish, count(*) AS count&#xA;FROM wish_list&#xA;GROUP BY wish;&#xA;```&#xA;&#xA;Here we are grouping by wish because we don&#39;t want 10 entries of &#34;lego star wars set&#34;; we just want one common entry, to view the unique wishes.&#xA;&#xA;Also, by using `COUNT(*)` we are counting the rows in each group. As I said, the multiple rows with the same wish are squished into a single row, so now we can use aggregate functions like COUNT and SUM; in our case, we want to count how many instances of each particular wish there are.&#xA;&#xA;&#xA;### Ordering Results&#xA;&#xA;Does that solve it? Mostly; we just need the ORDER BY now.&#xA;&#xA;Because we also need to order the results:&#xA;&#xA;```sql&#xA;SELECT LOWER(TRIM(raw_wish)) AS wish, count(*) AS count&#xA;FROM wish_list&#xA;GROUP BY wish&#xA;ORDER BY count DESC;&#xA;```&#xA;&#xA;Perfect? Probably.&#xA;&#xA;&gt; What ORDER BY does is determine how the rows should be sorted, based on what column and in which direction, i.e. 
the column name and either ASC(ending) or DESC(ending).&#xA;&#xA;&#xA;Here we have ordered by count so that the most wished toy appears at the top; with ASC it would be the least wished toy instead.&#xA;&#xA;### Results&#xA;&#xA;Now with `.mode table`:&#xA;&#xA;```&#xA;sqlite&gt; .mode table&#xA;sqlite&gt; SELECT LOWER(TRIM(raw_wish)) as wish, count(*) as count FROM wish_list group by wish order by count desc;&#xA;+-----------------------------+-------+&#xA;|            wish             | count |&#xA;+-----------------------------+-------+&#xA;| lego city f1 car            | 32893 |&#xA;| barbie dreamhouse           | 32785 |&#xA;| nerf blaster                | 32746 |&#xA;| lego star wars set          | 32611 |&#xA;| beyblade battle arena       | 29564 |&#xA;| magna-tiles pet playhouse   | 29529 |&#xA;| bluey supermarket play set  | 26292 |&#xA;| lego friends amusement park | 25982 |&#xA;| pokemon trainer box         | 25968 |&#xA;| duplo building set          | 23005 |&#xA;| mini brands fill the fridge | 22965 |&#xA;| electric toy train set      | 22885 |&#xA;| toniebox audio player       | 19529 |&#xA;| scooter                     | 19496 |&#xA;| vr headset                  | 16468 |&#xA;| squishmallows               | 16304 |&#xA;| shaved ice machine          | 16263 |&#xA;| drone for kids              | 13151 |&#xA;| coding robot                | 13025 |&#xA;| headphones                  | 13006 |&#xA;| interactive robot dog       | 9770  |&#xA;| fidget spinner              | 3590  |&#xA;| yo-yo                       | 3565  |&#xA;| slime kit                   | 3553  |&#xA;| littlest pet shop playset   | 3543  |&#xA;| chatter telephone           | 3527  |&#xA;| fingerlings robot monkey    | 3511  |&#xA;| rubik&#39;s revolution          | 3474  |&#xA;+-----------------------------+-------+&#xA;sqlite&gt;&#xA;```&#xA;&#xA;---&#xA;&#xA;Day 1 done, moving on to day two by helping those pesky elves tomorrow. 
I am amazed at the silly problems humans create with those elves as the target. Just kidding, humans are elves :)&#xA;&#xA;Happy Coding :)&#xA;&#xA;Happy Squealing&#xA;</content>
      <type>sqlog</type>
    </item>
    <item>
      <title>SQLITE SQL: Create Table with STRICT Option</title>
      <link>https://meetgor.com/sqlog/sqlite-create-strict-table</link>
      <description>We have seen how to create loosely Not adhering to the types. Because if we create a table with column name of type text, and insert a integer, it will happily</description>
      <pubDate>Fri, 12 Dec 2025 00:00:00 UTC</pubDate>
<content>&#xA;&#xA;We have seen how to `CREATE TABLE` loosely, not adhering to the types. If we create a table with a column of type text and insert an integer, it will happily store it as text. It is very flexible, as we saw. So, in such a `CREATE TABLE` statement in SQLite, without the `STRICT` option, the types don&#39;t matter.&#xA;&#xA;&#xA;If you create a table with a column type of XYZ, it will accept it, because it doesn&#39;t really check that. It looks at the data coming in from the insert statement and stores it as whatever type it thinks fits best for that piece of data. Look at the example below:&#xA;&#xA;```sql&#xA;CREATE TABLE t1 (n xyz);&#xA;INSERT INTO t1 values(4);&#xA;INSERT INTO t1 values(&#39;gg&#39;);&#xA;SELECT rowid, n, typeof(n) FROM t1;&#xA;```&#xA;&#xA;&#xA;```&#xA;rowid    n    typeof(n)&#xA;1        4    integer&#xA;2       gg    text&#xA;```&#xA;&#xA;&#xA;See? The column type doesn&#39;t matter.&#xA;&#xA;Unless it is STRICT, or constraints or generated-column conditions have been added.&#xA;&#xA;## The STRICT table option&#xA;&#xA;Let&#39;s quote from the documentation what it means:&#xA;&#xA;&#xA;&gt; *   Every column definition must specify a datatype for that column.&#xA;&gt;&#xA;&gt; *   The freedom to specify a column without a datatype is removed.&#xA;&gt;&#xA;&gt; *   The datatype must be one of the following:&#xA;&gt;&#xA;&gt;     *   INT&#xA;&gt;&#xA;&gt;     *   INTEGER&#xA;&gt;&#xA;&gt;     *   REAL&#xA;&gt;&#xA;&gt;     *   TEXT&#xA;&gt;&#xA;&gt;     *   BLOB&#xA;&gt;&#xA;&gt;     *   ANY&#xA;&gt;&#xA;&gt; *   The [PRAGMA integrity\_check](https://sqlite.org/pragma.html#pragma_integrity_check) and [PRAGMA quick\_check](https://sqlite.org/pragma.html#pragma_quick_check) commands check the type of the content of all columns in STRICT tables and show errors if anything is amiss.&#xA;&gt;&#xA;&#xA;There are other nuances of the STRICT table option and the kinds of constraints that you put on the columns, but that 
requires studying very specific examples. We&#39;ll check those nuances later.&#xA;&#xA;For now, though, we need to understand how to create a strictly typed table, and what the STRICT option adds to the table.&#xA;&#xA;```sql&#xA;CREATE TABLE users(&#xA;    name TEXT,&#xA;    age  INT,&#xA;    credits REAL,&#xA;    profile_pic BLOB&#xA;) STRICT;&#xA;```&#xA;&#xA;So, these are all the possible types we can use for a table column when defining a strict table. If you don&#39;t provide a column type, or provide any type other than `TEXT`, `INT` or `INTEGER`, `REAL`, `BLOB`, or `ANY` (don&#39;t use ANY, though, or you lose the purpose of STRICT), the table creation will fail. You need to provide a valid type from that list.&#xA;&#xA;Here is what happens if you try to create a strict table with a wrong column type or no column type:&#xA;&#xA;```sql&#xA;CREATE TABLE t1 (t) STRICT;&#xA;-- Error: missing datatype for t1.t&#xA;&#xA;CREATE TABLE t1 (t something) STRICT;&#xA;-- Error: unknown datatype for t1.t: &#34;something&#34;&#xA;```&#xA;&#xA;&#xA;&#xA;Without STRICT it works as usual:&#xA;&#xA;```sql&#xA;CREATE TABLE t1 (t  something);&#xA;INSERT INTO t1 values(123), (&#39;abc&#39;), (X&#39;&#39;), (123.45);&#xA;SELECT t, typeof(t) FROM t1;&#xA;```&#xA;&#xA;&#xA;```&#xA;t    typeof(t)&#xA;123    integer&#xA;abc    text&#xA;    blob&#xA;123.45    real&#xA;```&#xA;&#xA;&#xA;Now back to the original example:&#xA;&#xA;Insert a couple of rows:&#xA;&#xA;```sql&#xA;-- All are NULL Values&#xA;INSERT INTO users DEFAULT VALUES;&#xA;&#xA;INSERT INTO users (name, age, credits, profile_pic)&#xA;VALUES (&#xA;    &#39;Alice&#39;,&#xA;    30,&#xA;    100.0,&#xA;    X&#39;89504E470D0A1A0A&#39;&#xA;);&#xA;```&#xA;&#xA;&#xA;This will insert two rows; in the first one, all the columns will be `NULL`. Now look at the type of each value from these statements. 
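We can check the stored types by running typeof() over each column; this is the kind of query behind the output shown next:&#xA;&#xA;```sql&#xA;SELECT name, typeof(name), age, typeof(age),&#xA;       credits, typeof(credits), profile_pic, typeof(profile_pic)&#xA;FROM users;&#xA;```&#xA;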
Those will be as per the table schema, consistent for all rows.&#xA;&#xA;```&#xA;name    typeof(name)    age    typeof(age)    credits    typeof(credits)    profile_pic    typeof(profile_pic)&#xA;null        null        null        null&#xA;Alice    text    30    integer    100    real    137,80,78,71,13,10,26,10    blob&#xA;```&#xA;&#xA;This rightly reports the NULL type for the null values, but when there is data in a row, it is forced to the type stated in the table schema.&#xA;&#xA;&#xA;Now, if we try to mess up the column data, it won&#39;t work:&#xA;&#xA;```sql&#xA;INSERT INTO users (name, age, credits, profile_pic)&#xA;VALUES (34, &#39;4&#39;, 8, 123);&#xA;-- Error: cannot store INT value in BLOB column users.profile_pic&#xA;&#xA;INSERT INTO users (name, age, credits, profile_pic)&#xA;VALUES (34, &#39;4&#39;, 8, &#39;&#39;);&#xA;-- Error: cannot store TEXT value in BLOB column users.profile_pic&#xA;&#xA;INSERT INTO users (name, age, credits, profile_pic)&#xA;VALUES (34, &#39;abc&#39;, 8, X&#39;&#39;);&#xA;-- Error: cannot store TEXT value in INT column users.age&#xA;```&#xA;&#xA;This, however, will work, as the values can be converted to the declared column types:&#xA;&#xA;```sql&#xA;INSERT INTO users (name, age, credits, profile_pic)&#xA;VALUES (CAST(34 AS INT), &#39;3&#39;, 8, X&#39;&#39;);&#xA;```&#xA;&#xA;But if some data cannot be converted into that strict type, it will fail the STRICT column-type constraint.&#xA;&#xA;For instance:&#xA;- `123` or `&#34;&#34;` is not forcibly convertible to BLOB, which is a binary large object. 
We need to use X&#39;&#39; hex strings for raw data to make a BLOB-like object in SQLite.&#xA;- `abc` is not convertible/castable to an INTEGER or REAL value.&#xA;&#xA;So, the STRICT type is actually strict, and we see the usual pattern repeating in SQLite:&#xA;&#xA;&gt; It is flexible till you allow it to be; you can at any time change the lever and make it strict.&#xA;&#xA;This is true for column- and row-level type checking with the STRICT table option while creating a table.&#xA;</content>
      <type>sqlog</type>
    </item>
    <item>
      <title>SQLite SQL: Create Table with Select</title>
      <link>https://meetgor.com/sqlog/sqlite-create-table-from-select</link>
      <description>Creating table with Select Query Sometimes you want to create a new table based on data that already exists,maybe you’re archiving old rows, generating a summ</description>
      <pubDate>Sun, 28 Sep 2025 00:00:00 UTC</pubDate>
<content>&#xA;&#xA;## Creating table with Select Query&#xA;&#xA;Sometimes you want to create a new table based on data that already exists, maybe you’re archiving old rows, generating a summary, cloning a table, or transforming data into a new structure. Doing this might require you to write a `CREATE TABLE` statement with all column definitions, then run one or more `INSERT INTO` statements to populate it. That’s a lot of work.&#xA;&#xA;Well, SQL is more flexible than you might think. Instead of manually defining columns and inserting data, you write one statement that both builds the table and fills it with the rows returned by your `SELECT` query. This makes it incredibly useful for backups, and whatever else you are doing (hopefully not taking down the prod db).&#xA;&#xA;We have CTAS, the `CREATE TABLE AS SELECT` statement.&#xA;This basically means: take whatever the SELECT gives, and turn it into a table.&#xA;&#xA;## Creating CTAS&#xA;&#xA;Let&#39;s start with a simple example; we will use our good ol&#39; `users` table first and populate it with some data.&#xA;&#xA;```sql&#xA;CREATE TABLE users (&#xA;    id INTEGER PRIMARY KEY,&#xA;    name TEXT NOT NULL,&#xA;    age INTEGER NOT NULL&#xA;);&#xA;```&#xA;&#xA;This would create a table called `users` with the columns `id`, `name`, and `age`. 
The `id` column is the primary key, and the `name` and `age` columns are not null.&#xA;&#xA;Let&#39;s insert some data into the table.&#xA;&#xA;```sql&#xA;INSERT INTO users (name, age)&#xA;VALUES (&#39;abc&#39;, 20), (&#39;def&#39;, 30), (&#39;ghi&#39;, 40);&#xA;```&#xA;&#xA;This would insert three rows into the `users` table, with the names `abc`, `def`, and `ghi`, and the ages 20, 30, and 40.&#xA;&#xA;Now, let&#39;s use the `CREATE TABLE AS SELECT` statement to create a new table called `users_copy` that is a copy of the `users` table.&#xA;&#xA;```sql&#xA;CREATE TABLE users_copy AS SELECT * FROM users;&#xA;```&#xA;This would create a new table called `users_copy` with the same column structure and data as the `users` table.&#xA;&#xA;```sql&#xA;SELECT * FROM users_copy;&#xA;```&#xA;&#xA;Let&#39;s check the schema of the new table `users_copy`:&#xA;&#xA;```sql&#xA;PRAGMA table_info(users_copy);&#xA;PRAGMA table_info(users);&#xA;```&#xA;&#xA;This would print the schema of both tables.&#xA;&#xA;```&#xA;PRAGMA table_info(users_copy);&#xA;+-----+------+------+---------+------------+----+&#xA;| cid | name | type | notnull | dflt_value | pk |&#xA;+-----+------+------+---------+------------+----+&#xA;| 0   | id   | INT  | 0       |            | 0  |&#xA;| 1   | name | TEXT | 0       |            | 0  |&#xA;| 2   | age  | INT  | 0       |            | 0  |&#xA;+-----+------+------+---------+------------+----+&#xA;&#xA;&#xA;PRAGMA table_info(users);&#xA;+-----+------+---------+---------+------------+----+&#xA;| cid | name |  type   | notnull | dflt_value | pk |&#xA;+-----+------+---------+---------+------------+----+&#xA;| 0   | id   | INTEGER | 0       |            | 1  |&#xA;| 1   | name | TEXT    | 1       |            | 0  |&#xA;| 2   | age  | INTEGER | 1       |            | 0  
|&#xA;+-----+------+---------+---------+------------+----+&#xA;```&#xA;&#xA;You can see the difference here: the `users.id` column is a primary key in the `users` table, however in the `users_copy` table it isn&#39;t. Also, constraints like `notnull` are not carried over to the `users_copy` table.&#xA;&#xA;So, is this a good way to create a copy of a table?&#xA;&#xA;Maybe; it&#39;s just copying the data and the bare-bones structure of the table, not the full schema.&#xA;&#xA;```sqlite&#xA;&#xA;sqlite&gt; CREATE TABLE users (&#xA;    id INTEGER PRIMARY KEY,&#xA;    name TEXT NOT NULL,&#xA;    age INTEGER NOT NULL&#xA;);&#xA;sqlite&gt; INSERT INTO users (name, age)&#xA;VALUES (&#39;abc&#39;, 20), (&#39;def&#39;, 30), (&#39;ghi&#39;, 40);&#xA;sqlite&gt; .mode table&#xA;sqlite&gt; SELECT * FROM users;&#xA;+----+------+-----+&#xA;| id | name | age |&#xA;+----+------+-----+&#xA;| 1  | abc  | 20  |&#xA;| 2  | def  | 30  |&#xA;| 3  | ghi  | 40  |&#xA;+----+------+-----+&#xA;sqlite&gt; CREATE TABLE users_copy AS SELECT * FROM users;&#xA;sqlite&gt; SELECT * FROM users_copy;&#xA;+----+------+-----+&#xA;| id | name | age |&#xA;+----+------+-----+&#xA;| 1  | abc  | 20  |&#xA;| 2  | def  | 30  |&#xA;| 3  | ghi  | 40  |&#xA;+----+------+-----+&#xA;sqlite&gt; PRAGMA table_info(users_copy);&#xA;+-----+------+------+---------+------------+----+&#xA;| cid | name | type | notnull | dflt_value | pk |&#xA;+-----+------+------+---------+------------+----+&#xA;| 0   | id   | INT  | 0       |            | 0  |&#xA;| 1   | name | TEXT | 0       |            | 0  |&#xA;| 2   | age  | INT  | 0       |            | 0  |&#xA;+-----+------+------+---------+------------+----+&#xA;sqlite&gt; PRAGMA table_info(users);&#xA;+-----+------+---------+---------+------------+----+&#xA;| cid | name |  type   | notnull | dflt_value | pk |&#xA;+-----+------+---------+---------+------------+----+&#xA;| 0   | id   | INTEGER | 0       |            | 1  |&#xA;| 1   | name | TEXT    | 1       |            | 0  |&#xA;| 
2   | age  | INTEGER | 1       |            | 0  |&#xA;+-----+------+---------+---------+------------+----+&#xA;sqlite&gt;&#xA;```&#xA;&#xA;&#xA;&#xA;## What does CTAS do?&#xA;&#xA;It will create a new table called `users_copy` that is a copy of the `users` table, with the same columns and data.&#xA;&#xA;However, it just copies the data and structure, not the entire schema.&#xA;&#xA;What I mean is that if you have constraints and indexes, the SELECT statement won&#39;t copy them over to the new table.&#xA;&#xA;So, it will not copy the following:&#xA;- Constraints&#xA;- Indexes&#xA;- Triggers&#xA;&#xA;However, it will copy everything in the SELECT statement&#39;s output.&#xA;&#xA;So, you can add custom columns, computed columns, etc.&#xA;&#xA;You can also filter the SELECT statement with `WHERE` and `ORDER BY` clauses to store only the subset of data you want.&#xA;&#xA;## Copying only the structure&#xA;&#xA;If you want to copy only the structure and not the data, you can exclude every row from the SELECT statement with a `WHERE` clause that is always false. This will just create the columns in the new table. 
This gives you a fresh copy of the table, without any data.&#xA;&#xA;```sql&#xA;CREATE TABLE users_copy AS SELECT * FROM users WHERE 0;&#xA;```&#xA;&#xA;This will create a new table called `users_copy` that is a copy of the `users` table, but without any data.&#xA;&#xA;```sqlite&#xA;&#xA;sqlite&gt; CREATE TABLE users_copy AS SELECT * FROM users WHERE 0;&#xA;Parse error: table users_copy already exists&#xA;  CREATE TABLE users_copy AS SELECT * FROM users WHERE 0;&#xA;               ^--- error here&#xA;sqlite&gt; CREATE TABLE IF NOT EXISTS users_copy AS SELECT * FROM users WHERE 0;&#xA;sqlite&gt; SELECT * FROM users_copy;&#xA;+----+------+-----+&#xA;| id | name | age |&#xA;+----+------+-----+&#xA;| 1  | abc  | 20  |&#xA;| 2  | def  | 30  |&#xA;| 3  | ghi  | 40  |&#xA;+----+------+-----+&#xA;sqlite&gt; DROP TABLE users_copy;&#xA;sqlite&gt; CREATE TABLE IF NOT EXISTS users_copy AS SELECT * FROM users WHERE 0;&#xA;sqlite&gt; SELECT * FROM users_copy;&#xA;sqlite&gt; PRAGMA table_info(users_copy);&#xA;+-----+------+------+---------+------------+----+&#xA;| cid | name | type | notnull | dflt_value | pk |&#xA;+-----+------+------+---------+------------+----+&#xA;| 0   | id   | INT  | 0       |            | 0  |&#xA;| 1   | name | TEXT | 0       |            | 0  |&#xA;| 2   | age  | INT  | 0       |            | 0  |&#xA;+-----+------+------+---------+------------+----+&#xA;sqlite&gt;&#xA;```&#xA;As you can see, we need to actually drop the table in order to create a new table from scratch, just like any normal table in SQLite.&#xA;&#xA;So, there are a few more gotchas and details here than I thought; I will take a closer look at each one in the next few posts.&#xA;&#xA;</content>
      <type>sqlog</type>
    </item>
    <item>
      <title>SQLite SQL: Create Table If Not Exists</title>
      <link>https://meetgor.com/sqlog/sqlite-create-table-if-not-exists</link>
      <description>Creating table if not already exists The has one clause that we can add to create table if it doesn&#39;t exist already. So this comes handy if you already have cre</description>
      <pubDate>Fri, 26 Sep 2025 00:00:00 UTC</pubDate>
<content>&#xA;&#xA;## Creating table if not already exists&#xA;&#xA;The `CREATE TABLE` statement has one clause that we can add to create a table only if it doesn&#39;t exist already. This comes in handy if you have already created a table: instead of throwing an error, it gracefully handles the query and doesn&#39;t re-create the table. &#xA;&#xA;Let&#39;s first create a table; as usual, we&#39;ll use the good ol&#39; `users` table.&#xA;&#xA;```sql&#xA;CREATE TABLE users (&#xA;    id INTEGER PRIMARY KEY,&#xA;    name TEXT NOT NULL&#xA;);&#xA;```&#xA;Let&#39;s insert some data into the table.&#xA;&#xA;```sql&#xA;INSERT INTO users (name) VALUES (&#39;abc&#39;), (&#39;def&#39;);&#xA;```&#xA;&#xA;Ok, the table `users` exists and also has some data in it.&#xA;&#xA;```sql&#xA;SELECT * FROM users;&#xA;```&#xA;&#xA;Now, if we try to create the same table again, it will throw an error.&#xA;&#xA;```sql&#xA;CREATE TABLE users (&#xA;    id INTEGER PRIMARY KEY,&#xA;    name TEXT NOT NULL,&#xA;    email TEXT NOT NULL,&#xA;    password TEXT NOT NULL&#xA;);&#xA;```&#xA;&#xA;It will result in a `users` table already exists error.&#xA;&#xA;You cannot re-create a table that already exists; you need to drop the table and create it again, or alter any columns you want to, if that is possible.&#xA;&#xA;Or, if you just want to create a table only if it doesn&#39;t already exist, then you can add the `IF NOT EXISTS` clause:&#xA;&#xA;```sql&#xA;CREATE TABLE IF NOT EXISTS users (&#xA;    id INTEGER PRIMARY KEY,&#xA;    name TEXT NOT NULL,&#xA;    email TEXT NOT NULL,&#xA;    password TEXT NOT NULL&#xA;);&#xA;```&#xA;&#xA;Now, if we try to create the table again, it will not throw any error.&#xA;&#xA;However, it won&#39;t re-create the `users` table; it will just skip the creation.&#xA;&#xA;The table remains as is.&#xA;&#xA;```sqlite&#xA;&#xA;sqlite&gt; CREATE TABLE users (&#xA;    id INTEGER PRIMARY KEY,&#xA;    name TEXT NOT NULL&#xA;);&#xA;sqlite&gt; INSERT INTO users (name) VALUES (&#39;abc&#39;), 
(&#39;def&#39;);&#xA;sqlite&gt; .mode table&#xA;sqlite&gt; SELECT * FROM users;&#xA;+----+------+&#xA;| id | name |&#xA;+----+------+&#xA;| 1  | abc  |&#xA;| 2  | def  |&#xA;+----+------+&#xA;sqlite&gt; CREATE TABLE users (&#xA;    id INTEGER PRIMARY KEY,&#xA;    name TEXT NOT NULL,&#xA;    email TEXT NOT NULL,&#xA;    password TEXT NOT NULL&#xA;);&#xA;Parse error: table users already exists&#xA;  CREATE TABLE users (     id INTEGER PRIMARY KEY,     name TEXT NOT NULL,     e&#xA;               ^--- error here&#xA;sqlite&gt; CREATE TABLE IF NOT EXISTS users (&#xA;    id INTEGER PRIMARY KEY,&#xA;    name TEXT NOT NULL,&#xA;    email TEXT NOT NULL,&#xA;    password TEXT NOT NULL&#xA;);&#xA;sqlite&gt; SELECT * FROM users;&#xA;+----+------+&#xA;| id | name |&#xA;+----+------+&#xA;| 1  | abc  |&#xA;| 2  | def  |&#xA;+----+------+&#xA;sqlite&gt; .schema users&#xA;CREATE TABLE users (&#xA;    id INTEGER PRIMARY KEY,&#xA;    name TEXT NOT NULL&#xA;);&#xA;sqlite&gt;&#xA;```&#xA;&#xA;## Where can we use this?&#xA;&#xA;- Accidental cases: avoiding creating a table that already exists.&#xA;- Code generation tools: if you are using schema-based code generation tools like [sqlc](https://github.com/sqlc-dev/sqlc), you might keep re-running the generation code again and again; to avoid the error statements, you can use this clause to stop the table from being re-created and the queries from erroring.&#xA;&#xA;</content>
      <type>sqlog</type>
    </item>
    <item>
      <title>SQLite SQL: Create Temporary Table</title>
      <link>https://meetgor.com/sqlog/sqlite-create-temp-table</link>
      <description>Temporary Table in SQLite A Temporary table as the name suggests, is a temporary table. It only resisdes on the database until the current session of the databa</description>
      <pubDate>Wed, 24 Sep 2025 00:00:00 UTC</pubDate>
<content>&#xA;&#xA;## Temporary Table in SQLite&#xA;&#xA;A temporary table, as the name suggests, is a temporary table. It only resides in the database for the duration of the current session.&#xA;&#xA;In the case of SQLite, the temporary table is available until the connection that created it is closed.&#xA;&#xA;The table is created in a separate database called `temp` that is stored in your temporary path.&#xA;&#xA;## Create a temporary table&#xA;&#xA;To create a temporary table, simply use the `TEMP` or `TEMPORARY` keyword before `TABLE` in the `CREATE TABLE` statement. So, `CREATE TEMP TABLE` or `CREATE TEMPORARY TABLE` is the notation to create a temporary table in SQLite.&#xA;&#xA;```sql&#xA;CREATE TEMP TABLE users (&#xA;    id INTEGER PRIMARY KEY,&#xA;    name TEXT&#xA;);&#xA;```&#xA;This will create a temporary table called `users` in the `temp` database.&#xA;&#xA;You can insert, update, delete, query, and relate it to other tables just like a normal table.&#xA;&#xA;```sql&#xA;INSERT INTO users (name) VALUES (&#39;abc&#39;), (&#39;def&#39;), (&#39;ghi&#39;);&#xA;```&#xA;&#xA;```sql&#xA;SELECT * FROM users;&#xA;```&#xA;&#xA;```sql&#xA;UPDATE users SET name = &#39;xyz&#39; WHERE name = &#39;abc&#39;;&#xA;SELECT * FROM users;&#xA;```&#xA;&#xA;```sql&#xA;DELETE FROM users WHERE name = &#39;def&#39;;&#xA;```&#xA;&#xA;```sql&#xA;CREATE TABLE posts (&#xA;    id INTEGER PRIMARY KEY,&#xA;    title TEXT,&#xA;    content TEXT,&#xA;    user_id INTEGER,&#xA;    FOREIGN KEY (user_id) REFERENCES users (id)&#xA;);&#xA;```&#xA;&#xA;```sqlite&#xA;$ sqlite3 myusers.db&#xA;&#xA;sqlite&gt; CREATE TEMP TABLE users (&#xA;    id INTEGER PRIMARY KEY,&#xA;    name TEXT&#xA;);&#xA;&#xA;sqlite&gt; .tables&#xA;temp.users&#xA;&#xA;sqlite&gt; INSERT INTO users (name) VALUES (&#39;abc&#39;), (&#39;def&#39;), (&#39;ghi&#39;);&#xA;sqlite&gt; .mode table&#xA;sqlite&gt; SELECT * FROM users;&#xA;+----+------+&#xA;| id | name |&#xA;+----+------+&#xA;| 
1  | abc  |&#xA;| 2  | def  |&#xA;| 3  | ghi  |&#xA;+----+------+&#xA;sqlite&gt; UPDATE users SET name = &#39;xyz&#39; WHERE name = &#39;abc&#39;;&#xA;SELECT * FROM users;&#xA;+----+------+&#xA;| id | name |&#xA;+----+------+&#xA;| 1  | xyz  |&#xA;| 2  | def  |&#xA;| 3  | ghi  |&#xA;+----+------+&#xA;sqlite&gt; DELETE FROM users WHERE name = &#39;def&#39;;&#xA;sqlite&gt; SELECT * FROM users;&#xA;+----+------+&#xA;| id | name |&#xA;+----+------+&#xA;| 1  | xyz  |&#xA;| 3  | ghi  |&#xA;+----+------+&#xA;sqlite&gt; CREATE TABLE posts (&#xA;    id INTEGER PRIMARY KEY,&#xA;    title TEXT,&#xA;    content TEXT,&#xA;    user_id INTEGER,&#xA;    FOREIGN KEY (user_id) REFERENCES users (id)&#xA;);&#xA;&#xA;sqlite&gt; .tables&#xA;posts temp.users&#xA;&#xA;sqlite&gt; INSERT INTO posts(title, content, user_id) VALUES(&#39;banger post&#39;, &#39;ai slop&#39;, 1);&#xA;sqlite&gt; INSERT INTO posts(title, content, user_id) VALUES(&#39;ai slop&#39;, &#39;banger content&#39;, 3);&#xA;sqlite&gt; SELECT * FROM posts;&#xA;+----+-------------+----------------+---------+&#xA;| id |    title    |    content     | user_id |&#xA;+----+-------------+----------------+---------+&#xA;| 1  | banger post | ai slop        | 1       |&#xA;| 2  | ai slop     | banger content | 3       |&#xA;+----+-------------+----------------+---------+&#xA;sqlite&gt;&#xA;&#xA;```&#xA;&#xA;So, this creates a temporary table, but what if we already have a `users` table in the database?&#xA;&#xA;&gt; You can notice, when I query all the tables with the dot-command `.tables`, it adds a prefix of `temp.` to the temporary table. That is an indication of the table being stored in a temporary database file. 
Hence, if there were two `users` tables created, we would be able to tell them apart by the `temp` and `main` database names.&#xA;&#xA;The `main` prefix is optional, but if you want to refer to the temporary table explicitly, mention it as `temp.users` or `temp.&lt;table_name&gt;`.&#xA;&#xA;However, if you try to perform anything on the actual table, you will have to explicitly mention the `main` prefix, as the temp table takes precedence over the table in the main database.&#xA;&#xA;I reopened the database as `myusers.db`; now the `users` table, which was the temporary table, is gone.&#xA;&#xA;```sql&#xA;.tables&#xA;```&#xA;&#xA;This only shows the `posts` table; the temporary table was destroyed.&#xA;&#xA;Let&#39;s re-create the `users` table as a temporary table in the database.&#xA;&#xA;```sql&#xA;CREATE TEMP TABLE users (&#xA;    id INTEGER PRIMARY KEY,&#xA;    name TEXT&#xA;);&#xA;```&#xA;&#xA;This will create the table in the `temp` database.&#xA;&#xA;Now, if we check the tables, we will see the `temp.users` table.&#xA;&#xA;```sql&#xA;.tables&#xA;```&#xA;Let&#39;s create a new table `users` in the main database.&#xA;&#xA;```sql&#xA;CREATE TABLE users (&#xA;    id INTEGER PRIMARY KEY,&#xA;    name TEXT&#xA;);&#xA;```&#xA;This would create a `users` table that will be stored permanently in the main database, i.e. 
the `myusers.db` file in my case.&#xA;&#xA;Let&#39;s view the `.tables` output; it shows 3 tables now:&#xA;&#xA;- `posts`&#xA;- `temp.users`&#xA;- `users`&#xA;&#xA;Now, here, if you insert a bunch of different values into the `users` table...&#xA;&#xA;```sql&#xA;INSERT INTO users (name) VALUES (&#39;abc&#39;), (&#39;def&#39;), (&#39;ghi&#39;);&#xA;```&#xA;&#xA;...and query the `users` table:&#xA;&#xA;```sql&#xA;SELECT * FROM users;&#xA;```&#xA;&#xA;Which table did we insert into and query?&#xA;&#xA;The `temp.users` table, because the precedence of the temporary table is higher than the main table.&#xA;&#xA;So, let&#39;s try to query the `temp.users` table.&#xA;&#xA;```sql&#xA;SELECT * FROM temp.users;&#xA;```&#xA;&#xA;As you can see, this table has the records. &#xA;&#xA;But if we try to query the `main.users` table, it will have no records.&#xA;&#xA;```sql&#xA;SELECT * FROM main.users;&#xA;```&#xA;This is empty.&#xA;As expected, the bare `users` name refers to the temporary table and not the main table.&#xA;&#xA;So, let&#39;s insert different values in the `main.users` table.&#xA;&#xA;```sql&#xA;INSERT INTO main.users (name) VALUES (&#39;pqr&#39;), (&#39;stu&#39;);&#xA;```&#xA;&#xA;And query the `main.users` table.&#xA;&#xA;```sql&#xA;SELECT * FROM main.users;&#xA;```&#xA;This will show the inserted records in the `main.users` table.&#xA;&#xA;Here&#39;s the SQLog :)&#xA;&#xA;```sqlite&#xA;$ sqlite3 myusers.db&#xA;sqlite&gt; .tables&#xA;posts&#xA;&#xA;sqlite&gt; CREATE TEMP TABLE users (&#xA;    id INTEGER PRIMARY KEY,&#xA;    name TEXT&#xA;);&#xA;&#xA;sqlite&gt; .tables&#xA;posts       temp.users&#xA;&#xA;sqlite&gt; CREATE TABLE users (&#xA;    id INTEGER PRIMARY KEY,&#xA;    name TEXT&#xA;);&#xA;&#xA;sqlite&gt; .tables&#xA;posts       temp.users  users&#xA;&#xA;sqlite&gt; .schema temp.users&#xA;&#xA;CREATE TABLE temp.users (&#xA;    id INTEGER PRIMARY KEY,&#xA;    name TEXT&#xA;);&#xA;&#xA;sqlite&gt; .schema main.users&#xA;CREATE TABLE users (&#xA;    id 
INTEGER PRIMARY KEY,&#xA;    name TEXT&#xA;);&#xA;&#xA;sqlite&gt; SELECT * FROM users;&#xA;sqlite&gt; SELECT * FROM temp.users;&#xA;sqlite&gt; INSERT INTO users (name) VALUES (&#39;abc&#39;), (&#39;def&#39;), (&#39;ghi&#39;);&#xA;sqlite&gt; SELECT * FROM users;&#xA;+----+------+&#xA;| id | name |&#xA;+----+------+&#xA;| 1  | abc  |&#xA;| 2  | def  |&#xA;| 3  | ghi  |&#xA;+----+------+&#xA;sqlite&gt; SELECT * FROM temp.users;&#xA;+----+------+&#xA;| id | name |&#xA;+----+------+&#xA;| 1  | abc  |&#xA;| 2  | def  |&#xA;| 3  | ghi  |&#xA;+----+------+&#xA;sqlite&gt; SELECT * FROM main.users;&#xA;sqlite&gt; INSERT INTO main.users (name) VALUES (&#39;pqr&#39;), (&#39;stu&#39;);&#xA;sqlite&gt; SELECT * FROM main.users;&#xA;+----+------+&#xA;| id | name |&#xA;+----+------+&#xA;| 1  | pqr  |&#xA;| 2  | stu  |&#xA;+----+------+&#xA;sqlite&gt; SELECT * FROM temp.users;&#xA;+----+------+&#xA;| id | name |&#xA;+----+------+&#xA;| 1  | abc  |&#xA;| 2  | def  |&#xA;| 3  | ghi  |&#xA;+----+------+&#xA;sqlite&gt; SELECT * FROM users;&#xA;+----+------+&#xA;| id | name |&#xA;+----+------+&#xA;| 1  | abc  |&#xA;| 2  | def  |&#xA;| 3  | ghi  |&#xA;+----+------+&#xA;sqlite&gt;&#xA;```&#xA;&#xA;So, to sum up:&#xA;- `CREATE TEMP TABLE` or `CREATE TEMPORARY TABLE` will create a temporary table in SQLite.&#xA;- The temporary table is available only until the connection that created it is closed.&#xA;- The temporary table is stored in a separate temporary file.&#xA;- The temporary table is prefixed with the `temp` database name.&#xA;- If there are two tables, one temporary and one permanent, with the same name, then the temporary table will be preferred unless `main` is prefixed to the table name.&#xA;</content>
      <type>sqlog</type>
    </item>
    <item>
      <title>SQLite SQL: Collate Column Modifier</title>
      <link>https://meetgor.com/sqlog/sqlite-collate-column-modifier</link>
      <description>Collate Column Modifier Collate is a modifier in SQL that specifies the column how to arrange and compare the values that would be populated in the rows. So, fo</description>
      <pubDate>Mon, 22 Sep 2025 00:00:00 UTC</pubDate>
<content>&#xA;&#xA;## Collate Column Modifier&#xA;&#xA;Collate is a modifier in SQL that specifies how the values in a column are compared and ordered.&#xA;&#xA;So, for a given table, we can specify a few modifiers that let SQL decide how to handle the values.&#xA;&#xA;## Adding Collate Modifier&#xA;&#xA;To add a collate modifier, we can use the `COLLATE` keyword, followed by the collation name.&#xA;&#xA;There are 3 built-in collations available in SQLite:&#xA;&#xA;- BINARY&#xA;- NOCASE&#xA;- RTRIM&#xA;&#xA;These define how the column&#39;s values are treated in later comparisons and ordering.&#xA;&#xA;Let&#39;s quickly understand each with a basic example.&#xA;&#xA;## BINARY&#xA;&#xA;The `BINARY` modifier is the **default collation** in SQLite.  &#xA;It compares values **byte by byte**, meaning it is case-sensitive and space-sensitive.&#xA;&#xA;Let&#39;s create a demo table:&#xA;&#xA;```sql&#xA;CREATE TABLE users (&#xA;    name TEXT COLLATE BINARY&#xA;);&#xA;```&#xA;Now, let&#39;s insert some values:&#xA;&#xA;```sql&#xA;INSERT INTO users (name) VALUES (&#39;jane&#39;), (&#39;John&#39;), (&#39;JANE&#39;), (&#39;Jane   &#39;);&#xA;```&#xA;&#xA;Here, I have inserted 4 names in the `users` table, deliberately in a mix of cases, one with trailing spaces. Let&#39;s now see how the values are populated in the table.&#xA;&#xA;```sql&#xA;SELECT * FROM users;&#xA;```&#xA;We see that all `4` rows are rightly populated.&#xA;&#xA;```sql&#xA;SELECT * FROM users WHERE name = &#39;jane&#39;;&#xA;```&#xA;Now, we see that only the exact `jane` row is returned. 
This is because the `BINARY` collation is case-sensitive and space-sensitive, so only the exact byte-for-byte match is returned.&#xA;&#xA;```sqlite&#xA;&#xA;sqlite&gt; CREATE TABLE users (&#xA;    name TEXT COLLATE BINARY&#xA;);&#xA;sqlite&gt; INSERT INTO users (name) VALUES (&#39;jane&#39;), (&#39;John&#39;), (&#39;JANE&#39;), (&#39;Jane   &#39;);&#xA;&#xA;sqlite&gt; SELECT * FROM users;&#xA;+---------+&#xA;|  name   |&#xA;+---------+&#xA;| jane    |&#xA;| John    |&#xA;| JANE    |&#xA;| Jane    |&#xA;+---------+&#xA;sqlite&gt; SELECT * FROM users WHERE name = &#39;jane&#39;;&#xA;+------+&#xA;| name |&#xA;+------+&#xA;| jane |&#xA;+------+&#xA;```&#xA;&#xA;This is the collation applied by default to the columns that have `TEXT` affinity (not necessarily the `TEXT` type).&#xA;&#xA;## NOCASE&#xA;&#xA;The `NOCASE` modifier is used to ignore the case of the values in the column. When two values of the column are compared as strings, their case is not considered.&#xA;&#xA;Let&#39;s create a typical `users` table with a `name` column:&#xA;&#xA;```sql&#xA;CREATE TABLE users (&#xA;    name TEXT COLLATE NOCASE&#xA;);&#xA;```&#xA;&#xA;Here, we have added a `COLLATE NOCASE` modifier to the `name` column.&#xA;&#xA;Let&#39;s insert some values:&#xA;&#xA;```sql&#xA;INSERT INTO users (name) VALUES (&#39;John&#39;), (&#39;jane&#39;), (&#39;JANE&#39;);&#xA;```&#xA;&#xA;Here, I have inserted 3 names in the `users` table, deliberately in a mix of cases: the `Jane` name appears in lowercase as well as uppercase. Let&#39;s now see how the values are populated in the table.&#xA;&#xA;```sql&#xA;SELECT * FROM users;&#xA;```&#xA;&#xA;We see that all `3` rows are rightly populated.&#xA;&#xA;Now, what would happen if the user queries for `jane`?&#xA;&#xA;```sql&#xA;SELECT * FROM users WHERE name = &#39;jane&#39;;&#xA;```&#xA;This returns both `jane`s: the lowercase `jane` as well as the uppercase `JANE`. 
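As an aside, the collation doesn't have to be baked into the schema; SQLite also accepts a `COLLATE` operator inside an expression, applied per comparison. A small sketch with Python's built-in `sqlite3` module (my own snippet, not from the original post):

```python
import sqlite3

con = sqlite3.connect(':memory:')
# No collation declared here, so the column defaults to BINARY.
con.execute('CREATE TABLE users (name TEXT)')
con.executemany('INSERT INTO users (name) VALUES (?)',
                [('John',), ('jane',), ('JANE',)])

# Apply NOCASE for this one comparison only, instead of per column.
print(con.execute(
    "SELECT name FROM users WHERE name = 'jane' COLLATE NOCASE"
).fetchall())
# [('jane',), ('JANE',)]
```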
This is because the `NOCASE` collation ignores case when comparing the values.&#xA;&#xA;```sqlite&#xA;sqlite&gt; CREATE TABLE users (&#xA;    name TEXT COLLATE NOCASE&#xA;);&#xA;&#xA;sqlite&gt; INSERT INTO users (name) VALUES (&#39;John&#39;), (&#39;jane&#39;), (&#39;JANE&#39;);&#xA;&#xA;sqlite&gt; .mode table&#xA;&#xA;sqlite&gt; SELECT * FROM users;&#xA;+------+&#xA;| name |&#xA;+------+&#xA;| John |&#xA;| jane |&#xA;| JANE |&#xA;+------+&#xA;&#xA;sqlite&gt; SELECT * FROM users WHERE name = &#39;jane&#39;;&#xA;+------+&#xA;| name |&#xA;+------+&#xA;| jane |&#xA;| JANE |&#xA;+------+&#xA;sqlite&gt;&#xA;```&#xA;&#xA;That is the basic idea of how the `NOCASE` collation works.&#xA;&#xA;&#xA;## RTRIM&#xA;&#xA;The `RTRIM` collation ignores trailing spaces when comparing values in the column; the stored value itself is left untouched.&#xA;&#xA;Let&#39;s create a typical `users` table with a `name` column:&#xA;&#xA;```sql&#xA;CREATE TABLE users (&#xA;    name TEXT COLLATE RTRIM&#xA;);&#xA;```&#xA;&#xA;Here, we have added a `COLLATE RTRIM` modifier to the `name` column.&#xA;&#xA;Let&#39;s insert some values:&#xA;&#xA;```sql&#xA;INSERT INTO users (name) VALUES (&#39;John&#39;), (&#39;jane&#39;), (&#39;JANE&#39;), (&#39;Jane   &#39;), (&#39;jane   &#39;);&#xA;```&#xA;Here, I have inserted 5 names in the `users` table, deliberately in a mix of cases and trailing spaces. Let&#39;s now see how the values are populated in the table.&#xA;&#xA;```sql&#xA;SELECT * FROM users;&#xA;```&#xA;&#xA;We see that all `5` rows are rightly populated.&#xA;&#xA;```sql&#xA;SELECT * FROM users WHERE name = &#39;jane&#39;;&#xA;```&#xA;&#xA;We see 2 rows being returned. This is because the `RTRIM` collation ignores the trailing spaces during the comparison, so `jane` and `jane   ` compare as equal (while `JANE` and `Jane   ` still differ by case).&#xA;&#xA;The first row is the plain `jane` without any spaces, the 2nd row that we inserted. 
The second row is `jane   ` with the trailing spaces, the last one that we inserted.&#xA;&#xA;&#xA;```sqlite&#xA;sqlite&gt; CREATE TABLE users (&#xA;    name TEXT COLLATE RTRIM&#xA;);&#xA;sqlite&gt; INSERT INTO users (name) VALUES (&#39;John&#39;), (&#39;jane&#39;), (&#39;JANE&#39;), (&#39;Jane   &#39;), (&#39;jane   &#39;);&#xA;sqlite&gt; SELECT * FROM users;&#xA;+---------+&#xA;|  name   |&#xA;+---------+&#xA;| John    |&#xA;| jane    |&#xA;| JANE    |&#xA;| Jane    |&#xA;| jane    |&#xA;+---------+&#xA;sqlite&gt; SELECT * FROM users WHERE name = &#39;jane&#39;;&#xA;+---------+&#xA;|  name   |&#xA;+---------+&#xA;| jane    |&#xA;| jane    |&#xA;+---------+&#xA;sqlite&gt; SELECT rowid, * FROM users WHERE name = &#39;jane&#39;;&#xA;+-------+---------+&#xA;| rowid |  name   |&#xA;+-------+---------+&#xA;| 2     | jane    |&#xA;| 5     | jane    |&#xA;+-------+---------+&#xA;sqlite&gt;&#xA;```&#xA;&#xA;That is the basic idea of how the `RTRIM` collation works.&#xA;&#xA;&#xA;</content>
      <type>sqlog</type>
    </item>
    <item>
      <title>SQLite SQL: Many to Many Table without RowID</title>
      <link>https://meetgor.com/sqlog/sqlite-many-to-many-table-without-rowid</link>
      <description>Many to Many Relation We saw a basic example of a Many-To-Many relation in the second-to-last post; there we focused on the concept of the relation and not so mu</description>
      <pubDate>Sat, 20 Sep 2025 00:00:00 UTC</pubDate>
      <content>&#xA;&#xA;## Many to Many Relation&#xA;&#xA;We saw a basic example of a [Many-To-Many](https://www.meetgor.com/sqlog/sqlite-many-to-many-relations/) relation in the second-to-last post; there we focused on the concept of the relation and not so much on the structure of the junction table.&#xA;&#xA;Let&#39;s take a look at the schema again:&#xA;&#xA;```sql&#xA;CREATE TABLE users (&#xA;    id INTEGER PRIMARY KEY,&#xA;    name TEXT NOT NULL&#xA;);&#xA;&#xA;CREATE TABLE posts (&#xA;    id INTEGER PRIMARY KEY,&#xA;    title TEXT NOT NULL,&#xA;    content TEXT NOT NULL&#xA;);&#xA;&#xA;CREATE TABLE author_post (&#xA;    user_id INTEGER NOT NULL,&#xA;    post_id INTEGER NOT NULL,&#xA;    PRIMARY KEY (user_id, post_id),&#xA;    FOREIGN KEY (user_id) REFERENCES users (id),&#xA;    FOREIGN KEY (post_id) REFERENCES posts (id)&#xA;);&#xA;```&#xA;&#xA;Let&#39;s now populate the tables.&#xA;&#xA;```sql&#xA;-- adding authors/users&#xA;INSERT INTO users(name) VALUES (&#39;Glauber&#39;), (&#39;Jamie&#39;), (&#39;Pekka&#39;);&#xA;&#xA;-- adding posts written by pekka&#xA;INSERT INTO posts(title, content) VALUES (&#39;Switching to Zig from Rust&#39;, &#39;I love C&#39;);&#xA;INSERT INTO posts(title, content) VALUES (&#39;RAG in SQLite&#39;, &#39;AI first database&#39;);&#xA;INSERT INTO author_post(user_id, post_id) VALUES (3, 1), (3, 2);&#xA;&#xA;-- adding posts written by glauber&#xA;INSERT INTO posts(title, content) VALUES (&#39;Rewriting SQLite&#39;, &#39;We are no more a sqlite-fork&#39;);&#xA;INSERT INTO posts(title, content) VALUES (&#39;Offline Writes in SQLite&#39;, &#39;Lets sync&#39;);&#xA;INSERT INTO author_post(user_id, post_id) VALUES (1, 3), (1, 4);&#xA;&#xA;-- adding a post co-authored by pekka and glauber&#xA;INSERT INTO posts(title, content) VALUES(&#39;Limbo&#39;, &#39;SQLite in Rust&#39;);&#xA;INSERT INTO author_post(user_id, post_id) VALUES (3, 5);&#xA;INSERT INTO author_post(user_id, post_id) VALUES (1, 5);&#xA;```&#xA;&#xA;This has now 
populated the following tables:&#xA;&#xA;```sql&#xA;SELECT&#xA;    p.id,&#xA;    p.content AS post,&#xA;    GROUP_CONCAT(u.name, &#39;, &#39;) AS authors&#xA;FROM posts p&#xA;JOIN author_post up ON p.id = up.post_id&#xA;JOIN users u ON u.id = up.user_id&#xA;GROUP BY p.id;&#xA;```&#xA;&#xA;This has 5 posts, two written by `Pekka`, two written by `Glauber`, and one co-authored by both `Pekka` and `Glauber`.&#xA;&#xA;Here&#39;s the SQLog:&#xA;&#xA;```&#xA;sqlite&gt; CREATE TABLE users (&#xA;    id INTEGER PRIMARY KEY,&#xA;    name TEXT NOT NULL&#xA;);&#xA;&#xA;CREATE TABLE posts (&#xA;    id INTEGER PRIMARY KEY,&#xA;    title TEXT NOT NULL,&#xA;    content TEXT NOT NULL&#xA;);&#xA;&#xA;CREATE TABLE author_post (&#xA;    user_id INTEGER NOT NULL,&#xA;    post_id INTEGER NOT NULL,&#xA;    PRIMARY KEY (user_id, post_id),&#xA;    FOREIGN KEY (user_id) REFERENCES users (id),&#xA;    FOREIGN KEY (post_id) REFERENCES posts (id)&#xA;);&#xA;sqlite&gt; .mode table&#xA;sqlite&gt; INSERT INTO users(name) VALUES (&#39;Glauber&#39;), (&#39;Jamie&#39;), (&#39;Pekka&#39;);&#xA;sqlite&gt; INSERT INTO posts(title, content) VALUES (&#39;Switching to Zig from Rust&#39;, &#39;I love C&#39;);&#xA;INSERT INTO posts(title, content) VALUES (&#39;RAG in SQLite&#39;, &#39;AI first database&#39;);&#xA;SELECT * FROM posts;&#xA;+----+----------------------------+-------------------+&#xA;| id |           title            |      content      |&#xA;+----+----------------------------+-------------------+&#xA;| 1  | Switching to Zig from Rust | I love C          |&#xA;| 2  | RAG in SQLite              | AI first database |&#xA;+----+----------------------------+-------------------+&#xA;sqlite&gt; INSERT INTO posts(title, content) VALUES (&#39;Rewriting SQLite&#39;, &#39;We are no more a sqlite-fork&#39;);&#xA;INSERT INTO posts(title, content) VALUES (&#39;Offline Writes in SQLite&#39;, &#39;Lets sync&#39;);&#xA;SELECT * FROM 
posts;&#xA;+----+----------------------------+------------------------------+&#xA;| id |           title            |           content            |&#xA;+----+----------------------------+------------------------------+&#xA;| 1  | Switching to Zig from Rust | I love C                     |&#xA;| 2  | RAG in SQLite              | AI first database            |&#xA;| 3  | Rewriting SQLite           | We are no more a sqlite-fork |&#xA;| 4  | Offline Writes in SQLite   | Lets sync                    |&#xA;+----+----------------------------+------------------------------+&#xA;sqlite&gt; INSERT INTO author_post(user_id, post_id) VALUES (1, 3), (1, 4);&#xA;sqlite&gt; INSERT INTO author_post(user_id, post_id) VALUES (3, 1), (3, 2);&#xA;sqlite&gt; INSERT INTO posts(title, content) VALUES(&#39;Limbo&#39;, &#39;SQLite in Rust&#39;);&#xA;sqlite&gt; INSERT INTO author_post(user_id, post_id) VALUES (3, 5);&#xA;INSERT INTO author_post(user_id, post_id) VALUES (1, 5);&#xA;sqlite&gt; SELECT&#xA;    p.id,&#xA;    p.content AS post,&#xA;    GROUP_CONCAT(u.name, &#39;, &#39;) AS authors&#xA;FROM posts p&#xA;JOIN author_post up ON p.id = up.post_id&#xA;JOIN users u ON u.id = up.user_id&#xA;GROUP BY p.id;&#xA;+----+------------------------------+----------------+&#xA;| id |             post             |    authors     |&#xA;+----+------------------------------+----------------+&#xA;| 1  | I love C                     | Pekka          |&#xA;| 2  | AI first database            | Pekka          |&#xA;| 3  | We are no more a sqlite-fork | Glauber        |&#xA;| 4  | Lets sync                    | Glauber        |&#xA;| 5  | SQLite in Rust               | Pekka, Glauber |&#xA;+----+------------------------------+----------------+&#xA;sqlite&gt; SELECT * FROM author_post;&#xA;+---------+---------+&#xA;| user_id | post_id |&#xA;+---------+---------+&#xA;| 1       | 3       |&#xA;| 1       | 4       |&#xA;| 3       | 1       |&#xA;| 3       | 2       |&#xA;| 3       | 5       |&#xA;| 1       
| 5       |&#xA;+---------+---------+&#xA;&#xA;sqlite&gt; SELECT rowid, * FROM author_post;&#xA;+-------+---------+---------+&#xA;| rowid | user_id | post_id |&#xA;+-------+---------+---------+&#xA;| 1     | 1       | 3       |&#xA;| 2     | 1       | 4       |&#xA;| 3     | 3       | 1       |&#xA;| 4     | 3       | 2       |&#xA;| 5     | 3       | 5       |&#xA;| 6     | 1       | 5       |&#xA;+-------+---------+---------+&#xA;sqlite&gt;&#xA;```&#xA;&#xA;The interesting part here is this `rowid`:&#xA;&#xA;```sql&#xA;SELECT rowid, * FROM author_post;&#xA;```&#xA;&#xA;The `author_post` table&#39;s primary key is the combination of the `user_id` and `post_id` columns. However, it also stores a `rowid`, which is just redundant, right?&#xA;&#xA;Why would you not want to have the `rowid`?&#xA;&#xA;The answer is subtle, but it can impact the database once `author_post` has millions or billions of rows. In an ordinary rowid table, a composite `PRIMARY KEY` is implemented as a separate unique index, so each row is effectively stored twice: once in the table (keyed by the `rowid`) and once in the index. That space is wasted for no reason.&#xA;&#xA;Hence, we can avoid creating the rowid using the `WITHOUT ROWID` clause while creating the table.&#xA;&#xA;```sql&#xA;DROP TABLE author_post;&#xA;&#xA;CREATE TABLE author_post (&#xA;    user_id INTEGER NOT NULL,&#xA;    post_id INTEGER NOT NULL,&#xA;    PRIMARY KEY (user_id, post_id),&#xA;    FOREIGN KEY (user_id) REFERENCES users (id),&#xA;    FOREIGN KEY (post_id) REFERENCES posts (id)&#xA;) WITHOUT ROWID;&#xA;```&#xA;&#xA;The rest of the commands, from inserts to selects, and all of the things we saw above will remain the same.&#xA;&#xA;```sql&#xA;-- adding authors/users&#xA;INSERT INTO users(name) VALUES (&#39;Glauber&#39;), (&#39;Jamie&#39;), (&#39;Pekka&#39;);&#xA;&#xA;-- adding posts written by pekka&#xA;INSERT INTO posts(title, content) VALUES (&#39;Switching to Zig from Rust&#39;, &#39;I love C&#39;);&#xA;INSERT INTO posts(title, content) VALUES (&#39;RAG in SQLite&#39;, &#39;AI first database&#39;);&#xA;INSERT INTO author_post(user_id, post_id) VALUES 
(3, 1), (3, 2);&#xA;&#xA;-- adding posts written by glauber&#xA;INSERT INTO posts(title, content) VALUES (&#39;Rewriting SQLite&#39;, &#39;We are no more a sqlite-fork&#39;);&#xA;INSERT INTO posts(title, content) VALUES (&#39;Offline Writes in SQLite&#39;, &#39;Lets sync&#39;);&#xA;INSERT INTO author_post(user_id, post_id) VALUES (1, 3), (1, 4);&#xA;&#xA;-- adding a post co-authored by pekka and glauber&#xA;INSERT INTO posts(title, content) VALUES(&#39;Limbo&#39;, &#39;SQLite in Rust&#39;);&#xA;INSERT INTO author_post(user_id, post_id) VALUES (3, 5);&#xA;INSERT INTO author_post(user_id, post_id) VALUES (1, 5);&#xA;```&#xA;&#xA;However, when you query the `author_post` table, the `rowid` will not be returned, as it doesn&#39;t exist.&#xA;&#xA;```sql&#xA;SELECT * FROM author_post;&#xA;```&#xA;&#xA;```sql&#xA;SELECT rowid, * FROM author_post;&#xA;```&#xA;This would return an error, as the `rowid` column doesn&#39;t exist.&#xA;&#xA;We still keep the `rowid` in the `posts` and `users` tables, as their primary keys are single `INTEGER` columns.&#xA;&#xA;Both the `users` and `posts` tables have the `rowid` aliased as their `id` column, because of the special case of `INTEGER PRIMARY KEY`:&#xA;&#xA;```sql&#xA;SELECT rowid, _rowid_, * FROM users;&#xA;```&#xA;&#xA;```sql&#xA;SELECT rowid, _rowid_, * FROM posts;&#xA;```&#xA;&#xA;Only in the case of `author_post`, or any junction table whose primary key is the combination of two foreign keys, do we not need the `rowid`. The junction table doesn&#39;t hold any unique data of its own; it is just a mapping between existing records in the other two tables of the database.&#xA;&#xA;&#xA;</content>
      <type>sqlog</type>
    </item>
    <item>
      <title>SQLite SQL: One to One Relations</title>
      <link>https://meetgor.com/sqlog/sqlite-one-to-one-relations</link>
      <description>One to One Relations I might have missed this basic relationship model as I haven&#39;t found it commonly used, but it&#39;s still used in very specific examples</description>
      <pubDate>Fri, 19 Sep 2025 00:00:00 UTC</pubDate>
      <content>&#xA;&#xA;## One to One Relations&#xA;&#xA;I might have missed this basic relationship model as I haven&#39;t found it commonly used, but it&#39;s still used in very specific cases.&#xA;&#xA;That relationship is `one-to-one`. As the name suggests, it maps one row to exactly one row.&#xA;&#xA;Unlike the `one-to-many` relation, where one record/row is connected to many others by multiple ropes, in a `one-to-one` relation the two records/rows are connected by a single rope.&#xA;&#xA;Where do you think this could be used?&#xA;&#xA;It can&#39;t be applied to a user-and-posts relation model, as that would be far too rigid a constraint: one user could only author one post, and one post could only be authored by one user.&#xA;&#xA;So, can you think of a relation where one record or row is tied to exactly one other record or row in the other table?&#xA;&#xA;Well, it could be a user and their unique card, passport or even a subscription.&#xA;&#xA;1. One user can only have one passport (or any other identity document).&#xA;2. One user can only have one credit card (for a specific bank).&#xA;3. One user can only have one subscription (to a specific service).&#xA;&#xA;In those cases, I think the `one-to-one` relation serves well and is maybe the only way to enforce the constraint.&#xA;&#xA;You can definitely restrict the `one-to-many` relation to get this done, but it might be a little weird. 
We&#39;ll check that in another post.&#xA;&#xA;## Creating a One to One Relation&#xA;&#xA;How do we define a one-to-one relation? A plain foreign key can be referenced from multiple rows, so what if that foreign key were also the primary key?&#xA;&#xA;Ok, let me explain more clearly.&#xA;&#xA;```sql&#xA;CREATE TABLE users (&#xA;    id INTEGER PRIMARY KEY,&#xA;    email TEXT NOT NULL,&#xA;    password TEXT NOT NULL&#xA;);&#xA;&#xA;CREATE TABLE news_subscriptions (&#xA;    id INTEGER PRIMARY KEY,&#xA;    name TEXT NOT NULL,&#xA;    price INTEGER NOT NULL,&#xA;    status TEXT NOT NULL,&#xA;    expiry_date TEXT NOT NULL,&#xA;    user_id INTEGER NOT NULL,&#xA;    FOREIGN KEY (user_id) REFERENCES users(id)&#xA;);&#xA;```&#xA;&#xA;In the above example, we have a `users` table and a `news_subscriptions` table, which we want to be in a one-to-one relation with the `users` table.&#xA;&#xA;Here, the `user_id` is a foreign key in the `news_subscriptions` table that refers to the `id` of the `users` table.&#xA;&#xA;This, as you know, is a `one-to-many` relation, as one user can have multiple news subscriptions.&#xA;&#xA;Because of this, one user can create multiple subscriptions, and we don&#39;t want that, right?&#xA;&#xA;How can we restrict the many subscriptions to one user?&#xA;&#xA;What if the `PRIMARY KEY` of the `news_subscriptions` table were the same as that of the `users` table?&#xA;&#xA;That would prevent adding any duplicate subscription for the same user.&#xA;&#xA;Solved!&#xA;&#xA;```sql&#xA;DROP TABLE IF EXISTS users;&#xA;DROP TABLE IF EXISTS news_subscriptions;&#xA;&#xA;CREATE TABLE users (&#xA;    id INTEGER PRIMARY KEY,&#xA;    email TEXT NOT NULL,&#xA;    password TEXT NOT NULL&#xA;);&#xA;&#xA;CREATE TABLE news_subscriptions (&#xA;    user_id INTEGER PRIMARY KEY,&#xA;    name TEXT NOT NULL,&#xA;    price INTEGER NOT NULL,&#xA;    status TEXT NOT NULL,&#xA;    expiry_date TEXT NOT NULL,&#xA;    FOREIGN KEY (user_id) REFERENCES users(id)&#xA;);&#xA;```&#xA;&#xA;Here, instead of 
a separate `user_id` column, we have made the `user_id` the `PRIMARY KEY` of the `news_subscriptions` table.&#xA;&#xA;NOTE: Oops! Don&#39;t forget to turn on the `PRAGMA foreign_keys` option.&#xA;&#xA;&gt; PRAGMA foreign_keys = ON;&#xA;&#xA;This ties each record/row of the `news_subscriptions` table directly to exactly one record/row in the `users` table.&#xA;&#xA;Let&#39;s try inserting a few users:&#xA;&#xA;```sql&#xA;INSERT INTO users(email, password) VALUES(&#39;harry&#39;, &#39;harry123&#39;),&#xA;(&#39;ron&#39;, &#39;ron123&#39;),&#xA;(&#39;malfoy&#39;, &#39;malfoy123&#39;);&#xA;```&#xA;&#xA;This will insert 3 records/rows in the `users` table.&#xA;&#xA;```sql&#xA;SELECT * FROM users;&#xA;```&#xA;&#xA;Now, let&#39;s try inserting a subscription:&#xA;&#xA;```sql&#xA;INSERT INTO news_subscriptions(user_id, name, price, status, expiry_date)&#xA;VALUES(1, &#39;Wizardry Weekly&#39;, 500, &#39;active&#39;, &#39;2025-12-31&#39;);&#xA;```&#xA;&#xA;This will insert 1 record/row in the `news_subscriptions` table. 
This is a subscription for the user with `id` as `1`.&#xA;&#xA;```sql&#xA;SELECT * FROM news_subscriptions;&#xA;```&#xA;&#xA;Let&#39;s get the user info too:&#xA;&#xA;```sql&#xA;SELECT &#xA;    u.id, u.email,&#xA;    n.*&#xA;FROM users u&#xA;JOIN news_subscriptions n&#xA;ON u.id = n.user_id;&#xA;```&#xA;&#xA;Now, if we try to insert another subscription for the same user:&#xA;&#xA;```sql&#xA;INSERT INTO news_subscriptions(user_id, name, price, status, expiry_date)&#xA;VALUES(1, &#39;Muggle Cup&#39;, 250, &#39;active&#39;, &#39;2025-12-31&#39;);&#xA;```&#xA;&#xA;This will throw an error, as we have already inserted a subscription for the user with `id` as `1`.&#xA;&#xA;Hence we successfully created a one to one relation between the `users` and `news_subscriptions` tables.&#xA;&#xA;&#xA;```&#xA;sqlite&gt;&#xA;CREATE TABLE users (&#xA;    id INTEGER PRIMARY KEY,&#xA;    email TEXT NOT NULL,&#xA;    password TEXT NOT NULL&#xA;);&#xA;&#xA;CREATE TABLE news_subscriptions (&#xA;    user_id INTEGER PRIMARY KEY,&#xA;    name TEXT NOT NULL,&#xA;    price INTEGER NOT NULL,&#xA;    status TEXT NOT NULL,&#xA;    expiry_date TEXT NOT NULL,&#xA;    FOREIGN KEY (user_id) REFERENCES users(id)&#xA;);&#xA;sqlite&gt; .mode table&#xA;sqlite&gt; INSERT INTO users(email, password) VALUES(&#39;harry&#39;, &#39;harry123&#39;), (&#39;ron&#39;, &#39;ron123&#39;), (&#39;malfoy&#39;, &#39;malfoy123&#39;);&#xA;sqlite&gt; SELECT * FROM users;&#xA;+----+--------+-----------+&#xA;| id | email  | password  |&#xA;+----+--------+-----------+&#xA;| 1  | harry  | harry123  |&#xA;| 2  | ron    | ron123    |&#xA;| 3  | malfoy | malfoy123 |&#xA;+----+--------+-----------+&#xA;&#xA;sqlite&gt; INSERT INTO news_subscriptions(user_id, name, price, status, expiry_date) VALUES(1, &#39;Wizardry Weekly&#39;, 500, &#39;active&#39;, &#39;2025-12-31&#39;);&#xA;sqlite&gt; SELECT * FROM news_subscriptions;&#xA;+---------+-----------------+-------+--------+-------------+&#xA;| user_id |      name       | price | status 
| expiry_date |&#xA;+---------+-----------------+-------+--------+-------------+&#xA;| 1       | Wizardry Weekly | 500   | active | 2025-12-31  |&#xA;+---------+-----------------+-------+--------+-------------+&#xA;&#xA;sqlite&gt; INSERT INTO news_subscriptions(user_id, name, price, status, expiry_date) VALUES(1, &#39;Muggle Cup&#39;, 250, &#39;active&#39;, &#39;2025-12-31&#39;);&#xA;Runtime error: UNIQUE constraint failed: news_subscriptions.user_id (19)&#xA;sqlite&gt;&#xA;&#xA;sqlite&gt; SELECT&#xA;    u.id, u.email,&#xA;    n.*&#xA;FROM users u&#xA;JOIN news_subscriptions n&#xA;ON u.id = n.user_id;&#xA;+----+-------+---------+-----------------+-------+--------+-------------+&#xA;| id | email | user_id |      name       | price | status | expiry_date |&#xA;+----+-------+---------+-----------------+-------+--------+-------------+&#xA;| 1  | harry | 1       | Wizardry Weekly | 500   | active | 2025-12-31  |&#xA;+----+-------+---------+-----------------+-------+--------+-------------+&#xA;sqlite&gt;&#xA;```&#xA;&#xA;Dead simple: this is how we create a `one-to-one` relation.&#xA;&#xA;We make the child table&#39;s `primary key` double as a `foreign key` that references the parent table&#39;s primary key.&#xA;&#xA;</content>
      <type>sqlog</type>
    </item>
    <item>
      <title>SQLite SQL: Many to Many Relations</title>
      <link>https://meetgor.com/sqlog/sqlite-many-to-many-relations</link>
      <description>Many to Many Relations We have seen the one-to-many relationship and also the self-join relationship. Let&#39;s extend it with the many-to-many relationship. Here one row can refer to many other rows, but th</description>
      <pubDate>Thu, 18 Sep 2025 00:00:00 UTC</pubDate>
      <content>&#xA;&#xA;## Many to Many Relations&#xA;&#xA;We have seen the [one-to-many](https://www.meetgor.com/sqlog/sqlite-one-to-many-foreign-key/) relationship and also the [self-join](https://www.meetgor.com/sqlog/sqlite-self-join-relations) relationship.&#xA;&#xA;Let&#39;s extend it with the `many-to-many` relationship. Here one row can refer to many other rows, but that other row can also be referred to by multiple rows.&#xA;&#xA;That was confusing, so let&#39;s take the simple example of `users` and `posts` as usual.&#xA;&#xA;- In a `One-To-Many` relation, one post can only be written by one user. However, one user can author multiple posts.&#xA;- In a `Many-To-Many` relation, one user can author multiple posts and one post can be written by multiple users.&#xA;&#xA;That is the distinction; it is not about the relation itself, it is more about the design that your system needs.&#xA;&#xA;Let&#39;s start creating one simple example:&#xA;&#xA;```sql&#xA;CREATE TABLE users (&#xA;    id INTEGER PRIMARY KEY,&#xA;    name TEXT NOT NULL&#xA;);&#xA;&#xA;CREATE TABLE posts (&#xA;    id INTEGER PRIMARY KEY,&#xA;    title TEXT NOT NULL,&#xA;    content TEXT NOT NULL&#xA;);&#xA;```&#xA;&#xA;We have created two tables, `users` and `posts`, each representing its own data. However, we haven&#39;t referenced the users from the posts table, because that might not work out well.&#xA;&#xA;Why?&#xA;Well, there are a few considerations in a `many-to-many` relationship.&#xA;&#xA;- In a `one-to-many` relation, we could simply add a `user_id` column in posts. That works because each post has exactly one author.&#xA;- But in a `many-to-many` relation, a single post can have multiple authors. If we put a `user_id` inside posts, we’d be forced to choose only one user, right?&#xA;&#xA;What if we tried multiple columns like `user_id_1`, `user_id_2` and so on?&#xA;You can see that won&#39;t work, or rather it would create a lot of confusion and redundancy. What if a post has 4 authors? Or 6? 
Not practical, though there could be exceptions.&#xA;&#xA;When we say `many-to-many`, we truly mean `MANY` posts can be written by `ONE` author and `MANY` authors can write `ONE` post.&#xA;&#xA;This is where we need to create a separate table, usually called a junction or an association table. There are [other](https://en.wikipedia.org/wiki/Associative_entity) names too.&#xA;&#xA;For this we can create a table `author_post` (not the best of names, but the convention is to name it after the tables it relates). This table will reference the `PRIMARY KEY`s of both the `users` and the `posts` tables to map an author to a single `post`, and the `PRIMARY KEY` of this table becomes the pair of those two primary keys.&#xA;&#xA;```sql&#xA;CREATE TABLE author_post (&#xA;    user_id INTEGER NOT NULL,&#xA;    post_id INTEGER NOT NULL,&#xA;    PRIMARY KEY (user_id, post_id),&#xA;    FOREIGN KEY (user_id) REFERENCES users(id),&#xA;    FOREIGN KEY (post_id) REFERENCES posts(id)&#xA;);&#xA;```&#xA;&#xA;Here, `user_id` and `post_id` are the `PRIMARY KEY`s of the `users` and the `posts` tables respectively; however, they are `FOREIGN KEY`s for this `author_post` table. The `PRIMARY KEY` of the `author_post` table becomes the combination of both `user_id` and `post_id`, as `PRIMARY KEY(user_id, post_id)`.&#xA;&#xA;Let&#39;s take an example to make it clear as usual:&#xA;&#xA;Let&#39;s say `Pekka` (sir) is writing very technical details in a post and `Glauber` (sir) is writing the practical details in a post; both are important. They write individual posts, and both of them have their audiences on the same blog. But there is a topic they want to write about together; how can they co-author a post with the previous design of the `one-to-many` relation?&#xA;&#xA;In the case of the `one-to-many` relation, say the user `Pekka` has `id` `3` and `Glauber` has `id` `1`. They wrote their own posts with ids `98` and `99` respectively. 
Now, for the `100`th `post`, they want to write it together. If we create a `posts` record with the `author_id` as `3`, then it will only refer to the user with id `3`, which is `Pekka`; the user with `id` `1` can&#39;t also be linked to that `posts` record.&#xA;&#xA;If we tried to create 2 posts with different `user_id`s, it would be redundant to create 2 posts, and it would lead to inconsistent post counts, since each post would have a unique identifier.&#xA;&#xA;So, essentially, co-authoring is not possible in this `one-to-many` relationship; we need to do something to let them write together.&#xA;&#xA;The `many-to-many` relation will help them write a `post` together.&#xA;&#xA;They both can write the `posts` record with just the `title` and `content`. Let&#39;s say the `post` was created with `id` as `100`.&#xA;&#xA;Then, while publishing the `post`, they can provide the `user_id`s. So here, two records of `author_post` will be created: one with `user_id` as `3` and `post_id` as `100`, and similarly another record with `user_id` as `1` and `post_id` as `100`. Notice how we are duplicating the `post_id` (it is the same, `100`), but the `user_id` is different in each case. Hence the constraint is satisfied: the combination of `user_id` and `post_id` is unique. 
Hence those two can now write a `post` together: the `100`th post is co-authored by `Pekka` and `Glauber`.&#xA;&#xA;And finally `Pekka` and `Glauber` decided to write a [post](https://turso.tech/blog/introducing-limbo-a-complete-rewrite-of-sqlite-in-rust).&#xA;&#xA;NOTE: This is just a made-up example; I found a blog-post kind of table easier to relate to and come up with, so I took that and wrote this. Sorry if anyone got hurt.&#xA;&#xA;Let&#39;s look at actual queries:&#xA;&#xA;Create three users `Glauber`, `Jamie` and `Pekka`.&#xA;&#xA;```sql&#xA;INSERT INTO users(name) VALUES (&#39;Glauber&#39;), (&#39;Jamie&#39;), (&#39;Pekka&#39;);&#xA;```&#xA;These users are the authors with the ids `1`, `2` and `3` respectively.&#xA;&#xA;```sql&#xA;SELECT * FROM users;&#xA;```&#xA;As we can see in the `users` table.&#xA;&#xA;Then, let `Pekka` write some posts. He wrote 2 posts, one with the title `Switching to Zig from Rust` and the other with the title `RAG in SQLite`.&#xA;This is just for fun.&#xA;&#xA;```sql&#xA;INSERT INTO posts(title, content) VALUES (&#39;Switching to Zig from Rust&#39;, &#39;I love C&#39;);&#xA;INSERT INTO posts(title, content) VALUES (&#39;RAG in SQLite&#39;, &#39;AI first database&#39;);&#xA;SELECT * FROM posts;&#xA;```&#xA;Here, the `id` of the first post is `1` and the `id` of the second post is `2`. 
So, `Pekka` has authored the posts with ids `1` and `2`, but that is not mapped yet.&#xA;&#xA;Let&#39;s map it in the `author_post` table. We know that the `PRIMARY KEY` for `author_post` is the combination of the `user_id` (or `author_id`) and the `post_id`, and that `Pekka` has a `user_id` of `3` and has written the posts with `id`s `1` and `2`, so we&#39;ll simply insert those into the `author_post` table as individual records.&#xA;&#xA;```sql&#xA;INSERT INTO author_post(user_id, post_id) VALUES (3, 1), (3, 2);&#xA;```&#xA;This will rightly create two distinct records, one for each post.&#xA;&#xA;Then, Glauber decided to write a couple of posts. Let&#39;s say he wrote 2 posts, one with the title `Rewriting SQLite` and the other with the title `Offline Writes in SQLite`.&#xA;&#xA;```sql&#xA;INSERT INTO posts(title, content) VALUES (&#39;Rewriting SQLite&#39;, &#39;We are no more a sqlite-fork&#39;);&#xA;INSERT INTO posts(title, content) VALUES (&#39;Offline Writes in SQLite&#39;, &#39;Lets sync&#39;);&#xA;SELECT * FROM posts;&#xA;```&#xA;Here, the posts created by `Glauber` have ids `3` and `4`. Now, we need to map them in the `author_post` table.&#xA;&#xA;We know that `Glauber` has a `user_id` of `1` and he has written the posts with `id`s `3` and `4`, so we&#39;ll simply insert those into the `author_post` table as individual records.&#xA;&#xA;```sql&#xA;INSERT INTO author_post(user_id, post_id) VALUES (1, 3), (1, 4);&#xA;```&#xA;&#xA;So, this will create two distinct entries in the `author_post` table for the two posts created by the user with `id` `1`.&#xA;&#xA;Now, the question: how do they co-author a post? 
That is a piece of cake.&#xA;&#xA;Let&#39;s say they wrote a post with the title `Limbo`.&#xA;&#xA;```sql&#xA;INSERT INTO posts(title, content) VALUES(&#39;Limbo&#39;, &#39;SQLite in Rust&#39;);&#xA;```&#xA;&#xA;Now, this will create a new post with `id` as `5`; let&#39;s map it in the `author_post` table:&#xA;&#xA;```sql&#xA;INSERT INTO author_post(user_id, post_id) VALUES (3, 5);&#xA;INSERT INTO author_post(user_id, post_id) VALUES (1, 5);&#xA;```&#xA;&#xA;Here, we create two records where the `post_id` value is the same but the `user_id` value changes, one for `Glauber` and one for `Pekka`, with ids `1` and `3` respectively.&#xA;&#xA;SQueaLicios!&#xA;&#xA;Now, you will say, isn&#39;t it still doing redundant insertions of two `author_post` records?&#xA;&#xA;Well ... yes ... but. &#xA;&#xA;Yes, that is true, it looks like we are doing redundant work, as we create records in the `author_post` table (let&#39;s call it the junction table, or the middle or connector table). But there is no redundant data; every row gives us unique insight into what it is actually storing. We aren&#39;t storing the original `posts` record twice or multiple times, we are just `referencing` it with the `user_id` and the `post_id`.&#xA;&#xA;Another question would be: how the heck do we query this co-authored `post`, the one that has multiple authors in it? Easy peasy, sqly.&#xA;&#xA;Bear with some SQL shenanigans, some joins and wizardry.&#xA;&#xA;```sql&#xA;SELECT &#xA;    p.id,&#xA;    p.content AS post,&#xA;    GROUP_CONCAT(u.name, &#39;, &#39;) AS authors&#xA;FROM posts p&#xA;JOIN author_post up ON p.id = up.post_id&#xA;JOIN users u ON u.id = up.user_id&#xA;GROUP BY p.id&#xA;HAVING COUNT(u.id) &gt; 1;&#xA;```&#xA;Don&#39;t worry if you don&#39;t get this query as a whole, just understand three things.&#xA;- What are we querying? The post id, the post content, and the author names concatenated with `,`.&#xA;- From where? 
The `posts`, `users` and the `author_post` tables.&#xA;- How? We query all the posts, join `posts` with `author_post` on the post id, and also fetch the author&#39;s `name` from the `users` table. We group by post id to keep one row per post and avoid duplicating posts when there are multiple authors. The last filter is the `HAVING` clause, which uses an aggregate count of users to keep only the posts that have more than one author in the mapping.&#xA;&#xA;What about all the posts?&#xA;&#xA;We just remove the `HAVING` filter to include all the posts, rather than only posts having more than 1 author.&#xA;&#xA;```sql&#xA;SELECT&#xA;    p.id,&#xA;    p.content AS post,&#xA;    GROUP_CONCAT(u.name, &#39;, &#39;) AS authors&#xA;FROM posts p&#xA;JOIN author_post up ON p.id = up.post_id&#xA;JOIN users u ON u.id = up.user_id&#xA;GROUP BY p.id;&#xA;```&#xA;&#xA;&#xA;The heavy SQLog:&#xA;&#xA;```sql&#xA;sqlite&gt; CREATE TABLE users (&#xA;    id INTEGER PRIMARY KEY,&#xA;    name TEXT NOT NULL&#xA;);&#xA;&#xA;CREATE TABLE posts (&#xA;    id INTEGER PRIMARY KEY,&#xA;    title TEXT NOT NULL,&#xA;    content TEXT NOT NULL&#xA;);&#xA;sqlite&gt; CREATE TABLE author_post (&#xA;    user_id INTEGER NOT NULL,&#xA;    post_id INTEGER NOT NULL,&#xA;    PRIMARY KEY (user_id, post_id),&#xA;    FOREIGN KEY (user_id) REFERENCES users(id),&#xA;    FOREIGN KEY (post_id) REFERENCES posts(id)&#xA;);&#xA;sqlite&gt; .mode table&#xA;sqlite&gt; INSERT INTO users(name) VALUES (&#39;Glauber&#39;), (&#39;Jamie&#39;), (&#39;Pekka&#39;);&#xA;sqlite&gt; INSERT INTO posts(title, content) VALUES (&#39;Switching to Zig from Rust&#39;, &#39;I love C&#39;);&#xA;INSERT INTO posts(title, content) VALUES (&#39;RAG in SQLite&#39;, &#39;AI first database&#39;);&#xA;SELECT * FROM posts;&#xA;+----+----------------------------+-------------------+&#xA;| id |           title            |      content      |&#xA;+----+----------------------------+-------------------+&#xA;| 1  | 
Switching to Zig from Rust | I love C          |&#xA;| 2  | RAG in SQLite              | AI first database |&#xA;+----+----------------------------+-------------------+&#xA;sqlite&gt; INSERT INTO author_post(user_id, post_id) VALUES (3, 1), (3, 2);&#xA;sqlite&gt; INSERT INTO posts(title, content) VALUES (&#39;Rewriting SQLite&#39;, &#39;We are no more a sqlite-fork&#39;);&#xA;INSERT INTO posts(title, content) VALUES (&#39;Offline Writes in SQLite&#39;, &#39;Lets sync&#39;);&#xA;SELECT * FROM posts;&#xA;+----+----------------------------+------------------------------+&#xA;| id |           title            |           content            |&#xA;+----+----------------------------+------------------------------+&#xA;| 1  | Switching to Zig from Rust | I love C                     |&#xA;| 2  | RAG in SQLite              | AI first database            |&#xA;| 3  | Rewriting SQLite           | We are no more a sqlite-fork |&#xA;| 4  | Offline Writes in SQLite   | Lets sync                    |&#xA;+----+----------------------------+------------------------------+&#xA;sqlite&gt; INSERT INTO author_post(user_id, post_id) VALUES (1, 3), (1, 4);&#xA;sqlite&gt; INSERT INTO posts(title, content) VALUES(&#39;Limbo&#39;, &#39;SQLite in Rust&#39;);&#xA;sqlite&gt; INSERT INTO author_post(user_id, post_id) VALUES (3, 5);&#xA;INSERT INTO author_post(user_id, post_id) VALUES (1, 5);&#xA;sqlite&gt; SELECT&#xA;    p.id,&#xA;    p.content AS post,&#xA;    GROUP_CONCAT(u.name, &#39;, &#39;) AS authors&#xA;FROM posts p&#xA;JOIN author_post up ON p.id = up.post_id&#xA;JOIN users u ON u.id = up.user_id&#xA;GROUP BY p.id&#xA;HAVING COUNT(u.id) &gt; 1;&#xA;+----+----------------+----------------+&#xA;| id |      post      |    authors     |&#xA;+----+----------------+----------------+&#xA;| 5  | SQLite in Rust | Pekka, Glauber |&#xA;+----+----------------+----------------+&#xA;sqlite&gt; SELECT * FROM posts;&#xA;+----+----------------------------+------------------------------+&#xA;| id |       
    title            |           content            |&#xA;+----+----------------------------+------------------------------+&#xA;| 1  | Switching to Zig from Rust | I love C                     |&#xA;| 2  | RAG in SQLite              | AI first database            |&#xA;| 3  | Rewriting SQLite           | We are no more a sqlite-fork |&#xA;| 4  | Offline Writes in SQLite   | Lets sync                    |&#xA;| 5  | Limbo                      | SQLite in Rust               |&#xA;+----+----------------------------+------------------------------+&#xA;sqlite&gt; SELECT&#xA;    p.id,&#xA;    p.content AS post,&#xA;    GROUP_CONCAT(u.name, &#39;, &#39;) AS authors&#xA;FROM posts p&#xA;JOIN author_post up ON p.id = up.post_id&#xA;JOIN users u ON u.id = up.user_id&#xA;GROUP BY p.id;&#xA;+----+------------------------------+----------------+&#xA;| id |             post             |    authors     |&#xA;+----+------------------------------+----------------+&#xA;| 1  | I love C                     | Pekka          |&#xA;| 2  | AI first database            | Pekka          |&#xA;| 3  | We are no more a sqlite-fork | Glauber        |&#xA;| 4  | Lets sync                    | Glauber        |&#xA;| 5  | SQLite in Rust               | Pekka, Glauber |&#xA;+----+------------------------------+----------------+&#xA;sqlite&gt;&#xA;```&#xA;&#xA;That’s the basic idea of a many-to-many relation in SQLite: instead of stuffing multiple IDs into one table, we create a separate mapping table. Each row in that table ties one user to one post. If multiple users are tied to the same post, we just add more mapping rows.&#xA;</content>
      <type>sqlog</type>
    </item>
    <item>
      <title>SQLite SQL: Self Join Relations</title>
      <link>https://meetgor.com/sqlog/sqlite-self-join-relations</link>
      <description>Self Join Relations In the last post, we had defined a relation between the table and the table. Now, let&#39;s create a single table where we can define a relation</description>
      <pubDate>Wed, 17 Sep 2025 00:00:00 UTC</pubDate>
<content>&#xA;&#xA;## Self Join Relations&#xA;&#xA;In the last post, we had defined a `one-to-many` relation between the `users` table and the `posts` table.&#xA;&#xA;Now, let&#39;s create a single table where we can define a `one-to-many` relationship. Yes, within a single table we can reference the primary key and it can act like a link. Since it isn&#39;t really about the table, but more about how rows relate to each other, we can connect any rows.&#xA;&#xA;Let&#39;s take an example of `comments`.&#xA;&#xA;```sql&#xA;CREATE TABLE comments (&#xA;    id INTEGER PRIMARY KEY,&#xA;    author TEXT NOT NULL,&#xA;    content TEXT NOT NULL,&#xA;    parent_id REFERENCES comments (id)&#xA;);&#xA;```&#xA;Here, we have created a table `comments`; it has a simple `id` as an `INTEGER PRIMARY KEY` (we know what that means, right? Check out two articles back to find out.). There are `author` and `content` columns that just hold some data; it could be anything that actually stores the data. The final column is vital, as it is what makes the `comments` table usable.&#xA;&#xA;We have defined `parent_id` with `REFERENCES comments (id)`; this is the key to the `self-join` relationship. It can refer to any other comment by its id.&#xA;&#xA;So, let&#39;s say you have created a first comment `A`; here there is no previous comment, so the parent_id can be NULL. However, if someone replies with comment `B` on comment `A`, then `B`&#39;s parent_id will be `A`&#39;s id. &#xA;&#xA;```sql&#xA;INSERT INTO comments (id, author, content, parent_id)&#xA;VALUES (1, &#39;Harry Potter&#39;, &#39;I think Snape is hiding something.&#39;, NULL);&#xA;&#xA;INSERT INTO comments (id, author, content, parent_id)&#xA;VALUES (2, &#39;Ron Weasley&#39;, &#39;What exactly? 
&#39;, 1);&#xA;&#xA;INSERT INTO comments (id, author, content, parent_id)&#xA;VALUES (3, &#39;Draco Malfoy&#39;, &#39;Potter, you just want attention.&#39;, 1);&#xA;&#xA;INSERT INTO comments (id, author, content, parent_id)&#xA;VALUES (4, &#39;Harry Potter&#39;, &#39;Attention is all you need&#39;, 3);&#xA;&#xA;INSERT INTO comments (id, author, content, parent_id)&#xA;VALUES (5, &#39;Ron Weasley&#39;, &#39;Calm down guys&#39;, 3);&#xA;```&#xA;&#xA;Ok, we have inserted a lot at once; let&#39;s break it down:&#xA;&#xA;- Harry adds the first comment `I think Snape is hiding something.` (parent_id will be NULL, as that is the first comment)&#xA;- Ron replies to the first comment `What exactly?` (parent_id will be 1, as that is the id of the first comment)&#xA;- Draco replies to the first comment `Potter, you just want attention.` (parent_id will be 1, as that is the id of the first comment)&#xA;- Harry replies to Draco&#39;s comment `Attention is all you need` (parent_id will be 3, as that is the id of Draco&#39;s comment)&#xA;- Ron replies to Draco&#39;s comment `Calm down guys` (parent_id will be 3, as that is the id of Draco&#39;s comment)&#xA;&#xA;This is really cool, as it makes use of the table&#39;s own id, or primary key, to reference some other row in the same table.&#xA;&#xA;Let&#39;s query the data to help you understand better.&#xA;&#xA;To get the first comment (base comment), we can use the following query:&#xA;&#xA;```sql&#xA;SELECT id, author, content FROM comments WHERE parent_id IS NULL;&#xA;```&#xA;&#xA;To get the replies for a given comment id, we can use the following query:&#xA;&#xA;```sql&#xA;SELECT id, content, parent_id FROM comments WHERE parent_id = 1;&#xA;```&#xA;&#xA;To get all the comments for a given author, we can use the following query:&#xA;&#xA;```sql&#xA;SELECT id, author, content FROM comments WHERE author = &#39;Harry Potter&#39;;&#xA;```&#xA;&#xA;Here&#39;s a SQLog:&#xA;&#xA;```sqlite&#xA;sqlite&gt; CREATE TABLE comments (&#xA;    id INTEGER PRIMARY 
KEY,&#xA;    author TEXT NOT NULL,&#xA;    content TEXT NOT NULL,&#xA;    parent_id REFERENCES comments (id)&#xA;);&#xA;sqlite&gt; .mode table&#xA;&#xA;sqlite&gt; INSERT INTO comments (id, author, content, parent_id)&#xA;VALUES (1, &#39;Harry Potter&#39;, &#39;I think Snape is hiding something.&#39;, NULL),&#xA;(2, &#39;Ron Weasley&#39;, &#39;What exactly? &#39;, 1),&#xA;(3, &#39;Draco Malfoy&#39;, &#39;Potter, you just want attention.&#39;, 1),&#xA;(4, &#39;Harry Potter&#39;, &#39;Attention is all you need&#39;, 3),&#xA;(5, &#39;Ron Weasley&#39;, &#39;Calm down guys&#39;, 3);&#xA;&#xA;sqlite&gt; SELECT * FROM comments;&#xA;+----+--------------+------------------------------------+-----------+&#xA;| id |    author    |              content               | parent_id |&#xA;+----+--------------+------------------------------------+-----------+&#xA;| 1  | Harry Potter | I think Snape is hiding something. |           |&#xA;| 2  | Ron Weasley  | What exactly?                      | 1         |&#xA;| 3  | Draco Malfoy | Potter, you just want attention.   | 1         |&#xA;| 4  | Harry Potter | Attention is all you need          | 3         |&#xA;| 5  | Ron Weasley  | Calm down guys                     | 3         |&#xA;+----+--------------+------------------------------------+-----------+&#xA;&#xA;sqlite&gt; SELECT id, author, content FROM comments WHERE parent_id IS NULL;&#xA;+----+--------------+------------------------------------+&#xA;| id |    author    |              content               |&#xA;+----+--------------+------------------------------------+&#xA;| 1  | Harry Potter | I think Snape is hiding something. 
|&#xA;+----+--------------+------------------------------------+&#xA;&#xA;sqlite&gt; SELECT id, content, parent_id FROM comments WHERE author = &#39;Harry Potter&#39;;&#xA;+----+------------------------------------+-----------+&#xA;| id |              content               | parent_id |&#xA;+----+------------------------------------+-----------+&#xA;| 1  | I think Snape is hiding something. |           |&#xA;| 4  | Attention is all you need          | 3         |&#xA;+----+------------------------------------+-----------+&#xA;&#xA;sqlite&gt; SELECT id, content, parent_id FROM comments WHERE parent_id = 1;&#xA;+----+----------------------------------+-----------+&#xA;| id |             content              | parent_id |&#xA;+----+----------------------------------+-----------+&#xA;| 2  | What exactly?                    | 1         |&#xA;| 3  | Potter, you just want attention. | 1         |&#xA;+----+----------------------------------+-----------+&#xA;sqlite&gt;&#xA;```&#xA;&#xA;As you can see, we have created a `one-to-many` relation with the same single table.&#xA;&#xA;Here, one comment can have multiple child comments. This makes it a bit recursive but it is what it is.&#xA;&#xA;That&#39;s how certain systems are.&#xA;&#xA;Self referencing relations!&#xA;</content>
      <type>sqlog</type>
    </item>
    <item>
      <title>SQLite SQL: One to Many Relation with Foreign Key</title>
      <link>https://meetgor.com/sqlog/sqlite-one-to-many-foreign-key</link>
      <description>One to Many Relation with Foreign Key Back to the question that we raised in the post, &#34;What would happen if there are more than one row in the primary table th</description>
      <pubDate>Tue, 16 Sep 2025 00:00:00 UTC</pubDate>
<content>&#xA;&#xA;## One to Many Relation with Foreign Key&#xA;&#xA;Back to the question that we raised in the [previous](https://www.meetgor.com/sqlog/sqlite-foreign-key-table-constraint/) post, &#34;What would happen if there are more than one row in the primary table that references the same id in the foreign table?&#34;&#xA;&#xA;This is precisely what is called the `one-to-many` relation, or what the foreign key is used for.&#xA;&#xA;- One row in the foreign (parent/other) table&#xA;- that can be referenced by many rows in the primary (child) table&#xA;&#xA;So, taking the example from the previous post of authors and posts,&#xA;&#xA;Let&#39;s define the schema again:&#xA;&#xA;```sql&#xA;CREATE TABLE users (&#xA;    id INTEGER PRIMARY KEY,&#xA;    name TEXT NOT NULL&#xA;);&#xA;&#xA;CREATE TABLE posts (&#xA;    id INTEGER PRIMARY KEY,&#xA;    content TEXT NOT NULL,&#xA;    author_id INTEGER,&#xA;    FOREIGN KEY (author_id) REFERENCES users (id)&#xA;);&#xA;```&#xA;&#xA;The analogy can be stated as:&#xA;&#xA;- One author can have many posts&#xA;- One post can have only one author&#xA;&#xA;Right?&#xA;&#xA;If you look at the schema, the posts table has the foreign key `author_id`, which references the primary key `id` of the `users` table. So one author can be referenced by multiple posts; however, one post record can refer to only one author. 
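
The same one-to-many shape can be seen from application code. Here is a minimal, hedged sketch using the built-in `sqlite3` module in Python; the schema follows the post, while the in-memory database and the sample post contents are made up for illustration:

```python
import sqlite3

# In-memory database, schema as in the post: users and posts,
# with posts.author_id referencing users.id.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
con.execute(
    "CREATE TABLE posts ("
    " id INTEGER PRIMARY KEY,"
    " content TEXT NOT NULL,"
    " author_id INTEGER,"
    " FOREIGN KEY (author_id) REFERENCES users (id))"
)

# One user...
con.execute("INSERT INTO users(name) VALUES ('harry')")
# ...referenced by many posts.
con.executemany(
    "INSERT INTO posts(content, author_id) VALUES (?, ?)",
    [("post one", 1), ("post two", 1)],
)

# Count the posts per author: one author row, many post rows.
row = con.execute(
    "SELECT users.name, COUNT(posts.id) FROM users"
    " JOIN posts ON posts.author_id = users.id GROUP BY users.id"
).fetchone()
print(row)  # ('harry', 2)
```

The join collapses back to a single author with a count of two posts, which is exactly the One side and the Many side of the relation.
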
That is by design; a post could have multiple authors if we wanted, but then we would have to change the design. We&#39;ll check that in the next post.&#xA;&#xA;Let&#39;s keep the `Foreign Key` pragma/setting on throughout this post.&#xA;&#xA;```sql&#xA;PRAGMA foreign_keys=on;&#xA;```&#xA;&#xA;Let&#39;s insert a few users and posts to see what we mean by the `one-to-many` relation.&#xA;&#xA;```sql&#xA;INSERT INTO users(name) VALUES(&#39;harry&#39;), (&#39;ron&#39;), (&#39;malfoy&#39;);&#xA;```&#xA;We are inserting 3 users, `harry`, `ron`, and `malfoy`, which will have ids `1`, `2`, and `3` respectively.&#xA;&#xA;```sql&#xA;SELECT * FROM users;&#xA;```&#xA;&#xA;As you would see in the output, we have 3 users with ids `1`, `2`, and `3`.&#xA;&#xA;Now, let&#39;s insert a few posts.&#xA;&#xA;```sql&#xA;INSERT INTO posts(content, author_id) VALUES&#xA;(&#39;hi, I am harry&#39;, 1),&#xA;(&#39;Expecto Patronum&#39;, 1),&#xA;(&#39;hi, I am ron&#39;, 2),&#xA;(&#39;hi, I am malfoy&#39;, 3);&#xA;```&#xA;&#xA;I am inserting the `2nd` post as `harry`&#39;s so that the `author_id` values don&#39;t simply mirror the `post.id` values, which could be confusing.&#xA;&#xA;```sql&#xA;SELECT * FROM posts;&#xA;```&#xA;&#xA;As you would see in the output, we have 4 posts with ids `1`, `2`, `3`, and `4`; the first 2 are `harry`&#39;s, and the next two are `ron`&#39;s and `malfoy`&#39;s respectively.&#xA;&#xA;Below are some neat little queries to visualize it better.&#xA;&#xA;```sql&#xA;SELECT * FROM posts JOIN users ON posts.author_id = users.id;&#xA;```&#xA;&#xA;```sql&#xA;SELECT posts.*, users.name AS author FROM posts JOIN users ON posts.author_id = users.id;&#xA;```&#xA;&#xA;This is the sqlog for the things that we did so far.&#xA;&#xA;```sqlite&#xA;sqlite&gt; CREATE TABLE users (&#xA;    id INTEGER PRIMARY KEY,&#xA;    name TEXT NOT NULL&#xA;);&#xA;&#xA;CREATE TABLE posts (&#xA;    id INTEGER PRIMARY KEY,&#xA;    content TEXT NOT NULL,&#xA;    author_id INTEGER,&#xA;    FOREIGN KEY (author_id) REFERENCES users (id)&#xA;);&#xA;sqlite&gt; 
PRAGMA foreign_keys=on;&#xA;sqlite&gt; INSERT INTO users(name) VALUES(&#39;harry&#39;), (&#39;ron&#39;), (&#39;malfoy&#39;);&#xA;sqlite&gt; .mode table&#xA;sqlite&gt; SELECT * FROM users;&#xA;+----+--------+&#xA;| id |  name  |&#xA;+----+--------+&#xA;| 1  | harry  |&#xA;| 2  | ron    |&#xA;| 3  | malfoy |&#xA;+----+--------+&#xA;sqlite&gt; INSERT INTO posts(content, author_id) VALUES&#xA;(&#39;hi, I am harry&#39;, 1),&#xA;(&#39;Expecto Patronum&#39;, 1),&#xA;(&#39;hi, I am ron&#39;, 2),&#xA;(&#39;hi, I am malfoy&#39;, 3);&#xA;sqlite&gt; SELECT * FROM posts;&#xA;+----+------------------+-----------+&#xA;| id |     content      | author_id |&#xA;+----+------------------+-----------+&#xA;| 1  | hi, I am harry   | 1         |&#xA;| 2  | Expecto Patronum | 1         |&#xA;| 3  | hi, I am ron     | 2         |&#xA;| 4  | hi, I am malfoy  | 3         |&#xA;+----+------------------+-----------+&#xA;sqlite&gt; SELECT * FROM posts JOIN users ON posts.author_id = users.id;&#xA;+----+------------------+-----------+----+--------+&#xA;| id |     content      | author_id | id |  name  |&#xA;+----+------------------+-----------+----+--------+&#xA;| 1  | hi, I am harry   | 1         | 1  | harry  |&#xA;| 2  | Expecto Patronum | 1         | 1  | harry  |&#xA;| 3  | hi, I am ron     | 2         | 2  | ron    |&#xA;| 4  | hi, I am malfoy  | 3         | 3  | malfoy |&#xA;+----+------------------+-----------+----+--------+&#xA;sqlite&gt; SELECT posts.*, users.name AS author_name FROM posts JOIN users ON posts.author_id = users.id;&#xA;+----+------------------+-----------+-------------+&#xA;| id |     content      | author_id | author_name |&#xA;+----+------------------+-----------+-------------+&#xA;| 1  | hi, I am harry   | 1         | harry       |&#xA;| 2  | Expecto Patronum | 1         | harry       |&#xA;| 3  | hi, I am ron     | 2         | ron         |&#xA;| 4  | hi, I am malfoy  | 3         | malfoy      
|&#xA;+----+------------------+-----------+-------------+&#xA;sqlite&gt;&#xA;```&#xA;&#xA;Now, let&#39;s explore this:&#xA;&#xA;- Harry has 2 posts, which means one user can author multiple posts by design.&#xA;- One post can only ever be authored by one user.&#xA;&#xA;Assuming `NULL` values are not allowed for now, we can see one post can only have one author; that follows from the design, as there is only a single `author_id` foreign key column in the posts table.&#xA;&#xA;This is what we call a `one-author-many-posts` or `one-to-many` relation. You can also call it `many-to-one`, but then you have to change it to `many-posts-one-author`, which sounds a little weird. It&#39;s basically the same thing from a different perspective; I prefer saying `one-to-many` since most of the time the foreign/parent table needs to exist first in order for the primary/child table to reference any key from it.&#xA;&#xA;The One is on the side of the parent (foreign) table, and the Many is usually on the primary side, i.e. the table where the `foreign key` is placed (in this case, the posts table).&#xA;&#xA;&#xA;What about multiple foreign keys?&#xA;&#xA;Yes, you can have multiple foreign keys, but those will be different columns; you could even have `author_id_1` and `author_id_2` both referencing `users.id`, and that is possible. However, the cleaner design is to define a separate table for mapping users and posts, which is called a `many-to-many` relation, as we will check in the next post.&#xA;&#xA;</content>
      <type>sqlog</type>
    </item>
    <item>
      <title>SQLite SQL: Foreign KEY table constraint</title>
      <link>https://meetgor.com/sqlog/sqlite-foreign-key-table-constraint</link>
      <description>The Foreign KEY Table constraint Foreign Keys are the fundamentals of any relational databases, as they are the ones that make the connection or the relations a</description>
      <pubDate>Mon, 15 Sep 2025 00:00:00 UTC</pubDate>
<content>&#xA;&#xA;## The Foreign KEY Table constraint&#xA;&#xA;Foreign Keys are fundamental to any relational database, as they are what make the connections, or relations, among the tables in our database system.&#xA;&#xA;&gt; A foreign key, as the name suggests, is a key referencing or pointing to a foreign (other) table, and that key is usually the primary key of that table, hence it is referred to as a key.&#xA;&#xA;So, with a foreign key we can connect the data/records/rows from another table to the table in which the foreign key is defined.&#xA;&#xA;Think of it like a string (rope) attaching one record to the other. It is a link between two tables.&#xA;&#xA;Now, how we define the `FOREIGN KEY` constraint determines what the rope will be tied to, and how.&#xA;&#xA;Let&#39;s take a look at what the syntax for defining a `FOREIGN KEY` constraint looks like in SQLite.&#xA;&#xA;```sql&#xA;CREATE TABLE users (&#xA;    id INTEGER PRIMARY KEY,&#xA;    name TEXT NOT NULL&#xA;);&#xA;&#xA;CREATE TABLE posts (&#xA;    id INTEGER PRIMARY KEY,&#xA;    content TEXT NOT NULL,&#xA;    author_id INTEGER,&#xA;    FOREIGN KEY (author_id) REFERENCES users (id)&#xA;);&#xA;```&#xA;&#xA;You will need at least two tables in order to map a foreign key constraint. In the above example, we have created a table `users` that forms the base table, and the table `posts` is what links the `id` from the `users` table to itself. &#xA;&#xA;This means one post can link to one user; we are using that reference in the current `posts` table under the alias `author_id`. 
This way the `author_id` is the rope (string, whatever you prefer saying; I think developers get confused when I say string) that connects a record in the posts table to a record in the users table.&#xA;&#xA;We can conclude a few things from this:&#xA;- We need to set that `author_id` for each record we insert into the `posts` table.&#xA;- If we don&#39;t define the `author_id` then will it be automatically populated? Nope!&#xA;- The `FOREIGN KEY` column at its core is just like any other column; it&#39;s just that how we set that column is what matters for the linking part.&#xA;- It depends on how the actual key, i.e. the referenced column, is defined in the foreign table.&#xA;&#xA;It might still not make sense, so let me explain with a few examples.&#xA;&#xA;Let&#39;s insert a bunch of users.&#xA;&#xA;```sql&#xA;INSERT INTO users(name) VALUES(&#39;harry&#39;), (&#39;ron&#39;), (&#39;malfoy&#39;);&#xA;```&#xA;&#xA;```sql&#xA;SELECT * FROM users;&#xA;```&#xA;&#xA;This inserts 3 records, each assigned an id from 1 to 3.&#xA;&#xA;Now, let&#39;s insert a bunch of posts.&#xA;&#xA;```sql&#xA;INSERT INTO posts(content) VALUES (&#39;hi, I am who?&#39;);&#xA;```&#xA;&#xA;```sql&#xA;SELECT * FROM posts;&#xA;```&#xA;&#xA;That `author_id` is `NULL` because we didn&#39;t insert anything against it.&#xA;&#xA;```&#xA;sqlite&gt; CREATE TABLE users (&#xA;    id INTEGER PRIMARY KEY,&#xA;    name TEXT NOT NULL&#xA;);&#xA;&#xA;CREATE TABLE posts (&#xA;    id INTEGER PRIMARY KEY,&#xA;    content TEXT NOT NULL,&#xA;    author_id INTEGER,&#xA;    FOREIGN KEY (author_id) REFERENCES users (id)&#xA;);&#xA;sqlite&gt; INSERT INTO users(name) VALUES(&#39;harry&#39;), (&#39;ron&#39;), (&#39;malfoy&#39;);&#xA;sqlite&gt; .mode table&#xA;sqlite&gt; SELECT * FROM users;&#xA;+----+--------+&#xA;| id |  name  |&#xA;+----+--------+&#xA;| 1  | harry  |&#xA;| 2  | ron    |&#xA;| 3  | malfoy |&#xA;+----+--------+&#xA;sqlite&gt; INSERT INTO posts(content) VALUES (&#39;hi, I am 
who?&#39;);&#xA;sqlite&gt; SELECT * FROM posts;&#xA;+----+---------------+-----------+&#xA;| id |    content    | author_id |&#xA;+----+---------------+-----------+&#xA;| 1  | hi, I am who? |           |&#xA;+----+---------------+-----------+&#xA;sqlite&gt; INSERT INTO users(name, id) VALUES(&#39;neville&#39;, NULL);&#xA;sqlite&gt; SELECT * FROM posts;&#xA;+----+---------------+-----------+&#xA;| id |    content    | author_id |&#xA;+----+---------------+-----------+&#xA;| 1  | hi, I am who? |           |&#xA;+----+---------------+-----------+&#xA;sqlite&gt; SELECT * FROM users;&#xA;+----+---------+&#xA;| id |  name   |&#xA;+----+---------+&#xA;| 1  | harry   |&#xA;| 2  | ron     |&#xA;| 3  | malfoy  |&#xA;| 4  | neville |&#xA;+----+---------+&#xA;sqlite&gt;&#xA;```&#xA;&#xA;We need to be careful while inserting anything in that `author_id` column, because think for a moment:&#xA;What do we insert in the `author_id` column?&#xA;&#xA;Well, we need a valid `users(id)`, right? That is the id in the users table, which is what we are referencing with that `FOREIGN KEY` constraint.&#xA;&#xA;There are other questions that should arise now, like:&#xA;&#xA;- What happens if we link something that doesn&#39;t exist in the foreign table? 
(in this case the `users` table)&#xA;- What happens if what we have linked is deleted or changed in the foreign table?&#xA;- What happens if multiple rows refer to the same id in the foreign table?&#xA;&#xA;The `FOREIGN KEY` constraint will make sure we are linking a valid id; hence, if we try to insert any invalid (non-existent) id of the foreign table (here the `users` table), we will fail this constraint and the record/row won&#39;t be inserted or updated.&#xA;&#xA;As I said, this is a rope; one end of the rope is always the current table record, and the other end is what you attach when inserting or updating a record.&#xA;&#xA;So, for starters, we&#39;ll reference user ids that actually exist in the first place.&#xA;&#xA;In our case, we have 4 authors created with ids from 1 to 4 and names `harry`, `ron`, `malfoy` and `neville` respectively.&#xA;&#xA;Now, let&#39;s insert a post with the `author_id` as 1, that is the id of the user named `harry`, who will be the author of the post.&#xA;&#xA;```sql&#xA;INSERT INTO posts(content, author_id) VALUES (&#39;hi, I am harry&#39;, 1);&#xA;```&#xA;&#xA;This will set the `author_id` column of the `posts` table to 1, which is the id of the `harry` user.&#xA;&#xA;If you want to see some better results, just run some of these to get it clear:&#xA;&#xA;```sql&#xA;SELECT * FROM posts;&#xA;```&#xA;&#xA;Get both tables&#39; data:&#xA;&#xA;```sql&#xA;SELECT * FROM posts JOIN users ON author_id = id;&#xA;```&#xA;&#xA;Make column names clear and remove redundant columns:&#xA;&#xA;```sql&#xA;SELECT posts.*, users.name FROM posts JOIN users ON posts.author_id = users.id;&#xA;```&#xA;&#xA;&#xA;&#xA;```&#xA;sqlite&gt; INSERT INTO posts(content, author_id) VALUES (&#39;hi, I am harry&#39;, 1);&#xA;sqlite&gt; SELECT * FROM posts;&#xA;+----+----------------+-----------+&#xA;| id |    content     | author_id |&#xA;+----+----------------+-----------+&#xA;| 1  | hi, I am who?  
|           |&#xA;| 2  | hi, I am harry | 1         |&#xA;+----+----------------+-----------+&#xA;sqlite&gt; SELECT * FROM posts JOIN users ON author_id = id;&#xA;Parse error: ambiguous column name: id&#xA;sqlite&gt; SELECT * FROM posts JOIN users ON posts.author_id = users.id;&#xA;+----+----------------+-----------+----+-------+&#xA;| id |    content     | author_id | id | name  |&#xA;+----+----------------+-----------+----+-------+&#xA;| 2  | hi, I am harry | 1         | 1  | harry |&#xA;+----+----------------+-----------+----+-------+&#xA;sqlite&gt; SELECT posts.*, users.name FROM posts JOIN users ON posts.author_id = users.id;&#xA;+----+----------------+-----------+-------+&#xA;| id |    content     | author_id | name  |&#xA;+----+----------------+-----------+-------+&#xA;| 2  | hi, I am harry | 1         | harry |&#xA;+----+----------------+-----------+-------+&#xA;sqlite&gt; SELECT posts.*, users.name AS author FROM posts JOIN users ON posts.author_id = users.id;&#xA;+----+----------------+-----------+--------+&#xA;| id |    content     | author_id | author |&#xA;+----+----------------+-----------+--------+&#xA;| 2  | hi, I am harry | 1         | harry  |&#xA;+----+----------------+-----------+--------+&#xA;sqlite&gt;&#xA;```&#xA;&#xA;Now, try inserting a post with an `author_id` that doesn&#39;t exist in the `users` table, i.e. any id above 4.&#xA;&#xA;```sql&#xA;INSERT INTO posts(content, author_id) VALUES (&#39;hi, I am hermoine&#39;, 5);&#xA;```&#xA;&#xA;That inserted successfully? How can that be allowed? 
The user with id `5` doesn&#39;t exist, and we are referencing an author (user) that doesn&#39;t exist.&#xA;&#xA;## Pragma Foreign keys&#xA;&#xA;Well, we have been lied to all along; SQLite is very flexible by default, and you need to nudge it in order to make it a little strict.&#xA;&#xA;We need to enable foreign key constraint checking by setting `PRAGMA foreign_keys=on;`:&#xA;&#xA;```&#xA;PRAGMA foreign_keys=on;&#xA;```&#xA;This setting is off by default due to backward compatibility reasons.&#xA;&#xA;You can check if the foreign key constraint checking is enabled by running the following command:&#xA;&#xA;```&#xA;PRAGMA foreign_keys;&#xA;```&#xA;&#xA;If it returns `1` then it&#39;s enabled, or `0` if not.&#xA;&#xA;Now, if we run the query where the author id is `5`, which means the `user` referenced by that id doesn&#39;t exist, we will get the following error:&#xA;&#xA;```sql&#xA;INSERT INTO posts(content, author_id) VALUES (&#39;hi, I am hermoine&#39;, 5);&#xA;```&#xA;&#xA;&gt; Runtime error: FOREIGN KEY constraint failed&#xA;&#xA;Exactly as we expected.&#xA;&#xA;```&#xA;sqlite&gt; INSERT INTO posts(content, author_id) VALUES (&#39;hi, I am hermoine&#39;, 5);&#xA;sqlite&gt; SELECT posts.*, users.name AS author FROM posts JOIN users ON posts.author_id = users.id;&#xA;+----+----------------+-----------+--------+&#xA;| id |    content     | author_id | author |&#xA;+----+----------------+-----------+--------+&#xA;| 2  | hi, I am harry | 1         | harry  |&#xA;+----+----------------+-----------+--------+&#xA;sqlite&gt; SELECT * FROM posts;&#xA;+----+-------------------+-----------+&#xA;| id |      content      | author_id |&#xA;+----+-------------------+-----------+&#xA;| 1  | hi, I am who?     
|           |&#xA;| 2  | hi, I am harry    | 1         |&#xA;| 3  | hi, I am hermoine | 5         |&#xA;+----+-------------------+-----------+&#xA;sqlite&gt; PRAGMA foreign_keys=on&#xA;   ...&gt; ;&#xA;sqlite&gt; INSERT INTO posts(content, author_id) VALUES (&#39;hi, I am hermoine&#39;, 5);&#xA;Runtime error: FOREIGN KEY constraint failed (19)&#xA;sqlite&gt;&#xA;```&#xA;However, if we insert the `posts` record with a valid `author_id`, then:&#xA;&#xA;```sql&#xA;INSERT INTO posts(content, author_id) VALUES (&#39;hi, I am ron&#39;, 2);&#xA;```&#xA;This will readily insert the record, since the `author_id` of 2 exists in the `users` table.&#xA;&#xA;```sql&#xA;SELECT * FROM posts;&#xA;```&#xA;Neat.&#xA;&#xA;```&#xA;sqlite&gt; INSERT INTO posts(content, author_id) VALUES (&#39;hi, I am ron&#39;, 2);&#xA;sqlite&gt; SELECT * FROM posts;&#xA;+----+-------------------+-----------+&#xA;| id |      content      | author_id |&#xA;+----+-------------------+-----------+&#xA;| 1  | hi, I am who?     |           |&#xA;| 2  | hi, I am harry    | 1         |&#xA;| 3  | hi, I am hermoine | 5         |&#xA;| 4  | hi, I am ron      | 2         |&#xA;+----+-------------------+-----------+&#xA;sqlite&gt;&#xA;```&#xA;&#xA;Now, back to those three questions:&#xA;&#xA;- What happens if we link something that doesn&#39;t exist in the foreign table? (in this case the `users` table)&#xA;- What happens if what we have linked is deleted or changed in the foreign table?&#xA;- What happens if multiple rows refer to the same id in the foreign table?&#xA;&#xA;&#xA;Let&#39;s tackle them one by one.&#xA;&#xA;&gt; NOTE: We&#39;ll assume PRAGMA foreign_keys=on; from here on&#xA;&#xA;## Invalid Foreign Key&#xA;&#xA;We already covered this in the previous discussion: if we link something that doesn&#39;t exist in the foreign table (in this case the `users` table), then we fail the `FOREIGN KEY` constraint and the record/row won&#39;t be inserted or updated. 
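
The same check can be reproduced from application code. Below is a minimal, hedged sketch using the built-in `sqlite3` module in Python (the in-memory database and sample rows are made up for illustration); note that the pragma has to be enabled per connection:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Foreign key enforcement is off by default, so turn it on for this connection.
con.execute("PRAGMA foreign_keys=on")
con.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
con.execute(
    "CREATE TABLE posts (id INTEGER PRIMARY KEY, content TEXT NOT NULL,"
    " author_id INTEGER, FOREIGN KEY (author_id) REFERENCES users (id))"
)
con.execute("INSERT INTO users(name) VALUES ('harry')")

# Valid link: user 1 exists, so this insert passes the constraint.
con.execute("INSERT INTO posts(content, author_id) VALUES ('hi, I am harry', 1)")

# Invalid link: user 5 does not exist, so this insert is rejected.
try:
    con.execute("INSERT INTO posts(content, author_id) VALUES ('hi, I am who?', 5)")
except sqlite3.IntegrityError as err:
    print(err)  # FOREIGN KEY constraint failed
```

Only the valid row lands in the table; the invalid one surfaces as an `IntegrityError` instead of silently slipping in.
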
But for this to fail we need to make sure `PRAGMA foreign_keys` is toggled on.&#xA;&#xA;## Deleted or Updated Foreign Key&#xA;&#xA;What happens if we insert a record into the primary table (here the `posts` table) whose `author_id` was valid at the time of insertion, but the referenced `user` gets deleted a while later? What happens to the `posts` record/row that still references the deleted user id via `author_id`?&#xA;&#xA;Enter the `ON DELETE` and `ON UPDATE` clauses on the `FOREIGN KEY` constraint.&#xA;&#xA;It turns out, we can define the behaviour of the `FOREIGN KEY` constraint when the referenced row is deleted or updated in the foreign table (here the `users` table).&#xA;&#xA;So, we need to define what happens to the record/row in the primary table (here the `posts` table) when the referenced row in the foreign table (here the `users` table) is deleted or updated.&#xA;&#xA;There are 5 options to choose from:&#xA;1. `NO ACTION`: Do nothing (default)&#xA;2. `RESTRICT`: Prevent the deletion (or update) of the referenced record/key in the foreign table&#xA;3. `SET NULL`: Set the primary table&#39;s record foreign key to `NULL`&#xA;4. `SET DEFAULT`: Set the primary table&#39;s record foreign key to the default value&#xA;5. `CASCADE`: Delete the primary table&#39;s record/row if the referenced row in the foreign table is deleted&#xA;&#xA;Phew! This opens a lot of options to play with.&#xA;&#xA;We already know `NO ACTION`: it will just let it happen, nothing will be done. It&#39;s not recommended, but SQLite is flexible; how many times do I have to say it? A double-edged sword.&#xA;&#xA;### Restrict&#xA;&#xA;Let&#39;s drop the `posts` table and start afresh with `RESTRICT` as the `ON DELETE` and `ON UPDATE` option on the `FOREIGN KEY` constraint.&#xA;&#xA;#### Restrict on delete&#xA;&#xA;We can restrict the deletion of foreign records referred to by a primary record. 
We use `ON DELETE RESTRICT` as the option on the `FOREIGN KEY` constraint.&#xA;&#xA;```sql&#xA;DROP TABLE posts;&#xA;&#xA;CREATE TABLE posts (&#xA;    id INTEGER PRIMARY KEY,&#xA;    content TEXT NOT NULL,&#xA;    author_id INTEGER,&#xA;    FOREIGN KEY (author_id) REFERENCES users (id) ON DELETE RESTRICT&#xA;);&#xA;```&#xA;This will now restrict the deletion of foreign records referred to by a primary record. In this case, if a record/row in the `users` table which is referenced in the `posts` table is about to be deleted, the deletion will be prevented. However, any other record in the `users` table can be deleted.&#xA;&#xA;Let&#39;s first insert a few rows into the `posts` table since we dropped it.&#xA;&#xA;```sql&#xA;INSERT INTO posts(content, author_id) VALUES (&#39;hi, I am harry&#39;, 1);&#xA;INSERT INTO posts(content, author_id) VALUES (&#39;hi, I am ron&#39;, 2);&#xA;```&#xA;&#xA;These are valid records, since the `author_id` values 1 and 2 exist in the `users` table.&#xA;&#xA;However, the following query will fail, since the `author_id` of 5 doesn&#39;t exist in the `users` table. 
We already know this:&#xA;&#xA;```sql&#xA;INSERT INTO posts(content, author_id) VALUES (&#39;hi, I am hermoine&#39;, 5);&#xA;```&#xA;&#xA;Now, let&#39;s look at all the `posts` and their authors:&#xA;&#xA;```sql&#xA;SELECT * FROM posts;&#xA;```&#xA;&#xA;This is the list of `users` that we have:&#xA;&#xA;```sql&#xA;SELECT * FROM users;&#xA;```&#xA;&#xA;Now, let&#39;s try deleting a record in the `users` table.&#xA;&#xA;We know that `users` with id `1` and `2` are referenced in the `posts` table.&#xA;&#xA;```sql&#xA;DELETE FROM users WHERE id = 2;&#xA;```&#xA;&#xA;This will fail, since we are trying to delete a record that is referenced in the `posts` table.&#xA;&#xA;We can check if the `users` row with `id` 2 is still there:&#xA;&#xA;```sql&#xA;SELECT * FROM users;&#xA;```&#xA;It is indeed there; the deletion was restricted.&#xA;&#xA;However, if we delete the `users` row where id is `3`, which is not referenced in the `posts` table, it will succeed.&#xA;&#xA;```sql&#xA;DELETE FROM users WHERE id = 3;&#xA;```&#xA;This deleted the `users` row with `id` 3.&#xA;&#xA;```sql&#xA;SELECT * FROM users;&#xA;```&#xA;As you can see, the dirt `malfoy` was deleted.&#xA;&#xA;```&#xA;sqlite&gt; DROP TABLE posts;&#xA;&#xA;CREATE TABLE posts (&#xA;    id INTEGER PRIMARY KEY,&#xA;    content TEXT NOT NULL,&#xA;    author_id INTEGER,&#xA;    FOREIGN KEY (author_id) REFERENCES users (id) ON DELETE RESTRICT&#xA;);&#xA;sqlite&gt; SELECT * FROM posts;&#xA;sqlite&gt; INSERT INTO posts(content, author_id) VALUES (&#39;hi, I am harry&#39;, 1);&#xA;sqlite&gt; INSERT INTO posts(content, author_id) VALUES (&#39;hi, I am ron&#39;, 2);&#xA;sqlite&gt; SELECT * FROM posts;&#xA;+----+----------------+-----------+&#xA;| id |    content     | author_id |&#xA;+----+----------------+-----------+&#xA;| 1  | hi, I am harry | 1         |&#xA;| 2  | hi, I am ron   | 2         |&#xA;+----+----------------+-----------+&#xA;sqlite&gt; INSERT INTO posts(content, author_id) VALUES (&#39;hi, I am hermoine&#39;, 5);&#xA;Runtime 
error: FOREIGN KEY constraint failed (19)&#xA;sqlite&gt; SELECT * FROM users;&#xA;+----+---------+&#xA;| id |  name   |&#xA;+----+---------+&#xA;| 1  | harry   |&#xA;| 2  | ron     |&#xA;| 3  | malfoy  |&#xA;| 4  | neville |&#xA;+----+---------+&#xA;sqlite&gt; DELETE FROM users where id = 2;&#xA;Runtime error: FOREIGN KEY constraint failed (19)&#xA;sqlite&gt; DELETE FROM users where id = 3;&#xA;sqlite&gt; SELECT * FROM users;&#xA;+----+---------+&#xA;| id |  name   |&#xA;+----+---------+&#xA;| 1  | harry   |&#xA;| 2  | ron     |&#xA;| 4  | neville |&#xA;+----+---------+&#xA;sqlite&gt;&#xA;```&#xA;&#xA;What about updates?&#xA;&#xA;#### Restrict on update&#xA;&#xA;What if we update a `users` record, will it be restricted? Nope, not unless we set `ON UPDATE` to `RESTRICT`:&#xA;&#xA;&gt; The `ON UPDATE` option is usually not needed, as it refers to updates of the referenced key itself, not of the entire foreign table&#39;s columns; only if you update the referenced key will the restriction (or any other action) be triggered.&#xA;&#xA;Which means, if you update the `name` of a `users` record which is referenced in the `posts` table, it will be allowed; however, when you update the `users` id (which doesn&#39;t happen unless you&#39;ve screwed up or it&#39;s by design), it will prevent that update. 
Note: updating referenced keys, which are usually the primary keys of the foreign table, is very rare and not usually done, as it might corrupt the existing data.&#xA;&#xA;So, unless you have a great use case for updating the referenced key of a table, `ON UPDATE` is rarely used.&#xA;&#xA;```sql&#xA;DROP TABLE posts;&#xA;&#xA;CREATE TABLE posts (&#xA;    id INTEGER PRIMARY KEY,&#xA;    content TEXT NOT NULL,&#xA;    author_id INTEGER,&#xA;    FOREIGN KEY (author_id) REFERENCES users (id) ON DELETE RESTRICT ON UPDATE RESTRICT&#xA;);&#xA;```&#xA;&#xA;Here, we are adding the `ON UPDATE RESTRICT` option to the `FOREIGN KEY` constraint; the `ON DELETE RESTRICT` option won&#39;t affect updates, as it is only triggered when we try to delete the record (hard delete).&#xA;&#xA;Let&#39;s add a few more rows as usual:&#xA;&#xA;```sql&#xA;INSERT INTO posts(content, author_id) VALUES (&#39;hi, I am harry&#39;, 1);&#xA;INSERT INTO posts(content, author_id) VALUES (&#39;hi, I am ron&#39;, 2);&#xA;```&#xA;&#xA;Now, let&#39;s try updating the `users` table:&#xA;&#xA;```sql&#xA;UPDATE users SET name = &#39;harry potter&#39; WHERE id = 1;&#xA;```&#xA;&#xA;This is allowed; nothing got restricted, as we didn&#39;t update the `id` in the `users` table, which is the key referenced by the `posts` table. Unless we update the `id`, updates of the other columns in the foreign table (the `users` table) are allowed.&#xA;&#xA;```sql&#xA;SELECT * FROM users;&#xA;```&#xA;&#xA;This readily updated the row with `id` 1 to `harry potter`, as it didn&#39;t change the `id` in the `users` table which is referenced as `author_id` in the `posts` table.&#xA;&#xA;However, if we try to update the `id` of a referenced `users` row, it will fail.&#xA;&#xA;```sql&#xA;UPDATE users SET id = 3 WHERE id = 2;&#xA;```&#xA;&#xA;We are updating the id of a `users` row that is referenced in the `posts` table, so it will fail. 
The existing row with id = 2 is `ron`, which already has a `posts` entry referencing it, and we have added a constraint that restricts updates of the referenced id, the foreign key.&#xA;&#xA;Now, let&#39;s try updating the `id` of a `users` row that is not referenced in the `posts` table:&#xA;&#xA;```sql&#xA;UPDATE users SET id = 5 WHERE id = 4;&#xA;```&#xA;&#xA;The row with id = 4 in the `users` table is `neville`, which doesn&#39;t have any `posts` entry, so the update will succeed.&#xA;&#xA;```&#xA;sqlite&gt; DROP TABLE posts;&#xA;&#xA;CREATE TABLE posts (&#xA;    id INTEGER PRIMARY KEY,&#xA;    content TEXT NOT NULL,&#xA;    author_id INTEGER,&#xA;    FOREIGN KEY (author_id) REFERENCES users (id) ON DELETE RESTRICT ON UPDATE RESTRICT&#xA;);&#xA;sqlite&gt; SELECT * FROM posts;&#xA;sqlite&gt; INSERT INTO posts(content, author_id) VALUES (&#39;hi, I am harry&#39;, 1);&#xA;sqlite&gt; INSERT INTO posts(content, author_id) VALUES (&#39;hi, I am ron&#39;, 2);&#xA;sqlite&gt; SELECT * FROM posts;&#xA;+----+----------------+-----------+&#xA;| id |    content     | author_id |&#xA;+----+----------------+-----------+&#xA;| 1  | hi, I am harry | 1         |&#xA;| 2  | hi, I am ron   | 2         |&#xA;+----+----------------+-----------+&#xA;sqlite&gt; SELECT * FROM users;&#xA;+----+---------+&#xA;| id |  name   |&#xA;+----+---------+&#xA;| 1  | harry   |&#xA;| 2  | ron     |&#xA;| 4  | neville |&#xA;+----+---------+&#xA;sqlite&gt; SELECT posts.*, users.name AS author FROM posts JOIN users ON posts.author_id = users.id;&#xA;+----+----------------+-----------+--------+&#xA;| id |    content     | author_id | author |&#xA;+----+----------------+-----------+--------+&#xA;| 1  | hi, I am harry | 1         | harry  |&#xA;| 2  | hi, I am ron   | 2         | ron    
|&#xA;+----+----------------+-----------+--------+&#xA;sqlite&gt; UPDATE users SET name = &#39;harry potter&#39; where id = 1;&#xA;sqlite&gt; SELECT * FROM users;&#xA;+----+--------------+&#xA;| id |     name     |&#xA;+----+--------------+&#xA;| 1  | harry potter |&#xA;| 2  | ron          |&#xA;| 4  | neville      |&#xA;+----+--------------+&#xA;sqlite&gt; SELECT posts.*, users.name AS author FROM posts JOIN users ON posts.author_id = users.id;&#xA;+----+----------------+-----------+--------------+&#xA;| id |    content     | author_id |    author    |&#xA;+----+----------------+-----------+--------------+&#xA;| 1  | hi, I am harry | 1         | harry potter |&#xA;| 2  | hi, I am ron   | 2         | ron          |&#xA;+----+----------------+-----------+--------------+&#xA;sqlite&gt; UPDATE users SET name = &#39;harry&#39; where id = 1;&#xA;sqlite&gt; UPDATE users SET id = 3 where id = 2;&#xA;Runtime error: FOREIGN KEY constraint failed (19)&#xA;sqlite&gt; UPDATE users SET id = 5 where id = 4;&#xA;sqlite&gt; SELECT * FROM users;&#xA;+----+---------+&#xA;| id |  name   |&#xA;+----+---------+&#xA;| 1  | harry   |&#xA;| 2  | ron     |&#xA;| 5  | neville |&#xA;+----+---------+&#xA;sqlite&gt;&#xA;```&#xA;&#xA;### Set NULL&#xA;&#xA;For the rest of this post, we&#39;ll only look at the `ON DELETE` option; `ON UPDATE` is very rarely used, and we have already explored when it applies.&#xA;&#xA;With this option, when the record in the foreign table is deleted, the foreign key reference of the record in the primary table will be set to `NULL`.&#xA;&#xA;For our example, if we set `ON DELETE SET NULL` on the `FOREIGN KEY` constraint, when the `users` record with id = 2 is deleted, the `author_id` of the `posts` record with id = 3 will be set to `NULL`.&#xA;&#xA;```sql&#xA;DROP TABLE posts;&#xA;&#xA;CREATE TABLE posts (&#xA;    id INTEGER PRIMARY KEY,&#xA;    content TEXT NOT NULL,&#xA;    author_id INTEGER,&#xA;    FOREIGN KEY (author_id) REFERENCES users (id) ON 
DELETE SET NULL&#xA;);&#xA;```&#xA;&#xA;Now, let&#39;s add a few more rows as usual.&#xA;NOTE: I have added one more post for harry to avoid a 1-1 mapping of posts and users in the data.&#xA;&#xA;```sql&#xA;INSERT INTO posts(content, author_id) VALUES (&#39;hi, I am harry&#39;, 1);&#xA;INSERT INTO posts(content, author_id) VALUES (&#39;Expecto Patronum&#39;, 1);&#xA;INSERT INTO posts(content, author_id) VALUES (&#39;hi, I am ron&#39;, 2);&#xA;```&#xA;&#xA;Let&#39;s take a look at the posts table.&#xA;&#xA;```sql&#xA;SELECT * FROM posts;&#xA;```&#xA;&#xA;Now, let&#39;s delete the `users` record with id = 2:&#xA;&#xA;```sql&#xA;DELETE FROM users WHERE id = 2;&#xA;```&#xA;This is allowed, and it should be: there is no restriction on the `posts` table; the `FOREIGN KEY` constraint sets the reference to `NULL` when the referenced row in the `users` table is deleted.&#xA;&#xA;```sql&#xA;SELECT * FROM posts;&#xA;```&#xA;As you can see, the `author_id` of the `posts` record with id = 3 has been set to `NULL`.&#xA;&#xA;```&#xA;sqlite&gt; DROP TABLE posts;&#xA;&#xA;CREATE TABLE posts (&#xA;    id INTEGER PRIMARY KEY,&#xA;    content TEXT NOT NULL,&#xA;    author_id INTEGER,&#xA;    FOREIGN KEY (author_id) REFERENCES users (id) ON DELETE SET NULL&#xA;);&#xA;sqlite&gt; INSERT INTO posts(content, author_id) VALUES (&#39;hi, I am harry&#39;, 1);&#xA;INSERT INTO posts(content, author_id) VALUES (&#39;Expecto Patronum&#39;, 1);&#xA;INSERT INTO posts(content, author_id) VALUES (&#39;hi, I am ron&#39;, 2);&#xA;sqlite&gt; SELECT * FROM posts;&#xA;+----+------------------+-----------+&#xA;| id |     content      | author_id |&#xA;+----+------------------+-----------+&#xA;| 1  | hi, I am harry   | 1         |&#xA;| 2  | Expecto Patronum | 1         |&#xA;| 3  | hi, I am ron     | 2         |&#xA;+----+------------------+-----------+&#xA;sqlite&gt; SELECT posts.*, users.name AS author FROM posts JOIN users ON posts.author_id = users.id;&#xA;+----+------------------+-----------+--------+&#xA;| id |     content   
   | author_id | author |&#xA;+----+------------------+-----------+--------+&#xA;| 1  | hi, I am harry   | 1         | harry  |&#xA;| 2  | Expecto Patronum | 1         | harry  |&#xA;| 3  | hi, I am ron     | 2         | ron    |&#xA;+----+------------------+-----------+--------+&#xA;sqlite&gt; SELECT * FROM users;&#xA;+----+---------+&#xA;| id |  name   |&#xA;+----+---------+&#xA;| 1  | harry   |&#xA;| 2  | ron     |&#xA;| 5  | neville |&#xA;+----+---------+&#xA;sqlite&gt; DELETE FROM users WHERE id = 2;&#xA;sqlite&gt; SELECT * FROM posts;&#xA;+----+------------------+-----------+&#xA;| id |     content      | author_id |&#xA;+----+------------------+-----------+&#xA;| 1  | hi, I am harry   | 1         |&#xA;| 2  | Expecto Patronum | 1         |&#xA;| 3  | hi, I am ron     |           |&#xA;+----+------------------+-----------+&#xA;sqlite&gt;&#xA;```&#xA;&#xA;&#xA;### Set Default&#xA;&#xA;With this option, when the record in the foreign table is deleted, the foreign key reference of the record in the primary table will be set to the default value of the foreign key column.&#xA;&#xA;NOTE: We had not set a default value for the `author_id` column in the `posts` table, so it would be set to `NULL` by default. We can add normal constraints like `DEFAULT`, `UNIQUE`, `NOT NULL`, `CHECK`, etc. on the foreign key as and when required.&#xA;&#xA;```sql&#xA;DROP TABLE posts;&#xA;&#xA;CREATE TABLE posts (&#xA;    id INTEGER PRIMARY KEY,&#xA;    content TEXT NOT NULL,&#xA;    author_id INTEGER DEFAULT 1,&#xA;    FOREIGN KEY (author_id) REFERENCES users (id) ON DELETE SET DEFAULT&#xA;);&#xA;```&#xA;We are setting the default foreign key to `1`, which means the `users` record with id = 1 will be the default reference for the `posts` table. 
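
One caveat worth knowing, sketched below with a hypothetical default of `99` (an id that has no row in `users`; not from the example above): with `PRAGMA foreign_keys=on;`, the value substituted by `ON DELETE SET DEFAULT` is itself checked against the foreign table, so a dangling default makes the `DELETE` fail.

```sql
-- Sketch with a hypothetical bad default (99 has no matching users.id):
CREATE TABLE posts_bad_default (
    id INTEGER PRIMARY KEY,
    content TEXT NOT NULL,
    author_id INTEGER DEFAULT 99,
    FOREIGN KEY (author_id) REFERENCES users (id) ON DELETE SET DEFAULT
);

-- Deleting a referenced user would now fail, because the fallback
-- value 99 itself violates the FOREIGN KEY constraint:
-- DELETE FROM users WHERE id = 1;
-- Runtime error: FOREIGN KEY constraint failed
```

So the default you pick should be an id that is guaranteed to exist, like a dedicated "unknown author" row.
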
If you don&#39;t provide the `author_id`, all posts will be authored by `harry`; what a funny quirk for the posts.&#xA;&#xA;Let&#39;s add rows as usual:&#xA;&#xA;```sql&#xA;INSERT INTO posts(content, author_id) VALUES (&#39;hi, I am harry&#39;, 1);&#xA;INSERT INTO posts(content, author_id) VALUES (&#39;Expecto Patronum&#39;, 1);&#xA;INSERT INTO posts(content, author_id) VALUES (&#39;hi, I am ron&#39;, 2);&#xA;```&#xA;&#xA;Now, let&#39;s take a look at the posts table:&#xA;&#xA;```sql&#xA;SELECT * FROM posts;&#xA;```&#xA;&#xA;We can now try deleting the `users` record with id = 2. Since it is referenced by the `posts` table, the `users` record will be deleted, but the `author_id` of the referencing `posts` record will be set to the default value `1`, which is the id of the `users` record with name `harry`.&#xA;&#xA;```sql&#xA;DELETE FROM users WHERE id = 2;&#xA;```&#xA;It ran successfully, and the `posts` record that had `author_id` 2 now has the default value `1`.&#xA;&#xA;```sql&#xA;SELECT * FROM posts;&#xA;```&#xA;&#xA;Let&#39;s view more closely:&#xA;&#xA;```sql&#xA;SELECT posts.*, users.name AS author FROM posts JOIN users ON posts.author_id = users.id;&#xA;```&#xA;&#xA;View the SQLog :)&#xA;&#xA;```&#xA;sqlite&gt; DROP TABLE posts;&#xA;&#xA;CREATE TABLE posts (&#xA;    id INTEGER PRIMARY KEY,&#xA;    content TEXT NOT NULL,&#xA;    author_id INTEGER DEFAULT 1,&#xA;    FOREIGN KEY (author_id) REFERENCES users (id) ON DELETE SET DEFAULT&#xA;);&#xA;sqlite&gt; SELECT * FROM users;&#xA;+----+---------+&#xA;| id |  name   |&#xA;+----+---------+&#xA;| 1  | harry   |&#xA;| 2  | ron     |&#xA;| 5  | neville |&#xA;+----+---------+&#xA;sqlite&gt; INSERT INTO posts(content, author_id) VALUES (&#39;hi, I am harry&#39;, 1);&#xA;INSERT INTO posts(content, author_id) VALUES (&#39;Expecto Patronum&#39;, 1);&#xA;INSERT INTO posts(content, author_id) VALUES (&#39;hi, I am ron&#39;, 2);&#xA;sqlite&gt; SELECT * FROM 
posts;&#xA;+----+------------------+-----------+&#xA;| id |     content      | author_id |&#xA;+----+------------------+-----------+&#xA;| 1  | hi, I am harry   | 1         |&#xA;| 2  | Expecto Patronum | 1         |&#xA;| 3  | hi, I am ron     | 2         |&#xA;+----+------------------+-----------+&#xA;sqlite&gt; SELECT posts.*, users.name AS author FROM posts JOIN users ON posts.author_id = users.id;&#xA;+----+------------------+-----------+--------+&#xA;| id |     content      | author_id | author |&#xA;+----+------------------+-----------+--------+&#xA;| 1  | hi, I am harry   | 1         | harry  |&#xA;| 2  | Expecto Patronum | 1         | harry  |&#xA;| 3  | hi, I am ron     | 2         | ron    |&#xA;+----+------------------+-----------+--------+&#xA;sqlite&gt; DELETE FROM users where id = 2;&#xA;sqlite&gt; SELECT * FROM posts;&#xA;+----+------------------+-----------+&#xA;| id |     content      | author_id |&#xA;+----+------------------+-----------+&#xA;| 1  | hi, I am harry   | 1         |&#xA;| 2  | Expecto Patronum | 1         |&#xA;| 3  | hi, I am ron     | 1         |&#xA;+----+------------------+-----------+&#xA;sqlite&gt; SELECT posts.*, users.name AS author FROM posts JOIN users ON posts.author_id = users.id;&#xA;+----+------------------+-----------+--------+&#xA;| id |     content      | author_id | author |&#xA;+----+------------------+-----------+--------+&#xA;| 1  | hi, I am harry   | 1         | harry  |&#xA;| 2  | Expecto Patronum | 1         | harry  |&#xA;| 3  | hi, I am ron     | 1         | harry  |&#xA;+----+------------------+-----------+--------+&#xA;sqlite&gt;&#xA;```&#xA;Let&#39;s take a look at the `CASCADE` option&#xA;&#xA;### Cascade&#xA;&#xA;If we set `ON DELETE CASCADE` to the `FOREIGN KEY` constraint, when the `users` record with id = 2 is deleted, the `posts` record with `author_id` 2 will be deleted as well.&#xA;&#xA;We can set the `CASCADE` option to the `FOREIGN KEY` constraint as follows:&#xA;&#xA;```sql&#xA;DROP TABLE 
posts;&#xA;&#xA;CREATE TABLE posts (&#xA;    id INTEGER PRIMARY KEY,&#xA;    content TEXT NOT NULL,&#xA;    author_id INTEGER,&#xA;    FOREIGN KEY (author_id) REFERENCES users (id) ON DELETE CASCADE&#xA;);&#xA;```&#xA;&#xA;Now, let&#39;s add a few more rows as usual:&#xA;&#xA;```sql&#xA;INSERT INTO posts(content, author_id) VALUES (&#39;hi, I am harry&#39;, 1);&#xA;INSERT INTO posts(content, author_id) VALUES (&#39;Expecto Patronum&#39;, 1);&#xA;INSERT INTO posts(content, author_id) VALUES (&#39;hi, I am ron&#39;, 2);&#xA;```&#xA;&#xA;We now have 3 posts, associated with 2 users.&#xA;&#xA;```sql&#xA;SELECT * FROM posts;&#xA;```&#xA;&#xA;Now, let&#39;s delete the `users` record with id = 2:&#xA;&#xA;```sql&#xA;DELETE FROM users WHERE id = 2;&#xA;```&#xA;&#xA;This will not only delete the `users` record with id = 2, but it will also delete the `posts` record with `author_id` 2.&#xA;&#xA;```sql&#xA;SELECT * FROM posts;&#xA;```&#xA;&#xA;Cascade, as the name suggests: when the referenced row in the foreign table is deleted, the referencing rows in the primary table are deleted along with it.&#xA;&#xA;```&#xA;sqlite&gt; DROP TABLE posts;&#xA;&#xA;CREATE TABLE posts (&#xA;    id INTEGER PRIMARY KEY,&#xA;    content TEXT NOT NULL,&#xA;    author_id INTEGER,&#xA;    FOREIGN KEY (author_id) REFERENCES users (id) ON DELETE CASCADE&#xA;);&#xA;sqlite&gt; SELECT * FROM users;&#xA;+----+---------+&#xA;| id |  name   |&#xA;+----+---------+&#xA;| 1  | harry   |&#xA;| 2  | ron     |&#xA;| 5  | neville |&#xA;+----+---------+&#xA;sqlite&gt; INSERT INTO posts(content, author_id) VALUES (&#39;hi, I am harry&#39;, 1);&#xA;INSERT INTO posts(content, author_id) VALUES (&#39;Expecto Patronum&#39;, 1);&#xA;INSERT INTO posts(content, author_id) VALUES (&#39;hi, I am ron&#39;, 2);&#xA;sqlite&gt; SELECT * FROM posts;&#xA;+----+------------------+-----------+&#xA;| id |     content      | author_id |&#xA;+----+------------------+-----------+&#xA;| 1  | hi, I am harry   | 1         |&#xA;| 2  | Expecto Patronum | 1         |&#xA;| 
3  | hi, I am ron     | 2         |&#xA;+----+------------------+-----------+&#xA;sqlite&gt; DELETE FROM users where id = 2;&#xA;sqlite&gt; SELECT * FROM posts;&#xA;+----+------------------+-----------+&#xA;| id |     content      | author_id |&#xA;+----+------------------+-----------+&#xA;| 1  | hi, I am harry   | 1         |&#xA;| 2  | Expecto Patronum | 1         |&#xA;+----+------------------+-----------+&#xA;sqlite&gt;&#xA;```&#xA;&#xA;## Many Foreign Keys&#xA;&#xA;What happens if more than one row in the primary table references the same id in the foreign table?&#xA;&#xA;That is a question for another post.&#xA;&#xA;</content>
      <type>sqlog</type>
    </item>
    <item>
      <title>SQLite SQL: PRIMARY KEY table constraint</title>
      <link>https://meetgor.com/sqlog/sqlite-primary-key-table-constraint</link>
      <description>The PRIMARY KEY Table constraint The PRIMARY KEY constraint is not actually a column constraint, it is a table constraint. In the previous section we learnt abo</description>
      <pubDate>Sun, 14 Sep 2025 00:00:00 UTC</pubDate>
<content>&#xA;&#xA;## The PRIMARY KEY Table constraint&#xA;&#xA;The PRIMARY KEY constraint is not actually a column constraint, it is a table constraint. In the previous section we learnt how to handle a single column-level `PRIMARY KEY`. In this section we&#39;ll understand how to use `PRIMARY KEY` as a table constraint; with that, we can combine multiple columns into the key.&#xA;&#xA;Since `PRIMARY KEY` is a table-level constraint, it can only be defined once for the table, as it needs to be unique for each record inserted into that table. So, we can define it with the column or define it at the end; it doesn&#39;t matter, but it must be defined only once.&#xA;&#xA;```sql&#xA;CREATE TABLE users (&#xA;    user_id INTEGER PRIMARY KEY,&#xA;    username TEXT NOT NULL,&#xA;    email TEXT NOT NULL&#xA;);&#xA;```&#xA;This is equivalent to:&#xA;&#xA;```sql&#xA;CREATE TABLE users (&#xA;    user_id INTEGER,&#xA;    username TEXT NOT NULL,&#xA;    email TEXT NOT NULL,&#xA;    PRIMARY KEY (user_id)&#xA;);&#xA;```&#xA;&#xA;Both of these set the column `user_id` as the `PRIMARY KEY` for the table `users`. We saw the first form in the previous blog post; in this one we will do something different.&#xA;&#xA;## Multiple column PRIMARY KEY&#xA;&#xA;The schema design looks a little fragile; we can have `username` and `email` as duplicates, right? 
But we have `user_id` as unique.&#xA;&#xA;Rather, we want the combination of `username` and `email` to be unique. Do we care about `user_id` as the `PRIMARY KEY`?&#xA;&#xA;It depends:&#xA;- If your application logic only wants the `username` and `email` combination to be unique, then `user_id` is not required.&#xA;- If your application logic only requires the `user_id`, then it doesn&#39;t matter if the `username` or `email` are duplicated; the `user_id` will be unique.&#xA;&#xA;So, assuming the 1st scenario, let&#39;s change the users table schema:&#xA;&#xA;```sql&#xA;CREATE TABLE users (&#xA;    username TEXT NOT NULL,&#xA;    email TEXT NOT NULL,&#xA;    PRIMARY KEY (username, email)&#xA;);&#xA;```&#xA;&#xA;Here we have defined `username` and `email` as the `PRIMARY KEY` for the table `users`. This takes the combination of the `username` and `email` columns and adds a `UNIQUE` constraint to it. There is one more addition we made, which is `NOT NULL`: since both columns are `TEXT`, we need to add the `NOT NULL` constraint ourselves; for an `INTEGER` `PRIMARY KEY` it would be handled for us.&#xA;&#xA;Let&#39;s try inserting a few default values:&#xA;&#xA;```sql&#xA;INSERT INTO users DEFAULT VALUES;&#xA;```&#xA;&#xA;This won&#39;t work, as the `username` and `email` columns would be `NULL`, and both have `NOT NULL` constraints defined on them.&#xA;&#xA;Let&#39;s try inserting actual values:&#xA;&#xA;```sql&#xA;INSERT INTO users(username, email) VALUES(&#39;john&#39;, &#39;john@abc.com&#39;);&#xA;```&#xA;&#xA;This successfully inserts the row into the table.&#xA;&#xA;And if we try to insert it again, the duplicate combination of `username` and `email` will throw an error:&#xA;&#xA;```sql&#xA;INSERT INTO users(username, email) VALUES(&#39;john&#39;, &#39;john@abc.com&#39;);&#xA;```&#xA;&#xA;So, we need either the `username` or the `email` to differ; it is the pair as a whole that must be unique, not each column on its own.&#xA;&#xA;```sql&#xA;INSERT INTO users(username, email) VALUES(&#39;john&#39;, 
&#39;john_new@abc.com&#39;);&#xA;```&#xA;&#xA;This successfully inserts the row into the table, as the `username` is the same but the `email` is different. Since the `PRIMARY KEY` is a combination of `username` and `email`, the combination being unique allows the new record.&#xA;&#xA;```sql&#xA;SELECT * FROM users;&#xA;```&#xA;&#xA;Here&#39;s the SQLog :)&#xA;&#xA;```&#xA;sqlite&gt; CREATE TABLE users (&#xA;    username TEXT NOT NULL,&#xA;    email TEXT NOT NULL,&#xA;    PRIMARY KEY (username, email)&#xA;);&#xA;sqlite&gt; INSERT INTO users DEFAULT VALUES;&#xA;Runtime error: NOT NULL constraint failed: users.username (19)&#xA;sqlite&gt; INSERT INTO users(username, email) VALUES(&#39;john&#39;, &#39;john@abc.com&#39;);&#xA;sqlite&gt; SELECT * FROM users;&#xA;+----------+--------------+&#xA;| username |    email     |&#xA;+----------+--------------+&#xA;| john     | john@abc.com |&#xA;+----------+--------------+&#xA;sqlite&gt; SELECT rowid, * FROM users;&#xA;+-------+----------+--------------+&#xA;| rowid | username |    email     |&#xA;+-------+----------+--------------+&#xA;| 1     | john     | john@abc.com |&#xA;+-------+----------+--------------+&#xA;sqlite&gt; INSERT INTO users(username, email) VALUES(&#39;john&#39;, &#39;john@abc.com&#39;);&#xA;Runtime error: UNIQUE constraint failed: users.username, users.email (19)&#xA;sqlite&gt; INSERT INTO users(username, email) VALUES(&#39;john&#39;, &#39;john_new@abc.com&#39;);&#xA;sqlite&gt; SELECT * FROM users;&#xA;+----------+------------------+&#xA;| username |      email       |&#xA;+----------+------------------+&#xA;| john     | john@abc.com     |&#xA;| john     | john_new@abc.com |&#xA;+----------+------------------+&#xA;sqlite&gt; SELECT rowid, * FROM users;&#xA;+-------+----------+------------------+&#xA;| rowid | username |      email       |&#xA;+-------+----------+------------------+&#xA;| 1     | john     | john@abc.com     |&#xA;| 2     | john     | john_new@abc.com 
|&#xA;+-------+----------+------------------+&#xA;sqlite&gt;&#xA;```&#xA;&#xA;&#xA;I think that is all we need about `PRIMARY KEY` for the moment. If you need more specific examples, we will be exploring `AUTOINCREMENT`, `NOT NULL` cases for `PRIMARY KEY`s, and other schema design principles in separate sections.&#xA;&#xA;</content>
      <type>sqlog</type>
    </item>
    <item>
      <title>SQLite SQL: PRIMARY KEY column constraint</title>
      <link>https://meetgor.com/sqlog/sqlite-primary-key-column-constraint</link>
      <description>The PRIMARY KEY column constraint The PRIMARY KEY constraint is not actually a column constraint, it is a table constraint. But in this section, we&#39;ll only lear</description>
      <pubDate>Sat, 13 Sep 2025 00:00:00 UTC</pubDate>
      <content>&#xA;&#xA;## The PRIMARY KEY column constraint&#xA;&#xA;The PRIMARY KEY constraint is not actually a column constraint, it is a table constraint. But in this section, we&#39;ll only learn about how to handle single column-level `PRIMARY KEY`.&#xA;&#xA;The [PRIMARY KEY](https://sqlite.org/lang_createtable.html#the_primary_key) column constraint is a constraint that ensures that a column contains unique values and is the `PRIMARY` way to distinguish between all the rows of that table.&#xA;&#xA;From the documentation:&#xA;&#xA;&gt; Each table in SQLite may have at most one PRIMARY KEY. If the keywords PRIMARY KEY are added to a column definition, then the primary key for the table consists of that single column. Or, if a PRIMARY KEY clause is specified as a table-constraint, then the primary key of the table consists of the list of columns specified as part of the PRIMARY KEY clause. The PRIMARY KEY clause must contain only column names — the use of expressions in an indexed-column of a PRIMARY KEY is not supported. An error is raised if more than one PRIMARY KEY clause appears in a CREATE TABLE statement. The PRIMARY KEY is optional for ordinary tables but is required for WITHOUT ROWID tables.&#xA;&gt; If a table has a single column primary key and the declared type of that column is &#34;INTEGER&#34; and the table is not a WITHOUT ROWID table, then the column is known as an INTEGER PRIMARY KEY. See below for a description of the special properties and behaviors associated with an INTEGER PRIMARY KEY.&#xA;&gt; Each row in a table with a primary key must have a unique combination of values in its primary key columns. For the purposes of determining the uniqueness of primary key values, NULL values are considered distinct from all other values, including other NULLs. 
If an INSERT or UPDATE statement attempts to modify the table content so that two or more rows have identical primary key values, that is a constraint violation.&#xA;&gt; According to the SQL standard, PRIMARY KEY should always imply NOT NULL. Unfortunately, due to a bug in some early versions, this is not the case in SQLite. Unless the column is an INTEGER PRIMARY KEY or the table is a WITHOUT ROWID table or a STRICT table or the column is declared NOT NULL, SQLite allows NULL values in a PRIMARY KEY column. SQLite could be fixed to conform to the standard, but doing so might break legacy applications. Hence, it has been decided to merely document the fact that SQLite allows NULLs in most PRIMARY KEY columns.&#xA;&#xA;Here is what I think in short:&#xA;&#xA;- Used to identify the rows of a table. Hence must be unique&#xA;- At most one primary key per table (defaults to rowid; multiple primary keys not allowed)&#xA;- Primary key can be a combination of multiple columns or even a single column&#xA;- Only column names allowed (not expressions)&#xA;- Required for WITHOUT ROWID tables&#xA;- INTEGER PRIMARY KEY special behavior&#xA;- NULL behavior (SQLite quirk vs SQL standard)&#xA;&#xA;## Creating a simple table&#xA;&#xA;By default, if you don&#39;t create a column in a table as a `PRIMARY KEY`, it is automatically assumed that `rowid` will be used as the unique identifier for that table.&#xA;&#xA;Let&#39;s create a simple table with columns like `username` and `email`, none of which is explicitly marked as `PRIMARY KEY`.&#xA;&#xA;```sql&#xA;CREATE TABLE users (&#xA;    username TEXT NOT NULL,&#xA;    email TEXT NOT NULL&#xA;);&#xA;```&#xA;Now, let&#39;s insert a few rows and check what we can observe.&#xA;&#xA;```sql&#xA;INSERT INTO users(username, email) VALUES(&#39;jim&#39;, &#39;jim@abc.com&#39;);&#xA;INSERT INTO users(username, email) VALUES(&#39;jack&#39;, &#39;jack@abc.com&#39;);&#xA;SELECT * FROM users;&#xA;```&#xA;&#xA;As you can see, we have populated the table with 
`username` and `email` columns.&#xA;&#xA;But wait, there should be a `rowid`, right? It is there, but it&#39;s a hidden column (do not touch it).&#xA;&#xA;```sql&#xA;SELECT rowid, * FROM users;&#xA;```&#xA;You can see the `rowid` column in the output.&#xA;&#xA;```&#xA;sqlite&gt; CREATE TABLE users (&#xA;    username TEXT NOT NULL,&#xA;    email TEXT NOT NULL&#xA;);&#xA;sqlite&gt; .mode table&#xA;sqlite&gt; INSERT INTO users(username, email) VALUES(&#39;jim&#39;, &#39;jim@abc.com&#39;);&#xA;sqlite&gt; INSERT INTO users(username, email) VALUES(&#39;jack&#39;, &#39;jack@abc.com&#39;);&#xA;sqlite&gt; SELECT * FROM users;&#xA;+----------+--------------+&#xA;| username |    email     |&#xA;+----------+--------------+&#xA;| jim      | jim@abc.com  |&#xA;| jack     | jack@abc.com |&#xA;+----------+--------------+&#xA;sqlite&gt; SELECT rowid, * FROM users;&#xA;+-------+----------+--------------+&#xA;| rowid | username |    email     |&#xA;+-------+----------+--------------+&#xA;| 1     | jim      | jim@abc.com  |&#xA;| 2     | jack     | jack@abc.com |&#xA;+-------+----------+--------------+&#xA;sqlite&gt;&#xA;```&#xA;&#xA;The `rowid` acts as the primary key for the table; for each new row, SQLite takes the maximum `rowid` in the current state of the table and uses the next number.&#xA;&#xA;We can verify that if we try to insert a duplicate row into the `users` table.&#xA;&#xA;Let&#39;s insert `jack` as the user again and check if it gets inserted; this would have errored if the username or the email were the primary key, as this value is duplicated. 
But the `username` and `email` are not the `PRIMARY KEY` in this table, hence it will happily insert it.&#xA;&#xA;```sql&#xA;INSERT INTO users(username, email) VALUES(&#39;jack&#39;, &#39;jack@abc.com&#39;);&#xA;SELECT rowid, * FROM users;&#xA;```&#xA;&#xA;As you can see, it inserted the value and the `rowid` is now `3`.&#xA;&#xA;```&#xA;sqlite&gt; SELECT rowid, * FROM users;&#xA;+-------+----------+--------------+&#xA;| rowid | username |    email     |&#xA;+-------+----------+--------------+&#xA;| 1     | jim      | jim@abc.com  |&#xA;| 2     | jack     | jack@abc.com |&#xA;+-------+----------+--------------+&#xA;sqlite&gt; INSERT INTO users(username, email) VALUES(&#39;jack&#39;, &#39;jack@abc.com&#39;);&#xA;sqlite&gt; SELECT rowid, * FROM users;&#xA;+-------+----------+--------------+&#xA;| rowid | username |    email     |&#xA;+-------+----------+--------------+&#xA;| 1     | jim      | jim@abc.com  |&#xA;| 2     | jack     | jack@abc.com |&#xA;| 3     | jack     | jack@abc.com |&#xA;+-------+----------+--------------+&#xA;sqlite&gt;&#xA;```&#xA;&#xA;What would happen if the middle values are deleted?&#xA;&#xA;Let&#39;s delete the row with `rowid` `2`:&#xA;&#xA;```sql&#xA;DELETE FROM users WHERE rowid = 2;&#xA;```&#xA;&#xA;```sql&#xA;SELECT rowid, * FROM users;&#xA;```&#xA;&#xA;As you can see, the row with `rowid` `2` is deleted. 
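&#xA;If you want to peek at what the next `rowid` is likely to be, you can check the current maximum yourself (a quick sanity check; SQLite computes this internally):&#xA;&#xA;```sql&#xA;SELECT max(rowid) FROM users;&#xA;-- returns 3 at this point, so the next insert should get rowid 4&#xA;```&#xA;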
The `rowid` will still take the max of the current state of the table and use the next number as the `rowid`, which in this state should be `4`.&#xA;&#xA;Let&#39;s insert a new record and check if the `rowid` is one past the max of the current state of the table.&#xA;&#xA;```sql&#xA;INSERT INTO users(username, email) VALUES(&#39;jill&#39;, &#39;jill@abc.com&#39;);&#xA;```&#xA;&#xA;```sql&#xA;SELECT rowid, * FROM users;&#xA;```&#xA;&#xA;As you can see, the `rowid` is now `4`.&#xA;&#xA;Let&#39;s delete that and insert one more.&#xA;&#xA;```sql&#xA;DELETE FROM users WHERE rowid = 4;&#xA;```&#xA;&#xA;```sql&#xA;SELECT rowid, * FROM users;&#xA;```&#xA;&#xA;Now the max `rowid` of the table is `3`, but we had `4` earlier; what will the next `rowid` be?&#xA;&#xA;```sql&#xA;INSERT INTO users(username, email) VALUES(&#39;joe&#39;, &#39;joe@abc.com&#39;);&#xA;SELECT rowid, * FROM users;&#xA;```&#xA;&#xA;Ok, as you can see, the `rowid` is now `4` again. It is reusing the previous `rowid` value, which might very rarely be an issue, but this is expected behaviour.&#xA;&#xA;So there is a quirk (or limitation) of `rowid`: the next value is always one more than the max in the current state, so values freed by deleting the highest rows can be reused.&#xA;&#xA;```&#xA;sqlite&gt; SELECT rowid, * FROM users;&#xA;+-------+----------+--------------+&#xA;| rowid | username |    email     |&#xA;+-------+----------+--------------+&#xA;| 1     | jim      | jim@abc.com  |&#xA;| 2     | jack     | jack@abc.com |&#xA;| 3     | jack     | jack@abc.com |&#xA;+-------+----------+--------------+&#xA;sqlite&gt; DELETE FROM users WHERE rowid = 2;&#xA;sqlite&gt; SELECT rowid, * FROM users;&#xA;+-------+----------+--------------+&#xA;| rowid | username |    email     |&#xA;+-------+----------+--------------+&#xA;| 1     | jim      | jim@abc.com  |&#xA;| 3     | jack     | jack@abc.com |&#xA;+-------+----------+--------------+&#xA;sqlite&gt; INSERT INTO users(username, email) VALUES(&#39;jill&#39;, 
&#39;jill@abc.com&#39;);&#xA;sqlite&gt; SELECT rowid, * FROM users;&#xA;+-------+----------+--------------+&#xA;| rowid | username |    email     |&#xA;+-------+----------+--------------+&#xA;| 1     | jim      | jim@abc.com  |&#xA;| 3     | jack     | jack@abc.com |&#xA;| 4     | jill     | jill@abc.com |&#xA;+-------+----------+--------------+&#xA;sqlite&gt; DELETE FROM users WHERE rowid = 4;&#xA;sqlite&gt; SELECT rowid, * FROM users;&#xA;+-------+----------+--------------+&#xA;| rowid | username |    email     |&#xA;+-------+----------+--------------+&#xA;| 1     | jim      | jim@abc.com  |&#xA;| 3     | jack     | jack@abc.com |&#xA;+-------+----------+--------------+&#xA;sqlite&gt; INSERT INTO users(username, email) VALUES(&#39;joe&#39;, &#39;joe@abc.com&#39;);&#xA;sqlite&gt; SELECT rowid, * FROM users;&#xA;+-------+----------+--------------+&#xA;| rowid | username |    email     |&#xA;+-------+----------+--------------+&#xA;| 1     | jim      | jim@abc.com  |&#xA;| 3     | jack     | jack@abc.com |&#xA;| 4     | joe      | joe@abc.com  |&#xA;+-------+----------+--------------+&#xA;sqlite&gt;&#xA;```&#xA;&#xA;Ok, that was a weird tangent, but worth noting.&#xA;We will check how to address this with the `AUTOINCREMENT` clause in a future post.&#xA;&#xA;Now, let&#39;s create a `PRIMARY KEY` column that we can safely call a primary key.&#xA;&#xA;## Creating a column as primary key&#xA;&#xA;To create a column as a `PRIMARY KEY`, we simply add the `PRIMARY KEY` constraint to the column. &#xA;&#xA;Let&#39;s create a simple table with columns like `username` and `email`, where `email` is the primary key.&#xA;&#xA;```sql&#xA;CREATE TABLE users (&#xA;    username TEXT NOT NULL,&#xA;    email TEXT PRIMARY KEY&#xA;);&#xA;```&#xA;&#xA;Mentioning `PRIMARY KEY` with `email` means that each record needs to have a unique `email`.&#xA;&#xA;If you attach a `PRIMARY KEY` to a column, the `UNIQUE` constraint is also added automatically. 
So, saying `email TEXT PRIMARY KEY UNIQUE` is redundant.&#xA;&#xA;Let&#39;s try inserting default values into users.&#xA;&#xA;```sql&#xA;INSERT INTO users DEFAULT VALUES;&#xA;```&#xA;As you can see, this will fail, as `username` is a `NOT NULL` column.&#xA;&#xA;OK, let&#39;s just satisfy that `NOT NULL` constraint for username.&#xA;&#xA;```sql&#xA;INSERT INTO users(username) VALUES(&#39;james&#39;);&#xA;```&#xA;OK, that inserted the value.&#xA;&#xA;```sql&#xA;SELECT * FROM users;&#xA;```&#xA;&#xA;But the email is empty! Because we didn&#39;t specify it? Yes, it is a text field and a primary key, so you need to handle that yourself. What about the rowid?&#xA;&#xA;```sql&#xA;SELECT rowid, * FROM users;&#xA;```&#xA;&#xA;As you can see, the rowid is `1` and the email is empty.&#xA;&#xA;It&#39;s worth noting that `rowid` is populated independently of the `PRIMARY KEY` column. But if the table is created as `WITHOUT ROWID`, it won&#39;t be there.&#xA;&#xA;Now, let&#39;s insert the same record again. We have said the `email` is a `PRIMARY KEY`, so it will be unique right? RIGHT?&#xA;&#xA;```sql&#xA;INSERT INTO users(username) VALUES(&#39;james&#39;);&#xA;```&#xA;&#xA;Surprisingly, it inserted that too.&#xA;&#xA;```sql&#xA;SELECT * FROM users;&#xA;```&#xA;The email is empty for both rows and the username is the same; the `email` is NULL, and you guessed it: NULL values are considered distinct from each other, so the uniqueness check passes. That is bad schema design. 
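&#xA;You can see the quirk for yourself by counting the rows that slipped in with a NULL primary key (a quick check against the table we just built):&#xA;&#xA;```sql&#xA;SELECT count(*) FROM users WHERE email IS NULL;&#xA;-- returns 2: both rows passed the uniqueness check despite having NULL keys&#xA;```&#xA;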
The users table has a TEXT column as its primary key, so we need to add `NOT NULL` to the `PRIMARY KEY` column ourselves.&#xA;&#xA;```&#xA;sqlite&gt; CREATE TABLE users (&#xA;    username TEXT NOT NULL,&#xA;    email TEXT PRIMARY KEY&#xA;);&#xA;sqlite&gt; INSERT INTO users DEFAULT VALUES;&#xA;Runtime error: NOT NULL constraint failed: users.username (19)&#xA;sqlite&gt; INSERT INTO users(username) VALUES(&#39;james&#39;);&#xA;sqlite&gt; SELECT * FROM users;&#xA;+----------+-------+&#xA;| username | email |&#xA;+----------+-------+&#xA;| james    |       |&#xA;+----------+-------+&#xA;sqlite&gt; INSERT INTO users(username) VALUES(&#39;james&#39;);&#xA;sqlite&gt; SELECT * FROM users;&#xA;+----------+-------+&#xA;| username | email |&#xA;+----------+-------+&#xA;| james    |       |&#xA;| james    |       |&#xA;+----------+-------+&#xA;sqlite&gt; INSERT INTO users(username) VALUES(&#39;jack&#39;);&#xA;sqlite&gt; SELECT * FROM users;&#xA;+----------+-------+&#xA;| username | email |&#xA;+----------+-------+&#xA;| james    |       |&#xA;| james    |       |&#xA;| jack     |       |&#xA;+----------+-------+&#xA;sqlite&gt;&#xA;```&#xA;&#xA;&gt; NOTE: NOT NULL is only added automatically for an `INTEGER PRIMARY KEY` column, in which case the column becomes an alias for the `rowid` column.&#xA; &#xA;## PRIMARY KEY with NOT NULL&#xA;&#xA;It turns out that `NOT NULL` is only added automatically for an `INTEGER PRIMARY KEY` column, so for every other kind of `PRIMARY KEY` column we need to explicitly add the `NOT NULL` constraint. &#xA;&#xA;So, let&#39;s add that:&#xA;&#xA;```sql&#xA;DROP TABLE users;&#xA;CREATE TABLE users (&#xA;    username TEXT NOT NULL,&#xA;    email TEXT PRIMARY KEY NOT NULL&#xA;);&#xA;```&#xA;&#xA;Now, let&#39;s try inserting default values into users.&#xA;&#xA;```sql&#xA;INSERT INTO users DEFAULT VALUES;&#xA;```&#xA;&#xA;As you can see, this will fail, as `username` is a `NOT NULL` column. 
As we saw in the first attempt.&#xA;&#xA;```sql&#xA;INSERT INTO users(username) VALUES(&#39;james&#39;);&#xA;```&#xA;This fails, as expected: we have not specified the `email` column, which is the primary key and now cannot be NULL.&#xA;&#xA;```sql&#xA;INSERT INTO users(username, email) VALUES(&#39;james&#39;, &#39;james@abc.com&#39;);&#xA;```&#xA;Ok, this should insert successfully. &#xA;&#xA;```sql&#xA;SELECT rowid, * FROM users;&#xA;```&#xA;As you can see, we have populated the table with the `username` and `email` columns. This is the proper setup.&#xA;&#xA;The `rowid` will still be there, as discussed; the `email` is now the `PRIMARY KEY`, which means any duplicate email will be a constraint violation.&#xA;&#xA;```sql&#xA;INSERT INTO users(username, email) VALUES(&#39;james&#39;, &#39;james@abc.com&#39;);&#xA;```&#xA;As you can see, this will fail, as we have already inserted a record with the same email. 
This is expected, as we noted that `PRIMARY KEY` automatically adds the `UNIQUE` constraint to the column.&#xA;&#xA;Similarly, we can check a few other things.&#xA;&#xA;Like if we try to insert a record with the same username but a different email:&#xA;&#xA;```sql&#xA;INSERT INTO users(username, email) VALUES(&#39;james&#39;, &#39;james_new@abc.com&#39;);&#xA;```&#xA;&#xA;This will succeed, as `username` is not the primary key, and it is ok for it to have duplicate values.&#xA;&#xA;&#xA;```&#xA;sqlite&gt; CREATE TABLE users (&#xA;    username TEXT NOT NULL,&#xA;    email TEXT PRIMARY KEY NOT NULL&#xA;);&#xA;&#xA;sqlite&gt; INSERT INTO users DEFAULT VALUES;&#xA;Runtime error: NOT NULL constraint failed: users.username (19)&#xA;&#xA;sqlite&gt; INSERT INTO users(username) VALUES(&#39;james&#39;);&#xA;Runtime error: NOT NULL constraint failed: users.email (19)&#xA;&#xA;sqlite&gt; INSERT INTO users(username, email) VALUES(&#39;james&#39;, &#39;james@abc.com&#39;);&#xA;&#xA;sqlite&gt; SELECT rowid, * FROM users;&#xA;+-------+----------+---------------+&#xA;| rowid | username |     email     |&#xA;+-------+----------+---------------+&#xA;| 1     | james    | james@abc.com |&#xA;+-------+----------+---------------+&#xA;&#xA;sqlite&gt; INSERT INTO users(username, email) VALUES(&#39;james&#39;, &#39;james@abc.com&#39;);&#xA;Runtime error: UNIQUE constraint failed: users.email (19)&#xA;&#xA;sqlite&gt; INSERT INTO users(username, email) VALUES(&#39;james&#39;, &#39;james_new@abc.com&#39;);&#xA;&#xA;sqlite&gt; SELECT rowid, * FROM users;&#xA;+-------+----------+-------------------+&#xA;| rowid | username |       email       |&#xA;+-------+----------+-------------------+&#xA;| 1     | james    | james@abc.com     |&#xA;| 2     | james    | james_new@abc.com |&#xA;+-------+----------+-------------------+&#xA;&#xA;sqlite&gt; INSERT INTO users(username, email) VALUES(&#39;jill&#39;, &#39;james_new@abc.com&#39;);&#xA;Runtime error: UNIQUE constraint failed: 
users.email (19)&#xA;&#xA;sqlite&gt; INSERT INTO users(username, email) VALUES(&#39;jill&#39;, &#39;jill@abc.com&#39;);&#xA;&#xA;sqlite&gt; SELECT rowid, * FROM users;&#xA;+-------+----------+-------------------+&#xA;| rowid | username |       email       |&#xA;+-------+----------+-------------------+&#xA;| 1     | james    | james@abc.com     |&#xA;| 2     | james    | james_new@abc.com |&#xA;| 3     | jill     | jill@abc.com      |&#xA;+-------+----------+-------------------+&#xA;sqlite&gt;&#xA;```&#xA;&#xA;That is how the primary key works, at least the basics of it.&#xA;&#xA;## INTEGER PRIMARY KEY&#xA;&#xA;This is a special case where a column defined as `PRIMARY KEY` with the INTEGER type becomes an alias for the `rowid`, and hence both the `UNIQUE` and `NOT NULL` constraints are added to it.&#xA;&#xA;Let&#39;s create a table with an `INTEGER PRIMARY KEY` column.&#xA;&#xA;```sql&#xA;CREATE TABLE users (&#xA;    user_id INTEGER PRIMARY KEY,&#xA;    username TEXT NOT NULL,&#xA;    email TEXT NOT NULL&#xA;);&#xA;```&#xA;In this `users` table we have `user_id`, which is an `INTEGER PRIMARY KEY` column. It is an alias for the `rowid` column, as we&#39;ll see.&#xA;&#xA;Now, let&#39;s insert some data and check what we can observe.&#xA;&#xA;```sql&#xA;INSERT INTO users DEFAULT VALUES;&#xA;```&#xA;Obviously, this will fail, as `username` and `email` are `NOT NULL` columns and we haven&#39;t specified values for them.&#xA;&#xA;Let&#39;s specify both and check if it violates any constraints.&#xA;&#xA;```sql&#xA;INSERT INTO users(username, email) VALUES(&#39;james&#39;, &#39;james@abc.com&#39;);&#xA;```&#xA;&#xA;As you can see, we successfully inserted the record: we specified the `NOT NULL` columns `username` and `email`, and the primary key `user_id` gets populated automatically. 
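&#xA;As an aside, you can also supply the `user_id` explicitly; later automatic values simply continue from the new maximum (a small sketch with hypothetical rows; try it on a separate copy of the table, since these rows are not part of the running example):&#xA;&#xA;```sql&#xA;-- hypothetical rows, just to show explicit ids&#xA;INSERT INTO users(user_id, username, email) VALUES(42, &#39;jane&#39;, &#39;jane@abc.com&#39;);&#xA;INSERT INTO users(username, email) VALUES(&#39;joe&#39;, &#39;joe@abc.com&#39;);&#xA;-- joe gets user_id 43, one past the current maximum&#xA;```&#xA;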
The `user_id` is populated automatically because it is an alias for the `rowid` column, which, as we know, is populated for each inserted row by incrementing the max rowid at that point.&#xA;&#xA;```sql&#xA;SELECT * FROM users;&#xA;```&#xA;&#xA;As you can see, the `user_id` is `1`, the `username` is `james` and the `email` is `james@abc.com`. But what about the `rowid`?&#xA;&#xA;Let&#39;s check that too.&#xA;&#xA;```sql&#xA;SELECT rowid, * FROM users;&#xA;```&#xA;&#xA;As you can see, the `rowid` column is displayed as `user_id`, since it is basically just an alias for the `rowid`.&#xA;&#xA;Let&#39;s see if we can add more records.&#xA;&#xA;Let&#39;s try adding the same record again.&#xA;&#xA;```sql&#xA;INSERT INTO users(username, email) VALUES(&#39;james&#39;, &#39;james@abc.com&#39;);&#xA;```&#xA;Interestingly, this works, because `username` and `email` are not declared `UNIQUE` and the `user_id` primary key is generated automatically, so inserting a duplicate username and email does not violate any constraint.&#xA;&#xA;```&#xA;sqlite&gt; CREATE TABLE users (&#xA;    user_id INTEGER PRIMARY KEY,&#xA;    username TEXT NOT NULL,&#xA;    email TEXT NOT NULL&#xA;);&#xA;sqlite&gt; INSERT INTO users DEFAULT VALUES;&#xA;Runtime error: NOT NULL constraint failed: users.username (19)&#xA;sqlite&gt; INSERT INTO users(username) VALUES(&#39;james&#39;);&#xA;Runtime error: NOT NULL constraint failed: users.email (19)&#xA;sqlite&gt; INSERT INTO users(username, email) VALUES(&#39;james&#39;, &#39;james@abc.com&#39;);&#xA;sqlite&gt; SELECT * FROM users;&#xA;+---------+----------+---------------+&#xA;| user_id | username |     email     |&#xA;+---------+----------+---------------+&#xA;| 1       | james    | james@abc.com |&#xA;+---------+----------+---------------+&#xA;sqlite&gt; SELECT rowid, * FROM users;&#xA;+---------+---------+----------+---------------+&#xA;| user_id | user_id | username |     email     
|&#xA;+---------+---------+----------+---------------+&#xA;| 1       | 1       | james    | james@abc.com |&#xA;+---------+---------+----------+---------------+&#xA;sqlite&gt; INSERT INTO users(username, email) VALUES(&#39;james&#39;, &#39;james@abc.com&#39;);&#xA;sqlite&gt; SELECT rowid, * FROM users;&#xA;+---------+---------+----------+---------------+&#xA;| user_id | user_id | username |     email     |&#xA;+---------+---------+----------+---------------+&#xA;| 1       | 1       | james    | james@abc.com |&#xA;| 2       | 2       | james    | james@abc.com |&#xA;+---------+---------+----------+---------------+&#xA;sqlite&gt; INSERT INTO users(username, email) VALUES(&#39;jill&#39;, &#39;jill@abc.com&#39;);&#xA;sqlite&gt; SELECT rowid, * FROM users;&#xA;+---------+---------+----------+---------------+&#xA;| user_id | user_id | username |     email     |&#xA;+---------+---------+----------+---------------+&#xA;| 1       | 1       | james    | james@abc.com |&#xA;| 2       | 2       | james    | james@abc.com |&#xA;| 3       | 3       | jill     | jill@abc.com  |&#xA;+---------+---------+----------+---------------+&#xA;sqlite&gt;&#xA;```&#xA;So that is how `INTEGER PRIMARY KEY` works.&#xA;&#xA;You can add `PRIMARY KEY` to any column, but you need to ensure it doesn&#39;t end up allowing duplicate entries.&#xA;&#xA;We know that `PRIMARY KEY` is not really a column-specific constraint; it is a table-wide constraint. We&#39;ll check out the table-level, multi-column `PRIMARY KEY` constraint in the next section.&#xA;</content>
      <type>sqlog</type>
    </item>
    <item>
      <title>SQLite SQL: CHECK column constraint</title>
      <link>https://meetgor.com/sqlog/sqlite-check-column-constraint</link>
      <description>The CHECK column constraint The clause is a column constraint that allows us to define certain conditions that we want to evaluate before inserting the column f</description>
      <pubDate>Fri, 12 Sep 2025 00:00:00 UTC</pubDate>
<content>&#xA;&#xA;## The CHECK column constraint&#xA;&#xA;The [CHECK](https://sqlite.org/lang_createtable.html#check_constraints) clause is a column constraint that allows us to define conditions that are evaluated before a row is inserted or updated. In this post, we will check what this check is and when it is performed.&#xA;&#xA;## What it checks&#xA;&#xA;The `CHECK` constraint basically acts as a validator: we define the condition, and if the condition is not met (is false), the row is not inserted or updated.&#xA;&#xA;Let&#39;s take the simplest example with our good-ol users table:&#xA;&#xA;```sql&#xA;CREATE TABLE users (&#xA;    id INTEGER PRIMARY KEY,&#xA;    name TEXT NOT NULL,&#xA;    age INTEGER CHECK(age &gt;= 18)&#xA;);&#xA;```&#xA;&#xA;Let&#39;s insert a user:&#xA;&#xA;```sql&#xA;INSERT INTO users (name, age) VALUES (&#39;John&#39;, 21);&#xA;```&#xA;&#xA;```sql&#xA;SELECT * FROM users;&#xA;```&#xA;This rightly populates the table with the user `John`, age `21`.&#xA;&#xA;If we try to insert a user with an age less than 18, it will fail:&#xA;&#xA;```sql&#xA;INSERT INTO users (name, age) VALUES (&#39;Alice&#39;, 17);&#xA;```&#xA;This should fail the constraint, as the age is less than 18.&#xA;&#xA;```&#xA;sqlite&gt; INSERT INTO users (name, age) VALUES (&#39;Alice&#39;, 17);&#xA;Runtime error: CHECK constraint failed: age &gt;= 18 (19)&#xA;sqlite&gt;&#xA;```&#xA;As you would see, the insertion query failed as a `CHECK` constraint failed.&#xA;&#xA;Let&#39;s add one more valid user:&#xA;&#xA;```sql&#xA;INSERT INTO users (name, age) VALUES (&#39;Bob&#39;, 20);&#xA;```&#xA;&#xA;This is fine, and now, let&#39;s try to update the age of a user:&#xA;&#xA;```sql&#xA;UPDATE users SET age = 17 WHERE id = 2;&#xA;```&#xA;&#xA;This should fail the constraint, as the age is less than 18.&#xA;&#xA;```&#xA;sqlite&gt; INSERT INTO users (name, age) VALUES (&#39;Bob&#39;, 20);&#xA;sqlite&gt; SELECT * FROM 
users;&#xA;+----+------+-----+&#xA;| id | name | age |&#xA;+----+------+-----+&#xA;| 1  | John | 21  |&#xA;| 2  | Bob  | 20  |&#xA;+----+------+-----+&#xA;&#xA;sqlite&gt; UPDATE users SET age = 17 where id = 2;&#xA;Runtime error: CHECK constraint failed: age &gt;= 18 (19)&#xA;sqlite&gt;&#xA;```&#xA;&#xA;As you would see, the update query failed as a `CHECK` constraint failed.&#xA;&#xA;## When is it checked?&#xA;&#xA;As we saw, it is checked before the row is inserted or updated.&#xA;&#xA;## Disabling the CHECK constraint&#xA;&#xA;We can disable the `CHECK` constraints on all the tables in the database by setting the `ignore_check_constraints` `PRAGMA` to `TRUE`:&#xA;&#xA;```sql&#xA;PRAGMA ignore_check_constraints = TRUE;&#xA;```&#xA;&#xA;This will disable all the `CHECK` constraints on all the tables in the database.&#xA;&#xA;Let&#39;s again try updating the user&#39;s age to less than 18:&#xA;&#xA;```sql&#xA;UPDATE users SET age = 17 WHERE id = 2;&#xA;```&#xA;&#xA;This time, it readily updates the age to less than 18 without failing the constraint, which means the constraint was disabled.&#xA;&#xA;```&#xA;sqlite&gt; UPDATE users SET age = 17 where id = 2;&#xA;Runtime error: CHECK constraint failed: age &gt;= 18 (19)&#xA;sqlite&gt; PRAGMA ignore_check_constraints = TRUE;&#xA;sqlite&gt; SELECT * FROM users;&#xA;+----+------+-----+&#xA;| id | name | age |&#xA;+----+------+-----+&#xA;| 1  | John | 21  |&#xA;| 2  | Bob  | 20  |&#xA;+----+------+-----+&#xA;sqlite&gt; UPDATE users SET age = 17 where id = 2;&#xA;sqlite&gt; SELECT * FROM users;&#xA;+----+------+-----+&#xA;| id | name | age |&#xA;+----+------+-----+&#xA;| 1  | John | 21  |&#xA;| 2  | Bob  | 17  |&#xA;+----+------+-----+&#xA;sqlite&gt;&#xA;```&#xA;&#xA;Similarly, inserts will be allowed if the constraint is disabled.&#xA;&#xA;```sql&#xA;INSERT INTO users (name, age) VALUES (&#39;Harry&#39;, 14);&#xA;```&#xA;&#xA;```&#xA;sqlite&gt; INSERT INTO users (name, age) VALUES (&#39;Harry&#39;, 
14);&#xA;sqlite&gt; SELECT * FROM users;&#xA;+----+-------+-----+&#xA;| id | name  | age |&#xA;+----+-------+-----+&#xA;| 1  | John  | 21  |&#xA;| 2  | Bob   | 17  |&#xA;| 3  | Harry | 14  |&#xA;+----+-------+-----+&#xA;sqlite&gt;&#xA;```&#xA;&#xA;## Table level check constraint&#xA;&#xA;The `CHECK` constraint can be applied at the table level too; in fact, it doesn&#39;t really matter whether you add it to a column or to the table, as it is evaluated on insertion or update of the row and is not tied to a column.&#xA;&#xA;Let&#39;s take a better example:&#xA;&#xA;```sql&#xA;CREATE TABLE accounts (&#xA;    id INTEGER PRIMARY KEY,&#xA;    account_type TEXT NOT NULL,&#xA;    balance INTEGER NOT NULL,&#xA;    CHECK (&#xA;        (account_type = &#39;savings&#39; AND balance &gt;= 0)&#xA;        OR (account_type = &#39;loan&#39; AND balance &lt;= 0)&#xA;    )&#xA;);&#xA;```&#xA;&#xA;Here, we have a `CHECK` constraint that checks if the account type is `savings` and the balance is greater than or equal to 0, or if the account type is `loan` and the balance is less than or equal to 0.&#xA;&#xA;So we are combining two columns to form a condition and check whether the row is actually valid to be inserted.&#xA;&#xA;Let&#39;s insert a few rows:&#xA;&#xA;```sql&#xA;INSERT INTO accounts (account_type, balance) VALUES (&#39;savings&#39;, 1000);&#xA;INSERT INTO accounts (account_type, balance) VALUES (&#39;loan&#39;, -1000);&#xA;```&#xA;This readily inserts both the `savings` and the `loan` account, as the constraint passes.&#xA;&#xA;Now, let&#39;s try the opposite and make the constraint check fail.&#xA;&#xA;```sql&#xA;INSERT INTO accounts (account_type, balance) VALUES (&#39;savings&#39;, -1000);&#xA;```&#xA;&#xA;```sql&#xA;INSERT INTO accounts (account_type, balance) VALUES (&#39;loan&#39;, 1000);&#xA;```&#xA;&#xA;Both will fail, as the check condition is no longer TRUE.&#xA;&#xA;&#xA;```&#xA;sqlite&gt; INSERT INTO accounts (account_type, balance) VALUES 
(&#39;savings&#39;, 1000);&#xA;sqlite&gt; INSERT INTO accounts (account_type, balance) VALUES (&#39;loan&#39;, -1000);&#xA;sqlite&gt; INSERT INTO accounts (account_type, balance) VALUES (&#39;loan&#39;, 1000);&#xA;Runtime error: CHECK constraint failed: (account_type = &#39;savings&#39; AND balance &gt;= 0)&#xA;        OR (account_type = &#39;loan&#39; AND balance &lt;= 0) (19)&#xA;sqlite&gt; INSERT INTO accounts (account_type, balance) VALUES (&#39;savings&#39;, -1000);&#xA;Runtime error: CHECK constraint failed: (account_type = &#39;savings&#39; AND balance &gt;= 0)&#xA;        OR (account_type = &#39;loan&#39; AND balance &lt;= 0) (19)&#xA;sqlite&gt;&#xA;```&#xA;As you can see, the `CHECK` condition failed for both cases.&#xA;&#xA;&#xA;## Check constraint values&#xA;&#xA;Note: so far, the condition value was either `TRUE` or `FALSE`. But what about other values, like `NULL` or `0`?&#xA;&#xA;That brings us to the fundamentals of how `CHECK` decides which values count as `TRUE` and `FALSE`.&#xA;&#xA;From the documentation:&#xA;&#xA;&gt; If the result is zero (integer value 0 or real value 0.0), then a constraint violation has occurred.&#xA;&gt; If the CHECK expression evaluates to NULL, or any other non-zero value, it is not a constraint violation&#xA;&#xA;&#xA;So, it&#39;s not really `TRUE` or `FALSE` but:&#xA;- `TRUE`: any non-zero value or `NULL` (a condition that evaluates true is set as `1`)&#xA;- `FALSE`: `0` (a condition that evaluates false is set as `0`)&#xA;&#xA;So, let&#39;s create a table with a bare `CHECK(val)` constraint and try different values for `val`:&#xA;&#xA;```sql&#xA;CREATE TABLE test (&#xA;    val INTEGER,&#xA;    CHECK(val)&#xA;);&#xA;```&#xA;Now, let&#39;s insert some values.&#xA;&#xA;The below query will evaluate to false, hence failing the constraint.&#xA;```sql&#xA;INSERT INTO test(val) VALUES (0);&#xA;```&#xA;&#xA;The value `1` is a non-zero value, hence passing the constraint.&#xA;```sql&#xA;INSERT INTO test(val) VALUES 
(1);&#xA;```&#xA;&#xA;Here the value is `0.000`, which is still zero, hence failing the constraint.&#xA;```sql&#xA;INSERT INTO test(val) VALUES (0.000);&#xA;```&#xA;&#xA;The value here is `0.0001`, which is a non-zero value, hence passing the constraint.&#xA;```sql&#xA;INSERT INTO test(val) VALUES (0.0001);&#xA;```&#xA;&#xA;`FALSE` is an alias for `0`, hence failing the constraint.&#xA;```sql&#xA;INSERT INTO test(val) VALUES (FALSE);&#xA;```&#xA;&#xA;Similarly, `TRUE` is an alias for `1`, hence passing the constraint.&#xA;&#xA;```sql&#xA;INSERT INTO test(val) VALUES (TRUE);&#xA;```&#xA;&#xA;&#xA;The empty string is cast as numeric and evaluates to `0`, hence failing the constraint.&#xA;```sql&#xA;INSERT INTO test(val) VALUES (&#39;&#39;);&#xA;```&#xA;&#xA;Similarly, the string `&#39;a&#39;` is cast as numeric and evaluates to `0`, hence failing the constraint.&#xA;```sql&#xA;INSERT INTO test(val) VALUES (&#39;a&#39;);&#xA;```&#xA;&#xA;Here&#39;s the log, the SQLog :)&#xA;&#xA;```sql&#xA;sqlite&gt; CREATE TABLE test (&#xA;    val INTEGER,&#xA;    CHECK(val)&#xA;);&#xA;sqlite&gt; INSERT INTO test(val) VALUES (0);&#xA;Runtime error: CHECK constraint failed: val (19)&#xA;sqlite&gt; INSERT INTO test(val) VALUES (1);&#xA;sqlite&gt; INSERT INTO test(val) VALUES (0.000);&#xA;Runtime error: CHECK constraint failed: val (19)&#xA;sqlite&gt; INSERT INTO test(val) VALUES (0.0001);&#xA;sqlite&gt; INSERT INTO test(val) VALUES (FALSE);&#xA;Runtime error: CHECK constraint failed: val (19)&#xA;sqlite&gt; INSERT INTO test(val) VALUES (TRUE);&#xA;sqlite&gt; INSERT INTO test(val) VALUES (&#39;&#39;);&#xA;Runtime error: CHECK constraint failed: val (19)&#xA;sqlite&gt; INSERT INTO test(val) VALUES (&#39;a&#39;);&#xA;Runtime error: CHECK constraint failed: val (19)&#xA;sqlite&gt; INSERT INTO test(val) VALUES (124);&#xA;sqlite&gt; SELECT * FROM test;&#xA;+--------+&#xA;|  val   |&#xA;+--------+&#xA;| 1      |&#xA;| 0.0001 |&#xA;| 1      |&#xA;| 124    
|&#xA;+--------+&#xA;sqlite&gt;&#xA;&#xA;sqlite&gt; INSERT INTO test(val) VALUES (NULL);&#xA;sqlite&gt; INSERT INTO test DEFAULT VALUES;&#xA;sqlite&gt; SELECT * FROM test;&#xA;+--------+&#xA;|  val   |&#xA;+--------+&#xA;| 1      |&#xA;| 0.0001 |&#xA;| 1      |&#xA;| 124    |&#xA;|        |&#xA;|        |&#xA;+--------+&#xA;sqlite&gt;&#xA;```&#xA;&#xA;Now, you might be wondering why the heck `&#39;a&#39;` failed? Well, because casting a string to numeric form evaluates to `0`:&#xA;&#xA;```sql&#xA;SELECT CAST(&#39;a&#39; AS NUMERIC);&#xA;```&#xA;&#xA;A weird quirk, but worth noting: a bare truthiness check won&#39;t work with strings; you need to cast or perform better checks for string-related values.&#xA;&#xA;```&#xA;sqlite&gt; INSERT INTO test(val) VALUES (&#39;a&#39;);&#xA;Runtime error: CHECK constraint failed: val (19)&#xA;sqlite&gt; SELECT CAST(&#39;a&#39; AS NUMERIC);&#xA;+----------------------+&#xA;| CAST(&#39;a&#39; AS NUMERIC) |&#xA;+----------------------+&#xA;| 0                    |&#xA;+----------------------+&#xA;sqlite&gt;&#xA;```&#xA;&#xA;So that is it for the basic walkthrough of the `CHECK` constraint.&#xA;&#xA;Some TLDRs:&#xA;1. The `CHECK` clause is a column constraint that allows us to define conditions that are evaluated on the column(s) before inserting/updating the row.&#xA;2. The constraint is effectively table-wide, as it boils down to inserting/updating or not inserting/updating the row.&#xA;3. The `CHECK` constraint is evaluated before the row is inserted/updated.&#xA;4. The value of the condition is evaluated as a `NUMERIC` value, so any value is `TRUE` if it is not `0` (even `NULL` passes), and `FALSE` if it is `0`.&#xA;5. The `CHECK` constraints can be disabled using the `PRAGMA` command `PRAGMA ignore_check_constraints = TRUE;` or `PRAGMA ignore_check_constraints = ON;` (`ON` and `TRUE` both work).&#xA; &#xA;</content>
      <type>sqlog</type>
    </item>
    <item>
      <title>SQLite SQL: GENERATED column constraint</title>
      <link>https://meetgor.com/sqlog/sqlite-generated-column-constraint</link>
      <description>The GENERATED column constraint We saw certain limitations of the column constraint clause, that it can&#39;t generate dynamic values based on other columns or some</description>
      <pubDate>Thu, 11 Sep 2025 00:00:00 UTC</pubDate>
<content>&#xA;&#xA;## The GENERATED column constraint&#xA;&#xA;We saw certain limitations of the `DEFAULT` column constraint clause: it can&#39;t generate dynamic values based on other columns or other dynamic properties. To overcome those kinds of limitations, there is one more specific constraint, the [GENERATED](https://sqlite.org/gencol.html) column constraint.&#xA;&#xA;You can think of it as a `DEFAULT` with more than static values; it is evaluated and computed each time a row is inserted. However, it comes with one catch: you can&#39;t update the generated values directly, you update the columns they depend on, and the value is regenerated **ALWAYS**, whenever an update is made to the relevant columns in that row.&#xA;&#xA;Let&#39;s create a simple table to understand the `GENERATED` column constraint:&#xA;&#xA;```sql&#xA;CREATE TABLE posts (&#xA;    id INTEGER PRIMARY KEY,&#xA;    content TEXT,&#xA;    word_count INTEGER GENERATED ALWAYS AS (length(content) - length(replace(content, &#39; &#39;, &#39;&#39;)) + 1),&#xA;    read_time_minutes INTEGER GENERATED ALWAYS AS (ceil(word_count / 200.0))&#xA;);&#xA;```&#xA;&#xA;The syntax of the `GENERATED` constraint is &#xA;&#xA;```&#xA;column_name &lt;type&gt; GENERATED ALWAYS AS (&lt;expression&gt;)&#xA;```&#xA;&#xA;Nothing fancy: a content field as text, then based on that value, the `word_count` and `read_time_minutes` are always calculated/generated: `word_count` from the value of `content`, and `read_time_minutes` in turn from the `word_count` value. 
So, a change in `content` will trigger an update of the `word_count`, which will in turn update the `read_time_minutes`, like a chain reaction of sorts.&#xA;&#xA;Let&#39;s insert a sample post. Note that we can&#39;t insert a value into a generated column.&#xA;&#xA;```sql&#xA;INSERT INTO posts(content) VALUES(&#39;A sample post&#39;);&#xA;```&#xA;&#xA;This will create a record with the `id` as 1 and the `content` as `A sample post`, and the `word_count` will be generated/calculated as `(length(content) - length(replace(content, &#39; &#39;, &#39;&#39;)) + 1)`, i.e. the number of spaces + 1. The `read_time_minutes` is generated from the `word_count` divided by 200, which is roughly the average number of words a human reads in a minute (let&#39;s not go into the nitty-gritty), but you get the point. These are just mathematical values, dependent on the values of the other columns in the same record/row.&#xA;&#xA;&#xA;```sql&#xA;SELECT * FROM posts;&#xA;```&#xA;&#xA;As you can see here, the value of `word_count` is `3` and the value of `read_time_minutes` is `1`.&#xA;&#xA;Let&#39;s insert a few more values and check how these generated values behave.&#xA;&#xA;&#xA;Let&#39;s insert a bit longer text.&#xA;&#xA;```sql&#xA;INSERT INTO posts(content) VALUES(&#39;A SQLITE Post for the generated column constraint.&#39;);&#xA;```&#xA;&#xA;```sql&#xA;SELECT * FROM posts;&#xA;```&#xA;&#xA;As you can see, the value of the `word_count` is `8` and the value of `read_time_minutes` is `1`.&#xA;&#xA;Let&#39;s update the `content` and check if the `word_count` and the `read_time_minutes` are updated.&#xA;&#xA;```sql&#xA;UPDATE posts SET content = content || &#39;New content&#xA;some more content&#39; where id = 2;&#xA;&#xA;SELECT * FROM posts;&#xA;```&#xA;&#xA;As you can see, the value of the `word_count` is `11` and the value of `read_time_minutes` is still `1`. 
That is because, until the post crosses `200` words, the ceiling rounds the read time up to 1.&#xA;&#xA;Let&#39;s add a record with more than `200` words, and check if the `read_time_minutes` is updated.&#xA;&#xA;```sql&#xA;INSERT INTO posts(content) VALUES(&#39;New post&#39;);&#xA;SELECT * FROM posts;&#xA;```&#xA;&#xA;The current value of the `word_count` is `2` and the `read_time_minutes` is `1`; let&#39;s update it by repeatedly appending the text `Adding some extra words here.`.&#xA;&#xA;```sql&#xA;UPDATE posts &#xA;    SET content = content || replace(&#xA;        printf(&#39;%200c&#39;, &#39; &#39;),&#xA;        &#39; &#39;,&#xA;        &#39; Adding some extra words here.&#39;&#xA;    )&#xA;WHERE id = 3;&#xA;```&#xA;&#xA;Ok, this might be too much, but let&#39;s break it down:&#xA;&#xA;- First, we are updating the `content` by appending text to the end of its existing value.&#xA;- In the `replace` function, the parameters are the original string, the value to replace, and the value to replace it with.&#xA;- The original string here is `printf(&#39;%200c&#39;, &#39; &#39;)`, which right-justifies a single space character within a field of 200 characters, effectively producing a string of 200 spaces.&#xA;- The second parameter is the character to replace, in this case the single white space `&#39; &#39;` character.&#xA;- The third parameter is the text to replace each `&#39; &#39;` space character with, here ` Adding some extra words here.`.&#xA;- So, essentially, we take `200` white spaces, turn each one into ` Adding some extra words here.`, and append the result to the existing value of `content`.&#xA;&#xA;Hence we will have around `200 * 5` i.e. 1000 words, enough to check that the value of the `read_time_minutes` is updated.&#xA;&#xA;```sql&#xA;SELECT * FROM posts;&#xA;```&#xA;&#xA;As you can see, the value of the `word_count` is `1002` and the value of `read_time_minutes` is `6`. That is perfect. 
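&#xA;&#xA;The `1002` and `6` can be cross-checked outside SQL; here is a small sketch mirroring the same formulas in Python (names are illustrative only):&#xA;
```python
import math

# Rebuild the same string the UPDATE produced: 'New post' plus
# 200 copies of the appended suffix (each suffix carries 5 words).
content = "New post" + " Adding some extra words here." * 200
word_count = content.count(" ") + 1      # spaces + 1, as in the generated column
read_time = math.ceil(word_count / 200)  # same ceiling as read_time_minutes
print(word_count, read_time)  # 1002 6
```
Same `1002` words and `6` minutes as the query output.&#xA;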
And it is getting updated for each updation to the `content` column.&#xA;&#xA;Here&#39;s a step by step output log: Its a SQLog :)&#xA;&#xA;```&#xA;sqlite&gt; CREATE TABLE posts (&#xA;   id INTEGER PRIMARY KEY,&#xA;   content TEXT,&#xA;   word_count INTEGER GENERATED ALWAYS AS (length(content) - length(replace(content, &#39; &#39;, &#39;&#39;)) + 1),&#xA;   read_time_minutes INTEGER GENERATED ALWAYS AS (ceil(word_count / 200.0))&#xA;);&#xA;sqlite&gt; .mode table&#xA;&#xA;sqlite&gt; INSERT INTO posts(content) VALUES(&#39;A sample post&#39;);&#xA;&#xA;sqlite&gt; SELECT * FROM posts;&#xA;+----+---------------+------------+-------------------+&#xA;| id |    content    | word_count | read_time_minutes |&#xA;+----+---------------+------------+-------------------+&#xA;| 1  | A sample post | 3          | 1                 |&#xA;+----+---------------+------------+-------------------+&#xA;sqlite&gt; INSERT INTO posts(content) VALUES(&#39;A SQLITE Post for the generated column constraint.&#39;);&#xA;&#xA;sqlite&gt; SELECT * FROM posts;&#xA;+----+----------------------------------------------------+------------+-------------------+&#xA;| id |                      content                       | word_count | read_time_minutes |&#xA;+----+----------------------------------------------------+------------+-------------------+&#xA;| 1  | A sample post                                      | 3          | 1                 |&#xA;| 2  | A SQLITE Post for the generated column constraint. 
| 8          | 1                 |&#xA;+----+----------------------------------------------------+------------+-------------------+&#xA;&#xA;sqlite&gt; UPDATE posts SET content = content || &#39;New content&#xA;&#39;  ...&gt; some more content&#39; where id = 2;&#xA;&#xA;sqlite&gt; SELECT * FROM posts;&#xA;+----+--------------------------------------------------------------+------------+-------------------+&#xA;| id |                           content                            | word_count | read_time_minutes |&#xA;+----+--------------------------------------------------------------+------------+-------------------+&#xA;| 1  | A sample post                                                | 3          | 1                 |&#xA;+----+--------------------------------------------------------------+------------+-------------------+&#xA;| 2  | A SQLITE Post for the generated column constraint.New conten | 11         | 1                 |&#xA;|    | t                                                            |            |                   |&#xA;|    | some more content                                            |            |                   |&#xA;+----+--------------------------------------------------------------+------------+-------------------+&#xA;&#xA;sqlite&gt; INSERT INTO posts(content) VALUES(&#39;New post&#39;);&#xA;&#xA;sqlite&gt; SELECT * FROM posts;&#xA;+----+--------------------------------------------------------------+------------+-------------------+&#xA;| id |                           content                            | word_count | read_time_minutes |&#xA;+----+--------------------------------------------------------------+------------+-------------------+&#xA;| 1  | A sample post                                                | 3          | 1                 |&#xA;+----+--------------------------------------------------------------+------------+-------------------+&#xA;| 2  | A SQLITE Post for the generated column constraint.New conten | 11      
   | 1                 |&#xA;|    | t                                                            |            |                   |&#xA;|    | some more content                                            |            |                   |&#xA;+----+--------------------------------------------------------------+------------+-------------------+&#xA;| 3  | New post                                                     | 2          | 1                 |&#xA;+----+--------------------------------------------------------------+------------+-------------------+&#xA;&#xA;sqlite&gt; UPDATE posts SET content = content || replace(printf(&#39;%200c&#39;, &#39; &#39;), &#39; &#39;, &#39; Adding some extra words here.&#39;) WHERE id = 3;&#xA;&#xA;sqlite&gt; SELECT * FROM posts;&#xA;+----+--------------------------------------------------------------+------------+-------------------+&#xA;| id |                           content                            | word_count | read_time_minutes |&#xA;+----+--------------------------------------------------------------+------------+-------------------+&#xA;| 1  | A sample post                                                | 3          | 1                 |&#xA;+----+--------------------------------------------------------------+------------+-------------------+&#xA;| 2  | A SQLITE Post for the generated column constraint.New conten | 11         | 1                 |&#xA;|    | t                                                            |            |                   |&#xA;|    | some more content                                            |            |                   |&#xA;+----+--------------------------------------------------------------+------------+-------------------+&#xA;| 3  | New post Adding some extra words here. Adding some extra wor | 1002       | 6                 |&#xA;|    | ds here. Adding some extra words here. Adding some extra wor |            |                   |&#xA;|    | ds here. 
Adding some extra words here. Adding some extra wor |            |                   |&#xA;|    |..............................................................|            |                   |&#xA;|    | ds here. Adding some extra words here. Adding some extra wor |            |                   |&#xA;|    | ds here.                                                     |            |                   |&#xA;+----+--------------------------------------------------------------+------------+-------------------+&#xA;&#xA;```&#xA;&#xA;## Generated columns cannot be updated manually&#xA;&#xA;If you try to update a column that has the `GENERATED ALWAYS` constraint, you will get a `constraint failed` error. As the constraint itself says, it is **GENERATED ALWAYS**, hence not to be updated or inserted manually.&#xA;&#xA;```sql&#xA;UPDATE posts SET word_count = 10 WHERE id = 1;&#xA;```&#xA;&#xA;It will fail with `cannot UPDATE generated column &#34;word_count&#34;` as mentioned.&#xA;&#xA;&#xA;```sql&#xA;UPDATE posts SET read_time_minutes = 10 WHERE id = 1;&#xA;```&#xA;&#xA;Similarly, this will also fail, as it is also a `GENERATED ALWAYS` constrained column.&#xA;&#xA;```&#xA;sqlite&gt; UPDATE posts SET word_count = 10 WHERE id = 1;&#xA;Parse error: cannot UPDATE generated column &#34;word_count&#34;&#xA;&#xA;sqlite&gt; UPDATE posts SET read_time_minutes = 10 WHERE id = 1;&#xA;Parse error: cannot UPDATE generated column &#34;read_time_minutes&#34;&#xA;```&#xA;&#xA;## STORED VS VIRTUAL&#xA;&#xA;Now, generated columns can be `STORED`, as in written to the database file on disk, or `VIRTUAL`, as in computed (generated/calculated) on the fly each time they are read. 
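&#xA;&#xA;Both flavours use the same expression and produce the same value; only where the value lives differs. A minimal sketch using Python&#39;s built-in `sqlite3` module (assuming the underlying SQLite build is 3.31 or newer, which introduced generated columns):&#xA;
```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE t (
        content    TEXT,
        wc_virtual INTEGER GENERATED ALWAYS AS
            (length(content) - length(replace(content, ' ', '')) + 1) VIRTUAL,
        wc_stored  INTEGER GENERATED ALWAYS AS
            (length(content) - length(replace(content, ' ', '')) + 1) STORED
    )
""")
con.execute("INSERT INTO t(content) VALUES ('a b c')")
print(con.execute("SELECT wc_virtual, wc_stored FROM t").fetchone())  # (3, 3)
```
If the build is older than 3.31, the `CREATE TABLE` itself fails, which is a quick way to check support.&#xA;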
There are trade-offs, and I will definitely drop the meme here: `IT ACTUALLY DEPENDS`!&#xA;&#xA;The `STORED` column takes up space, but queries are quick, since the value doesn&#39;t need to be calculated on each read; it is only recomputed on each insert or update (that can&#39;t be avoided).&#xA;The `VIRTUAL` column takes up no space, but queries are a little slower, depending on how much data there is to compute: since the column values are not stored, they have to be computed on every read.&#xA;&#xA;&gt; The default is `VIRTUAL` if not specified.&#xA;&#xA;What `STORED` and `VIRTUAL` have in common is the expression and the ALWAYS semantics; the only difference is that `STORED` computes and writes the value whenever the columns it depends on change, while `VIRTUAL` calculates it for each query made against it.&#xA;&#xA;We&#39;ll see how that plays out, i.e. that the value is generated on each read in the case of the `VIRTUAL` flavour of the `GENERATED` constraint.&#xA;&#xA;I&#39;ll show the most basic example, continuing with the same table, but with the `VIRTUAL` clause on the `GENERATED ALWAYS` constraint.&#xA;&#xA;### Virtual generated column&#xA;&#xA;By default, it is `VIRTUAL`, so it doesn&#39;t matter if you add the keyword or not (let&#39;s add it to make it clear).&#xA;&#xA;I&#39;ll specify the `word_count` as a `VIRTUAL` column with the `GENERATED ALWAYS` constraint, with the same formula to get the word count, and at the end specify the `VIRTUAL` keyword (it&#39;s optional).&#xA;&#xA;&gt; Let&#39;s skip the `read_time_minutes` for now; you can add it, it won&#39;t make any difference (it was a little extreme as an example, needing 200 words to budge its value).&#xA;&#xA;```sql&#xA;CREATE TABLE posts_virtual (&#xA;   id INTEGER PRIMARY KEY,&#xA;   content TEXT,&#xA;   word_count INTEGER GENERATED ALWAYS AS (&#xA;     length(content) - length(replace(content, &#39; &#39;, &#39;&#39;)) + 1&#xA;   ) VIRTUAL&#xA;);&#xA;```&#xA;Now, we&#39;ll add some data to the table and check 
the value of the `word_count` column.&#xA;&#xA;```sql&#xA;INSERT INTO posts_virtual(content) VALUES(&#39;A sample post&#39;);&#xA;INSERT INTO posts_virtual(content) VALUES(&#39;A SQLITE Post for the virtual generated column constraint.&#39;);&#xA;SELECT * FROM posts_virtual;&#xA;```&#xA;&#xA;This we already know: the plain old `word_count` is `3` and `9` for the two posts respectively.&#xA;&#xA;But let&#39;s see how the `word_count` column is actually stored, and how slow or fast it actually is.&#xA;&#xA;```sql&#xA;PRAGMA table_info(posts_virtual);&#xA;```&#xA;&#xA;Huh? There is no `word_count` column here; it is not stored, it is a `VIRTUAL` column.&#xA;&#xA;```&#xA;sqlite&gt; PRAGMA table_info(posts_virtual);&#xA;+-----+---------+---------+---------+------------+----+&#xA;| cid |  name   |  type   | notnull | dflt_value | pk |&#xA;+-----+---------+---------+---------+------------+----+&#xA;| 0   | id      | INTEGER | 0       |            | 1  |&#xA;| 1   | content | TEXT    | 0       |            | 0  |&#xA;+-----+---------+---------+---------+------------+----+&#xA;sqlite&gt;&#xA;```&#xA;&#xA;What about extra info?&#xA;&#xA;```sql&#xA;PRAGMA table_xinfo(posts_virtual);&#xA;```&#xA;Yes, that is indeed there, and it&#39;s hidden? 
Nope, it&#39;s `generated`, a special type. In the `hidden` column of the output:&#xA;- `0` means not hidden&#xA;- `1` means hidden&#xA;- `2` means a generated (`VIRTUAL`) column&#xA;&#xA;```&#xA;sqlite&gt; PRAGMA table_xinfo(posts_virtual);&#xA;+-----+------------+---------+---------+------------+----+--------+&#xA;| cid |    name    |  type   | notnull | dflt_value | pk | hidden |&#xA;+-----+------------+---------+---------+------------+----+--------+&#xA;| 0   | id         | INTEGER | 0       |            | 1  | 0      |&#xA;| 1   | content    | TEXT    | 0       |            | 0  | 0      |&#xA;| 2   | word_count | INTEGER | 0       |            | 0  | 2      |&#xA;+-----+------------+---------+---------+------------+----+--------+&#xA;sqlite&gt;&#xA;```&#xA;&#xA;It&#39;s time to benchmark it then; let&#39;s bulk insert 100000 posts.&#xA;&#xA;&#xA;```sql&#xA;WITH RECURSIVE cnt(x) AS (&#xA;  SELECT 1&#xA;  UNION ALL&#xA;  SELECT x+1 FROM cnt WHERE x &lt; 100000&#xA;)&#xA;INSERT INTO posts_virtual(content)&#xA;SELECT &#39;This is a sample post number &#39; || x || &#39; with some words repeated multiple times.&#39;&#xA;FROM cnt;&#xA;```&#xA;&#xA;If we check now, the `posts_virtual` table has over 100000 rows.&#xA;&#xA;```sql&#xA;SELECT COUNT(*) FROM posts_virtual;&#xA;```&#xA;&#xA;We already had `2` records, and we added `100000` more, so the total is `100002`.&#xA;&#xA;Let&#39;s check the sum of the `word_count` to be sure the inserts went in right.&#xA;&#xA;```sql&#xA;SELECT SUM(word_count) FROM posts_virtual;&#xA;```&#xA;&#xA;Ok, that shows the sum of the `word_count` is `1300012`.&#xA;&#xA;Let&#39;s check with the timer on and evaluate the query.&#xA;&#xA;```&#xA;sqlite&gt; WITH RECURSIVE cnt(x) AS (&#xA;  SELECT 1&#xA;  UNION ALL&#xA;  SELECT x+1 FROM cnt WHERE x &lt; 100000&#xA;)&#xA;INSERT INTO posts_virtual(content)&#xA;SELECT &#39;This is a sample post number &#39; || x || &#39; with some words repeated multiple times.&#39;&#xA;FROM cnt;&#xA;&#xA;sqlite&gt; PRAGMA 
table_xinfo(posts_virtual);&#xA;+-----+------------+---------+---------+------------+----+--------+&#xA;| cid |    name    |  type   | notnull | dflt_value | pk | hidden |&#xA;+-----+------------+---------+---------+------------+----+--------+&#xA;| 0   | id         | INTEGER | 0       |            | 1  | 0      |&#xA;| 1   | content    | TEXT    | 0       |            | 0  | 0      |&#xA;| 2   | word_count | INTEGER | 0       |            | 0  | 2      |&#xA;+-----+------------+---------+---------+------------+----+--------+&#xA;&#xA;sqlite&gt; .timer on&#xA;&#xA;sqlite&gt; SELECT COUNT(*) FROM posts_virtual;&#xA;+----------+&#xA;| COUNT(*) |&#xA;+----------+&#xA;| 100002   |&#xA;+----------+&#xA;Run Time: real 0.001 user 0.000352 sys 0.000103&#xA;&#xA;sqlite&gt; SELECT SUM(word_count) FROM posts_virtual;&#xA;+-----------------+&#xA;| SUM(word_count) |&#xA;+-----------------+&#xA;| 1300012         |&#xA;+-----------------+&#xA;Run Time: real 0.041 user 0.040682 sys 0.000418&#xA;&#xA;sqlite&gt; PRAGMA page_count;&#xA;+------------+&#xA;| page_count |&#xA;+------------+&#xA;| 2086       |&#xA;+------------+&#xA;Run Time: real 0.000 user 0.000000 sys 0.000078&#xA;&#xA;sqlite&gt; PRAGMA page_size;&#xA;+-----------+&#xA;| page_size |&#xA;+-----------+&#xA;| 4096      |&#xA;+-----------+&#xA;Run Time: real 0.000 user 0.000025 sys 0.000006&#xA;&#xA;sqlite&gt; UPDATE posts_virtual SET content = content || &#39; extra text&#39; WHERE id % 1000 = 0;&#xA;Run Time: real 0.010 user 0.009353 sys 0.000161&#xA;```&#xA;&#xA;I think we&#39;ll leave it here and move onto creating the `STORED` column as a generated value&#xA;&#xA;### Stored generated column&#xA;&#xA;Similar to the `VIRTUAL` column, we&#39;ll create the `STORED` column with the `GENERATED ALWAYS` constraint.&#xA;&#xA;This will be the same table named as `posts_stored` but with the `word_count` being a `STORED` column rather than a `VIRTUAL` column.&#xA;&#xA;```sql&#xA;CREATE TABLE posts_stored (&#xA;   id INTEGER 
PRIMARY KEY,&#xA;   content TEXT,&#xA;   word_count INTEGER GENERATED ALWAYS AS (&#xA;     length(content) - length(replace(content, &#39; &#39;, &#39;&#39;)) + 1&#xA;   ) STORED&#xA;);&#xA;```&#xA;Now, we&#39;ll add some data to the table.&#xA;&#xA;```sql&#xA;INSERT INTO posts_stored(content) VALUES(&#39;A sample post&#39;);&#xA;INSERT INTO posts_stored(content) VALUES(&#39;A SQLITE Post for the virtual generated column constraint.&#39;);&#xA;SELECT * FROM posts_stored;&#xA;```&#xA;This is cool, working as expected; we have already seen this, so now we need to take a look at how these values are stored.&#xA;&#xA;Let&#39;s take a look at the `table_info` for the `posts_stored` table.&#xA;&#xA;```sql&#xA;PRAGMA table_info(posts_stored);&#xA;```&#xA;It still doesn&#39;t have the `GENERATED` column `word_count`.&#xA;&#xA;Let&#39;s take a look at the `table_xinfo` for the `posts_stored` table.&#xA;&#xA;```sql&#xA;PRAGMA table_xinfo(posts_stored);&#xA;```&#xA;&#xA;Ok, weirdly it has the `GENERATED ALWAYS` column `word_count` as well, but the `hidden` value is `3`.&#xA;&#xA;We knew `2` meant a generated column, but what is `3` for? 
You guessed it right: a stored, generated column.&#xA;&#xA;Now, let&#39;s also populate the `posts_stored` table with some data, bulk inserting it.&#xA;&#xA;Let&#39;s add `100000` rows to it.&#xA;&#xA;```sql&#xA;WITH RECURSIVE cnt(x) AS (&#xA;  SELECT 1&#xA;  UNION ALL&#xA;  SELECT x+1 FROM cnt WHERE x &lt; 100000&#xA;)&#xA;INSERT INTO posts_stored(content)&#xA;SELECT &#39;This is a sample post number &#39; || x || &#39; with some words repeated multiple times.&#39;&#xA;FROM cnt;&#xA;```&#xA;This is the same query, except it inserts into the `posts_stored` table.&#xA;&#xA;Let&#39;s check if it has inserted the right number of rows.&#xA;&#xA;```sql&#xA;SELECT COUNT(*) FROM posts_stored;&#xA;```&#xA;&#xA;Indeed, there are `100002` rows in the `posts_stored` table: the `2` seed records plus the `100000` bulk-inserted ones.&#xA;&#xA;Now, let&#39;s verify that the inserted values are correctly populated.&#xA;&#xA;```sql&#xA;SELECT SUM(word_count) FROM posts_stored;&#xA;```&#xA;&#xA;This gives `1300012` as the sum of the `word_count` from the `posts_stored` table. 
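&#xA;&#xA;That figure also checks out with plain arithmetic; a quick sketch (mirroring the space-count formula) outside SQL:&#xA;
```python
row = "This is a sample post number 1 with some words repeated multiple times."
per_row = row.count(" ") + 1        # 13 words in every bulk-inserted row
total = per_row * 100000 + 3 + 9    # plus the two seed posts with 3 and 9 words
print(total)  # 1300012
```
Thirteen words per generated row, `100000` rows, plus the two seed posts.&#xA;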
Which matches the sum of the `word_count` from the `posts_virtual` table.&#xA;&#xA;```sql&#xA;sqlite&gt; CREATE TABLE posts_stored (&#xA;   id INTEGER PRIMARY KEY,&#xA;   content TEXT,&#xA;   word_count INTEGER GENERATED ALWAYS AS (&#xA;     length(content) - length(replace(content, &#39; &#39;, &#39;&#39;)) + 1&#xA;   ) STORED&#xA;);&#xA;Run Time: real 0.000 user 0.000258 sys 0.000027&#xA;&#xA;sqlite&gt; INSERT INTO posts_stored(content) VALUES(&#39;A sample post&#39;);&#xA;sqlite&gt; INSERT INTO posts_stored(content) VALUES(&#39;A SQLITE Post for the virtual generated column constraint.&#39;);&#xA;&#xA;SELECT * FROM posts_stored;&#xA;+----+------------------------------------------------------------+------------+&#xA;| id |                          content                           | word_count |&#xA;+----+------------------------------------------------------------+------------+&#xA;| 1  | A sample post                                              | 3          |&#xA;| 2  | A SQLITE Post for the virtual generated column constraint. 
| 9          |&#xA;+----+------------------------------------------------------------+------------+&#xA;Run Time: real 0.000 user 0.000224 sys 0.000000&#xA;&#xA;sqlite&gt; PRAGMA table_info(posts_stored);&#xA;+-----+---------+---------+---------+------------+----+&#xA;| cid |  name   |  type   | notnull | dflt_value | pk |&#xA;+-----+---------+---------+---------+------------+----+&#xA;| 0   | id      | INTEGER | 0       |            | 1  |&#xA;| 1   | content | TEXT    | 0       |            | 0  |&#xA;+-----+---------+---------+---------+------------+----+&#xA;&#xA;sqlite&gt; PRAGMA table_xinfo(posts_stored);&#xA;+-----+------------+---------+---------+------------+----+--------+&#xA;| cid |    name    |  type   | notnull | dflt_value | pk | hidden |&#xA;+-----+------------+---------+---------+------------+----+--------+&#xA;| 0   | id         | INTEGER | 0       |            | 1  | 0      |&#xA;| 1   | content    | TEXT    | 0       |            | 0  | 0      |&#xA;| 2   | word_count | INTEGER | 0       |            | 0  | 3      |&#xA;+-----+------------+---------+---------+------------+----+--------+&#xA;Run Time: real 0.000 user 0.000013 sys 0.000322&#xA;&#xA;sqlite&gt; SELECT COUNT(*) FROM posts_stored;&#xA;+----------+&#xA;| COUNT(*) |&#xA;+----------+&#xA;| 2        |&#xA;+----------+&#xA;Run Time: real 0.000 user 0.000108 sys 0.000012&#xA;&#xA;sqlite&gt; WITH RECURSIVE cnt(x) AS (&#xA;  SELECT 1&#xA;  UNION ALL&#xA;  SELECT x+1 FROM cnt WHERE x &lt; 100000&#xA;)&#xA;INSERT INTO posts_stored(content)&#xA;SELECT &#39;This is a sample post number &#39; || x || &#39; with some words repeated multiple times.&#39;&#xA;FROM cnt;&#xA;Run Time: real 0.071 user 0.067858 sys 0.002929&#xA;&#xA;sqlite&gt; SELECT COUNT(*) FROM posts_stored;&#xA;+----------+&#xA;| COUNT(*) |&#xA;+----------+&#xA;| 100002   |&#xA;+----------+&#xA;Run Time: real 0.000 user 0.000367 sys 0.000000&#xA;&#xA;sqlite&gt; SELECT SUM(word_count) FROM posts_stored;&#xA;+-----------------+&#xA;| 
SUM(word_count) |&#xA;+-----------------+&#xA;| 1300012         |&#xA;+-----------------+&#xA;Run Time: real 0.008 user 0.006492 sys 0.000941&#xA;```&#xA;&#xA;This is all the setup for the comparison of the two tables.&#xA;&#xA;### The difference&#xA;&#xA;Now, let&#39;s check the difference between the `STORED` and `VIRTUAL` column.&#xA;&#xA;Let&#39;s sum the `word_count` from each of the tables `posts_stored` and `posts_virtual`.&#xA;&#xA;First, let&#39;s turn the timer on, which will give a breakdown of the time for each query run.&#xA;&#xA;```&#xA;.timer on&#xA;```&#xA;The timer gives three durations: real, user and system.&#xA;The breakdown is like this:&#xA;- real time: the full wall-clock time from the start to the end of the query&#xA;- user time: the CPU time spent in user space, i.e. on the sqlite side: computations on the obtained data, expression evaluation, or preprocessing the data before inserting&#xA;- system time: the CPU time spent in kernel space, i.e. I/O operations on the host side; it could be reading or writing the actual db file, etc.&#xA;&#xA;And then run the query for the `posts_stored` table.&#xA;&#xA;```sql&#xA;SELECT SUM(word_count) FROM posts_stored;&#xA;```&#xA;&#xA;This gives the result in around `~0.008` seconds, which might be approximately `0.01` in certain runs.&#xA;We can even get the breakdown of the time for the query:&#xA;- full query execution time: `0.008`&#xA;- user time: `0.006492`&#xA;- system time: `0.000941`&#xA;&#xA;So here most of the time is user-space CPU, with some system time for reading from the database file.&#xA;&#xA;And then run the query for the `posts_virtual` table.&#xA;&#xA;```sql&#xA;SELECT SUM(word_count) FROM posts_virtual;&#xA;```&#xA;&#xA;This gives the result in around `~0.039` seconds, which might be approximately `0.04` in certain runs.&#xA;&#xA;We can even get the breakdown of the time for the query:&#xA;- full query execution time: `0.039`&#xA;- user time: 
`0.039366`&#xA;- system time: `0.000000`; this can sometimes go up to `0.000652`, almost negligible, but when it happens the full query time will increase a bit.&#xA;&#xA;So, it is spending quite a lot of CPU time in user space, right?&#xA;And almost zero time at the kernel level (disk operations) reading from the database file.&#xA;&#xA;That makes sense: this is a virtual column, so the values are not stored; almost nothing extra needs to be read from the database file, but it costs CPU to compute the expression and sum it for the `100002` rows. Clever, right?&#xA;&#xA;This is the trade-off:&#xA;&#xA;- Time spent on the CPU for the computation vs time spent on disk operations to read the data.&#xA;- These can differ depending on the query.&#xA;&#xA;```&#xA;sqlite&gt; SELECT SUM(word_count) FROM posts_stored;&#xA;+-----------------+&#xA;| SUM(word_count) |&#xA;+-----------------+&#xA;| 1300012         |&#xA;+-----------------+&#xA;Run Time: real 0.008 user 0.006492 sys 0.000941&#xA;&#xA;sqlite&gt; SELECT SUM(word_count) FROM posts_virtual;&#xA;+-----------------+&#xA;| SUM(word_count) |&#xA;+-----------------+&#xA;| 1300012         |&#xA;+-----------------+&#xA;Run Time: real 0.039 user 0.039366 sys 0.000000&#xA;```&#xA;&#xA;You can clearly see the difference: the `STORED` column is almost `4` to `5` times faster here than the `VIRTUAL` column.&#xA;&#xA;### Key observations&#xA;&#xA;1. If you have a lot of processing to do in the generated column, it is better to go with a `STORED` column.&#xA;2. If the computation is not necessarily complex and you want to save storage, a `VIRTUAL` column will do fine.&#xA;3. 
It really depends on the data too, and we cannot say one is better than the other without knowing the structure of the data and the design of the queries.&#xA;&#xA;&#xA;There are a lot of details beyond this as well; they would deserve specific guides on how to do certain things and on busting certain myths from the documentation.&#xA;</content>
      <type>sqlog</type>
    </item>
    <item>
      <title>SQLite SQL: DEFAULT column constraint</title>
      <link>https://meetgor.com/sqlog/sqlite-default-column-constraint</link>
      <description>DEFAULT Constraint The constraint in SQLite is used to specify a default value for a column, if not provided, then the column will be populated with the value a</description>
      <pubDate>Wed, 10 Sep 2025 00:00:00 UTC</pubDate>
      <content>&#xA;&#xA;## DEFAULT Constraint&#xA;&#xA;The `DEFAULT` constraint in SQLite is used to specify a default value for a column; if no default is declared, the column will be populated with `NULL` when it is omitted. &#xA;&#xA;According to the docs:&#xA;&#xA;&gt; The DEFAULT clause specifies a default value to use for the column if no value is explicitly provided by the user when doing an INSERT. If there is no explicit DEFAULT clause attached to a column definition, then the default value of the column is NULL. &#xA;&#xA;NOTE: This constraint doesn&#39;t guard against a `NULL` value; it only fills in a value when the column is omitted on insert (new record creation). You can still insert a `NULL` explicitly, or update the column later with a `NULL` value. It is only used to override the default `NULL` value by providing a different one.&#xA;&#xA;The `DEFAULT` constraint can only take constant values; it can&#39;t deduce or dynamically compute values based on other columns at the time of creation. If you want that kind of default value, the other constraint, [GENERATED ALWAYS](https://sqlite.org/lang_createtable.html#the_generated_always_as_clause), would help.&#xA;&#xA;&gt; Also, DEFAULT might not strictly be a constraint; it&#39;s just a clause, or a column constraint clause.&#xA;&#xA;## Creating a Table with a DEFAULT Constraint&#xA;&#xA;The general syntax for creating a default constraint is basically the `DEFAULT` keyword, followed by the actual literal value.&#xA;&#xA;In the below example, let&#39;s create a dummy users table, with `country_code` and `followers` as columns, and their default values as `US` and `0` respectively. Note that the value can be wrapped in `()`, but since it&#39;s a plain literal and not an expression we can leave them out. 
However, writing the value in `()` arguably makes it more readable.&#xA;&#xA;```sql&#xA;CREATE TABLE users (&#xA;    username TEXT NOT NULL UNIQUE,&#xA;    country_code TEXT DEFAULT &#39;US&#39;,&#xA;    followers INTEGER DEFAULT 0&#xA;);&#xA;```&#xA;&#xA;Once we have a table with columns having default values, we can avoid inserting those values each time, as the defaults will be populated.&#xA;&#xA;```sql&#xA;INSERT INTO users(username) VALUES(&#39;alice&#39;);&#xA;```&#xA;&#xA;Now, we have inserted a row with the `username` as `alice`; since the `country_code` and `followers` are not specified, they will be populated with the default values `US` and `0` respectively.&#xA;&#xA;```sql&#xA;SELECT * FROM users;&#xA;```&#xA;&#xA;It doesn&#39;t mean that you can&#39;t provide other values; the defaults are only used when you don&#39;t provide a value while inserting the row. In other words, the problem of seeing empty (`NULL`) values when you don&#39;t specify one is gone, unless you later explicitly update the record with `NULL` or empty values.&#xA;&#xA;```sql&#xA;INSERT INTO users(username, country_code) VALUES(&#39;bob&#39;, &#39;UK&#39;);&#xA;INSERT INTO users(username, country_code, followers) VALUES(&#39;ronaldo&#39;, &#39;PT&#39;, 100000);&#xA;```&#xA;&#xA;In the above queries, we set the `country_code` or the `followers` when we want a specific value in those columns. &#xA;&#xA;```sql&#xA;SELECT * FROM users;&#xA;```&#xA;As you can see, it populates the values if provided, else uses the default values.&#xA;&#xA;&#xA;## Inserting default values &#xA;&#xA;If you want to create a record with all default values, you can specify `DEFAULT VALUES` as the clause in the `INSERT` statement.&#xA;&#xA;For this to work, we need to have a default value for every column. 
If not specified, a column&#39;s default will be `NULL`, but if there are columns with `NOT NULL` or another constraint requiring some value from the user, then we won&#39;t be able to insert the record without providing at least that value.&#xA;&#xA;```sql&#xA;INSERT INTO users DEFAULT VALUES;&#xA;```&#xA;The above statement will fail because we don&#39;t have any default value for the `username` column and it has a `NOT NULL` constraint; as discussed in the [NOT NULL Constraint blog post](https://www.meetgor.com/sqlog/sqlite-not-null-column-constraint/), we need to provide a value for that column in order to create a new record.&#xA;&#xA;So, `DEFAULT VALUES` is only suited for tables where every column has some default or generated value.&#xA;&#xA;Like the example below:&#xA;&#xA;```sql&#xA;CREATE TABLE documents (&#xA;    id INTEGER PRIMARY KEY,&#xA;    created_at DATETIME DEFAULT CURRENT_TIMESTAMP,&#xA;    title TEXT DEFAULT &#39;Untitled&#39;,&#xA;    content TEXT DEFAULT &#39;Here goes the content&#39;&#xA;);&#xA;```&#xA;&#xA;In the above table, all the columns have one or another default value; the `INTEGER PRIMARY KEY` is an alias for the underlying `rowid` column (if not declared otherwise or as a composite key). 
The default value for `created_at` is `CURRENT_TIMESTAMP`, which we will look at in a second; the `title` and `content` columns have default values of `Untitled` and `Here goes the content` respectively.&#xA;&#xA;```sql&#xA;INSERT INTO documents DEFAULT VALUES;&#xA;```&#xA;&#xA;Now, we can insert with `DEFAULT VALUES` without specifying any values or columns in the statement.&#xA;&#xA;```sql&#xA;SELECT * FROM documents;&#xA;```&#xA;&#xA;As you can see, now we can insert multiple rows with default values, and the unique id will keep the records in the `documents` table unique.&#xA;&#xA;## Defaults for Date and Time related columns&#xA;&#xA;There are special constant expressions that can be used as default values: &#xA;&#xA;- `CURRENT_TIME`  value in the format `HH:MM:SS`&#xA;- `CURRENT_DATE` value in the format `YYYY-MM-DD`&#xA;- `CURRENT_TIMESTAMP` value in the format `YYYY-MM-DD HH:MM:SS`&#xA;&#xA;These can be used as defaults for date- and time-related columns, which in SQLite are usually stored as text.&#xA;&#xA;```sql&#xA;CREATE TABLE documents (&#xA;    id INTEGER PRIMARY KEY,&#xA;    created_at DATETIME DEFAULT CURRENT_TIMESTAMP,&#xA;    updated_at DATETIME DEFAULT CURRENT_TIMESTAMP,&#xA;    title TEXT DEFAULT &#39;Untitled&#39;,&#xA;    content TEXT DEFAULT &#39;Here goes the content&#39;&#xA;);&#xA;```&#xA;&#xA;The various values can be used here; for instance, `created_at` and `updated_at` can either be `CURRENT_TIMESTAMP` or just `CURRENT_DATE` depending on the use case.&#xA;&#xA;```sql&#xA;INSERT INTO documents DEFAULT VALUES;&#xA;```&#xA;&#xA;Here, it will insert a record with the `created_at` and `updated_at` as the current date and time values.&#xA;&#xA;```sql&#xA;SELECT * FROM documents;&#xA;```&#xA;&#xA;```&#xA;sqlite&gt; SELECT * FROM documents;&#xA;+----+---------------------+---------------------+----------+-----------------------+&#xA;| id |     created_at      |     updated_at      |  title   |    
    content        |&#xA;+----+---------------------+---------------------+----------+-----------------------+&#xA;| 1  | 2025-09-10 17:04:43 | 2025-09-10 17:04:43 | Untitled | Here goes the content |&#xA;+----+---------------------+---------------------+----------+-----------------------+&#xA;sqlite&gt;&#xA;```&#xA;&#xA;As you can see, the record was created with the current date and time populated in both the `created_at` and `updated_at` columns.&#xA;&#xA;That&#39;s the basics of the `DEFAULT` constraint: you can use any expression as long as it produces a constant value for each of the records.&#xA; &#xA;</content>
      <type>sqlog</type>
    </item>
    <item>
      <title>SQLite SQL: UNIQUE column constraint</title>
      <link>https://meetgor.com/sqlog/sqlite-unique-column-constraint</link>
      <description>Unique Constraint The constraint ensures that a column in a table contains no duplicate values. Like, if you add an constraint to a column, then no two rows can</description>
      <pubDate>Tue, 09 Sep 2025 00:00:00 UTC</pubDate>
      <content>&#xA;&#xA;## Unique Constraint&#xA;&#xA;The `UNIQUE` constraint ensures that a column in a table contains no duplicate values. Like, if you add an `UNIQUE` constraint to a column, then no two rows can have the exact/same value for that column.&#xA;&#xA;The documentation is more nuanced.&#xA;&#xA;&gt; A `UNIQUE` constraint is similar to a `PRIMARY KEY` constraint, **except** that a single table may have any number of `UNIQUE` constraints. For each `UNIQUE` constraint on the table, each row must contain a unique combination of values in the columns identified by the `UNIQUE` constraint. For the purposes of `UNIQUE` constraints, `NULL` values are considered distinct from all other values, including other NULLs. As with `PRIMARY KEY`s, a `UNIQUE` table-constraint clause must contain only column names — the use of expressions in an indexed-column of a `UNIQUE` table-constraint is not supported.&#xA;&#xA;There are three points to note, I&#39;ll simplify the points:&#xA;1. If you add an `UNIQUE` constraint to a column, then no two rows can have the exact/same value for that column in that table.&#xA;2. The `UNIQUE` constraint considers `NULL` values as distinct i.e. it cannot distinguish between two rows having `NULL` values, hence two rows will pass the `UNIQUE` constraint. (Better to use `UNIQUE` and `NOT NULL`)&#xA;3. The `UNIQUE` constraint is tied to one or more columns, so you cannot add custom expressions to the `UNIQUE` constraint. 
(Better to use a `UNIQUE INDEX` to create custom expression-based uniqueness or an index for querying data.)&#xA;&#xA;&#xA;## Adding a UNIQUE constraint&#xA;&#xA;To add a unique constraint to a column, simply add the `UNIQUE` keyword to the column definition.&#xA;&#xA;Let&#39;s keep the table simple with just three fields: `name`, `email`, and `age`.&#xA;&#xA;```sql&#xA;CREATE TABLE users (&#xA;    name TEXT,&#xA;    email TEXT UNIQUE,&#xA;    age INTEGER CHECK (age &gt; 0)&#xA;);&#xA;```&#xA;&#xA;Now, let&#39;s insert a few records:&#xA;&#xA;```sql&#xA;INSERT INTO users (name, email, age) VALUES(&#39;alice&#39;, &#39;alice@wonderland.com&#39;, 12);&#xA;INSERT INTO users (name, email, age) VALUES(&#39;bob&#39;, &#39;bob@wonderland.com&#39;, 13);&#xA;```&#xA;&#xA;Now, let&#39;s try to insert a record with an existing email value:&#xA;&#xA;```sql&#xA;INSERT INTO users (name, email, age) VALUES(&#39;charlie&#39;, &#39;alice@wonderland.com&#39;, 14);&#xA;```&#xA;&#xA;This will immediately fail the `UNIQUE` constraint on the email, and the record will not be inserted.&#xA;&#xA;```&#xA;sqlite&gt; INSERT INTO users (name, email, age) VALUES(&#39;charlie&#39;, &#39;alice@wonderland.com&#39;, 14);&#xA;Runtime error: UNIQUE constraint failed: users.email (19)&#xA;```&#xA;&#xA;Notice that the constraint is only on the email column, not on the name or age columns.&#xA;&#xA;```sql&#xA;INSERT INTO users (name, email, age) VALUES(&#39;alice&#39;, &#39;charlie@wonderland.com&#39;, 12);&#xA;&#xA;```&#xA;&#xA;This will succeed, as the `UNIQUE` constraint is only on the email column. 
Hence, we can conclude that the `UNIQUE` constraint is at the column level when stated next to the column while creating the table.&#xA;&#xA;```sql&#xA;SELECT * FROM users;&#xA;```&#xA;&#xA;```&#xA;+-------+------------------------+-----+&#xA;| name  |         email          | age |&#xA;+-------+------------------------+-----+&#xA;| alice | alice@wonderland.com   | 12  |&#xA;| bob   | bob@wonderland.com     | 13  |&#xA;| alice | charlie@wonderland.com | 12  |&#xA;+-------+------------------------+-----+&#xA;sqlite&gt;&#xA;```&#xA;&#xA;Let&#39;s first understand the problem with `NULL` values in a `UNIQUE` constraint.&#xA;&#xA;## Adding a UNIQUE constraint with NULL values&#xA;&#xA;What if I insert a record with a `NULL` value for the `email` column?&#xA;&#xA;```sql&#xA;INSERT INTO users (name, email, age) VALUES(&#39;charlie&#39;, NULL, 14);&#xA;--OR&#xA;INSERT INTO users (name, age) VALUES(&#39;charlie&#39;, 14);&#xA;```&#xA;&#xA;That went in and created the record, which is fine so far, as there was no previous entry with a `NULL` email in the table, or rather the `email` column.&#xA;&#xA;```sql&#xA;SELECT * FROM users;&#xA;```&#xA;&#xA;However, what happens if we try to insert another record with a `NULL` value for the `email` column?&#xA;&#xA;```sql&#xA;INSERT INTO users (name, email, age) VALUES(&#39;dave&#39;, NULL, 15);&#xA;-- OR&#xA;INSERT INTO users (name, age) VALUES(&#39;dave&#39;, 15);&#xA;```&#xA;&#xA;This goes in as well. Huh? It shouldn&#39;t have, right? If `alice@wonderland.com` is not allowed twice then why should `NULL` be allowed?&#xA;&#xA;That&#39;s what the documentation clearly states:&#xA;&#xA;&gt; For the purposes of `UNIQUE` constraints, `NULL` values are considered distinct from all other values, including other NULLs.&#xA;&#xA;That is crystal clear: SQLite treats each `NULL` in the `email` column as distinct from every other `NULL`. 
Hence, duplicate `NULL` values can end up in the `email` column.&#xA;&#xA;To avoid that, we need to set email as `NOT NULL` as well as `UNIQUE` in order to avoid populating `NULL` values in the `email` column.&#xA;&#xA;&#xA;## Adding UNIQUE Constraint on multiple columns&#xA;&#xA;If you wondered whether we can add a unique constraint on multiple columns: yes, we can, and there are two variations depending on how you interpret it, two or more columns unique independently, or the combination of two or more columns being unique.&#xA;&#xA;1. Unique constraint independent of each other&#xA;2. Combinational Unique constraint&#xA;&#xA;### Unique constraint independent of each other&#xA;&#xA;Let&#39;s say the users table has more than one column that needs to be unique; for example, the `phone_number` for two users cannot be the same, and so on.&#xA;&#xA;```sql&#xA;DROP TABLE users;&#xA;CREATE TABLE users (&#xA;    name TEXT,&#xA;    email TEXT UNIQUE,&#xA;    phone_number TEXT UNIQUE,&#xA;    government_id TEXT UNIQUE,&#xA;    age INTEGER CHECK (age &gt; 0),&#xA;    bio TEXT&#xA;);&#xA;```&#xA;In the above table schema, we have three unique constraints: `email`, `phone_number`, and `government_id`. Each unique constraint is independent of the others. 
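
Under the hood, SQLite backs each `UNIQUE` constraint with an automatic index; you can list them with a pragma (a quick sketch for the `users` table above):

```sql
PRAGMA index_list(&#39;users&#39;);
-- Shows three auto-created indexes (named sqlite_autoindex_users_1, _2, _3),
-- one for each UNIQUE column: email, phone_number, government_id.
```
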
Let&#39;s take a look at a few cases in order to understand it better.&#xA;&#xA;```sql&#xA;INSERT INTO users (name, email, phone_number, government_id, age, bio) &#xA;    VALUES(&#39;alice&#39;, &#39;alice@wonderland.com&#39;, &#39;1234567890&#39;, &#39;ABC123&#39;, 12, &#39;I am alice&#39;);&#xA;INSERT INTO users (name, email, phone_number, government_id, age, bio)&#xA;    VALUES(&#39;bob&#39;, &#39;bob@wonderland.com&#39;, &#39;1234567891&#39;, &#39;DEF456&#39;, 13, &#39;I am bob&#39;);&#xA;```&#xA;&#xA;```sql&#xA;SELECT * FROM users;&#xA;```&#xA;&#xA;That is the simple case. However, what would happen if one user has the same `phone_number` and `government_id` but a different `email`?&#xA;&#xA;```sql&#xA;INSERT INTO users (name, email, phone_number, government_id, age, bio)&#xA;    VALUES(&#39;alice&#39;, &#39;new_alice@wonderland.com&#39;, &#39;1234567890&#39;, &#39;ABC123&#39;, 12, &#39;I am alice&#39;);&#xA;```&#xA;&#xA;```&#xA;sqlite&gt; INSERT INTO users (name, email, phone_number, government_id, age, bio)&#xA;    VALUES(&#39;alice&#39;, &#39;new_alice@wonderland.com&#39;, &#39;1234567890&#39;, &#39;ABC123&#39;, 12, &#39;I am alice&#39;);&#xA;&#xA;Runtime error: UNIQUE constraint failed: users.government_id (19)&#xA;```&#xA;&#xA;Nope, each constraint is independent: no two rows can have the same `government_id`, `phone_number`, or `email`.&#xA;&#xA;Now, what if we want to allow `phone_number`, `government_id` and `email` to form a unique combination? The three columns together should be unique across the table; individually they no longer need to be unique, only the combination does.&#xA;&#xA;### Combinational Unique constraint&#xA;&#xA;In a combinational unique constraint, we can add two or more columns like `UNIQUE(column_1, column_2, ... 
column_n)` as a combinational constraint.&#xA;&#xA;In this type of constraint:&#xA;- Each column doesn&#39;t necessarily need to be unique on its own; the combination of the `N` columns should be unique&#xA;&#xA;```sql&#xA;DROP TABLE users;&#xA;CREATE TABLE users (&#xA;    name TEXT,&#xA;    email TEXT NOT NULL,&#xA;    phone_number TEXT NOT NULL,&#xA;    government_id TEXT NOT NULL,&#xA;    age INTEGER CHECK (age &gt; 0),&#xA;    bio TEXT,&#xA;    UNIQUE(email, phone_number, government_id)&#xA;);&#xA;&#xA;-- OR&#xA;&#xA;-- CREATE TABLE users(&#xA;--     name TEXT,&#xA;--     email TEXT NOT NULL,&#xA;--     phone_number TEXT NOT NULL,&#xA;--     age INTEGER CHECK (age &gt; 0),&#xA;--     bio TEXT,&#xA;--     UNIQUE(email, phone_number)&#xA;-- );&#xA;```&#xA;&#xA;I am just making this schema up; whether a combination of `email`, `phone_number` and `government_id` is a sensible design is debatable, but this is not a data-integrity and schema-design class. This post is about understanding the `UNIQUE` constraint over a combination of two or more columns.&#xA;&#xA;```sql&#xA;INSERT INTO users (name, email, phone_number, government_id, age, bio) &#xA;    VALUES(&#39;alice&#39;, &#39;alice@wonderland.com&#39;, &#39;1234567890&#39;, &#39;ABC123&#39;, 12, &#39;I am alice&#39;);&#xA;INSERT INTO users (name, email, phone_number, government_id, age, bio)&#xA;    VALUES(&#39;bob&#39;, &#39;bob@wonderland.com&#39;, &#39;1234567891&#39;, &#39;DEF456&#39;, 13, &#39;I am bob&#39;);&#xA;INSERT INTO users (name, email, phone_number, government_id, age, bio)&#xA;    VALUES(&#39;alice&#39;, &#39;new_alice@wonderland.com&#39;, &#39;1234567890&#39;, &#39;ABC123&#39;, 12, &#39;I am alice&#39;);&#xA;```&#xA;&#xA;```sql&#xA;SELECT * FROM users;&#xA;```&#xA;&#xA;That works! We have `1234567890` as the `phone_number` and `ABC123` as the `government_id`, which are the same, but `email` is different. 
So, the combination of `email`, `phone_number` and `government_id` is unique.&#xA;&#xA;&#xA;```&#xA;sqlite&gt; SELECT * FROM users;&#xA;+-------+--------------------------+--------------+---------------+-----+------------+&#xA;| name  |          email           | phone_number | government_id | age |    bio     |&#xA;+-------+--------------------------+--------------+---------------+-----+------------+&#xA;| alice | alice@wonderland.com     | 1234567890   | ABC123        | 12  | I am alice |&#xA;| bob   | bob@wonderland.com       | 1234567891   | DEF456        | 13  | I am bob   |&#xA;| alice | new_alice@wonderland.com | 1234567890   | ABC123        | 12  | I am alice |&#xA;+-------+--------------------------+--------------+---------------+-----+------------+&#xA;sqlite&gt; INSERT INTO users (name, email, phone_number, government_id, age, bio)&#xA;    VALUES(&#39;bob&#39;, &#39;new_alice@wonderland.com&#39;, &#39;0987654321&#39;, &#39;ABC123&#39;, 12, &#39;I am alice&#39;);&#xA;sqlite&gt; SELECT * FROM users;&#xA;+-------+--------------------------+--------------+---------------+-----+------------+&#xA;| name  |          email           | phone_number | government_id | age |    bio     |&#xA;+-------+--------------------------+--------------+---------------+-----+------------+&#xA;| alice | alice@wonderland.com     | 1234567890   | ABC123        | 12  | I am alice |&#xA;| bob   | bob@wonderland.com       | 1234567891   | DEF456        | 13  | I am bob   |&#xA;| alice | new_alice@wonderland.com | 1234567890   | ABC123        | 12  | I am alice |&#xA;| bob   | new_alice@wonderland.com | 0987654321   | ABC123        | 12  | I am alice |&#xA;+-------+--------------------------+--------------+---------------+-----+------------+&#xA;sqlite&gt;&#xA;```&#xA;&#xA;```sql&#xA;INSERT INTO users (name, email, phone_number, government_id, age, bio)&#xA;    VALUES(&#39;alice&#39;, &#39;new_alice@wonderland.com&#39;, &#39;1234567890&#39;, &#39;ABC123&#39;, 12, &#39;I am 
alice&#39;);&#xA;```&#xA;&#xA;```&#xA;sqlite&gt; INSERT INTO users (name, email, phone_number, government_id, age, bio)&#xA;    VALUES(&#39;alice&#39;, &#39;new_alice@wonderland.com&#39;, &#39;1234567890&#39;, &#39;ABC123&#39;, 12, &#39;I am alice&#39;);&#xA;Runtime error: UNIQUE constraint failed: users.email, users.phone_number, users.government_id (19)&#xA;sqlite&gt;&#xA;```&#xA;&#xA;Now this fails, as that `email`, `phone_number`, `government_id` combination already exists in the table.&#xA;&#xA;&#xA;## Primary key vs Unique Constraint&#xA;&#xA;Both of these do a similar job, but a table may have any number of `UNIQUE` constraints while it can have only one `PRIMARY KEY` (which itself can span multiple columns). In other words, each table has at most one `PRIMARY KEY`, which is used to identify each row uniquely.&#xA;&#xA;```&#xA;PRIMARY KEY = UNIQUE + NOT NULL&#xA;```&#xA;A primary key is also a shorthand for `UNIQUE` and `NOT NULL`: if you add a `PRIMARY KEY` to a column or combination of columns for a table, you needn&#39;t specify `UNIQUE` and `NOT NULL`; it already does that for you.&#xA;&#xA;So, that is the general difference between `UNIQUE` and `PRIMARY KEY`; we&#39;ll explore the `PRIMARY KEY` in later posts.&#xA;&#xA;</content>
      <type>sqlog</type>
    </item>
    <item>
      <title>SQLite SQL: NOT NULL column constraint</title>
      <link>https://meetgor.com/sqlog/sqlite-not-null-column-constraint</link>
      <description>NOT NULL Column constraint The constraint is a column-level constraint that will prevent the insertion/updation of a value in that column record. What is ? In S</description>
      <pubDate>Mon, 08 Sep 2025 00:00:00 UTC</pubDate>
      <content>&#xA;&#xA;## NOT NULL Column constraint&#xA;&#xA;The `NOT NULL` constraint is a column-level constraint that will prevent the insertion/updation of a `NULL` value in that column record.&#xA;&#xA;What is `NULL`?&#xA;&#xA;In SQLite, specifically, a `NULL` is a value indicating that a value is not known or just not there.&#xA;&#xA;&gt; NOTE: NULL is not equal to `&#39;&#39;` empty string, `0` or `0.0` (zero), it&#39;s just cannot be compared to any value.&#xA;&#xA;Some additional notes on the `NOT NULL` constraint:&#xA;- If a column has a `NOT NULL` constraint, and it doesn&#39;t have any other constraint like `DEFAULT`, `GENERATED`, `PRIMARY KEY`, or `FOREIGN KEY`, and there is no provided value, it will fail the constraint (i.e. error out the query).&#xA;- Only if any of the other constraint populate the column while inserting the `NOT NULL` constraint will be satisfied.&#xA;&#xA;So, to take a simple iterative example, let&#39;s take the `users` table.&#xA;&#xA;```sql&#xA;CREATE TABLE users(&#xA;    name TEXT NOT NULL,&#xA;    age INTEGER CHECK(age &gt;= 0) NOT NULL,&#xA;    username TEXT UNIQUE NOT NULL,&#xA;    language TEXT DEFAULT &#39;en&#39;,&#xA;    country_code TEXT DEFAULT &#39;US&#39; NOT NULL,&#xA;    email TEXT UNIQUE GENERATED ALWAYS AS (username || &#39;@myapp.com&#39;)&#xA;);&#xA;```&#xA;Here we have 5 fields&#xA;&#xA;- `name` which cannot be empty(`NULL`)&#xA;- `age` which cannot be less than 0 or `NULL`&#xA;- `username` which needs to be unique and cannot be `NULL`&#xA;- `language` which has a default value of `en` but it can be `NULL`&#xA;- `country_code` which has a default value of `US` but it cannot be `NULL`&#xA;- `email` which has a default value of `username || &#39;@myapp.com&#39;` and it will be unique.&#xA;&#xA;&#xA;Let&#39;s look at a few examples:&#xA;&#xA;### Correct values&#xA;&#xA;```sql&#xA;INSERT INTO users (name, age, username) VALUES (&#39;Alice&#39;, 25, &#39;alice&#39;);&#xA;SELECT * FROM 
users;&#xA;```&#xA;&#xA;### Duplicate username&#xA;&#xA;Won&#39;t allow a missing (`NULL`) name, and the duplicate username would fail too.&#xA;&#xA;```sql&#xA;INSERT INTO users (age, username) VALUES (25, &#39;alice&#39;);&#xA;SELECT * FROM users;&#xA;```&#xA;&#xA;### Border case for age&#xA;&#xA;A borderline case: `age` as `0` just passes the check `age &gt;= 0`.&#xA;&#xA;```sql&#xA;INSERT INTO users (name, age, username) VALUES (&#39;Bob&#39;, 0, &#39;bob&#39;);&#xA;SELECT * FROM users;&#xA;```&#xA;&#xA;### Invalid age check constraint&#xA;&#xA;The `age` is inserted as `-1`, which will fail the `CHECK` constraint.&#xA;&#xA;```sql&#xA;INSERT INTO users (name, age, username) VALUES (&#39;Bob&#39;, -1, &#39;bob&#39;);&#xA;SELECT * FROM users;&#xA;```&#xA;&#xA;### Age as null&#xA;&#xA;The `age` is inserted as `NULL`, which will fail the `NOT NULL` constraint.&#xA;&#xA;```sql&#xA;INSERT INTO users (name, username) VALUES (&#39;Bob&#39;, &#39;bob&#39;);&#xA;SELECT * FROM users;&#xA;```&#xA;&#xA;### Correct Values&#xA;&#xA;Will correctly populate the values.&#xA;&#xA;```sql&#xA;INSERT INTO users (name, age, username) VALUES (&#39;John&#39;, 30, &#39;john&#39;);&#xA;INSERT INTO users (name, age, username) VALUES (&#39;Jim&#39;, 18, &#39;jim&#39;);&#xA;```&#xA;&#xA;### Duplicate email&#xA;&#xA;The generated email will be a duplicate and fail the `UNIQUE` constraint.&#xA;&#xA;```sql&#xA;INSERT INTO users (name, age, username, language) VALUES (&#39;Jim&#39;, 18, &#39;jim&#39;, &#39;fr&#39;);&#xA;```&#xA;&#xA;### Different language&#xA;&#xA;Will correctly populate the values.&#xA;&#xA;```sql&#xA;INSERT INTO users (name, age, username, language) VALUES (&#39;Jimmy&#39;, 18, &#39;jimmy&#39;, &#39;fr&#39;);&#xA;```&#xA;&#xA;So, that is how the `NOT NULL` constraint combines with the other constraints on inserted values.&#xA;&#xA;```&#xA;sqlite&gt; INSERT INTO users (name, age, username) VALUES (&#39;Alice&#39;, 25, &#39;alice&#39;);&#xA;sqlite&gt; .mode table&#xA;&#xA;sqlite&gt; SELECT * FROM 
users;&#xA;+-------+-----+----------+----------+--------------+-----------------+&#xA;| name  | age | username | language | country_code |      email      |&#xA;+-------+-----+----------+----------+--------------+-----------------+&#xA;| Alice | 25  | alice    | en       | US           | alice@myapp.com |&#xA;+-------+-----+----------+----------+--------------+-----------------+&#xA;&#xA;sqlite&gt; INSERT INTO users (age, username) VALUES (25, &#39;alice&#39;);&#xA;Runtime error: NOT NULL constraint failed: users.name (19)&#xA;&#xA;sqlite&gt; INSERT INTO users (name, age, username) VALUES (&#39;Bob&#39;, 0, &#39;bob&#39;);&#xA;&#xA;sqlite&gt; SELECT * FROM users;&#xA;+-------+-----+----------+----------+--------------+-----------------+&#xA;| name  | age | username | language | country_code |      email      |&#xA;+-------+-----+----------+----------+--------------+-----------------+&#xA;| Alice | 25  | alice    | en       | US           | alice@myapp.com |&#xA;| Bob   | 0   | bob      | en       | US           | bob@myapp.com   |&#xA;+-------+-----+----------+----------+--------------+-----------------+&#xA;&#xA;sqlite&gt; INSERT INTO users (name, age, username) VALUES (&#39;Bob&#39;, -1, &#39;bob&#39;);&#xA;Runtime error: CHECK constraint failed: age &gt;= 0 (19)&#xA;&#xA;sqlite&gt; INSERT INTO users (name, username) VALUES (&#39;Bob&#39;, &#39;bob&#39;);&#xA;Runtime error: NOT NULL constraint failed: users.age (19)&#xA;&#xA;sqlite&gt; INSERT INTO users (name, age, username) VALUES (&#39;John&#39;, 30, &#39;john&#39;);&#xA;sqlite&gt; INSERT INTO users (name, age, username) VALUES (&#39;Jim&#39;, 18, &#39;jim&#39;);&#xA;&#xA;sqlite&gt; INSERT INTO users (name, age, username, language) VALUES (&#39;Jim&#39;, 18, &#39;jim&#39;, &#39;fr&#39;);&#xA;Runtime error: UNIQUE constraint failed: users.email (19)&#xA;&#xA;sqlite&gt; INSERT INTO users (name, age, username, language) VALUES (&#39;Jimmy&#39;, 18, &#39;jimmy&#39;, 
&#39;fr&#39;);&#xA;+-------+-----+----------+----------+--------------+-----------------+&#xA;| name  | age | username | language | country_code |      email      |&#xA;+-------+-----+----------+----------+--------------+-----------------+&#xA;| Alice | 25  | alice    | en       | US           | alice@myapp.com |&#xA;| Bob   | 0   | bob      | en       | US           | bob@myapp.com   |&#xA;| John  | 30  | john     | en       | US           | john@myapp.com  |&#xA;| Jim   | 18  | jim      | en       | US           | jim@myapp.com   |&#xA;| Jimmy | 18  | jimmy    | fr       | US           | jimmy@myapp.com |&#xA;+-------+-----+----------+----------+--------------+-----------------+&#xA;&#xA;sqlite&gt; INSERT INTO users (name, age, username) VALUES (&#39;Robin&#39;, 24, &#39;robin&#39;);&#xA;sqlite&gt; INSERT INTO users (name, age, username) VALUES (&#39;Robinson&#39;, 24, &#39;robin&#39;);&#xA;Runtime error: UNIQUE constraint failed: users.email (19)&#xA;&#xA;sqlite&gt; SELECT * FROM users;&#xA;+-------+-----+----------+----------+--------------+-----------------+&#xA;| name  | age | username | language | country_code |      email      |&#xA;+-------+-----+----------+----------+--------------+-----------------+&#xA;| Alice | 25  | alice    | en       | US           | alice@myapp.com |&#xA;| Bob   | 0   | bob      | en       | US           | bob@myapp.com   |&#xA;| John  | 30  | john     | en       | US           | john@myapp.com  |&#xA;| Jim   | 18  | jim      | en       | US           | jim@myapp.com   |&#xA;| Jimmy | 18  | jimmy    | fr       | US           | jimmy@myapp.com |&#xA;| Robin | 24  | robin    | en       | US           | robin@myapp.com |&#xA;+-------+-----+----------+----------+--------------+-----------------+&#xA;sqlite&gt;&#xA;```&#xA;&#xA;&#xA;</content>
      <type>sqlog</type>
    </item>
    <item>
      <title>SQLite SQL: Create Table with column constraints</title>
      <link>https://meetgor.com/sqlog/sqlite-create-table-column-constraint</link>
      <description>Create Table with column constraints We have seen the table creation with column types, we now need to look into the constraints or limitations that we can add</description>
      <pubDate>Sun, 07 Sep 2025 00:00:00 UTC</pubDate>
      <content>&#xA;&#xA;## Create Table with column constraints&#xA;&#xA;We have seen the table creation with column types, we now need to look into the constraints or limitations that we can add to those columns, in order to make the data-model more robust and add some basic validations before doing any updation/mutation on the data itself.&#xA;&#xA;To add a constraint, we can specify it at the end of the column name, after the type, so there are a list of constraints that could be added to column, which are relevant to specific scenarios of the actual model of the column.&#xA;- `NOT NULL`: This will prevent insertion of `NULL` or empty values in the column field, even updating an existing filled value with `NULL` will prevent it from inserting a `NULL` value.&#xA;- `UNIQUE`: This will prevent insertion of duplicate value for an column field, same applies for updating an existing value with a duplicate will lead to violating the constraint.&#xA;- `DEFAULT`: This will add a default value to the column field, if not specified in the insertion of the value.&#xA;- `GENERATED ALWAYS AS`: A expression used to generate the value of the column field, this can be derived from the existing columns, or a constant value.&#xA;- `CHECK`: A custom check (numeric expression) can be added for that column before updating or inserting its value in the column.&#xA;- `PRIMARY KEY`: This will set the column as the primary identifier for the table, so essentially it will be like a `rowid` for the table, acting like a unique identifier among the other rows.&#xA;- `FOREIGN KEY`: This will set the column as a foreign key, and it will be a reference to the primary key of another table.&#xA;&#xA;Let&#39;s look at a basic example of each, I would go into details in each subsequent posts in the future as their is a lot of nitty-gritty details to be covered.&#xA;&#xA;## NOT NULL Constraint&#xA;&#xA;The `NOT NULL` constraint is a column-level constraint that can be added to a column in order to 
avoid getting a `NULL` or empty value being populated/inserted/updated in that column field for any row in the table.&#xA;&#xA;```sql&#xA;CREATE TABLE llm_models(&#xA;    name TEXT NOT NULL,&#xA;    model_type TEXT,&#xA;    model_code TEXT&#xA;);&#xA;```&#xA;&#xA;```sql&#xA;INSERT INTO llm_models DEFAULT VALUES;&#xA;SELECT * FROM llm_models;&#xA;```&#xA;&#xA;Oops! `Runtime error: NOT NULL constraint failed: llm_models.name (19)`. The default value is `NULL` or empty if not explicitly constrained on the column. So, it tried to insert `NULL` as the `name` of the `llm_models` record/row, and while doing so SQLite hit an error at runtime: the column `llm_models.name` cannot be `NULL`. &#xA;&#xA;So, one of the solutions to avoid this could be explicitly passing a value, or setting a default value in the column definition (we will see that in a later section as the default constraint).&#xA;&#xA;Let&#39;s specify the value of the `name` column:&#xA;&#xA;```sql&#xA;INSERT INTO llm_models(name) VALUES(&#39;abc&#39;);&#xA;SELECT * FROM llm_models;&#xA;```&#xA;&#xA;This time it ran successfully. The thing to note here is that the `model_type` and `model_code` columns are populated as `NULL` because they don&#39;t have any constraint, in particular no `NOT NULL` constraint; hence we need to provide a value for the `name` column, but the `model_type` and `model_code` columns become optional.&#xA;&#xA;SQLite also treats the `&#39;&#39;` empty string as not null, i.e. you can insert `&#39;&#39;` into the `name` column and it will work fine. This is a bit weird, but again, flexibility is a double-edged sword. 
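
The distinction is easy to demonstrate (a small sketch using the `llm_models` table above):

```sql
-- The empty string satisfies NOT NULL; it is a real TEXT value, not NULL.
INSERT INTO llm_models(name) VALUES(&#39;&#39;);
SELECT COUNT(*) FROM llm_models WHERE name = &#39;&#39;;     -- matches the new row
SELECT COUNT(*) FROM llm_models WHERE name IS NULL;  -- matches nothing
```
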
We need to understand the difference between `NULL` and an empty string in this case; they are two different values.&#xA;&#xA;## UNIQUE Constraint&#xA;&#xA;&#xA;Now, we will add a constraint to the `model_code` column that it should be unique; we don&#39;t want it to be a primary key just yet, but each row/record in this table should have a unique `model_code`.&#xA;&#xA;We cannot alter a table to add a constraint in SQLite, though it is allowed in other databases like Postgres, MySQL, etc. That is risky anyway, as the existing data might violate the newly added constraint. So ideally we need to re-create the table with the updated constraints and copy the data back, as long as it satisfies them.&#xA;&#xA;Let&#39;s drop the table `llm_models` so that we can start afresh.&#xA;&#xA;```sql&#xA;DROP TABLE llm_models;&#xA;CREATE TABLE llm_models(&#xA;    name TEXT NOT NULL,&#xA;    model_type TEXT,&#xA;    model_code TEXT UNIQUE&#xA;);&#xA;```&#xA;&#xA;This way, we make sure the `model_code` column is unique, and we don&#39;t have any duplicate values in the column. When each new record is added to the table `llm_models` the `model_code` needs to be unique, and even an update would fail if the new value is a duplicate.&#xA;&#xA;```sql&#xA;INSERT INTO llm_models(name) VALUES(&#39;abc&#39;);&#xA;SELECT * FROM llm_models;&#xA;```&#xA;&#xA;Works fine. Now notice that the `model_code` is not provided and is therefore `NULL`. Let&#39;s add one more row with the same `name` and check if that fails the unique constraint on `model_code`, as both rows will now have a duplicated `NULL` model_code. That should fail, right? RIGHT?&#xA;&#xA;```sql&#xA;INSERT INTO llm_models(name) VALUES(&#39;abc&#39;);&#xA;SELECT * FROM llm_models;&#xA;```&#xA;&#xA;Nope!&#xA;&#xA;`NULL` doesn&#39;t count as a duplicate for uniqueness. 
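
Both rows really are in the table; you can count the `NULL` model codes to confirm (assuming the two inserts above):

```sql
SELECT COUNT(*) FROM llm_models WHERE model_code IS NULL;
-- Returns 2: both rows with a NULL model_code were accepted
-- despite the UNIQUE constraint.
```
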
NULL is like undefined; you can&#39;t really distinguish one `NULL` from another. Hence, the `UNIQUE` constraint is often paired with `NOT NULL` to prevent this `NULL` confusion.&#xA;&#xA;&#xA;Let&#39;s add the `model_code` now.&#xA;&#xA;```sql&#xA;INSERT INTO llm_models(name, model_code) VALUES (&#39;abc&#39;, &#39;llm-1&#39;);&#xA;SELECT * FROM llm_models;&#xA;```&#xA;&#xA;Ok, we added a record with `model_code` as `llm-1`. Now, let&#39;s add the same record again.&#xA;&#xA;```sql&#xA;INSERT INTO llm_models(name, model_code) VALUES (&#39;abc&#39;, &#39;llm-1&#39;);&#xA;SELECT * FROM llm_models;&#xA;```&#xA;&#xA;And it broke: it gave a `Runtime error: UNIQUE constraint failed: llm_models.model_code (19)` error. So, we can&#39;t add the same `model_code` twice.&#xA;&#xA;```&#xA;sqlite&gt; CREATE TABLE llm_models(&#xA;    name TEXT NOT NULL,&#xA;    model_type TEXT,&#xA;    model_code TEXT UNIQUE&#xA;);&#xA;sqlite&gt; INSERT INTO llm_models(name) VALUES(&#39;abc&#39;);&#xA;sqlite&gt; .mode table&#xA;sqlite&gt; SELECT * FROM llm_models;&#xA;+------+------------+------------+&#xA;| name | model_type | model_code |&#xA;+------+------------+------------+&#xA;| abc  |            |            |&#xA;+------+------------+------------+&#xA;sqlite&gt; INSERT INTO llm_models(name) VALUES(&#39;abc&#39;);&#xA;sqlite&gt; SELECT * FROM llm_models;&#xA;+------+------------+------------+&#xA;| name | model_type | model_code |&#xA;+------+------------+------------+&#xA;| abc  |            |            |&#xA;| abc  |            |            |&#xA;+------+------------+------------+&#xA;sqlite&gt; INSERT INTO llm_models(name, model_code) VALUES(&#39;abc&#39;, &#39;llm-1&#39;);&#xA;sqlite&gt; INSERT INTO llm_models(name, model_code) VALUES(&#39;abc&#39;, &#39;llm-1&#39;);&#xA;Runtime error: UNIQUE constraint failed: llm_models.model_code (19)&#xA;sqlite&gt; SELECT * FROM llm_models;&#xA;+------+------------+------------+&#xA;| name | model_type | model_code 
|&#xA;+------+------------+------------+&#xA;| abc  |            |            |&#xA;| abc  |            |            |&#xA;| abc  |            | llm-1      |&#xA;+------+------------+------------+&#xA;sqlite&gt;&#xA;```&#xA;&#xA;So, some tips for using the `UNIQUE` constraint in SQLite:&#xA;- A `UNIQUE` constraint does not catch duplicate `NULL` values.&#xA;- Prefer using `NOT NULL` in addition to the `UNIQUE` constraint.&#xA;&#xA;&#xA;## DEFAULT Constraint&#xA;&#xA;The `DEFAULT` constraint, as the name suggests, will set a default value for a column if no value is provided in the insert statement. You need to provide the default value while creating the table. A default can be a literal or a constant expression; if you want a value computed from the other columns, that is what the `GENERATED ALWAYS` constraint is for.&#xA;&#xA;```sql&#xA;DROP TABLE llm_models;&#xA;CREATE TABLE llm_models(&#xA;    name TEXT NOT NULL,&#xA;    model_type TEXT DEFAULT &#39;text&#39;,&#xA;    model_code TEXT UNIQUE NOT NULL&#xA;);&#xA;```&#xA;&#xA;I have added a `DEFAULT &#39;text&#39;` constraint on `model_type`, which will set the value of the `model_type` column to `text` if no value is provided in the insert statement.&#xA;&#xA;Also, the `model_code` column is updated with the `UNIQUE` and `NOT NULL` constraints.&#xA;&#xA;```sql&#xA;INSERT INTO llm_models(name, model_code) VALUES(&#39;abc&#39;, &#39;llm-1&#39;);&#xA;SELECT * FROM llm_models;&#xA;```&#xA;&#xA;And that works as expected; it populates the value of `model_type` as `text` by default.&#xA;&#xA;```&#xA;sqlite&gt; CREATE TABLE llm_models(&#xA;    name TEXT NOT NULL,&#xA;    model_type TEXT DEFAULT &#39;text&#39;,&#xA;    model_code TEXT UNIQUE NOT NULL&#xA;);&#xA;sqlite&gt; INSERT INTO llm_models(name, model_code) VALUES(&#39;abc&#39;, &#39;llm-1&#39;);&#xA;sqlite&gt; SELECT * FROM llm_models;&#xA;+------+------------+------------+&#xA;| name | model_type | model_code |&#xA;+------+------------+------------+&#xA;| abc  | text       | llm-1      
|&#xA;+------+------------+------------+&#xA;sqlite&gt; INSERT INTO llm_models(name, model_code) VALUES(&#39;abc&#39;, &#39;llm-1&#39;);&#xA;Runtime error: UNIQUE constraint failed: llm_models.model_code (19)&#xA;sqlite&gt;&#xA;```&#xA;&#xA;And if we try to insert the same `model_code` value again, it will fail with the `UNIQUE` constraint.&#xA;&#xA;## Generated Constraint&#xA;&#xA;The [Generated](https://sqlite.org/gencol.html) column constraint fills a column with the result of an expression evaluated over the values of the other columns in that row.&#xA;&#xA;&#xA;```sql&#xA;DROP TABLE llm_models;&#xA;CREATE TABLE llm_models(&#xA;    name TEXT NOT NULL,&#xA;    model_type TEXT DEFAULT &#39;text&#39;,&#xA;    model_code TEXT UNIQUE NOT NULL,&#xA;    display_name TEXT GENERATED ALWAYS AS (name || &#39; (&#39; || model_code || &#39;)&#39;) STORED&#xA;);&#xA;```&#xA;&#xA;We have added a `GENERATED ALWAYS AS ... STORED` constraint which will store the result of `name || &#39; (&#39; || model_code || &#39;)&#39;` in the `display_name` column.&#xA;&#xA;```sql&#xA;INSERT INTO llm_models(name, model_code) VALUES(&#39;abc&#39;, &#39;llm-1&#39;);&#xA;SELECT * FROM llm_models;&#xA;```&#xA;&#xA;There is a `VIRTUAL` variant instead of `STORED`; we will explore that separately, as for now I am just getting hands-on familiarity with all the constraints available in SQL.&#xA;&#xA;```&#xA;sqlite&gt; CREATE TABLE llm_models(&#xA;    name TEXT NOT NULL,&#xA;    model_type TEXT DEFAULT &#39;text&#39;,&#xA;    model_code TEXT UNIQUE NOT NULL,&#xA;    display_name TEXT GENERATED ALWAYS AS (name || &#39; (&#39; || model_code || &#39;)&#39;) STORED&#xA;);&#xA;sqlite&gt; .mode table&#xA;sqlite&gt; INSERT INTO llm_models(name, model_code) VALUES(&#39;abc&#39;, &#39;llm-1&#39;);&#xA;sqlite&gt; SELECT * FROM llm_models;&#xA;+------+------------+------------+--------------+&#xA;| name | model_type | model_code | display_name |&#xA;+------+------------+------------+--------------+&#xA;| abc  | 
text       | llm-1      | abc (llm-1)  |&#xA;+------+------------+------------+--------------+&#xA;```&#xA;&#xA;Neat! It populated, or I should say `generated`, the `display_name` column by evaluating the `name || &#39; (&#39; || model_code || &#39;)&#39;` expression.&#xA;&#xA;## Check Constraint&#xA;&#xA;The `CHECK` constraint is a column-level constraint that attaches a custom check (a boolean expression) which must pass before a value is inserted or updated in the column. It is just a simple validation expression evaluated before a value lands in the row.&#xA;&#xA;Let&#39;s pair the `model_type` column with a `CHECK` constraint, limiting the values of `model_type` to `text`, `conversational`, `multimodal`, `code`.&#xA;&#xA;```sql&#xA;DROP TABLE llm_models;&#xA;CREATE TABLE llm_models(&#xA;    name TEXT NOT NULL,&#xA;    model_type TEXT CHECK(model_type IN (&#39;text&#39;, &#39;conversational&#39;, &#39;multimodal&#39;, &#39;code&#39;)) DEFAULT &#39;text&#39;,&#xA;    model_code TEXT UNIQUE NOT NULL,&#xA;    display_name TEXT GENERATED ALWAYS AS (name || &#39; (&#39; || model_code || &#39;)&#39;) STORED&#xA;);&#xA;```&#xA;Here, we have added the `CHECK` constraint to the `model_type` column, limiting the values of `model_type` to `text`, `conversational`, `multimodal`, `code`. 
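&#xA;&#xA;A `CHECK` is not limited to `IN` lists; any boolean expression works. As a quick sketch (using a hypothetical `model_scores` table, not part of the schema above):&#xA;&#xA;```sql&#xA;-- hypothetical example: enforce a numeric range instead of a value list&#xA;CREATE TABLE model_scores(&#xA;    model_code TEXT NOT NULL,&#xA;    f1_score REAL CHECK(f1_score BETWEEN 0.0 AND 1.0)&#xA;);&#xA;```&#xA;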
It could be any expression; it is just a true-or-false check that will either pass or fail.&#xA;&#xA;```sql&#xA;INSERT INTO llm_models(name, model_code) VALUES(&#39;abc&#39;, &#39;llm-1&#39;);&#xA;SELECT * FROM llm_models;&#xA;```&#xA;&#xA;This will add the default value of `text` for the `model_type`.&#xA;&#xA;```sql&#xA;INSERT INTO llm_models(name, model_code, model_type) VALUES(&#39;abc&#39;, &#39;llm-1&#39;, &#39;visual&#39;);&#xA;SELECT * FROM llm_models;&#xA;```&#xA;&#xA;Here we try to set `model_type` to `visual`, which is not allowed by the `CHECK` constraint.&#xA;&#xA;&#xA;```sql&#xA;INSERT INTO llm_models(name, model_code, model_type) VALUES(&#39;abc&#39;, &#39;llm-1&#39;, &#39;code&#39;);&#xA;SELECT * FROM llm_models;&#xA;```&#xA;&#xA;Then we try to set `model_type` to `code`, which *is* allowed by the `CHECK` constraint; this insert still fails, though, because `llm-1` duplicates the `model_code`, so in the transcript we retry with `llm-2`.&#xA;&#xA;```&#xA;sqlite&gt; CREATE TABLE llm_models(&#xA;    name TEXT NOT NULL,&#xA;    model_type TEXT CHECK(model_type IN (&#39;text&#39;, &#39;conversational&#39;, &#39;multimodal&#39;, &#39;code&#39;)) DEFAULT &#39;text&#39;,&#xA;    model_code TEXT UNIQUE NOT NULL,&#xA;    display_name TEXT GENERATED ALWAYS AS (name || &#39; (&#39; || model_code || &#39;)&#39;) STORED&#xA;);&#xA;sqlite&gt; INSERT INTO llm_models(name, model_code) VALUES(&#39;abc&#39;, &#39;llm-1&#39;);&#xA;sqlite&gt; SELECT * FROM llm_models&#xA;   ...&gt; ;&#xA;+------+------------+------------+--------------+&#xA;| name | model_type | model_code | display_name |&#xA;+------+------------+------------+--------------+&#xA;| abc  | text       | llm-1      | abc (llm-1)  |&#xA;+------+------------+------------+--------------+&#xA;sqlite&gt; INSERT INTO llm_models(name, model_code, model_type) VALUES(&#39;abc&#39;, &#39;llm-1&#39;, &#39;visual&#39;);&#xA;Runtime error: CHECK constraint failed: model_type IN (&#39;text&#39;, &#39;conversational&#39;, &#39;multimodal&#39;, &#39;code&#39;) (19)&#xA;sqlite&gt; INSERT INTO llm_models(name, model_code, 
model_type) VALUES(&#39;abc&#39;, &#39;llm-1&#39;, &#39;code&#39;);&#xA;Runtime error: UNIQUE constraint failed: llm_models.model_code (19)&#xA;sqlite&gt; INSERT INTO llm_models(name, model_code, model_type) VALUES(&#39;abc&#39;, &#39;llm-2&#39;, &#39;code&#39;);&#xA;sqlite&gt; SELECT * FROM llm_models;&#xA;+------+------------+------------+--------------+&#xA;| name | model_type | model_code | display_name |&#xA;+------+------------+------------+--------------+&#xA;| abc  | text       | llm-1      | abc (llm-1)  |&#xA;| abc  | code       | llm-2      | abc (llm-2)  |&#xA;+------+------------+------------+--------------+&#xA;sqlite&gt;&#xA;```&#xA;&#xA;Right! One insert failed the `CHECK` constraint `model_type IN (&#39;text&#39;, &#39;conversational&#39;, &#39;multimodal&#39;, &#39;code&#39;)`, and another failed the `UNIQUE` constraint on `model_code`.&#xA;&#xA;&#xA;## Primary Key Constraint&#xA;&#xA;The primary key constraint uniquely identifies each row in a table.&#xA;Here it acts as a unique constraint over a single column that identifies a row in the table.&#xA;&#xA;We can declare it right after the column name or at the bottom, with the other table-level constraints.&#xA;&#xA;```sql&#xA;DROP TABLE llm_models;&#xA;CREATE TABLE llm_models(&#xA;    name TEXT NOT NULL,&#xA;    model_type TEXT CHECK(model_type IN (&#39;text&#39;, &#39;conversational&#39;, &#39;multimodal&#39;, &#39;code&#39;)) DEFAULT &#39;text&#39;,&#xA;    model_code TEXT UNIQUE NOT NULL,&#xA;    display_name TEXT GENERATED ALWAYS AS (name || &#39; (&#39; || model_code || &#39;)&#39;) STORED,&#xA;    PRIMARY KEY(model_code)&#xA;);&#xA;```&#xA;&#xA;OR&#xA;&#xA;```sql&#xA;DROP TABLE llm_models;&#xA;CREATE TABLE llm_models(&#xA;    name TEXT NOT NULL,&#xA;    model_type TEXT CHECK(model_type IN (&#39;text&#39;, &#39;conversational&#39;, &#39;multimodal&#39;, &#39;code&#39;)) DEFAULT &#39;text&#39;,&#xA;    model_code TEXT UNIQUE NOT NULL PRIMARY KEY,&#xA;    
display_name TEXT GENERATED ALWAYS AS (name || &#39; (&#39; || model_code || &#39;)&#39;) STORED&#xA;);&#xA;```&#xA;&#xA;The `PRIMARY KEY` is a table-level constraint, as it needs to be unique across all the rows in the table. (The explicit `UNIQUE` on `model_code` is redundant here, since a primary key is already unique.)&#xA;&#xA;## Foreign Key Constraint&#xA;&#xA;The foreign key constraint is a column-level constraint that is used to establish a relationship between two tables.&#xA;&#xA;We will now create a separate table called `companies` for this example; it stores information about the companies that create or develop the LLM models.&#xA;&#xA;```sql&#xA;CREATE TABLE companies(&#xA;    id INTEGER PRIMARY KEY,&#xA;    company_name TEXT UNIQUE NOT NULL,&#xA;    founded_year INTEGER&#xA;);&#xA;```&#xA;Let&#39;s insert a few companies.&#xA;&#xA;```sql&#xA;INSERT INTO companies (company_name, founded_year) VALUES(&#39;ABC&#39;, 1998);&#xA;INSERT INTO companies (company_name, founded_year) VALUES(&#39;ClosedAI&#39;, 2016);&#xA;INSERT INTO companies (company_name, founded_year) VALUES(&#39;Beta&#39;, 2014);&#xA;SELECT * FROM companies;&#xA;```&#xA;&#xA;Original companies, right?&#xA;Now we will update the `llm_models` table that stores the LLM models&#39; information. We will add a foreign key, i.e. a reference that relates a row in this table to a row in another table. The foreign key points to a key (field/column) in another table, which is why it is called a `FOREIGN KEY`. 
This foreign key will reference the `id` column of the `companies` table.&#xA;&#xA;```sql&#xA;DROP TABLE llm_models;&#xA;CREATE TABLE llm_models(&#xA;    name TEXT NOT NULL,&#xA;    model_type TEXT CHECK(model_type IN (&#39;text&#39;, &#39;conversational&#39;, &#39;multimodal&#39;, &#39;code&#39;)) DEFAULT &#39;text&#39;,&#xA;    model_code TEXT UNIQUE NOT NULL PRIMARY KEY,&#xA;    display_name TEXT GENERATED ALWAYS AS (name || &#39; (&#39; || model_code || &#39;)&#39;) STORED,&#xA;    company_id INTEGER,&#xA;    FOREIGN KEY(company_id) REFERENCES companies(id)&#xA;);&#xA;```&#xA;We insert into `llm_models` with a `company_id` of 2, referring to the company `ClosedAI`.&#xA;&#xA;```sql&#xA;INSERT INTO llm_models(name, model_code, company_id) VALUES(&#39;abc&#39;, &#39;llm-1&#39;, 2);&#xA;SELECT * FROM llm_models;&#xA;```&#xA;&#xA;Now, if we look, this has successfully inserted the row. Note that SQLite does not enforce foreign key constraints by default; you need to run `PRAGMA foreign_keys = ON;` on each connection before an invalid `company_id` would actually be rejected.&#xA;&#xA;```sql&#xA;SELECT * FROM llm_models;&#xA;```&#xA;&#xA;We can join the two tables together; this will give us the combined columns from both tables.&#xA;&#xA;```sql&#xA;SELECT * FROM llm_models as llm INNER JOIN companies as comp ON comp.id = llm.company_id;&#xA;```&#xA;&#xA;That was a bit noisy, so let&#39;s only keep what we need.&#xA;&#xA;```sql&#xA;SELECT llm.*, comp.company_name  FROM llm_models as llm INNER JOIN companies as comp ON comp.id = llm.company_id;&#xA;```&#xA;&#xA;Cool, that is sweet.&#xA;&#xA;```&#xA;sqlite&gt; CREATE TABLE companies(&#xA;    id INTEGER PRIMARY KEY,&#xA;    company_name TEXT UNIQUE NOT NULL,&#xA;    founded_year INTEGER&#xA;);&#xA;sqlite&gt; INSERT INTO companies (company_name, founded_year) VALUES(&#39;ABC&#39;, 1998);&#xA;INSERT INTO companies (company_name, founded_year) VALUES(&#39;ClosedAI&#39;, 2016);&#xA;INSERT INTO companies (company_name, founded_year) VALUES(&#39;Beta&#39;, 2014);&#xA;SELECT * FROM companies;&#xA;+----+--------------+--------------+&#xA;| id | company_name | founded_year 
|&#xA;+----+--------------+--------------+&#xA;| 1  | ABC          | 1998         |&#xA;| 2  | ClosedAI     | 2016         |&#xA;| 3  | Beta         | 2014         |&#xA;+----+--------------+--------------+&#xA;sqlite&gt;&#xA;sqlite&gt; CREATE TABLE llm_models(&#xA;    name TEXT NOT NULL,&#xA;    model_type TEXT CHECK(model_type IN (&#39;text&#39;, &#39;conversational&#39;, &#39;multimodal&#39;, &#39;code&#39;)) DEFAULT &#39;text&#39;,&#xA;    model_code TEXT UNIQUE NOT NULL PRIMARY KEY,&#xA;    display_name TEXT GENERATED ALWAYS AS (name || &#39; (&#39; || model_code || &#39;)&#39;) STORED,&#xA;    company_id INTEGER,&#xA;    FOREIGN KEY(company_id) REFERENCES companies(id)&#xA;&#xA;);&#xA;sqlite&gt; SELECT * FROM llm_models;&#xA;sqlite&gt; INSERT INTO llm_models(name, model_code, model_type, company_id) VALUES(&#39;abc&#39;, &#39;llm-1&#39;, &#39;code&#39;, 2);&#xA;sqlite&gt; SELECT * FROM llm_models;&#xA;+------+------------+------------+--------------+------------+&#xA;| name | model_type | model_code | display_name | company_id |&#xA;+------+------------+------------+--------------+------------+&#xA;| abc  | code       | llm-1      | abc (llm-1)  | 2          |&#xA;+------+------------+------------+--------------+------------+&#xA;sqlite&gt; SELECT * FROM llm_models as llm INNER JOIN companies as comp ON comp.id = llm.company_id;&#xA;+------+------------+------------+--------------+------------+----+--------------+--------------+&#xA;| name | model_type | model_code | display_name | company_id | id | company_name | founded_year |&#xA;+------+------------+------------+--------------+------------+----+--------------+--------------+&#xA;| abc  | code       | llm-1      | abc (llm-1)  | 2          | 2  | ClosedAI     | 2016         |&#xA;+------+------------+------------+--------------+------------+----+--------------+--------------+&#xA;sqlite&gt; SELECT llm.*, comp.company_name  FROM llm_models as llm INNER JOIN companies as comp ON comp.id = 
llm.company_id;&#xA;+------+------------+------------+--------------+------------+--------------+&#xA;| name | model_type | model_code | display_name | company_id | company_name |&#xA;+------+------------+------------+--------------+------------+--------------+&#xA;| abc  | code       | llm-1      | abc (llm-1)  | 2          | ClosedAI     |&#xA;+------+------------+------------+--------------+------------+--------------+&#xA;sqlite&gt;&#xA;```&#xA;&#xA;That is it: a high-level overview of adding constraints in SQLite. I will dig deeper into each constraint in separate posts over the coming week.&#xA;</content>
      <type>sqlog</type>
    </item>
    <item>
      <title>SQLite SQL: Create Table with column types</title>
      <link>https://meetgor.com/sqlog/sqlite-create-table-column-types</link>
      <description>Creating a table with column types If you want to store values in a column with a specific type, you need to specify one of the following types in order to stor</description>
      <pubDate>Sat, 06 Sep 2025 00:00:00 UTC</pubDate>
      <content>&#xA;&#xA;## Creating a table with column types&#xA;&#xA;If you want to store values in a column with a specific type, you need to specify one of the following types in order to store that value in relatively type specific data.&#xA;&#xA;- `TEXT` (string or characters, bunch of text)&#xA;- `BLOB` (binary data, images, raw files, etc)&#xA;- `INTEGER` (numbers which are whole, like 1, 67, 986, etc)&#xA;- `REAL` (floating or decimal point numbers, like 3.14, 85.98, 999.99, etc)&#xA;- `NULL` (empty or not defined)&#xA;&#xA;Type affinity is what determines how a data is being stored and treated in sqlite, so we&#39;ll need to cover that separately. For now, let&#39;s assume that the data you give in is indeed valdiated before sending over to the database (that rarely is the case though) but for now we can avoid that discussion, and focus on creation of the structure and relations of the data.&#xA;&#xA;If you are curious, you can learn about types and affinity [here](https://www.sqlite.org/datatype3.html).&#xA;&#xA;Let&#39;s see how to create a table with a few columns having a specific type.&#xA;&#xA;Let&#39;s say we are creating a database system and a schema for storing LLMs and their history. How interesting!&#xA;&#xA;```sql&#xA;CREATE TABLE llm_models(&#xA;    name TEXT,&#xA;    model_type TEXT,&#xA;    description TEXT,&#xA;    no_parameters REAL,&#xA;    f1_score REAL,&#xA;    input_token_limit INTEGER,&#xA;    output_token_limit INTEGER,&#xA;    context_window_tokens INTEGER,&#xA;    release_date TEXT,&#xA;    knowledge_cutoff_date TEXT,&#xA;    weights_file BLOB,&#xA;    license TEXT&#xA;);&#xA;```&#xA;&#xA;As you can see, we have a few columns with specific types. The name, model type, description, licence could be plain text. The number of parameters and f1_score and others could be real, as those exact number of the parameters cannot be discrete values. 
Similarly, the `input_token_limit`, `output_token_limit`, and `context_window_tokens` are discrete counts, hence stored as `INTEGER`. The `release_date` and `knowledge_cutoff_date` are date values, so they could use numeric affinity, but we will store them as text for now. The `weights_file` is a binary file, hence stored as a blob, i.e. a binary large object.&#xA;&#xA;So, if we were to insert the default values into this table, what do you think it will be populated with?&#xA;&#xA;```sql&#xA;INSERT INTO llm_models DEFAULT VALUES;&#xA;```&#xA;&#xA;```sql&#xA;SELECT * FROM llm_models;&#xA;```&#xA;&#xA;```&#xA;sqlite&gt; select * from llm_models;&#xA;+------+------------+-------------+---------------+-------------------+--------------------+-----------------------+--------------+-----------------------+--------------+---------+&#xA;| name | model_type | description | no_parameters | input_token_limit | output_token_limit | context_window_tokens | release_date | knowledge_cutoff_date | weights_file | license |&#xA;+------+------------+-------------+---------------+-------------------+--------------------+-----------------------+--------------+-----------------------+--------------+---------+&#xA;|      |            |             |               |                   |                    |                       |              |                       |              |         |&#xA;+------+------------+-------------+---------------+-------------------+--------------------+-----------------------+--------------+-----------------------+--------------+---------+&#xA;sqlite&gt;&#xA;```&#xA;&#xA;Well, all of it is empty! That&#39;s because every column is `NULL` by default, and we don&#39;t have any constraints on the columns or on the table.&#xA;&#xA;We need to add some constraints on the columns as well as the table. 
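&#xA;&#xA;Without constraints, SQLite will even accept values of the &#34;wrong&#34; type, since column types here act as affinities rather than strict rules. A quick sketch against the table above:&#xA;&#xA;```sql&#xA;-- in a non-STRICT table, the INTEGER column stores a non-coercible TEXT value as-is&#xA;INSERT INTO llm_models(name, input_token_limit) VALUES(&#39;abc&#39;, &#39;not a number&#39;);&#xA;SELECT name, input_token_limit, typeof(input_token_limit) FROM llm_models;&#xA;```&#xA;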
&#xA;&#xA;There are options to add constraints on a column, like&#xA;&#xA;- `NOT NULL`&#xA;- `UNIQUE`&#xA;- `PRIMARY KEY`&#xA;- `CHECK`&#xA;- `DEFAULT`&#xA;- `COLLATE`&#xA;- `FOREIGN KEY`&#xA;&#xA;We&#39;ll explore those separately; for now, understand them as restrictions on how the column values can be stored. For instance, in our example we can&#39;t store a record of an LLM model without a name, so we need to mark it `NOT NULL`, which will prevent a record from being inserted with that column empty, or `NULL` essentially.&#xA;&#xA;It turns out we can also add a `STRICT` keyword at the end of the table definition to make sure the table doesn&#39;t allow `TEXT` values to be stored in an `INTEGER` column.&#xA;&#xA;There are a lot of details, so I am going to leave it right here. We only need to add these for our convenience. SQLite is so flexible that it can be a little intimidating to add these constraints without seeing a benefit right away, but imagine this: you have a quick experiment to run, you want to test how the data looks and whether your data model works, and you just need your CRUD app to be ready. No time wasted deciding which column will be the primary key, which will be `NOT NULL`, or what the default value for each column is; all of that can be skipped with SQLite. That&#39;s a double-edged sword, yes, in some sense, but more powerful for most applications. It assumes you don&#39;t need validations unless you specify them, and believe me, there are tons of validations you can add in SQLite to make it robust.&#xA;</content>
      <type>sqlog</type>
    </item>
    <item>
      <title>SQLite SQL: Create Table Bare Bones Table</title>
      <link>https://meetgor.com/sqlog/sqlite-create-table-bare-bones-table</link>
      <description>What if you only wanted one column. If you create specify a column name in create table, then there will be actually 2 columns created, one that you specified a</description>
      <pubDate>Fri, 05 Sep 2025 00:00:00 UTC</pubDate>
      <content>&#xA;&#xA;What if you only wanted one column. If you create specify a column name in create table, then there will be actually 2 columns created, one that you specified and the other which is hidden as `rowid`, unless you mentioned rowid as the column.&#xA;&#xA;```sql&#xA;CREATE TABLE users(name);&#xA;INSERT INTO users(name) VALUES (&#34;abc&#34;), (&#34;def&#34;);&#xA;&#xA;SELECT * FROM users;&#xA;```&#xA;&#xA;You would think, you have one column `name` in the table `users`. Nope!&#xA;&#xA;```sql&#xA;SELECT rowid, * from users;&#xA;```&#xA;&#xA;```&#xA;+-------+------+&#xA;| rowid | name |&#xA;+-------+------+&#xA;| 1     | abc  |&#xA;| 2     | def  |&#xA;+-------+------+&#xA;```&#xA;&#xA;Surprise!&#xA;Any user defined table in sqlite almost always has a [rowid column](https://www.sqlite.org/rowidtable.html)&#xA;&#xA;That tempts me to create a table with one column, and see what happens, how can we uniquely identify rows in that case.&#xA;&#xA;```sql&#xA;CREATE TABLE users(rowid);&#xA;&#xA;INSERT INTO users DEFAULT VALUES;&#xA;INSERT INTO users DEFAULT VALUES;&#xA;&#xA;.mode table&#xA;&#xA;SELECT * FROM users;&#xA;```&#xA;&#xA;NOTE: DEFAULT VALUES, this means, use the default values for the columns, if you didn&#39;t specify a type for the column, the default values are `NULL`.&#xA;&#xA;&#xA;```&#xA;sqlite&gt; create table users(rowid);&#xA;sqlite&gt; insert into users default values;&#xA;sqlite&gt; .mode table&#xA;sqlite&gt; select * from users;&#xA;+-------+&#xA;| rowid |&#xA;+-------+&#xA;|       |&#xA;+-------+&#xA;sqlite&gt; insert into users default values;&#xA;sqlite&gt; select * from users;&#xA;+-------+&#xA;| rowid |&#xA;+-------+&#xA;|       |&#xA;|       |&#xA;+-------+&#xA;sqlite&gt; select distinct * from users;&#xA;+-------+&#xA;| rowid |&#xA;+-------+&#xA;|       |&#xA;+-------+&#xA;sqlite&gt; select distinct rowid from users;&#xA;+-------+&#xA;| rowid |&#xA;+-------+&#xA;|       |&#xA;+-------+&#xA;```&#xA;Here, its kind of hard to 
distinguish each row; we have kind of messed things up. The rows are indistinguishable, and the column no longer serves as a unique identifier in the table, since you can insert multiple identical values into the `users` table.&#xA;&#xA;```sql&#xA;DROP TABLE users;&#xA;```&#xA;&#xA;Let&#39;s create a table whose `rowid` increments automatically.&#xA;&#xA;```sql&#xA;CREATE TABLE users(rowid INTEGER PRIMARY KEY AUTOINCREMENT);&#xA;INSERT INTO users DEFAULT VALUES;&#xA;INSERT INTO users DEFAULT VALUES;&#xA;SELECT * FROM users;&#xA;```&#xA;&#xA;Now it simply increments each time we add a row; that&#39;s a neat little table.&#xA;&#xA;I think this is as far as we can push SQLite: just one column.&#xA;&#xA;But wait, what&#39;s beneath this?&#xA;&#xA;Like, why `rowid`? Why not `name`? Well, SQLite always creates this underlying rowid for structural reasons. When we name our column `rowid`, we&#39;re just making that hidden rowid visible. But if you create an `INTEGER PRIMARY KEY` column (regardless of its name), that becomes an alias for the underlying `rowid`; not just any primary key does this, specifically an `INTEGER PRIMARY KEY`.&#xA;&#xA;Here we defined the `rowid` column ourselves, but if you instead create a column named something like `id` (not `rowid`) as an `INTEGER PRIMARY KEY`, it will be an alias for that underlying `rowid`.&#xA;&#xA;In the first example, `CREATE TABLE users(rowid)` subverts SQLite&#39;s usual setup: we created a table whose rows cannot be distinctly identified, since the only column is a plain column that merely shadows the hidden `rowid`. By default the `rowid` stays hidden unless the user/developer declares a primary key, in which case they take ownership of managing the primary key (the unique identifier for a row/entry in a table) rather than relying on the `rowid` alone.&#xA;</content>
      <type>sqlog</type>
    </item>
    <item>
      <title>SQLite SQL: Create Table Basic</title>
      <link>https://meetgor.com/sqlog/sqlite-create-table</link>
      <description>We have explored enough sqlite dot commands, now I want to explore the actual nuts and bolts of SQL in SQLite, what better place then to create tables. CREATE T</description>
      <pubDate>Thu, 04 Sep 2025 00:00:00 UTC</pubDate>
      <content>&#xA;&#xA;We have explored enough sqlite dot commands, now I want to explore the actual nuts and bolts of SQL in SQLite, what better place then to create tables.&#xA;&#xA;## CREATE TABLE Statement&#xA;&#xA;The `CREATE TABLE` as the name suggest will create a table in a sqlite database. What is a table?&#xA;&#xA;&gt; Table is a set of relations&#xA;&#xA;The syntax for the create table might look something like this:&#xA;&#xA;```sql&#xA;CREATE [TEMP | TEMPORARY] TABLE [IF NOT EXISTS] [schema-name.]table-name&#xA;( column-def ( , column-def )* [, table-constraint]* )&#xA;[WITHOUT ROWID] [STRICT]&#xA;```&#xA;Check [this diagram](https://sqlite.org/lang_createtable.html) to make things clear for you, might even confuse you if you are absolutely new, so just stick around we will explore it one piece at a time, and as usual follow a exhaustive, all combination exploration of this statemnet.&#xA;&#xA;There are a lot of things, but we&#39;ll see each of them one by one, not all at once.&#xA;&#xA;The important and mandatory things to provide are table name and atleast one column name. Not even the type of the column is needed, just the name is sufficient, since the type is really not a thing in sqlite tables unless you add `STRICT` as the table option.&#xA;&#xA;So, the bare bones table creation statement might look something like this:&#xA;&#xA;```sql&#xA;CREATE TABLE users(name);&#xA;```&#xA;&#xA;Dead! Just the table name and one column name, can you even go with no columns (Except the rowid one that is automagically added?). 
You can actually, but you will have to specify the column as `rowid` (let&#39;s keep that aside for now).&#xA;This is weird, but I was just exploring the bare minimum required for creating a table in sqlite.&#xA;&#xA;Now, if we want to insert into the `users` table, we can simply do this:&#xA;&#xA;```sql&#xA;INSERT INTO users(name) VALUES (&#34;abc&#34;), (&#34;def&#34;), (100), (89.8);&#xA;SELECT * FROM users;&#xA;```&#xA;Simple right?&#xA;&#xA;Note that you can insert anything into the `name` column, not just text: numbers and decimals go in just fine, as the column has no declared type to enforce. That is weird, but that&#39;s SQLite&#39;s duck typing, unless you opt out of it.&#xA;&#xA;</content>
      <type>sqlog</type>
    </item>
    <item>
      <title>SQLite dot commands: header command</title>
      <link>https://meetgor.com/sqlog/sqlite-dot-command-header</link>
      <description>The command is crucial as might effect the way the result set is displayed. The first row would be set to the name of the columns in the output of the relevant</description>
      <pubDate>Wed, 03 Sep 2025 00:00:00 UTC</pubDate>
      <content>&#xA;&#xA;The `.headers` command is crucial as might effect the way the result set is displayed. The first row would be set to the name of the columns in the output of the relevant commands like `.output`, `.once`, or even your simple standard out queries if it is set on.&#xA;&#xA;```&#xA;.headers on&#xA;&#xA;OR&#xA;&#xA;.header on&#xA;```&#xA;&#xA;Some how either of them work. You need to set it to `on` to enable the headers in the output of the result set. And you can turn off with `.headers off`.&#xA;&#xA;Which modes are effected with this command if set on or off?&#xA;&#xA;Well we need to think about how effected means&#xA;1. Only added in the first row&#xA;2. Added in each row&#xA;3. No effect&#xA;&#xA;---&#xA;&#xA;1. Only added in the first row&#xA;    1. ascii&#xA;    2. column&#xA;    3. csv&#xA;    4. html&#xA;    5. list&#xA;    6. quote&#xA;    7. tabs&#xA;    8. tcl&#xA;2. Added in each row&#xA;    1. insert&#xA;3. No effect&#xA;    1. box&#xA;    2. json&#xA;    3. line&#xA;    4. markdown&#xA;    5. table&#xA;&#xA;&#xA;Reference table&#xA;&#xA;```sql&#xA;CREATE TABLE IF NOT EXISTS books(&#xA;    id INTEGER PRIMARY KEY AUTOINCREMENT,&#xA;    title TEXT NOT NULL,&#xA;    author TEXT NOT NULL,&#xA;    pages INTEGER NOT NULL,&#xA;    release_date TEXT NOT NULL,&#xA;    price REAL NOT NULL&#xA;);&#xA;&#xA;INSERT INTO books(title, author, pages, release_date, price)&#xA;VALUES&#xA;(&#39;The Hobbit&#39;, &#39;J.R.R. Tolkien&#39;, 310, &#39;1937-09-21&#39;, 39.99),&#xA;(&#39;The Fellowship of the Ring&#39;, &#39;J.R.R. Tolkien&#39;, 423, &#39;1954-07-29&#39;, 49.99),&#xA;(&#39;The Two Towers&#39;, &#39;J.R.R. Tolkien&#39;, 352, &#39;1954-11-11&#39;, 49.99), &#xA;(&#39;The Return of the King&#39;, &#39;J.R.R. 
Tolkien&#39;, 416, &#39;1955-10-20&#39;, 49.99);&#xA;&#xA;SELECT * FROM books;&#xA;```&#xA;&#xA;Below are the outputs for each mode.&#xA;&#xA;### Added on the first row&#xA;&#xA;In the modes below, the headers setting, when on, adds the column names as a header in the first row.&#xA;&#xA;1. ASCII&#xA;&#xA;```sqlite&#xA;.mode ascii&#xA;.headers on&#xA;select * from books;&#xA;.headers off&#xA;select * from books;&#xA;```&#xA;&#xA;Output:&#xA;```&#xA;sqlite&gt; .mode ascii&#xA;sqlite&gt; .headers on&#xA;sqlite&gt; select * from books;&#xA;idtitleauthorpagesrelease_dateprice1The HobbitJ.R.R. Tolkien3101937-09-2139.992The Fellowship of the RingJ.R.R. Tolkien4231954-07-2949.993The Two TowersJ.R.R. Tolkien3521954-11-1149.994The Return of the KingJ.R.R. Tolkien4161955-10-2049.99sqlite&gt;&#xA;sqlite&gt; .headers off&#xA;sqlite&gt; select * from books;&#xA;1The HobbitJ.R.R. Tolkien3101937-09-2139.992The Fellowship of the RingJ.R.R. Tolkien4231954-07-2949.993The Two TowersJ.R.R. Tolkien3521954-11-1149.994The Return of the KingJ.R.R. Tolkien4161955-10-2049.99sqlite&gt;&#xA;sqlite&gt;&#xA;```&#xA;&#xA;2. Column&#xA;&#xA;```&#xA;.mode column&#xA;.headers on&#xA;select * from books;&#xA;.headers off&#xA;select * from books;&#xA;```&#xA;&#xA;Output&#xA;&#xA;```&#xA;sqlite&gt; .mode column&#xA;sqlite&gt; .headers on&#xA;sqlite&gt; select * from books;&#xA;id  title                       author          pages  release_date  price&#xA;--  --------------------------  --------------  -----  ------------  -----&#xA;1   The Hobbit                  J.R.R. Tolkien  310    1937-09-21    39.99&#xA;2   The Fellowship of the Ring  J.R.R. Tolkien  423    1954-07-29    49.99&#xA;3   The Two Towers              J.R.R. Tolkien  352    1954-11-11    49.99&#xA;4   The Return of the King      J.R.R. Tolkien  416    1955-10-20    49.99&#xA;sqlite&gt; .headers off&#xA;sqlite&gt; select * from books;&#xA;1   The Hobbit                  J.R.R. 
Tolkien  310    1937-09-21    39.99&#xA;2   The Fellowship of the Ring  J.R.R. Tolkien  423    1954-07-29    49.99&#xA;3   The Two Towers              J.R.R. Tolkien  352    1954-11-11    49.99&#xA;4   The Return of the King      J.R.R. Tolkien  416    1955-10-20    49.99&#xA;sqlite&gt;&#xA;```&#xA;&#xA;3. CSV&#xA;&#xA;```sqlite&#xA;.mode csv&#xA;.headers on&#xA;select * from books;&#xA;.headers off&#xA;select * from books;&#xA;```&#xA;&#xA;Output:&#xA;&#xA;```&#xA;sqlite&gt; .mode csv&#xA;sqlite&gt; .headers on&#xA;sqlite&gt; select * from books;&#xA;id,title,author,pages,release_date,price&#xA;1,&#34;The Hobbit&#34;,&#34;J.R.R. Tolkien&#34;,310,1937-09-21,39.99&#xA;2,&#34;The Fellowship of the Ring&#34;,&#34;J.R.R. Tolkien&#34;,423,1954-07-29,49.99&#xA;3,&#34;The Two Towers&#34;,&#34;J.R.R. Tolkien&#34;,352,1954-11-11,49.99&#xA;4,&#34;The Return of the King&#34;,&#34;J.R.R. Tolkien&#34;,416,1955-10-20,49.99&#xA;sqlite&gt; .headers off&#xA;sqlite&gt; select * from books;&#xA;1,&#34;The Hobbit&#34;,&#34;J.R.R. Tolkien&#34;,310,1937-09-21,39.99&#xA;2,&#34;The Fellowship of the Ring&#34;,&#34;J.R.R. Tolkien&#34;,423,1954-07-29,49.99&#xA;3,&#34;The Two Towers&#34;,&#34;J.R.R. Tolkien&#34;,352,1954-11-11,49.99&#xA;4,&#34;The Return of the King&#34;,&#34;J.R.R. Tolkien&#34;,416,1955-10-20,49.99&#xA;sqlite&gt;&#xA;```&#xA;&#xA;4. HTML&#xA;&#xA;```sqlite&#xA;.mode html&#xA;.headers on&#xA;select * from books;&#xA;.headers off&#xA;select * from books;&#xA;```&#xA;&#xA;Output:&#xA;&#xA;```&#xA;sqlite&gt; .mode html&#xA;sqlite&gt; .headers on&#xA;sqlite&gt; select * from books;&#xA;&lt;TR&gt;&lt;TH&gt;id&lt;/TH&gt;&#xA;&lt;TH&gt;title&lt;/TH&gt;&#xA;&lt;TH&gt;author&lt;/TH&gt;&#xA;&lt;TH&gt;pages&lt;/TH&gt;&#xA;&lt;TH&gt;release_date&lt;/TH&gt;&#xA;&lt;TH&gt;price&lt;/TH&gt;&#xA;&lt;/TR&gt;&#xA;&lt;TR&gt;&lt;TD&gt;1&lt;/TD&gt;&#xA;&lt;TD&gt;The Hobbit&lt;/TD&gt;&#xA;&lt;TD&gt;J.R.R. 
Tolkien&lt;/TD&gt;&#xA;&lt;TD&gt;310&lt;/TD&gt;&#xA;&lt;TD&gt;1937-09-21&lt;/TD&gt;&#xA;&lt;TD&gt;39.99&lt;/TD&gt;&#xA;&lt;/TR&gt;&#xA;&lt;TR&gt;&lt;TD&gt;2&lt;/TD&gt;&#xA;&lt;TD&gt;The Fellowship of the Ring&lt;/TD&gt;&#xA;&lt;TD&gt;J.R.R. Tolkien&lt;/TD&gt;&#xA;&lt;TD&gt;423&lt;/TD&gt;&#xA;&lt;TD&gt;1954-07-29&lt;/TD&gt;&#xA;&lt;TD&gt;49.99&lt;/TD&gt;&#xA;&lt;/TR&gt;&#xA;&lt;TR&gt;&lt;TD&gt;3&lt;/TD&gt;&#xA;&lt;TD&gt;The Two Towers&lt;/TD&gt;&#xA;&lt;TD&gt;J.R.R. Tolkien&lt;/TD&gt;&#xA;&lt;TD&gt;352&lt;/TD&gt;&#xA;&lt;TD&gt;1954-11-11&lt;/TD&gt;&#xA;&lt;TD&gt;49.99&lt;/TD&gt;&#xA;&lt;/TR&gt;&#xA;&lt;TR&gt;&lt;TD&gt;4&lt;/TD&gt;&#xA;&lt;TD&gt;The Return of the King&lt;/TD&gt;&#xA;&lt;TD&gt;J.R.R. Tolkien&lt;/TD&gt;&#xA;&lt;TD&gt;416&lt;/TD&gt;&#xA;&lt;TD&gt;1955-10-20&lt;/TD&gt;&#xA;&lt;TD&gt;49.99&lt;/TD&gt;&#xA;&lt;/TR&gt;&#xA;sqlite&gt; .headers off&#xA;sqlite&gt; select * from books;&#xA;&lt;TR&gt;&lt;TD&gt;1&lt;/TD&gt;&#xA;&lt;TD&gt;The Hobbit&lt;/TD&gt;&#xA;&lt;TD&gt;J.R.R. Tolkien&lt;/TD&gt;&#xA;&lt;TD&gt;310&lt;/TD&gt;&#xA;&lt;TD&gt;1937-09-21&lt;/TD&gt;&#xA;&lt;TD&gt;39.99&lt;/TD&gt;&#xA;&lt;/TR&gt;&#xA;&lt;TR&gt;&lt;TD&gt;2&lt;/TD&gt;&#xA;&lt;TD&gt;The Fellowship of the Ring&lt;/TD&gt;&#xA;&lt;TD&gt;J.R.R. Tolkien&lt;/TD&gt;&#xA;&lt;TD&gt;423&lt;/TD&gt;&#xA;&lt;TD&gt;1954-07-29&lt;/TD&gt;&#xA;&lt;TD&gt;49.99&lt;/TD&gt;&#xA;&lt;/TR&gt;&#xA;&lt;TR&gt;&lt;TD&gt;3&lt;/TD&gt;&#xA;&lt;TD&gt;The Two Towers&lt;/TD&gt;&#xA;&lt;TD&gt;J.R.R. Tolkien&lt;/TD&gt;&#xA;&lt;TD&gt;352&lt;/TD&gt;&#xA;&lt;TD&gt;1954-11-11&lt;/TD&gt;&#xA;&lt;TD&gt;49.99&lt;/TD&gt;&#xA;&lt;/TR&gt;&#xA;&lt;TR&gt;&lt;TD&gt;4&lt;/TD&gt;&#xA;&lt;TD&gt;The Return of the King&lt;/TD&gt;&#xA;&lt;TD&gt;J.R.R. Tolkien&lt;/TD&gt;&#xA;&lt;TD&gt;416&lt;/TD&gt;&#xA;&lt;TD&gt;1955-10-20&lt;/TD&gt;&#xA;&lt;TD&gt;49.99&lt;/TD&gt;&#xA;&lt;/TR&gt;&#xA;sqlite&gt;&#xA;```&#xA;&#xA;5. 
List&#xA;&#xA;```sqlite&#xA;.mode list&#xA;.headers on&#xA;select * from books;&#xA;.headers off&#xA;select * from books;&#xA;```&#xA;&#xA;Output:&#xA;&#xA;```&#xA;sqlite&gt; .mode list&#xA;sqlite&gt; .headers on&#xA;sqlite&gt; select * from books;&#xA;id|title|author|pages|release_date|price&#xA;1|The Hobbit|J.R.R. Tolkien|310|1937-09-21|39.99&#xA;2|The Fellowship of the Ring|J.R.R. Tolkien|423|1954-07-29|49.99&#xA;3|The Two Towers|J.R.R. Tolkien|352|1954-11-11|49.99&#xA;4|The Return of the King|J.R.R. Tolkien|416|1955-10-20|49.99&#xA;sqlite&gt; .headers off&#xA;sqlite&gt; select * from books;&#xA;1|The Hobbit|J.R.R. Tolkien|310|1937-09-21|39.99&#xA;2|The Fellowship of the Ring|J.R.R. Tolkien|423|1954-07-29|49.99&#xA;3|The Two Towers|J.R.R. Tolkien|352|1954-11-11|49.99&#xA;4|The Return of the King|J.R.R. Tolkien|416|1955-10-20|49.99&#xA;sqlite&gt;&#xA;```&#xA;&#xA;6. Quote&#xA;&#xA;```sqlite&#xA;.mode quote&#xA;.headers on&#xA;select * from books;&#xA;.headers off&#xA;select * from books;&#xA;```&#xA;&#xA;Output:&#xA;&#xA;```&#xA;sqlite&gt; .mode quote&#xA;sqlite&gt; .headers on&#xA;sqlite&gt; select * from books;&#xA;&#39;id&#39;,&#39;title&#39;,&#39;author&#39;,&#39;pages&#39;,&#39;release_date&#39;,&#39;price&#39;&#xA;1,&#39;The Hobbit&#39;,&#39;J.R.R. Tolkien&#39;,310,&#39;1937-09-21&#39;,39.99000000000000198&#xA;2,&#39;The Fellowship of the Ring&#39;,&#39;J.R.R. Tolkien&#39;,423,&#39;1954-07-29&#39;,49.99000000000000198&#xA;3,&#39;The Two Towers&#39;,&#39;J.R.R. Tolkien&#39;,352,&#39;1954-11-11&#39;,49.99000000000000198&#xA;4,&#39;The Return of the King&#39;,&#39;J.R.R. Tolkien&#39;,416,&#39;1955-10-20&#39;,49.99000000000000198&#xA;sqlite&gt; .headers off&#xA;sqlite&gt; select * from books;&#xA;1,&#39;The Hobbit&#39;,&#39;J.R.R. Tolkien&#39;,310,&#39;1937-09-21&#39;,39.99000000000000198&#xA;2,&#39;The Fellowship of the Ring&#39;,&#39;J.R.R. Tolkien&#39;,423,&#39;1954-07-29&#39;,49.99000000000000198&#xA;3,&#39;The Two Towers&#39;,&#39;J.R.R. 
Tolkien&#39;,352,&#39;1954-11-11&#39;,49.99000000000000198&#xA;4,&#39;The Return of the King&#39;,&#39;J.R.R. Tolkien&#39;,416,&#39;1955-10-20&#39;,49.99000000000000198&#xA;sqlite&gt;&#xA;&#xA;```&#xA;&#xA;7. Tabs&#xA;&#xA;```sqlite&#xA;.mode tabs&#xA;.headers on&#xA;select * from books;&#xA;.headers off&#xA;select * from books;&#xA;```&#xA;&#xA;Output:&#xA;&#xA;```&#xA;sqlite&gt; .mode tabs&#xA;sqlite&gt; .headers on&#xA;sqlite&gt; select * from books;&#xA;id      title   author  pages   release_date    price&#xA;1       The Hobbit      J.R.R. Tolkien  310     1937-09-21      39.99&#xA;2       The Fellowship of the Ring      J.R.R. Tolkien  423     1954-07-29      49.99&#xA;3       The Two Towers  J.R.R. Tolkien  352     1954-11-11      49.99&#xA;4       The Return of the King  J.R.R. Tolkien  416     1955-10-20      49.99&#xA;sqlite&gt; .headers off&#xA;sqlite&gt; select * from books;&#xA;1       The Hobbit      J.R.R. Tolkien  310     1937-09-21      39.99&#xA;2       The Fellowship of the Ring      J.R.R. Tolkien  423     1954-07-29      49.99&#xA;3       The Two Towers  J.R.R. Tolkien  352     1954-11-11      49.99&#xA;4       The Return of the King  J.R.R. Tolkien  416     1955-10-20      49.99&#xA;sqlite&gt;&#xA;```&#xA;&#xA;8. TCL&#xA;&#xA;```sqlite&#xA;.mode tcl&#xA;.headers on&#xA;select * from books;&#xA;.headers off&#xA;select * from books;&#xA;```&#xA;&#xA;Output:&#xA;&#xA;```&#xA;sqlite&gt; .mode tcl&#xA;sqlite&gt; .headers on&#xA;sqlite&gt; select * from books;&#xA;&#34;id&#34; &#34;title&#34; &#34;author&#34; &#34;pages&#34; &#34;release_date&#34; &#34;price&#34;&#xA;&#34;1&#34; &#34;The Hobbit&#34; &#34;J.R.R. Tolkien&#34; &#34;310&#34; &#34;1937-09-21&#34; &#34;39.99&#34;&#xA;&#34;2&#34; &#34;The Fellowship of the Ring&#34; &#34;J.R.R. Tolkien&#34; &#34;423&#34; &#34;1954-07-29&#34; &#34;49.99&#34;&#xA;&#34;3&#34; &#34;The Two Towers&#34; &#34;J.R.R. 
Tolkien&#34; &#34;352&#34; &#34;1954-11-11&#34; &#34;49.99&#34;&#xA;&#34;4&#34; &#34;The Return of the King&#34; &#34;J.R.R. Tolkien&#34; &#34;416&#34; &#34;1955-10-20&#34; &#34;49.99&#34;&#xA;sqlite&gt; .headers off&#xA;sqlite&gt; select * from books;&#xA;&#34;1&#34; &#34;The Hobbit&#34; &#34;J.R.R. Tolkien&#34; &#34;310&#34; &#34;1937-09-21&#34; &#34;39.99&#34;&#xA;&#34;2&#34; &#34;The Fellowship of the Ring&#34; &#34;J.R.R. Tolkien&#34; &#34;423&#34; &#34;1954-07-29&#34; &#34;49.99&#34;&#xA;&#34;3&#34; &#34;The Two Towers&#34; &#34;J.R.R. Tolkien&#34; &#34;352&#34; &#34;1954-11-11&#34; &#34;49.99&#34;&#xA;&#34;4&#34; &#34;The Return of the King&#34; &#34;J.R.R. Tolkien&#34; &#34;416&#34; &#34;1955-10-20&#34; &#34;49.99&#34;&#xA;sqlite&gt;&#xA;```&#xA;&#xA;&#xA;### Added in each row&#xA;&#xA;In this type, the column names become part of each row of the output. With `.headers` on, the generated INSERT statements include the column list; with it off, they don&#39;t.&#xA;&#xA;1. Insert&#xA;&#xA;```sqlite&#xA;.mode insert&#xA;.headers on&#xA;select * from books;&#xA;.headers off&#xA;select * from books;&#xA;```&#xA;&#xA;Output:&#xA;&#xA;```&#xA;sqlite&gt; .mode insert&#xA;sqlite&gt; .headers on&#xA;sqlite&gt; select * from books;&#xA;INSERT INTO &#34;table&#34;(id,title,author,pages,release_date,price) VALUES(1,&#39;The Hobbit&#39;,&#39;J.R.R. Tolkien&#39;,310,&#39;1937-09-21&#39;,39.99000000000000198);&#xA;INSERT INTO &#34;table&#34;(id,title,author,pages,release_date,price) VALUES(2,&#39;The Fellowship of the Ring&#39;,&#39;J.R.R. Tolkien&#39;,423,&#39;1954-07-29&#39;,49.99000000000000198);&#xA;INSERT INTO &#34;table&#34;(id,title,author,pages,release_date,price) VALUES(3,&#39;The Two Towers&#39;,&#39;J.R.R. Tolkien&#39;,352,&#39;1954-11-11&#39;,49.99000000000000198);&#xA;INSERT INTO &#34;table&#34;(id,title,author,pages,release_date,price) VALUES(4,&#39;The Return of the King&#39;,&#39;J.R.R. 
Tolkien&#39;,416,&#39;1955-10-20&#39;,49.99000000000000198);&#xA;sqlite&gt; .headers off&#xA;sqlite&gt; select * from books;&#xA;INSERT INTO &#34;table&#34; VALUES(1,&#39;The Hobbit&#39;,&#39;J.R.R. Tolkien&#39;,310,&#39;1937-09-21&#39;,39.99000000000000198);&#xA;INSERT INTO &#34;table&#34; VALUES(2,&#39;The Fellowship of the Ring&#39;,&#39;J.R.R. Tolkien&#39;,423,&#39;1954-07-29&#39;,49.99000000000000198);&#xA;INSERT INTO &#34;table&#34; VALUES(3,&#39;The Two Towers&#39;,&#39;J.R.R. Tolkien&#39;,352,&#39;1954-11-11&#39;,49.99000000000000198);&#xA;INSERT INTO &#34;table&#34; VALUES(4,&#39;The Return of the King&#39;,&#39;J.R.R. Tolkien&#39;,416,&#39;1955-10-20&#39;,49.99000000000000198);&#xA;sqlite&gt;&#xA;&#xA;```&#xA;&#xA;&#xA;### No effect&#xA;&#xA;These modes show the column names irrespective of the `.headers` flag, because for these modes the headers are an integral part of the structure of the output.&#xA;&#xA;1. Box&#xA;&#xA;```sqlite&#xA;.mode box&#xA;.headers on&#xA;select * from books;&#xA;.headers off&#xA;select * from books;&#xA;```&#xA;&#xA;Output:&#xA;&#xA;```&#xA;sqlite&gt; .mode box&#xA;sqlite&gt; .headers on&#xA;sqlite&gt; select * from books;&#xA;┌────┬────────────────────────────┬────────────────┬───────┬──────────────┬───────┐&#xA;│ id │           title            │     author     │ pages │ release_date │ price │&#xA;├────┼────────────────────────────┼────────────────┼───────┼──────────────┼───────┤&#xA;│ 1  │ The Hobbit                 │ J.R.R. Tolkien │ 310   │ 1937-09-21   │ 39.99 │&#xA;│ 2  │ The Fellowship of the Ring │ J.R.R. Tolkien │ 423   │ 1954-07-29   │ 49.99 │&#xA;│ 3  │ The Two Towers             │ J.R.R. Tolkien │ 352   │ 1954-11-11   │ 49.99 │&#xA;│ 4  │ The Return of the King     │ J.R.R. 
Tolkien │ 416   │ 1955-10-20   │ 49.99 │&#xA;└────┴────────────────────────────┴────────────────┴───────┴──────────────┴───────┘&#xA;sqlite&gt; .headers off&#xA;sqlite&gt; select * from books;&#xA;┌────┬────────────────────────────┬────────────────┬───────┬──────────────┬───────┐&#xA;│ id │           title            │     author     │ pages │ release_date │ price │&#xA;├────┼────────────────────────────┼────────────────┼───────┼──────────────┼───────┤&#xA;│ 1  │ The Hobbit                 │ J.R.R. Tolkien │ 310   │ 1937-09-21   │ 39.99 │&#xA;│ 2  │ The Fellowship of the Ring │ J.R.R. Tolkien │ 423   │ 1954-07-29   │ 49.99 │&#xA;│ 3  │ The Two Towers             │ J.R.R. Tolkien │ 352   │ 1954-11-11   │ 49.99 │&#xA;│ 4  │ The Return of the King     │ J.R.R. Tolkien │ 416   │ 1955-10-20   │ 49.99 │&#xA;└────┴────────────────────────────┴────────────────┴───────┴──────────────┴───────┘&#xA;sqlite&gt;&#xA;```&#xA;&#xA;2. JSON&#xA;&#xA;```sqlite&#xA;.mode json&#xA;.headers on&#xA;select * from books;&#xA;.headers off&#xA;select * from books;&#xA;```&#xA;&#xA;Output:&#xA;&#xA;```&#xA;sqlite&gt; .mode json&#xA;sqlite&gt; .headers on&#xA;sqlite&gt; select * from books;&#xA;[{&#34;id&#34;:1,&#34;title&#34;:&#34;The Hobbit&#34;,&#34;author&#34;:&#34;J.R.R. Tolkien&#34;,&#34;pages&#34;:310,&#34;release_date&#34;:&#34;1937-09-21&#34;,&#34;price&#34;:39.99000000000000198},&#xA;{&#34;id&#34;:2,&#34;title&#34;:&#34;The Fellowship of the Ring&#34;,&#34;author&#34;:&#34;J.R.R. Tolkien&#34;,&#34;pages&#34;:423,&#34;release_date&#34;:&#34;1954-07-29&#34;,&#34;price&#34;:49.99000000000000198},&#xA;{&#34;id&#34;:3,&#34;title&#34;:&#34;The Two Towers&#34;,&#34;author&#34;:&#34;J.R.R. Tolkien&#34;,&#34;pages&#34;:352,&#34;release_date&#34;:&#34;1954-11-11&#34;,&#34;price&#34;:49.99000000000000198},&#xA;{&#34;id&#34;:4,&#34;title&#34;:&#34;The Return of the King&#34;,&#34;author&#34;:&#34;J.R.R. 
Tolkien&#34;,&#34;pages&#34;:416,&#34;release_date&#34;:&#34;1955-10-20&#34;,&#34;price&#34;:49.99000000000000198}]&#xA;sqlite&gt; .headers off&#xA;sqlite&gt; select * from books;&#xA;[{&#34;id&#34;:1,&#34;title&#34;:&#34;The Hobbit&#34;,&#34;author&#34;:&#34;J.R.R. Tolkien&#34;,&#34;pages&#34;:310,&#34;release_date&#34;:&#34;1937-09-21&#34;,&#34;price&#34;:39.99000000000000198},&#xA;{&#34;id&#34;:2,&#34;title&#34;:&#34;The Fellowship of the Ring&#34;,&#34;author&#34;:&#34;J.R.R. Tolkien&#34;,&#34;pages&#34;:423,&#34;release_date&#34;:&#34;1954-07-29&#34;,&#34;price&#34;:49.99000000000000198},&#xA;{&#34;id&#34;:3,&#34;title&#34;:&#34;The Two Towers&#34;,&#34;author&#34;:&#34;J.R.R. Tolkien&#34;,&#34;pages&#34;:352,&#34;release_date&#34;:&#34;1954-11-11&#34;,&#34;price&#34;:49.99000000000000198},&#xA;{&#34;id&#34;:4,&#34;title&#34;:&#34;The Return of the King&#34;,&#34;author&#34;:&#34;J.R.R. Tolkien&#34;,&#34;pages&#34;:416,&#34;release_date&#34;:&#34;1955-10-20&#34;,&#34;price&#34;:49.99000000000000198}]&#xA;sqlite&gt;&#xA;```&#xA;&#xA;3. Line&#xA;&#xA;```sqlite&#xA;.mode line&#xA;.headers on&#xA;select * from books;&#xA;.headers off&#xA;select * from books;&#xA;```&#xA;&#xA;Output:&#xA;```&#xA;sqlite&gt; .mode line&#xA;sqlite&gt; .headers on&#xA;sqlite&gt; select * from books;&#xA;          id = 1&#xA;       title = The Hobbit&#xA;      author = J.R.R. Tolkien&#xA;       pages = 310&#xA;release_date = 1937-09-21&#xA;       price = 39.99&#xA;&#xA;          id = 2&#xA;       title = The Fellowship of the Ring&#xA;      author = J.R.R. Tolkien&#xA;       pages = 423&#xA;release_date = 1954-07-29&#xA;       price = 49.99&#xA;&#xA;          id = 3&#xA;       title = The Two Towers&#xA;      author = J.R.R. Tolkien&#xA;       pages = 352&#xA;release_date = 1954-11-11&#xA;       price = 49.99&#xA;&#xA;          id = 4&#xA;       title = The Return of the King&#xA;      author = J.R.R. 
Tolkien&#xA;       pages = 416&#xA;release_date = 1955-10-20&#xA;       price = 49.99&#xA;&#xA;&#xA;sqlite&gt; .headers off&#xA;sqlite&gt; select * from books;&#xA;          id = 1&#xA;       title = The Hobbit&#xA;      author = J.R.R. Tolkien&#xA;       pages = 310&#xA;release_date = 1937-09-21&#xA;       price = 39.99&#xA;&#xA;          id = 2&#xA;       title = The Fellowship of the Ring&#xA;      author = J.R.R. Tolkien&#xA;       pages = 423&#xA;release_date = 1954-07-29&#xA;       price = 49.99&#xA;&#xA;          id = 3&#xA;       title = The Two Towers&#xA;      author = J.R.R. Tolkien&#xA;       pages = 352&#xA;release_date = 1954-11-11&#xA;       price = 49.99&#xA;&#xA;          id = 4&#xA;       title = The Return of the King&#xA;      author = J.R.R. Tolkien&#xA;       pages = 416&#xA;release_date = 1955-10-20&#xA;       price = 49.99&#xA;sqlite&gt;&#xA;```&#xA;&#xA;4. Markdown&#xA;&#xA;```sqlite&#xA;.mode markdown&#xA;.headers on&#xA;select * from books;&#xA;.headers off&#xA;select * from books;&#xA;```&#xA;&#xA;Output:&#xA;&#xA;```&#xA;sqlite&gt; .mode markdown&#xA;sqlite&gt; .headers on&#xA;sqlite&gt; select * from books;&#xA;| id |           title            |     author     | pages | release_date | price |&#xA;|----|----------------------------|----------------|-------|--------------|-------|&#xA;| 1  | The Hobbit                 | J.R.R. Tolkien | 310   | 1937-09-21   | 39.99 |&#xA;| 2  | The Fellowship of the Ring | J.R.R. Tolkien | 423   | 1954-07-29   | 49.99 |&#xA;| 3  | The Two Towers             | J.R.R. Tolkien | 352   | 1954-11-11   | 49.99 |&#xA;| 4  | The Return of the King     | J.R.R. Tolkien | 416   | 1955-10-20   | 49.99 |&#xA;sqlite&gt; .headers off&#xA;sqlite&gt; select * from books;&#xA;| id |           title            |     author     | pages | release_date | price |&#xA;|----|----------------------------|----------------|-------|--------------|-------|&#xA;| 1  | The Hobbit                 | J.R.R. 
Tolkien | 310   | 1937-09-21   | 39.99 |&#xA;| 2  | The Fellowship of the Ring | J.R.R. Tolkien | 423   | 1954-07-29   | 49.99 |&#xA;| 3  | The Two Towers             | J.R.R. Tolkien | 352   | 1954-11-11   | 49.99 |&#xA;| 4  | The Return of the King     | J.R.R. Tolkien | 416   | 1955-10-20   | 49.99 |&#xA;sqlite&gt;&#xA;```&#xA;&#xA;5. Table&#xA;&#xA;```sqlite&#xA;.mode table&#xA;.headers on&#xA;select * from books;&#xA;.headers off&#xA;select * from books;&#xA;```&#xA;&#xA;Output:&#xA;&#xA;```&#xA;sqlite&gt; .mode table&#xA;sqlite&gt; .headers on&#xA;sqlite&gt; select * from books;&#xA;+----+----------------------------+----------------+-------+--------------+-------+&#xA;| id |           title            |     author     | pages | release_date | price |&#xA;+----+----------------------------+----------------+-------+--------------+-------+&#xA;| 1  | The Hobbit                 | J.R.R. Tolkien | 310   | 1937-09-21   | 39.99 |&#xA;| 2  | The Fellowship of the Ring | J.R.R. Tolkien | 423   | 1954-07-29   | 49.99 |&#xA;| 3  | The Two Towers             | J.R.R. Tolkien | 352   | 1954-11-11   | 49.99 |&#xA;| 4  | The Return of the King     | J.R.R. Tolkien | 416   | 1955-10-20   | 49.99 |&#xA;+----+----------------------------+----------------+-------+--------------+-------+&#xA;sqlite&gt; .headers off&#xA;sqlite&gt; select * from books;&#xA;+----+----------------------------+----------------+-------+--------------+-------+&#xA;| id |           title            |     author     | pages | release_date | price |&#xA;+----+----------------------------+----------------+-------+--------------+-------+&#xA;| 1  | The Hobbit                 | J.R.R. Tolkien | 310   | 1937-09-21   | 39.99 |&#xA;| 2  | The Fellowship of the Ring | J.R.R. Tolkien | 423   | 1954-07-29   | 49.99 |&#xA;| 3  | The Two Towers             | J.R.R. Tolkien | 352   | 1954-11-11   | 49.99 |&#xA;| 4  | The Return of the King     | J.R.R. 
Tolkien | 416   | 1955-10-20   | 49.99 |&#xA;+----+----------------------------+----------------+-------+--------------+-------+&#xA;sqlite&gt;&#xA;&#xA;```&#xA;&#xA;</content>
      <type>sqlog</type>
    </item>
    <item>
      <title>SQLite dot commands: Output mode separator command</title>
      <link>https://meetgor.com/sqlog/sqlite-mode-dot-command-separators</link>
      <description>Using the separator for the output If you wanted to use a specific separator for columns and rows while displaying the result set / table, you can use the dot co</description>
      <pubDate>Tue, 02 Sep 2025 00:00:00 UTC</pubDate>
      <content>&#xA;&#xA;## Using the separator for the output&#xA;&#xA;If you wanted to use a specific separator for columns and rows while displaying the result set / table, you can use the `.separator` dot command, which takes two arguments: the first is the column separator and the second is the row separator.&#xA;&#xA;So, if we use `.separator &#34;|&#34; &#34;---&#34;` then it will split the columns with `|` and end each row with `---`. &#xA;&#xA;```&#xA;1|The Hobbit|J.R.R. Tolkien|310|1937-09-21|39.99---2|The Fellowship of the Ring|J.R.R. Tolkien|423|1954-07-29|49.99---3|The Two Towers|J.R.R. Tolkien|352|1954-11-11|49.99---4|The Return of the King|J.R.R. Tolkien|416|1955-10-20|49.99---&#xA;```&#xA;&#xA;The output looks weird, but it was just an example.&#xA;&#xA;The row separator defaults to a `\n` character (`\r\n` on Windows) for the list and most other modes. If you still want the newline along with your custom row separator, you need to include it in the string, like below:&#xA;&#xA;```&#xA;.separator &#34;|&#34; &#34;\n---&#34;&#xA;```&#xA;&#xA;```&#xA;sqlite&gt; .separator &#34;|&#34; &#34;\n---&#34;&#xA;sqlite&gt; select * from books;&#xA;1|The Hobbit|J.R.R. Tolkien|310|1937-09-21|39.99&#xA;---2|The Fellowship of the Ring|J.R.R. Tolkien|423|1954-07-29|49.99&#xA;---3|The Two Towers|J.R.R. Tolkien|352|1954-11-11|49.99&#xA;---4|The Return of the King|J.R.R. 
Tolkien|416|1955-10-20|49.99&#xA;---sqlite&gt;&#xA;```&#xA;&#xA;The `\n---` is added at the end of each row, so the first row won&#39;t be preceded by it. If you are looking at the above output and are confused, that is the expected result.&#xA;&#xA;You can also use `\r` or `\t` for the row separator, and for the column separator you can use `\n` or `\t` or `\r`.&#xA;&#xA;The modes that will respect the separator setting:&#xA;- list&#xA;- quote&#xA;&#xA;Sadly, only two. Since changing the row separator requires passing the column separator first, you can&#39;t change the row separator in isolation: you have to restate the column separator and then override the row separator.&#xA;&#xA;So, for the mode `tabs`, if you want to keep the column separator as is, but change the row separator, you can do something like this:&#xA;```&#xA;.mode tabs&#xA;.separator &#34;\t&#34; &#34;\n\n&#34;&#xA;```&#xA;&#xA;This will keep the column separator as is and change the row separator to `\n\n`.&#xA;&#xA;```&#xA;sqlite&gt; .mode tabs&#xA;sqlite&gt; .separator &#34;\t&#34; &#34;\n\n&#34;&#xA;sqlite&gt; select * from books;&#xA;id      title   author  pages   release_date    price&#xA;&#xA;1       The Hobbit      J.R.R. Tolkien  310     1937-09-21      39.99&#xA;&#xA;2       The Fellowship of the Ring      J.R.R. Tolkien  423     1954-07-29      49.99&#xA;&#xA;3       The Two Towers  J.R.R. Tolkien  352     1954-11-11      49.99&#xA;&#xA;4       The Return of the King  J.R.R. Tolkien  416     1955-10-20      49.99&#xA;&#xA;sqlite&gt;&#xA;```&#xA;&#xA;Similarly, for csv you would keep the column separator as `,` and then pass the row separator.&#xA;&#xA;Nice tricks to know.&#xA;&#xA;References:&#xA;- [SQLite dot commands: Changing output formats](https://www.sqlite.org/cli.html#changing_output_formats)&#xA;</content>
      <type>sqlog</type>
    </item>
    <item>
      <title>SQLite dot commands: Output mode</title>
      <link>https://meetgor.com/sqlog/sqlite-mode-dot-command</link>
      <description>Output mode dot command The SQLite shell is a great way to test out and run a bunch of queries, but the output might be squished or cluttered. You might need to v</description>
      <pubDate>Mon, 01 Sep 2025 00:00:00 UTC</pubDate>
      <content>&#xA;&#xA;## Output mode dot command&#xA;&#xA;The SQLite shell is a great way to test out and run a bunch of queries, but the output might be squished or cluttered. You might need to view the output in a specific way, and the creators of SQLite were already hearing your screams.&#xA;&#xA;They created not 5, not 10, but 14 different output modes, and to extend it even further, you can use any delimiter/separator as well.&#xA;&#xA;The `.mode` command will give you the currently configured output mode, along with the options that are defaults for certain modes.&#xA;&#xA;The default mode is `list`, with the `escape` option set to the ascii character set.&#xA;&#xA;```&#xA;current output mode: list --escape ascii&#xA;```&#xA;&#xA;The list mode is defined in the [docs](https://www.sqlite.org/cli.html#changing_output_formats) as:&#xA;&#xA;&gt; In list mode, each row of a query result is written on one line of output and each column within that row is separated by a specific separator string. The default separator is a pipe symbol (&#34;|&#34;). List mode is especially useful when you are going to send the output of a query to another program (such as AWK) for additional processing.&#xA;&#xA;So, in short, the `list` output mode prints each row on a line with a `|` as the separator.&#xA;&#xA;You can get all the information you need with the `.help mode` command:&#xA;&#xA;```&#xA;.mode ?MODE? ?OPTIONS?   Set output mode&#xA;   MODE is one of:&#xA;     ascii       Columns/rows delimited by 0x1F and 0x1E&#xA;     box         Tables using unicode box-drawing characters&#xA;     csv         Comma-separated values&#xA;     column      Output in columns.  
(See .width)&#xA;     html        HTML \&lt;table\&gt; code&#xA;     insert      SQL insert statements for TABLE&#xA;     json        Results in a JSON array&#xA;     line        One value per line&#xA;     list        Values delimited by &#34;|&#34;&#xA;     markdown    Markdown table format&#xA;     qbox        Shorthand for &#34;box --wrap 60 --quote&#34;&#xA;     quote       Escape answers as for SQL&#xA;     table       ASCII-art table&#xA;     tabs        Tab-separated values&#xA;     tcl         TCL list elements&#xA;   OPTIONS: (for columnar modes or insert mode):&#xA;     --escape T     ctrl-char escape; T is one of: symbol, ascii, off&#xA;     --wrap N       Wrap output lines to no longer than N characters&#xA;     --wordwrap B   Wrap or not at word boundaries per B (on/off)&#xA;     --ww           Shorthand for &#34;--wordwrap 1&#34;&#xA;     --quote        Quote output text as SQL literals&#xA;     --noquote      Do not quote output text&#xA;     TABLE          The name of SQL table used for &#34;insert&#34; mode&#xA;```&#xA;&#xA;Let&#39;s set up with some sample data&#xA;&#xA;```sql&#xA;CREATE TABLE IF NOT EXISTS books(&#xA;    id INTEGER PRIMARY KEY AUTOINCREMENT,&#xA;    title TEXT NOT NULL,&#xA;    author TEXT NOT NULL,&#xA;    pages INTEGER NOT NULL,&#xA;    release_date TEXT NOT NULL,&#xA;    price REAL NOT NULL&#xA;);&#xA;&#xA;INSERT INTO books(title, author, pages, release_date, price)&#xA;VALUES&#xA;(&#39;The Hobbit&#39;, &#39;J.R.R. Tolkien&#39;, 310, &#39;1937-09-21&#39;, 39.99),&#xA;(&#39;The Fellowship of the Ring&#39;, &#39;J.R.R. Tolkien&#39;, 423, &#39;1954-07-29&#39;, 49.99),&#xA;(&#39;The Two Towers&#39;, &#39;J.R.R. Tolkien&#39;, 352, &#39;1954-11-11&#39;, 49.99), &#xA;(&#39;The Return of the King&#39;, &#39;J.R.R. Tolkien&#39;, 416, &#39;1955-10-20&#39;, 49.99);&#xA;&#xA;SELECT * FROM books;&#xA;```&#xA;&#xA;```&#xA;1|The Hobbit|J.R.R. Tolkien|310|1937-09-21|39.99&#xA;2|The Fellowship of the Ring|J.R.R. 
Tolkien|423|1954-07-29|49.99&#xA;3|The Two Towers|J.R.R. Tolkien|352|1954-11-11|49.99&#xA;4|The Return of the King|J.R.R. Tolkien|416|1955-10-20|49.99&#xA;```&#xA;&#xA;Not the best way to look at the data, but handy in some cases. We can change it; we have 14 different modes to choose from.&#xA;It&#39;s like choosing a flavour for an ice cream: you want something different each time, or you stick to simple vanilla. Which one are you? I am a vanilla guy (I like the table mode, but use csv heavily too).&#xA;&#xA;Let&#39;s change it to a table format with `.mode table`. This will set the mode to `table`.&#xA;&#xA;&gt; table       ASCII-art table&#xA;&#xA;This will show the result sets in an ASCII-art table structure, with `+` and `-` characters separating the rows and columns.&#xA;&#xA;```&#xA;SELECT * FROM books;&#xA;```&#xA;&#xA;```&#xA;+----+----------------------------+----------------+-------+--------------+-------+&#xA;| id |           title            |     author     | pages | release_date | price |&#xA;+----+----------------------------+----------------+-------+--------------+-------+&#xA;| 1  | The Hobbit                 | J.R.R. Tolkien | 310   | 1937-09-21   | 39.99 |&#xA;| 2  | The Fellowship of the Ring | J.R.R. Tolkien | 423   | 1954-07-29   | 49.99 |&#xA;| 3  | The Two Towers             | J.R.R. Tolkien | 352   | 1954-11-11   | 49.99 |&#xA;| 4  | The Return of the King     | J.R.R. Tolkien | 416   | 1955-10-20   | 49.99 |&#xA;+----+----------------------------+----------------+-------+--------------+-------+&#xA;```&#xA;&#xA;That is so nice looking! Crystal clear.&#xA;&#xA;You can see precisely which mode is currently set with the bare `.mode` command.&#xA;&#xA;Since I changed the mode to table, let&#39;s see what the mode is now set to:&#xA;```&#xA;sqlite&gt; .mode&#xA;current output mode: table --wrap 60 --wordwrap off --noquote --escape ascii&#xA;```&#xA;&#xA;It has a bunch of options like `wrap`, `wordwrap`, `noquote`, and `escape`. 
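
All of this can also be driven non-interactively. As a quick sketch (assuming the `sqlite3` CLI is on your PATH), dot commands can be passed with the `-cmd` flag before a one-off query:

```shell
# -cmd runs a dot command before the SQL argument is executed.
# The :memory: database and the SELECT here are just for illustration.
sqlite3 -cmd ".mode list" -cmd ".headers on" :memory: "SELECT 1 AS a, 2 AS b;"
# prints:
# a|b
# 1|2
```
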
You can take a look at the help and tweak them to your liking. I usually don&#39;t change the options; it&#39;s rare that I switch from the defaults.&#xA;&#xA;&#xA;## Output modes&#xA;&#xA;### ASCII &#xA;&#xA;This mode separates the columns with `0x1F` and rows with `0x1E`.&#xA;&#xA;```&#xA;.mode ascii&#xA;SELECT * FROM books;&#xA;```&#xA;&#xA;```&#xA;id�title�author�pages�release_date�price&#xA;1�The Hobbit�J.R.R. Tolkien�310�1937-09-21�39.99&#xA;2�The Fellowship of the Ring�J.R.R. Tolkien�423�1954-07-29�49.99&#xA;3�The Two Towers�J.R.R. Tolkien�352�1954-11-11�49.99&#xA;4�The Return of the King�J.R.R. Tolkien�416�1955-10-20�49.99&#xA;&#xA;```&#xA;&#xA;### Box&#xA;&#xA;The box mode renders the result set using unicode box-drawing characters.&#xA;&#xA;```&#xA;.mode box&#xA;SELECT * FROM books;&#xA;```&#xA;&#xA;```&#xA;┌────┬────────────────────────────┬────────────────┬───────┬──────────────┬───────┐&#xA;│ id │           title            │     author     │ pages │ release_date │ price │&#xA;├────┼────────────────────────────┼────────────────┼───────┼──────────────┼───────┤&#xA;│ 1  │ The Hobbit                 │ J.R.R. Tolkien │ 310   │ 1937-09-21   │ 39.99 │&#xA;│ 2  │ The Fellowship of the Ring │ J.R.R. Tolkien │ 423   │ 1954-07-29   │ 49.99 │&#xA;│ 3  │ The Two Towers             │ J.R.R. Tolkien │ 352   │ 1954-11-11   │ 49.99 │&#xA;│ 4  │ The Return of the King     │ J.R.R. Tolkien │ 416   │ 1955-10-20   │ 49.99 │&#xA;└────┴────────────────────────────┴────────────────┴───────┴──────────────┴───────┘&#xA;```&#xA;&#xA;&#xA;### CSV&#xA;&#xA;The csv mode outputs the result as comma-separated values for columns, with a newline character between rows.&#xA;&#xA;```&#xA;.mode csv&#xA;SELECT * FROM books;&#xA;```&#xA;&#xA;```csv&#xA;1,&#34;The Hobbit&#34;,&#34;J.R.R. Tolkien&#34;,310,1937-09-21,39.99&#xA;2,&#34;The Fellowship of the Ring&#34;,&#34;J.R.R. Tolkien&#34;,423,1954-07-29,49.99&#xA;3,&#34;The Two Towers&#34;,&#34;J.R.R. 
Tolkien&#34;,352,1954-11-11,49.99&#xA;4,&#34;The Return of the King&#34;,&#34;J.R.R. Tolkien&#34;,416,1955-10-20,49.99&#xA;```&#xA;&#xA;&#xA;### Column&#xA;&#xA;The column mode outputs the result in columns of a fixed width. We can use the `.width` dot command to change the widths.&#xA;&#xA;```&#xA;.mode column&#xA;SELECT * FROM books;&#xA;```&#xA;&#xA;```&#xA;id  title                       author          pages  release_date  price&#xA;--  --------------------------  --------------  -----  ------------  -----&#xA;1   The Hobbit                  J.R.R. Tolkien  310    1937-09-21    39.99&#xA;2   The Fellowship of the Ring  J.R.R. Tolkien  423    1954-07-29    49.99&#xA;3   The Two Towers              J.R.R. Tolkien  352    1954-11-11    49.99&#xA;4   The Return of the King      J.R.R. Tolkien  416    1955-10-20    49.99&#xA;```&#xA;&#xA;&#xA;### HTML&#xA;&#xA;The html output mode renders the result set / table as rows of an HTML table element.&#xA;&#xA;```&#xA;.mode html&#xA;SELECT * FROM books;&#xA;```&#xA;&#xA;```html&#xA;&lt;TR&gt;&lt;TH&gt;id&lt;/TH&gt;&#xA;&lt;TH&gt;title&lt;/TH&gt;&#xA;&lt;TH&gt;author&lt;/TH&gt;&#xA;&lt;TH&gt;pages&lt;/TH&gt;&#xA;&lt;TH&gt;release_date&lt;/TH&gt;&#xA;&lt;TH&gt;price&lt;/TH&gt;&#xA;&lt;/TR&gt;&#xA;&lt;TR&gt;&lt;TD&gt;1&lt;/TD&gt;&#xA;&lt;TD&gt;The Hobbit&lt;/TD&gt;&#xA;&lt;TD&gt;J.R.R. Tolkien&lt;/TD&gt;&#xA;&lt;TD&gt;310&lt;/TD&gt;&#xA;&lt;TD&gt;1937-09-21&lt;/TD&gt;&#xA;&lt;TD&gt;39.99&lt;/TD&gt;&#xA;&lt;/TR&gt;&#xA;&lt;TR&gt;&lt;TD&gt;2&lt;/TD&gt;&#xA;&lt;TD&gt;The Fellowship of the Ring&lt;/TD&gt;&#xA;&lt;TD&gt;J.R.R. Tolkien&lt;/TD&gt;&#xA;&lt;TD&gt;423&lt;/TD&gt;&#xA;&lt;TD&gt;1954-07-29&lt;/TD&gt;&#xA;&lt;TD&gt;49.99&lt;/TD&gt;&#xA;&lt;/TR&gt;&#xA;&lt;TR&gt;&lt;TD&gt;3&lt;/TD&gt;&#xA;&lt;TD&gt;The Two Towers&lt;/TD&gt;&#xA;&lt;TD&gt;J.R.R. 
Tolkien&lt;/TD&gt;&#xA;&lt;TD&gt;352&lt;/TD&gt;&#xA;&lt;TD&gt;1954-11-11&lt;/TD&gt;&#xA;&lt;TD&gt;49.99&lt;/TD&gt;&#xA;&lt;/TR&gt;&#xA;&lt;TR&gt;&lt;TD&gt;4&lt;/TD&gt;&#xA;&lt;TD&gt;The Return of the King&lt;/TD&gt;&#xA;&lt;TD&gt;J.R.R. Tolkien&lt;/TD&gt;&#xA;&lt;TD&gt;416&lt;/TD&gt;&#xA;&lt;TD&gt;1955-10-20&lt;/TD&gt;&#xA;&lt;TD&gt;49.99&lt;/TD&gt;&#xA;&lt;/TR&gt;&#xA;```&#xA;&#xA;&#xA;### Insert&#xA;&#xA;The insert mode outputs the result as SQL INSERT statements. This is really handy for bulk inserting or migrating data across databases or tables. The target table name defaults to &#34;table&#34;; you can set it with `.mode insert tablename`. Notice the prices: REAL values are printed with enough digits to round-trip the underlying floating-point value exactly.&#xA;&#xA;```&#xA;.mode insert&#xA;SELECT * FROM books;&#xA;```&#xA;&#xA;```&#xA;INSERT INTO &#34;table&#34;(id,title,author,pages,release_date,price) VALUES(1,&#39;The Hobbit&#39;,&#39;J.R.R. Tolkien&#39;,310,&#39;1937-09-21&#39;,39.99000000000000198);&#xA;INSERT INTO &#34;table&#34;(id,title,author,pages,release_date,price) VALUES(2,&#39;The Fellowship of the Ring&#39;,&#39;J.R.R. Tolkien&#39;,423,&#39;1954-07-29&#39;,49.99000000000000198);&#xA;INSERT INTO &#34;table&#34;(id,title,author,pages,release_date,price) VALUES(3,&#39;The Two Towers&#39;,&#39;J.R.R. Tolkien&#39;,352,&#39;1954-11-11&#39;,49.99000000000000198);&#xA;INSERT INTO &#34;table&#34;(id,title,author,pages,release_date,price) VALUES(4,&#39;The Return of the King&#39;,&#39;J.R.R. Tolkien&#39;,416,&#39;1955-10-20&#39;,49.99000000000000198);&#xA;```&#xA;&#xA;### JSON&#xA;&#xA;The json mode outputs the result as an array of JSON objects.&#xA;&#xA;```&#xA;.mode json&#xA;SELECT * FROM books;&#xA;```&#xA;&#xA;```json&#xA;[&#xA;    {&#34;id&#34;:1,&#34;title&#34;:&#34;The Hobbit&#34;,&#34;author&#34;:&#34;J.R.R. Tolkien&#34;,&#34;pages&#34;:310,&#34;release_date&#34;:&#34;1937-09-21&#34;,&#34;price&#34;:39.99000000000000198},&#xA;    {&#34;id&#34;:2,&#34;title&#34;:&#34;The Fellowship of the Ring&#34;,&#34;author&#34;:&#34;J.R.R. 
Tolkien&#34;,&#34;pages&#34;:423,&#34;release_date&#34;:&#34;1954-07-29&#34;,&#34;price&#34;:49.99000000000000198},&#xA;    {&#34;id&#34;:3,&#34;title&#34;:&#34;The Two Towers&#34;,&#34;author&#34;:&#34;J.R.R. Tolkien&#34;,&#34;pages&#34;:352,&#34;release_date&#34;:&#34;1954-11-11&#34;,&#34;price&#34;:49.99000000000000198},&#xA;    {&#34;id&#34;:4,&#34;title&#34;:&#34;The Return of the King&#34;,&#34;author&#34;:&#34;J.R.R. Tolkien&#34;,&#34;pages&#34;:416,&#34;release_date&#34;:&#34;1955-10-20&#34;,&#34;price&#34;:49.99000000000000198}&#xA;]&#xA;```&#xA;&#xA;&#xA;### List (The default)&#xA;&#xA;As we already know, the list mode is the default: columns separated by `|` and rows by newlines.&#xA;&#xA;```&#xA;.mode list&#xA;SELECT * FROM books;&#xA;```&#xA;&#xA;```&#xA;id|title|author|pages|release_date|price&#xA;1|The Hobbit|J.R.R. Tolkien|310|1937-09-21|39.99&#xA;2|The Fellowship of the Ring|J.R.R. Tolkien|423|1954-07-29|49.99&#xA;3|The Two Towers|J.R.R. Tolkien|352|1954-11-11|49.99&#xA;4|The Return of the King|J.R.R. Tolkien|416|1955-10-20|49.99&#xA;```&#xA;&#xA;### Markdown&#xA;&#xA;The markdown mode outputs the result as a markdown table. We can paste this output into a markdown file and it will render nicely as a table.&#xA;&#xA;```&#xA;.mode markdown&#xA;SELECT * FROM books;&#xA;```&#xA;```md&#xA;| id |           title            |     author     | pages | release_date | price |&#xA;|----|----------------------------|----------------|-------|--------------|-------|&#xA;| 1  | The Hobbit                 | J.R.R. Tolkien | 310   | 1937-09-21   | 39.99 |&#xA;| 2  | The Fellowship of the Ring | J.R.R. Tolkien | 423   | 1954-07-29   | 49.99 |&#xA;| 3  | The Two Towers             | J.R.R. Tolkien | 352   | 1954-11-11   | 49.99 |&#xA;| 4  | The Return of the King     | J.R.R. 
Tolkien | 416   | 1955-10-20   | 49.99 |&#xA;```&#xA;&#xA;### Qbox&#xA;&#xA;The qbox mode is a shorthand for the box mode with the options `--wrap 60 --quote`, so text values come out quoted as SQL literals.&#xA;&#xA;```&#xA;.mode qbox&#xA;SELECT * FROM books;&#xA;```&#xA;&#xA;```&#xA;┌────┬──────────────────────────────┬──────────────────┬───────┬──────────────┬───────┐&#xA;│ id │            title             │      author      │ pages │ release_date │ price │&#xA;├────┼──────────────────────────────┼──────────────────┼───────┼──────────────┼───────┤&#xA;│ 1  │ &#39;The Hobbit&#39;                 │ &#39;J.R.R. Tolkien&#39; │ 310   │ &#39;1937-09-21&#39; │ 39.99 │&#xA;│ 2  │ &#39;The Fellowship of the Ring&#39; │ &#39;J.R.R. Tolkien&#39; │ 423   │ &#39;1954-07-29&#39; │ 49.99 │&#xA;│ 3  │ &#39;The Two Towers&#39;             │ &#39;J.R.R. Tolkien&#39; │ 352   │ &#39;1954-11-11&#39; │ 49.99 │&#xA;│ 4  │ &#39;The Return of the King&#39;     │ &#39;J.R.R. Tolkien&#39; │ 416   │ &#39;1955-10-20&#39; │ 49.99 │&#xA;└────┴──────────────────────────────┴──────────────────┴───────┴──────────────┴───────┘&#xA;```&#xA;&#xA;### Quote&#xA;&#xA;The quote mode outputs values quoted as in SQL: strings in single quotes and numbers as-is.&#xA;&#xA;```&#xA;.mode quote&#xA;SELECT * FROM books;&#xA;```&#xA;```&#xA;&#39;id&#39;,&#39;title&#39;,&#39;author&#39;,&#39;pages&#39;,&#39;release_date&#39;,&#39;price&#39;&#xA;1,&#39;The Hobbit&#39;,&#39;J.R.R. Tolkien&#39;,310,&#39;1937-09-21&#39;,39.99000000000000198&#xA;2,&#39;The Fellowship of the Ring&#39;,&#39;J.R.R. Tolkien&#39;,423,&#39;1954-07-29&#39;,49.99000000000000198&#xA;3,&#39;The Two Towers&#39;,&#39;J.R.R. Tolkien&#39;,352,&#39;1954-11-11&#39;,49.99000000000000198&#xA;4,&#39;The Return of the King&#39;,&#39;J.R.R. Tolkien&#39;,416,&#39;1955-10-20&#39;,49.99000000000000198&#xA;```&#xA;&#xA;### Table&#xA;&#xA;The table mode outputs the result in an ASCII table format. 
We saw this mode in the previous example.&#xA;&#xA;```&#xA;.mode table&#xA;SELECT * FROM books;&#xA;```&#xA;&#xA;```&#xA;+----+----------------------------+----------------+-------+--------------+-------+&#xA;| id |           title            |     author     | pages | release_date | price |&#xA;+----+----------------------------+----------------+-------+--------------+-------+&#xA;| 1  | The Hobbit                 | J.R.R. Tolkien | 310   | 1937-09-21   | 39.99 |&#xA;| 2  | The Fellowship of the Ring | J.R.R. Tolkien | 423   | 1954-07-29   | 49.99 |&#xA;| 3  | The Two Towers             | J.R.R. Tolkien | 352   | 1954-11-11   | 49.99 |&#xA;| 4  | The Return of the King     | J.R.R. Tolkien | 416   | 1955-10-20   | 49.99 |&#xA;+----+----------------------------+----------------+-------+--------------+-------+&#xA;```&#xA;&#xA;### Tabs&#xA;&#xA;The tabs mode outputs the result as tab-separated values.&#xA;&#xA;```&#xA;.mode tabs&#xA;SELECT * FROM books;&#xA;```&#xA;```&#xA;id      title   author  pages   release_date    price&#xA;1       The Hobbit      J.R.R. Tolkien  310     1937-09-21      39.99&#xA;2       The Fellowship of the Ring      J.R.R. Tolkien  423     1954-07-29      49.99&#xA;3       The Two Towers  J.R.R. Tolkien  352     1954-11-11      49.99&#xA;4       The Return of the King  J.R.R. Tolkien  416     1955-10-20      49.99&#xA;```&#xA;&#xA;### Tcl &#xA;&#xA;The tcl mode outputs the result as Tcl list elements, with every value double-quoted. Tcl (Tool Command Language) is a configuration-friendly format.&#xA;```&#xA;.mode tcl&#xA;SELECT * FROM books;&#xA;```&#xA;```tcl&#xA;&#34;id&#34; &#34;title&#34; &#34;author&#34; &#34;pages&#34; &#34;release_date&#34; &#34;price&#34;&#xA;&#34;1&#34; &#34;The Hobbit&#34; &#34;J.R.R. Tolkien&#34; &#34;310&#34; &#34;1937-09-21&#34; &#34;39.99&#34;&#xA;&#34;2&#34; &#34;The Fellowship of the Ring&#34; &#34;J.R.R. Tolkien&#34; &#34;423&#34; &#34;1954-07-29&#34; &#34;49.99&#34;&#xA;&#34;3&#34; &#34;The Two Towers&#34; &#34;J.R.R. 
Tolkien&#34; &#34;352&#34; &#34;1954-11-11&#34; &#34;49.99&#34;&#xA;&#34;4&#34; &#34;The Return of the King&#34; &#34;J.R.R. Tolkien&#34; &#34;416&#34; &#34;1955-10-20&#34; &#34;49.99&#34;&#xA;```&#xA;</content>
      <type>sqlog</type>
    </item>
    <item>
      <title>SQLite dot commands: change directory command</title>
      <link>https://meetgor.com/sqlog/sqlite-cd-dot-command</link>
      <description>Change directory dot command If you are in a sqlite shell and forgot to change directory or want to navigate to a separate directory, you can do that with the d</description>
      <pubDate>Sun, 31 Aug 2025 00:00:00 UTC</pubDate>
      <content>&#xA;&#xA;## Change directory dot command&#xA;&#xA;If you are in a sqlite shell and forgot to change directory, or want to navigate to a separate directory, you can do that with the `.cd` dot command.&#xA;&#xA;```sqlite&#xA;.cd /path/to/directory&#xA;```&#xA;&#xA;This is better than doing `.shell cd /path/to/directory` because it doesn&#39;t spawn a separate process. The `.cd` change is persistent throughout the session, whereas the `.shell cd &lt;path&gt;` change only lasts within that command (a subprocess).&#xA;&#xA;The `.cd` command changes the working directory of the SQLite shell itself, so the change persists for the rest of the session. This means commands like `.import`, `.read`, or `.output` will automatically look for files in the new directory.&#xA;&#xA;However, `.shell cd &lt;path&gt;` spawns a separate subprocess, and the directory change is discarded as soon as that command finishes. It does not affect SQLite’s own state of the current directory.&#xA;&#xA;So if you plan to read or write multiple files from a different location during your SQLite session, prefer the built-in `.cd` command.&#xA;&#xA;&#xA;</content>
      <type>sqlog</type>
    </item>
    <item>
      <title>SQLite dot commands: run system shell commands</title>
      <link>https://meetgor.com/sqlog/sqlite-shell-dot-command</link>
      <description>Shell dot command If you are in middle of a sqlite shell session, and you don&#39;t want to quit the shell to run arbitrary shell command, you can simply use the to</description>
      <pubDate>Sat, 30 Aug 2025 00:00:00 UTC</pubDate>
      <content>&#xA;&#xA;## Shell dot command&#xA;&#xA;If you are in the middle of a sqlite shell session and you don&#39;t want to quit the shell to run an arbitrary shell command, you can simply use `.shell &lt;command&gt;` to execute any shell command right from within the sqlite shell. How handy is this!&#xA;&#xA;```sqlite&#xA;.shell echo &#34;hello, world!&#34;&#xA;```&#xA;&#xA;That is a lame example, but it shows you the power of the `.shell` command.&#xA;&#xA;Let&#39;s say I want to run a golang project; I can do this:&#xA;&#xA;```sqlite&#xA;.shell go run main.go&#xA;```&#xA;&#xA;It&#39;s helpful when you want to do something but don&#39;t want to quit the shell to do it:&#xA;- look up a few files/datapoints from the local filesystem,&#xA;- run scripts to populate data&#xA;- populate the database and then reopen the db shell&#xA;&#xA;These are the things that I have stumbled upon so far. I need more experience to see if there are more.&#xA;&#xA;&#xA;&#xA;&#xA;</content>
      <type>sqlog</type>
    </item>
    <item>
      <title>SQLite dot commands: read command is recursive?</title>
      <link>https://meetgor.com/sqlog/sqlite-read-dot-command-is-recursive</link>
      <description>Yesterday, while exploring the command I discovered a little hack. We know you can read arbitrary sqlite shell commands from the operator, even the dot commands</description>
      <pubDate>Fri, 29 Aug 2025 00:00:00 UTC</pubDate>
      <content>&#xA;&#xA;Yesterday, while exploring the `.read` command I discovered a little hack.&#xA;&#xA;We know you can feed arbitrary sqlite shell commands through the `.read |` operator, even the dot commands.&#xA;&#xA;So you can even use `.read` itself again, which means the script can ask to read itself.&#xA;&#xA;Oops!&#xA;&#xA;Let&#39;s create a file called `loop.sh` that emits valid sql/sqlite shell commands like so:&#xA;&#xA;```bash&#xA;#!/bin/bash&#xA;echo &#34;SELECT &#39;hello, world!&#39;;&#34;&#xA;echo &#34;.read |./loop.sh&#34;&#xA;```&#xA;&#xA;The second line prints out &#34;SELECT &#39;hello, world!&#39;;&#34;, so when directed to the shell it will be executed and print &#34;hello, world!&#34;&#xA;&#xA;Then the third line, `echo &#34;.read |./loop.sh&#34;`, is the magic: it emits a valid sqlite shell command, so the shell executes it, which runs `loop.sh` again, which emits the same `.read` command again, and so on.&#xA;&#xA;What&#39;s the output?&#xA;&#xA;```bash&#xA;$ cat loop.sh&#xA;#!/bin/bash&#xA;echo &#34;SELECT &#39;hello, world!&#39;;&#34;&#xA;echo &#34;.read |./loop.sh&#34;&#xA;&#xA;$ sqlite3&#xA;SQLite version 3.50.4 2025-07-30 19:33:53&#xA;Enter &#34;.help&#34; for usage hints.&#xA;Connected to a transient in-memory database.&#xA;Use &#34;.open FILENAME&#34; to reopen on a persistent database.&#xA;sqlite&gt; .read |./loop.sh&#xA;hello, world!&#xA;hello, world!&#xA;hello, world!&#xA;hello, world!&#xA;hello, world!&#xA;hello, world!&#xA;hello, world!&#xA;hello, world!&#xA;hello, world!&#xA;hello, world!&#xA;hello, world!&#xA;hello, world!&#xA;hello, world!&#xA;hello, world!&#xA;hello, world!&#xA;hello, world!&#xA;hello, world!&#xA;hello, world!&#xA;hello, world!&#xA;hello, world!&#xA;hello, world!&#xA;hello, world!&#xA;hello, world!&#xA;hello, world!&#xA;Input nesting limit (25) reached at line 2. Check recursion.&#xA;sqlite&gt;&#xA;```&#xA;Well, the SQLite developers were smart enough to handle this edge case, I think. :)&#xA;&#xA;This is a little fun; it&#39;s kind of a quirk, a not-quite easter egg, but an interesting thing to know.&#xA;&#xA;</content>
      <type>sqlog</type>
    </item>
    <item>
      <title>SQLite dot commands: read file or standard output</title>
      <link>https://meetgor.com/sqlog/sqlite-dot-commands-read-file-or-standard-output</link>
      <description>Read dot Command The dot command is a quick handy command to import and run your SQL queries in the current session. You can just pass a SQL file (usually a que</description>
      <pubDate>Thu, 28 Aug 2025 00:00:00 UTC</pubDate>
      <content>&#xA;&#xA;## Read dot Command &#xA;&#xA;The `.read` dot command is a quick, handy way to import and run your SQL queries in the current session.&#xA;&#xA;You can just pass a SQL file (statements terminated with `;`; it will execute each one in order).&#xA;&#xA;```bash&#xA;.read filename&#xA;```&#xA;&#xA;Let&#39;s say this is a sql file containing the schema of a database, just one `users` table.&#xA;&#xA;```sql&#xA;CREATE TABLE users (&#xA;  id   INTEGER PRIMARY KEY,&#xA;  name TEXT&#xA;);&#xA;```&#xA;Write this SQL into the `schema.sql` file.&#xA;&#xA;If we use the `.read` command with the name of the file, it will execute it statement by statement (a statement being terminated by `;` here, or even any dot command).&#xA;&#xA;```bash&#xA;.read schema.sql&#xA;```&#xA;&#xA;This will just execute the queries; if there are SELECT statements, it will output their result sets too.&#xA;&#xA;Let&#39;s add some users to the table with `insert_users.sql`:&#xA;&#xA;```sql&#xA;INSERT INTO users(name) VALUES(&#39;abc&#39;), (&#39;def&#39;), (&#39;ghi&#39;);&#xA;SELECT * FROM users;&#xA;```&#xA;&#xA;Save this and run `.read insert_users.sql`; it should display all 3 inserted users. &#xA;&#xA;## Reading with dot commands&#xA;&#xA;Let&#39;s create one more file, with specific dot commands to render a nice table format of a result set from a select statement.&#xA;&#xA;```sql&#xA;.mode table&#xA;SELECT * FROM users;&#xA;```&#xA;&#xA;I save the above contents into a file called `users.sql`.&#xA;&#xA;Now if I run the `.read` command with `users.sql`, I should see all the users, but rendered in table mode. It accepts the dot commands and all of their nice options. 
Be cautious though!&#xA;&#xA;```&#xA;.read users.sql&#xA;```&#xA;&#xA;```&#xA;sqlite&gt; .read users.sql&#xA;+----+---------+&#xA;| id |  name   |&#xA;+----+---------+&#xA;| 1  | User_1  |&#xA;| 2  | User_2  |&#xA;| 3  | User_3  |&#xA;| 4  | User_4  |&#xA;| 5  | User_5  |&#xA;| 6  | User_6  |&#xA;| 7  | User_7  |&#xA;| 8  | User_8  |&#xA;| 9  | User_9  |&#xA;| 10 | User_10 |&#xA;+----+---------+&#xA;```&#xA;&#xA;## Read with standard output&#xA;&#xA;You can also run a script that generates SQL commands and execute its output, by using the `|` operator after `.read` and providing the name of the script. This runs the script, and any output of that script (valid SQL) gets executed in the shell.&#xA;&#xA;For instance, here is a bash script called `insert_users.sh` that inserts 10 users:&#xA;&#xA;```bash&#xA;#!/bin/bash&#xA;for i in $(seq 1 10); do&#xA;  echo &#34;INSERT INTO users (name) VALUES (&#39;User_$i&#39;);&#34;&#xA;done&#xA;```&#xA;&#xA;If you use this as is, it won&#39;t work, as it&#39;s a bash script; you can&#39;t run those in a sqlite shell. However, using `.read | filename` with the shell script will indeed do the trick.&#xA;&#xA;The important thing to note here is that it runs everything the script outputs: the stdout of the script is taken as the input for the read command, just like with the typical pipe operator in linux.&#xA;&#xA;```&#xA;.read | insert_users.sh&#xA;```&#xA;&#xA;This will run the script; the insert statements get printed, so it will execute them. 
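As a quick sanity check before wiring a generator into `.read |`, you can run it on its own and inspect what it would feed to the shell. A sketch that recreates the same loop inline:

```shell
# Recreate the generator from insert_users.sh inline and inspect
# the SQL it would feed to `.read |` in the sqlite shell.
gen() {
  for i in $(seq 1 10); do
    echo "INSERT INTO users (name) VALUES ('User_$i');"
  done
}
gen | head -n 2   # eyeball the first statements
gen | wc -l       # should count 10 INSERT statements
```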
So, in essence, we are piping the output of one command into another, in this case the read command.&#xA;&#xA;```sql&#xA;SELECT * FROM users;&#xA;```&#xA;&#xA;As you can see, it has now inserted all 10 users.&#xA;&#xA;The `|` can be followed by any script, not just bash; it could be python, or any script or program that prints correct, legal SQL statements to the standard output.&#xA;&#xA;This is a really great, small yet highly important piece of functionality of the sqlite shell.&#xA;&#xA;Reference:&#xA;- [SQLite CLI: Reading SQL from a File](https://sqlite.org/cli.html#reading_sql_from_a_file)&#xA;&#xA;</content>
      <type>sqlog</type>
    </item>
    <item>
      <title>SQLite Scalar function: 3 valued iif and if scalar function</title>
      <link>https://meetgor.com/sqlog/sqlite-iff-if-scalar-functions</link>
      <description>In the recent SQLite version specifically the 3.48.0, the support for pair of conditions was added to the iif function. And also an alias for iif as if was adde</description>
      <pubDate>Wed, 27 Aug 2025 00:00:00 UTC</pubDate>
      <content>&#xA;&#xA;In the recent SQLite version, specifically 3.48.0, support for pairs of conditions was added to the iif function. An alias for iif, named if, was also added for compatibility and ease of use for people coming from other databases.&#xA;&#xA;## IIF Before 3.48&#xA;Before this release, it was a 3-valued, ternary-operator kind of thing, like this:&#xA;&#xA;### Simple IF THEN ELSE&#xA;&#xA;We can use the IIF function by passing 3 parameters: the first is the condition to check, the second is the value to return if the condition is true, and the third is the value to return if the condition is false.&#xA;&#xA;```sql&#xA;SELECT iif(1=0, &#39;one is zero&#39;, &#39;one is not zero&#39;);&#xA;```&#xA;&#xA;```&#xA;one is not zero&#xA;```&#xA;&#xA;### Ternary like nested IIF&#xA;&#xA;Optionally, the third parameter can be another nested IIF, as it is just a value in the end. So, we can use IIF like this:&#xA;&#xA;```sql&#xA;SELECT iif(7%2=0, &#39;even&#39;, iif(7%3=0, &#39;multiple of 3&#39;, &#39;prime&#39;)) AS is_prime;&#xA;```&#xA;&#xA;```&#xA;+---------+&#xA;| is_prime|&#xA;+---------+&#xA;| prime   |&#xA;+---------+&#xA;```&#xA;&#xA;That is quite simple, but it gets a little messy with more than one check; take this for instance:&#xA;&#xA;```sql&#xA;SELECT iif(2025 % 400 = 0, &#39;leap&#39;,&#xA;       iif(2025 % 100 = 0, &#39;not leap&#39;,&#xA;       iif(2025 % 4 = 0, &#39;leap&#39;, &#39;not leap&#39;))) AS is_leap_year;&#xA;```&#xA;&#xA;```&#xA;+--------------+&#xA;| is_leap_year |&#xA;+--------------+&#xA;| not leap     |&#xA;+--------------+&#xA;```&#xA;&#xA;I have formatted it nicely, but it still gets hard to read and even write; notice the number of closing brackets at the end.&#xA;&#xA;## IIF After 3.48&#xA;&#xA;With the 3.48 version, this changes quite a bit: you can use iif or if as a CASE WHEN THEN replacement, like so:&#xA;&#xA;```sql&#xA;SELECT iif(2025 % 400 = 0, &#39;leap&#39;,&#xA;           2025 % 100 = 
0, &#39;not leap&#39;,&#xA;           2025 % 4 = 0, &#39;leap&#39;, &#39;not leap&#39;) AS is_leap_year;&#xA;```&#xA;&#xA;```&#xA;+--------------+&#xA;| is_leap_year |&#xA;+--------------+&#xA;| not leap     |&#xA;+--------------+&#xA;```&#xA;&#xA;How clean is that! With this you can specify condition and value as a pair, and there can be N subsequent condition-value pairs. All this while maintaining backwards compatibility; that&#39;s quite a remarkable thing to ship, I would say, not easy to handle.&#xA;&#xA;Now it has 3 variants:&#xA;- 3-valued expression (three expressions → condition, true value, false value; could be nested too)&#xA;- 2-valued expression (assumes the false value is NULL)&#xA;- N condition-value pair expression (any number of condition-value pairs)&#xA;&#xA;This is neat and makes a lot of sense.&#xA;&#xA;Let&#39;s see the same example in all 3 variants.&#xA;&#xA;### 3 Valued Expression&#xA;&#xA;This is the simplest one:&#xA;&#xA;```sql&#xA;SELECT iif(2025%4=0, &#39;probably leap&#39;, &#39;not leap&#39;) AS is_leap_year;&#xA;```&#xA;&#xA;```&#xA;+--------------+&#xA;| is_leap_year |&#xA;+--------------+&#xA;| not leap     |&#xA;+--------------+&#xA;```&#xA; &#xA;This returned the false value, since the condition evaluated to false.&#xA;&#xA;&#xA;### 2 Valued Expression&#xA;&#xA;```sql&#xA;SELECT iif(2025%4=0, &#39;probably leap&#39;) AS is_leap_year;&#xA;```&#xA;&#xA;```&#xA;+--------------+&#xA;| is_leap_year |&#xA;+--------------+&#xA;|              |&#xA;+--------------+&#xA;```&#xA;This returned NULL, as 2025 is not divisible by 4. If there are 2N parameters to the function and all the conditions are false, it will return NULL; the above example is the base case of it. Remember to have at least 2 parameters: the condition, and the value to return if that condition evaluates to true. 
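For contrast, when the single condition in the two-argument form holds (2024 is divisible by 4), the value is returned instead of NULL:

```sql
SELECT iif(2024 % 4 = 0, 'probably leap') AS is_leap_year;
```

```
+---------------+
| is_leap_year  |
+---------------+
| probably leap |
+---------------+
```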
The final else parameter is optional; if it is omitted and no condition is true, NULL is returned.&#xA;&#xA;&#xA;### N Pair Expression&#xA;&#xA;Moving on to the final one, which is the most readable version and allows more:&#xA;&#xA;```sql&#xA;SELECT iif(2025 % 400 = 0, &#39;leap&#39;,&#xA;           2025 % 100 = 0, &#39;not leap&#39;,&#xA;           2025 % 4 = 0, &#39;leap&#39;,&#xA;           &#39;not leap&#39;)&#xA;|| &#39; year&#39; AS is_leap_year;&#xA;```&#xA;&#xA;```&#xA;+---------------+&#xA;| is_leap_year  |&#xA;+---------------+&#xA;| not leap year |&#xA;+---------------+&#xA;```&#xA;&#xA;Elegant!&#xA;&#xA;Notice the last argument, the fallback: if you don&#39;t specify it, the result is NULL when no condition matches. With it, there are 2N+1 parameters (where N is the number of conditions).&#xA;&#xA;Another way to remember it:&#xA;&#xA;- With an even number of parameters: each true condition returns its corresponding value; if all conditions are false, NULL is returned.&#xA;- With an odd number of parameters: each true condition returns its corresponding value; if all conditions are false, the last parameter is returned as the fallback.&#xA;&#xA;You can use iif or if interchangeably. Both do the same thing as CASE WHEN THEN conditions; I think IIF is now really easy to write, since it supports arguments in pairs.&#xA;&#xA;</content>
      <type>sqlog</type>
    </item>
    <item>
      <title>SQLite functions: unistr and unistr_quote</title>
      <link>https://meetgor.com/sqlog/sqlite-functions-unistr-and-unistr-quote</link>
      <description>SQLite introduced and functions These are for taking in escape sequences and returning the actual unicode string like emoji or other non-english characters outs</description>
      <pubDate>Tue, 26 Aug 2025 00:00:00 UTC</pubDate>
      <content>&#xA;&#xA;SQLite [3.50](https://www.sqlite.org/releaselog/3_50_0.html) introduced the [unistr](https://www.sqlite.org/lang_corefunc.html#unistr) and [unistr_quote](https://www.sqlite.org/lang_corefunc.html#unistr_quote) functions.&#xA;&#xA;These take in escape sequences and return the actual unicode string, like emoji or other non-english characters outside the ascii range.&#xA;&#xA;`unistr` takes in the escape sequences and returns the actual unicode string.&#xA;`unistr_quote` escapes those sequences back into a quoted SQL literal, so your data can be embedded in a sql query and stored with emojis.&#xA;&#xA;&gt; I wonder if these functions were added because of LLMs&#39; emoji-heavy outputs?&#xA;&#xA;Here&#39;s how you use unistr in a simple query:&#xA;&#xA;```sql&#xA;SELECT unistr(&#39;\u2665, \U0001F600&#39;);&#xA;```&#xA;&#xA;This will print the :heart: and :smiley: emojis.&#xA;&#xA;I won&#39;t print the emojis; it might look like an AI-generated post (grunts with pride).&#xA;&#xA;This is handy for rendering the values, but first you need the values stored in the db, and for that you need them as a string, i.e. an escaped string.&#xA;&#xA;That is where `unistr_quote` comes in:&#xA;&#xA;```sql&#xA;SELECT unistr_quote(&#39;\U0001F600&#39;);&#xA;```&#xA;&#xA;This will simply print &#39;\U0001F600&#39;. Notice the quotes here: it is an escaped character sequence, quoted like a special string in the middle of a string.&#xA;&#xA;So if we have something like this:&#xA;&#xA;```sql&#xA;SELECT printf(&#39;what the heck is this%s symbol?&#39;, unistr_quote(&#39;\U0001F449&#39;));&#xA;```&#xA;&#xA;&gt; output: what the heck is this&#39;\U0001F449&#39; symbol?&#xA;&#xA;This makes the distinction clear: we get a quoted string for that special set of characters, or escape sequence, which can be useful if the storage medium can&#39;t handle raw emojis.&#xA;</content>
      <type>sqlog</type>
    </item>
    <item>
      <title>SQLite dot commands: html tables with output and once</title>
      <link>https://meetgor.com/sqlog/sqlite-dot-commands-html-tables-with-output-and-once</link>
      <description>The or was released in release last year. I was trying this command But was getting an error that it was not a valid argument, and the .www command doesn’t ex</description>
      <pubDate>Mon, 25 Aug 2025 00:00:00 UTC</pubDate>
      <content>&#xA;&#xA;The `.once -w` or `.www` was released in the [SQLite 3.47](https://www.sqlite.org/releaselog/3_47_0.html) release last year.&#xA;&#xA;I was trying this command&#xA;&#xA;```shell&#xA;.once -w results.html&#xA;```&#xA;&#xA;But I was getting an error that it was not a valid argument, and the .www command didn&#39;t exist. I looked at the git repo, found the source code, and found out that it was released in a recent version. My laptop had SQLite version 3.37 installed.&#xA;&#xA;I ran a docker image of sqlite with the latest version to check out how the `.www` command and the `.once -w` option work.&#xA;&#xA;## What is .once -w or .output -w or .www&#xA;&#xA;These are equivalent:&#xA;- `.once -w`&#xA;- `.www`&#xA;&#xA;The `.output -w` form applies to all the subsequent queries, whereas `.once -w` and `.www` apply only to the next one query.&#xA;&#xA;This command basically allows the result set to be saved in an html format/file. Note that you cannot specify a file; it will be stored in a temporary file, and if that is accessible and readable to the browser, with enough permissions set on the temp folder (whatever that is on linux, windows, or mac), it should render as an html page (it&#39;s simple).&#xA;&#xA;However, if you want to write the results to a specific file, then you need to do some work:&#xA;&#xA;```&#xA;.output result.html&#xA;.print &#39;&lt;!DOCTYPE html&gt;&lt;html&gt;&lt;body&gt;&lt;TABLE border=&#34;1&#34; cellspacing=&#34;0&#34; cellpadding=&#34;2&#34;&gt;&#39;&#xA;.mode html&#xA;.headers on&#xA;SELECT abs(random()%10)+1 as &#34;some numbers&#34;, &#39;number&#39; from generate_series(1,10);&#xA;.print &#39;&lt;/table&gt;&lt;/body&gt;&lt;/html&gt;&#39;&#xA;```&#xA;&#xA;This will store the html document in result.html with proper tags and semantics, not just the table element.&#xA;&#xA;Phew! That might not be a lot, but that is the thing: `.mode html` will only get you the table elements, and browsers won&#39;t render the raw table rows nicely without the surrounding html document tags.&#xA;&#xA;&#xA;</content>
      <type>sqlog</type>
    </item>
    <item>
      <title>SQLite: Randomly order rows</title>
      <link>https://meetgor.com/sqlog/sqlite-randomly-order-rows</link>
      <description>Return rows or records in a table with random order in SQLite Let&#39;s take a simple table for this: This will return the rows in a random order You can limit the</description>
      <pubDate>Sun, 24 Aug 2025 00:00:00 UTC</pubDate>
      <content>&#xA;&#xA;Return rows or records of a table in random order in SQLite.&#xA;&#xA;Let&#39;s take a simple table for this:&#xA;&#xA;```sql&#xA;CREATE TABLE user (&#xA;    id INTEGER PRIMARY KEY AUTOINCREMENT,&#xA;    name TEXT&#xA;);&#xA;INSERT INTO user (name) VALUES (&#39;abc&#39;), (&#39;def&#39;), (&#39;ghi&#39;), (&#39;jkl&#39;), (&#39;mno&#39;);&#xA;```&#xA;&#xA;```sql&#xA;SELECT * FROM user ORDER BY random();&#xA;```&#xA;&#xA;This will return the rows in a random order.&#xA;&#xA;You can limit the number of rows returned; with our 5 rows, this picks 3 of them at random:&#xA;&#xA;```sql&#xA;SELECT * FROM user ORDER BY random() LIMIT 3;&#xA;```&#xA;&#xA;We can even randomly shuffle a subset of rows with LIMIT and WHERE clauses:&#xA;&#xA;```sql&#xA;SELECT * FROM user&#xA;WHERE id BETWEEN 1 AND 10 ORDER BY random() LIMIT 5;&#xA;```&#xA;&#xA;This is cool if you want a certain subset of samples, in no strict order.&#xA;&#xA;</content>
      <type>sqlog</type>
    </item>
    <item>
      <title>SQLite Scalar Function: Random</title>
      <link>https://meetgor.com/sqlog/sqlite-scalar-function-random</link>
      <description>The random function in sqlite is quite handy to generate pseudo random numbers. The random function returns a value between and Note that it doesn&#39;t return the</description>
      <pubDate>Sat, 23 Aug 2025 00:00:00 UTC</pubDate>
      <content>&#xA;&#xA;The random function in sqlite is quite handy for generating pseudo-random numbers.&#xA;&#xA;The random function returns a value between `-9223372036854775807` and `+9223372036854775807`.&#xA;&#xA;```sql&#xA;SELECT CAST(power(2,63) AS INTEGER);&#xA;SELECT CAST(-power(2,63) AS INTEGER);&#xA;```&#xA;Note that it doesn&#39;t return the minimum 64-bit integer `-9223372036854775808`, because using that value might break things where the integer limit would overflow (taking its absolute value, for instance).&#xA;&#xA;This was actually documented incorrectly before 25th July 2025; that&#39;s quite recent.&#xA;&#xA;We can take the absolute value to avoid negative numbers, and then mod it (divide and keep the remainder) by the max number we want to generate up to.&#xA;&#xA;Example:&#xA;&#xA;```sql&#xA;SELECT random() as random_number; &#xA;```&#xA;```&#xA;random_number&#xA;7855057830251041076&#xA;```&#xA;&#xA;If we want numbers within a specific range, then use this:&#xA;&#xA;```sql&#xA;SELECT abs(random() % 10) + 1 as random_number; &#xA;```&#xA;```&#xA;random_number&#xA;6&#xA;```&#xA;Here 10 is the max number and 1 is the minimum number in the range. If you want negative as well as positive values, you can remove the abs function:&#xA;&#xA;```sql&#xA;SELECT (random() % 10) + 1 AS random_number&#xA;    FROM generate_series(1,20);&#xA;```&#xA;&#xA;This will generate 20 random numbers between -8 and 10 (`random() % 10` ranges from -9 to 9, and we add 1). We&#39;ll leverage the generate_series function that I learned about the other day.&#xA;&#xA;&#xA;</content>
      <type>sqlog</type>
    </item>
    <item>
      <title>SQLite functions: Generate_Series Examples</title>
      <link>https://meetgor.com/sqlog/sqlite-functions-generate-series-examples</link>
      <description>After taking a look at what the Generate Series function does, now let&#39;s see some examples that might be handy. Simple sequence To generate a series of numbers</description>
      <pubDate>Fri, 22 Aug 2025 00:00:00 UTC</pubDate>
      <content>&#xA;&#xA;After taking a look at what the Generate Series function does, now let&#39;s see some examples that might be handy.&#xA;&#xA;## Simple sequence&#xA;&#xA;To generate a series of numbers from 1 to 5:&#xA;&#xA;```sql&#xA;SELECT * FROM generate_series(1, 5);&#xA;```&#xA;&#xA;## Incremental sequence&#xA;&#xA;To generate a series of numbers from 0 to 50 in steps of 10:&#xA;&#xA;```sql&#xA;SELECT * FROM generate_series(0, 50, 10);&#xA;```&#xA;&#xA;This will start from 0 and increment by 10 till 50.&#xA;&#xA;```&#xA;0&#xA;10&#xA;20&#xA;30&#xA;40&#xA;50&#xA;```&#xA;&#xA;&#xA;## Backwards sequence&#xA;&#xA;To generate a series from 50 to 0 in steps of 10:&#xA;&#xA;```sql&#xA;SELECT * FROM generate_series(0, 50, -10);&#xA;```&#xA;&#xA;This will start from 50 (because a negative step flips the start and stop parameters) and decrement by 10 till 0.&#xA;&#xA;```&#xA;50&#xA;40&#xA;30&#xA;20&#xA;10&#xA;0&#xA;```&#xA;&#xA;## Random Numbers&#xA;&#xA;To generate the numbers 1 to 5 in a random order:&#xA;&#xA;```sql&#xA;SELECT * FROM generate_series(1, 5) ORDER BY RANDOM();&#xA;```&#xA;&#xA;This will generate the numbers 1 to 5 with a random ordering. This is because, for each row, SQLite generates a random number between -max int and +max int and then orders the rows based on those numbers.&#xA;&#xA;OR&#xA;&#xA;To generate 5 random numbers between 1 and 10:&#xA;&#xA;```sql&#xA;SELECT ABS((random()%10)) + 1 FROM generate_series(1, 5);&#xA;```&#xA;&#xA;This would generate 5 random numbers between 1 and 10. 
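As a quick sanity check of that range trick, here is a minimal sketch using Python's stdlib sqlite3 module (not part of the original post; it emulates the repeated draws with a loop, since the series extension usually ships only with the CLI):

```python
import sqlite3

con = sqlite3.connect(":memory:")  # scratch in-memory database

# generate_series ships with the sqlite3 CLI as an extension, so Python's
# sqlite3 module usually lacks it; emulate the repeated draws with a loop.
values = [con.execute("SELECT abs(random() % 10) + 1").fetchone()[0]
          for _ in range(1000)]

assert all(v in range(1, 11) for v in values)  # every draw lands in 1..10
```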
The random function generates an integer between -max int and +max int; we then mod it by 10 (or the upper bound of the range you want to generate up to), which leaves us with numbers from -9 to 9. Those can be negative, so we apply ABS to make them positive (0 to 9), and add one to shift the 0 offset so the range becomes 1 to 10.&#xA;&#xA;```&#xA;3&#xA;9&#xA;10&#xA;7&#xA;5&#xA;```&#xA;&#xA;&#xA;## Random Characters&#xA;&#xA;To generate 5 random characters from A to Z:&#xA;&#xA;```sql&#xA;SELECT char(abs(random()%26)+65) FROM generate_series(1, 5);&#xA;```&#xA;&#xA;This generates a random integer between -max int and +max int, mods it with 26 and takes abs to get a number between 0 and 25, then adds 65 to land in the range 65 to 90, the ASCII codes for A to Z, which char() renders as a string.&#xA;&#xA;&#xA;## Date Ranges&#xA;&#xA;To generate dates for a month:&#xA;&#xA;```sql&#xA;SELECT date(&#39;2025-08-01&#39;, &#39;+&#39; || (value-1) || &#39; day&#39;) as date_val&#xA;FROM generate_series(1, 31);&#xA;```&#xA;&#xA;This takes the start date 2025-08-01 and adds (value - 1) days (value starts at 1, so we subtract 1 to include the start date itself). This generates a series of dates from 2025-08-01 to 2025-08-31.&#xA;&#xA;To generate the first day of each month:&#xA;&#xA;```sql&#xA;SELECT date(&#39;2025-08-01&#39;, &#39;+&#39; || (value-1) || &#39; month&#39;) as date_val&#xA;FROM generate_series(1, 12);&#xA;```&#xA;&#xA;&#xA;## Gap filling in existing ids&#xA;&#xA;To fill in the missing ids in a sequence of ids:&#xA;&#xA;```sql&#xA;SELECT * FROM generate_series(1, 5) WHERE value NOT IN (1, 3, 5);&#xA;```&#xA;&#xA;Here you could imagine the NOT IN list being another table (or a subquery) containing the existing ids; the query then returns the missing ones.&#xA;&#xA;&#xA;</content>
      <type>sqlog</type>
    </item>
    <item>
      <title>SQLite functions: generate_series</title>
      <link>https://meetgor.com/sqlog/sqlite-functions-generate-series</link>
      <description>The is a table valued function in sqlite and is available via the extension. The valued function is something that returns a table but is virtual (doesn&#39;t reall</description>
      <pubDate>Thu, 21 Aug 2025 00:00:00 UTC</pubDate>
      <content>&#xA;&#xA;The [generate_series](https://www.sqlite.org/series.html) is a table-valued function in SQLite and is available via the [generate_series](https://sqlite.org/src/artifact?ci=trunk&amp;filename=ext/misc/series.c) extension.&#xA;&#xA;&gt; A table-valued function is something that returns a table but is virtual (it doesn&#39;t really have data or a schema stored in it). It has hidden columns which are used as parameters to the function to constrain the output and generate the data according to those parameters.&#xA;&#xA;The basic operation with `generate_series` looks like this:&#xA;&#xA;```sql&#xA;SELECT * FROM generate_series(1, 5);&#xA;```&#xA;&#xA;```&#xA;sqlite&gt; SELECT * FROM generate_series(1, 5);&#xA;+-------+&#xA;| value |&#xA;+-------+&#xA;| 1     |&#xA;| 2     |&#xA;| 3     |&#xA;| 4     |&#xA;| 5     |&#xA;+-------+&#xA;sqlite&gt;&#xA;```&#xA;&#xA;Pretty neat, it generated a series of numbers (integers) from 1 to 5 (5 included).&#xA;&#xA;Now, are they really integers?&#xA;&#xA;SQLite is weird, it doesn&#39;t really have strict types, but these are integers by default. To confirm:&#xA;&#xA;```sql&#xA;SELECT typeof(value) FROM generate_series(1, 5);&#xA;```&#xA;&#xA;```&#xA;sqlite&gt; SELECT typeof(value) FROM generate_series(1, 5);&#xA;+---------------+&#xA;| typeof(value) |&#xA;+---------------+&#xA;| integer       |&#xA;| integer       |&#xA;| integer       |&#xA;| integer       |&#xA;| integer       |&#xA;+---------------+&#xA;```&#xA;OK! Cool, now let&#39;s take this to the next level:&#xA;&#xA;```sql&#xA;SELECT char(value+64) FROM generate_series(1, 5);&#xA;```&#xA;&#xA;```&#xA;sqlite&gt; SELECT char(value+64) FROM generate_series(1, 5);&#xA;+----------------+&#xA;| char(value+64) |&#xA;+----------------+&#xA;| A              |&#xA;| B              |&#xA;| C              |&#xA;| D              |&#xA;| E              |&#xA;+----------------+&#xA;```&#xA;OK! 
That is simple ASCII and type-casting shenanigans!&#xA;&#xA;We take the value, an integer from 1 to 5, and add 64 to shift it into the range 65 to 69, the ASCII equivalent of A to E. You could do the same thing with different parameters:&#xA;&#xA;```sql&#xA;SELECT char(value) FROM generate_series(65, 70);&#xA;```&#xA;&#xA;```&#xA;sqlite&gt; SELECT char(value) FROM generate_series(65, 70);&#xA;+-------------+&#xA;| char(value) |&#xA;+-------------+&#xA;| A           |&#xA;| B           |&#xA;| C           |&#xA;| D           |&#xA;| E           |&#xA;| F           |&#xA;+-------------+&#xA;```&#xA;&#xA;Same result (plus an extra F, since 70 is included)!&#xA;&#xA;But OK, this is another rabbit hole; let&#39;s focus back on generate_series. The parameters are start and stop, right?&#xA;Well, there is one more: step. This is the increment applied at each step.&#xA;&#xA;If you are familiar with C-style for loops, you know this as the increment part, or the `i++` bit.&#xA;&#xA;```sql&#xA;SELECT value FROM generate_series(0, 20, 5);&#xA;```&#xA;&#xA;```&#xA;sqlite&gt; SELECT value FROM generate_series(0, 20, 5);&#xA;+-------+&#xA;| value |&#xA;+-------+&#xA;| 0     |&#xA;| 5     |&#xA;| 10    |&#xA;| 15    |&#xA;| 20    |&#xA;+-------+&#xA;sqlite&gt;&#xA;```&#xA;So, the third parameter increments the counter by that number. 
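(An aside, not from the original post: if your SQLite build lacks the series extension, the same stepped sequence can be sketched with a core-SQLite recursive CTE, shown here through Python's stdlib sqlite3 module.)

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Core-SQLite stand-in for generate_series(0, 20, 5): the CTE keeps adding 5,
# and the LIMIT 5 inside the CTE caps the recursion at five rows (this is the
# bounded-recursion pattern from SQLite's WITH documentation).
rows = con.execute("""
    WITH RECURSIVE series(value) AS (
        SELECT 0
        UNION ALL
        SELECT value + 5 FROM series
        LIMIT 5
    )
    SELECT value FROM series
""").fetchall()

print([r[0] for r in rows])  # [0, 5, 10, 15, 20]
```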
So we start from 0 and increment by 5 for each step (row) till 20.&#xA;&#xA;The default step value is 1; the quirk is that a step of 0 is also treated as 1.&#xA;&#xA;You can also go in reverse:&#xA;&#xA;```sql&#xA;SELECT * FROM generate_series(0, 20, -5);&#xA;```&#xA;&#xA;This will generate the series 20, 15, 10, 5, 0. Note that it is the step that is negative, not the stop value; a negative step basically flips the start and stop values, so we start from 20 and decrement by 5 at each step.&#xA;&#xA;If you want to generate a series in negative ranges, the start or stop needs to be modified accordingly: the start and stop control which numbers you iterate over, and the step controls the direction in which it moves from start to end.&#xA;&#xA;```sql&#xA;SELECT * FROM generate_series(-20, 20, -5);&#xA;```&#xA;&#xA;This will generate the number series from 20 to -20: start from 20 and decrement by 5 at each step until we reach -20. So we had -20 as the start, but why do we start from 20? Because the step is negative, we need to start from the higher value and end at the lower value, hence the start and stop are flipped for a step value &lt; 0. &#xA;&#xA;```sql&#xA;SELECT * FROM generate_series(10) LIMIT 5;&#xA;```&#xA;&#xA;This gave 10, 11, 12, 13, 14. Right, how? 
Because there is not just one column in the generate_series virtual table; there are more.&#xA;&#xA;Apart from value, the integer we get from the function as the row (column cell), there are start, stop, step, and rowid columns, which are hidden columns in the `generate_series` function or virtual table.&#xA;&#xA;Let&#39;s see:&#xA;&#xA;```sql&#xA;SELECT rowid, start, stop, step, value FROM generate_series(0,10,2);&#xA;```&#xA;&#xA;```&#xA;+-------+-------+------+------+-------+&#xA;| rowid | start | stop | step | value |&#xA;+-------+-------+------+------+-------+&#xA;| 1     | 0     | 10   | 2    | 0     |&#xA;| 2     | 0     | 10   | 2    | 2     |&#xA;| 3     | 0     | 10   | 2    | 4     |&#xA;| 4     | 0     | 10   | 2    | 6     |&#xA;| 5     | 0     | 10   | 2    | 8     |&#xA;| 6     | 0     | 10   | 2    | 10    |&#xA;+-------+-------+------+------+-------+&#xA;```&#xA;&#xA;The start, stop, and step values remain the same, as those are the parameters and they can&#39;t change, but notice that rowid and value, the first and last columns in the result set, do change.&#xA;&#xA;Interestingly, since start, stop, and step are the parameters to the function, you can technically pass them however you like:&#xA;&#xA;```sql&#xA;SELECT * FROM generate_series() WHERE start = 10 AND stop = 20 AND step = 5;&#xA;```&#xA;This will give 10, 15, 20, since the start is 10 and we increment by 5 until 20. The step is optional, and note that stop is optional too, but never forget to provide it: leaving it out can cause a forever loop that keeps incrementing by 1 and never gives the result back, until your computer gives up.&#xA;&#xA;&#xA;</content>
      <type>sqlog</type>
    </item>
    <item>
      <title>SQLite Dot command: Once</title>
      <link>https://meetgor.com/sqlog/sqlite-dot-command-once</link>
      <description>The dot command is similar to the , however the distinction is that it is limited to the very next SQL query and not all the subsequent queries. The primary way</description>
      <pubDate>Wed, 20 Aug 2025 00:00:00 UTC</pubDate>
      <content>&#xA;&#xA;The [once](https://sqlite.org/cli.html#writing_results_to_a_file) dot command is similar to [output](https://sqlite.org/cli.html#writing_results_to_a_file); however, the distinction is that it is limited to the very next SQL query and not all the subsequent queries.&#xA;&#xA;The primary way to use `once` is either by specifying a specific file name/path or by opening the result set in the system editor.&#xA;&#xA;### Output to a file&#xA;&#xA;```&#xA;.once somefile.txt&#xA;```&#xA;&#xA;Once this is set, the result set of the next query you execute will be written to the specified file.&#xA;&#xA;```sql&#xA;SELECT printf(&#34;Hello %s! Bye, %s!&#34;, &#34;world&#34;, &#34;mars&#34;);&#xA;```&#xA;&#xA;```&#xA;$ cat somefile.txt&#xA;Hello world! Bye, mars!&#xA;```&#xA;&#xA;Pretty cool! But there&#39;s more.&#xA;&#xA;There are three options that control where this output can go:&#xA;&#xA;- Into a temporary file opened in a system text editor, with the -e option&#xA;- Into a temporary excel/csv/xlsx file in a system spreadsheet application, with the -x option&#xA;- Into an excel/csv/xlsx file (compatible with Microsoft Excel) containing UTF-8 characters or symbols, with the --bom option&#xA;&#xA;&#xA;### Text Editor&#xA;&#xA;You can open the result set as a temporary file in a system text editor with the -e option.&#xA;&#xA;```&#xA;.once -e&#xA;```&#xA;&#xA;You can set the system editor with &#xA;&#xA;&#xA;### Spreadsheet Editing Application&#xA;&#xA;You can also open the result set as a temporary xlsx or csv file in a system spreadsheet editing application like Microsoft Excel, LibreOffice Calc, etc.&#xA;&#xA;```&#xA;.once -x&#xA;```&#xA;&#xA;&#xA;### Unicode encoded Spreadsheet&#xA;&#xA;This option is used to create an excel/csv/xlsx file (compatible with Microsoft Excel) containing UTF-8 characters or symbols. 
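To make the byte order mark concrete, here is a small Python sketch (stdlib only, not from the SQLite CLI itself; the CSV file name and contents are just an illustration) of the bytes that --bom prepends:

```python
import codecs

# The UTF-8 byte order mark is the three bytes EF BB BF.
print(codecs.BOM_UTF8.hex(" "))  # ef bb bf

# Prepending it by hand, the way .once --bom does for its output file:
with open("somefile.csv", "wb") as f:
    f.write(codecs.BOM_UTF8)
    f.write("greeting,city\nHello,Zürich\n".encode("utf-8"))
```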
If you use -x on Linux, it will work fine there, but the file may not render UTF-8 characters properly when opened in Microsoft Excel or other xlsx-compatible applications. To make sure the UTF-8 characters are rendered and parsed properly in the excel file, just use --bom, which stands for byte order mark. This option adds specific bytes at the beginning of the file so the application understands which encoding to use while rendering, like `EF BB BF` for UTF-8, `FE FF` for UTF-16, and so on.&#xA;&#xA;```&#xA;.mode csv&#xA;.once --bom filename.xlsx&#xA;```&#xA;&#xA;&#xA;</content>
      <type>sqlog</type>
    </item>
    <item>
      <title>SQLite Dot command: Output</title>
      <link>https://meetgor.com/sqlog/sqlite-dot-command-output</link>
      <description>I will be starting to post something I learn daily about sql,sqlite or anything sql. There is so much to learn! Today I found you can output a result of query i</description>
      <pubDate>Tue, 19 Aug 2025 00:00:00 UTC</pubDate>
      <content>&#xA;&#xA;I will be starting to post something I learn daily about SQL, SQLite, or anything SQL-related.&#xA;&#xA;There is so much to learn!&#xA;&#xA;Today I found you can output the result of a query in the SQLite shell with&#xA;&#xA;```&#xA; .output filename&#xA;```&#xA;This will start writing the result of all subsequent queries executed in the shell to the file. It will preserve all the modes and configuration used for that specific instance of the shell. Basically, the output you would see after executing a query in the shell gets dumped to the mentioned file instead (it will not be printed to the screen, as it is dumped into the file).&#xA;&#xA;More info here: [SQLite Dot command: Output](https://sqlite.org/cli.html#writing_results_to_a_file)&#xA;&#xA;&#xA;</content>
      <type>sqlog</type>
    </item>
  </channel>
</rss>