{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# PySpark Data Engineering Assessment (Extended)\n",
    "\n",
    "Welcome! In this notebook, you'll practice:\n",
    "\n",
    "1. Reading the **Titanic CSV** in **Pandas** and **PySpark**.\n",
    "2. **Splitting** a single dataset into two DataFrames and **merging** them back together in both Pandas and Spark.\n",
    "3. Data cleaning and aggregations in Pandas and Spark.\n",
    "4. Writing and reading **Parquet** files.\n",
    "5. Creating a **PySpark UDF** that leverages a **lightweight transformer model** to compute embeddings for passenger names.\n",
    "\n",
    "---\n",
    "\n",
    "## Dataset\n",
    "\n",
    "- **`titanic.csv`**: This file is in the `../data/` directory, containing columns such as:\n",
    "  - `PassengerId`, `Name`, `Sex`, `Age`, `Fare`, `Survived`, etc.\n",
    "\n",
    "We will:\n",
    "1. Read `titanic.csv` into Pandas and Spark.\n",
    "2. Split the original DataFrame into two subsets (simulating two “tables”).\n",
    "3. Demonstrate merges/joins in Pandas and Spark using these subsets.\n",
    "4. Perform data cleaning and transformations.\n",
    "5. Write to Parquet.\n",
    "6. Implement a Spark UDF to generate embeddings for passenger names.\n",
    "\n",
    "---\n",
    "\n",
    "## Instructions\n",
    "\n",
    "Throughout the notebook, you'll see `TODO` sections. Please fill in the required code. Feel free to add extra cells or explanations as needed.\n",
    "\n",
    "When finished, please save or export this notebook and submit according to your instructions.\n",
    "\n",
    "Let's begin!\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 1. Imports and Spark Setup\n",
    "\n",
    "import os\n",
    "import pandas as pd\n",
    "\n",
    "# PySpark imports\n",
    "from pyspark.sql import SparkSession\n",
    "from pyspark.sql import functions as F\n",
    "from pyspark.sql.types import *\n",
    "\n",
    "# Create/initialize Spark session\n",
    "spark = SparkSession.builder \\\n",
    "    .appName(\"TitanicAssessmentExtended\") \\\n",
    "    .getOrCreate()\n",
    "\n",
    "print(\"Spark version:\", spark.version)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 2. Read the Titanic CSV (Pandas & Spark)\n",
    "# ========================================\n",
    "\n",
    "# Path to the CSV file\n",
    "titanic_csv_path = os.path.join(\"..\", \"data\", \"titanic.csv\")\n",
    "\n",
    "# 2.1 TODO: Read 'titanic.csv' into a Pandas DataFrame (pd_df)\n",
    "# pd_df = ?\n",
    "\n",
    "# Inspect the shape and first few rows\n",
    "# print(\"pd_df shape:\", pd_df.shape)\n",
    "# display(pd_df.head())\n",
    "\n",
    "# 2.2 TODO: Read 'titanic.csv' into a Spark DataFrame (spark_df)\n",
    "# spark_df = ?\n",
    "\n",
    "# Check schema and row count\n",
    "# spark_df. ...\n",
    "# print(\"spark_df count:\", spark_df. ...)\n"
   ]
  },
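  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# One possible solution (a sketch, not the only valid approach).\n",
    "# It assumes titanic.csv has a header row: header=True and\n",
    "# inferSchema=True let Spark pick up column names and types.\n",
    "\n",
    "pd_df = pd.read_csv(titanic_csv_path)\n",
    "print(\"pd_df shape:\", pd_df.shape)\n",
    "display(pd_df.head())\n",
    "\n",
    "spark_df = spark.read.csv(titanic_csv_path, header=True, inferSchema=True)\n",
    "spark_df.printSchema()\n",
    "print(\"spark_df count:\", spark_df.count())\n"
   ]
  },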
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 3. Split Data into Two Subsets for Merging/Joining\n",
    "# ==================================================\n",
    "# Split the dataset into two df's by column, then merge them \n",
    "# back together\n",
    "#   df_part1: subset of columns -> PassengerId, Name, Sex, Age\n",
    "#   df_part2: subset of columns -> PassengerId, Fare, Survived, Pclass\n",
    "#\n",
    "# \n",
    "\n",
    "# 3.1 Pandas Split\n",
    "# ----------------\n",
    "\n",
    "# TODO: Create two new DataFrames from pd_df:\n",
    "#    pd_part1 = pd_df[[\"PassengerId\", \"Name\", \"Sex\", \"Age\"]]\n",
    "#    pd_part2 = pd_df[...]\n",
    "\n",
    "# pd_part1 = ?\n",
    "# pd_part2 = ?\n",
    "\n",
    "# display(pd_part1.head())\n",
    "# display(pd_part2.head())\n"
   ]
  },
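  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# One possible solution (sketch): plain column selection. .copy() is a\n",
    "# precaution so later edits to the subsets don't touch pd_df.\n",
    "\n",
    "pd_part1 = pd_df[[\"PassengerId\", \"Name\", \"Sex\", \"Age\"]].copy()\n",
    "pd_part2 = pd_df[[\"PassengerId\", \"Fare\", \"Survived\", \"Pclass\"]].copy()\n",
    "\n",
    "display(pd_part1.head())\n",
    "display(pd_part2.head())\n"
   ]
  },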
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 3.2 Spark Split\n",
    "# ---------------\n",
    "# TODO: Create two new DataFrames from spark_df:\n",
    "#    spark_part1 = spark_df. ...\n",
    "#    spark_part2 = spark_df. ...\n",
    "\n",
    "# spark_part1 = ?\n",
    "# spark_part2 = ?\n",
    "\n",
    "# spark_part1.show(5)\n",
    "# spark_part2.show(5)\n"
   ]
  },
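  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# One possible solution (sketch): select() projects the listed columns;\n",
    "# Spark DataFrames are immutable, so each call returns a new DataFrame.\n",
    "\n",
    "spark_part1 = spark_df.select(\"PassengerId\", \"Name\", \"Sex\", \"Age\")\n",
    "spark_part2 = spark_df.select(\"PassengerId\", \"Fare\", \"Survived\", \"Pclass\")\n",
    "\n",
    "spark_part1.show(5)\n",
    "spark_part2.show(5)\n"
   ]
  },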
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 4. Merging / Joining the Split DataFrames\n",
    "# =========================================\n",
    "\n",
    "# 4.1 Merge in Pandas\n",
    "# -------------------\n",
    "# TODO: Merge pd_part1 and pd_part2 on \"PassengerId\"\n",
    "# We'll call the merged DataFrame \"pd_merged\".\n",
    "#\n",
    "\n",
    "# pd_merged = ?\n",
    "# print(\"pd_merged shape:\", pd_merged.shape)\n",
    "# display(pd_merged.head())\n"
   ]
  },
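  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# One possible solution (sketch): an inner merge on the shared key.\n",
    "# Every PassengerId appears in both parts, so the row count should\n",
    "# match the original DataFrame.\n",
    "\n",
    "pd_merged = pd_part1.merge(pd_part2, on=\"PassengerId\", how=\"inner\")\n",
    "print(\"pd_merged shape:\", pd_merged.shape)\n",
    "display(pd_merged.head())\n"
   ]
  },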
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 4.2 Join in Spark\n",
    "# -----------------\n",
    "# TODO: Join spark_part1 with spark_part2 on \"PassengerId\"\n",
    "# We'll call the joined DataFrame \"spark_merged\".\n",
    "#\n",
    "\n",
    "\n",
    "#Uncomment below\n",
    "# spark_merged = ?\n",
    "# print(\"spark_merged count:\", spark_merged.count())\n",
    "# spark_merged.show(5)\n",
    "# spark_merged.printSchema()\n"
   ]
  },
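  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# One possible solution (sketch): joining on the column name (not an\n",
    "# expression) keeps a single PassengerId column in the result.\n",
    "\n",
    "spark_merged = spark_part1.join(spark_part2, on=\"PassengerId\", how=\"inner\")\n",
    "print(\"spark_merged count:\", spark_merged.count())\n",
    "spark_merged.show(5)\n",
    "spark_merged.printSchema()\n"
   ]
  },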
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 5. Data Cleaning\n",
    "# ================\n",
    "# We'll focus on the merged DataFrames. For instance, drop rows that have missing\n",
    "# values in certain columns like 'Age' or 'Fare'.\n",
    "\n",
    "# 5.1 TODO: Pandas DataFrame cleaning\n",
    "# Create a cleaned version, 'pd_merged_clean',\n",
    "# dropping nulls in [\"Age\", \"Fare\"].\n",
    "\n",
    "# pd_merged_clean = ?\n",
    "\n",
    "# print(\"Before dropna:\", pd_merged.shape)\n",
    "# print(\"After dropna:\", pd_merged_clean.shape)\n",
    "# pd_merged_clean.head()\n"
   ]
  },
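  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# One possible solution (sketch): subset= limits the null check to the\n",
    "# named columns, so rows missing values elsewhere are kept.\n",
    "\n",
    "pd_merged_clean = pd_merged.dropna(subset=[\"Age\", \"Fare\"])\n",
    "print(\"Before dropna:\", pd_merged.shape)\n",
    "print(\"After dropna:\", pd_merged_clean.shape)\n",
    "pd_merged_clean.head()\n"
   ]
  },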
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 5.2 TODO: Spark DataFrame cleaning\n",
    "# Create a cleaned version, 'spark_merged_clean',\n",
    "# dropping nulls in [\"Age\", \"Fare\"].\n",
    "\n",
    "# spark_merged_clean = ?\n",
    "\n",
    "# print(\"spark_merged count BEFORE dropna:\", spark_merged.count())\n",
    "# print(\"spark_merged_clean count AFTER dropna:\", spark_merged_clean.count())\n",
    "# spark_merged_clean.show(5)\n"
   ]
  },
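  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# One possible solution (sketch): Spark's dropna mirrors the Pandas call\n",
    "# (df.na.drop(subset=...) is an equivalent spelling).\n",
    "\n",
    "spark_merged_clean = spark_merged.dropna(subset=[\"Age\", \"Fare\"])\n",
    "print(\"spark_merged count BEFORE dropna:\", spark_merged.count())\n",
    "print(\"spark_merged_clean count AFTER dropna:\", spark_merged_clean.count())\n",
    "spark_merged_clean.show(5)\n"
   ]
  },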
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 6. Basic Aggregations\n",
    "# =====================\n",
    "# Let's do a couple of group-by queries to glean insights.\n",
    "\n",
    "# 6.1 TODO: Pandas - Average fare by Pclass\n",
    "# e.g. group by 'Pclass' and compute mean fare in pd_merged_clean\n",
    "\n",
    "# pd_avg_fare = ?\n",
    "# pd_avg_fare\n"
   ]
  },
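  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# One possible solution (sketch): as_index=False keeps Pclass as a\n",
    "# column instead of the group index.\n",
    "\n",
    "pd_avg_fare = pd_merged_clean.groupby(\"Pclass\", as_index=False)[\"Fare\"].mean()\n",
    "pd_avg_fare\n"
   ]
  },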
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 6.2 TODO: Spark - Survival rate by Sex and Pclass\n",
    "# Average survival rate by Sex and Pclass\n",
    "#\n",
    "# spark_survival_rate = ?\n",
    "# spark_survival_rate.show()\n"
   ]
  },
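  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# One possible solution (sketch): Survived is 0/1, so its mean per\n",
    "# (Sex, Pclass) group is the survival rate.\n",
    "\n",
    "spark_survival_rate = (\n",
    "    spark_merged_clean.groupBy(\"Sex\", \"Pclass\")\n",
    "    .agg(F.avg(\"Survived\").alias(\"survival_rate\"))\n",
    "    .orderBy(\"Sex\", \"Pclass\")\n",
    ")\n",
    "spark_survival_rate.show()\n"
   ]
  },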
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 7. Writing to Parquet\n",
    "# =====================\n",
    "# We'll write the cleaned Spark DataFrame to a Parquet file (e.g. \"../titanic_merged_clean.parquet\").\n",
    "\n",
    "# 7.1 TODO: Write spark_merged_clean to Parquet\n",
    "# e.g., spark_merged_clean.write. ...\n",
    "\n",
    "\n"
   ]
  },
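  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# One possible solution (sketch): mode(\"overwrite\") makes the cell safe\n",
    "# to re-run; parquet_path is a name introduced here, matching the path\n",
    "# suggested above.\n",
    "\n",
    "parquet_path = os.path.join(\"..\", \"titanic_merged_clean.parquet\")\n",
    "spark_merged_clean.write.mode(\"overwrite\").parquet(parquet_path)\n"
   ]
  },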
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 7.2 TODO: Read it back into a new Spark DataFrame called 'spark_parquet_df'\n",
    "# spark_parquet_df = ?\n",
    "\n",
    "# print(\"spark_parquet_df count:\", spark_parquet_df.count())\n",
    "# spark_parquet_df.show(5)"
   ]
  },
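  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# One possible solution (sketch); assumes parquet_path was defined when\n",
    "# the file was written in section 7.1.\n",
    "\n",
    "spark_parquet_df = spark.read.parquet(parquet_path)\n",
    "print(\"spark_parquet_df count:\", spark_parquet_df.count())\n",
    "spark_parquet_df.show(5)\n"
   ]
  },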
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 8. Create a Temp View and Query\n",
    "# ========================================\n",
    "# 8.1 TODO: Create a temp view with 'spark_merged_clean' (e.g. \"titanic_merged\")\n",
    "# spark_merged_clean.createOrReplaceTempView(\"titanic_merged\")\n",
    "\n",
    "# 8.2 TODO: Spark SQL query examples\n",
    "\n",
    "#Get the average passenger age grouped by PClass\n",
    "# result_df = spark.sql(\"SELECT ... FROM titanic_merged GROUP BY ...\")\n",
    "# result_df.show()\n",
    "\n",
    "# Calculate the Pearson correlation between passenger Fare and Survival\n",
    "# using either SQL or another method\n",
    "# Corr.(X, Y) = cov(X,Y)/(std(X)*std(Y))\n",
    "# corr = ..."
   ]
  },
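  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# One possible solution (sketch): Spark SQL for the group-by; for the\n",
    "# correlation, DataFrame.stat.corr computes Pearson's r directly,\n",
    "# i.e. corr(X, Y) = cov(X, Y) / (std(X) * std(Y)).\n",
    "\n",
    "spark_merged_clean.createOrReplaceTempView(\"titanic_merged\")\n",
    "\n",
    "result_df = spark.sql(\n",
    "    \"SELECT Pclass, AVG(Age) AS avg_age FROM titanic_merged GROUP BY Pclass\"\n",
    ")\n",
    "result_df.show()\n",
    "\n",
    "corr = spark_merged_clean.stat.corr(\"Fare\", \"Survived\")\n",
    "print(\"Pearson corr(Fare, Survived):\", corr)\n"
   ]
  },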
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 9. Bonus 2: Transformer Embeddings UDF\n",
    "# ======================================\n",
    "\n",
    "from sentence_transformers import SentenceTransformer\n",
    "from pyspark.sql.functions import udf\n",
    "from pyspark.sql.types import ArrayType, FloatType\n",
    "\n",
    "# Load the pre-trained MiniLM sentence transformer model\n",
    "model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')\n",
    "\n",
    "# Define a UDF to compute the embeddings\n",
    "def compute_embedding(text):\n",
    "    '''\n",
    "    Your function goes here\n",
    "    '''\n",
    "    pass\n",
    "\n",
    "# Register the UDF in Spark\n",
    "embedding_udf = None #Replace with your udf\n",
    "\n",
    "# Apply the UDF to compute embeddings for each document\n",
    "df_with_embeddings = spark_merged_clean.withColumn('mini-lm-vectors', '...')"
   ]
  },
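  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# One possible solution (sketch). It reuses `model` loaded above; the\n",
    "# model is captured in the UDF's closure and shipped to executors, which\n",
    "# is fine locally but heavy on a real cluster (a pandas UDF with a\n",
    "# per-worker model cache would scale better).\n",
    "\n",
    "def compute_embedding(text):\n",
    "    # Guard against null names; encode() returns a numpy array, which we\n",
    "    # convert to a plain list of Python floats for Spark's ArrayType.\n",
    "    if text is None:\n",
    "        return None\n",
    "    return [float(x) for x in model.encode(text)]\n",
    "\n",
    "embedding_udf = udf(compute_embedding, ArrayType(FloatType()))\n",
    "\n",
    "df_with_embeddings = spark_merged_clean.withColumn(\n",
    "    'mini-lm-vectors', embedding_udf(F.col('Name'))\n",
    ")\n",
    "df_with_embeddings.select('Name', 'mini-lm-vectors').show(3, truncate=80)\n"
   ]
  }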
 ],
 "metadata": {
  "language_info": {
   "name": "python"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}