Data Science & Visualization
Statistics and storytelling. Distributions, dashboards, charts that communicate, and the analysis discipline behind defensible product decisions.
skfolio: Build & Tune Portfolio Optimizers in Python
skfolio's scikit-learn API lets you construct, validate, and compare 18+ portfolio strategies—from baselines to HRP, Black-Litterman, factors, and tuned models—on S&P 500 returns with walk-forward CV and GridSearchCV.
Reproduce 2011 Sentiment Word Vectors in Python
Build sentiment-aware word embeddings from IMDb reviews via semantic learning with star ratings and linear SVM classification, reproducing Maas et al. (2011); the simple method rivals modern LLMs.
Scanpy Pipeline for PBMC scRNA-seq Clustering & Trajectories
Process PBMC-3k data with Scanpy: filter cells (min 200 genes, <2500 genes, <5% mt), remove Scrublet doublets, select HVGs (min_mean=0.0125, max_mean=3, min_disp=0.5), Leiden cluster at res=0.5, annotate via markers, infer PAGA/DPT trajectories, score IFN response.
NMI Bias Favors Complex Clusters Over Insight
Normalized Mutual Information (NMI) rewards over-segmentation and complexity in clustering, inflating scores for intuitively poor algorithms and distorting AI evaluations.
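A minimal, self-contained illustration of the bias this entry describes, using a pure-Python NMI (arithmetic-mean normalization); the labels are synthetic. Two random clusterings carry no information about the ground truth, yet the over-segmented one scores noticeably higher:

```python
import math
import random
from collections import Counter

def nmi(labels_a, labels_b):
    """Normalized mutual information with arithmetic-mean normalization."""
    n = len(labels_a)
    pa = Counter(labels_a)
    pb = Counter(labels_b)
    pab = Counter(zip(labels_a, labels_b))
    # Mutual information from the joint and marginal label counts.
    mi = sum((c / n) * math.log((c / n) / ((pa[a] / n) * (pb[b] / n)))
             for (a, b), c in pab.items())
    ha = -sum((c / n) * math.log(c / n) for c in pa.values())
    hb = -sum((c / n) * math.log(c / n) for c in pb.values())
    denom = (ha + hb) / 2
    return mi / denom if denom > 0 else 0.0

rng = random.Random(0)
truth = [0] * 50 + [1] * 50                        # two balanced true classes

coarse = [rng.randrange(2) for _ in range(100)]    # random, 2 clusters
fine = [rng.randrange(50) for _ in range(100)]     # random, 50 clusters

# Both clusterings are pure noise, yet the finer one gets a higher NMI.
print(nmi(truth, coarse), nmi(truth, fine))
```

Chance-corrected metrics such as adjusted mutual information subtract the expected chance-level agreement and remove most of this inflation.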
Balance Linear Simplicity and Nonlinear Flexibility to Avoid Fit Failures
Linear models underfit nonlinear data with rigid straight boundaries; nonlinear models overfit by memorizing noise with wiggly curves. Fix via bias-variance tradeoff for optimal generalization.
Time Series Fundamentals Before Modeling
Time series data depends on order—avoid shuffling or random splits. Decompose into trend, seasonality, cycles, noise; ensure stationarity (constant mean/variance/autocovariance) via differencing, logs, detrending; diagnose with ACF/PACF for AR/MA patterns.
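A small sketch of the differencing step (synthetic data, not from the article): a random walk is non-stationary and shows near-perfect lag-1 autocorrelation, while its first difference behaves like white noise.

```python
import numpy as np

def acf1(x):
    """Sample lag-1 autocorrelation."""
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

rng = np.random.default_rng(42)
walk = np.cumsum(rng.normal(size=500))  # random walk: non-stationary
diffed = np.diff(walk)                  # first difference: ~white noise

print(acf1(walk), acf1(diffed))
```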
Triple YOLO Recall with Adaptive Post-Processing
In crowded scenes, set YOLO confidence to 0.05, then filter dynamically by frame score distribution, box size (lower threshold for <5% height boxes), and pose keypoints (nose + shoulders) to detect 3x more people without retraining.
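The post-processing logic can be sketched framework-free (function name, thresholds, and the dummy detections below are illustrative, not the article's exact values): keep a very low base confidence, derive a per-frame threshold from the score distribution, and relax it for small boxes.

```python
from statistics import mean, stdev

def adaptive_filter(dets, frame_h, base=0.05, small_frac=0.05, small_bonus=0.5):
    """dets: list of (score, box_height_px).
    - per-frame threshold: mean - 0.5*std of this frame's scores (floor = base)
    - boxes shorter than small_frac * frame height get a relaxed threshold."""
    scores = [s for s, _ in dets]
    thr = max(base, mean(scores) - 0.5 * stdev(scores)) if len(scores) > 1 else base
    kept = []
    for score, h in dets:
        local_thr = thr * small_bonus if h < small_frac * frame_h else thr
        if score >= local_thr:
            kept.append((score, h))
    return kept

# Dummy frame: three big confident boxes, two small low-score ones.
dets = [(0.9, 300), (0.4, 280), (0.12, 25), (0.06, 20), (0.5, 200)]
print(adaptive_filter(dets, frame_h=720))
```

The small 0.12-score box survives the relaxed threshold while the 0.06-score one is still rejected; a fixed 0.25 cutoff would have dropped both.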
Synthetic Data Exposes Hidden ML Bias Before Production
Real training data hides bias via underrepresentation (e.g., rural at 9%), proxies, and skewed labels; generate synthetic data with controlled segments (e.g., rural at 25%) to reveal it through disaggregated AUC drops (0.791 to 0.768) and disparate impact <0.8, then retrain on mixed data to fix.
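The disparate-impact check mentioned here (the "four-fifths rule") is simple to compute; the predictions and group labels below are made up for illustration.

```python
def disparate_impact(preds, groups, protected, reference):
    """Ratio of positive-outcome rates: protected group vs reference group.
    Values below 0.8 flag potential bias under the four-fifths rule."""
    def rate(g):
        sel = [p for p, grp in zip(preds, groups) if grp == g]
        return sum(sel) / len(sel)
    return rate(protected) / rate(reference)

# Hypothetical model outputs: 1 = approved, grouped by segment.
preds  = [1, 0, 0, 1, 0,  1, 1, 1, 0, 1]
groups = ["rural"] * 5 + ["urban"] * 5
print(disparate_impact(preds, groups, "rural", "urban"))  # → 0.5
```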
Momentum Dampens GD Zigzags via Gradient Averaging
On anisotropic loss surfaces (condition number 100), vanilla GD zigzags and takes 185 steps to converge (loss <0.001); momentum with β=0.9 converges in 159 steps by canceling steep-direction oscillations while accelerating flat directions—but β=0.99 diverges.
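The effect is easy to reproduce on a toy quadratic with condition number 100 (this setup and its step counts are illustrative and won't match the article's 185/159; the learning rate is an assumption): momentum's velocity averages out the oscillating steep-direction gradients while accumulating speed along the flat one.

```python
def run(beta, lr=0.01, tol=1e-3, max_steps=10_000):
    """Heavy-ball GD on f(x, y) = 0.5*(x**2 + 100*y**2); beta=0 is vanilla GD."""
    x, y = 1.0, 1.0
    vx = vy = 0.0
    for step in range(1, max_steps + 1):
        gx, gy = x, 100.0 * y            # gradient of f
        vx = beta * vx + gx              # velocity averages recent gradients
        vy = beta * vy + gy
        x -= lr * vx
        y -= lr * vy
        if 0.5 * (x * x + 100.0 * y * y) < tol:
            return step
    return max_steps

print("vanilla :", run(beta=0.0))
print("momentum:", run(beta=0.9))
```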
Track One User-Feature Pair to Catch ML Pipeline Bugs
A recommender scoring 0.91 AUC offline failed in prod after 4 days because its user_30d_purchases feature was 21 hours stale. Track user U-9842 and this feature through every pipeline layer to expose and prevent such mismatches.
Production ML Pipelines with ZenML: Custom Materializers & HPO
ZenML enables end-to-end ML pipelines with custom DatasetBundle materializers for metadata-rich serialization, fan-out over 4 hyperparameter configs for RandomForest/GradientBoosting/LogisticRegression, fan-in best-model selection by ROC AUC, full artifact tracking, and cache-driven reproducibility on breast cancer dataset.
Stream Parse TaskTrove Dataset for AI Task Insights
Stream the multi-GB TaskTrove dataset without a full download; parse gzip-compressed tar/zip/JSON binaries to analyze sources, compressed sizes (p50, in KB), and filenames, and detect verifiers for RL-ready tasks via multi-signal heuristics.
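The streaming pattern relies on the standard library alone; here is a sketch with an in-memory archive standing in for the remote file (the member name and JSON payload are made up). `tarfile`'s `"r|gz"` mode reads members sequentially without seeking, so the same loop works on a non-seekable network stream.

```python
import io
import json
import tarfile

# Build a tiny .tar.gz in memory to stand in for a remote archive.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:gz") as tar:
    payload = json.dumps({"task": "sort", "verifier": "unit_test"}).encode()
    info = tarfile.TarInfo(name="tasks/0001.json")
    info.size = len(payload)
    tar.addfile(info, io.BytesIO(payload))

# Stream-parse: "r|gz" iterates members in order, no random access needed.
buf.seek(0)
records = []
with tarfile.open(fileobj=buf, mode="r|gz") as tar:
    for member in tar:
        if member.name.endswith(".json"):
            records.append(json.loads(tar.extractfile(member).read()))

print(records[0]["verifier"])
```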
Build Queryable Options IV DB from Live API Polls
Capture SpiderRock LiveImpliedQuote snapshots for TSLA every 10s into SQLite: append full history for audits (12k+ rows in 2min), upsert latest view per option_key. Query to reconstruct vol smiles and track ATM IV/skew changes over time.
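The append-plus-upsert storage pattern is pure stdlib `sqlite3`; the schema and values below are a simplified sketch, not SpiderRock's actual payload fields.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE iv_history (        -- append-only audit log
        ts TEXT, option_key TEXT, atm_iv REAL
    );
    CREATE TABLE iv_latest (         -- one row per option, upserted
        option_key TEXT PRIMARY KEY, ts TEXT, atm_iv REAL
    );
""")

def record(ts, option_key, atm_iv):
    """Append every snapshot to history; keep only the newest per option."""
    conn.execute("INSERT INTO iv_history VALUES (?, ?, ?)", (ts, option_key, atm_iv))
    conn.execute("""
        INSERT INTO iv_latest VALUES (?, ?, ?)
        ON CONFLICT(option_key) DO UPDATE SET ts = excluded.ts, atm_iv = excluded.atm_iv
    """, (option_key, ts, atm_iv))

record("10:00:00", "TSLA_250C", 0.52)
record("10:00:10", "TSLA_250C", 0.53)

print(conn.execute("SELECT COUNT(*) FROM iv_history").fetchone()[0])   # full audit trail
print(conn.execute("SELECT atm_iv FROM iv_latest").fetchone()[0])      # latest view
```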
Data Science Splits: Engineer Pipelines or Lead Decisions
Data scientist roles are dividing into technical data engineering (SQL up 18%, ETL up 18%) and strategic decision-making; AI automates mid-level generalist tasks, squeezing the middle—specialize in one side now.
Data And Beyond Grows to 49K Views, AI Topics Dominate
April 2026 stats: 49K views, 14.8K reads, +90 followers to 2K. Top stories cover Spark optimization, Claude AI leaks, clustering pitfalls, and RAG vs MCP.
Decompose Signals into Frequencies for Easier Analysis
Fourier transform breaks time-domain signals into frequency components, exposing periodic patterns buried in noise for filtering, compression, and fault detection—reversible and efficient via FFT.
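A minimal NumPy demonstration of the idea (the signal is synthetic): two sinusoids hidden in the time domain show up as the two strongest frequency bins, and the inverse transform reconstructs the signal exactly.

```python
import numpy as np

fs, n = 256, 256                       # 1 second sampled at 256 Hz
t = np.arange(n) / fs
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)

spectrum = np.fft.rfft(signal)         # real-input FFT
freqs = np.fft.rfftfreq(n, d=1 / fs)

# The two strongest bins recover the hidden 5 Hz and 40 Hz components.
top2 = sorted(freqs[np.argsort(np.abs(spectrum))[-2:]])
print(top2)                            # → [5.0, 40.0]

# Reversible: the inverse transform reconstructs the original samples.
print(np.allclose(np.fft.irfft(spectrum, n), signal))  # → True
```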
ETL Pipeline Turns Messy HR Data into Star Schema Insights
Build a scalable ETL pipeline that restructures flat HR data into star-schema fact and dimension tables, enabling analysis of manager performance, diversity (60% White, 56.6% female), recruitment channels, and 71%-accurate attrition prediction in which tenure drives 47% of decisions.
Automate Weekly PDF Reports with Python ETL Pipeline
Load/merge e-commerce datasets, compute revenue/profit/AOV/growth metrics, generate PDF with matplotlib/ReportLab charts and rule-based insights, email via smtplib, schedule weekly via GitHub Actions cron.
Preprocessing Swings CNN Accuracy from 65% to 87% on CIFAR-10
Raw CIFAR-10 pixels yield 65% test accuracy; normalization/standardization lift to 69%; geometric augmentation maintains ~67%; photometric brightness/contrast crashes to 20%; combined pipeline with deeper CNN hits 87%.
Launch Data Governance via Pilot Projects, Not Big Plans
Start data governance with a narrow pilot project to prove value quickly, then scale incrementally while building self-sustaining mechanisms akin to legislation, a judiciary, and enforcement.
TabPFN Beats Tree Models on Tabular Accuracy with Zero Training
On a 5k-sample tabular dataset, TabPFN hits 98.8% accuracy vs CatBoost's 96.7% and Random Forest's 95.5%, with 0.47s setup but 2.21s inference due to in-context learning at predict time.
Cohort Analysis Exposes Donor Retention Risks
Rising aggregate retention (27% to 42%) hides a leaky bathtub: 75% of 2025 revenue comes from the 2024-2025 cohorts, with older cohorts contributing <2% each, risking collapse without a long-term donor base.
Redash: SQL-First Open-Source BI for Dev Dashboards
SQL-proficient devs use Redash to query multiple sources (Postgres, BigQuery, etc.), visualize results, and build shareable dashboards in minutes via self-hosted Docker—no CSVs or pricey tools needed.
Cleveland's Enduring Impact on Data Viz and Science
William Cleveland pioneered data visualization as a rigorous discipline via graphical perception studies and books like The Elements of Graphing Data, while outlining data science's foundations in 2001, shaping tools data workers use today.
Build FNO & PINN Surrogates for Darcy Flow with PhysicsNeMo
Step-by-step Colab guide: generate 2D Darcy datasets via GRF & finite differences, implement/train FNO operators and PINNs, add CNN baselines, benchmark inference speeds for fast physics surrogates.
DuckDB-Python: Fast Analytics Pipelines with Zero-Copy DataFrames
Integrate DuckDB with Python for zero-copy queries on Pandas/Polars/Arrow, advanced SQL (windows, UDFs, CTEs), bulk inserts (50k rows instantly), Parquet partitioning, and 10x+ Pandas speedups on 1M-row aggregations.
Snowflake-Native Fraud ML Pipeline: Train to Monitor
Build end-to-end fraud detection with XGBoost in Snowflake ML—data loading to drift monitoring—avoiding data gravity, handling 0.5-2% imbalance via scale_pos_weight=27.6, achieving ROC-AUC=0.7275 and optimal F1=0.5874 at threshold=0.58.
Minimal NumPy RNN for Char-Level Text Gen
Build a vanilla RNN language model from scratch in ~170 lines of NumPy: processes text chunks of 25 chars, trains with BPTT and Adagrad, generates samples after 100 iterations.
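The core of such a model is one recurrence; a forward-pass sketch with assumed sizes (vocab 27, hidden 16 — not necessarily the article's):

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, hidden = 27, 16                 # assumed sizes for this sketch

# Parameters of a vanilla char-level RNN cell, small random init.
Wxh = rng.normal(0, 0.01, (hidden, vocab))
Whh = rng.normal(0, 0.01, (hidden, hidden))
Why = rng.normal(0, 0.01, (vocab, hidden))
bh = np.zeros(hidden)
by = np.zeros(vocab)

def step(char_ix, h):
    """One forward step: one-hot input -> new hidden state -> next-char probs."""
    x = np.zeros(vocab)
    x[char_ix] = 1.0
    h = np.tanh(Wxh @ x + Whh @ h + bh)
    logits = Why @ h + by
    probs = np.exp(logits - logits.max())   # stable softmax
    return h, probs / probs.sum()

h = np.zeros(hidden)
for ch in [0, 1, 2]:                   # feed a 3-char chunk
    h, probs = step(ch, h)
print(probs.shape, float(probs.sum()))
```

Training backpropagates through these steps over 25-char chunks (BPTT) and applies per-parameter Adagrad scaling.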
NES Optimizes a Quadratic Bowl via Gaussian Perturbations
Sample 50 perturbed weights from N(w, 0.1), standardize their rewards, then update w by 0.001/(50·0.1) · Σ(noise · standardized reward) to converge in 300 iterations.
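The update rule described above can be run end-to-end in a few lines (the 3-parameter bowl and its target point are illustrative; hyperparameters match the entry: npop=50, σ=0.1, α=0.001, 300 iterations):

```python
import numpy as np

rng = np.random.default_rng(0)
target = np.array([0.5, 0.1, -0.3])            # assumed optimum for the demo
reward = lambda w: -np.sum((w - target) ** 2)  # quadratic bowl, max at target

npop, sigma, alpha = 50, 0.1, 0.001
w = rng.normal(size=3)                         # random start
start = reward(w)

for _ in range(300):
    noise = rng.normal(size=(npop, 3))         # perturbation directions
    rewards = np.array([reward(w + sigma * eps) for eps in noise])
    adv = (rewards - rewards.mean()) / rewards.std()   # standardized rewards
    w += alpha / (npop * sigma) * noise.T @ adv        # NES gradient estimate

print(start, reward(w))                        # reward climbs toward 0
```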
NLP Progression: Word Clouds to Knowledge Graphs
Build semantic systems from text by progressing: word cloud (frequency) → TF-IDF (importance) → co-occurrence graph (relationships) → knowledge graph (durable meaning). Skip intermediates and your graph stores noise.
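The first three rungs of that ladder fit in a few lines of stdlib Python (the toy corpus is illustrative): raw frequency, TF-IDF importance, then a co-occurrence edge list that a knowledge graph would later refine into typed relations.

```python
import math
from collections import Counter
from itertools import combinations

docs = [
    "graph stores meaning",
    "word cloud shows frequency",
    "frequency alone stores noise",
]
tokenized = [d.split() for d in docs]

# 1) Word-cloud level: raw frequency.
freq = Counter(w for doc in tokenized for w in doc)

# 2) TF-IDF level: down-weight words that appear in many documents.
def tfidf(word, doc):
    tf = doc.count(word) / len(doc)
    df = sum(word in d for d in tokenized)
    return tf * math.log(len(tokenized) / df)

# 3) Co-occurrence-graph level: weighted edges between words sharing a doc.
edges = Counter()
for doc in tokenized:
    for a, b in combinations(sorted(set(doc)), 2):
        edges[(a, b)] += 1

print(freq.most_common(2))
print(round(tfidf("noise", tokenized[2]), 3))
print(edges[("frequency", "noise")])
```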