Notes on Building and Shipping

Practical writeups covering implementation details, product decisions, and tools that matter in real projects.

Page 2 of 3

  1. Why Quantization Is About to Make Local AI Explode

    Local AI adoption is accelerating because quantization shifts model memory and latency into ranges that regular developer hardware can actually sustain.

local-ai · quantization · ollama · cpu-inference
  2. The IDE Is Dying. Agent Environments Are Replacing It

    The center of software development is shifting from writing every line manually to directing agents across editor, terminal, and cloud environments.

ai-coding · codex · claude-code · cursor
  3. When Declarative Systems Break, and Where Imperative Still Wins

    Declarative systems scale intent and reliability, but they introduce failure modes where explicit imperative control is still the safer engineering choice.

declarative-systems · imperative-control · architecture · infrastructure
  4. Databricks Medallion Architecture in Production

    A practical guide to running bronze, silver, and gold layers in production when schemas drift, sources are messy, and reliability actually matters.

databricks · data-engineering · medallion · etl
  5. Why Declarative Systems Are Taking Over Everything

    Declarative systems are winning because teams can define intent once and let reconcilers handle execution, drift correction, and ordering at scale.

declarative-systems · infrastructure · frontend · data
  6. Letting AI Design Your Data Pipelines, and What Almost Broke

    AI can accelerate pipeline design, but production reliability still depends on idempotency, contract checks, and failure-path reviews.

ai · data-engineering · architecture · reliability
  7. Column-Level Security vs Tokenization in Healthcare Pipelines

    Column-level security and tokenization solve different risks. A practical healthcare architecture uses both with explicit boundaries.

healthcare · security · tokenization · governance
  8. Are We Close to Autonomous Pentesting?

    AI pentesting tools are improving fast, but full autonomy still breaks down at reasoning and context management, not just tool execution.

cybersecurity · ai · pentesting · agentic-systems
  9. Running Modern LLMs Without a GPU

    CPU-first LLM systems can run in production when request shape, queue control, and latency budgets are treated as product requirements.

llm · inference · cpu · local-ai
  10. Healthcare Data Is a Graph Problem

    Healthcare analytics breaks when relationship logic is scattered across ad hoc joins. Graph modeling principles make clinical context more reliable.

healthcare · data-modeling · graph · interoperability