Arrive with a laptop; leave with a tiny, repeatable AI workflow.
In this hands-on session, you’ll try OpenCLIP visual search on a provided demo image set (no uploads needed) to see how vector search works, then run LLM experiments to clean a messy OCR page, extract people, places, dates, and subjects, and export the results as CSV/JSON. We’ll keep humans in the loop with simple review gates and fallback rules, discuss costs, privacy, and sustainability, and end with a 90-day mini-roadmap for your own collection.
Everything is browser-only and beginner-friendly; optional power-user prompts are available for those who want to go deeper.
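To preview the idea behind the OpenCLIP demo: vector search embeds each image as a vector and ranks images by similarity to a query vector. A minimal sketch in plain Python, using toy 3-dimensional vectors and cosine similarity (real OpenCLIP embeddings have hundreds of dimensions, and the filenames here are purely illustrative):

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def search(query_vec, index, top_k=3):
    # Rank stored image vectors by similarity to the query vector.
    scored = [(name, cosine(query_vec, vec)) for name, vec in index.items()]
    return sorted(scored, key=lambda item: item[1], reverse=True)[:top_k]

# Toy "embeddings" standing in for OpenCLIP image vectors.
index = {
    "harbor.jpg":   [0.9, 0.1, 0.0],
    "portrait.jpg": [0.1, 0.9, 0.2],
    "map.jpg":      [0.2, 0.2, 0.9],
}
query = [0.8, 0.2, 0.1]  # stand-in for the embedding of a text query
print(search(query, index, top_k=2))
```

The same ranking logic applies whether the query vector comes from text or from another image, which is what makes cross-modal search possible.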
Level of experience: none required / beginner (intermediate welcome)
Detailed timetable (120 min):
- 00–10: Primer (what gen-AI is/isn’t; goals)
- 10–25: OpenCLIP demo: vector-search basics; try queries on a hosted set
- 25–55: LLM transcript clean-up (before/after)
- 55–85: Metadata extraction → JSON → CSV (quick validation)
- 85–100: Group debrief (errors, ethics, trust)
- 100–115: From demo to pipeline: review gates, fallback rules, rough cost notes
- 115–120: Roadmap sprint: next steps & support needs
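The "review gates and fallback rules" in the pipeline segment can be as simple as routing each extracted field by confidence. A hedged sketch (the threshold, field names, and queue labels are illustrative, not prescribed by the session):

```python
def route(record, threshold=0.85):
    # Fallback rule: a missing value keeps the original text rather than
    # letting the model guess; low confidence goes to a human review queue.
    if record.get("value") in (None, ""):
        return "fallback"
    if record.get("confidence", 0.0) < threshold:
        return "human_review"
    return "auto_accept"

batch = [
    {"field": "date",  "value": "1923-05-01", "confidence": 0.97},
    {"field": "place", "value": "Gdańsk",     "confidence": 0.62},
    {"field": "name",  "value": "",           "confidence": 0.40},
]
for rec in batch:
    print(rec["field"], "->", route(rec))
```

Even this three-way split keeps a human decision between the model and the catalog record, which is the point of the exercise.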
Technical requirements:
- Laptop + modern browser; conference Wi-Fi
- Free ChatGPT or Gemini account helpful; pairing/observer mode for those who can’t sign up
- No installs / no USB; all tasks are browser-based
- Sample data (image set + OCR page) provided by instructor
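The metadata-extraction segment ends with JSON → CSV export plus quick validation, which needs only the standard library. A minimal sketch, assuming the LLM returns a JSON array of records (the field names and sample people here are made up for illustration):

```python
import csv
import io
import json

# Stand-in for an LLM response; real output would come from the chat tool.
llm_output = """[
  {"person": "Ada Byron", "place": "London", "date": "1843", "subject": "computing"},
  {"person": "Marie Curie", "place": "Paris", "date": "1903", "subject": "physics"}
]"""

records = json.loads(llm_output)
required = ["person", "place", "date", "subject"]

# Quick validation gate: reject any record missing an expected field.
for rec in records:
    missing = [key for key in required if key not in rec]
    if missing:
        raise ValueError(f"record missing fields: {missing}")

# Write validated records to CSV (in-memory here; a file works the same way).
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=required)
writer.writeheader()
writer.writerows(records)
print(buf.getvalue())
```

Because `json.loads` raises on malformed output and the field check raises on incomplete records, bad LLM responses fail loudly instead of silently producing a broken spreadsheet.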