Dynamically Typed

By Leon Overweel (Dynamically Typed)

My thoughts and links on productized artificial intelligence, machine learning technology, and AI projects for the climate crisis. Delivered to your inbox every second Sunday.

#76: Dynamically Typed Hiatus

Hey everyone, Leon here. I don't have a regular edition of Dynamically Typed for you today because, as you can probably guess from the subject, I'm taking a break from writing the newsletter. I've written a DT every second weekend for nearly three years now, …


#75: OpenAI's book summaries for the alignment problem, Translatotron 2, and AI-generated movie posters

📚 Wu et al. (2021) at OpenAI used a fine-tuned GPT-3 to recursively summarize books. The model first separately summarizes sections of a book, then concatenates those summaries together and summarizes the result, and continues the process until it converges o…
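The recursive procedure described above can be sketched in a few lines. This is a hypothetical illustration, not OpenAI's implementation: the `summarize` function stands in for a call to the fine-tuned GPT-3 model (here it just truncates its input to simulate compression).

```python
# Sketch of recursive book summarization as described in Wu et al. (2021).
# `summarize` is a stand-in for the fine-tuned GPT-3 summarizer.

def summarize(text: str) -> str:
    # Placeholder model call: simulate compression by halving the text.
    return text[: max(1, len(text) // 2)]

def chunk(text: str, size: int) -> list:
    # Split the book into fixed-size sections.
    return [text[i : i + size] for i in range(0, len(text), size)]

def recursive_summary(book: str, chunk_size: int = 1000, target_len: int = 500) -> str:
    text = book
    # Summarize each section, concatenate the summaries, and repeat
    # until the result is short enough.
    while len(text) > target_len:
        sections = chunk(text, chunk_size)
        text = " ".join(summarize(s) for s in sections)
    return text
```

With a real model in place of the placeholder, each pass trades detail for length until a single book-level summary remains.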


#74: Apple's privacy-focused facial recognition, DeepMind's multimodal Perceiver IO, and sea ice forecasting with IceNet

📱I first covered Cyril Diagne's AR cut and paste tool in May 2020 when it was a cool tech demo on Twitter, and then again when he productized it as ClipDrop in October. As a reminder, ClipDrop lets you take a picture of an object which it then segments ("clip…


#73: Merlin for sound-based bird identification, CCAI's big climate grant, and finger spelling with AI

🦜 Merlin, an app by the Cornell Lab of Ornithology, identifies birds based on their songs and calls. The app's Sound ID feature currently supports 450+ birds in the US and Canada. It works by visualizing an audio recording of a bird's song or call as a spectr…


#72: Towards talking to computers with OpenAI Codex

About seven years ago, when I was a junior in high school, I built a “self-learning natural language search engine” called Wykki. It used “natural language” in that it was able to separate a user’s prompt like “How old is Barack Obama” into a question stub (“…


#71: The AlphaFold Protein Structure Database, OpenAI Triton, and a CLIP "Tour of the Sacred Library"

🧬 AlphaFold, DeepMind’s protein folding neural network, represented a breakthrough in structural biology when it was released in December. Given a protein's sequenced "code" of amino acid chains, the model predicts what shape the molecule "folds" itself into …


#70: Karpathy on Tesla Autopilot at CVPR'21, Distill's hiatus, and tattling on Flemish Scrollers using computer vision

Karpathy on Tesla Autopilot at CVPR '21: Tesla's head of AI Andrej Karpathy gave a keynote at the CVPR 2021 Workshop on Autonomous Driving with updates on the company's Autopilot self-driving system. Just like his talk last year at Scaled ML 2020, this was a gre…


#69: GitHub Copilot + OpenAI Codex = Microsoft synergy?

GitHub Copilot: GitHub previewed Copilot, "your AI pair programmer," this week. Accessed through a Visual Studio Code extension and powered by OpenAI's brand-new Codex language model, it auto-suggests "whole lines or entire functions right inside your editor." …


#68: What's socially acceptable for a language model to say?

What's socially acceptable for a language model to say? OpenAI's Irene Solaiman and Christy Dennison published a very interesting blog post on improving language model behavior around socially sensitive topics. They developed a process for finetuning models li…


#67: Has AI helped fight the COVID-19 pandemic?

Artificial Intelligence and COVID-19: Although my daily new arXiv submissions notification emails have been full of papers about fighting COVID-19 with AI for the past year and a half, I've so far decided against writing about them in DT. From early on in the p…


#66: Google's controversial dermatology app, Twitter's AI feature removal, and a Dropbox image search deep-dive

🩺 Google previewed its AI-powered dermatology assist tool at I/O, its yearly developer conference. Integrated with Search, the app guides you through taking photos of your skin at different angles, and then uses a deep learning model published in Nature Medic…


#65: Live (jargon) transcription in Microsoft Teams, the EU's Artificial Intelligence Act, and NVIDIA's AI Art Gallery

🇪🇺 The European Commission has released its Artificial Intelligence Act, "the first ever legal framework on AI, which addresses the risks of AI and positions Europe to play a leading role globally." The proposal covers software powered by anything from machin…


#64: Google's tips for reducing the CO2 emissions of training AI models

David Patterson wrote a blog post for Google's The Keyword blog on how the company is minimizing AI's carbon footprint, mostly covering his new paper on the topic: Carbon Emissions and Large Neural Network Training (Patterson et al. 2021). The paper went live…


#63: Three times Distill: Multimodal neurons, branch specialization, and weight banding

🕸 Distill #1: Multimodal Neurons in Artificial Neural Networks by Goh et al. (2021), which investigates CLIP, OpenAI's multimodal neural network that learned to match images on the internet to text snippets that surround them. (Probably) unlike older image cl…


#62: 4.5 billion GPT-3 words a day, Moore's Law for Everything, and Nothing Breaks like an AI heart

📱 After we saw GPT-3 — OpenAI's gargantuan language model that doesn't need finetuning — used for lots of cool demos, the model's API now powers 300+ apps and outputs an average of 4.5 billion (!) words per day. OpenAI published a blog post describing some of…


#61: The climate opportunity of gargantuan AI models

Climate change and the energy transition: Climate change is our generation's biggest challenge, and the transitions needed to reduce emissions and prevent it from becoming catastrophic will affect almost every part of society in the coming decades. On their exc…


#60: Google Pixel car crash detection, Model Search for TensorFlow, and a movie frame search engine

🚘 Apparently, Google's Pixel phones can detect car crashes. This was making the rounds on Twitter after a Reddit user wrote on r/GooglePixel that car crash detection saved them from hours of suffering because they had an accident on their own property, where …


#59: A visual search engine, Google's camera-based vitals measurements, and two AI lab tooling long reads

❤️ Google is adding camera-based vitals measurement to its Fit app on Android. Initially rolling out to Pixel phones, the new feature can measure your respiratory (breathing) rate by looking at your face and upper torso through the selfie camera — something t…


#58: Partnership on AI's AI Incident Database, and lots of productized AI

The Partnership on AI to Benefit People and Society (PAI) is an international coalition of organizations with the mission "to shape best practices, research, and public dialogue about AI’s benefits for people and society." Its 100+ member organizations cover a…


#57: DALL·E and CLIP, OpenAI's new multimodal neural networks

📸 Creative Commons photo sharing site Unsplash (where I also have a profile!) has launched a new feature: Visual Search, similar to Google's search by image. If you've found a photo you'd like to include in a blog post or presentation, for example, but the im…