Synthetic data: the Phi models, the newest Stable Diffusion.
MoEs: MegaBlocks, LLaVA-MoE (see the router sketch after this list).
PEFT: e.g. DoRA, VeRA (Vector-based Random Matrix Adaptation); see the adapter sketch after this list.
Instruction tuning: Alpaca, the MoE + instruction-tuning paper from Google.
VLMs: Apple and Hugging Face papers.
LLM embeddings: e.g. LLM2Vec.
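A minimal sketch of the top-k token routing at the heart of MoE layers, assuming PyTorch; this is a generic illustration, not MegaBlocks' or LLaVA-MoE's actual code (the `TopKRouter` name and sizes are mine).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKRouter(nn.Module):
    """Minimal top-k token router of the kind MoE layers use;
    a generic sketch, not a specific library's implementation."""
    def __init__(self, d_model: int, n_experts: int, k: int = 2):
        super().__init__()
        self.gate = nn.Linear(d_model, n_experts)  # learned routing scores
        self.k = k

    def forward(self, x):                       # x: (tokens, d_model)
        scores = self.gate(x)                   # (tokens, n_experts)
        topv, topi = scores.topk(self.k, dim=-1)
        weights = F.softmax(topv, dim=-1)       # renormalize over the chosen experts
        return weights, topi                    # mixing weights and expert indices

router = TopKRouter(d_model=64, n_experts=8)
w, idx = router(torch.randn(5, 64))
print(idx.shape)  # torch.Size([5, 2]): each token's top-2 experts
```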
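And a hedged sketch of a DoRA-style adapter, again in PyTorch: a frozen base weight plus a trainable low-rank update, with the merged weight rescaled column-wise to a learned magnitude. This is my reading of the DoRA idea; `DoRALinear` and its initialization choices are illustrative, not the paper's reference implementation.

```python
import torch
import torch.nn as nn

class DoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        out_f, in_f = base.weight.shape
        # frozen pretrained weight W0
        self.weight = nn.Parameter(base.weight.detach().clone(), requires_grad=False)
        self.A = nn.Parameter(torch.randn(rank, in_f) * 0.01)  # low-rank factors
        self.B = nn.Parameter(torch.zeros(out_f, rank))        # B=0: adapter starts as a no-op
        # learned magnitude, initialized to W0's per-column norms
        self.m = nn.Parameter(self.weight.norm(dim=0, keepdim=True).clone())

    def forward(self, x):
        w = self.weight + self.B @ self.A             # direction: W0 + BA
        w = self.m * w / w.norm(dim=0, keepdim=True)  # rescale columns to learned magnitude
        return x @ w.T

layer = DoRALinear(nn.Linear(32, 16), rank=4)
print(layer(torch.randn(2, 32)).shape)  # torch.Size([2, 16])
```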
Roam notes on the Dwarkesh Patel conversation with Sholto Douglas and Trenton Bricken:
AI scaling
grokking
monosemanticity
sparsity penalty
Distilled models
"one hot vector that says, “this is the token that you should have predicted.”"
chain-of-thought as adaptive compute.
key/value weights in attention
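A tiny sketch of that one-hot training signal, assuming PyTorch; the label is just the index of the token the model should have predicted, and cross-entropy against the (conceptual) one-hot vector is what gets minimized. The numbers are made up for illustration.

```python
import torch
import torch.nn.functional as F

vocab_size = 8                          # toy vocabulary
logits = torch.randn(1, vocab_size)     # model's scores for the next token
target = torch.tensor([3])              # the token it should have predicted

# The label is conceptually a one-hot vector over the vocabulary;
# cross-entropy against it reduces to -log p(correct token).
one_hot = F.one_hot(target, vocab_size).float()
loss_manual = -(one_hot * F.log_softmax(logits, dim=-1)).sum()
loss_builtin = F.cross_entropy(logits, target)   # same value, computed from the index
print(loss_manual.item(), loss_builtin.item())
```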
Vision Transformers compared with CNNs: the ViT's need for large datasets, and their differing inductive biases. Swin (shifted-window) ViT.
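For context on the inductive-bias point, here is the standard ViT tokenization step, commonly implemented as a single strided conv: the image becomes a sequence of patch tokens and everything after is attention, which bakes in far less locality/translation structure than a deep CNN stack. Sizes are the usual ViT-Base defaults, used here as assumptions.

```python
import torch
import torch.nn as nn

# 16x16 patches -> 768-d tokens (ViT-Base-style defaults)
patch_embed = nn.Conv2d(3, 768, kernel_size=16, stride=16)

img = torch.randn(1, 3, 224, 224)
tokens = patch_embed(img).flatten(2).transpose(1, 2)
print(tokens.shape)  # torch.Size([1, 196, 768]): 14x14 patch tokens
```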
"CNN is even backbones behind some of the non-grid signal processing networks like equivariant nn, graphCNN and pointnet for point cloud etc."
Alternate and hybrid architectures possibly being used by Tesla FSD instead of CNNs.
A whole-slide foundation model for digital pathology from real-world data: GigaPath, a novel vision transformer for pretraining large pathology foundation models on gigapixel pathology slides.
Building human-level intelligence with a neuroanatomy approach, or the "parts of the brain" approach, where you build an artificial cerebral cortex piece by piece: "1. LLMs are basically the prefrontal cortex. 2. Tesla built something akin to a parietal and occipital cortex." (Scaling-theory podcast with Yann LeCun.)
Foundation models will be customised per use case instead of one giant catch-all model across languages. Building AI models is faster and cheaper than you probably think: Y Combinator companies used two levers, better architectures or less data, to reduce computation.
Models are presumed to have high impact when the cumulative amount of compute used for their training exceeds 10^25 floating-point operations (FLOPs),[23] per EU law, but as you learned just above, builders will try not to use much compute.
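A back-of-the-envelope check against that 10^25-FLOP threshold, using the common approximation that training compute is about 6 x parameters x tokens; the model and token counts below are illustrative assumptions, not any lab's disclosed figures.

```python
# Training-compute rule of thumb: C ~ 6 * N * D
# (N = parameter count, D = training tokens). Illustrative numbers only.
THRESHOLD = 1e25

def train_flops(params: float, tokens: float) -> float:
    return 6 * params * tokens

for name, n, d in [("7B params, 2T tokens", 7e9, 2e12),
                   ("70B params, 15T tokens", 70e9, 15e12)]:
    c = train_flops(n, d)
    print(f"{name}: {c:.1e} FLOPs -> {'above' if c > THRESHOLD else 'below'} threshold")
```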
Regulation could be a threat to Meta's open models. Open models imply oversight and hence safer AI.
open models
open software stack
open OS: Linux servers, Apache server-side frameworks
PyTorch is open
Fine-tuning the foundation models per language is a real task, though. John Schulman, in a chat with Dwarkesh Patel, mentioned an interesting finding: if you do all your fine-tuning with English data, the model will automatically behave well in other languages. This can be extended to robots too. The collaborators' theory is that learning about the physical world in one robot body should help an AI to operate another, in the same way that learning in English can help a language model to generate Chinese, because the underlying concepts about the world that the words describe are the same.
Schulman says a version of this holds with multimodal data: if you do text-only fine-tuning, you also get reasonable behavior with images.
It's language time. All languages need to provide their data open source. If linguists can point out common rules across languages, this can go further and faster.
Small-scale AI startups fine-tuning a foundation model should show a figure of merit.
Are vision foundation models the future, rather than billions of parameters? Consider that humans, like a four-year-old, come to know more from far less data than these models are trained on over a few years.
Generating consistently high-quality and accurately labeled data through various methods to facilitate the training of NLP algorithms.
Natural Language Processing with Deep Learning: gain the skills to move from word representations and syntactic processing to designing and implementing complex deep learning models for question answering, machine translation, and other language-understanding tasks.
Complexity Theory in Axiomatic Design
Machine learning algorithms, Transformers, convolutional neural networks, and their applications in generative AI, NLP, computer vision, and image/video processing.
Analyzing ML workloads on different HW architectures: profiling and identifying performance bottlenecks in the system, and coming up with suggestions for performance improvement at the algorithm, SW, or HW level.
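A minimal sketch of that profiling step, assuming PyTorch's torch.profiler; the toy MLP and input sizes are placeholders, and on a GPU you would add ProfilerActivity.CUDA to the activities list.

```python
import torch
from torch.profiler import profile, ProfilerActivity

# Placeholder workload: a small MLP on random input.
model = torch.nn.Sequential(torch.nn.Linear(1024, 4096), torch.nn.ReLU(),
                            torch.nn.Linear(4096, 1024))
x = torch.randn(64, 1024)

# Run the workload under the profiler and see where time goes;
# the hottest ops are the starting point for algorithm/SW/HW-level fixes.
with profile(activities=[ProfilerActivity.CPU], record_shapes=True) as prof:
    with torch.no_grad():
        for _ in range(10):
            model(x)

print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=5))
```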