Wednesday, July 24, 2024

Unleashing the Power of a Million Experts: A Breakthrough in Language Model Efficiency

Perhaps one of the most important papers for large language models (LLMs) was released this month: "Mixture of a Million Experts" by Xu Owen He. I think it may be the most important paper in LLM advancement since the publication of "Attention is All You Need" by Vaswani et al. (2017). The idea in this paper is what I was referring to in my recent post "Move 37," where I talked about the future improvements LLMs will need.

The idea of a million experts is an extension of, and an improvement over, current "Mixture of Experts" (MoE) architectures. MoE has emerged as a favored approach for expanding the capabilities of LLMs while managing computational expenses. Rather than engaging the full model for each input, MoE systems route data to compact, specialized "expert" components. This strategy allows LLMs to grow in parameter count without a corresponding surge in inference costs. Several prominent LLMs incorporate the MoE architecture, and it is reportedly used in GPT-4.
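To make the routing idea concrete, here is a minimal sketch of a standard top-k MoE layer in PyTorch. The dimensions, expert count, and class name are illustrative assumptions on my part, not taken from any particular model:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    """Classic sparse MoE layer: a router sends each token to its top-k experts."""
    def __init__(self, d_model=512, d_hidden=2048, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts)  # one score per expert per token
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.ReLU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x):  # x: (n_tokens, d_model)
        weights, idx = self.router(x).topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)          # mix the chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):                # only top_k experts run per token
            for e in idx[:, slot].unique():
                mask = idx[:, slot] == e
                out[mask] += weights[mask, slot, None] * self.experts[int(e)](x[mask])
        return out

# Parameter count grows with n_experts, but each token only pays for top_k of them.
layer = TopKMoE()
print(layer(torch.randn(4, 512)).shape)  # torch.Size([4, 512])
```

The point to notice is the cost structure: total parameters scale with the number of experts, while per-token compute scales only with top_k. The catch is that the router still scores every expert, which is one of the things that limits how far this design scales.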

Despite these advantages, existing MoE methods face constraints that limit their scalability to a relatively modest number of experts. These limitations have prompted researchers to explore more efficient ways to leverage larger expert pools.

Xu Owen He from Google DeepMind introduces an innovative approach that could dramatically improve the efficiency of these models while maintaining or even enhancing their performance. Interestingly, the "Attention" paper also came out of Google.

The long-standing problem with LLMs is that as these models grow larger, they become more capable but also more computationally expensive to train and run. This creates barriers to their widespread use and further development. The paper proposes a Parameter Efficient Expert Retrieval (PEER) architecture that addresses this challenge by enabling models to efficiently utilize over a million tiny "expert" neural networks, potentially unlocking new levels of performance without proportional increases in computational cost.

This is a fine-grained mixture of experts: unlike traditional approaches that use a small number of large experts, PEER employs a vast number (over a million) of tiny experts, each consisting of just a single neuron. By activating only a small subset of experts for each input, PEER maintains a low computational cost while having access to a much larger total parameter count. It accomplishes this by introducing a novel product key technique for expert retrieval, which allows the model to efficiently select relevant experts from this huge pool.
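Here is a hedged sketch of how product-key retrieval can pick the top-k experts out of a huge pool while scoring only on the order of 2·sqrt(N) sub-keys instead of N full keys. It follows the paper's high-level description (two sub-key tables, paired candidate scores, single-neuron experts), but the dimensions, initialization, activation choice (GELU), and batching below are my own illustrative assumptions:

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class PEERLayer(nn.Module):
    """Sketch of product-key retrieval over a grid of single-neuron experts."""
    def __init__(self, d_model=256, n_experts=256 * 256, top_k=16):
        super().__init__()
        # Expert ids live on an n_side x n_side grid; the paper scales such a
        # grid to over a million experts (e.g., 1024 x 1024).
        self.n_side = int(math.isqrt(n_experts))
        self.top_k = top_k
        half = d_model // 2
        # Two sub-key tables of size sqrt(N) replace one table of N full keys.
        self.keys1 = nn.Parameter(torch.randn(self.n_side, half) / math.sqrt(half))
        self.keys2 = nn.Parameter(torch.randn(self.n_side, half) / math.sqrt(half))
        # Each expert is a single hidden neuron: one input and one output vector.
        self.u = nn.Embedding(n_experts, d_model)  # per-expert down-projection
        self.v = nn.Embedding(n_experts, d_model)  # per-expert up-projection
        self.query = nn.Linear(d_model, d_model)

    def forward(self, x):  # x: (batch, d_model)
        q1, q2 = self.query(x).chunk(2, dim=-1)
        # Score only 2*sqrt(N) sub-keys rather than all N expert keys.
        s1, i1 = (q1 @ self.keys1.T).topk(self.top_k, dim=-1)
        s2, i2 = (q2 @ self.keys2.T).topk(self.top_k, dim=-1)
        # Candidate experts are pairs of retrieved sub-keys: k*k combined scores.
        pair = s1[:, :, None] + s2[:, None, :]
        scores, flat = pair.flatten(1).topk(self.top_k, dim=-1)
        idx = (i1.gather(1, flat // self.top_k) * self.n_side
               + i2.gather(1, flat % self.top_k))          # expert ids in [0, N)
        g = F.softmax(scores, dim=-1)                      # router weights
        h = torch.einsum('bd,bkd->bk', x, self.u(idx))     # each expert's scalar activation
        return torch.einsum('bk,bkd->bd', g * F.gelu(h), self.v(idx))

layer = PEERLayer()
print(layer(torch.randn(4, 256)).shape)  # torch.Size([4, 256])
```

The product-key trick is the heart of it: because every expert's key is the concatenation of one sub-key from each table, the top-k experts overall are guaranteed to be among the k×k combinations of the per-table top-k, so the expensive full scan is never needed.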

The implications of this architecture are far-reaching:

  • Scaling Language Models: PEER could enable the development of much larger and more capable language models without proportional increases in computational requirements. This could accelerate progress in natural language processing and AI more broadly.
  • Democratization of AI: By improving efficiency, this approach could make advanced language models more accessible to researchers and organizations with limited computational resources.
  • Lifelong Learning: The authors suggest that this architecture could be particularly well-suited for lifelong learning scenarios, where new experts can be added over time to adapt to new data without forgetting old information (see the sketch after this list). Imagine a model with no knowledge cutoff, constantly learning and staying current with what is going on in the world. This would open up new use cases for LLMs.
  • Energy Efficiency: More efficient models could lead to reduced energy consumption in AI applications, contributing to more sustainable AI development. They could also reduce inference costs.
  • Overcome model forgetting: With over a million tiny experts, PEER allows for a highly distributed representation of knowledge. Each expert can specialize in a particular aspect of the task or domain, reducing the likelihood of new information overwriting existing knowledge.
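On the lifelong-learning point, the paper suggests the architecture is amenable to growing the expert pool over time; the growth mechanics below (appending zero-initialized output vectors so behavior is undisturbed at first) are my own assumption, not the authors' method:

```python
import torch

def grow_expert_pool(u_old: torch.Tensor, v_old: torch.Tensor, n_new: int):
    """Append n_new single-neuron experts to an existing (n_experts, d_model) pool.

    New output vectors start at zero, so the enlarged model initially behaves
    exactly like the old one; training can then specialize the new experts on
    fresh data while the old rows keep their learned knowledge.
    """
    d_model = u_old.shape[1]
    u_new = torch.randn(n_new, d_model) * 0.02  # small random input weights
    v_new = torch.zeros(n_new, d_model)         # zero output = no initial disruption
    u = torch.cat([u_old.detach(), u_new]).requires_grad_()
    v = torch.cat([v_old.detach(), v_new]).requires_grad_()
    return u, v

# e.g., add 1,024 new experts to a pool of 65,536 without touching the old ones
u, v = grow_expert_pool(torch.randn(65536, 256), torch.randn(65536, 256), 1024)
print(u.shape)  # torch.Size([66560, 256])
```

A real implementation would also need to extend the product-key grid so the router can actually reach the new experts; this sketch only covers the expert weights themselves.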

Of course this is still early-stage research. Further work will be needed to fully understand the implications and potential limitations of this approach across a wider range of tasks and model scales. But this paper represents a significant step forward in improving the efficiency of large language models. By enabling models to efficiently leverage vast numbers of specialized neural networks, it could pave the way for the next generation of more powerful and accessible AI systems. 

Wednesday, July 10, 2024

Bridge RNAs: The Next Frontier in Precision Genome Editing Beyond CRISPR

A very interesting new paper just published in Nature, "Bridge RNAs direct programmable recombination of target and donor DNA" by Durrant et al. (2024), introduces what looks like a groundbreaking discovery in genetic engineering: a new class of non-coding RNAs (ncRNAs) called bridge RNAs that enable programmable DNA recombination. This expands the capabilities of nucleic-acid-guided systems beyond existing technologies like CRISPR.

This study reveals that IS110 insertion sequences, which are minimal mobile genetic elements, express structured ncRNAs that specifically bind to their encoded recombinase. These bridge RNAs contain two internal loops that base-pair with target and donor DNA, facilitating sequence-specific recombination. This discovery is particularly significant because the target-binding and donor-binding loops of the bridge RNA can be independently reprogrammed, allowing for programmable DNA insertion, excision, and inversion. 
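Because the loops work by base-pairing, retargeting the system is, at its core, a sequence-design exercise: pick a DNA site, and the matching RNA loop is its complement. Here is a toy Python illustration of that idea; the function name, the example sites, and the bare reverse-complement rule are simplifications of mine, not the paper's actual design protocol (which involves the full bridge RNA scaffold and its recognition rules):

```python
# Toy illustration of the "programmable" part: each bridge RNA loop
# base-pairs with a chosen DNA site, so retargeting means swapping in
# the complementary RNA sequence.
DNA_TO_RNA_COMPLEMENT = {"A": "U", "T": "A", "G": "C", "C": "G"}

def loop_for_site(dna_site: str) -> str:
    """Return the RNA loop sequence that base-pairs with a DNA site.

    The loop binds antiparallel to its DNA target, so we take the
    reverse complement, written in RNA letters.
    """
    return "".join(DNA_TO_RNA_COMPLEMENT[b] for b in reversed(dna_site.upper()))

# The two loops are independently programmable: one per DNA site.
target_loop = loop_for_site("ATGCCGTAAGGCTA")  # hypothetical genomic site
donor_loop = loop_for_site("TTACGGATCCGATC")   # hypothetical donor site
print(target_loop, donor_loop)
```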

I think this discovery of bridge RNAs as programmable tools for DNA recombination could be the next big thing in genome editing, taking us beyond what RNA interference (RNAi) and CRISPR have done so far. Bridge RNAs allow inserting, removing, or flipping DNA sequences without breaking the DNA strands, which means fewer mistakes and a more stable genome.

What would the future of genetic tools built on this concept look like? Complex DNA changes could be designed with comparably high precision, combining bridge RNAs with other mechanisms to not only edit genes but also control their activity. For example, genes could be turned on or off, or their expression levels tuned, giving researchers a powerful way to study how genes work and to develop new therapies.

Speculating even further, these new tools might also interact with more than just DNA. Think about targeting RNA transcripts to edit RNA sequences or modulate RNA splicing, or even interacting with proteins to change their activity or where they are in the cell. This would open up a whole new world of possibilities.

In medicine, this third generation of RNA-guided tools could lead to new treatments for genetic diseases. By making targeted and reversible changes to the genome and transcriptome, they could enable more effective and personalized therapies with fewer side effects. They could also improve the safety and efficacy of advanced cell and gene therapies by controlling genomic rearrangements and gene expression more precisely.

Overall, these new RNA-guided tools could revolutionize genome engineering, offering new possibilities for research, biotechnology, and medicine. By building on the principles of RNAi, CRISPR, and bridge RNAs, it could be possible to manipulate biological systems with greater accuracy and flexibility, leading to innovative applications and groundbreaking advancements.

"Superhuman" Forecasting?

This just came out from the Center for AI Safety  called Superhuman Automated Forecasting . This is very exciting to me, because I've be...