Efficient Scaling of Large Language Models with Mixture of Experts and 3D Analog In-Memory Computing

Academic Background: In recent years, large language models (LLMs) have demonstrated remarkable capabilities in natural language processing, text generation, and other fields. However, as the scale of these models continues to grow, the costs of trai...
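The mixture-of-experts idea behind this scaling approach can be illustrated with a toy sparse layer: a router scores experts per input, and only the top-k experts actually run, so compute grows with k rather than with the total expert count. This is a minimal sketch under assumed toy dimensions; all names here (`W_gate`, `moe_forward`, the linear "experts") are illustrative and not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, n_experts, top_k = 8, 4, 2
# Hypothetical toy parameters: one gating matrix and one linear "expert" each.
W_gate = rng.normal(size=(d_model, n_experts))
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]

def moe_forward(x):
    # Router: score all experts, keep only the top-k by logit.
    logits = x @ W_gate
    top = np.argsort(logits)[-top_k:]
    # Renormalized softmax over the selected experts only.
    weights = np.exp(logits[top] - logits[top].max())
    weights /= weights.sum()
    # Only the selected experts execute; the rest are skipped entirely.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

x = rng.normal(size=d_model)
y = moe_forward(x)
print(y.shape)
```

Because the unselected experts never execute, parameter count can scale far faster than per-token compute, which is the efficiency argument MoE architectures rest on.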

Efficient CORDIC-based Activation Function Implementations for RNN Acceleration on FPGAs

Efficient Implementation of RNN Activation Functions: Breakthroughs in CORDIC Algorithms and FPGA Hardware Acceleration. Background and Research Significance: In recent years, with the rapid advancement of deep learning technologies, Recurrent Neural Networks (RNNs), particularly Long Short-Term Memory (LSTM) networks, have demonstrated powerful capa...
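CORDIC evaluates functions like tanh using only shifts, adds, and a small lookup table, which is what makes it attractive for FPGA activation units. A minimal software sketch of hyperbolic CORDIC in rotation mode is shown below; the function name and iteration count are illustrative assumptions, and this floating-point model stands in for the fixed-point shift-add datapath a real FPGA design would use.

```python
import math

def cordic_tanh(z, iterations=16):
    # Hyperbolic CORDIC, rotation mode: drive the angle z toward 0 while
    # (x, y) converge to (K*cosh(z0), K*sinh(z0)). tanh = y / x, so the
    # constant CORDIC gain K cancels and needs no correction step.
    # Converges for |z| below roughly 1.118 (sum of the atanh terms).
    x, y = 1.0, 0.0
    # Hyperbolic CORDIC must repeat iterations i = 4, 13, 40, ... to converge.
    seq, i, next_repeat = [], 1, 4
    while len(seq) < iterations:
        seq.append(i)
        if i == next_repeat:
            seq.append(i)            # repeated iteration
            next_repeat = 3 * next_repeat + 1
        i += 1
    for i in seq[:iterations]:
        d = 1.0 if z >= 0 else -1.0  # rotate toward z = 0
        # In hardware, 2.0**-i is a pure right shift; atanh(2^-i) is a LUT entry.
        x_new = x + d * y * 2.0 ** -i
        y_new = y + d * x * 2.0 ** -i
        z -= d * math.atanh(2.0 ** -i)
        x, y = x_new, y_new
    return y / x

err = abs(cordic_tanh(0.5) - math.tanh(0.5))
```

Sigmoid can reuse the same datapath via sigmoid(z) = (1 + tanh(z/2)) / 2, which is one reason CORDIC-based units map well onto shared LSTM activation hardware.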