Zijian Hu is a Machine Learning Research Engineer at Scale AI. He is interested in building efficient machine learning systems that can operate in noisy environments and can be trained and adapted with minimal supervision.
Download my CV (last updated on Nov 17, 2022) for more details.
MSc in Computer Science, 2020
University of Southern California
BSc in Computer Science, 2020
University of Southern California
PyTorch, TensorFlow, Keras, OpenCV
ROS, OpenCV, V-Rep
Statistical Learning, Probability, Calculus
Python, C/C++, Java, JavaScript (ES6), MATLAB
Node.js, Java EE, Angular
X86 Assembly/GAS, MIPS, Verilog, Arduino
The FedML AI platform is democratizing large language models (LLMs) by enabling enterprises to train their own models on proprietary data. Today, we release FedLLM, an MLOps-supported training pipeline for building domain-specific LLMs on proprietary data. The platform enables data collaboration, computation collaboration, and model collaboration, and supports training on centralized and geo-distributed GPU clusters as well as federated learning across data silos. FedLLM is compatible with popular LLM libraries such as HuggingFace and DeepSpeed, and is designed to improve efficiency and security/privacy. To get started, FedML users and developers only need to add about 100 lines of source code; the complex steps of deploying and orchestrating training in enterprise environments are all handled by the FedML MLOps platform.
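To give a flavor of the federated learning setting mentioned above, here is a minimal sketch of federated averaging (FedAvg), the classic aggregation scheme behind training across data silos. This is a self-contained toy in NumPy on a linear-regression task, not FedLLM's actual API; all function names here are illustrative.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: gradient descent on its private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # MSE gradient
        w -= lr * grad
    return w

def fedavg(global_w, client_data):
    """Aggregate locally trained weights, weighted by client dataset size."""
    updates, sizes = [], []
    for X, y in client_data:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

# Three "data silos", each holding private samples of the same task.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w))

# Communication rounds: only model weights leave the clients, never raw data.
w = np.zeros(2)
for _ in range(20):
    w = fedavg(w, clients)
print(np.round(w, 2))
```

The key property, which carries over to LLM fine-tuning at much larger scale, is that each round exchanges only model parameters; the proprietary training data never leaves its silo.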