Description

Xilinx develops highly flexible and adaptive processing platforms that enable rapid innovation across a variety of technologies - from the endpoint to the edge to the cloud. Xilinx is the inventor of the FPGA, hardware programmable SoCs and the ACAP (Adaptive Compute Acceleration Platform), designed to deliver the most dynamic processor technology in the industry and enable the adaptable, intelligent and connected world of the future in a multitude of markets including Data Center (Compute, Storage and Networking); Wireless/5G and Wired Communications; Automotive/ADAS; Emulation & Prototyping; Aerospace & Defense; Industrial, Scientific & Medical, and others. Xilinx's core strengths simultaneously address major industry trends including the explosion of data, heterogeneous computing after Moore's Law, and the dawn of artificial intelligence (AI).
Responsibilities:
--Research and develop recurrent neural network algorithms, focusing on speech recognition tasks;
--Research and develop LSTM neural network compression algorithms across different deep learning frameworks;
--Analyze, test and improve LSTM neural network compression algorithms;
Qualifications:
--BS degree or above in Electronic Engineering, or equivalent experience in a relevant discipline (Speech Recognition, Machine Translation, Machine Learning, etc.);
--Familiar with C/C++/Python under Linux;
--Familiar with one or more deep learning algorithms, such as RNN/GRU/LSTM;
--Familiar with one or more deep learning frameworks, such as TensorFlow/PyTorch/Kaldi;
--Hard-working and creative; capable of learning the latest algorithms and theories via research papers;
--Strong sense of responsibility and team spirit, excellent learning abilities, willing to take on challenges and work under pressure;
--Experience publishing papers in CVPR/ICCV/ECCV/NIPS/ICLR/TPAMI (or other top academic conferences/journals) is preferred.