
  • MVM

    Remarkable advances in machine learning and artificial intelligence have been made across many domains, achieving near-human performance in a wide range of cognitive tasks including vision, speech, and natural language processing. However, implementations of such cognitive algorithms on conventional von Neumann architectures are orders of magnitude more area- and power-expensive than the biological brain. It is therefore imperative to search for fundamentally new approaches so that improvements in computing performance and efficiency can keep pace with the exponential growth of AI computational demand.

  • NVM

    Emerging non-volatile memories (NVMs) have shown great potential for accelerating machine learning (ML) computation. A wide range of NVM technologies is being explored for various computing architectures, such as systolic arrays and computing-in-memory crossbars. We are particularly interested in training acceleration, aiming to develop efficient ML hardware with learning capability. Such hardware will open up exciting opportunities in resource-constrained scenarios such as edge devices. Training workloads typically require accurate and frequent updates of the neural network models; the processing of ML training therefore imposes more stringent requirements on the write efficiency (energy and latency) and endurance of the underlying hardware fabrics.
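The crossbar idea above can be sketched numerically. In a computing-in-memory crossbar, each cell stores a weight as a conductance; applying input voltages to the rows produces, via Ohm's and Kirchhoff's laws, column currents equal to a matrix-vector product in a single analog read step, while training applies an outer-product update to the stored conductances. The sketch below is a minimal idealized model (all values and the learning rate are illustrative assumptions, ignoring device noise, nonlinearity, and endurance limits):

```python
import numpy as np

# Hypothetical 4x3 crossbar: conductances G (siemens), arbitrary values.
G = np.array([[1.0, 0.5, 0.2],
              [0.3, 0.8, 0.1],
              [0.7, 0.4, 0.9],
              [0.2, 0.6, 0.5]])

# Input voltages applied to the 4 rows.
v = np.array([0.1, 0.2, 0.3, 0.4])

# Analog MVM: column currents sum contributions from every row at once,
# i.e. one matrix-vector product per read step (i = G^T v).
i_out = G.T @ v

# Idealized training step: a rank-1 (outer-product) conductance update,
# with an assumed error signal delta and learning rate eta. Every such
# step writes all cells, which is why training stresses write energy,
# latency, and endurance.
delta = np.array([0.05, -0.02, 0.01])  # illustrative error at the columns
eta = 0.1
G_updated = G + eta * np.outer(v, delta)

print(i_out)
```

Because each update rewrites the full array, the number of tolerable write cycles per cell, not just read throughput, becomes a first-order design constraint for training-capable NVM hardware.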