
          Micron at COMPUTEX 2019


          COMPUTEX 2019 hosted a series of COMPUTEX Forums targeting different technology trends and topics. Among them, Thomas T. Eby, SVP & GM of CNBU, participated in the COMPUTEX Forum AI session to deliver a keynote speech, "Intelligence Accelerated: Harnessing Memory to Enhance AI Performance," on behalf of Micron.

          At the beginning of the speech, Tom Eby asked the audience to imagine the projected 103 zettabytes of data in 2023 that will need to be moved, stored, and analyzed, roughly a 10x increase over a nine-year span. Micron claims that data is the currency of today. The challenge with all this data is turning it into information or insight, and that is where AI plays a critical role. And while the brain of AI is compute, the heart of AI is memory and storage.
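          As a quick illustration of the arithmetic behind that claim, a 10x increase over nine years implies a compound annual growth rate of roughly 29% (a minimal sketch; the 10x and nine-year figures are the keynote's, the CAGR formula is standard):

```python
# Implied compound annual growth rate from the keynote's figures:
# ~10x growth in total data volume over a nine-year span.
growth_factor = 10   # 10x increase cited in the keynote
years = 9            # over a nine-year span

cagr = growth_factor ** (1 / years) - 1
print(f"implied CAGR: {cagr:.1%}")  # roughly 29% per year
```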

          To better understand industry demands, Micron commissioned an independent third party to survey architects and IT specialists designing AI platforms. The resulting Forrester report, delivered in March 2019, identified as the number one concern in developing systems for AI not the selection of compute but how to architect memory and storage to "feed the compute beast". The study found that compute and memory are moving closer to one another: 90% of respondents state that locality of compute and memory is important or critical to AI/ML; more than 90% state that rearchitecting memory and storage is important or critical to success; and throughput of storage and memory is viewed as more important than compute.

          Tom Eby, SVP & GM of CNBU, said, "Micron is one of the best examples of leveraging artificial intelligence in our fabrication facilities to help us achieve higher yields, ensure a safer working environment and improve overall efficiencies." Tom further indicated, "In implementing these AI tools, Micron has experienced 25% faster time to yield maturity, a 10% increase in manufacturing output, and 35% fewer quality events."


          In terms of the memory requirements of autonomous cars, Thomas T. Eby also mentioned that in the future, every L5 self-driving car will be equipped with 8-12 display screens with resolutions up to 4K-8K; to support V2X communications, memory will need to process 0.5-1 TB per second; and in the infotainment system, memory bandwidth will need to reach 150-300 GB per second. Future self-driving cars will carry a black box similar to an airplane's, recording 30-second clips covering the inside and outside of the car, which pushes the memory bandwidth requirement to 1 GB per second. In addition, over the lifecycle of the vehicle, data will be repeatedly written, adding up to 150 PB (petabytes), so the performance and endurance requirements for memory and storage devices will be extremely high. Moreover, there are trends in the types of server hardware deployed, where servers are deployed, and application trends driving the dramatic growth of memory and storage. Those trends include:

          AI – The drivers for these trends are applications like AI, with workloads that demand proximity of memory to compute, also referred to as "compute in memory".

          Heterogeneous Compute Platforms – Driven by AI workloads and deployments, there is a trend away from traditional x86-only platforms for ML and DL, toward heterogeneous compute platforms, meaning away from CPU-only servers. Compute options include CPU, GPU, TPU, FPGA, SoC, and ASIC. The shift away from homogeneous compute is due to varied workloads requiring different optimized compute solutions, each with memory tightly bundled.

          Exploding Compute Core Counts – Increased compute core counts require more traditional content (DRAM and storage) per server. Core counts are increasing: the server microprocessor unit forecast is up at a 4.5% CAGR from 2018 to 2023 (source: IDC, March 28, 2019).
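          The in-vehicle figures quoted earlier (150 PB written over the vehicle lifecycle, roughly 1 GB/s of black-box recording bandwidth) can be loosely sanity-checked by asking how long continuous writes at that rate would take to accumulate 150 PB. This is a minimal sketch; the continuous-write model and decimal unit definitions are our assumptions, not the keynote's:

```python
# Sanity check on the keynote's lifetime-write figure:
# how long do continuous 1 GB/s writes take to reach 150 PB?
PB = 1e15  # petabyte in bytes (decimal, assumed)
GB = 1e9   # gigabyte in bytes (decimal, assumed)

lifetime_writes = 150 * PB       # total writes cited in the keynote
write_rate = 1 * GB              # black-box bandwidth, bytes per second
seconds_per_year = 365.25 * 24 * 3600

years = lifetime_writes / write_rate / seconds_per_year
print(f"~{years:.1f} years of continuous 1 GB/s writes")  # about 4.8 years
```

At roughly 4.8 years of continuous writing, the 150 PB figure is plausible for a vehicle that records intermittently over a longer service life.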

          Micron claims to provide the only complete portfolio spanning data center solutions for today and the future. A complete tiered range from memory to storage is available from Micron: low-latency DRAM, high-bandwidth NVDIMM, higher-capacity 3D XPoint, and storage solutions such as TLC NAND SSDs and QLC NAND SSDs. With the broadest DRAM and storage product portfolio, Micron can address the demands of various kinds of data centers now and in the future.

