Silicon photonics could transform the data center
Last week, a research and development effort reaching back well into the last decade came to a head as Intel pulled back the curtain on a new breed of optical silicon chips that could drastically boost data transmission rates within data centers and hyperscale computing arrays. But in doing so, Intel (INTC) hasn't just applied light-speed physics to the science of data transmission. Its "silicon photonics" technology could fundamentally upend the way data centers and high-powered computing facilities are designed and organized, spelling big things not only for Intel but for the entire computing enterprise.

The idea behind silicon photonics is relatively simple: Copper wiring and other conventional data transmission methods suffer from fundamental limitations on how fast they can transfer a given amount of data, but nothing moves faster than light. If the sprawling, distributed hardware inside a modern data center or supercomputer could be linked by speed-of-light communications, its speed and efficiency could immediately take a massive leap forward. The challenge, which Intel now appears to have overcome, has always been one of miniaturization and complexity.

Simply put, Intel has figured out a way to package tiny lasers -- as well as receivers and transmitters that can convert electrical signals to optical ones and vice versa -- into a silicon chip, and to develop the technology for mass production. The iteration of silicon photonics unveiled by Intel last week can achieve data rates of 100 gigabits per second, eclipsing the standard eight-gigabits-per-second rate of the copper PCI-E data cables that connect servers on a rack, or even the Ethernet networking cables that connect the racks together (those cables can generally handle roughly 40 gigabits per second at the high end).

The story here, then, is one of faster data transmission within and between servers and higher efficiency for data centers and supercomputing arrays, as well as a potentially significant new revenue stream for Intel (8.1 million servers shipped globally last year, and companies like Amazon (AMZN), Facebook (FB), and Apple (AAPL) are pouring millions into their cloud and data capabilities). But that's not the whole story. The ability to transmit data at super-high speeds within and between server racks will be a paradigm-shifter for data center design, allowing for far more efficient and capable computing and data centers.

"This opens up the ability to redefine the topology of systems, and that's the key thing," says Sergis Mushell, a principal research analyst with Gartner's technology and service provider research group. "We're going to be able to build much more massive systems. Where before we added one server at a time, we're going to be able to build massive servers."

The current architecture of data centers is dictated by a variety of technological limitations, many of them tied to data transmission. Each rack generally requires some mix of storage, processing, and networking infrastructure in order to be effective, because physical separation between these components leads to latency. The system simply spends too much time beaming electronic signals from one physical location to another across copper or network cables, and the whole thing slows down as a result.

Many hardware companies are working on ways to solve this, says Paul Teich, senior analyst and CTO at Moor Insights & Strategy. Generally, their approach involves building new architectures within each rack that integrate storage, networking, and computing/processing at an even more granular level to reduce latency and enhance throughput. Intel is moving in the other direction entirely.
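To give a rough sense of scale for the throughput figures quoted above, the minimal sketch below (Python) compares how long a bulk transfer would take over each class of link. It assumes the raw line rates cited in the article and an arbitrary 1 TB transfer, and it ignores protocol overhead and latency, so the numbers are illustrative only.

```python
# Back-of-envelope comparison of the link speeds cited in the article.
# Assumed figures: ~8 Gb/s for copper PCI-E cabling within a rack,
# ~40 Gb/s for high-end Ethernet between racks, and 100 Gb/s for
# Intel's silicon photonics link. The 1 TB transfer size is arbitrary,
# and raw line rate is used (protocol overhead and latency ignored).

GBIT = 1e9                # bits per gigabit
ONE_TERABYTE_BITS = 8e12  # 1 terabyte expressed in bits

links = {
    "Copper PCI-E cable (8 Gb/s)": 8 * GBIT,
    "High-end Ethernet (40 Gb/s)": 40 * GBIT,
    "Silicon photonics (100 Gb/s)": 100 * GBIT,
}

for name, bits_per_second in links.items():
    seconds = ONE_TERABYTE_BITS / bits_per_second
    print(f"{name}: roughly {seconds:.0f} seconds to move 1 TB")
```

Even under these simplified assumptions, the contrast is stark: a transfer that would occupy an in-rack copper link for well over sixteen minutes finishes in under a minute and a half on a 100-gigabit photonic link.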