
涓浗绔炲僵缃戠鐞冭绠?: 16000臺電腦一起找貓

篮球竞彩nba www.xvrnl.com   How Many Computers to Identify a Cat? 16,000


  MOUNTAIN VIEW, Calif. — Inside Google’s secretive X laboratory, known for inventing self-driving cars and augmented reality glasses, a small group of researchers began working several years ago on a simulation of the human brain.


  There Google scientists created one of the largest neural networks for machine learning by connecting 16,000 computer processors, which they turned loose on the Internet to learn on its own.



  Presented with 10 million digital images found in YouTube videos, what did Google’s brain do? What millions of humans do with YouTube: looked for cats.


  The neural network taught itself to recognize cats, which is actually no frivolous activity. This week the researchers will present the results of their work at a conference in Edinburgh, Scotland. The Google scientists and programmers will note that while it is hardly news that the Internet is full of cat videos, the simulation nevertheless surprised them. It performed far better than any previous effort by roughly doubling its accuracy in recognizing objects in a challenging list of 20,000 distinct items.


  The research is representative of a new generation of computer science that is exploiting the falling cost of computing and the availability of huge clusters of computers in giant data centers. It is leading to significant advances in areas as diverse as machine vision and perception, speech recognition and language translation.


  Although some of the computer science ideas that the researchers are using are not new, the sheer scale of the software simulations is leading to learning systems that were not previously possible. And Google researchers are not alone in exploiting the techniques, which are referred to as “deep learning” models. Last year Microsoft scientists presented research showing that the techniques could be applied equally well to build computer systems to understand human speech.


  “This is the hottest thing in the speech recognition field these days,” said Yann LeCun, a computer scientist who specializes in machine learning at the Courant Institute of Mathematical Sciences at New York University.



  And then, of course, there are the cats.


  To find them, the Google research team, led by the Stanford University computer scientist Andrew Y. Ng and the Google fellow Jeff Dean, used an array of 16,000 processors to create a neural network with more than one billion connections. They then fed it random thumbnails of images, one each extracted from 10 million YouTube videos.


  Currently much commercial machine vision technology is done by having humans “supervise” the learning process by labeling specific features. In the Google research, the machine was given no help in identifying features.


  “The idea is that instead of having teams of researchers trying to find out how to find edges, you instead throw a ton of data at the algorithm and you let the data speak and have the software automatically learn from the data,” Dr. Ng said.
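  Dr. Ng's "let the data speak" principle can be illustrated on a much smaller scale. The sketch below is not the team's actual model (the paper used a large sparse autoencoder); it is a minimal, hypothetical k-means-style example of the same idea: two "feature detectors" are shaped entirely by unlabeled data, with no human labeling any feature.

```python
import random

random.seed(0)

# Unlabeled 2-D data with two natural groupings the algorithm is never
# told about: no labels, no hand-engineered features.
data = [(random.gauss(0, 0.3), random.gauss(0, 0.3)) for _ in range(100)]
data += [(random.gauss(5, 0.3), random.gauss(5, 0.3)) for _ in range(100)]

# Two "feature detectors" (centroids), initialized from raw data points.
centroids = [data[0], data[100]]

def nearest(point):
    """Index of the centroid closest to this point."""
    return min(range(len(centroids)),
               key=lambda i: (point[0] - centroids[i][0]) ** 2 +
                             (point[1] - centroids[i][1]) ** 2)

# Let the data speak: repeatedly move each detector to the mean of the
# points it currently responds to.
for _ in range(10):
    groups = [[], []]
    for p in data:
        groups[nearest(p)].append(p)
    centroids = [(sum(p[0] for p in g) / len(g),
                  sum(p[1] for p in g) / len(g)) for g in groups]
```

  After a few passes the two detectors settle on the two cluster centers, even though nothing in the code ever names or labels the clusters.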


  “We never told it during the training, ‘This is a cat,’ ” said Dr. Dean, who originally helped Google design the software that lets it easily break programs into many tasks that can be computed simultaneously. “It basically invented the concept of a cat. We probably have other ones that are side views of cats.”


  The Google brain assembled a dreamlike digital image of a cat by employing a hierarchy of memory locations to successively cull out general features after being exposed to millions of images. The scientists said, however, that it appeared they had developed a cybernetic cousin to what takes place in the brain’s visual cortex.
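  The "dreamlike" image was obtained by searching for the input that most excites the cat-selective unit. Here is a minimal, hypothetical single-neuron version of that idea (the real model is a deep network, and these weights are stand-ins): gradient ascent on the input itself, holding the input at unit norm.

```python
import math
import random

random.seed(1)

# A hypothetical trained "cat neuron": its weight vector is the feature
# it has learned to detect (a stand-in for the real model's weights).
w = [random.gauss(0, 1) for _ in range(16)]

# Start from a random input "image" and climb the neuron's activation.
x = [random.gauss(0, 1) for _ in range(16)]

def activation(x):
    s = sum(wi * xi for wi, xi in zip(w, x))
    return 1 / (1 + math.exp(-s))

for _ in range(200):
    a = activation(x)
    # d sigmoid(w.x) / dx = a * (1 - a) * w  -- move x uphill
    x = [xi + 0.5 * a * (1 - a) * wi for xi, wi in zip(x, w)]
    n = math.sqrt(sum(xi * xi for xi in x))
    x = [xi / n for xi in x]  # keep the synthetic image at unit norm
```

  The synthesized input converges toward the direction of the neuron's own weight vector, which is why such images look like an idealized version of whatever the unit detects.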



  Neuroscientists have discussed the possibility of what they call the “grandmother neuron,” specialized cells in the brain that fire when they are exposed repeatedly or “trained” to recognize a particular face of an individual.


  “You learn to identify a friend through repetition,” said Gary Bradski, a neuroscientist at Industrial Perception, in Palo Alto, Calif.
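  The repetition Dr. Bradski describes is often modeled with a Hebbian learning rule: synapses that are active together are strengthened together. The toy cell below is an illustration of that rule, not the Google model; it is repeatedly shown one "friend" pattern and ends up responding more strongly to that face than to unseen ones.

```python
import random

random.seed(2)

# The "friend's face": one fixed binary pattern seen over and over.
friend = [random.choice([0, 1]) for _ in range(20)]
# Faces the cell is never trained on.
strangers = [[random.choice([0, 1]) for _ in range(20)] for _ in range(5)]

w = [0.0] * 20   # synaptic weights of the toy "grandmother neuron"
lr = 0.1

def response(pattern):
    return sum(wi * pi for wi, pi in zip(w, pattern))

# Hebbian repetition: every exposure strengthens exactly the synapses
# that are active for this particular face.
for _ in range(50):
    for i, active in enumerate(friend):
        w[i] += lr * active

r_friend = response(friend)
r_strangers = [response(s) for s in strangers]
```

  After training, the cell's response to the familiar face is well above its average response to strangers, purely as a byproduct of repeated exposure.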


  While the scientists were struck by the parallel emergence of the cat images, as well as human faces and body parts in specific memory regions of their computer model, Dr. Ng said he was cautious about drawing parallels between his software system and biological life.


  “A loose and frankly awful analogy is that our numerical parameters correspond to synapses,” said Dr. Ng. He noted that one difference was that despite the immense computing capacity that the scientists used, it was still dwarfed by the number of connections found in the brain.


  “It is worth noting that our network is still tiny compared to the human visual cortex, which is 10^6 times larger in terms of the number of neurons and synapses,” the researchers wrote.
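  Taking the article's figures at face value, the gap the researchers describe is easy to make concrete: a model with on the order of a billion connections, against a visual cortex a million times larger, implies on the order of 10^15 synapses.

```python
model_connections = 10 ** 9      # "more than one billion connections"
cortex_factor = 10 ** 6          # "10^6 times larger", per the researchers
cortex_synapses = model_connections * cortex_factor

print(f"{cortex_synapses:.0e}")  # 1e+15
```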


  Despite being dwarfed by the immense scale of biological brains, the Google research provides new evidence that existing machine learning algorithms improve greatly as the machines are given access to large pools of data.


  “The Stanford/Google paper pushes the envelope on the size and scale of neural networks by an order of magnitude over previous efforts,” said David A. Bader, executive director of high-performance computing at the Georgia Tech College of Computing. He said that rapid increases in computer technology would close the gap within a relatively short period of time: “The scale of modeling the full human visual cortex may be within reach before the end of the decade.”


  Google scientists said that the research project had now moved out of the Google X laboratory and was being pursued in the division that houses the company’s search business and related services. Potential applications include improvements to image search, speech recognition and machine language translation.


  Despite their success, the Google researchers remained cautious about whether they had hit upon the holy grail of machines that can teach themselves.


  “It’d be fantastic if it turns out that all we need to do is take current algorithms and run them bigger, but my gut feeling is that we still don’t quite have the right algorithm yet,” said Dr. Ng.



