
New Imaging System Creates Pictures by Measuring Time


Credit: University of Glasgow 

A radical new method of imaging that harnesses artificial intelligence to turn time into visions of 3-D space could help cars, mobile devices and health monitors develop 360-degree awareness.  

Photos and videos are usually produced by capturing photons—the building blocks of light—with digital sensors. For instance, digital cameras consist of millions of pixels that form images by detecting the intensity and color of the light at every point of space. 3-D images can then be generated either by positioning two or more cameras around the subject to photograph it from multiple angles, or by using streams of photons to scan the scene and reconstruct it in three dimensions. Either way, an image is only built by gathering spatial information of the scene. 

In a new paper published today in the journal Optica, researchers based in the U.K., Italy and the Netherlands describe an entirely new way to make animated 3-D images: by capturing temporal information about photons instead of their spatial coordinates. 

Their process begins with a simple, inexpensive single-point detector tuned to act as a kind of stopwatch for photons. Unlike cameras, which measure the spatial distribution of color and intensity, the detector only records how long it takes the photons produced by a split-second pulse of laser light to bounce off each object in a given scene and reach the sensor. The further away an object is, the longer each reflected photon takes to arrive. 
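The time-of-flight principle in the paragraph above reduces to simple arithmetic: a photon travels to an object and back, so its distance is half the round-trip path. A minimal sketch (illustrative only, not the authors' code):

```python
# Distance from round-trip photon travel time, as described above.
C = 299_792_458.0  # speed of light in vacuum, m/s

def distance_from_round_trip(t_seconds: float) -> float:
    """One-way distance to an object given the round-trip travel time.

    The photon covers the detector-object path twice, so the
    one-way distance is half the total path length.
    """
    return C * t_seconds / 2.0

# A photon returning 20 nanoseconds after the laser pulse fired
# reflected off an object roughly 3 metres away.
d = distance_from_round_trip(20e-9)
```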

The information about the timings of each photon reflected in the scene—what the researchers call the temporal data—is collected in a very simple graph. 
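The "very simple graph" the researchers describe is essentially a histogram of photon arrival times. The sketch below builds one from synthetic arrival times (the two simulated objects and all bin parameters are placeholders, not values from the paper):

```python
import numpy as np

# Synthetic photon arrival times (seconds) stand in for real detector
# output: two clusters of returns from objects at different distances.
rng = np.random.default_rng(0)
near = rng.normal(10e-9, 0.5e-9, 500)  # photons from a nearer object
far = rng.normal(30e-9, 0.5e-9, 300)   # photons from a farther object
arrival_times = np.concatenate([near, far])

# Bin the arrival times into a 1-D temporal histogram; peaks correspond
# to surfaces at different distances from the detector.
counts, edges = np.histogram(arrival_times, bins=64, range=(0.0, 50e-9))
```

Each peak in `counts` marks a surface in the scene; this 1-D vector is the only input the reconstruction algorithm receives.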

Those graphs are then transformed into a 3-D image with the help of a sophisticated neural network algorithm. The researchers trained the algorithm by showing it thousands of conventional photos of the team moving and carrying objects around the lab, alongside temporal data captured by the single-point detector at the same time. 

Eventually, the network had learned enough about how the temporal data corresponded with the photos that it could create highly accurate images from the temporal data alone. In proof-of-principle experiments, the team constructed moving images at about 10 frames per second from the temporal data, although the hardware and algorithm used have the potential to produce thousands of images per second. 
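The mapping the network learns, from a 1-D temporal histogram to a 2-D image, can be sketched as a small fully connected network. This is a hypothetical toy with random, untrained weights and made-up layer sizes, not the authors' model; it only illustrates the shape of the problem:

```python
import numpy as np

rng = np.random.default_rng(42)

HIST_BINS = 64   # input: 1-D temporal histogram (placeholder size)
HIDDEN = 128     # one hidden layer for the sketch
IMG_SIDE = 32    # output: 32x32 reconstructed image (placeholder size)

# Random weights stand in for the trained parameters.
W1 = rng.normal(0.0, 0.1, (HIST_BINS, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0.0, 0.1, (HIDDEN, IMG_SIDE * IMG_SIDE))
b2 = np.zeros(IMG_SIDE * IMG_SIDE)

def histogram_to_image(hist: np.ndarray) -> np.ndarray:
    """Forward pass: temporal histogram -> 2-D image estimate."""
    h = np.maximum(0.0, hist @ W1 + b1)           # ReLU hidden layer
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))    # sigmoid pixel values
    return out.reshape(IMG_SIDE, IMG_SIDE)

image = histogram_to_image(rng.random(HIST_BINS))
```

Training would fit the weights on pairs of histograms and conventional photos, exactly the pairing the researchers collected in the lab.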

Dr. Alex Turpin, Lord Kelvin Adam Smith Fellow in Data Science at the University of Glasgow's School of Computing Science, led the University's research team together with Prof. Daniele Faccio, with support from colleagues at the Polytechnic University of Milan and Delft University of Technology. 

Dr. Turpin said: "Cameras in our cell phones form an image by using millions of pixels. Creating images with a single pixel alone is impossible if we only consider spatial information, as a single-point detector has none. However, such a detector can still provide valuable information about time. What we've managed to do is find a new way to turn one-dimensional data—a simple measurement of time—into a moving image which represents the three dimensions of space in any given scene. The most important way that differs from conventional image-making is that our approach is capable of decoupling light altogether from the process. Although much of the paper discusses how we've used pulsed laser light to collect the temporal data from our scenes, it also demonstrates how we've managed to use radar waves for the same purpose. We're confident that the method can be adapted to any system capable of probing a scene with short pulses and precisely measuring the return echo. This is really just the start of a whole new way of visualizing the world using time instead of light." 

Currently, the neural net's ability to create images is limited to what it has been trained to pick out from the temporal data of scenes created by the researchers. However, with further training, and with more advanced algorithms, it could learn to visualize a wider range of scenes, broadening its potential applications in real-world situations. 

Dr. Turpin added: "The single-point detectors which collect the temporal data are small, light and inexpensive, which means they could be easily added to existing systems like the cameras in autonomous vehicles to increase the accuracy and speed of their pathfinding. Alternatively, they could augment existing sensors in mobile devices like the Google Pixel 4, which already has a simple gesture-recognition system based on radar technology. Future generations of our technology might even be used to monitor the rise and fall of a patient's chest in hospital to alert staff to changes in their breathing, or to keep track of their movements to ensure their safety in a data-compliant way. We're very excited about the potential of the system we've developed, and we're looking forward to continuing to explore its potential. Our next step is to work on a self-contained, portable system-in-a-box and we're keen to start examining our options for furthering our research with input from commercial partners." 

The team's paper, titled "Spatial images from temporal data," is published in Optica. 

(From: https://phys.org/news/2020-07-imaging-pictures.html)