Health Ranger Inventions
Wed, 11 Jan 2017 10:37:18 +0000

The Second Coming of Neuromorphic Computing

Just a few years ago, the promise of ultra-low power, high-performance computing was tied to the rather futuristic-sounding vision of a “brain chip,” or neuromorphic processor, which could mimic the brain’s structure and processing ability in silicon, quickly learning and chewing on data as fast as it could be generated.

(Article by Nicole Hemsoth, republished from

In that short amount of time, broader attention has shifted to other devices and software frameworks for achieving the same end. From custom ASICs designed to train and execute neural networks, to reprogrammable hardware with a low-power hook, like FPGAs, to ARM cores, GPUs, and other non-standard CPU approaches, the range of neuromorphic approaches is less likely to garner press, even though the work happening in this area is legitimate, fascinating, and directly in line with the wave of new deep learning, machine learning, in-situ analysis, on-sensor processing, and other capabilities that are rising to the fore.

The handful of neuromorphic devices that do exist are based on a widely variable set of architectures and visions, but the goal is the same: to create a chip that operates on the same principles as the brain. The goal that has not been met, however, is the delivery of a revolution in computing. But the full story has not played out quite yet; neuromorphic devices may see a second (albeit tidal) wave of interest in coming years. To call it a second coming might not be entirely fair, since neuromorphic computing never really died off to begin with. What did dissipate, however, was the focus and wider attention.

There was an initial window of opportunity for neuromorphic computing, which opened as a few major funding initiatives were afoot. While these propelled critical research and the production of actual hardware devices and programming tools, attention cooled as other trends rose to the fore. Still, the research groups dedicated to exploring the range of potential architectures, programming approaches, and potential use cases have moved ahead—and now might be their year to shine once again.

There have been a couple of noteworthy investments that have fed existing research into neuromorphic architectures. The DARPA SyNAPSE program was one such effort; beginning in 2008, it eventually yielded IBM’s “True North” chip, a 4,096-core device in which each core comprises 256 programmable “neurons” that act much like neurons and synapses in the brain, resulting in a highly energy-efficient architecture that, while fascinating, demands an entire rethink of programming approaches. Since that time, other funding from scientific sources, including the Human Brain Project, has pushed the area further, leading to the creation of the SpiNNaker neuromorphic device, although there is still no single architecture that appears best for neuromorphic computing in general.

The problem is really that there is no “general” purpose for such devices as of yet, and no widely accepted device or programmatic approach. Much of this stems from the fact that many of the existing projects are built around specific goals that vary widely. For starters, some projects in broader neuromorphic engineering are centered on robotics rather than large-scale computing applications (and vice versa). Stanford University’s Neurogrid project, first presented in hardware in 2009 and still an ongoing research endeavor, set out to simulate the human brain, so both its programming approach and its hardware design are modeled as closely on the brain as possible. Other efforts are oriented more toward solving computer science challenges of power consumption and computational capability using the same concepts, including a 2011 effort at MIT, work at HP with memristors as a key to neuromorphic device creation, and various other smaller projects, including one spin-off of the True North architecture described here.

A New Wave is Forming

What’s interesting about the above-referenced projects is that their heyday appears to have been the 2009 to 2013 period, with a large gap until the present, even if research is still ongoing. Still, one can make the argument that the attention around deep neural networks and other brain-inspired (although not brain-like) programmatic and algorithmic trends might bring neuromorphic computing back to the fore.

“Neuromorphic computing is still in its beginning stages,” Dr. Catherine Schuman, a researcher working on such architectures at Oak Ridge National Laboratory tells The Next Platform. “We haven’t nailed down a particular architecture that we are going to run with. True North is an important one, but there are other projects looking at different ways to model a neuron or synapse. And there are also a lot of questions about how to actually use these devices as well, so the programming side of things is just as important.”

The programming approach varies from device to device, as Schuman explains. “With True North, for example, the best results come from training a deep learning network offline and moving that program onto the chip. Others that are biologically inspired implementations like Neurogrid, for instance, are based on spike timing dependent plasticity.”
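The spike-timing-dependent plasticity that Schuman mentions can be illustrated with a minimal pair-based update rule. This is a generic textbook form, not anything specific to Neurogrid, and the amplitudes and time constant below are arbitrary assumptions: a synapse strengthens when the presynaptic spike precedes the postsynaptic one, weakens otherwise, and the effect fades exponentially as the spikes move apart in time.

```python
import math

def stdp_dw(dt_ms, a_plus=0.05, a_minus=0.06, tau_ms=20.0):
    """Pair-based STDP weight change.

    dt_ms = t_post - t_pre: positive when the presynaptic spike
    arrives first (causal pairing, so potentiation), negative when
    the postsynaptic neuron fired first (depression).
    """
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_ms)    # potentiate
    elif dt_ms < 0:
        return -a_minus * math.exp(dt_ms / tau_ms)   # depress
    return 0.0

# Causal pairings strengthen the synapse, anti-causal pairings weaken
# it, and either effect shrinks as the timing gap grows.
```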

The approach Schuman’s team is working on at Oak Ridge and the University of Tennessee is based on a neuromorphic architecture called NIDA, short for the Neuroscience Inspired Dynamic Architecture, which was implemented in FPGA in 2014 and now has a full SDK and tooling around it. The hardware implementation, called the Dynamic Adaptive Neural Network Array (DANNA), differs from other approaches to neuromorphic computing in that it allows for programmability of structure and is trained using an evolutionary optimization approach—again, modeled as closely as possible on what we know (and still don’t know) about the way our brains work.
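The flavor of evolutionary optimization can be sketched in miniature. This toy is not the NIDA/DANNA toolchain (their parameter encoding and structure search are not described here); it is a (1+1)-style hill-climb over the weights of a tiny fixed network, fitted to XOR: mutate the candidate at random and keep the mutation only when it improves the fitness score.

```python
import math
import random

def predict(w, x):
    """Tiny fixed 2-2-1 tanh network; w holds its 9 parameters."""
    h0 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h1 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return math.tanh(w[6] * h0 + w[7] * h1 + w[8])

def loss(w, data):
    """Sum of squared errors over the dataset."""
    return sum((predict(w, x) - y) ** 2 for x, y in data)

def evolve(data, generations=3000, sigma=0.3, seed=1):
    """(1+1) evolutionary hill-climb: mutate, keep if better."""
    rng = random.Random(seed)
    best = [rng.uniform(-1, 1) for _ in range(9)]
    best_loss = loss(best, data)
    for _ in range(generations):
        cand = [g + rng.gauss(0, sigma) for g in best]
        cand_loss = loss(cand, data)
        if cand_loss < best_loss:       # selection: keep the fitter one
            best, best_loss = cand, cand_loss
    return best, best_loss

XOR = [((0, 0), 0.0), ((0, 1), 1.0), ((1, 0), 1.0), ((1, 1), 0.0)]
```

The appeal for neuromorphic hardware is that this loop needs only a fitness score, not gradients, so it can train structures (spiking elements, discrete connectivity) that backpropagation handles poorly.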

Schuman stresses the exploratory nature of existing neuromorphic computing efforts, including those at the lab, but does see a new host of opportunities for them on the horizon, presuming the programming models can be developed to suit both domain scientists and computer scientists. There are, she notes, two routes for neuromorphic devices in the next several years. First, as embedded processors on sensors and other devices, given their low power consumption and high performance processing capability. Second, and perhaps more important for a research center like Oak Ridge National Lab, neuromorphic devices could act “as co-processors on large-scale supercomputers like Titan today where the neuromorphic processor would sit alongside the traditional CPUs and GPU accelerators.” Where they tend to shine most, and where her team is focusing effort, is on the role they might play in real-time data analysis.

“For large simulations where there might be petabytes of data being created, normally that would all be spun off to tape. But neuromorphic devices can be intelligent processors handling data as it’s being created to guide scientists more quickly.”

What is really needed for these potential use cases, beyond research like Schuman’s and the work underway at IBM, HP, and elsewhere, is the development of a richer programming and vendor landscape. One promising effort from Brain Corporation, a Qualcomm-backed venture, appears to be gaining some traction, even if it is slightly late to the neuromorphic device game relative to its competitors. Although it is more robotics- and sensor-oriented (the larger-scale computing/co-processing side is encapsulated by Qualcomm’s coming Zeroth platform for machine learning, which is based on neuromorphic approaches), the team there is reported to have developed a neuromorphic device in silicon along with the companion software environment that serves as an interface for programmers.

Although the concept has been floating around since the 1980s and has been implemented in hardware across a number of projects, including some not mentioned here, the future of neuromorphic computing is still somewhat uncertain—even if the exploding range of applications puts it back in the spotlight once again. The small range of existing physical devices and an evolving set of programming approaches are meeting a growing set of problems in research and enterprise—and this could very well be the year neuromorphic computing breaks into the mainstream.

Read more at:

German test facility gears up for larger turbines, long blades

As the size of offshore wind turbines has grown, so has the need for new facilities at which to test them. New facilities are now in operation in Denmark and in the UK, adding to well-known facilities such as those at NREL in the US. The dynamic nacelle testing laboratory at Fraunhofer IWES may not be the most powerful of these facilities, but scientists there believe it is unique in its ability to test nacelles, drivetrains and components mechanically and electrically, and to do so at the same time. Opened in October 2015, the DyNaLab will be complemented from 2018 by another facility designed to make blade testing more realistic.

(Article, republished from,german-test-facility-gears-up-for-larger-turbines-long-blades_41846.htm)

“Although described as a nacelle testing laboratory, the DyNaLab is much more than that,” said Professor Jan Wenske, deputy director of Fraunhofer IWES. “We can use it to test nacelles, drivetrains, main bearings and a range of components. All test rigs differ, but in the DyNaLab, we have an especially high level of functionality. Unlike some other rigs, we also offer an extremely high level of grid emulation as well as all of the mechanical testing one would expect from a facility like this. The combination of the two – the high-level mechanical testing and the grid/electrical emulation – is what distinguishes this facility.”

Among Fraunhofer IWES’s first customers at the DyNaLab is Adwen, which is using the facility for long-term tests on its new 8MW turbine. Professor Wenske said the tests would make full use of the advanced functionality the facility can offer and would see Fraunhofer IWES test the entire drivetrain of the 8MW unit and the components within it, as well as conduct endurance testing of the turbine. A ‘virtual’ 36,000V medium-voltage grid integrated into the 10MW rig enables short circuits and other grid faults to be tested with a high degree of accuracy and repeatability. The duration of testing can be adapted to a certifier’s specific requirements, and real-time models and control algorithms can be used to simulate real-world loads and interactions between the nacelle and rotor. “With the grid and the hardware-in-the-loop load simulations, a range of loading scenarios can be simulated in a reproducible manner,” he explained. “The performance of a turbine can be tested in the event of an emergency stop or multiple dips in the grid following storms or short circuits due to faulty pitch regulation. By simulating operational conditions with extreme loads, companies such as Adwen can accelerate the verification process. The process will allow individual and fully integrated subsystems to be validated as well as complete drivetrain operation at full power.”
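The reproducible grid-dip scenario described above can be sketched in miniature. Everything here is an illustrative assumption, not a DyNaLab parameter: a scripted per-unit voltage profile stands in for the grid emulator, and a toy ride-through rule stands in for the grid code a certifier would apply, so the same fault can be replayed identically on every run.

```python
def dip_profile(t_s, dip_start=1.0, dip_end=1.15, dip_level=0.2):
    """Scripted per-unit grid voltage: nominal 1.0 with a short dip."""
    return dip_level if dip_start <= t_s < dip_end else 1.0

def must_ride_through(voltage_pu, time_in_fault_s,
                      floor_pu=0.15, max_fault_s=0.625):
    """Toy low-voltage ride-through rule: the turbine must stay
    connected unless the voltage falls below the floor or the
    fault outlasts the time limit."""
    return voltage_pu >= floor_pu and time_in_fault_s <= max_fault_s

def simulate(duration_s=2.0, step_s=0.01):
    """Replay the scripted dip and record, per time step, the voltage
    and whether the ride-through requirement still applies."""
    events = []
    fault_clock = 0.0
    t = 0.0
    while t < duration_s:
        v = dip_profile(t)
        fault_clock = fault_clock + step_s if v < 0.9 else 0.0
        events.append((round(t, 2), v, must_ride_through(v, fault_clock)))
        t += step_s
    return events
```

Because the fault is scripted rather than waited for, the same 150 ms dip can be repeated for every control-software revision, which is the point Wenske makes about reproducibility.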

Florian Sayer, head of department, structural components division at Fraunhofer IWES, explained that the company has a successful history of testing turbine blades but wanted to upgrade its facilities in order to test ever-larger turbines and longer blades. Testing a 40m blade might take Fraunhofer IWES three to four months, but testing an 80m blade, or blades that are longer still, had become a more time-consuming process. Phase 1 of the new blade testing facility, which is being funded to the tune of €10 million by the Federal Ministry for Economic Affairs and Energy and the Federal State of Bremen, is due to get underway in mid-2018. Phase 2 of the work at the new facility should be underway by the end of 2018. Apart from enabling Fraunhofer IWES to test blades more quickly, speeding the development of new technology and reducing costs, the new facility will also enable the testing organisation to undertake testing that is much more representative of real-world loading. It will also enable Fraunhofer IWES to undertake root segment and blade tip tests separately. This will mean that tests can be undertaken at higher frequencies and with a more accurate load profile. Individual sections with critically high loads and parts of the blade with greater material thickness or strong curvature can also be investigated separately. This approach should produce more detailed results and reduce testing times by around 30 per cent.

Read more at:,german-test-facility-gears-up-for-larger-turbines-long-blades_41846.htm

The Surgeon Will Skype You Now

The surgeon, who has spent 15 minutes gently tearing through tissue, suddenly pauses to gesture ever-so-slightly with his tiny scissors. “Do you see what’s on this side? That’s nerves.” He moves the instrument a few millimeters to the right. “And on this one? That’s cancer.”

(Article by Alexandra Ossola, republished from

Ashutosh Tewari is the head of the urology department at Mount Sinai Hospital in New York City. He is in the process of removing a patient’s cancerous prostate, the walnut-sized gland in the delicate area between the bladder and the penis. This surgery—one of three that Tewari performs on an average day—takes place entirely within an area the size of a cereal bowl. Tewari’s movements are deliberate and exact. Just a few wrong cuts could make the patient incontinent or unable to perform sexually for the rest of his life.

But Tewari is making those cuts from 10 feet away. With a robot.

From where I’m standing in the operating room, the patient is partially obscured by the large multi-armed robot that looms over him, as well as the team of surgical assistants and anesthesiologists that surround him. Tewari, meanwhile, sits at a large console. He stares into the 3D display while manipulating levers with his hands and fingers, which give him some haptic feedback. While the system resembles an old-school arcade video game, Tewari insists that there’s nothing game-like about it. Surgery is serious business.

Even from across the room, robots can make surgery better. For the surgeons, sitting at a console is less physically taxing than hunching over the body during an open procedure. The software is so sophisticated that it corrects a surgeon’s shaking hand. The zoomed-in camera view takes some getting used to, but for working in a small area, it’s great.
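The tremor-correction idea can be sketched with the simplest possible smoothing filter, an exponential moving average. Real surgical systems use far more sophisticated motion filtering, and the smoothing constant here is an arbitrary assumption; the point is only that high-frequency jitter in the hand position is attenuated before it reaches the instrument.

```python
def smooth(positions, alpha=0.2):
    """Exponential moving average: each output leans mostly on the
    previous output, so brief jitters are damped."""
    if not positions:
        return []
    out = [positions[0]]
    for p in positions[1:]:
        out.append(alpha * p + (1 - alpha) * out[-1])
    return out

# A single 1 mm jitter spike in an otherwise steady hand...
hand = [0.0] * 10 + [1.0] + [0.0] * 10
tool = smooth(hand)
# ...reaches the instrument attenuated to 0.2 mm and fading.
```

The trade-off is latency: the same averaging that suppresses tremor also makes the tool lag deliberate motion slightly, which is why real systems tune this balance carefully.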

Read more at:

ScaAnalyzer: An award-winning tool to find computing bottlenecks

Computer developers work like runners in a race. One foot — software — has to keep pace with the advancement of the other foot — hardware. (And vice versa, of course).

(Article by Joseph Mclain, republished from

The computing world is full of the equivalents of blisters and scrunched-up socks, impediments to optimum speed that sometimes can be as much of a challenge to find tucked down among seemingly endless lines of code as a teeny pebble that migrates around a running shoe.

Xu Liu, a computer scientist at William & Mary, says bottlenecks hidden deep in a computer and its code are the equivalent of a pebble in a shoe. Liu, an assistant professor of computer science, and Bo Wu, a 2014 alumnus of William & Mary’s Ph.D. program in computer science, have developed a tool that finds elusive software bottlenecks and allows computers to run faster and more efficiently.

They call the tool “ScaAnalyzer,” and their introduction of the tool was named Best Paper at the Supercomputing ’15 conference. The gathering is also known as the International Conference for High Performance Computing, Networking, Storage and Analysis. It was established in 1988 by the Association for Computing Machinery and the IEEE Computer Society. The gathering attracts thousands of engineers, scientists and other computing professionals involved in the design and operation of the world’s most powerful computational machines.

ScaAnalyzer is designed to address scalability problems in particular, Liu explained. Liu says that today’s computer programs require thousands, tens of thousands — even millions of lines of code, with no end in sight. And of course the computer hardware necessarily grows as well. The simple single-core central processing unit (CPU) has largely been supplanted by multi-core CPU systems.

“Smartphones, high-end servers, supercomputers: they have many-core systems,” Liu said. “The CPU grows from four cores to 12 cores, to sometimes more than 60 cores.”

He explained that bottlenecks inhibit the scalability of an application, its ability to expand and take advantage of the increased computing potential of a multi-core system. Software bottlenecks are usually easy to deal with, and hardware bottlenecks can be eliminated in next-generation designs.

But to get rid of any bottleneck, you have to find it first.

“One problem here is that it’s really, really difficult for application developers [to find bottlenecks] inside the code,” Liu explained. “If you have a really big code, thousands or millions of lines of code, how can you identify a smaller code portion that causes a big problem?”
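ScaAnalyzer's implementation is not described here, but the core task Liu describes, attributing cost to a small region of a large code base, can be sketched with Python's built-in profiler as a stand-in (the workload functions are invented for illustration):

```python
import cProfile
import pstats

def hot(n):
    """Deliberately quadratic: the hidden bottleneck."""
    total = 0
    for i in range(n):
        for j in range(n):
            total += i ^ j
    return total

def cold(n):
    """Cheap linear helper, for contrast."""
    return sum(range(n))

def find_bottleneck(n=400):
    """Profile the workload and return the costliest function's name."""
    profiler = cProfile.Profile()
    profiler.enable()
    hot(n)
    cold(n)
    profiler.disable()
    stats = pstats.Stats(profiler)
    # stats.stats maps (file, line, name) -> (cc, nc, tottime, cumtime, callers)
    costliest = max(stats.stats.items(), key=lambda kv: kv[1][2])
    return costliest[0][2]   # the function name with the highest tottime

# find_bottleneck() points straight at hot(), the quadratic loop.
```

A flat profiler like this ranks functions by time; the harder problem ScaAnalyzer targets is attributing *scalability loss* (contention in the memory subsystem as core counts grow) back to source lines, which a simple time profile does not reveal.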

ScaAnalyzer takes aim at a computer’s memory subsystem, a complex area that tends to breed both software and hardware bottlenecks. Liu says ScaAnalyzer can help to pinpoint trouble areas in both software and hardware.

“The hardware designers can design different memory layers. A layer might have different features like size, speed, bandwidth,” he explained. ScaAnalyzer can zero in on problem areas in the chip architecture. “We can give this feedback to the hardware vendors, and tell them ‘Maybe you should focus on this memory layer.’”

The Best Paper award attests to the perceived value of ScaAnalyzer among the supercomputing community. Liu and Wu’s paper won out over a field of five other finalists representing the best offerings from an international who’s who in supercomputing. Virginia Torczon, dean of graduate studies at William & Mary, said that Liu and Wu scored an immense coup.

“That two of William & Mary’s own could win out over papers from researchers from top universities, corporate research labs, national supercomputing centers and research institutes from around the world speaks to the high caliber of the computer science research program that we have managed to put together at William & Mary,” said Torczon, a computer scientist herself and former chair of the university’s Department of Computer Science.

Wu, now on the faculty of the Colorado School of Mines, is working with Liu to promote ScaAnalyzer. Despite its considerable value to the computing industry, Wu and Liu are making ScaAnalyzer available for free, as an open-source utility.

Read more at:

Innovative, New Biocompatible Column Hardware

IDEX Health & Science, LLC announces the launch of its revolutionary biocompatible columns and accessories. These PEEK-lined stainless steel (PLS) columns combine the strength of traditional stainless steel with the chemical inertness of PEEK to provide a solution that ensures the integrity of biological samples while operating at ultra-high pressures of 20,000 psi. The higher operating pressures allow faster-throughput separations. The PLS hardware comes with removable frit assemblies, which give column packers an economical replacement option. IDEX Health & Science’s state-of-the-art MarvelX™ UHPLC Connection Systems complement the non-metal fluid path and ensure a guaranteed connection with zero dead volume.

(Article by Heidi Lechner, republished from

“Our PLS columns address the market for UHPLC bio inert applications. With the market evolution especially in the BioPharma industry, there is an increasing application need for high-performance separations in the biosciences. Our PLS hardware ensures the integrity of bio-molecules and minimizes unwanted surface interactions when working under harsh solvent or pH conditions,” says Saba Jazeeli, Product Solutions Manager for IDEX Health & Science.

Another unique feature of these biocompatible columns is that, with the PEEK encapsulated in stainless steel, ultra-high-pressure applications are possible. IDEX Health & Science offers an extensive portfolio of 2.1mm and 4.6mm ID column lengths with both PEEK and titanium frit options, based on pressure demands.

“This hardware is best suited to throughput critical applications that require metal free, highly inert flow paths. Some applications include high resolution biomolecule separations, ion chromatography, or applications that can be corrosive to traditional stainless steel hardware,” notes Dan Czarnecki, Engineer, for IDEX Health & Science.

About IDEX Health & Science LLC

IDEX Health & Science is the global authority in fluidics and optics for the life sciences market, offering a three-fold advantage to customers by bringing optofluidic paths to life with products, people, and engineering expertise. Respected worldwide for solving complex problems, IDEX Health & Science delivers complete life science instrumentation development innovation for analytical, diagnostic and biotechnology applications. With the industry’s broadest portfolio of state-of-the-art components and capabilities, IDEX Health & Science is changing the vision for optofluidic solutions, anticipating customers’ needs with intelligent solutions for life. Product offerings include: connections, valves, pumps, degassers, column hardware, manifolds, microfluidics, consumables, integrated fluidic assemblies, filters, lenses, shutters, laser sources, light engines and integrated optical assemblies. For more information visit:

Read more at:

Listening to the symphony of the universe

The historic announcement of the discovery of gravitational waves, by the Laser Interferometer Gravitational-Wave Observatory (LIGO) Scientific Collaboration (LSC) on February 11, marks the beginning of an entirely new way of looking at the universe. “We are now actually about to take off,” said David Reitze, the Executive Director of the LIGO Laboratory, after making the announcement. “The new window of gravitational astronomy has just opened up.”

(Article by R. Ramachandran, republished from

A new paradigm

For hundreds of years after Galileo’s time, the sky was observed with optical telescopes that looked at celestial objects either by the visible light they emitted or by the light scattered off them. With advances in technology, the universe began to be observed using different wavelengths of the electromagnetic spectrum. Observational windows in radio waves, infrared and ultraviolet rays, X-rays and gamma rays opened up, and many terrestrial and space-based instruments have enabled scientists to gain new insights into the working of the universe. Neutrino astronomy, which looks at neutrinos from solar and extrasolar sources, is another window to the universe that has opened up in the last few decades. Now, the discovery of gravitational waves opens up an entirely new paradigm in observational astronomy.

The event that led to the discovery of gravitational waves — the coalescence of two orbiting black holes — has itself thrown up very interesting questions. First, it is unusual that black holes of about 30-35 solar masses squeezed into about 150 km exist; from a stellar evolution perspective, you would expect black holes to be only a few to about 10-15 solar masses.

Even more unusual is the fact that there were two of them orbiting each other at about half the speed of light and merging into a single black hole of nearly double the individual masses. What kinds of stars leave behind ‘stellar black holes’ with tens of solar masses? Will more such objects show up as gravitational wave astronomy evolves?

Perhaps. More such objects, and others that are entirely unexpected, may reveal themselves through their gravitational wave radiation, telling us new things about the universe. The discovery has thus tuned our ears to an entirely new and unfamiliar symphony of the universe.

To listen to the full range of that symphony, a network of terrestrial instruments similar to the LIGO interferometer in the U.S. is needed — in particular, instruments in the southern hemisphere or closer to the equator that can look at the southern sky better.

With the two instruments in Washington and Louisiana, the source of the gravitational wave that signalled its discovery could be pinpointed only within a large patch of about 600 square degrees in the southern hemisphere; a crescent-like region of about 60 degrees x 10 degrees across.

The moon subtends an angle of about 0.5 degrees as seen from Earth, roughly a 0.25 square degree region of sky. So the uncertainty in the localisation was a region as wide as about 2,500 moons stacked together, an area as large as many stellar surveys cover. That, indeed, is a huge uncertainty, and astronomers would like to do better by at least an order of magnitude.
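That back-of-the-envelope figure can be checked directly, using the article's own rounded value of a quarter of a square degree per lunar disc:

```python
# Sky areas, in square degrees, from the figures quoted above.
crescent = 60 * 10            # the ~60 deg x 10 deg localisation patch
moon_patch = 0.25             # the article's rounded area per lunar disc

moons_needed = crescent / moon_patch
# moons_needed = 2400.0, i.e. "about 2,500 moons" after rounding.
```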

Localisation of a source is done by the technique of triangulation, with a minimum of three stations, and the accuracy of this technique increases with longer baselines between any two of the instruments. The baseline between Louisiana and Washington corresponds to just a 7-10 millisecond (ms) time delay for a signal at the speed of light.

If the interferometer at Pisa in Italy called VIRGO, which collaborates with LIGO in gravitational wave observations, had been operational on September 14, 2015, when this gravitational wave arrived, the time delay would have been about 22 ms, and the localisation accuracy would have improved to a smaller 200 square degree window.

If a LIGO-India had been set up and working, the time delay between the U.S. LIGO detectors and India would have been much greater, about 36-39 ms, which would have narrowed the localisation down to a small 5-10 square degree patch of sky, nearly a factor of a hundred better.

The maximum delay possible on the globe is about 42 ms, between two diametrically opposite points on the Earth, and the baseline delay with India would be nearly that value. Therein lies the importance of a LIGO-like instrument in this part of the world.
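The quoted delays follow from dividing the straight-line (chord) distance between detectors by the speed of light; the baselines below are approximate values assumed for illustration, not surveyed figures:

```python
C_KM_PER_S = 299_792.458      # speed of light in vacuum

def max_delay_ms(baseline_km):
    """Largest possible arrival-time difference between two detectors:
    the light travel time along the straight line between them."""
    return baseline_km / C_KM_PER_S * 1000.0

hanford_livingston = max_delay_ms(3_002)    # ~10 ms, as quoted
india_baseline = max_delay_ms(11_500)       # ~38 ms, in the quoted 36-39 ms range
earth_diameter = max_delay_ms(12_742)       # ~42.5 ms, the global ceiling
```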

The Indian proposal

The Indian proposal for such an instrument has been awaiting the government’s approval for nearly four years now. The proposal was made under the Indo-U.S. cooperation agreement after a formal offer of locating an interferometer in India was made by the U.S. LIGO lab and the U.S. National Science Foundation (NSF) in October 2011. In November 2011, a formal proposal was submitted to the Department of Atomic Energy (DAE) and the Department of Science and Technology (DST) by the Indian Initiative in Gravitational-wave Observations (IndiGO) Consortium, which was formed in 2009.

According to the proposal, the U.S. would ship all the hardware required for LIGO-India, and it would be India’s responsibility to construct and operate it. Both the NSF and the Indian government would fund the project. The estimated cost was placed at Rs. 1,260 crore.

In April 2012, the Indian Atomic Energy Commission approved LIGO-India as a mega science project of the DAE, following which the U.S. National Science Board (NSB) formally approved the location of an advanced LIGO detector in India in August 2012. In December that year, the National Development Council included LIGO-India as one of the mega science projects to be taken up during the 12th Plan period. Since then, the project has been awaiting a formal nod from the government.

“We have not really lost all of four years’ time,” said Tarun Souradeep of the Inter-University Centre for Astronomy and Astrophysics (IUCAA), Pune, the spokesperson for IndiGO Consortium. “IndiGO people have not been just sitting idle all this while. We have been doing all the necessary groundwork, getting subsystems that we need designed, prototypes made and tested, and getting ready to start building it as soon as approval is obtained. The NSF is still banking on us and backing us because of the progress that they have seen, and we are hopeful of the approval coming through,” he said. If approved today, the three station-network — with the two instruments of LIGO-U.S. and LIGO-India — is expected to start functioning in the 2022-2024 timeframe and be operational for 10 years.

The entire infrastructure will be India’s responsibility. The three lead institutions in the execution of the project will be the IUCAA, the Institute for Plasma Research (IPR) in Ahmedabad, and the Raja Ramanna Centre for Advanced Technology (RRCAT) in Indore.

While the IPR, which has the experience of working with high-vacuum systems, has already designed and tested prototypes of appropriate systems and components that will be required, RRCAT has designed the components and material required for ultra-stable low-frequency sub-kilohertz lasers, which would be identical to the existing LIGO devices.

From an initial set of 22 potential sites, the group has also shortlisted two for the instrument: one at Kalyanpura near Udaipur/Chittorgarh in Rajasthan and the other at Aundha near Hingoli in Maharashtra. The IPR has drawn up a plan for its construction, and Tata Consulting Engineers Ltd. has completed a feasibility study of the project.

Since it is a DAE project of over Rs. 500 crore, it needs the approval of the Cabinet Committee on Security (CCS). Only after that will it go through the process of financial approval and so on. The Prime Minister’s in-principle approval, delivered through a Twitter message, is therefore highly significant, as he holds the atomic energy portfolio, the key ministry for this project.

“An Indian mega gravitational wave astronomy project, especially in the wake of this historic discovery, means a great opportunity for showcasing Indian capability at the cutting-edge of science and technology,” said Mr. Souradeep. “It will also enrich technological areas like precision metrology, photonics and control systems. But most importantly, it will inspire coming generations of young Indians to engage in international scientific research within the country.”

Read more at:

/healthrangerinventions/2016-02-15-listening-to-the-symphony-of-the-universe.html/feed 0
NASA dives deeper into how it’s really using HoloLens /healthrangerinventions/2016-02-15-nasa-dives-deeper-into-how-its-really-using-hololens.html /healthrangerinventions/2016-02-15-nasa-dives-deeper-into-how-its-really-using-hololens.html#respond Wed, 30 Nov -0001 00:00:00 +0000

A year ago we heard that NASA and Microsoft were teaming up to build Sidekick, a project that uses HoloLens to let astronauts and scientists collaborate remotely, as well as visualize 3D schematics. Now we’ve finally got a closer look at Sidekick in action thanks to NASA’s Jeff Norris, who discussed the project during a Vision Summit presentation. Norris, who leads mission control innovation for NASA’s Jet Propulsion Laboratory, mainly focuses on the 3D visualization aspect (“Procedure Mode”).

(Article by Devindra Hardawar, republished from

Rather than waiting for components to be assembled for testing, Sidekick lets NASA explore potential issues ahead of time. Engineers and scientists can do everything from viewing full-scale holograms to diving into individual components. The sample imagery in the video doesn’t look too complex at the moment, but it’s the sort of thing that will evolve along with Microsoft’s HoloLens hardware.


Read more at:

/healthrangerinventions/2016-02-15-nasa-dives-deeper-into-how-its-really-using-hololens.html/feed 0
New hardware to expand fast fiber-to-the-home /healthrangerinventions/2016-02-15-new-hardware-to-expand-fast-fiber-to-the-home.html /healthrangerinventions/2016-02-15-new-hardware-to-expand-fast-fiber-to-the-home.html#respond Wed, 30 Nov -0001 00:00:00 +0000 The cost of deploying fast fibre connections straight to homes could be dramatically reduced by new hardware designed and tested by UCL researchers. The innovative technology will help address the challenges of providing households with high bandwidths while futureproofing infrastructure against the exponentially growing demand for data.

(Article by University College London, republished from

While major advances have been made in core optical fibre networks, they often terminate in cabinets far from the end consumers. The so-called ‘last mile’, which connects households to the global Internet via the cabinet, is still almost exclusively built with copper cables, as the optical receiver needed to read fibre-optic signals is too expensive to put in every home.

Lead researcher, Dr Sezer Erkilinc (UCL Electronic & Electrical Engineering), said: “We have designed a simplified optical receiver that could be mass-produced cheaply while maintaining the quality of the optical signal. The average data transmission rates of copper cables connecting homes today are about 300 Mb/s and will soon become a major bottleneck in keeping up with data demands, which will likely reach about 5-10 Gb/s by 2025. Our technology can support speeds up to 10 Gb/s, making it truly futureproof.”
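
To put those rates in context, a quick back-of-the-envelope calculation (an illustrative sketch, not from the study; the 50 GB file size is a made-up example) shows what the jump from 300 Mb/s copper to 10 Gb/s fibre would mean for a single large download:

```python
# Illustrative comparison of ideal download times at the copper and
# fibre rates quoted in the article. Real-world throughput is lower.

def download_seconds(size_gigabytes: float, rate_megabits_per_s: float) -> float:
    """Time to transfer `size_gigabytes` at `rate_megabits_per_s` over an ideal link."""
    size_megabits = size_gigabytes * 8 * 1000  # 1 GB = 8000 Mb (decimal units)
    return size_megabits / rate_megabits_per_s

copper = download_seconds(50, 300)     # 50 GB file over 300 Mb/s copper
fibre = download_seconds(50, 10_000)   # same file over 10 Gb/s fibre

print(f"copper: {copper:.0f} s, fibre: {fibre:.0f} s")  # ~1333 s vs 40 s
```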

For the study, published today in the Journal of Lightwave Technology, scientists from the UCL Optical Networks Group and UNLOC programme developed a new way to solve the ‘last mile problem’ of delivering fibre connections direct to households with true fibre-to-the-home (FTTH) broadband technology. They simplified the design of the optical receiver, improving sensitivity and network reach compared to existing technology. Once commercialised, it will lower the cost of installing and maintaining active components between the central cabinet and homes.

Academic and industry experts, along with policy makers, largely agree that FTTH is the most futureproof solution to meet the fast and exponentially growing demand for bandwidth. Yet even in countries leading the way in implementing FTTH technology, such as Japan, South Korea and Hong Kong, fewer than 50% of connections use FTTH, while this figure is less than 1% in the UK.

A major factor limiting the uptake of FTTH is the overall cost of laying optical fibre cables to each household and providing affordable optical receivers to connect them to the network. The highly sensitive coherent optical receivers used in core networks are desirable but also complex, which makes them expensive to manufacture. Directly using such receivers in homes would push the cost of FTTH beyond that of current copper-based solutions.

The novel optical receiver retains many of the advantages of the conventional optical receivers typically used in core networks, but is smaller and contains around 75-80% fewer components, lowering the cost of manufacture and maintenance.

Co-author Dr Seb Savory, previously at UCL and now at the University of Cambridge, added: “Our receiver is much simpler, containing just a quarter of the detectors used in a conventional coherent optical receiver. We achieved this by applying a combination of two techniques. First, a coding technique often used in wireless communications was used to enable the receiver to be insensitive to the polarisation of the incoming signals. Second, we deliberately offset the receiver laser from the transmitter laser, with the additional benefit that this allows the same single optical fibre to be used for both upstream and downstream data.”

The researchers are now investigating the laser stability of the receiver, which is an important step to building a commercial prototype of the system.

Dr Erkilinc added: “Once we’ve quantified the laser stability, we will be in a strong position to take the receiver design through field trials and into commercialisation. It is so exciting to engineer something that may one day be in everyone’s homes and make them a part of the digital revolution.”

Read more at:

/healthrangerinventions/2016-02-15-new-hardware-to-expand-fast-fiber-to-the-home.html/feed 0
Computers in Medical Laboratory /healthrangerinventions/2016-02-15-computers-in-medical-laboratory.html /healthrangerinventions/2016-02-15-computers-in-medical-laboratory.html#respond Wed, 30 Nov -0001 00:00:00 +0000 There are two broad areas of applications of computers in laboratory medicine. Firstly, computers automate the handling of the alphanumeric information needed to request, organise, perform, report, and interpret results as well as store information for processes carried out by clinical laboratories. These functions are loosely clubbed together as “data processing”.

(Article by Dr. Th Dhabali Singh,

On the other hand, computers monitor and control laboratory instruments and are increasingly becoming an integral component of the instrument. In both data processing and instrumentation, computers initially merely automated existing manual functions. However, as the speed, memory and computational capacity of computers are more fully exploited, the quantitative increase in processing power is leading to qualitatively new applications and numerous innovations.

Data processing and laboratory information systems

From patient registration at the front desk to the delivery of reports, computers are extensively used in a laboratory to organise work, report test results and prepare patient bills. Most of these processes are handled through the Laboratory Information System (LIS).

An LIS is software that processes, stores and manages data from all stages of medical processes and tests. Physicians and lab technicians use the LIS to coordinate varieties of medical testing in the laboratory. An LIS has features that manage patient registration, order entry, specimen processing, billing, result entry and patient demographics. It also tracks clinical details about a patient during a visit and keeps the information in its database for future reference or retrieval.
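
As a rough illustration (a minimal, hypothetical model, not any specific LIS product), the stages listed above — registration, order entry, specimen processing, result entry — can be pictured as fields and operations on a single patient-visit record:

```python
from dataclasses import dataclass, field

# Hypothetical, minimal sketch of one LIS visit record. Real systems
# track far more detail and enforce workflow rules between stages.
@dataclass
class LabVisit:
    patient_id: str
    demographics: dict                               # patient registration data
    orders: list = field(default_factory=list)       # tests requested (order entry)
    specimens: dict = field(default_factory=dict)    # barcode -> specimen type
    results: dict = field(default_factory=dict)      # test name -> reported value

    def order_test(self, test_name: str) -> None:
        self.orders.append(test_name)

    def enter_result(self, test_name: str, value: str) -> None:
        # Result entry is only valid for a test that was actually ordered.
        if test_name not in self.orders:
            raise ValueError(f"{test_name} was never ordered")
        self.results[test_name] = value

visit = LabVisit("P-1001", {"age": 42, "sex": "F"})
visit.order_test("CBC")
visit.enter_result("CBC", "WBC 6.2 x10^9/L")
```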

Once the information required for processing of laboratory results has been entered into a computer, it can be used in ways that would not have been possible manually. The transmission of information about patients or specimens in machine-readable forms such as barcodes represents another data-processing application of the computer in laboratory medicine.

The interfacing of machines to the LIS allows direct entry of results from the various pieces of equipment in the laboratory. This helps prevent the risks inherent in manual entry of laboratory results or manual labelling of specimens. The ability of computers to automatically perform complicated searches of large databases makes it possible to instantly retrieve the specific information needed for the management of a particular patient.

Increasingly, computers are being used not to simply collect and organise clinical data but to interpret it as well. For example, computer programs for the identification of bacteria on the basis of the pattern of their biochemical test results have achieved accuracies comparable to that of trained microbiologists.
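
The biochemical-profile approach can be sketched as simple pattern matching: each species has an expected pattern of positive/negative test results, and an unknown isolate is assigned to the reference profile it matches best. (A toy illustration; the profiles below are invented for the example and are not clinical data.)

```python
# Toy example: identify a bacterium by comparing its biochemical test
# results (1 = positive, 0 = negative) against invented reference profiles.
PROFILES = {
    "E. coli":       {"indole": 1, "citrate": 0, "urease": 0, "lactose": 1},
    "K. pneumoniae": {"indole": 0, "citrate": 1, "urease": 1, "lactose": 1},
    "P. mirabilis":  {"indole": 0, "citrate": 0, "urease": 1, "lactose": 0},
}

def identify(observed: dict) -> str:
    """Return the species whose reference profile agrees with the most observed results."""
    def score(species: str) -> int:
        profile = PROFILES[species]
        return sum(observed[test] == profile[test] for test in observed)
    return max(PROFILES, key=score)

print(identify({"indole": 1, "citrate": 0, "urease": 0, "lactose": 1}))  # E. coli
```

A real identification system would weight tests by reliability and report a confidence level alongside the best match.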

Computer-controlled instrumentation

Instrumentation refers to the variety of measuring instruments used to monitor and control a process. Computers have been extensively used to collect test results from clinical laboratory instruments. Modern laboratory equipment and analysers incorporate computers which control the instrument, monitor its performance, and calculate analytical results.

Computers have made possible fundamentally new kinds of instruments. Automated blood cell classification by digital image processing and pattern recognition has allowed instruments to make measurements which formerly required the visual recognition skills of trained technicians. Real-time polymerase chain reaction (PCR) techniques have fundamentally changed the discipline of molecular biology.

Nanotechnology, biosensors, ELISA, computerised tomography scans, digital radiography, etc. are just examples of applications of computers and instruments that have completely revolutionised the way we look at medical diagnostics.

Computer-aided diagnosis and medical imaging

Computer-aided diagnosis (CAD) has become one of the major research subjects in medical imaging and diagnostic radiology. With CAD, radiologists use the computer output as a “second opinion” when making decisions. CAD is a concept that assigns equal roles to physicians and computers, whereas automated computer diagnosis relies on computer algorithms alone.

With CAD, the performance of computers does not have to be comparable to or better than that of physicians, but needs to be complementary to it. In fact, a large number of CAD systems have been employed to assist physicians in the early detection of breast cancer on mammograms. Another important application of computers in diagnostic radiology is the picture archiving and communication system (PACS).

PACS is a medical imaging technology which provides economical storage and convenient access to images from multiple modalities or systems. Electronic images and reports are transmitted digitally via PACS and this eliminates the need to manually file, retrieve, or transport films. Most PACSs handle images from various medical imaging instruments, including ultrasound, magnetic resonance imaging (MRI), computed tomography (CT scans), positron emission tomography (PET scans), endoscopy, mammograms, digital radiography, computed radiography, ophthalmology, etc.

Clinical areas beyond radiology, such as cardiology, oncology, gastroenterology, and even the laboratory, are creating images that can be incorporated into PACS. This allows remote access through off-site viewing and reporting, thereby paving the way for teleradiology and telediagnosis.


While computers are often introduced into the clinical laboratory to improve the accuracy and efficiency of traditional functions and instruments, they are increasingly being used to handle large medical databases in numerous innovative ways, to interpret laboratory data, to take over complex pattern-recognition functions, and to construct instruments which generate qualitatively new kinds of information for patient care. The computer, which came to the medical laboratory as a clerk, is now becoming a consultant.

Read more at:

/healthrangerinventions/2016-02-15-computers-in-medical-laboratory.html/feed 0
Icky roach-like robots might help in disasters /healthrangerinventions/2016-02-15-icky-roach-like-robots-might-help-in-disasters.html /healthrangerinventions/2016-02-15-icky-roach-like-robots-might-help-in-disasters.html#respond Wed, 30 Nov -0001 00:00:00 +0000

WASHINGTON (AP) — When buildings collapse in future disasters, the hero helping rescue trapped people may be a robotic cockroach.

(Article by The Associated Press, republished from

Repulsive as they may be, roaches have the remarkable ability to squish their bodies down to one quarter of their normal size, yet still scamper at lightning speed. They can also withstand 900 times their body weight without being hurt.

The amazing cockroach inspired scientists to create a mini-robot that can mimic those feats of strength and agility.

The researchers hope swarms of future roach-like robots could be fitted with cameras, microphones and other sensors and then used in earthquakes and other disasters to help search for victims. The skittering robots could also let rescuers know if the rubble pile is stable.

Cockroaches “seem to be able to go anywhere,” said University of California at Berkeley biology professor Robert Full, coauthor of a study about the prototype cockroach robot. “I think they’re really disgusting and really revolting, but they always tell us something new.”

The study was published last week in the journal Proceedings of the National Academy of Sciences.

The palm-size prototype, called the Compressible Robot with Articulated Mechanisms, or CRAM, looks more like an armadillo and walks sort of like Charlie Chaplin when it’s compressed. It’s about 20 times the size of the roach that inspired it. And it’s simple and cheap.

Coauthor Kaushik Jayaram, a Harvard robotics researcher, said the most difficult part was the design, but after that he used off-the-shelf electronics and motors, cardboard, polyester and some knowledge of origami.

All told, the prototype probably cost less than $100, Jayaram said. He figures if mass-produced, with sensors and other equipment added on, the robots could eventually cost less than $10 apiece.

In the past, when engineers looked at trying to create robots that could get into tight places, they thought about shape-changing soft animals like worms, slugs or octopuses, Full said. But the cockroach, which is already studied by roboticists for other abilities, has certain advantages, including crush-resistance and speed.

Read more at:

/healthrangerinventions/2016-02-15-icky-roach-like-robots-might-help-in-disasters.html/feed 0