Expand India's footprint in space
Vymanik Aerospace Blog
Friday, September 20, 2019
Tuesday, July 23, 2019
Chandrayaan 2, mission unlike any before
Chandrayaan 2 is on a mission unlike any before. Leveraging nearly a decade of scientific research and engineering development, India's second lunar expedition will shed light on a completely unexplored section of the Moon — its South Polar region. This mission will help us gain a better understanding of the origin and evolution of the Moon by conducting detailed topographical studies, comprehensive mineralogical analyses, and a host of other experiments on the lunar surface. While there, we will also explore discoveries made by Chandrayaan 1, such as the presence of water molecules on the Moon and new rock types with unique chemical composition. Through this mission, we aim to:
Inspire a future generation of scientists, engineers, and explorers
Surpass international aspirations
The Chandrayaan-2 launch, originally scheduled for 15 July 2019 at 02:51 hrs IST, was called off due to a technical snag noticed about an hour before launch. The launch was rescheduled for 22 July 2019 at 14:43 hrs IST from the Satish Dhawan Space Centre at Sriharikota, onboard GSLV Mk-III. The spacecraft will be injected into a 170 x 39,120 km Earth parking orbit. A series of manoeuvres will then raise its orbit and put Chandrayaan-2 on a lunar transfer trajectory. On entering the Moon's sphere of influence, on-board thrusters will slow the spacecraft for lunar capture. The orbit of Chandrayaan-2 around the Moon will then be circularized to a 100 x 100 km orbit through a series of orbital manoeuvres. On the day of landing, the lander will separate from the orbiter and perform a series of complex manoeuvres comprising rough braking and fine braking. The landing site region will be imaged before landing to identify safe, hazard-free zones. The lander, Vikram, will finally land near the South Pole of the Moon on 7 September 2019. Subsequently, the rover will roll out and carry out experiments on the lunar surface for a period of one lunar day, which is equal to 14 Earth days. The orbiter will continue its mission for a duration of one year.
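The orbit-raising sequence above can be sketched with the vis-viva equation. The parking-orbit figures come from the mission profile; the transfer apogee of roughly 384,400 km (the mean Earth-Moon distance) and treating the raise as a single perigee burn are simplifying assumptions here, so this is an illustrative estimate, not ISRO's actual manoeuvre plan.

```python
import math

MU_EARTH = 398_600.4418   # km^3/s^2, Earth's gravitational parameter
R_EARTH = 6_371.0         # km, mean Earth radius

def visviva(r, a):
    """Orbital speed (km/s) at radius r on an orbit with semi-major axis a."""
    return math.sqrt(MU_EARTH * (2.0 / r - 1.0 / a))

# Parking orbit quoted above: 170 x 39,120 km (altitudes above the surface)
r_peri = R_EARTH + 170.0
r_apo = R_EARTH + 39_120.0
a_park = (r_peri + r_apo) / 2.0

# Assumed lunar-transfer apogee: roughly the mean Earth-Moon distance
r_transfer_apo = 384_400.0
a_tli = (r_peri + r_transfer_apo) / 2.0

v_park = visviva(r_peri, a_park)  # perigee speed on the parking orbit
v_tli = visviva(r_peri, a_tli)    # perigee speed needed for the transfer orbit
dv = v_tli - v_park               # total perigee delta-v, idealized

print(f"perigee speed (parking):  {v_park:.2f} km/s")
print(f"perigee speed (transfer): {v_tli:.2f} km/s")
print(f"perigee delta-v:          {dv:.3f} km/s")
```

In reality the apogee is raised over several burns (one per perigee pass), but the total delta-v of the idealized single burn is a useful order-of-magnitude check: a few hundred metres per second.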
Science experiments
Chandrayaan-2 has several science payloads to expand the lunar scientific knowledge through the detailed study of topography, seismography, mineral identification and distribution, surface chemical composition, thermo-physical characteristics of topsoil and composition of the tenuous lunar atmosphere, leading to a new understanding of the origin and evolution of the Moon.
The Orbiter payloads will conduct remote-sensing observations from a 100 km orbit while the Lander and Rover payloads will perform in-situ measurements near the landing site.
To understand the lunar composition, the mission plans to identify elements and map their distribution on the lunar surface, at both global and in-situ levels. In addition, detailed 3-dimensional mapping of the lunar regolith will be done. The near-surface plasma environment and electron density in the lunar ionosphere will be measured. Thermo-physical properties of the lunar surface and seismic activity will also be measured. Water molecule distribution will be studied using infrared spectroscopy, synthetic aperture radiometry and polarimetry, as well as mass spectroscopy techniques.
Key payloads
Chandrayaan 2 Large Area Soft X-ray Spectrometer: elemental composition of the Moon
Imaging IR Spectrometer: mineralogy mapping and water-ice confirmation
Synthetic Aperture Radar (L and S band): polar-region mapping and sub-surface water-ice confirmation
Orbiter High-Resolution Camera: high-resolution topography mapping
Chandra's Surface Thermophysical Experiment: thermal conductivity and temperature gradient
Alpha Particle X-ray Spectrometer and Laser-Induced Breakdown Spectroscope: in-situ elemental analysis and abundance in the vicinity of the landing site
Why Chandrayaan 2?
Why are we going to the Moon?
The Moon is the closest cosmic body at which space discovery can be attempted and documented. It is also a promising testbed to demonstrate the technologies required for deep-space missions. Chandrayaan 2 attempts to foster a new age of discovery, increase our understanding of space, stimulate the advancement of technology, promote global alliances, and inspire a future generation of explorers and scientists.
What are the scientific objectives of Chandrayaan 2? Why explore the Lunar South Pole?
The Moon provides the best linkage to Earth’s early history. It offers an undisturbed historical record of the inner Solar System environment. Though there are a few mature models, the origin of the Moon still needs further explanation. Extensive mapping of the lunar surface to study variations in surface composition is essential to trace the origin and evolution of the Moon. The evidence for water molecules discovered by Chandrayaan-1 requires further study of the extent of water distribution on the surface, below the surface, and in the tenuous lunar exosphere to address the origin of water on the Moon.
The lunar South Pole is especially interesting because the surface area that remains in shadow there is much larger than at the North Pole. There is a possibility of water being present in the permanently shadowed areas around it. In addition, the South Pole region has craters that are cold traps, containing a fossil record of the early Solar System.
Chandrayaan-2 will attempt to soft-land the lander, Vikram, and the rover, Pragyan, on a high plain between two craters, Manzinus C and Simpelius N, at a latitude of about 70° south.
Monday, July 22, 2019
Chandrayaan 2
Inching towards the edge of discovery
Are you ready for the unknown?
Chandrayaan 2 is an Indian lunar mission that will boldly go where no country has ever gone before — the Moon's south polar region. Through this effort, the aim is to improve our understanding of the Moon — discoveries that will benefit India and humanity as a whole. These insights and experiences are aimed at a paradigm shift in how lunar expeditions are approached for years to come, propelling further voyages into the farthest frontiers.
Friday, July 19, 2019
Behind the scenes of the Apollo mission at MIT
From making the lunar landings possible to interpreting the meaning of the moon rocks, the Institute was a vital part of history.
Fifty years ago this week, humanity made its first expedition to another world, when Apollo 11 touched down on the moon and two astronauts walked on its surface. That moment changed the world in ways that still reverberate today. MIT’s deep and varied connections to that epochal event — many of which have been described on MIT News — began years before the actual landing, when the MIT Instrumentation Laboratory (now Draper) signed the very first contract to be awarded for the Apollo program after its announcement by President John F. Kennedy in 1961. The Institute’s involvement continued throughout the program — and is still ongoing today. MIT’s role in creating the navigation and guidance system that got the mission to the moon and back has been widely recognized in books, movies, and television series. But many other aspects of the Institute’s involvement in the Apollo program and its legacy, including advances in mechanical and computational engineering, simulation technology, biomedical studies, and the geophysics of planet formation, have remained less celebrated. Amid the growing chorus of recollections in various media that have been appearing around this 50th anniversary, here is a small collection of bits and pieces about some of the unsung heroes and lesser-known facts from the Apollo program and MIT’s central role in it.

A new age in electronics

The computer system and its software that controlled the spacecraft — called the Apollo Guidance Computer and designed by the MIT Instrumentation Lab team under the leadership of Eldon Hall — were remarkable achievements that helped push technology forward in many ways. The AGC’s programs were written in one of the first-ever compiler languages, called MAC, which was developed by Instrumentation Lab engineer Hal Laning.
The computer itself, the 1-cubic-foot Apollo Guidance Computer, was the first significant use of silicon integrated circuit chips and greatly accelerated the development of the microchip technology that has gone on to change virtually every consumer product. In an age when most computers took up entire climate-controlled rooms, the compact AGC was uniquely small and lightweight. But most of its “software” was actually hard-wired: The programs were woven, with tiny donut-shaped metal “cores” strung like beads along a set of wires, with a given wire passing outside the donut to represent a zero, or through the hole for a 1. These so-called rope memories were made in the Boston suburbs at Raytheon, mostly by women who had been hired because they had experience in the weaving industry. Once made, there was no way to change individual bits within the rope, so any change to the software required weaving a whole new rope, and last-minute changes were impossible. As David Mindell, the Frances and David Dibner Professor of the History of Engineering and Manufacturing, points out in his book “Digital Apollo,” that system represented the first time a computer of any kind had been used to control, in real-time, many functions of a vehicle carrying human beings — a trend that continues to accelerate as the world moves toward self-driving vehicles. Right after the Apollo successes, the AGC was directly adapted to an F-8 fighter jet, to create the first-ever fly-by-wire system for aircraft, where the plane’s control surfaces are moved via a computer rather than direct cables and hydraulic systems. This approach is now widespread in the aerospace industry, says John Tylko, who teaches MIT’s class 16.895J (Engineering Apollo: The Moon Project as a Complex System), which is taught every other year. As sophisticated as the computer was for its time, computer users today would barely recognize it as such. 
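The rope-memory scheme described above (a sense wire threaded through a core reads as 1, routed around it as 0) amounts to a write-once data structure. The sketch below is a toy model, not a simulation of the actual AGC hardware; the 15-bit word length matches the AGC, but the weaving details are abstracted away.

```python
# Toy model of core rope memory: each word's bits are fixed at "weaving"
# time and can never be changed afterwards -- hence the frozen tuples.
# Changing even one bit of the "software" means weaving an entirely new rope.

def weave_rope(words, word_len=15):
    """'Weave' a rope: freeze a list of integers into immutable bit patterns.
    The AGC used 15-bit data words (plus parity); word_len is configurable here."""
    rope = []
    for w in words:
        # Most-significant bit first: 1 = wire through the core, 0 = around it
        bits = tuple((w >> i) & 1 for i in reversed(range(word_len)))
        rope.append(bits)
    return tuple(rope)  # fully immutable once "manufactured"

def read_word(rope, address):
    """Sense one word back out of the rope as an integer."""
    bits = rope[address]
    return sum(b << i for i, b in enumerate(reversed(bits)))

# Octal values here are arbitrary placeholders, not real AGC code
rope = weave_rope([0o12345, 0o54321, 0o00007])
print(oct(read_word(rope, 0)))  # round-trips the woven value: 0o12345
```

Trying to assign into `rope` raises a `TypeError`, which is the point: like the physical rope, the only way to "patch" it is to build a new one.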
Its keyboard and display screen looked more like those on a microwave oven than a computer: a simple numeric keypad and a few lines of five-digit luminous displays. Even the big mainframe computer used to test the code as it was being developed had no keyboard or monitor that the programmers ever saw. Programmers wrote their code by hand, then typed it onto punch cards — one card per line — and handed the deck of cards to a computer operator. The next day, the cards would be returned with a printout of the program’s output. And in this time long before email, communications among the team often relied on handwritten paper notes.

Priceless rocks

MIT’s involvement in the geophysical side of the Apollo program also extends back to the early planning stages — and continues today. For example, Professor Nafi Toksöz, an expert in seismology, helped to develop a seismic monitoring station that the astronauts placed on the moon, where it helped lead to a greater understanding of the moon’s structure and formation. “It was the hardest work I have ever done, but definitely the most exciting,” he has said. Toksöz says that the data from the Apollo seismometers “changed our understanding of the moon completely.” The seismic waves, which on Earth continue for a few minutes, lasted for two hours, which turned out to be the result of the moon’s extreme lack of water. “That was something we never expected, and had never seen,” he recalls. The first seismometer was placed on the moon’s surface very shortly after the astronauts landed, and seismologists including Toksöz started seeing the data right away — including every footstep the astronauts took on the surface. Even when the astronauts returned to the lander to sleep before the morning takeoff, the team could see that Buzz Aldrin ScD ’63 and Neil Armstrong were having a sleepless night, with every toss and turn dutifully recorded on the seismic traces.
MIT Professor Gene Simmons was among the first group of scientists to gain access to the lunar samples as soon as NASA released them from quarantine, and he and others in what is now the Department of Earth, Planetary and Atmospheric Sciences (EAPS) have continued to work on these samples ever since. As part of a conference on campus, he exhibited some samples of lunar rock and soil in their first close-up display to the public, where some people may even have had a chance to touch the samples. Others in EAPS have also been studying those Apollo samples almost from the beginning. Timothy Grove, the Robert R. Shrock Professor of Earth and Planetary Sciences, started studying the Apollo samples in 1971 as a graduate student at Harvard University, and has been doing research on them ever since. Grove says that these samples have led to major new understandings of planetary formation processes that have helped us understand the Earth and other planets better as well. Among other findings, the rocks showed that ratios of the isotopes of oxygen and other elements in the moon rocks were identical to those in terrestrial rocks but completely different than those of any meteorites, proving that the Earth and the moon had a common origin and leading to the hypothesis that the moon was created through a giant impact from a planet-sized body. The rocks also showed that the entire surface of the moon had likely been molten at one time. The idea that a planetary body could be covered by an ocean of magma was a major surprise to geologists, Grove says. Many puzzles remain to this day, and the analysis of the rock and soil samples goes on. “There’s still a lot of exciting stuff” being found in these samples, Grove says.

Sorting out the facts

In the spate of publicity and new books, articles, and programs about Apollo, inevitably some of the facts — some trivial, some substantive — have been scrambled along the way.
“There are some myths being advanced,” says Tylko, some of which he addresses in his “Engineering Apollo” class. “People tend to oversimplify” many aspects of the mission, he says. For example, many accounts have described the sequence of alarms that came from the guidance computer during the last four minutes of the mission, forcing mission controllers to make the daring decision to go ahead despite the unknown nature of the problem. But Don Eyles, one of the Instrumentation Lab’s programmers who had written the landing software for the AGC, says that he can’t think of a single account he’s read about that sequence of events that gets it entirely right. According to Eyles, many have claimed the problem was caused by the fact that the rendezvous radar switch had been left on, so that its data were overloading the computer and causing it to reboot. But Eyles says the actual reason was a much more complex sequence of events, including a crucial mismatch between two circuits that would only occur in rare circumstances and thus would have been hard to detect in testing, and a probable last-minute decision to put a vital switch in a position that allowed it to happen. Eyles has described these details in a memoir about the Apollo years and in a technical paper available online, but he says they are difficult to summarize simply. But he thinks the author Norman Mailer may have come closest, capturing the essence of it in his book “Of a Fire on the Moon,” where he describes the issue as caused by a “sneak circuit” and an “undetectable” error in the onboard checklist.
Some accounts have described the AGC as a very limited and primitive computer compared to today’s average smartphone, and Tylko acknowledges that it had a tiny fraction of the power of today’s smart devices — but, he says, “that doesn’t mean they were unsophisticated.” While the AGC only had about 36 kilobytes of read-only memory and 2 kilobytes of random-access memory, “it was exceptionally sophisticated and made the best use of the resources available at the time,” he says. In some ways it was even ahead of its time, Tylko says. For example, the compiler language developed by Laning along with Ramon Alonso at the Instrumentation Lab used an architecture that he says was relatively intuitive and easy to interact with. Based on a system of “verbs” (actions to be performed) and “nouns” (data to be worked on), “it could probably have made its way into the architecture of PCs,” he says. “It’s an elegant interface based on the way humans think.” Some accounts go so far as to claim that the computer failed during the descent and astronaut Neil Armstrong had to take over the controls and land manually, but in fact partial manual control was always part of the plan, and the computer remained in ultimate control throughout the mission. None of the onboard computers ever malfunctioned through the entire Apollo program, according to astronaut David Scott SM ’62, who used the computer on two Apollo missions: “We never had a failure, and I think that is a remarkable achievement.”

Behind the scenes

At the peak of the program, a total of about 1,700 people at MIT’s Instrumentation Lab were working on the Apollo program’s software and hardware, according to Draper, the Instrumentation Lab’s successor, which spun off from MIT in 1973.
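The verb-and-noun interface described earlier — a "verb" naming an action and a "noun" naming the data it acts on — can be sketched as a small dispatcher. The verb and noun codes and the state fields below are illustrative placeholders, not the AGC's real assignments.

```python
# Minimal sketch of a DSKY-style verb/noun dispatcher.
# Codes and state fields are made up for illustration.

NOUNS = {
    # noun code -> (description, function fetching that datum from state)
    36: ("mission clock", lambda state: state["met_seconds"]),
    62: ("altitude", lambda state: state["altitude_m"]),
}

VERBS = {
    # verb code -> (description, action applied to the fetched datum)
    6: ("display decimal", lambda value: print(f"DISPLAY: {value}")),
}

def execute(verb, noun, state):
    """Run VERB on the data named by NOUN, as keyed in on the keypad."""
    noun_name, fetch = NOUNS[noun]
    verb_name, action = VERBS[verb]
    print(f"V{verb:02d} N{noun:02d}: {verb_name} / {noun_name}")
    action(fetch(state))

state = {"met_seconds": 102_000, "altitude_m": 15_240}
execute(6, 62, state)  # "display decimal / altitude"
```

The appeal of the scheme is that any verb composes with any compatible noun, so a tiny keypad can express a large command vocabulary.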
A few of those, such as the near-legendary “Doc” Draper himself — Charles Stark Draper ’26, SM ’28, ScD ’38, former head of the Department of Aeronautics and Astronautics (AeroAstro) — have become widely known for their roles in the mission, but most did their work in near-anonymity, and many went on to entirely different kinds of work after the Apollo program’s end. Margaret Hamilton, who directed the Instrumentation Lab’s Software Engineering Division, was little known outside of the program itself until an iconic photo of her next to the original stacks of AGC code began making the rounds on social media in the mid 2010s. In 2016, when she was awarded the Presidential Medal of Freedom by President Barack Obama, MIT Professor Jaime Peraire, then head of AeroAstro, said of Hamilton that “She was a true software engineering pioneer, and it’s not hyperbole to say that she, and the Instrumentation Lab’s Software Engineering Division that she led, put us on the moon.” After Apollo, Hamilton went on to found a software services company, which she still leads. Many others who played major roles in that software and hardware development have also had their roles little-recognized over the years. For example, Hal Laning ’40, PhD ’47, who developed the programming language for the AGC, also devised its executive operating system, which employed what was at the time a new way of handling multiple programs at once, by assigning each one a priority level so that the most important tasks, such as controlling the lunar module’s thrusters, would always be taken care of. “Hal was the most brilliant person we ever had the chance to work with,” Instrumentation Lab engineer Dan Lickly told MIT Technology Review. And that priority-driven operating system proved crucial in allowing the Apollo 11 landing to proceed safely in spite of the 1202 alarms going off during the lunar descent. 
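The priority-driven executive described above can be sketched as follows. The job names, priority values, and seven-slot limit are illustrative, and the restart behavior only loosely mimics how the real alarm-and-restart response shed low-priority work while keeping thruster control alive.

```python
# Sketch of a priority-driven executive in the spirit of Laning's AGC design:
# the highest-priority ready job always runs first, and on overload the queue
# is rebuilt with only critical jobs. Names and numbers are illustrative.
import heapq

class Executive:
    MAX_JOBS = 7  # the real AGC had a small fixed number of job slots

    def __init__(self):
        self._queue = []  # entries: (negated priority, insertion order, name, fn)
        self._counter = 0

    def schedule(self, priority, name, fn):
        if len(self._queue) >= self.MAX_JOBS:
            self.restart()  # overload: shed non-critical work and carry on
        heapq.heappush(self._queue, (-priority, self._counter, name, fn))
        self._counter += 1

    def restart(self):
        # Keep only high-priority jobs (priority >= 20 here), loosely
        # mimicking the alarm response: display updates are dropped,
        # thruster control survives.
        self._queue = [job for job in self._queue if -job[0] >= 20]
        heapq.heapify(self._queue)

    def run(self):
        """Run all queued jobs, highest priority first; return the order."""
        order = []
        while self._queue:
            _, _, name, fn = heapq.heappop(self._queue)
            fn()
            order.append(name)
        return order

ex = Executive()
ex.schedule(30, "thruster control", lambda: None)
ex.schedule(10, "display update", lambda: None)
ex.schedule(20, "guidance equations", lambda: None)
print(ex.run())  # ['thruster control', 'guidance equations', 'display update']
```

The key design idea is that overload degrades gracefully: rather than crashing, the system drops the least important work and keeps flying.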
While the majority of the team working on the project was male, software engineer Dana Densmore recalls that compared to the heavily male-dominated workforce at NASA at the time, the MIT lab was relatively welcoming to women. Densmore, who was a control supervisor for the lunar landing software, told The Wall Street Journal that “NASA had a few women, and they kept them hidden. At the lab it was very different,” and there were opportunities for women there to take on significant roles in the project. Hamilton recalls the atmosphere at the Instrumentation Lab in those days as one of real dedication and meritocracy. As she told MIT News in 2009, “Coming up with solutions and new ideas was an adventure. Dedication and commitment were a given. Mutual respect was across the board. Because software was a mystery, a black box, upper management gave us total freedom and trust. We had to find a way and we did. Looking back, we were the luckiest people in the world; there was no choice but to be pioneers.”
Fifty years ago this week, humanity made its first expedition to another world, when Apollo 11 touched down on the moon and two astronauts walked on its surface. That moment changed the world in ways that still reverberate today. MIT’s deep and varied connections to that epochal event — many of which have been described on MIT News — began years before the actual landing, when the MIT Instrumentation Laboratory (now Draper) signed the very first contract to be awarded for the Apollo program after its announcement by President John F. Kennedy in 1961. The Institute’s involvement continued throughout the program — and is still ongoing today. MIT’s role in creating the navigation and guidance system that got the mission to the moon and back has been widely recognized in books, movies, and television series. But many other aspects of the Institute’s involvement in the Apollo program and its legacy, including advances in mechanical and computational engineering, simulation technology, biomedical studies, and the geophysics of planet formation, have remained less celebrated. Amid the growing chorus of recollections in various media that have been appearing around this 50th anniversary, here is a small collection of bits and pieces about some of the unsung heroes and lesser-known facts from the Apollo program and MIT’s central role in it. A new age in electronics The computer system and its software that controlled the spacecraft — called the Apollo Guidance Computer and designed by the MIT Instrumentation Lab team under the leadership of Eldon Hall — were remarkable achievements that helped push technology forward in many ways. The AGC’s programs were written in one of the first-ever compiler languages, called MAC, which was developed by Instrumentation Lab engineer Hal Laning. 
The computer itself, the 1-cubic-foot Apollo Guidance Computer, was the first significant use of silicon integrated circuit chips and greatly accelerated the development of the microchip technology that has gone on to change virtually every consumer product. In an age when most computers took up entire climate-controlled rooms, the compact AGC was uniquely small and lightweight. But most of its “software” was actually hard-wired: The programs were woven, with tiny donut-shaped metal “cores” strung like beads along a set of wires, with a given wire passing outside the donut to represent a zero, or through the hole for a 1. These so-called rope memories were made in the Boston suburbs at Raytheon, mostly by women who had been hired because they had experience in the weaving industry. Once made, there was no way to change individual bits within the rope, so any change to the software required weaving a whole new rope, and last-minute changes were impossible. As David Mindell, the Frances and David Dibner Professor of the History of Engineering and Manufacturing, points out in his book “Digital Apollo,” that system represented the first time a computer of any kind had been used to control, in real-time, many functions of a vehicle carrying human beings — a trend that continues to accelerate as the world moves toward self-driving vehicles. Right after the Apollo successes, the AGC was directly adapted to an F-8 fighter jet, to create the first-ever fly-by-wire system for aircraft, where the plane’s control surfaces are moved via a computer rather than direct cables and hydraulic systems. This approach is now widespread in the aerospace industry, says John Tylko, who teaches MIT’s class 16.895J (Engineering Apollo: The Moon Project as a Complex System), which is taught every other year. As sophisticated as the computer was for its time, computer users today would barely recognize it as such. 
Its keyboard and display screen looked more like those on a microwave oven than a computer: a simple numeric keypad and a few lines of five-digit luminous displays. Even the big mainframe computer used to test the code as it was being developed had no keyboard or monitor that the programmers ever saw. Programmers wrote their code by hand, then typed it onto punch cards — one card per line — and handed the deck of cards to a computer operator. The next day, the cards would be returned with a printout of the program’s output. And in this time long before email, communications among the team often relied on handwritten paper notes. Priceless rocks MIT’s involvement in the geophysical side of the Apollo program also extends back to the early planning stages — and continues today. For example, Professor Nafi Toksöz, an expert in seismology, helped to develop a seismic monitoring station that the astronauts placed on the moon, where it helped lead to a greater understanding of the moon’s structure and formation. “It was the hardest work I have ever done, but definitely the most exciting,” he has said. Toksöz says that the data from the Apollo seismometers “changed our understanding of the moon completely.” The seismic waves, which on Earth continue for a few minutes, lasted for two hours, which turned out to be the result of the moon’s extreme lack of water. “That was something we never expected, and had never seen,” he recalls. The first seismometer was placed on the moon’s surface very shortly after the astronauts landed, and seismologists including Toksöz started seeing the data right away — including every footstep the astronauts took on the surface. Even when the astronauts returned to the lander to sleep before the morning takeoff, the team could see that Buzz Aldrin ScD ’63 and Neil Armstrong were having a sleepless night, with every toss and turn dutifully recorded on the seismic traces. 
MIT Professor Gene Simmons was among the first group of scientists to gain access to the lunar samples as soon as NASA released them from quarantine, and he and others in what is now the Department of Earth, Planetary and Atmospheric Sciences (EAPS) have continued to work on these samples ever since. As part of a conference on campus, he exhibited some samples of lunar rock and soil in their first close-up display to the public, where some people may even have had a chance to touch the samples. Others in EAPS have also been studying those Apollo samples almost from the beginning. Timothy Grove, the Robert R. Shrock Professor of Earth and Planetary Sciences, started studying the Apollo samples in 1971 as a graduate student at Harvard University, and has been doing research on them ever since. Grove says that these samples have led to major new understandings of planetary formation processes that have helped us understand the Earth and other planets better as well. Among other findings, the rocks showed that ratios of the isotopes of oxygen and other elements in the moon rocks were identical to those in terrestrial rocks but completely different than those of any meteorites, proving that the Earth and the moon had a common origin and leading to the hypothesis that the moon was created through a giant impact from a planet-sized body. The rocks also showed that the entire surface of the moon had likely been molten at one time. The idea that a planetary body could be covered by an ocean of magma was a major surprise to geologists, Grove says. Many puzzles remain to this day, and the analysis of the rock and soil samples goes on. “There’s still a lot of exciting stuff” being found in these samples, Grove says. Sorting out the facts In the spate of publicity and new books, articles, and programs about Apollo, inevitably some of the facts — some trivial, some substantive — have been scrambled along the way. 
“There are some myths being advanced,” says Tylko, some of which he addresses in his “Engineering Apollo” class. “People tend to oversimplify” many aspects of the mission, he says. For example, many accounts have described the sequence of alarms that came from the guidance computer during the last four minutes of the mission, forcing mission controllers to make the daring decision to go ahead despite the unknown nature of the problem. But Don Eyles, one of the Instrumentation Lab’s programmers who had written the landing software for the AGC, says that he can’t think of a single account he’s read about that sequence of events that gets it entirely right. According to Eyles, many have claimed the problem was caused by the fact that the rendezvous radar switch had been left on, so that its data were overloading the computer and causing it to reboot. But Eyles says the actual reason was a much more complex sequence of events, including a crucial mismatch between two circuits that would only occur in rare circumstances and thus would have been hard to detect in testing, and a probably last-minute decion to put a vital switch in a position that allowed it to happen. Eyles has described these details in a memoir about the Apollo years and in a technical paper available online, but he says they are difficult to summarize simply. But he thinks the author Norman Mailer may have come closest, capturing the essence of it in his book “Of a Fire on the Moon,” where he describes the issue as caused by a “sneak circuit” and an “undetectable” error in the onboard checklist. 
Some accounts have described the AGC as a very limited and primitive computer compared to today’s average smartphone, and Tylko acknowledges that it had a tiny fraction of the power of today’s smart devices — but, he says, “that doesn’t mean they were unsophisticated.” While the AGC only had about 36 kilobytes of read-only memory and 2 kilobytes of random-access memory, “it was exceptionally sophisticated and made the best use of the resources available at the time,” he says. In some ways it was even ahead of its time, Tylko says. For example, the compiler language developed by Laning along with Ramon Alonso at the Instrumentation Lab used an architecture that he says was relatively intuitive and easy to interact with. Based on a system of “verbs” (actions to be performed) and “nouns” (data to be worked on), “it could probably have made its way into the architecture of PCs,” he says. “It’s an elegant interface based on the way humans think.” Some accounts go so far as to claim that the computer failed during the descent and astronaut Neil Armstrong had to take over the controls and land manually, but in fact partial manual control was always part of the plan, and the computer remained in ultimate control throughout the mission. None of the onboard computers ever malfunctioned through the entire Apollo program, according to astronaut David Scott SM ’62, who used the computer on two Apollo missions: “We never had a failure, and I think that is a remarkable achievement.” Behind the scenes At the peak of the program, a total of about 1,700 people at MIT’s Instrumentation Lab were working on the Apollo program’s software and hardware, according to Draper, the Instrumentation Lab’s successor, which spun off from MIT in 1973. 
A few of those, such as the near-legendary “Doc” Draper himself — Charles Stark Draper ’26, SM ’28, ScD ’38, former head of the Department of Aeronautics and Astronautics (AeroAstro) — have become widely known for their roles in the mission, but most did their work in near-anonymity, and many went on to entirely different kinds of work after the Apollo program’s end. Margaret Hamilton, who directed the Instrumentation Lab’s Software Engineering Division, was little known outside of the program itself until an iconic photo of her next to the original stacks of AGC code began making the rounds on social media in the mid 2010s. In 2016, when she was awarded the Presidential Medal of Freedom by President Barack Obama, MIT Professor Jaime Peraire, then head of AeroAstro, said of Hamilton that “She was a true software engineering pioneer, and it’s not hyperbole to say that she, and the Instrumentation Lab’s Software Engineering Division that she led, put us on the moon.” After Apollo, Hamilton went on to found a software services company, which she still leads. Many others who played major roles in that software and hardware development have also had their roles little-recognized over the years. For example, Hal Laning ’40, PhD ’47, who developed the programming language for the AGC, also devised its executive operating system, which employed what was at the time a new way of handling multiple programs at once, by assigning each one a priority level so that the most important tasks, such as controlling the lunar module’s thrusters, would always be taken care of. “Hal was the most brilliant person we ever had the chance to work with,” Instrumentation Lab engineer Dan Lickly told MIT Technology Review. And that priority-driven operating system proved crucial in allowing the Apollo 11 landing to proceed safely in spite of the 1202 alarms going off during the lunar descent. 
While the majority of the team working on the project was male, software engineer Dana Densmore recalls that compared to the heavily male-dominated workforce at NASA at the time, the MIT lab was relatively welcoming to women. Densmore, who was a control supervisor for the lunar landing software, told The Wall Street Journal that “NASA had a few women, and they kept them hidden. At the lab it was very different,” and there were opportunities for women there to take on significant roles in the project.

Hamilton recalls the atmosphere at the Instrumentation Lab in those days as one of real dedication and meritocracy. As she told MIT News in 2009, “Coming up with solutions and new ideas was an adventure. Dedication and commitment were a given. Mutual respect was across the board. Because software was a mystery, a black box, upper management gave us total freedom and trust. We had to find a way and we did. Looking back, we were the luckiest people in the world; there was no choice but to be pioneers.”
Thursday, July 18, 2019
MIT CSAIL makes AI that helps drones hover like a helicopter and fly like a plane
Drones are versatile machines, which is why they’ve been used to ferry food to golf courses, perform reconnaissance for firefighters and first responders, and put on light shows at the Olympics opening ceremonies. But their propeller-forward form factors aren’t exactly conducive to power efficiency, which limits their flight time.
Fortunately, researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), Dartmouth, and the University of Washington are investigating a new drone design approach that combines the best of quadcopters and fixed-wing airplanes. Their work, which they detail in a newly published paper that’ll be presented later this month at the Siggraph conference in Los Angeles, resulted in a novel AI system that allows users to dream up drones of different sizes and shapes that can switch between hovering and gliding with a single flight controller.
“Our method allows non-experts to design a model, wait a few hours to compute its controller, and walk away with a customized, ready-to-fly drone,” said MIT CSAIL grad student and lead author Jie Xu. “The hope is that a platform like this could make these versatile ‘hybrid drones’ much more accessible to everyone.”
As Xu and colleagues explain in the paper, traditional hybrid fixed-wing drones that can take off and land vertically are difficult to control, because they often require engineers to develop one system for hovering (“copter flight”) and another for gliding horizontally (“plane flight”), plus controllers for transitioning between the two modes.
AI can lend a hand here — researchers are increasingly turning to machine learning to create more adaptable control systems. But most methods lean heavily on simulation instead of real hardware, resulting in discrepancies.
To address this, the researchers’ system leverages reinforcement learning — an AI training technique that employs rewards to drive software policies toward goals — to train the model to track potential gaps between simulation and real-world scenarios, enabling the controller to adapt its output to compensate. It doesn’t need to store any modes, and it can switch from hovering to gliding and back again simply by updating the drone’s target velocity.
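As a rough illustration of the reward idea (the paper's actual reward function and input representation are more involved; the function below is a hypothetical stand-in): a controller can be rewarded for keeping the drone's actual velocity close to the commanded target, whether that target is zero (hovering) or a forward speed (gliding).

```python
import math

def velocity_tracking_reward(actual, target, k=1.0):
    """Toy reward: approaches 1 as the drone's actual velocity vector
    nears the commanded target velocity, and decays with the error.
    Hypothetical form, not the paper's actual reward function."""
    err = math.sqrt(sum((a - t) ** 2 for a, t in zip(actual, target)))
    return math.exp(-k * err)

# Hovering: target velocity is zero; gliding: a forward target velocity.
print(velocity_tracking_reward((0.0, 0.0, 0.1), (0.0, 0.0, 0.0)))  # near 1
print(velocity_tracking_reward((5.0, 0.0, 0.0), (8.0, 0.0, 0.0)))  # lower
```

Because the objective is expressed purely in terms of a target velocity, switching between hover and glide is just a change of target rather than a change of controller, which matches the mode-free behavior described above.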
The team integrated their AI system into OnShape, a computer-aided design (CAD) platform, to allow users to select and match drone parts from a data set. Then, in a series of tests, they slotted the resulting design into a training simulator that tested its flight performance.
“With a new input representation and a novel reward function, we were able to narrow the reality gap that is common in reinforcement learning approaches,” wrote Xu and colleagues in the paper. “We expect that this proposed solution will find application in many other domains.”
The team leaves fine-tuning the drone’s design — which they note currently doesn’t fully take into account the complex aerodynamic effects between the propeller’s airflow and the wings — to future work, to improve manoeuvrability. They also hope to overcome the copter’s inability to perform sharp turns.
Drone Design Inspired by Pill Bugs
Drone airframe design is inspired by pill bugs
Quadcopters are capable of extraordinary acrobatic feats, but even the most skilled pilots and algorithms can’t avoid every obstacle. That’s why a pair of scientists in the department of mechanical engineering and biomedical engineering at the City University of Hong Kong developed an airframe inspired by origami (and insects), which they describe in a newly published preprint paper (“A Quadrotor with an Origami-Inspired Protective Mechanism”). When a quadcopter outfitted with this frame collides with another object mid-flight, the design mechanically unfurls, reconfiguring its structure to protect sensitive components like cameras, computers, and thermal sensors.
“As the complexity of [drone] tasks grows, it inevitably escalates the chance of failures. Despite attempts to circumvent an accident, it is still likely unforeseen circumstances would lead to an undesired collision that destabilizes the flight,” wrote the coauthors. “The impact from a subsequent fall could lead to destructive damage on the robot.”
The 14.9-gram airframe — which consists of a ground tile, polyimide foldable arms sandwiched between fibreglass, a fold coupler, and fold triggers — was designed to remain rigid in flight in order to reduce its weight. The vertical thrust from the drone’s propellers generates an upward force, whose torque keeps the airframe’s arms from folding prematurely. On collision, the impact rotates the fold trigger to push part of the arms, overcoming the torque above the fold axis and adjusting the joints bit by bit until the entire structure folds in on itself.
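The fold condition is essentially a torque balance about the fold axis, which can be sketched numerically (all forces and lengths below are illustrative guesses, not figures from the paper):

```python
def arm_folds(thrust_n, arm_len_m, impact_force_n, trigger_len_m):
    """Toy torque balance for the origami arm: the arm stays rigid while
    the thrust torque about the fold axis exceeds the torque transmitted
    by the fold trigger on impact. All values are illustrative."""
    thrust_torque = thrust_n * arm_len_m          # keeps the arm open
    impact_torque = impact_force_n * trigger_len_m  # tries to fold it
    return impact_torque > thrust_torque

# In normal flight the thrust torque dominates and the arm stays rigid:
print(arm_folds(thrust_n=0.5, arm_len_m=0.06,
                impact_force_n=0.2, trigger_len_m=0.05))  # False
# A collision delivers a much larger force, so the arm folds:
print(arm_folds(thrust_n=0.5, arm_len_m=0.06,
                impact_force_n=2.0, trigger_len_m=0.05))  # True
```

This captures why the frame needs no actuators: the same propeller thrust that flies the drone is what holds the structure open, and only a collision-scale force can overcome it.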
“The airframe … shields the body part [of the drone] from the fall. The developed prototype makes use of intelligent mechanical design to overcome the contradicting requirements on the structural stiffness,” wrote the team. “By aligning a pair of planar structures in a perpendicular direction, we obtain the desired rigidity. The interlocking mechanism, or joint limits, let the robot use the thrust force to remain in the flight state.”
To validate their design, the researchers crashed a quadcopter equipped with their airframe mid-flight at a speed of roughly 1.5 miles per hour. The frame folded in less than 0.2 seconds before falling to the ground, which the researchers say implies it can be adapted to smaller drones with lower payload capacities.
“The proposed mechanism does not protect the robot from the top or bottom collisions,” wrote the coauthors. “[However, it] has been experimentally verified in both static measurements and actual flights.”
Wednesday, July 17, 2019
Insect-inspired drones invented using in-flight adjustment technology
Researchers have developed a drone with 'movable-arm technology', inspired by the wings and flight patterns of insects. The new technology enables drones to function in windy conditions while also making them more energy efficient. The details of the technology are published in the journal Dynamic Systems, Measurement, and Control.

Drones are used to assist in any number of functions, such as disaster relief, surveillance and spying, mineral exploration, geological surveying, agriculture, and archaeology. Several other industries also use drones to achieve greater accuracy.

"Our drone design was inspired by the wings and flight patterns of insects. We created a drone design with automatic folding arms that can make in-flight adjustments," said Xiumin Diao. Diao worked with the Purdue Office of Technology Commercialization to patent his device. They are looking for additional researchers and partners to license the technology.

Diao said the design provides drones with improved stability in windy conditions because the folding arms can move and change the center of gravity of the device during flight. He said the design also makes drones more energy efficient because the movable-arm technology allows for the use of the full range of rotor thrust.

He added: "The drones on the market now have fixed arms, and that greatly reduces their maximum payload capacity when the payload is offset from their center of gravity. Our design allows a larger payload because the movable arms can liberate part of the rotor thrust to fight the weight on the overall device."

The foldable arms can also help in search-and-rescue operations, because drones can more effectively navigate the air conditions in ravaged areas and morph, by moving their arms, to pass through narrow spaces.

A record of more than $700 million was invested in the drone industry in 2018 as military, government, and consumer markets saw increased demand. (ANI)
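The center-of-gravity argument is easy to make concrete: the CoG is the mass-weighted average of component positions, so sliding an arm assembly can cancel out an offset payload (the masses and positions below are hypothetical, for illustration only):

```python
def center_of_gravity(masses_positions):
    """CoG along one axis: mass-weighted average of positions (m, x)."""
    total_mass = sum(m for m, _ in masses_positions)
    return sum(m * x for m, x in masses_positions) / total_mass

# Body (1.0 kg at x=0) with an offset 0.3 kg payload at x=0.10 m,
# and a 0.2 kg arm assembly initially centered:
fixed = [(1.0, 0.0), (0.3, 0.10), (0.2, 0.0)]
print(round(center_of_gravity(fixed), 3))   # CoG pushed off-center

# Sliding the 0.2 kg arm assembly to x = -0.15 m re-centers the CoG:
moved = [(1.0, 0.0), (0.3, 0.10), (0.2, -0.15)]
print(round(center_of_gravity(moved), 3))   # back near zero
```

With the CoG back under the rotors' center of thrust, no thrust has to be wasted producing a corrective moment, which is the efficiency gain Diao describes.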
Swarms of Drones, Piloted by Artificial Intelligence, May Soon Patrol Europe's Borders
Imagine you’re hiking through the woods near a border. Suddenly, you hear a mechanical buzzing, like a gigantic bee. Two quadcopters have spotted you and swoop in for a closer look. Antennae on both drones and on a nearby autonomous ground vehicle pick up the radio frequencies coming from the cell phone in your pocket. They send the signals to a central server, which triangulates your exact location and feeds it back to the drones. The robots close in.

Cameras and other sensors on the machines recognize you as human and try to ascertain your intentions. Are you a threat? Are you illegally crossing a border? Do you have a gun? Are you engaging in acts of terrorism or organized crime? The machines send video feeds to their human operator, a border guard in an office miles away, who checks the videos and decides that you are not a risk. The border guard pushes a button, and the robots disengage and continue on their patrol.

This is not science fiction. The European Union is financing a project to develop drones piloted by artificial intelligence and designed to autonomously patrol Europe’s borders. The drones will operate in swarms, coordinating and corroborating information among fleets of quadcopters, small fixed-wing airplanes, ground vehicles, submarines, and boats. Developers of the project, known as Roborder, say the robots will be able to identify humans and independently decide whether they represent a threat. If they determine that you may have committed a crime, they will notify border police.

President Donald Trump has used the specter of criminals crossing the southern border to stir nationalist political sentiment and energize his base. In Europe, two years after the height of the migration crisis that brought more than a million people to the continent, mostly from the Middle East and Africa, immigration remains a hot-button issue, even as the number of new arrivals has dropped.
Political parties across the European Union are winning elections on anti-immigrant platforms and enacting increasingly restrictive border policies. Tech ethicists and privacy advocates worry that Roborder and projects like it outsource too much law enforcement work to nonhuman actors and could easily be weaponized against people in border areas.

“The development of these systems is a dark step into morally dangerous territory,” said Noel Sharkey, emeritus professor of robotics and artificial intelligence at Sheffield University in the U.K. and one of the founders of the International Committee for Robot Arms Control, a nonprofit that advocates against the military use of robotics. Sharkey lists examples of weaponized drones currently on the market: flying robots equipped with Tasers, pepper spray, rubber bullets, and other weapons. He warns of the implications of combining that technology with AI-based decision-making and using it in politically charged border zones. “It’s only a matter of time before a drone will be able to take action to stop people,” Sharkey told The Intercept.

Roborder’s developers also may be violating the terms of their funding, according to documents about the project obtained via European Union transparency regulations. The initiative is mostly financed by an €8 million EU research and innovation grant designed for projects that are exclusively nonmilitary, but Roborder’s developers acknowledge that parts of their proposed system involve military technology or could easily be converted for military use.

Much of the development of Roborder is classified, but The Intercept obtained internal reports related to ethical considerations and concerns about the program. That documentation was improperly redacted and inadvertently released in full.

In one of the reports, Roborder’s developers sought to address ethical criteria that are tied to their EU funding.
Developers considered whether their work could be modified or enhanced to harm humans and what could happen if the technology or knowledge developed in the project “ended up in the wrong hands.” These ethical issues are raised, wrote the developers, when “research makes use of classified information, materials or techniques; dangerous or restricted materials[;] and if specific results of the research could present a danger to participants or to society as a whole.”

Roborder’s developers argued that these ethical concerns did not apply to their work, stating that their only goal was to develop and test the new technology, and that it would not be sold or transferred outside of the European Union during the life cycle of the project. But in interviews with The Intercept, project developers acknowledged that their technology could be repurposed and sold, even outside of Europe, after the European project cycle has finished, which is expected to happen next year.

Beyond the Roborder project, the ethics reports filed with the European Commission suggest a larger question: When it comes to new technology with the potential to be used against vulnerable people in places with few human rights protections, who decides what we should and should not develop?

Roborder won its funding grant in 2017 and has set out to develop a marketable prototype — “a swarm of robotics to support border monitoring” — by mid-2020. Its developers hope to build and equip a collection of air, sea, and land drones that can be combined and sent out on border patrol missions, scanning for “threats” autonomously based on information provided by human operators, said Stefanos Vrochidis, Roborder’s project manager.

The drones will employ optical, infrared, and thermal cameras; radar; and radio frequency sensors to determine threats along the border.
Cell phone frequencies will be used to triangulate the location of people suspected of criminal activity, and cameras will identify humans, guns, vehicles, and other objects. “The main objective is to have as many sensors in the field as possible to assist patrol personnel,” said Kostas Ioannidis, Roborder’s technical manager.

The end product will be tested by border police in several European countries, including Portugal, Hungary, and Greece, but the project has also generated considerable interest in the private sector. “Eventually, we have companies that would certainly like to exploit this commercially,” Vrochidis told The Intercept. “They might exploit the whole outcome or part of the outcome, depending. They can exploit this in Europe but also outside of Europe.”

In their grant agreement, Roborder’s developers told the European Commission that they did not foresee any exports of their technology outside of the EU. In interviews, however, developers told The Intercept that the companies involved would be open to selling their technology beyond Europe. According to a spokesperson from the grant program funding Roborder, Horizon 2020, there is nothing Roborder’s EU backers can do to control where or how the technology they bankrolled is eventually used.

The documents obtained by The Intercept show Roborder responding to some ethical concerns about the project but not about the technology itself. In their grant application, Roborder’s developers conceded that their research “may be exploited by criminal organizations and individual criminals when planning to perpetrate acts of serious crime or terrorism” but wrote that the consortium of public and private companies developing the technology would work to keep their data safe.
That group includes drone manufacturing companies, several national police departments, two national guards, a defense ministry, a port authority, a cyberdefense company, a company that specializes in developing equipment for electronic warfare, and another that provides “predictive analytics” for European police forces.

As for the technology’s possible modification for future clients, the answers were less clear. The developers would not comment on the potential for military sales after the project cycle ends. Developers added that their work is delayed because one of Roborder’s key consortium partners, Portuguese drone manufacturer Tekever, has left the project. Spokespeople for Roborder, Tekever, and Horizon 2020 would not explain the rationale for Tekever’s departure.

Horizon 2020 supports many security-oriented projects but maintains that “only research that has an exclusive focus on civil applications is eligible for funding.” The grant program previously funded a project that uses artificial intelligence to detect whether travelers are lying as they pass through automated border crossings.

Yet the documents obtained by The Intercept highlight inconsistent statements about Roborder’s potential military uses. According to one report, the project has no potential for “dual use,” or both military and civil deployment. Ten pages later, asked whether their work involved items that could be considered dual-use by European standards, Roborder’s developers wrote that it did.

Roborder hired a consultant, Reinhard Hutter, as an external ethics adviser to the project, according to another document from the Horizon 2020 program that was inadvertently released in full by the European Commission.
In his report, Hutter wrote that “Roborder involves technology with military potential,” and that “the results of this project have the potential to be used back in the defense sector.” The technology involved, Hutter wrote, had “some dual-use potential but no dual-use activity on the project.” In other words, it could be used for military purposes but wouldn’t be used that way within the scope of Roborder. Hutter declined to speak to The Intercept.

This blurring of lines between military and civilian development by the EU funding program might be deliberate. In a 2014 guidebook on European funding for dual-use projects, the European Commission notes that the regulation establishing Horizon 2020 mandates that all funded projects have “an exclusive focus” on civilian development, but the document also says that “substantial parts of the research funded is of relevance for defense and can lead to technologies that will be used by defense actors.”

The authors of a 2016 study commissioned by the security and defense sub-committee of the European Parliament went further, arguing that Horizon 2020’s clause on exclusive civilian development should be reinterpreted to include defense research. In order to compete with U.S. technological development, the study’s authors advocated for the creation of a European equivalent of DARPA, the U.S. Defense Advanced Research Projects Agency, whose work contributed to the development of the internet, GPS, and other technologies. In a 2017 speech, French President Emmanuel Macron echoed that, calling for “a European agency for disruptive innovation in the same vein as” DARPA.

The 2016 report does not represent the views of the European Parliament or its security and defense sub-committee, and was not used to develop any specific legislation, a spokesperson for the European Parliament said.
A spokesperson for Horizon 2020 rejected the idea that there was any ambiguity in what kind of projects the European Union would fund. “The European Commission does not fund research intended for military use,” she said.

The drones Roborder plans to deploy are common technology. What would be groundbreaking for the companies involved is a functional system that allows swarms of drones to operate autonomously and in unison to reliably identify targets. AI threat detection is often inaccurate, according to robotics researchers, so any system that could correctly and consistently identify people, cars, and weapons, among other things, would be a substantial and lucrative advancement.

Drone cameras will not use facial recognition technology within the scope of the project, explained Ioannidis, Roborder’s technical manager, nor will they be able to determine any human characteristics, such as height, weight, age, skin color, or perceived gender. “The system will only identify that ‘this object is human,’” he added, “nothing more.”

Still, Ioannidis admitted that adding facial recognition to the Roborder system later on would be “technologically possible.” What about weaponizing the Roborder system to act against humans? “No,” he said, firmly. “The robots don’t have any authority to take any action against anyone. It’s just monitoring and giving alerts.”

But Sharkey, the U.K. robotics professor, argues that there is a thin line between using robots to monitor a border and using them to enforce one. Weaponizing a drone is relatively easy, he said, citing the 2015 case of the Connecticut teenager who equipped a drone with a handgun and a flamethrower. Sharkey worries about the implications of developing autonomous systems to patrol borders, including how the system could be used by a country coping with a large influx of people.

“The question is, where is this going?” Sharkey asked.
“The current project does not propose to weaponize them, but it would just be too tempting for countries if a tipping point were to happen with mass migration.”

Hannah Couchman, a researcher at the U.K. human rights organization Liberty, agrees. “There are deep human rights and civil liberty concerns with this technology,” she told The Intercept. “Once this tech is developed, it’s seen as a solution, as a response to austerity, and a way to do a job efficiently with a lower cost, all rolled out without proper consultation and legislative scrutiny.”

“It’s not just about mitigating the human rights risk,” Couchman said. “It’s about whether we should use the technology in the first place.”

Source: https://theintercept.com/2019/05/11/drones-artificial-intelligence-europe-roborder/
Tuesday, July 9, 2019
Using Immersive 360 Degree Images to Enhance Active Involvement
A 360-degree panoramic view of RCI VidyaVihar
https://momento360.com/e/u/8c8cff88815a4c8f9ecb43a22ca25049?utm_campaign=embed&utm_source=other&utm_medium=other&heading=-135.5902877635652&pitch=-4.215925364052842&field-of-view=100
Sunday, July 7, 2019
Iran's downing of a U.S. drone sent crude oil prices soaring
Crude Climbs by Most This Year as Iran Shoots Down U.S. Drone
Oil jumped the most this year as attacks by Iran and its proxies in the sky and on sea and land prompted U.S. President Donald Trump to warn the Islamic Republic it made “a very big mistake.”
Futures climbed 5.4% in New York on Thursday after Iran shot down an American drone just a week after two tankers were targeted in the region. Meanwhile, Iran-backed Houthi rebels in Yemen said they hit a Saudi Arabian power plant with a cruise missile, at least the third such attack on the kingdom’s infrastructure in a week.
Trump appeared to downplay the Iranian drone downing, saying a “loose and stupid” individual -- rather than Iran’s top leaders -- may have been culpable.
Crude was also boosted by an equity rally after the U.S. Federal Reserve signalled it’s ready to lower interest rates for the first time since 2008.
After slipping into a bear market earlier this month, U.S. oil futures have surged more than 10% since the middle of last week, as America and Saudi Arabia blamed Iran for a spate of attacks while the Trump administration tightened sanctions on the OPEC member. Word that Trump and Chinese President Xi Jinping are set to resume trade talks at the G-20 summit in Japan has also brightened sentiment about global growth.
“The Iran conflict isn’t going away any time soon,” said Michael Hiley, head of over-the-counter energy trading at LPS Futures in New York. “If you combine that with Trump and Xi making nice -- they are at least saying the right things -- then that’s certainly going to prop up” the oil market.
West Texas Intermediate for July delivery, which expires Thursday, rose $2.89 to settle at $56.65 a barrel on the New York Mercantile Exchange, the biggest gain since Dec. 26. The more-active August contract advanced 5.7% to close at $57.07 a barrel.
Brent for August climbed $2.63 to end the session at $64.45 a barrel on London’s ICE Futures Europe Exchange. The global benchmark crude traded at a $7.38 premium to WTI for the same month.
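The quoted figures are internally consistent, which a quick computation confirms:

```python
# WTI July rose $2.89 to settle at $56.65, which implies the roughly
# 5.4% daily gain reported above:
wti_july_settle, wti_july_gain = 56.65, 2.89
pct_gain = wti_july_gain / (wti_july_settle - wti_july_gain) * 100
print(round(pct_gain, 1))  # 5.4

# Brent August at $64.45 versus WTI August at $57.07 gives the
# quoted $7.38 premium:
brent_aug, wti_aug = 64.45, 57.07
print(round(brent_aug - wti_aug, 2))  # 7.38
```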
Iran said the craft it shot down was spying and stated it would defend its airspace and maritime boundaries “with all our might.” Iran and the U.S. continue to dispute whether the U.S. Navy drone was over international or Iranian waters when it was shot down near the entrance to the Persian Gulf.
Speculative traders who’d turned against crude recently are ready to pile back in, said LPS’s Hiley. “Certainly there are some buying bullets out there; there’s some dry powder,” he said. “You can’t be short in this market.”
Source: https://www.bloomberg.com/news/articles/2019-06-19/oil-finds-support-on-tighter-u-s-supplies-robust-gas-demand