In the 1951 movie No Highway in the Sky, Jimmy Stewart plays a scientist who sabotages an unsafe plane rather than let it take off and put passengers at risk. He knows about the plane’s fatal flaw because he had tested it for hundreds of hours in his laboratory.
Were that movie to be remade today, Stewart’s role would likely be replaced by a computer. That’s because rapid software simulations have replaced the kind of physical testing the Stewart character conducts. In developing its Overture airliner—a supersonic transport that promises to take passengers from New York to Paris in only 3.5 hours—Boom Supersonic used supercomputers to run 53 million compute hours on Amazon Web Services as a substitute for physical prototyping and wind-tunnel testing. The company plans to add more than 100 million more compute hours to that total before rolling out its finished product.
This type of high performance data and computing (HPDC) not only provides more detailed and more accurate information than physical testing but does so more efficiently. The Concorde, the original supersonic passenger jet, required 25 years of research and development and countless hours of real-world testing before making its first commercial flight in 1976. Overture should be airborne after a development period roughly half as long.
The same principles used to develop Overture apply to spaceflight. Today, NASA looks to supercomputer frameworks to help it resolve its next challenges. “Our current spaceflight computers were developed almost 30 years ago,” says Wesley Powell, NASA’s principal technologist for advanced avionics. “Future NASA missions demand significantly increased onboard computing capabilities and reliability. The new computing processor will provide the advances required in performance, fault tolerance, and flexibility to meet these future mission needs.”
High-speed computing systems provide solutions to a broad range of challenges in the air, beneath the sea, and on the ground. What is high performance data and computing (HPDC)? This article addresses that question and also explores how these systems work, what they can accomplish, their benefits and drawbacks, and the careers they support.
Just how super is a supercomputer? The term is misleading. High performance data and computing (HPDC or HPC) involves multiple computers—each called a node—pooling their resources, not a single mega-machine. These nodes are linked through a communications network and work together to solve large, complex problems in such fields as healthcare, engineering, science, and business. The computers may be in close proximity on premises or widely dispersed and accessed via the cloud.
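To make the node-pooling idea concrete, here is a toy sketch (our illustration, not any particular cluster's code) using Python's standard multiprocessing module. Local worker processes stand in for networked nodes: each takes a slice of one big problem, and the partial results are combined at the end, the same divide-compute-combine pattern real clusters run across machines with frameworks like MPI.

```python
# Toy illustration of the HPDC idea: split one large job across many
# workers and combine the results. Local processes stand in for the
# networked nodes of a real cluster.
from multiprocessing import Pool

def partial_sum(bounds):
    """Each 'node' sums the squares in its own slice of the range."""
    start, end = bounds
    return sum(i * i for i in range(start, end))

if __name__ == "__main__":
    n, workers = 10_000_000, 8
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    with Pool(workers) as pool:
        total = sum(pool.map(partial_sum, chunks))  # combine partial results
    print(f"Sum of squares below {n:,}: {total:,}")
```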
The numbers and terms associated with HPDC are so astronomical they sound made up. The Yale Center for Research Computing, which opened in 2015, includes a cluster of computers called Ruddle used in genetic research. “Ruddle has roughly the same computational power as you would get by stringing together 2,000 or so high-end laptop computers,” according to Yale Medicine Magazine. “In addition, it possesses three petabytes (three quadrillion bytes) of digital storage. That’s a lot. If you were to fill Ruddle’s storage devices with MP3-encoded songs, you could play music for 6,000 years.” Talk about big data.
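That playback figure is easy to sanity-check. Assuming a typical 128 kbps MP3 encoding (the article doesn't specify a bitrate), the arithmetic works out to roughly six millennia:

```python
# Back-of-the-envelope check of the Yale figure, assuming 128 kbps MP3 audio.
storage_bytes = 3e15                        # three petabytes
mp3_bytes_per_second = 128_000 / 8          # 128 kilobits/s -> 16,000 bytes/s
seconds_of_music = storage_bytes / mp3_bytes_per_second
years = seconds_of_music / (365.25 * 24 * 3600)
print(f"{years:,.0f} years of continuous playback")  # ~5,900 years
```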
Meanwhile, over in Switzerland, researchers used HPC power to set a new record for calculating the number pi (π). They smashed the old record, adding 12.8 trillion new digits, bringing the new total to 62.8 trillion.
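Record-setting runs like that one rely on purpose-built arbitrary-precision software and enormous amounts of memory, but the underlying task scales down nicely. Here is a desktop-sized taste using the third-party mpmath library (our example, not the Swiss team's code):

```python
# Compute pi to a modest precision with arbitrary-precision arithmetic.
from mpmath import mp

mp.dps = 62   # working precision: 62 decimal digits (the record is 62.8 trillion)
print(mp.pi)  # 3.14159265358979323846...
```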
HPDC has revolutionized DNA sequencing, weather forecasting, vehicle design, finance, and other fields. Supercomputers can handle massive amounts of data and run different scenarios simultaneously. The global HPDC market will reach $44 billion in 2022, according to Hyperion Research. Almost every industry represented in the Fortune 1000 uses HPC applications.
The power of high performance computing has spawned breakthroughs in many fields. The biggest drawback appears to be its cost, both in terms of money and the labor it takes to set up an HPDC system. Those issues have made cloud computing alternatives increasingly popular.
The average modern desktop is no slouch when it comes to processing data; it can perform billions of calculations per second. HPC leaves it in the dust, however, performing quadrillions of calculations per second. A problem that would take a single computer hours, days, weeks, months, or years to solve (if it could solve it at all) takes a supercomputer minutes, hours, days, or weeks.
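The back-of-the-envelope arithmetic behind that claim (illustrative rates, not benchmarks):

```python
# How long a hypothetical quintillion-operation problem takes at each speed.
work = 1e18          # total operations required
desktop_rate = 5e9   # billions of calculations per second
hpc_rate = 1e15      # quadrillions of calculations per second

print(f"Desktop: {work / desktop_rate / 86_400:,.0f} days")   # ~2,315 days (over 6 years)
print(f"Supercomputer: {work / hpc_rate / 60:,.1f} minutes")  # ~16.7 minutes
```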
Oracle cites multiple challenges to locally hosted HPDC deployments, which helps explain the increasing move to cloud-based supercomputing. “Instead of having to architect, engineer, and build a supercomputer, companies can now rent hours on the cloud, making it possible to bring tremendous computational power to bear on R&D,” Joris Poort, founder and CEO of intelligent cloud supercomputing platform Rescale, wrote in the Harvard Business Review.
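What “renting hours” looks like in practice varies by provider, but the workflow is often just an API call that queues a containerized job on someone else's hardware. Here's a hedged sketch using AWS Batch via the boto3 library; the queue and job-definition names are hypothetical placeholders, and this is not Rescale's or Boom's actual pipeline:

```python
# Queue a containerized simulation job on rented cloud capacity (AWS Batch).
import boto3

batch = boto3.client("batch", region_name="us-east-1")
response = batch.submit_job(
    jobName="cfd-simulation-run-001",  # hypothetical job name
    jobQueue="hpc-spot-queue",         # hypothetical: a queue created in your account
    jobDefinition="cfd-solver:3",      # hypothetical: a registered container job definition
)
print("Submitted job:", response["jobId"])
```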
Computer simulations accelerate Boom Supersonic's efforts to get its jets in the air. Poort cites other industries using the technology to similar ends, including Commonwealth Fusion Systems, a nuclear fusion startup that “relies on simulations to validate potential reactor designs, as no commercial fusion reactor has ever existed.” Machine learning algorithms likewise train software for advanced driver assistance systems.
Firefly Aerospace uses computational engineering to develop commercial rockets it plans to send to the moon. “Similarly, drug manufacturers need complex simulations to know how molecules will interact with a biological environment before they can commit to producing new drug discovery breakthroughs,” Poort writes.
Weather forecasting models use data analytics to predict everything from local thunderstorms to hurricanes that threaten multiple states. The National Oceanic and Atmospheric Administration (NOAA) plans to launch a new hurricane forecast model, the Hurricane Analysis and Forecast System, in time for the 2023 storm season. “Researchers are developing new ensemble-based forecast models at record speed, and now we have the computing power needed to implement many of these substantial advancements to improve weather and climate prediction,” says Ken Graham, director of the National Weather Service.
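An “ensemble” forecast runs the same model many times from slightly perturbed starting conditions and reads the spread of outcomes as a measure of confidence, which is exactly why it consumes so much computing power. A toy sketch with a chaotic stand-in for a weather model (our illustration; real atmospheric models are vastly larger):

```python
# Ensemble idea in miniature: tiny differences in the starting state of a
# chaotic system produce a spread of outcomes; the spread measures uncertainty.
import numpy as np

rng = np.random.default_rng(0)

def toy_model(state, steps=48):
    """A chaotic one-variable 'forecast' (logistic map)."""
    for _ in range(steps):
        state = 3.9 * state * (1.0 - state)
    return state

base = 0.51
members = [toy_model(base + rng.normal(0, 1e-4)) for _ in range(50)]
print(f"Ensemble mean: {np.mean(members):.3f}, spread: {np.std(members):.3f}")
```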
Remember the genetic researchers at Yale with three petabytes of digital storage? Those scientists and others like them have made great strides in gene sequencing, diagnosis of mysterious disorders, and new and improved treatment methods. “Having a specific genetic diagnosis can help doctors and families have a better understanding of treatments, outcomes, and predictions,” says Lauren Jeffries, clinical genetics coordinator for Yale Medicine’s Pediatric Genomics Discovery Program.
The spread of HPDC across a wide range of businesses, industries, and government agencies has opened up a similarly wide range of job opportunities, most of them at the high end of the IT pay scale.
A degree in computer science or engineering will give you a solid foundation for HPDC work. An advanced degree can give you a competitive edge in terms of job opportunities and higher salaries. For example, Case Western Reserve University and Southern Methodist University both offer master’s degrees in computer science. Some schools, including the University of Houston and Temple University, offer HPDC certificate programs.