Researchers Craft Program to Stop Cloud Computer Problems Before They Start

By Staff Author | September 12, 2012

Researchers from North Carolina State University have developed a new software tool to prevent performance disruptions in cloud computing systems by automatically identifying and responding to potential anomalies before they can develop into problems.

Cloud computing enables users to create multiple “virtual machines” that operate independently, even though they all run on one large computing platform. However, this approach can cause performance issues when a software bug, or other problem, in one virtual machine disrupts the entire cloud.

The researchers designed software that monitors system-level data in a cloud computing infrastructure, including memory use, network traffic, and CPU usage (the amount of computing power being drawn at any given time), to develop a definition of the wide range of behaviors that can be considered “normal.” The program defines normal behavior for every virtual machine in the cloud, then looks for deviations and predicts anomalies that could affect the system’s ability to provide service to users.

One advantage of this approach is that it does not require users to provide so-called “training data” about what constitutes abnormal behavior, which matters because such data are often difficult to obtain in production cloud systems. Because the software models only normal behavior, it can also predict anomalies that have never been seen before.
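As a rough illustration of that unsupervised idea, the Python sketch below learns a per-VM envelope of normal values from system-level metrics alone, with no labeled examples of abnormal behavior, and flags samples that fall outside it. The metric names, window size, and threshold are assumptions for illustration; the paper’s actual behavior-learning model is more sophisticated than this simple statistical baseline.

```python
# Minimal sketch of per-VM "normal behavior" learning from system-level
# metrics. Metric names, window size, and the k-sigma threshold are
# illustrative assumptions, not the paper's actual model.
from collections import deque
import statistics

METRICS = ("cpu_usage", "memory_used", "network_traffic")  # assumed names

class VmBaseline:
    """Rolling per-metric history for one virtual machine."""

    def __init__(self, window=300, k=3.0):
        self.window = {m: deque(maxlen=window) for m in METRICS}
        self.k = k  # values beyond k standard deviations count as deviations

    def observe(self, sample):
        """Record one sample: a dict mapping metric name to value."""
        for m in METRICS:
            self.window[m].append(sample[m])

    def deviations(self, sample):
        """Return metrics whose current value falls outside the learned
        normal envelope (mean +/- k * stdev over the rolling window)."""
        flagged = {}
        for m in METRICS:
            history = self.window[m]
            if len(history) < 30:  # too little data to define "normal" yet
                continue
            mean = statistics.fmean(history)
            stdev = statistics.pstdev(history)
            if abs(sample[m] - mean) > self.k * stdev:
                flagged[m] = sample[m]
        return flagged
```

Note that nothing in the sketch encodes what an anomaly looks like in advance: any behavior outside the learned envelope is flagged, which is why previously unseen anomalies can still be caught.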

If the program spots a virtual machine deviating from its normal behavior, it runs a “black box” diagnostic that can determine which metrics, such as CPU usage, may be affected, without exposing user data. This metric data can then be used to trigger the appropriate prevention system, which addresses the deviation and prevents it from becoming a problem.
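Continuing the sketch above, the diagnostic step might be approximated by ranking metrics by how many standard deviations the current sample sits from each learned baseline and handing the worst offender to a prevention hook. The function names and the placeholder response are hypothetical, not the paper’s actual mechanism.

```python
# Hedged stand-in for the "black box" diagnosis: rank metrics by z-score
# against the VmBaseline above, then trigger a reversible, placeholder
# preventive action for the most anomalous metric.
def diagnose(baseline: VmBaseline, sample: dict) -> list[tuple[str, float]]:
    """Return (metric, z-score) pairs, most anomalous first."""
    scores = []
    for m, history in baseline.window.items():
        if len(history) < 30:
            continue
        mean = statistics.fmean(history)
        stdev = statistics.pstdev(history) or 1e-9  # avoid divide-by-zero
        scores.append((m, abs(sample[m] - mean) / stdev))
    return sorted(scores, key=lambda s: s[1], reverse=True)

def respond(ranked):
    """Kick off an easily reversible preventive action (placeholder)."""
    metric, score = ranked[0]
    print(f"Prevention triggered: {metric} deviates by {score:.1f} sigma")
```

Only aggregate, system-level numbers flow through this path, which mirrors the article’s point that no individual user data is inspected.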

“If we can identify the initial deviation and launch an automatic response, we can not only prevent a major disturbance, but actually prevent the user from even experiencing any change in system performance,” says Dr. Helen Gu, an assistant professor of computer science at NC State and co-author of a paper describing the research. “Also, it’s important to note that this program does not access any user’s individual information. We’re looking only at system-level behavior.”

The program is also lightweight, meaning it does not use much of the cloud’s computing power to operate. It is able to collect the initial data and define normal behavior much faster than existing approaches. Once it is up and running, it uses less than 1 percent of the CPU load and 16 megabytes of memory.

In benchmark testing, the program identified up to 98 percent of anomalies, which is much higher than the rate found in existing approaches. “It also had a 1.7 percent rate of false positives, meaning it triggered very few false alarms,” Gu says. “And because the false alarms resulted in automatic responses, which are easily reversible, the cost of the false alarms is negligible.”

Gu says her team’s next step is to incorporate more detailed “white box” diagnostic tools into the software, so they can identify the software bugs causing any anomalies and correct them.

The paper, “UBL: Unsupervised Behavior Learning for Predicting Performance Anomalies in Virtualized Cloud Systems,” was co-authored by NC State Ph.D. students Daniel Dean and Hiep Nguyen. The paper will be presented Sept. 20 at the 9th Annual ACM International Conference on Autonomic Computing in San Jose, Calif. The research was supported by the National Science Foundation, the U.S. Army Research Office, an IBM faculty award and a Google research award.

North Carolina State University
