Bandwidth resource prediction and management of Web applications hosted on cloud
SUN Tianqi, HU Jianpeng, HUANG Juan, FAN Ying
Journal of Computer Applications    2020, 40 (1): 181-187.   DOI: 10.11772/j.issn.1001-9081.2019050903
To address the problem of bandwidth resource management for Web applications, a method based on network simulation was proposed for predicting the bandwidth requirement and Quality of Service (QoS) of Web applications. A modeling framework and formal specification were presented for Web services, a simplified parallel workload model was adopted, the model parameters were extracted from Web application access logs by automated data mining, and the complex network transmission process was simulated with a network simulation tool. As a result, the bandwidth requirement and the changes in QoS could be predicted under different workload intensities. The classic TPC-W benchmark system was used to evaluate the accuracy of the prediction results. Theoretical analysis and simulation results show that, compared with traditional linear regression prediction, network simulation can stably reproduce the real system, with average relative prediction errors of 4.6% for the total number of requests and 3.3% for the total number of bytes. Finally, different bandwidth scaling schemes were simulated and evaluated on the TPC-W benchmark system; the results can provide decision support for the resource management of Web applications.
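The log-mining step mentioned in this abstract can be illustrated with a short sketch. The following Python code is a hypothetical example, not the authors' tool: it assumes an access log in Common Log Format and a fixed 60-second aggregation interval, and extracts the per-interval request counts and byte totals that a workload model of this kind would take as parameters.
```python
import re
from collections import defaultdict
from datetime import datetime

# Hypothetical example, not the authors' tool: parse a Common Log Format access
# log and aggregate request counts and transferred bytes per time interval, as
# inputs to a workload model.
LOG_LINE = re.compile(
    r'\S+ \S+ \S+ \[(?P<time>[^\]]+)\] "(?P<request>[^"]*)" (?P<status>\d{3}) (?P<bytes>\d+|-)'
)

def mine_workload(log_path, interval_seconds=60):
    """Return {interval index: {"requests": n, "bytes": total}} from an access log."""
    workload = defaultdict(lambda: {"requests": 0, "bytes": 0})
    with open(log_path) as f:
        for line in f:
            m = LOG_LINE.match(line)
            if not m:
                continue  # skip malformed lines
            # Common Log Format timestamp, e.g. 10/Oct/2019:13:55:36 +0800
            ts = datetime.strptime(m.group("time"), "%d/%b/%Y:%H:%M:%S %z")
            bucket = int(ts.timestamp()) // interval_seconds
            workload[bucket]["requests"] += 1
            if m.group("bytes") != "-":
                workload[bucket]["bytes"] += int(m.group("bytes"))
    return dict(workload)

if __name__ == "__main__":
    for bucket, stats in sorted(mine_workload("access.log").items()):
        print(bucket, stats["requests"], stats["bytes"])
```
Per-interval counts like these could then drive the simulated workload intensity; the log format and interval length above are illustrative assumptions only.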
Classifier ensemble based on fuzzy clustering
FAN Ying, JI Hua, ZHANG Hua-xiang
Journal of Computer Applications   
A novel algorithm for constructing classifier ensembles based on fuzzy clustering was introduced. The algorithm obtained the distribution characteristics of the training set by fuzzy clustering and sampled different training subsets to train individual classifiers. It then adjusted each sample's weight according to an evaluation of classifier quality to obtain further classifiers, until a termination condition was satisfied. The algorithm was tested on UCI benchmark data sets and compared with two classical algorithms, AdaBoost and Bagging. Results show that the new algorithm is more robust and achieves higher accuracy.
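The ensemble-construction loop summarized in this abstract can be sketched as follows. This is a rough, hypothetical Python illustration rather than the authors' algorithm: soft memberships from a Gaussian mixture stand in for fuzzy clustering, and the sampling and weight-update rules are simplified, AdaBoost-style assumptions.
```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.tree import DecisionTreeClassifier

def build_ensemble(X, y, n_clusters=3, n_rounds=10, seed=0):
    """Hypothetical sketch: soft clustering guides sampling, boosting-style
    weight updates generate further classifiers (assumes X, y are NumPy arrays
    and labels are in {0, 1})."""
    rng = np.random.default_rng(seed)
    n = len(X)
    # Distribution characteristics of the training set via soft clustering
    # (a Gaussian mixture stands in here for fuzzy clustering).
    membership = GaussianMixture(n_components=n_clusters, random_state=seed).fit(X).predict_proba(X)
    weights = np.full(n, 1.0 / n)
    ensemble = []
    for _ in range(n_rounds):
        # Sample a training set biased by sample weight and strongest cluster membership.
        p = weights * membership.max(axis=1)
        p /= p.sum()
        idx = rng.choice(n, size=n, replace=True, p=p)
        clf = DecisionTreeClassifier(max_depth=3, random_state=seed).fit(X[idx], y[idx])
        miss = clf.predict(X) != y
        err = np.average(miss, weights=weights)
        if err >= 0.5:  # termination condition: classifier no better than chance
            break
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
        ensemble.append((alpha, clf))
        # Increase the weight of misclassified samples, decrease the rest.
        weights *= np.where(miss, np.exp(alpha), np.exp(-alpha))
        weights /= weights.sum()
    return ensemble

def ensemble_predict(ensemble, X):
    """Weighted vote over the ensemble, again assuming binary labels {0, 1}."""
    votes = sum(alpha * np.where(clf.predict(X) == 1, 1.0, -1.0) for alpha, clf in ensemble)
    return (votes >= 0).astype(int)
```
The choice of base learner, the membership-biased sampling rule, and the stopping threshold are all placeholders; the paper's own criteria should be consulted for the actual design.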