Search results for: network optimization methods
19067 A Comparative Study for Various Techniques Using WEKA for Red Blood Cells Classification
Authors: Jameela Ali, Hamid A. Jalab, Loay E. George, Abdul Rahim Ahmad, Azizah Suliman, Karim Al-Jashamy
Abstract:
Red blood cells (RBC) are the most common type of blood cell and the most intensively studied in cell biology. A lack of RBCs, a condition in which the hemoglobin level is lower than normal, is referred to as “anemia”. Abnormalities in RBCs affect the exchange of oxygen. This paper presents a comparative study of various techniques for classifying red blood cells as normal or abnormal (anemic) using WEKA, an open-source collection of machine learning algorithms for data mining applications. The algorithms tested are the Radial Basis Function neural network, Support Vector Machine, and K-Nearest Neighbors algorithm. Two sets of combined features were utilized for the classification of blood cell images. The first set, consisting exclusively of geometrical features, was used to identify whether the tested blood cell is spherical or non-spherical. The second set, consisting mainly of textural features, was used to recognize the types of the spherical cells. We evaluated these classification methods by applying them to our RBC image dataset, obtained from Serdang Hospital, Malaysia, and measuring the accuracy of the test results. The best achieved classification rates are 97%, 98%, and 79% for the Support Vector Machine, Radial Basis Function neural network, and K-Nearest Neighbors algorithm, respectively.
Keywords: red blood cells, classification, radial basis function neural networks, support vector machine, k-nearest neighbors algorithm
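A minimal sketch of this kind of classifier comparison, using scikit-learn analogues (the study itself used WEKA, which is Java-based): the feature matrix below is a synthetic placeholder for the geometrical and textural RBC features, and the RBF-kernel SVM is only a loose stand-in for WEKA's RBF network.

```python
# Sketch: compare classifiers with 5-fold cross-validation (scikit-learn
# stand-ins for the WEKA algorithms; data is a synthetic placeholder).
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder for the geometrical/textural RBC feature sets.
X, y = make_classification(n_samples=300, n_features=12, random_state=0)

classifiers = {
    # RBF-kernel SVM, loosely standing in for WEKA's RBF network as well.
    "SVM (RBF kernel)": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    "k-NN (k=5)": make_pipeline(StandardScaler(), KNeighborsClassifier(5)),
}

for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```

Cross-validated accuracy gives a comparison analogous to the rates reported above, though the numbers from placeholder data carry no meaning for real RBC images.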
Procedia PDF Downloads 479
19066 Hyperspectral Band Selection for Oil Spill Detection Using Deep Neural Network
Authors: Asmau Mukhtar Ahmed, Olga Duran
Abstract:
Hydrocarbon (HC) spills constitute a significant problem and cause great concern for the environment. With the latest technology (hyperspectral imaging) and state-of-the-art techniques (image processing tools), hydrocarbon spills can be detected at an early stage to mitigate their effects. In this study, a controlled laboratory experiment was used in which clay soil was mixed and homogenized with different hydrocarbon types (diesel, bio-diesel, and petrol). The mixtures were scanned with a HYSPEX hyperspectral camera under constant illumination to generate the hyperspectral datasets used for this experiment. So far, the Short Wave Infrared Region (SWIR) has been exploited in detecting HC spills with excellent accuracy. However, the Near-Infrared Region (NIR) is relatively unexplored with regard to HC contamination and how it affects the spectrum of soils. In this study, a Deep Neural Network (DNN) was applied to the controlled datasets to detect and quantify the amount of HC spilled in soils in the Near-Infrared Region. The initial results are extremely encouraging, as they indicate that the DNN was able to identify features of HC in the Near-Infrared Region with a good level of accuracy.
Keywords: hydrocarbon, deep neural network, short wave infrared region, near-infrared region, hyperspectral image
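A minimal sketch of the general approach (not the authors' code): a small feed-forward DNN in PyTorch that maps NIR reflectance spectra to an estimated hydrocarbon fraction. The 150-band input width, the layer sizes, and the training data are all illustrative assumptions.

```python
# Sketch: regress hydrocarbon fraction from NIR spectra with a small DNN.
# Band count, layer widths, and data are illustrative assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(150, 64), nn.ReLU(),   # assumed 150 NIR bands per pixel
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 1),                # predicted HC fraction
)
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

spectra = torch.rand(32, 150)        # placeholder batch of pixel spectra
targets = torch.rand(32, 1)          # placeholder HC fractions
for _ in range(200):                 # toy training loop
    optimizer.zero_grad()
    loss = loss_fn(model(spectra), targets)
    loss.backward()
    optimizer.step()
print(f"final training loss: {loss.item():.4f}")
```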
Procedia PDF Downloads 108
19065 Shape Optimization of a Hole for Water Jetting in a Spudcan for a Jack-Up Rig
Authors: Han Ik Park, Jeong Hyeon Seong, Dong Seop Han, Su-Chul Shin, Young Chul Park
Abstract:
A spudcan is mounted on the lower leg of a jack-up rig; it prevents the structure from rolling over and supports it on a stable sea floor. The spudcan is equipped with a water injection device so that, during insertion, it penetrates smoothly through the stable sand layer into the clay layer, and so that it can be pulled out smoothly during recovery. In this study, the shape of the pipeline holes for the water injection device was optimized, with two candidate shapes considered: oval and round. A Wind Turbine Installation Vessel (WTIV) operating at a Gulf of Mexico offshore site was chosen as the target platform. The optimal design was conducted using the commercial program ANSYS Workbench. The results of this study can be applied to the hole-shape design of various marine structures.
Keywords: kriging method, jack-up rig, shape optimization, spudcan
Procedia PDF Downloads 508
19064 Optimization of Bioremediation Process to Remove Hexavalent Chromium from Tannery Effluent
Authors: Satish Babu Rajulapati
Abstract:
The removal of toxic and heavy metal contaminants from wastewater streams and industrial effluents is one of the most important environmental issues faced the world over. In the present study, three bacterial cultures tolerating high concentrations of chromium were isolated from soil and wastewater samples collected from the tanneries located in Warangal, Telangana state. The bacterial species were identified as Bacillus sp., Staphylococcus sp., and Pseudomonas sp. Preliminary studies were carried out with the three bacterial species at various operating parameters such as pH and temperature. The results indicate that Pseudomonas sp. is the most efficient in the uptake of Cr(VI). Further, a detailed investigation of Pseudomonas sp. has been carried out to determine the efficiency of removal of Cr(VI). The various parameters influencing the biosorption of Cr(VI), such as pH, temperature, initial chromium concentration, inoculum size, and incubation time, have been studied. Response Surface Methodology (RSM) was applied to optimize the removal of Cr(VI). Maximum Cr(VI) removal was found to be 85.72% at pH 7, temperature 35 °C, initial concentration 67 mg/l, inoculum size 9% (v/v), and time 60 hrs.
Keywords: Staphylococcus sp., chromium, RSM, optimization, Cr(VI)
Procedia PDF Downloads 320
19063 Pilot-free Image Transmission System of Joint Source Channel Based on Multi-Level Semantic Information
Authors: Linyu Wang, Liguo Qiao, Jianhong Xiang, Hao Xu
Abstract:
In semantic communication, existing pilot-free joint source-channel coding (JSCC) wireless communication systems have unstable transmission performance and cannot effectively capture the global information and location information of images. In this paper, a pilot-free image transmission system of joint source channel based on multi-level semantic information (Multi-level JSCC) is proposed. The transmitter of the system is composed of two networks. The feature extraction network extracts the high-level semantic features of the image, compressing the transmitted information and improving bandwidth utilization. The feature retention network preserves low-level semantic features and image details to improve communication quality. The receiver is also composed of two networks. The received high-level semantic features are fused, after passing through a feature enhancement network, with the low-level semantic features in the same dimension; the image dimensions are then restored through a feature recovery network, and the image location information is effectively used for image reconstruction. This paper verifies that the proposed Multi-level JSCC algorithm can effectively transmit and recover image information in both AWGN and Rayleigh fading channels, and the peak signal-to-noise ratio (PSNR) is improved by 1-2 dB compared with other algorithms under the same simulation conditions.
Keywords: deep learning, JSCC, pilot-free picture transmission, multilevel semantic information, robustness
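The underlying joint source-channel idea can be sketched as an encoder, a simulated AWGN channel, and a decoder trained end to end. The sketch below omits the multi-level feature extraction/retention networks and the fusion stage of the actual Multi-level JSCC system; all layer shapes and the unit-signal-power noise model are assumptions.

```python
# Sketch: encode an image, add AWGN at a chosen SNR, decode.
# Assumes unit signal power for the noise level; shapes are illustrative.
import torch
import torch.nn as nn

encoder = nn.Sequential(
    nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 8, 4, stride=2, padding=1),
)
decoder = nn.Sequential(
    nn.ConvTranspose2d(8, 16, 4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
)

image = torch.rand(1, 3, 64, 64)
code = encoder(image)                          # compressed representation
snr_db = 10.0
noise_std = 10 ** (-snr_db / 20)               # unit-power assumption
received = code + noise_std * torch.randn_like(code)
recon = decoder(received)
print(recon.shape)                             # torch.Size([1, 3, 64, 64])
```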
Procedia PDF Downloads 120
19062 Computer-Aided Diagnosis of Polycystic Kidney Disease Using ANN
Authors: G. Anjan Babu, G. Sumana, M. Rajasekhar
Abstract:
Many inherited diseases and non-hereditary disorders are common in the development of renal cystic diseases. Polycystic kidney disease (PKD) is a disorder in which groups of cysts filled with water-like fluid develop within the kidneys. PKD is responsible for 5-10% of cases of end-stage renal failure treated by dialysis or transplantation. New experimental models and the application of molecular biology techniques have provided new insights into the pathogenesis of PKD. Researchers are showing keen interest in developing automated systems that apply computer-aided techniques to the diagnosis of diseases. In this paper, a multi-layered feed-forward neural network with one hidden layer is constructed, trained, and tested by applying the backpropagation learning rule for the diagnosis of PKD, based on physical symptoms and urinalysis test results collected from individual patients. The data collected from 50 patients are used to train and test the network: 75% of the data are used for training and the remaining 25% for testing. The trained network is then applied to new samples. The output indicates whether the patient is normal or abnormal.
Keywords: dialysis, hereditary, transplantation, polycystic, pathogenesis
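A hedged sketch of such a single-hidden-layer feed-forward network trained by backpropagation, with the paper's 75/25 train/test split. The placeholder features stand in for the symptom and urinalysis data, and the hidden-layer size is an assumption.

```python
# Sketch: one-hidden-layer network, backpropagation, 75/25 split.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Placeholder for 50 patients' symptom/urinalysis feature vectors.
X, y = make_classification(n_samples=50, n_features=10, random_state=1)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=1)  # 75% train

net = MLPClassifier(hidden_layer_sizes=(8,),  # one hidden layer (assumed size)
                    max_iter=2000, random_state=1)
net.fit(X_tr, y_tr)
print("normal/abnormal test accuracy:", net.score(X_te, y_te))
```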
Procedia PDF Downloads 378
19061 Efficient DNN Training on Heterogeneous Clusters with Pipeline Parallelism
Abstract:
Pipeline parallelism has been widely used to accelerate distributed deep learning, alleviating GPU memory bottlenecks and ensuring that models can be trained and deployed smoothly under limited graphics memory conditions. However, in highly heterogeneous distributed clusters, traditional model partitioning methods are not able to achieve load balancing, and overlapping communication with computation is also a big challenge. In this paper, HePipe is proposed, an efficient pipeline-parallel training method for highly heterogeneous clusters. Based on the characteristics of pipelined neural network training tasks, and oriented to a 2-level heterogeneous cluster computing topology, a training method built on 2-level stage division of model partitioning is designed to improve parallelism. Additionally, a multi-forward 1F1B scheduling strategy is designed to accelerate the training time of each stage by executing computation units in advance, maximizing the overlap between forward-propagation communication and backward-propagation computation. Finally, a dynamic recomputation strategy based on task memory requirement prediction is proposed to improve the fit between tasks and memory, which improves cluster throughput and solves the memory shortfall problem caused by memory differences in heterogeneous clusters. The empirical results show that HePipe improves training speed by 1.6×-2.2× over existing asynchronous pipeline baselines.
Keywords: pipeline parallelism, heterogeneous cluster, model training, 2-level stage partitioning
Procedia PDF Downloads 16
19060 DenseNet and Autoencoder Architecture for COVID-19 Chest X-Ray Image Classification and Improved U-Net Lung X-Ray Segmentation
Authors: Jonathan Gong
Abstract:
Purpose: AI-driven solutions are at the forefront of many pathology and medical imaging methods. Using algorithms designed to improve the experience of medical professionals within their respective fields, the efficiency and accuracy of diagnosis can improve. In particular, X-rays are a fast and relatively inexpensive test that can diagnose diseases. In recent years, however, X-rays have not been widely used to detect and diagnose COVID-19. The underuse of X-rays is mainly due to low diagnostic accuracy and confounding with pneumonia, another respiratory disease. However, research in this field suggests the possibility that artificial neural networks can successfully diagnose COVID-19 with high accuracy.
Models and Data: The dataset used is the COVID-19 Radiography Database, which includes images and masks of chest X-rays under the labels COVID-19, normal, and pneumonia. The classification model developed uses an autoencoder and a pre-trained convolutional neural network (DenseNet201) to provide transfer learning to the model. The model then uses a deep neural network to finalize the feature extraction and predict the diagnosis for the input image. This model was trained on 4035 images and validated on 807 images separate from those used for training. The images used to train the classification model share an important feature: they are cropped beforehand to eliminate distractions during training. The image segmentation model uses an improved U-Net architecture to extract the lung mask from the chest X-ray image; it is trained on 8577 images and validated on a 20% validation split. The models are evaluated using the external dataset for validation, and their accuracy, precision, recall, f1-score, IOU, and loss are calculated.
Results: The classification model achieved an accuracy of 97.65% and a loss of 0.1234 when differentiating COVID-19-infected, pneumonia-infected, and normal lung X-rays. The segmentation model achieved an accuracy of 97.31% and an IOU of 0.928.
Conclusion: The proposed models can detect COVID-19, pneumonia, and normal lungs with high accuracy and derive the lung mask from a chest X-ray with similarly high accuracy. The hope is for these models to elevate the experience of medical professionals and provide insight into the future of the methods used.
Keywords: artificial intelligence, convolutional neural networks, deep learning, image processing, machine learning
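A minimal transfer-learning sketch along these lines, assuming torchvision's pre-trained DenseNet201 as the frozen feature extractor; the autoencoder stage and the exact head sizes of the paper's model are not reproduced.

```python
# Sketch: freeze a pre-trained DenseNet201 and attach a 3-class head
# (COVID-19, normal, pneumonia). Head sizes are assumptions.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.densenet201(weights="DEFAULT")  # ImageNet weights
for p in backbone.parameters():
    p.requires_grad = False                       # freeze the features

backbone.classifier = nn.Sequential(
    nn.Linear(backbone.classifier.in_features, 256), nn.ReLU(),
    nn.Linear(256, 3),
)

x = torch.rand(2, 3, 224, 224)                    # placeholder cropped X-rays
print(backbone(x).shape)                          # torch.Size([2, 3])
```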
Procedia PDF Downloads 129
19059 Optimization of Loudspeaker Part Design Parameters by Air Viscosity Damping Effect
Authors: Yue Hu, Xilu Zhao, Takao Yamaguchi, Manabu Sasajima, Yoshio Koike, Akira Hara
Abstract:
This study optimized the design parameters of a cone loudspeaker as an example of a highly flexible product design. We developed an acoustic analysis software program that considers the impact of damping caused by air viscosity. In sound reproduction, it is difficult to optimize each parameter of the loudspeaker design. To overcome this practical limitation, this study presents an acoustic analysis algorithm for optimizing the design parameters of the loudspeaker. The material characteristics of the cone paper and the loudspeaker edge were the design parameters, and the vibration displacement of the cone paper was the objective function. The results of the analysis showed that the design had high accuracy compared with the predicted value. These results suggest that although parameter design is difficult, even with experience and intuition, it can be performed easily using the optimized design found with the acoustic analysis software.
Keywords: air viscosity, design parameters, loudspeaker, optimization
Procedia PDF Downloads 511
19058 Early Prediction of Disposable Addresses in Ethereum Blockchain
Authors: Ahmad Saleem
Abstract:
Ethereum is the second-largest cryptocurrency in the blockchain ecosystem. Along with standard transactions, it supports smart contracts and NFTs. Current research trends are focused on analyzing the overall structure of the network, its growth, and its behavior. Ethereum addresses are anonymous and can be created on the fly; the nature of the Ethereum network and its addresses makes their behavior hard to predict, and the activity period of an Ethereum address has received little analysis. Using machine learning, we can make early predictions about the disposability of an address. In this paper, we analyzed the lifetime of addresses, identified and predicted disposable addresses using machine learning models, and compared the results.
Keywords: blockchain, Ethereum, cryptocurrency, prediction
Procedia PDF Downloads 96
19057 Determining the Number of Single Models in a Combined Forecast
Authors: Serkan Aras, Emrah Gulay
Abstract:
Combining various forecasting models is an important tool for researchers seeking more accurate forecasts. A great number of papers have shown that selecting single models that are as dissimilar as possible, or methods based on information as different as possible, leads to better forecasting performance. However, there is no established rule regarding the number of single models to be used in any combining method. This study focuses on determining the optimal or near-optimal number of single models with the help of statistical tests. An extensive experiment is carried out utilizing well-known time series data sets from diverse fields, along with many rival forecasting methods and some of the commonly used combining methods. The obtained results indicate that statistically significant performance differences can be found with respect to the number of single models in the combining methods under investigation.
Keywords: combined forecast, forecasting, M-competition, time series
Procedia PDF Downloads 353
19056 Multimodal Biometric Cryptography Based Authentication in Cloud Environment to Enhance Information Security
Authors: D. Pugazhenthi, B. Sree Vidya
Abstract:
Cloud computing is one of the emerging technologies that enables end users to consume cloud services on a ‘pay per usage’ basis. The technology is growing at a fast pace, and so are its security threats. Among the various services provided by the cloud is storage, where security is vital both for authenticating legitimate users and for protecting information. This paper presents efficient ways of authenticating users as well as securing information on the cloud. The initial phase proposed in this paper deals with an authentication technique using a multi-factor, multi-dimensional authentication system with multi-level security. Unique identification and low intrusiveness give user-behaviour-based biometrics greater reliability than conventional password authentication. With biometric systems, accounts are accessed only by legitimate users, not by impostors. The biometric templates employed here include not a single trait but multiple traits, viz., iris and fingerprints. The coordinating stage of the authentication system is based on an ensemble Support Vector Machine (SVM), optimized by assembling the weights of the base SVMs after each individual SVM in the ensemble is trained by the Artificial Fish Swarm Algorithm (AFSA). This helps generate a user-specific secure cryptographic key from the multimodal biometric template through a fusion process. The data security problem is addressed, and an enhanced security architecture is proposed, using an encryption and decryption system with double-key cryptography based on a Fuzzy Neural Network (FNN) for data storage and retrieval in cloud computing. The proposed scheme aims to protect records from hackers by preventing the ciphertext from being broken back into the original text. This improves authentication performance: the proposed double-cryptographic-key scheme provides better user authentication and better security, distinguishing genuine from fake users. There are three important modules in this work: 1) feature extraction, 2) multimodal biometric template generation, and 3) cryptographic key generation. The feature and texture properties are first extracted from the respective fingerprint and iris images. Finally, with the help of the fuzzy neural network and a symmetric cryptography algorithm, the double-key encryption technique is developed. As the proposed approach is based on neural networks, it has the advantage that a hacker cannot decrypt the data even if it has already been stolen. The results prove that the authentication process is optimal and the stored information is secure.
Keywords: artificial fish swarm algorithm (AFSA), biometric authentication, decryption, encryption, fingerprint, fusion, fuzzy neural network (FNN), iris, multi-modal, support vector machine classification
Procedia PDF Downloads 259
19055 Performance Analysis of Scalable Secure Multicasting in Social Networking
Authors: R. Venkatesan, A. Sabari
Abstract:
Developments in the social networking Internet scenario call for a scalable, authenticated, secure group communication model such as multicasting. Multicasting is an internetwork service that offers efficient delivery of data from a source to multiple destinations. Even though multicast has been very successful at providing an efficient, best-effort data delivery service for huge groups, extending other features to multicast in a scalable way has proven to be a complex process. Separately, the requirement for securing electronic information has become gradually more apparent. Since multicast applications are deployed for mainstream purposes, the need to secure multicast communications will become significant.
Keywords: multicasting, scalability, security, social network
Procedia PDF Downloads 291
19054 Convergence of Generalized Jacobi, Gauss-Seidel and Successive Overrelaxation Methods for Various Classes of Matrices
Authors: Manideepa Saha, Jahnavi Chakrabarty
Abstract:
Generalized Jacobi (GJ) and Generalized Gauss-Seidel (GGS) methods are more effective than the conventional Jacobi and Gauss-Seidel methods for solving linear systems of equations. It is known that GJ and GGS methods converge for strictly diagonally dominant (SDD) matrices and for M-matrices. In this paper, we study the convergence of GJ and GGS for symmetric positive definite (SPD) matrices, L-matrices, and H-matrices. We introduce a generalization of the successive overrelaxation (SOR) method for solving linear systems and discuss its convergence for the classes of SDD matrices, SPD matrices, M-matrices, L-matrices, and H-matrices. Advantages of the generalized SOR method are established through numerical experiments over the GJ, GGS, and SOR methods.
Keywords: convergence, Gauss-Seidel, iterative method, Jacobi, SOR
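For reference, the classical SOR iteration that these methods generalize can be sketched as follows; the generalized variants split off a banded part of the matrix rather than only the diagonal, a refinement omitted here. With omega = 1 the iteration reduces to Gauss-Seidel.

```python
# Sketch: classical SOR on a strictly diagonally dominant test system,
# for which convergence is guaranteed.
import numpy as np

def sor(A, b, omega=1.25, tol=1e-10, max_iter=500):
    x = np.zeros_like(b, dtype=float)
    n = len(b)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x

A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
b = np.array([2.0, 4.0, 2.0])
print(sor(A, b))            # omega=1 would reduce this to Gauss-Seidel
```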
Procedia PDF Downloads 186
19053 The Proposal of Modification of California Pipe Method for Inclined Pipe
Authors: Wojciech Dąbrowski, Joanna Bąk, Laurent Solliec
Abstract:
Technical and technological progress and the constant development of methods and devices applied to sanitary engineering are indispensable nowadays. Issues related to sanitary engineering involve flow measurement for water and wastewater, and precise measurement is pivotal for further actions such as monitoring. There are many methods and techniques of flow measurement in sanitary engineering. Weirs and flumes are well-known and commonly used methods, but alternative methods also exist; some are very simple, while others rely on advanced technology. An old-time method combined with new technology can be more useful than before. This paper describes a substitute method of flow gauging (the California pipe method) and proposes a modification of this method for inclined pipes. Examining the possibility of improving and developing old-time methods is the direction of this investigation.
Keywords: California pipe, sewerage, flow rate measurement, water, wastewater, improve, modification, hydraulic monitoring, stream
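For context, the commonly cited Vanleer form of the California pipe formula (US customary units) is easy to express as a small helper; the paper's proposed modification for inclined pipes is not reproduced here, so this sketch covers only the baseline horizontal-pipe method.

```python
# Sketch: Vanleer's California pipe formula (horizontal pipe, free fall).
# Units are US customary; the inclined-pipe modification is not included.
def california_pipe_discharge(a_ft: float, d_ft: float) -> float:
    """Discharge in ft^3/s given the air depth a (from the inside top of
    the pipe down to the water surface) and internal diameter d, in feet."""
    return 8.69 * (1.0 - a_ft / d_ft) ** 1.88 * d_ft ** 2.48

# Example: a 6-inch (0.5 ft) pipe flowing half full (a = d/2).
print(f"{california_pipe_discharge(0.25, 0.5):.3f} ft^3/s")
```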
Procedia PDF Downloads 438
19052 Review of Cable Fault Locating Methods and Usage of VLF for Real Cases of High Resistance Fault Locating
Authors: Saadat Ali, Rashid Abdulla Ahmed Alshehhi
Abstract:
Cable faults are probable and common during or after commissioning, causing significant delays and disrupting the power distribution or transmission network, which is intolerable for utilities and service providers given their reliability and business continuity commitments. Therefore, the adoption of a rapid localization and rectification methodology is their main concern. This paper explores the techniques presently available for high-voltage cable fault localization and rectification, and discusses which is preferable as easier, faster, and less harmful to cables. It also shares practical experience of high-resistance fault locating using the Very Low Frequency (VLF) method.
Keywords: faults, VLF, real cases, cables
Procedia PDF Downloads 111
19051 Risk and Uncertainty in Aviation: A Thorough Analysis of System Vulnerabilities
Authors: C. V. Pietreanu, S. E. Zaharia, C. Dinu
Abstract:
Hazard assessment and risk quantification are key components for estimating the impact of existing regulations. But since regulatory compliance cannot cover all risks in aviation, the authors point out that by studying causal factors and eliminating uncertainty, an accurate analysis can be outlined. The research begins by delimiting the notions involved, as confusion over these terms has over time led to less rigorous analyses. Throughout this paper, it is emphasized that variation in human performance and organizational factors represents the biggest threat from an operational perspective. Therefore, the advanced risk assessment methods analyzed by the authors aim to understand the vulnerabilities of a system exhibiting nonlinear behavior. Ultimately, mathematically modeling existing hazards and risks while eliminating uncertainty implies establishing an optimal solution (i.e., risk minimization).
Keywords: control, human factor, optimization, risk management, uncertainty
Procedia PDF Downloads 248
19050 Reliability-Based Maintenance Management Methodology to Minimise Life Cycle Cost of Water Supply Networks
Authors: Mojtaba Mahmoodian, Joshua Phelan, Mehdi Shahparvari
Abstract:
With a large percentage of countries' total infrastructure expenditure attributed to water network maintenance, it is essential to optimise maintenance strategies so that underground pipes are rehabilitated or replaced before failure occurs. The aim of this paper is to provide water utility managers with a maintenance management approach for underground water pipes, subject to external loading and material corrosion, that gives the lowest life cycle cost over a predetermined time period. This reliability-based maintenance management methodology details the optimal years for intervention and the ideal number of maintenance activities to perform before replacement, and specifies feasible renewal options and intervention prioritisation to minimise the life cycle cost. The study was then extended to include feasible renewal methods by determining the structural condition index and the potential for soil loss, then obtaining the failure impact rating to assist in prioritising pipe replacement. A case study on the optimisation of maintenance plans for the Melbourne water pipe network is considered in this paper to evaluate the practicality of the proposed methodology. The results confirm that the suggested methodology can provide water utility managers with a reliable, systematic approach to determining optimum maintenance plans for pipe networks.
Keywords: water pipe networks, maintenance management, reliability analysis, optimum maintenance plan
Procedia PDF Downloads 155
19049 Inversion of the Spectral Analysis of Surface Waves Dispersion Curves through the Particle Swarm Optimization Algorithm
Authors: A. Cerrato Casado, C. Guigou, P. Jean
Abstract:
In this investigation, the particle swarm optimization (PSO) algorithm is used to perform the inversion of the dispersion curves in the spectral analysis of surface waves (SASW) method. This inverse problem usually presents complicated solution spaces with many local minima that make convergence to the correct solution difficult. PSO is a metaheuristic method that was originally designed to simulate social behavior but has demonstrated powerful capabilities for solving inverse problems with complex solution spaces and a high number of variables. The dispersion curves of the synthetic soils are constructed by the vertical flexibility coefficient method, which is especially convenient for soils where the stiffness does not increase gradually with depth. The reason is that these types of soil profiles are not normally dispersive, since the dominant mode of Rayleigh waves is usually not coincident with the fundamental mode. Multiple synthetic soil profiles have been tested to show the characteristics of the convergence process and assess the accuracy of the final soil profile. In addition, the inversion procedure is applied to multiple real soils and the final profiles compared with the available information. The combination of the vertical flexibility coefficient method to obtain the dispersion curve and the PSO algorithm to carry out the inversion proves to be a robust procedure that is able to provide good solutions for complex soil profiles even with scarce prior information.
Keywords: dispersion, inverse problem, particle swarm optimization, SASW, soil profile
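A bare-bones sketch of the inversion step: each PSO particle encodes a candidate velocity profile, and the objective is the misfit between observed and predicted dispersion curves. The forward_model() below is a trivial stand-in for the vertical flexibility coefficient computation, and the swarm constants are conventional values, not the paper's settings.

```python
# Sketch: PSO searching for a 3-layer velocity profile that reproduces an
# "observed" dispersion curve. forward_model() is a trivial placeholder.
import numpy as np

rng = np.random.default_rng(0)

def forward_model(profile):
    # A real implementation would return predicted phase velocities.
    return np.cumsum(profile)

observed = forward_model(np.array([200.0, 300.0, 450.0]))  # synthetic truth

def misfit(profile):
    return np.sum((forward_model(profile) - observed) ** 2)

n_particles, dim = 30, 3
pos = rng.uniform(100, 600, (n_particles, dim))   # candidate profiles
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([misfit(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(200):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([misfit(p) for p in pos])
    better = vals < pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[pbest_val.argmin()].copy()

print("recovered profile:", gbest.round(1))       # ideally near [200, 300, 450]
```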
Procedia PDF Downloads 184
19048 The Utilization of Particle Swarm Optimization Method to Solve Nurse Scheduling Problem
Authors: Norhayati Mohd Rasip, Abd. Samad Hasan Basari, Nuzulha Khilwani Ibrahim, Burairah Hussin
Abstract:
Allocating working schedules fairly, especially in a shift environment, is hard to achieve. In the case of nurse scheduling, setting up the working timetable is time-consuming and complicated, as it must consider many factors, including rules, regulations, and human factors. The scenario is further complicated because most nurses are women, which introduces personal constraints and maternity leave factors. An undesirable schedule can affect nurse productivity and social life, and the resulting absenteeism can in turn significantly affect patients' lives. This paper aims to enhance the scheduling process by utilizing particle swarm optimization to solve the nurse scheduling problem. The results show that the multiple generated initial schedules fulfill the requirements and produce the lowest constraint-violation cost.
Keywords: nurse scheduling, particle swarm optimisation, nurse rostering, hard and soft constraint
Procedia PDF Downloads 372
19047 Elicitation Methods of Requirements Gathering in Shopping Mobile Application Development
Authors: Xiao Yihong, Li Zhixuan, Wong Kah Seng, Shen Xingcang
Abstract:
Requirements elicitation is one of the important factors in developing any new application; most systems fail simply because of poor elicitation practice. As a result, developers choose different methods in different fields to achieve optimal results. This paper analyses four cases to understand the effectiveness of different requirements elicitation methods in the field of mobile shopping applications. The elicitation methods studied included interviews, questionnaires, prototypes, analysis of existing systems, focus groups, brainstorming, and so on. The research and analysis results confirmed the need for a mixture of elicitation methods. Meanwhile, the methods adopted should be determined according to the scale of the project and applied in a reasonable order to ensure highly efficient requirements elicitation.
Keywords: requirements elicitation method, shopping, mobile application, software requirement engineering
Procedia PDF Downloads 123
19046 Modelling a Hospital as a Queueing Network: Analysis for Improving Performance
Authors: Emad Alenany, M. Adel El-Baz
Abstract:
In this paper, the flow of different classes of patients into a hospital is modelled and analyzed using the queueing network analyzer (QNA) algorithm and discrete event simulation. The inputs to QNA are the rate and variability parameters of the arrival and service times, in addition to the number of servers in each facility. The patient flows largely match the real flows of a hospital in Egypt. Based on the analysis of waiting times, two approaches are suggested for improving performance: separating patients into service groups, and adopting different service policies for sequencing patients through hospital units. Separating a specific group of patients with a higher performance target, to be served apart from the remaining patients with a lower performance target, requires the same capacity while improving performance for the selected group. In addition, it is shown that adopting the shortest processing time and shortest remaining processing time service policies, among the other tested policies, would result in 11.47% and 13.75% reductions in average waiting time, respectively, relative to the first-come-first-served policy.
Keywords: queueing network, discrete-event simulation, health applications, SPT
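One building block behind such an analysis is the mean waiting time at a multi-server station. Below is a hedged sketch of the Erlang-C result for an M/M/c node; QNA itself generalizes this to non-Poisson arrivals through the variability parameters mentioned above, which the sketch omits.

```python
# Sketch: Erlang-C mean waiting time for an M/M/c station.
from math import factorial

def erlang_c_wait(lam: float, mu: float, c: int) -> float:
    """Mean time in queue; lam = arrival rate, mu = per-server service rate."""
    rho = lam / (c * mu)
    assert rho < 1, "station must be stable"
    a = lam / mu                                   # offered load
    p0 = 1.0 / (sum(a**k / factorial(k) for k in range(c))
                + a**c / (factorial(c) * (1 - rho)))
    prob_wait = a**c / (factorial(c) * (1 - rho)) * p0
    return prob_wait / (c * mu - lam)

# Example: 10 patients/hour, 4 servers, each serving 3 patients/hour.
print(f"mean queue wait: {erlang_c_wait(10, 3, 4):.2f} hours")
```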
Procedia PDF Downloads 185
19045 Development of Energy Management System Based on Internet of Things Technique
Authors: Wen-Jye Shyr, Chia-Ming Lin, Hung-Yun Feng
Abstract:
The purpose of this study was to develop an energy management system for university campuses based on the Internet of Things (IoT) technique. The proposed IoT technique, based on WebAccess, is accessed via the Internet Explorer web browser and applies the TCP/IP protocol. A case study of an IoT-based lighting energy usage management system is presented. The structure of the proposed IoT technique comprises a perception layer, an equipment layer, a control layer, an application layer, and a network layer.
Keywords: energy management, IoT technique, sensor, WebAccess
Procedia PDF Downloads 333
19044 Enhancing Predictive Accuracy in Pharmaceutical Sales through an Ensemble Kernel Gaussian Process Regression Approach
Authors: Shahin Mirshekari, Mohammadreza Moradi, Hossein Jafari, Mehdi Jafari, Mohammad Ensaf
Abstract:
This research employs Gaussian Process Regression (GPR) with an ensemble kernel, integrating Exponential Squared, Revised Matern, and Rational Quadratic kernels, to analyze pharmaceutical sales data. Bayesian optimization was used to identify optimal kernel weights: 0.76 for Exponential Squared, 0.21 for Revised Matern, and 0.13 for Rational Quadratic. The ensemble kernel demonstrated superior performance in predictive accuracy, achieving an R² score near 1.0 and significantly lower MSE, MAE, and RMSE values. These findings highlight the efficacy of ensemble kernels in GPR for predictive analytics on complex pharmaceutical sales datasets.
Keywords: Gaussian process regression, ensemble kernels, Bayesian optimization, pharmaceutical sales analysis, time series forecasting, data analysis
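A minimal sketch of the weighted kernel sum in scikit-learn, using the weights reported above. 'Exponential Squared' is taken to correspond to the standard RBF (squared-exponential) kernel, the 'Revised Matern' kernel is approximated by a plain Matern kernel, and the sales series is a synthetic placeholder; the optimizer is disabled so the pre-found weights stay fixed.

```python
# Sketch: weighted sum of three kernels with the reported weights, kept
# fixed (optimizer=None); the sales series is a synthetic placeholder.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, Matern, RationalQuadratic

kernel = (0.76 * RBF(length_scale=1.0)
          + 0.21 * Matern(length_scale=1.0, nu=1.5)
          + 0.13 * RationalQuadratic(length_scale=1.0, alpha=1.0))

X = np.arange(52, dtype=float).reshape(-1, 1)          # 52 weeks
y = 100 + 10 * np.sin(X.ravel() / 4) \
    + np.random.default_rng(0).normal(0, 1, 52)        # fake sales

gpr = GaussianProcessRegressor(kernel=kernel, optimizer=None,
                               normalize_y=True).fit(X, y)
pred, std = gpr.predict(X, return_std=True)
print("in-sample R^2:", round(gpr.score(X, y), 3))
```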
Procedia PDF Downloads 68
19043 Minimizing Vehicular Traffic via Integrated Land Use Development: A Heuristic Optimization Approach
Authors: Babu Veeregowda, Rongfang Liu
Abstract:
The current traffic impact assessment methodology and environmental quality review process for approving land development projects are conventional, stagnant, and one-dimensional. The environmental review policy and procedure fail to provide direction to regulate or seek alternative land uses and sizes that exploit the existing or surrounding elements of the built environment (the ‘4 Ds’ of development: density, diversity, design, and distance to transit) or smart growth principles, which influence travel behavior and have a significant effect in reducing vehicular traffic. Additionally, the environmental review policy gives no direction on how to incorporate urban planning into development, such as by including non-motorized roadway elements like sidewalks, bus shelters, and access to community facilities. This research developed a methodology to optimize the mix of land uses and sizes using a heuristic optimization process, to minimize auto-dependent development and meet the interests of key stakeholders. A case study of the Willets Point Mixed Use Development in Queens, New York, was used to assess the benefits of the methodology. The approved Willets Point Mixed Use project was based on the maximum envelope of size and land use type allowed by current conventional urban renewal plans. This paper also evaluates parking accumulation for various land uses to explore the potential for shared parking to further optimize the mix of land uses and sizes. This research is very timely and useful to the many stakeholders interested in understanding the benefits of integrated land uses and their development.
Keywords: traffic impact, mixed use, optimization, trip generation
Procedia PDF Downloads 211
19042 Using Two-Mode Network to Access the Connections of Film Festivals
Authors: Qiankun Zhong
Abstract:
In a global cultural context, film festival awards become authorities that define the aesthetic value of films. To study which genres and producing countries are valued by different film festivals, and how those evaluations interact, this research explored the interactions between film festivals through their selection of movies and the factors that lead festivals to nominate the same movies. To do this, the author employed a two-mode network on the movies that won the highest awards at the five international film festivals with the highest attendance in the past ten years (the Venice Film Festival, the Cannes Film Festival, the Toronto International Film Festival, the Sundance Film Festival, and the Berlin International Film Festival) and the film festivals that nominated those movies. The title, genre, producing country, and language of 50 movies, and the range (regional, national, or international) and organizing country or area of 129 film festivals, were collected. This created networks connected by nominations and awards of the same films. The author then assessed the density and centrality of these networks to answer the question: which film festivals tend to share more values with other festivals? Based on the eigenvector centrality of the two-mode network, Palm Springs, Robert Festival, Toronto, Chicago, and San Sebastian are the festivals that tend to nominate commonly appreciated movies. In contrast, the Black Movie Film Festival has the unique position of generally not sharing nominations with other film festivals. A homophily test was applied to assess the clustering effects of films and film festivals. The result showed that movie genre (E-I index = 0.55) and geographic location (E-I index = 0.35) are possible indicators of film festival clustering. A blockmodel was also created to examine the structural roles of the film festivals and their meaning in a real-world context. By analyzing the blocks together with film festival attributes, it was identified that film festivals organized in the same area, with the same history, or with the same attitude toward independent films occupy the same structural roles in the network. Through the interpretation of the blocks, language was identified as an indicator that contributes to the role position of a film festival. Comparing the results of blockmodeling across different periods shows that international film festivals contrast with the Hollywood industry's dominant values. The structural role dynamics provide evidence for a multi-value film festival network.
Keywords: film festivals, film studies, media industry studies, network analysis
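A toy sketch of the two-mode (festival-film) analysis with networkx; the festival and film names below are illustrative, not the study's data.

```python
# Sketch: a bipartite festival-film graph and eigenvector centrality.
import networkx as nx

B = nx.Graph()
festivals = ["Festival 1", "Festival 2", "Festival 3"]
films = ["Film A", "Film B", "Film C", "Film D"]
B.add_nodes_from(festivals, bipartite=0)
B.add_nodes_from(films, bipartite=1)
B.add_edges_from([("Festival 1", "Film A"), ("Festival 2", "Film A"),
                  ("Festival 2", "Film B"), ("Festival 3", "Film B"),
                  ("Festival 3", "Film C"), ("Festival 1", "Film D")])

# Higher centrality ~ festival tends to nominate commonly shared films.
centrality = nx.eigenvector_centrality(B, max_iter=1000)
for fest in festivals:
    print(fest, round(centrality[fest], 3))
```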
Procedia PDF Downloads 315
19041 Fuzzy Optimization Multi-Objective Clustering Ensemble Model for Multi-Source Data Analysis
Authors: C. B. Le, V. N. Pham
Abstract:
In modern data analysis, multi-source data appears more and more often in real applications, and multi-source data clustering has emerged as an important issue in the data mining and machine learning community. Different data sources provide information about different aspects of the data, so linking multi-source data is essential to improve clustering performance. However, in practice, multi-source data is often heterogeneous, uncertain, and large, which is considered a major challenge. Ensembles are versatile machine learning models in which learning techniques can work in parallel on big data, and clustering ensembles have been shown to outperform standard clustering algorithms in terms of accuracy and robustness. However, most traditional clustering ensemble approaches are based on a single objective function and single-source data. This paper proposes a new clustering ensemble method for multi-source data analysis: the fuzzy optimized multi-objective clustering ensemble method, called FOMOCE. Firstly, a clustering ensemble mathematical model based on the structure of the multi-objective clustering function, multi-source data, and dark knowledge is introduced. Then, rules for extracting dark knowledge from the input data, clustering algorithms, and base clusterings are designed and applied. Finally, a clustering ensemble algorithm is proposed for multi-source data analysis. The experiments were performed on standard sample data sets. The experimental results demonstrate the superior performance of the FOMOCE method compared to existing clustering ensemble methods and multi-source clustering methods.
Keywords: clustering ensemble, multi-source, multi-objective, fuzzy clustering
Procedia PDF Downloads 188
19040 Automating 2D CAD to 3D Model Generation Process: Wall pop-ups
Authors: Mohit Gupta, Chialing Wei, Thomas Czerniawski
Abstract:
In this paper, we have built a neural network that can detect walls on 2D sheets and subsequently create a 3D model in Revit using Dynamo. The training set includes 3500 labeled images, and the detection algorithm used is YOLO. Typically, engineers and designers make concentrated efforts to convert 2D CAD drawings to 3D models, which costs a considerable amount of time and human effort. This paper contributes to automating the task of 3D wall modeling by: 1) detecting walls in 2D CAD and generating 3D pop-ups in Revit, and 2) saving designers modeling time when drafting elements such as walls from 2D CAD to a 3D representation. The object detection algorithm YOLO is used for wall detection and localization, with the neural network trained on 3500 labeled images of size 256x256x3. Dynamo is then interfaced with the output of the neural network to pop up 3D walls in Revit. The research uses modern technological tools such as deep learning and artificial intelligence to automate the process of generating 3D walls without humans having to model them manually, thus saving time, human effort, and money.
Keywords: neural networks, YOLO, 2D to 3D transformation, CAD object detection
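A hedged sketch of the detection side using the ultralytics YOLO package as a stand-in (the paper does not state which YOLO variant was used); the dataset file 'walls.yaml', the pre-trained weights, and the sheet image name are all assumed.

```python
# Sketch: train a YOLO detector on labeled wall images and run it on a
# sheet. File names and the ultralytics package are assumptions.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                            # pre-trained weights
model.train(data="walls.yaml", imgsz=256, epochs=50)  # 256x256 sheets

results = model("floorplan_sheet.png")                # detect walls
for box in results[0].boxes.xyxy:                     # corners for Dynamo
    print(box.tolist())
```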
Procedia PDF Downloads 144
19039 Well Inventory Data Entry: Utilization of Developed Technologies to Progress the Integrated Asset Plan
Authors: Danah Al-Selahi, Sulaiman Al-Ghunaim, Bashayer Sadiq, Fatma Al-Otaibi, Ali Ameen
Abstract:
In light of recent changes affecting the Oil & Gas Industry, optimization measures have become imperative for all companies globally, including Kuwait Oil Company (KOC). To keep abreast of the dynamic market, a detailed Integrated Asset Plan (IAP) was developed to drive optimization across the organization, facilitated through the in-house developed software “Well Inventory Data Entry” (WIDE). This comprehensive and integrated approach enabled the centralization of all planned asset components for better well planning and enhanced performance, and facilitated continuous improvement through performance tracking and midterm forecasting. Traditionally, this was hard to achieve, as various legacy methods were used in the past. This paper briefly describes the methods successfully adopted to meet the company's objective. IAPs were initially designed using computerized spreadsheets. However, as the data captured became more complex and the number of stakeholders requiring and updating this information grew, the need to automate the conventional spreadsheets became apparent. WIDE, already in use in other areas of the company (namely, the Workover Optimization project), was utilized to meet the dynamic requirements of the IAP cycle. With the growth of extensive features to enhance the planning process, the tool evolved into a centralized data hub for all asset groups and technical support functions to analyze and draw inferences from, leading WIDE to become the reference two-year operational plan for the entire company. To achieve WIDE's goal of operational efficiency, asset groups continuously add their parameters in a series of predefined workflows that create a structured process in which risk factors can be flagged and mitigated. The tool assigns responsibilities to all stakeholders in a way that enables continuous updates of daily performance measures and operational use. The reliable availability of WIDE, combined with its user-friendliness and easy accessibility, created a platform of cross-functionality among all asset groups and technical support groups for updating their respective planning parameters. The home-grown tool was implemented across the entire company and tailored to feed the internal processes of several stakeholders. Furthermore, applying change management and root cause analysis techniques captured the dysfunctions of previous plans, which in turn improved the existing planning mechanisms within the IAP. The detailed elucidation of the two-year plan flagged upcoming risks and shortfalls foreseen in the plan, and all results were translated into a series of developments that propelled the tool's capabilities beyond planning and into operations (such as asset production forecasts, setting KPIs, and estimating operational needs). This process exemplifies the ability and reach of applying advanced development techniques to seamlessly integrate the planning parameters of various asset and technical support groups. These techniques enhance the integration of planning data workflows, ultimately laying the foundation for greater accuracy and reliability. As such, benchmarks establishing a set of standard goals are created to ensure constant improvement in the efficiency of the entire planning and operational structure.
Keywords: automation, integration, value, communication
Procedia PDF Downloads 145
19038 Thick Data Analytics for Learning Cataract Severity: A Triplet Loss Siamese Neural Network Model
Authors: Jinan Fiaidhi, Sabah Mohammed
Abstract:
Diagnosing cataract severity is an important factor in deciding whether to undertake surgery. It is usually done by an ophthalmologist, or by taking a variety of fundus photographs that then need to be examined by the ophthalmologist. This paper investigates a Siamese neural net that can be trained with small anchor samples to score cataract severity. The model is based on a triplet loss function that encodes the ophthalmologist's best experience in rating positive and negative anchors against a specific cataract scaling system. This approach, which captures the heuristics of the ophthalmologist, is generally called the thick data approach: a kind of machine learning that learns from a few shots. Clinical relevance: the lens of the eye is mostly made up of water and proteins. A cataract occurs when these proteins in the eye lens start to clump together and block light, causing impaired vision. This research aims to employ thick data machine learning techniques to rate the severity of a cataract using a Siamese neural network.
Keywords: thick data analytics, Siamese neural network, triplet-loss model, few shot learning
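A compact sketch of the triplet-loss Siamese setup in PyTorch. The embedding network, image size, and random batches are illustrative; in practice the anchors would be expert-rated fundus photographs.

```python
# Sketch: shared-weight embedding branch + triplet margin loss.
import torch
import torch.nn as nn

embed = nn.Sequential(                  # the shared Siamese branch
    nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 64),
)
loss_fn = nn.TripletMarginLoss(margin=1.0)

anchor = torch.rand(4, 3, 128, 128)     # expert-rated reference images
positive = torch.rand(4, 3, 128, 128)   # same severity grade as anchor
negative = torch.rand(4, 3, 128, 128)   # different severity grade

# One step: pull positives toward the anchor, push negatives away.
loss = loss_fn(embed(anchor), embed(positive), embed(negative))
loss.backward()
print(f"triplet loss: {loss.item():.3f}")
```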
Procedia PDF Downloads 109